question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,365,034 | 2025-1-17 | https://stackoverflow.com/questions/79365034/how-to-recycle-list-to-build-a-new-column | How can I create the type column recycling a two-element list ["lat","lon"]? adresse coord type "place 1" 48.943837 lat "place 1" 2.387917 lon "place 2" 37.843837 lat "place 2" 6.387917 lon As it would be automatically done in R with d$type <- c("lat","lon") Reprex: d0 = pl.DataFrame( { "adresse": ["place 1", "place 2"], "coord": [[48.943837, 2.387917], [37.843837, 6.387917]], } ) d1 = d0.explode("coord") What I tried: d1 = d1.with_columns(type=pl.Series(["1","2"])) # ShapeError: unable to add a column of length 2 to a DataFrame of height 4 d1 = d1.join(pl.DataFrame({"id":["1", "2"]}), how="cross") # logically, 8 rows instead of 4 | pl.int_range() and pl.len() to create a "row number". pl.Expr.over() to do it within the adresse column. ( d0.explode("coord") .with_columns( type = pl.int_range(pl.len()).over("adresse") ) ) shape: (4, 3) ┌─────────┬───────────┬──────┐ │ adresse ┆ coord ┆ type │ │ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ i64 │ ╞═════════╪═══════════╪══════╡ │ place 1 ┆ 48.943837 ┆ 0 │ │ place 1 ┆ 2.387917 ┆ 1 │ │ place 2 ┆ 37.843837 ┆ 0 │ │ place 2 ┆ 6.387917 ┆ 1 │ └─────────┴───────────┴──────┘ Or if you need polars.datatypes.Enum(): dtype = pl.Enum(["lat", "lon"]) ( d0.explode("coord") .with_columns( type = pl.int_range(pl.len()).over("adresse").cast(dtype) ) ) shape: (4, 3) ┌─────────┬───────────┬──────┐ │ adresse ┆ coord ┆ type │ │ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ enum │ ╞═════════╪═══════════╪══════╡ │ place 1 ┆ 48.943837 ┆ lat │ │ place 1 ┆ 2.387917 ┆ lon │ │ place 2 ┆ 37.843837 ┆ lat │ │ place 2 ┆ 6.387917 ┆ lon │ └─────────┴───────────┴──────┘ Alternatively, you can first create additional lists with pl.int_ranges() and then explode both lists together, so you don't need pl.Expr.over() aka window function. ( d0 .with_columns(type = pl.int_ranges(2)) # or using pl.col.coord.list.len() # .with_columns(type = pl.int_ranges(pl.col.coord.list.len())) .explode("coord", "type") .with_columns(type = pl.col.type.cast(dtype)) ) | 5 | 3 |
79,364,336 | 2025-1-17 | https://stackoverflow.com/questions/79364336/how-to-get-a-python-function-to-work-on-an-np-array-or-a-float-with-conditional | I have a function that I'd like to take numpy arrays or floats as input. I want to keep doing an operation until some measure of error is less than a threshold. A simple example would be the following to divide a number or array by 2 until it's below a threshold (if a float), or until it's maximum is below a threshold (if an array). def f(x): #float version while x>1e-5: x = x/2 return x def f(x): #np array version while max(x)>1e-5: x = x/2 return x Unfortunately, max won't work if I've got something that is not iterable, and x>1e-5 won't work if x is an array. I can't find anything to do this, except perhaps vectorize, but that seems to not be as efficient as I would want. How can I get a single function to handle both cases? | What about checking the type of input inside the function and adapt it ? def f(x): # for float or np.array if type(x) is float: x = x * np.ones(1) while np.max(x)>1e-5: x = x/2 return x | 2 | 2 |
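A related sketch (not part of the accepted answer above): `np.atleast_1d` gives the same scalar-or-array behaviour without an explicit type check. Like the accepted answer, it returns a length-1 array when a plain float is passed in.

```python
import numpy as np

def f(x):
    # np.atleast_1d accepts floats, lists and arrays alike,
    # so one loop covers every input type
    x = np.atleast_1d(x).astype(float)
    while np.max(x) > 1e-5:
        x = x / 2
    return x

print(f(3.0))                     # scalar input -> length-1 array
print(f(np.array([1.0, 250.0])))  # array input
```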
79,364,619 | 2025-1-17 | https://stackoverflow.com/questions/79364619/how-to-write-a-cell-from-an-if-statement | I'm learning Python (using MU) and trying to implement it in Excel (with Python in Excel), however I have found some difficulties which I can't understand: Cell values are (just an example): C1= 1 C2= 3 D1: i1=xl("C1") #gives value 1 D2: i2=xl("C2") #gives value 3 F1: i3=0 #i3 is defined, just to see what is happening F2: i4=0 #same as above F6: if i1<i2: i3=i1+i2 else: i3=i1*i2 #comparing values there is no output F9: i3 #outputs the value and seems to work well What I can't understand is why is in this case value of F6 cell none (Python editor says "No output"), even if I adjust the code to: if i1<i2: i1+i2 else: i1*i2 If i write i1+i2 in a random python activated cell, it gives off value as expected, but inside the if statement, it doesn't work. What seems to work is adding if statement into function: def foo(x1, x2): if x1<x2: return x1+x2 elif x1>x2: return x1*x2 else: return x1 foo(i1, i2) in that case function returns to a cell and cell has a value as expected. If there is no way to achieve cell value directly from if statement, I assume using a function is a way to go. | TL;DR Python in Excel will only evaluate the last expression or assignment as output. Option 1: Add i3 to the code block after the if statement as the last expression to be evaluated. if i1 < i2: i3 = i1 + i2 else: i3 = i1 * i2 i3 # add Option 2: Use a conditional expression. i3 = i1 + i2 if i1 < i2 else i1 * i2 Option 3: Declare a function and call it (as in OP's "EDIT" section). The "documentation" on Python in Excel (hereafter: PiE) seems rather meagre. However, this feature being a collaboration with Anaconda, there are some useful blogs. In 5 Quick Tips for Using Python in Excel, one can read: So what constitutes a valid output for a cell? The execution of each Python cell works in a readβevalβprint loop (REPL) fashion, similar to running Python in Jupyter Notebook cells. The last expression in the cell that will be evaluated (e.g., a Python object or the return value of a called Python function) will represent the output of the cell. The comparison with Jupyter Notebook is instructive, but rather imprecise. Cf. InteractiveShell.ast_node_interactivity, which shows you that it uses 'last_expr' as the default. PiE, however, seems to use the equivalent of 'last_expr_or_assign'. Cf. Jupyter Notebook: With PiE: x = 1 y = 2 Otherwise, it is true that Jupyter Notebook (by default) also does not output an expression that is part of a control flow like the if statement. But it can be made to do so for an expression with 'all': It would appear that PiE simply does not allow this. As a result, it seems you are left with 3 options: Option 1: Add the variable at the end of the code block, after the if statement: x = 1 if 1 == 1: x = 2 x Option 2: Use a conditional expression ("ternary operator", cf. this SO post): x = 1 x = 2 if 1 == 1 else x Option 3: The workaround already provided by the OP; declare a function and call it. | 4 | 6 |
79,364,611 | 2025-1-17 | https://stackoverflow.com/questions/79364611/errormessage-nonetype-object-is-not-iterable | It's an AWS Lambda function: try: with urllib.request.urlopen(api_url) as response: data=json.loads(response.read().decode()) print(json.dumps(data, indent = 4 )) except Exception as e : print(f"error reading data : {e} ") return {"statuscode":"500","body":"Error fetching data"} games_messages = [game_format(game) for game in data] The error that I get: "errorMessage": "'NoneType' object is not iterable", "errorType": "TypeError", "requestId": "11a9a46e-3abe-4f6e-8c38-278c0697238a", "stackTrace": [ " File \"/var/task/lambda_function.py\", line 69, in lambda_handler\n games_messages = [game_format(game) for game in data]\n", " File \"/var/task/lambda_function.py\", line 15, in game_format\n quarter_scores = ', '.join([f\"Q{q['Number']}: {q.get('AwayScore', 'N/A')}-{q.get('HomeScore', 'N/A')}\" for q in quarters])\n" ] } The data that I fetched: | The issue is not originating from the code block you provided; rather, it is originating from line 15: quarter_scores = ', '.join([f"Q{q['Number']}: {q.get('AwayScore', 'N/A')}-{q.get('HomeScore', 'N/A')}" for q in quarters]) The error message states that quarters is None. Indeed, in the data you fetched, quarters is null (the json.loads function turns null into None). Solution: before iterating, make sure quarters is not None. | 2 | 3 |
79,364,551 | 2025-1-17 | https://stackoverflow.com/questions/79364551/how-to-subtract-pd-dataframegroupby-objects-from-each-other | I have the following pd.DataFrame match_id player_id round points A B C D E 5890 3750 1 10 0 0 0 3 1 5890 3750 2 10 0 0 0 1 0 5890 3750 3 10 0 8 0 0 1 5890 2366 1 9 0 0 0 5 0 5890 2366 2 9 0 0 0 5 0 5890 2366 3 9 0 0 0 2 0 I want to subtract the values of A, B, C, D and E of the two players and create two new columns that represent the number of points of the two players. My desired output looks as follows: match_id round points_home points_away A B C D E 5890 1 10 9 0 0 0 -2 1 5890 2 10 9 0 0 0 -4 0 5890 3 10 9 0 8 0 -2 1 Please advice | Use GroupBy.first with GroupBy.last first, subtract necessary columns with add points columns in concat: g = df.groupby(['match_id','round']) df1 = g.first() df2 = g.last() cols = ['A','B','C','D','E'] out = pd.concat([df1['points'].rename('points_home'), df2['points'].rename('points_away'), df1[cols].sub(df2[cols])], axis=1).reset_index() print (out) match_id round points_home points_away A B C D E 0 5890 1 10 9 0 0 0 -2 1 1 5890 2 10 9 0 0 0 -4 0 2 5890 3 10 9 0 8 0 -2 1 Alternative with MultiIndex with GroupBy.agg: df3 = (df.groupby(['match_id','round']) .agg(['first','last']) .rename(columns={'first':'home', 'last':'away'})) cols = ['A','B','C','D','E'] out = pd.concat([df3['points'].add_prefix('points_'), df3.xs('home', axis=1, level=1)[cols] .sub(df3.xs('away', axis=1, level=1)[cols])], axis=1).reset_index() print (out) match_id round points_home points_away A B C D E 0 5890 1 10 9 0 0 0 -2 1 1 5890 2 10 9 0 0 0 -4 0 2 5890 3 10 9 0 8 0 -2 1 | 2 | 3 |
79,363,421 | 2025-1-17 | https://stackoverflow.com/questions/79363421/how-to-simplify-a-linear-system-of-equations-by-eliminating-intermediate-variabl | I have a linear system shown in the block diagram below. This system is described with the following set of linear equations: err = inp - fb out = a * err fb = f * out I would like to use sympy to compute the output (out) as a function of the input (inp). Thus, I would like to eliminate the variables err and fb. I would like some help, as I have been unable to figure out how to express what I want. So far I have: from sympy import symbols, Eq, solve inp, err, out, fb, a, f = symbols("inp err out fb a f") eqns = [ Eq(err, inp - fb), Eq(out, a * err), Eq(fb, f * out), ] solution = solve(eqns, [out]) solution # [] That clearly does not work. I thought perhaps simplify() might help here, but I don't know how to apply the simplify function to a system of equations. The result I am hoping to achieve is: out = a / (1 + a*f) * inp Can anyone point me in the right direction? | I think you should specify all "outgoing" parameters, e.g., [out, fb, err], rather than [out] only, since [inp, a, f] could be treated as "constants" in this system of equations. from sympy import simplify, symbols, Eq, solve inp, err, out, fb, a, f = symbols("inp, err, out, fb, a, f") eqns = ( Eq(inp - fb, err), Eq(a * err, out), Eq(f * out, fb) ) solution = solve(eqns, [out, fb, err]) and you will see {err: inp/(a*f + 1), fb: a*f*inp/(a*f + 1), out: a*inp/(a*f + 1)} | 4 | 4 |
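As a follow-up sketch (not from the answer above), the transfer function in the exact form the question asks for can be read off by dividing the solved `out` entry by `inp`:

```python
from sympy import symbols, Eq, solve, simplify

inp, err, out, fb, a, f = symbols("inp err out fb a f")
eqns = (Eq(err, inp - fb), Eq(out, a * err), Eq(fb, f * out))

# Solve for every intermediate variable so err and fb are eliminated
solution = solve(eqns, [out, fb, err])

print(solution[out])                  # a*inp/(a*f + 1)
print(simplify(solution[out] / inp))  # a/(a*f + 1), i.e. out = a/(1 + a*f) * inp
```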
79,360,591 | 2025-1-16 | https://stackoverflow.com/questions/79360591/ssl-certificate-verification-failed-with-sendgrid-send | I am getting an SSL verification failed error when trying to send emails with the Sendgrid web api. I'm not even sure what cert it is trying to verify here. I have done all of the user and domain verification on my Sendgrid account and I am using very straightforward sending process. Here the error urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1000)> Heres the code mailClient = SendGridAPIClient(os.environ.get('sendGridKey')) message = Mail( from_email=os.environ['sendGridEmail'], to_emails="[email protected]", subject="User Message for %s" %newMessage['chatId'], html_content = "<strong>" + newMessage['message']['text'] + "</strong>" ) mailRes = mailClient.send(message) | Here is a link to the answer. It is not sendgrid specific but Python version specific. I was able to use the link provided in this answer to find the script that I needed to run to update the SSL certs for python. urllib and "SSL: CERTIFICATE_VERIFY_FAILED" Error | 2 | 1 |
79,363,266 | 2025-1-16 | https://stackoverflow.com/questions/79363266/how-can-i-write-zeros-to-a-2d-numpy-array-by-both-row-and-column-indices | I have a large (90k x 90k) numpy ndarray and I need to zero out a block of it. I have a list of about 30k indices that indicate which rows and columns need to be zero. The indices aren't necessarily contiguous, so a[min:max, min:max] style slicing isn't possible. As a toy example, I can start with a 2D array of non-zero values, but I can't seem to write zeros the way I expect. import numpy as np a = np.ones((6, 8)) indices = [2, 3, 5] # I thought this would work, but it does not. # It correctly writes to (2,2), (3,3), and (5,5), but not all # combinations of (2, 3), (2, 5), (3, 2), (3, 5), (5, 2), or (5, 3) a[indices, indices] = 0.0 print(a) [[1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 1. 1. 1. 1. 1.] [1. 1. 1. 0. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 0. 1. 1.]] # I thought this would fix that problem, but it doesn't change the array. a[indices, :][:, indices] = 0.0 print(a) [[1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.]] In this toy example, I'm hoping for this result. [[1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.]] I could probably write a cumbersome loop or build some combinatorically huge list of indices to do this, but it seems intuitive that this must be supported in a cleaner way, I just can't find the syntax to make it happen. Any ideas? | Based on hpaulj's comment, I came up with this, which works perfectly on the toy example. a[np.ix_(indices, indices)] = 0.0 print(a) [[1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 0. 0. 1. 0. 1. 1.]] It also worked beautifully on the real data. It was faster than I expected and didn't noticeably increase memory consumption. Exhausting memory has been a constant concern with these giant arrays. | 2 | 3 |
79,362,636 | 2025-1-16 | https://stackoverflow.com/questions/79362636/how-to-unjoin-of-different-plots-when-plotting-multiple-scatter-line-plots-on-on | I am plotting multiple line and scatter graphs on a single figure and axis. My code sets one variable called total_steel_area and then goes through a set of values of another variable called phi_x__h. It then calculates x and y values from these variables and places them in a list. It then plots the values. The code then moves to the next value of total_steel_area and repeats. The output graph is shown below. The diagonal lines connect the last value of one set of x,y values to the first value of the next set. My question is how can I remove this connecting line? My code is below: phi_N_bh = [] phi_M_bh2 = [] fig, ax = plt.subplots(dpi=100, figsize=(8,4)) for total_steel_area in np.arange(0.01,0.06,0.01): for phi_x__h in np.arange(0.1,2,0.1): phi_N__bh, phi_M__bh2 = calculate_phi_N__bh_and_phi_M__bh2(phi_x__h, lamb, alpha_cc, eta, f_ck, E_s, varepsilon_cu2, phi_d__h, phi_d2__h, f_yk, total_steel_area/2, total_steel_area/2) phi_N_bh.append(phi_N__bh) phi_M_bh2.append(phi_M__bh2) ax.plot(phi_M_bh2, phi_N_bh, c='b') ax.scatter(phi_M_bh2, phi_N_bh, c='b', s=10) ax.set_title('Column Design Chart for Rectangular Column with Symmetrical Compression and Tension Reinforcement') ax.set_xlabel('M/bhΒ²') ax.set_ylabel('N/bh') ax.text(1-0.1, 1-0.1, f'f_ck = {f_ck}, f_yk = {f_yk}', horizontalalignment='center', verticalalignment='center', transform=ax.transAxes) ax.text(1-0.1, 1-0.2, f'd/h = {phi_d__h}, d2/h = {phi_d2__h}', horizontalalignment='center', verticalalignment='center', transform=ax.transAxes) ax.set_ylim(0) ax.set_xlim(0) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.legend() | You should initialize/reset your lists at each step of the outer loop: for total_steel_area in np.arange(0.01,0.06,0.01): phi_N_bh = [] phi_M_bh2 = [] for phi_x__h in np.arange(0.1,2,0.1): phi_N__bh, phi_M__bh2 = calculate_phi_N__bh_and_phi_M__bh2(phi_x__h, lamb, alpha_cc, eta, f_ck, E_s, varepsilon_cu2, phi_d__h, phi_d2__h, f_yk, total_steel_area/2, total_steel_area/2) phi_N_bh.append(phi_N__bh) phi_M_bh2.append(phi_M__bh2) ax.plot(phi_M_bh2, phi_N_bh, c='b') ax.scatter(phi_M_bh2, phi_N_bh, c='b', s=10) There might be a vectorial way to compute your values, but this is difficult to guess without a reproducible example of the functions. Output: | 2 | 0 |
79,362,404 | 2025-1-16 | https://stackoverflow.com/questions/79362404/pandas-change-values-of-a-dataframe-based-on-an-override | I have a pandas dataframe which looks something like this. orig | dest | type | class | BKT | BKT_order | value | fc_Cap | sc_Cap -----+-------+-------+-------+--------+-----------+---------+--------+--------- AMD | TRY | SA | fc | MA | 1 | 12.04 | 20 | 50 AMD | TRY | SA | fc | TY | 2 | 11.5 | 20 | 50 AMD | TRY | SA | fc | NY | 3 | 17.7 | 20 | 50 AMD | TRY | SA | fc | MU | 4 | 09.7 | 20 | 50 AMD | TRY | PE | fc | RE | 1 | 09.7 | 20 | 50 AMD | TRY | PE | sc | EW | 5 | 07.7 | 20 | 50 NCL | MNK | PE | sc | PO | 2 | 08.7 | 20 | 50 NCL | MNK | PE | sc | TU | 3 | 12.5 | 20 | 50 NCL | MNK | PE | sc | MA | 1 | 16.7 | 20 | 50 Also i have an override Dataframe which may look something like this: orig | dest | type | max_BKT -----+-------+-------+----------- AMD | TRY | SA | TY NCL | MNK | PE | PO NCL | AGZ | PE | PO what i want to do is modify the original dataframe such that after comparison of orig dest type & BKT ( with max_BKT) values, the value column for any rows which have the BKT_order higher than or equal to the max_BKT in override DF is set to either fc_Cap or sc_Cap depending on the class value. For Example in above scenario, Since the Override DF sets max_BKT as TY for AMD | TRY | SA and the bucket order for TY is 2 in original Df, i need to set the value column equal to fc_Cap or sc_Cap depending on the value of class for all rows where BKT_order >= 2 So basically: filter the rows for orig dest type combination Get the BKT_order of max_BKT from the Original DF for each row that matches the above criteria if class == fc update value column with fc_Cap if class == sc update value column with sc_Cap So our original DF looks something like this: orig | dest | type | class | BKT | BKT_order | value | fc_Cap | sc_Cap -----+-------+-------+-------+--------+-----------+---------+--------+--------- AMD | TRY | SA | fc | MA | 1 | 12.04 | 20 | 50 AMD | TRY | SA | fc | TY | 2 | 20 | 20 | 50 AMD | TRY | SA | fc | NY | 3 | 20 | 20 | 50 AMD | TRY | SA | fc | MU | 4 | 20 | 20 | 50 AMD | TRY | PE | fc | RE | 1 | 09.7 | 20 | 50 AMD | TRY | PE | sc | EW | 5 | 07.7 | 20 | 50 NCL | MNK | PE | sc | PO | 2 | 50 | 20 | 50 NCL | MNK | PE | sc | TU | 3 | 50 | 20 | 50 NCL | MNK | PE | sc | MA | 1 | 16.7 | 20 | 50 I have tried an approach to iterate over the override df and try to handle 1 row at a time but, i get stuck when i need to do a reverse lookup to get the BKT_order of the max_BKT from original df. Hope that makes sense... i am fairly new to pandas. | That's a fairly complex task. The individual steps are straightforward though. 
You need: indexing lookup to find the cap values based on class merge to match the max_BKT mask+groupby.transform to identify the rows to mask idx, cols = pd.factorize(df['class']+'_Cap') group = ['orig', 'dest', 'type'] out = ( df.merge(override, on=group, how='left') .assign( value=lambda x: x['value'].mask( x['BKT_order'].ge( x['BKT_order'] .where(x['BKT'].eq(x['max_BKT'])) .groupby([x[c] for c in group]) .transform('first') ), x.reindex(cols, axis=1).to_numpy()[np.arange(len(x)), idx], ) ) .reindex(columns=df.columns) ) Output: orig dest type class BKT BKT_order value fc_Cap sc_Cap 0 AMD TRY SA fc MA 1 12.04 20 50 1 AMD TRY SA fc TY 2 20.00 20 50 2 AMD TRY SA fc NY 3 20.00 20 50 3 AMD TRY SA fc MU 4 20.00 20 50 4 AMD TRY PE fc RE 1 9.70 20 50 5 AMD TRY PE sc EW 5 7.70 20 50 6 NCL MNK PE sc PO 2 50.00 20 50 7 NCL MNK PE sc TU 3 50.00 20 50 8 NCL MNK PE sc MA 1 16.70 20 50 Intermediates: tmp = df.merge(override, on=group, how='left') tmp['cap'] = tmp.reindex(cols, axis=1).to_numpy()[np.arange(len(tmp)), idx] tmp['mask'] = tmp['BKT'].eq(tmp['max_BKT']) tmp['masked_BKT'] = tmp['BKT_order'].where(tmp['mask']) tmp['ref_BKT'] = tmp.groupby(group)['masked_BKT'].transform('first') tmp['>= ref_BKT'] = tmp['BKT_order'].ge(tmp['ref_BKT']) orig dest type class BKT BKT_order value fc_Cap sc_Cap max_BKT cap mask masked_BKT ref_BKT >= ref_BKT 0 AMD TRY SA fc MA 1 12.04 20 50 TY 20 False NaN 2.0 False 1 AMD TRY SA fc TY 2 11.50 20 50 TY 20 True 2.0 2.0 True 2 AMD TRY SA fc NY 3 17.70 20 50 TY 20 False NaN 2.0 True 3 AMD TRY SA fc MU 4 9.70 20 50 TY 20 False NaN 2.0 True 4 AMD TRY PE fc RE 1 9.70 20 50 NaN 20 False NaN NaN False 5 AMD TRY PE sc EW 5 7.70 20 50 NaN 50 False NaN NaN False 6 NCL MNK PE sc PO 2 8.70 20 50 PO 50 True 2.0 2.0 True 7 NCL MNK PE sc TU 3 12.50 20 50 PO 50 False NaN 2.0 True 8 NCL MNK PE sc MA 1 16.70 20 50 PO 50 False NaN 2.0 False | 2 | 2 |
79,362,308 | 2025-1-16 | https://stackoverflow.com/questions/79362308/how-to-use-skimage-to-denoise-2d-array-with-nan-values | I'm trying to apply the TV filter to 2D array which includes many nan values: from skimage.restoration import denoise_tv_chambolle import numpy as np data_random = np.random.random ([100,100])*100 plt.imshow(data_random) plt.imshow(denoise_tv_chambolle(data_random)) data_random[20:30, 50:60] = np.nan data_random[30:40, 55:60] = np.nan data_random[40:50, 65:75] = np.nan plt.imshow(denoise_tv_chambolle(data_random)) The TV filter works well with all valid data, but will return a nan array if there're nan values. Original data: Deonised data: Data with nan values: | You can use a masked array: m = np.isnan(data_random) data = np.ma.masked_array(np.where(m, 0, data_random), m) plt.imshow(denoise_tv_chambolle(data, weight=50)) Example output (with weight = 50): For less artifacts you could fill the holes with the average instead of zero: m = np.isnan(data_random) data = np.ma.masked_array(np.where(m, np.nanmean(data_random), data_random), m) plt.imshow(denoise_tv_chambolle(data, weight=50)) Output: Another option would be to fill the holes with the nearest neighbors (e.g. with distance_transform_edt), then denoise, then restore the NaNs: from scipy.ndimage import distance_transform_edt m = np.isnan(data_random) data_fill = data_random[tuple(distance_transform_edt(m, return_distances=False, return_indices=True))] plt.imshow(np.where(m, np.nan, denoise_tv_chambolle(data_fill, weight=50))) Output: Intermediate data_fill: | 3 | 6 |
79,357,401 | 2025-1-15 | https://stackoverflow.com/questions/79357401/why-is-black-hole-null-geodesic-not-printing-zero-are-my-trajectories-correct | Using matplotlib, I am path tracing some 2D photons bending due to a non-rotating standard black hole defined by Schwarzschild metric. I set my initial velocities in terms of r (radius), phi (angle), and t (time) with respect to the affine parameter lambda and then iteratively update the space-time vector based on the Christoffel symbols at the respective point. The main concerns are is why the null geodesic statements don't print something closer to zero. Everything scaled down by 1e6 to plot import numpy as np import matplotlib.pyplot as plt from math import sin, cos, sqrt, atan2, pi # Constants M = 9e32 # Mass of the black hole in kg c = 299792458 # Speed of light in m/s G = 6.6743e-11 # Gravitational constant in N m^2/kg^2 # Simulation parameters curve_num = 2000 # Resolution of the curve d_lambda = 1e-4 # Step size for integration # Black hole parameters horizon = 2 * G * M / c**2 # Event horizon radius photon_sphere = 3 * G * M / c**2 # Photon sphere radius # Test point parameters num = 30 # Number of photons x_i = 7.5 y_i = 2.5 y_f = 4 # Initial velocity in Cartesian coordinates v_x = -c # Speed of light in negative x-direction v_y = 0 # No velocity in the y-direction # Prepare for matplotlib plotting trajectories = [] for i in range(num): trajectory = [] # Define initial test point if num > 1: test_point = np.array([x_i, y_i + i / (num - 1) * (y_f - y_i), 0]) # linear interpolation else: test_point = np.array([x_i, y_i, 0]) # Convert to polar coordinates r = np.linalg.norm(test_point) * 1e6 # Radius in meters p = atan2(test_point[1], test_point[0]) # Initial planar angle # Metric coefficients g_tt = -(1 - 2 * G * M / (r * c**2)) g_rr = 1 / (1 - 2 * G * M / (r * c**2)) g_pp = r**2 # Initial velocities v_r = (v_x * cos(p) + v_y * sin(p)) # Radial velocity v_p = (-v_x * sin(p) + v_y * cos(p)) / r # Angular velocity v_t = sqrt(-(g_rr * v_r**2 + g_pp * v_p**2) / g_tt) # Check the null geodesic condition for the initial point #print(g_tt * v_t**2 + g_rr * v_r**2 + g_pp * v_p**2) # Integrate geodesics for j in range(curve_num): if r > horizon + 10000: # Precompute common terms term1 = G * M / (r**2 * c**2) # Common term: GM / r^2c^2 term2 = 1 - 2 * G * M / (r * c**2) # Common term: 1 - 2GM / rc^2 # Christoffel symbols using common terms Ξ_r_tt = term1 * term2 Ξ_r_rr = -term1 / term2 Ξ_r_pp = -r * term2 Ξ_p_rp = 1 / r #Ξ_t_rt = term1 / term2 # ignoring time marching # Update change in velocities dv_r = -Ξ_r_tt * v_t**2 - Ξ_r_rr * v_r**2 - Ξ_r_pp * v_p**2 dv_p = -2 * Ξ_p_rp * v_r * v_p #dv_t = -2 * Ξ_t_rt * v_r * v_t # ignoring time marching # Update velocities v_r += dv_r * d_lambda v_p += dv_p * d_lambda #v_t += dv_t * d_lambda # ignoring time marching # Update positions r += v_r * d_lambda p += v_p * d_lambda # Metric tensor components (update for new r) g_tt = -term2 g_rr = 1 / term2 g_pp = r**2 # Recalculate v_t from the metric components v_t = sqrt(-(g_rr * v_r**2 + g_pp * v_p**2) / g_tt) # Check the null geodesic condition at each step #print(g_tt * v_t**2 + g_rr * v_r**2 + g_pp * v_p**2) # Store Cartesian coordinates x = (r / 1e6) * cos(p) y = (r / 1e6) * sin(p) # Only store points within the -10 to 10 range if -10 <= x <= 10 and -10 <= y <= 10: trajectory.append((x, y)) else: break trajectories.append(trajectory) # Plot using matplotlib plt.figure(figsize=(8, 8)) # Plot each trajectory for trajectory in trajectories: trajectory = 
np.array(trajectory) if len(trajectory) > 0: # Plot only if there are points plt.plot(trajectory[:, 0], trajectory[:, 1]) # Plot the event horizon circle = plt.Circle((0, 0), horizon / 1e6, color='black', fill=True, label="Event Horizon") plt.gca().add_artist(circle) # Plot the photon sphere photon_sphere_circle = plt.Circle((0, 0), photon_sphere / 1e6, color='red', fill=False, linestyle='--', linewidth=1.5, label="Photon Sphere") plt.gca().add_artist(photon_sphere_circle) # Set plot limits explicitly plt.xlim(-10, 10) plt.ylim(-10, 10) # Configure plot plt.title("Photon Trajectories Around a Black Hole") plt.xlabel("x (scaled by 1e6 meters)") plt.ylabel("y (scaled by 1e6 meters)") plt.axhline(0, color="gray", linewidth=0.5) plt.axvline(0, color="gray", linewidth=0.5) plt.axis('equal') plt.grid() plt.legend() plt.show() I expect my null geodesic print statement inside the for loop to be either 0 or a relatively small integer. For example, the first null geodesic print statement may be 0, 16, -8, etc. due to a small amount of imprecision with the large float addition (e17 magnitudes). I have tried to debug by replacing my "else" statement with "elif j == 1" and looking at the very next iteration, it can be seen the null geodesic prints a much larger float. I believe figuring out the null geodesic error will reveal if my trajectories are incorrect. | You are missing a factor of 2 in the expression for the acceleration component dv_p. Change that line to dv_p = - 2 * Ξ_p_rp * v_r * v_p This is because Ξ_p_pr = Ξ_p_rp and you need to include that term as well in the summation. Neil Butcher's post also picks out another non-zero Christoffel symbol Ξ_t_rt that I missed and would be necessary to get v_t right and then affect the trajectories. BUT ... I believe the metric check is a red herring - you initialise v_t at the start with that metric and I think you can (and should) do the same again each time the metric components are recalculated from a new r. That is then bound to keep you on a geodesic. For completeness, I believe that you are solving systems of the form or, in terms of your quasi-velocities, If you do have to print the null-geodesic error then I think it should be normalised. You do have to see it in the context of c2, which is very large. Anyway, making the factor-of-2 change does reduce this error by 10 orders of magnitude. You will have to tell me the outcome of the plotting. I don't have, or know how to use, Blender. As you define a variable horizon it would be useful to use that (or the more common rs), rather than repeating 2 * G * M / c**2 In summary: put the extra factor of 2 in the expression for dv_p update v_t from the metric coefficients every time you recalculate the metric from a new r. | 10 | 1 |
79,362,317 | 2025-1-16 | https://stackoverflow.com/questions/79362317/text-representation-of-a-list-with-gaps | I have a list of integers that is sorted and contains no duplicates: mylist = [2, 5,6,7, 11,12, 19,20,21,22, 37,38, 40] I want a summarized text representation that shows groups of adjacent integers in a compressed form as a hyphenated pair. To be specific: Adjacent implies magnitude differing by 1. So an integer i is considered to be adjacent to j if j = i Β± 1. Recall that the list is sorted. That means that adjacent integers will appear in monotonically increasing series in the list. So I want some elegant Python that will represent mylist as the string "2, 5-7, 11-12, 19-22, 37-38, 40," That is, an isolated integer (example: 2, because the list contains neither 1 nor 3) is represented as 2, a group of adjacent integers (example: 19,20,21,22 because each member of the group differs from one other member by 1) is represented as βΉlowestβΊ-βΉhighestβΊ, that is 19-22,. I can't believe this is a problem nobody has thought important enough to solve. Feel free to point me at a solution I have missed. | You can try this: mylist = [2, 5, 6, 7, 11, 12, 19, 20, 21, 22, 37, 38, 40] ans = [] ret = '' # assuming mylist is sorted for i in mylist: if len(ans) == 0 or ans[-1][1] < i - 1: #if the array is empty or we can't add current value to the last range ans.append([i, i]) # make a new range else: ans[-1][1] = i # add to last range for i in ans: # formating if i != ans[0]: ret = ret + ' ' if i[0] == i[1]: ret = f'{ret}{i[0]},' else: ret = f'{ret}{i[0]}-{i[1]},' print(ret) Hope this helps! | 1 | 0 |
79,361,494 | 2025-1-16 | https://stackoverflow.com/questions/79361494/creating-reusable-and-composable-filters-for-pandas-dataframes | I am working with multiple Pandas DataFrames with a similar structure and would like to create reusable filters that I can define once and then apply or combine as needed. The only working solution I came up with so far feels clunky to me and makes it hard to combine filters with OR: import pandas as pd df = pd.DataFrame({"A":[1,1,2],"B":[1,2,3]}) def filter_A(df): return df.loc[df["A"]==1] def filter_B(df): return df.loc[df["B"]==2] print(filter_A(filter_B(df)).head()) I am hoping for something along the lines of filter_A = (df["A"]==1) filter_B = (df["B"]==2) print(df.loc[(filter_A) & (filter_B)]) but reusable after changing the df and also applicable to other DataFrames with the same columns. Is there any cleaner or more readable way to do this? | You can use the .eval() method, which allows for the evaluation of a string describing operations on dataframe columns: Evaluate these string expressions on the dataframe df. Combine the results of these evaluations using the bitwise AND operator (&), which performs element-wise logical AND operation. Use the .loc accessor to filter the dataframe based on the combined condition. filter_A = 'A == 1' filter_B = 'B == 2' df.loc[df.eval(filter_A) & df.eval(filter_B)] Output: A B 1 1 2 | 3 | 2 |
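A small sketch of another way to phrase the same idea (not from the answer above): keep each filter as a plain function that returns a boolean mask, so filters can be combined with `&` or `|` and reused on any DataFrame with the same columns.

```python
import pandas as pd

def filter_A(df):
    return df["A"] == 1   # boolean mask, not a filtered frame

def filter_B(df):
    return df["B"] == 2

df = pd.DataFrame({"A": [1, 1, 2], "B": [1, 2, 3]})
print(df.loc[filter_A(df) & filter_B(df)])  # AND
print(df.loc[filter_A(df) | filter_B(df)])  # OR
```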
79,359,213 | 2025-1-15 | https://stackoverflow.com/questions/79359213/efficient-parsing-and-processing-of-millions-of-json-objects-in-python | I have some working code that I need to improve the run time on dramatically and I am pretty lost. Essentially, I will get zip folders containing tens of thousands of json files, each containing roughly 1,000 json messages. There are about 15 different types of json objects interspersed in each of these files and some of those objects have lists of dictionaries inside of them while others are pretty simple. I need to read in all the data, parse the objects and pull out the relevant information, and then pass that parsed data back and insert it into a different program using an API for a third party software (kind of a wrapper around a proprietary implementation of SQL). So I have code that does all of that. The problem is it takes around 4-5 hours to run each time and I need to get that closer to 30 minutes. My current code relies heavily on asyncio. I use that to get some concurrency, particularly while reading the json files. I have also started to profile my code and have so far moved to using orjson to read in the data from each file and rewrote each of my parser functions in cython to get some improvements on that side as well. However, I use asyncio queues to pass stuff back and forth and my profiler shows a lot of time is spent just in the queue.get and queue.put calls. I also looked into msgspec to improving reading in the json data and while that was faster, it became slower when I had to send the msgspec.Struct objects into my cython code and use them instead of just a dictionary. So was just hoping for some general help on how to improve this process. I have read about multiprocessing both with multiprocessing.pools and concurrent.futures but both of those turned out to be slower than my current implementation. I was thinking maybe I need to change how I pass stuff through the queues so I passed the full json data for each file instead of each individual message (about 1,000 documents each) but that didn't help. I have read so many SO questions/answers but it seems like a lot of people have very uniform json data (not 15 different message types). I looked into batching but I don't fully understand how that changes things - that was what I was doing using concurrent.futures but again it actually took longer. Overall I would like to keep it as queues because in the future I would like to run this same process on streaming data, so that part would just take the place of the json reading and instead each message received over the stream would be put into the queue and everything else would work the same. Some psuedo-code is included below. 
main.py import asyncio from glob import glob import orjson from parser_dispatcher import ParserDispatcher from sql_dispatcher import SqlDispatcher async def load_json(file_path, queue): async with aiofiles.open(file_path, mode="rb") as f: data = await f.read() json_data = await asyncio.to_thread(orjson.loads(data)) for msg in json_data: await queue.put(msg) async def load_all_json_files(base_path, queue): file_list = glob(f"{base_path}/*.json") tasks = [load_json(file_path, queue) for file_path in file_list] await asyncio.gather(*tasks) await queue.put(None) # to end the processing def main() base_path = "\path\to\json\folder" paser_queue = asyncio.queue() sql_queue = asyncio.queue() parser_dispatch = ParserDispatcher() sql_dispatch = SqlDispatcher() load_task = load_all_json_files(base_path, parser_queue) parser_task = parser_dispatch.process_queue(parser_queue, sql_queue) sql_task = sql_dispatch.process_queue(sql_queue) await asyncio.gather(load_task, parser_task, sqlr_task) if __name__ -- "__main__": asyncio.run(main)) parser_dispatcher.py import asyncio import message_parsers as mp class ParserDispatcher: def __init__(self): self.parsers = { ("1", "2", "3"): mp.parser1, .... etc } # this is a dictionary where keys are tuples and values are the parser functions def dispatch(self, msg): parser_key = tuple(msg.get("type"), msg.get("source"), msg.get("channel")) parser = self.parsers.get(parser_key) if parser: new_msg = parser(msg) else: new_msg = [] return new_msg async def process_queue(self, parser_queue, sql_queue): while True: msg = await process_queue.get() if msg is None: await sql_put.put(None) process_queue.task_done() parsed_messages = self.dispatch(msg) for parsed_message in parsed_messages: await sql_queue.put(parsed_message) sql_dispatcher.py import asycnio import proprietarySqlLibrary as sql class SqlDispatcher: def __init__(self): # do all the connections to the DB in here async def process_queue(self, sql_queue): while True: msg = await sql_queue.get() # then go through and add this data to the DB # this part is also relatively slow but I'm focusing on the first half for now # since I don't have control over the DB stuff | This might improve performance, but whether significantly enough is still an open question: The parsing of JSON data is a CPU-bound task and by concurrently doing this parsing in a thread pool will not buy you anything unless orjson is implemented in C (probably) and releases the GIL (very questionable; see this). The code should therefore be re-arranged to submit the parsing of messages to a concurrent.futures.ProcessPoolExecutor instance in batches. This is the general idea, which could not be tested since I do not have access to the data. Note that I am not attempting to read in the JSON files concurrently, since it is unclear whether doing so would actually hurt performance instead of helping. You can always modify the code to use multiple tasks to perform this reading. ... 
from concurrent.futures import ProcessPoolExecutor def process_batch(file_data): msgs = [] for data in file_data: msgs.extend(orjson.loads(data)) return msgs async def load_json(executor, file_data, queue): loop = asyncio.get_running_loop() msgs = await loop.run_in_executor(executor, process_batch, file_data) for msg in msgs: await queue.put(msg) async def load_all_json_files(base_path, queue): # If you have the memory, ideally BATCH_SIZE should be: # ceil(number_of_files / (4 * number_of_cores))) # So if you had 50_000 files and 10 cores, then # BATCH_SIZE should be 1250 BATCH_SIZE = 1_000 # Use a multiprocessing pool: executor = ProcessPoolExecutor() tasks = [] file_data = [] file_list = glob(f"{base_path}/*.json") for file_path in file_list: async with aiofiles.open(file_path, mode="rb") as f: file_data.append(await f.read()) if len(file_data) == BATCH_SIZE: tasks.append(asyncio.create_task(load_json(executor, file_data, queue))) file_data = [] if file_data: tasks.append(asyncio.create_task(load_json(executor, file_data, queue))) await asyncio.gather(*tasks) await queue.put(None) # to end the processing ... | 2 | 1 |
79,361,674 | 2025-1-16 | https://stackoverflow.com/questions/79361674/subplot-four-pack-under-another-subplot-the-size-of-the-four-pack | I want to make a matplotlib figure that has two components: A 2x2 "four pack" of subplots in the lower half of the figure A subplot above the four pack that is the size of the four pack. I have seen this answer where subplots can have different dimensions. How can that approach be tweaked when there are multiple columns and rows, however? Or is a completely different approach warranted? My final figure should be arranged something like this: If there is a straightforward way to have a left-right oritentation, shown below, that would be helpful, too. | You can use plt.subplot_mosaic or GridSpec. Someone else can write an answer using GridSpec, but here is how you'd do it using subplot_mosaic. For the large plot on top and 4 below it: import matplotlib.pyplot as plt fig, axs_dict = plt.subplot_mosaic("AA;AA;BC;DE") If you want to put the large plot on the left and the four smaller on the right then just using this string: "AABC;AADE". | 2 | 6 |
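For completeness, a rough sketch of the `GridSpec` route the answer mentions (figure size and grid proportions are arbitrary):

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(6, 8))
gs = GridSpec(4, 2, figure=fig)       # 4 rows x 2 columns of grid cells

ax_big = fig.add_subplot(gs[:2, :])   # top half: one large axes
ax_a = fig.add_subplot(gs[2, 0])      # 2x2 four pack below
ax_b = fig.add_subplot(gs[2, 1])
ax_c = fig.add_subplot(gs[3, 0])
ax_d = fig.add_subplot(gs[3, 1])

plt.show()
```

A left/right arrangement would instead use GridSpec(2, 4) with gs[:, :2] for the large axes and the four small ones in the remaining 2x2 cells.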
79,359,767 | 2025-1-15 | https://stackoverflow.com/questions/79359767/implementation-of-f1-score-iou-and-dice-score | This paper proposes a medical image segmentation hybrid CNN - Transformer model for segmenting organs and lesions in medical images simultaneously. Their model has two output branches, one to output organ mask, and the other to output lesion mask. Now they describe the testing process as follows: In order to compare the performance of our approach with the state- of-the-art approaches, the following evaluation metrics have been used: F1-score (F1-S), Dice score (D-S), Intersection Over Union (IoU), and HD95, which are defined as follows: where T P is True Positives, T N is True Negatives, F P is False Positives,and F N is False Negatives, all associated with the segmentation classes of the test images. The Dice score is a macro metric, which is calculated for N testing images as follow: where TPi, FPi and FNi are True Positives, True Negatives, False. Positives and False Negative for the ith image, respectively. I am confused regarding how to implement those metrics (excluding HD95) like in this paper, what I understand is that to compute TP, FP, and FN for f1-score and IoU, I need to aggregate those 3 quantities (TP, FP, and FN) across all the samples in the test set for the two outputs (lesion and organ), and the aggregation is a sum operation. So for example to calculate the TP, I need to calculate it for every output of every sample and sum this TP. Then repeat this for calculating the TP for every sample in a similar manner and then add all those TPs to get the overall TP. Then I do the same for FP and FN and then plug them in the formulas. I am not sure if my understanding is correct or not. For Dice score, I need to calculate it for every output separately and then average them? I am not sure about that, so I accessed the GitHub for this paper. The model is defined here, and the coding for the testing procedure is defined here. The used framework is PyTorch. I don't have any knowledge regarding PyTorch, so still I can't understand how these metrics have been implemented, and hence, I cant confirm if my understanding is correct or not. So please can somebody explain the logic used to implement these metrics. Edit 1 : I went through the code for calculating TP,FP, and FN in train_test_DTrAttUnet_BinarySegmentation.py: TP += np.sum(((preds == 1).astype(int) + (yy == 1).astype(int)) == 2) TN += np.sum(((preds == 0).astype(int) + (yy == 0).astype(int)) == 2) FP += np.sum(((preds == 1).astype(int) + (yy == 0).astype(int)) == 2) FN += np.sum(((preds == 0).astype(int) + (yy == 1).astype(int)) == 2) It seems like they were doing the forward pass using a for loop and then accumulating the these quantities, and after this loop they calculate the metrics: F1score = TP / (TP + ((1/2)*(FP+FN)) + 1e-8) IoU = TP / (TP+FP+FN) So this means that they are accumulating the TP,FP and FN through all the images for both outputs and then they calculate the metrics, Is that correct ? 
For Dice Score it seems tricky for me, they still inside the loop calculate some quantities : for idice in range(preds.shape[0]): dice_scores += (2 * (preds[idice] * yy[idice]).sum()) / ( (preds[idice] + yy[idice]).sum() + 1e-8 ) predss = np.logical_not(preds).astype(int) yyy = np.logical_not(yy).astype(int) for idice in range(preds.shape[0]): dice_sc1 = (2 * (preds[idice] * yy[idice]).sum()) / ( (preds[idice] + yy[idice]).sum() + 1e-8 ) dice_sc2 = (2 * (predss[idice] * yyy[idice]).sum()) / ( (predss[idice] + yyy[idice]).sum() + 1e-8 ) dice_scores2 += (dice_sc1 + dice_sc2) / 2 Then at the end of the loop : epoch_dise = dice_scores/len(dataloader.dataset) epoch_dise2 = dice_scores2/len(dataloader.dataset) Still, I cant understand what is going on for Dice Score. | Disclaimers: My answer is a mix of code reading and "educated guessing". I did not run the actual code, but a run with the help of a debugger should help you verify/falsify my assumptions. The code shared below is a condensed version of the relevant section of the score/metrics calculations, to help focus on the essentials. It is not runnable and should be understood as pseudocode. Anyway, let's break down their code (maybe put the code sample side by side with the explanations below it): dice_scores, dice_scores2, TP, TN, FP, FN = 0, 0, 0, 0, 0, 0 for batch in tqdm(dataloader): x, y, _, _ = batch outputs, _ = model(x) preds = segm(outputs) > 0.5 yy = y > 0.5 TP += np.sum(((preds == 1) + (yy == 1)) == 2) TN += np.sum(((preds == 0) + (yy == 0)) == 2) FP += np.sum(((preds == 1) + (yy == 0)) == 2) FN += np.sum(((preds == 0) + (yy == 1)) == 2) for idice in range(preds.shape[0]): dice_scores += ((2 * (preds[idice] * yy[idice]).sum()) / ((preds[idice] + yy[idice]).sum() + 1e-8)) predss = np.logical_not(preds) yyy = np.logical_not(yy) for idice in range(preds.shape[0]): dice_sc1 = ((2 * (preds[idice] * yy[idice]).sum()) / ((preds[idice] + yy[idice]).sum() + 1e-8)) dice_sc2 = ((2 * (predss[idice] * yyy[idice]).sum()) / ((predss[idice] + yyy[idice]).sum() + 1e-8)) dice_scores2 += (dice_sc1 + dice_sc2) / 2 epoch_dise = dice_scores/len(dataloader.dataset) epoch_dise2 = dice_scores2/len(dataloader.dataset) F1score = TP / (TP + ((1/2)*(FP+FN)) + 1e-8) IoU = TP / (TP+FP+FN) The first line initializes all accumulated values to 0. With for batch in tqdm(dataloader), the code iterates over all samples in the data set (or rather, over all samples accessible to the used DataLoader, which might be a subset or an otherwise preprocessed version of the underlying data). This implies that the accumulated values represent the "global" results, i.e the results for the complete data set. After applying the model to the sample data in the batch, x, via model(x), the resulting predictions, outputs, are thresholded to a binary representation, preds, via segm(outputs) > 0.5. The segm function, in this case, is simply a sigmoid (see line 190 in the original code), which maps all values to the range [.0, .1]. A similar step is performed for the "ground truth" (i.e. the true/known segmentation), y, to produce its binary representation, yy. [Update] The outputs variable, in this context, holds the outputs of one of the two branches of the model only (compare relevant model code), thus either lesion or organ segmentation. Also compare Figure 2 in the paper: the corresponding branches are the ones with blue and green background. 
While suppressed in my pseudocode above (outputs, _ = model(x)), the actual code also uses the output of the other branch, though only for loss calculation and not for the calculation of the scores/metrics. [/Update] So, to summarize, what we have at the current point: A binary mask preds that holds the predicted segmentations for all samples in the current batch. A binary mask yy that holds the true/known segmentations for all samples in the current batch. The sum() calculations in the following lines are counting (albeit written in a bit of an unconventional way maybe) the number of voxels matching between predictions and true/known segmentations, over all samples in the current batch. For example, this means summing the number of matching voxels of predicted foreground and true/known background for the false positives, which is then added on top of their global count, FP. The Dice scores are handled a bit differently: values are calculated for each index along dimension 0 separately, before adding them to the global value, dice_scores. Dimension 0 usually indexes individual samples in the batch, so this would mean calculating a separate Dice score for each sample. The values are later normalized by dividing through the number of samples, len(dataloader.dataset), to gain epoch_dise. These two steps are in accordance with the equation Dice score = β¦ shared in the question, which calculates the Dice score separately for each sample, adds all corresponding results, then divides by the number of samples (called testing images there), N. A second Dice score is then calculated by not only measuring foreground overlaps, but also measuring background overlaps: The masks predss and yyy are the negations of preds and yy, respectively, i.e. True for background voxels, False for foreground voxels. For each sample, the Dice score for the foreground voxels is recalculated as dice_sc1; but then also the Dice score for the background voxels is calculated as dice_sc2. Then their average is taken. This sample average is accumulated in the global value dice_scores2, which is later normalized to epoch_dise2, just as dice_scores and epoch_dise above. At that point what is missing is the calculation of the F1 score and IoU. This is done with F1score = β¦ and IoU = β¦ over the global values of the true/false positives/negatives, in accordance with the corresponding equations cited in the question. So, to summarize once more: [Update] While the model produces two outputs (lesion and organ segmentation) for each sample via two branches, only one output is used for calculating the scores/metrics, while both outputs are used for calculating the losses. I am not sure which branch is used for the score calculation, but assuming the authors use the same order in their code as they use in their paper, it would be the lesion branch. [/Update] Indeed, as assumed in the question, F1 score and IoU are calculated over all samples. Dice scores are calculated for each sample separately, then averaged over all samples. This is done in two versions: Version 1 (dice_scores, epoch_dise) calculates what I would call the "standard" Dice coefficient, i.e. the score for overlapping foreground voxels. 
Version 2 (dice_scores2, epoch_dise2) calculates what I would call a "weighted" Dice coefficient: for each sample, it calculates both the score for overlapping foreground voxels and the score for overlapping background voxels, then averages them as the sample's score, and only then accumulates and averages again to get the global score. | 3 | 1 |
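To make the two aggregation styles concrete, here is a minimal NumPy sketch (illustrative only; it covers the foreground-only Dice, not the background-averaged variant described above, and the array names are made up):

```python
import numpy as np

def segmentation_scores(preds, targets, eps=1e-8):
    """preds, targets: boolean arrays of shape (N, H, W) holding N binary masks."""
    # F1 and IoU: accumulate TP/FP/FN over the whole test set, then apply the formulas once
    tp = np.sum(preds & targets)
    fp = np.sum(preds & ~targets)
    fn = np.sum(~preds & targets)
    f1 = tp / (tp + 0.5 * (fp + fn) + eps)
    iou = tp / (tp + fp + fn + eps)

    # Dice: one score per image, averaged at the end (macro)
    dice = np.mean([2 * np.sum(p & t) / (np.sum(p) + np.sum(t) + eps)
                    for p, t in zip(preds, targets)])
    return f1, iou, dice
```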
79,360,047 | 2025-1-16 | https://stackoverflow.com/questions/79360047/issue-with-django-checkconstraint | I'm trying to add some new fields to an existing model and also a constraint related to those new fields: class User(models.Model): username = models.CharField(max_length=32) # New fields ################################## has_garden = models.BooleanField(default=False) garden_description = models.CharField( max_length=32, null=True, blank=True, ) class Meta: constraints = [ models.CheckConstraint( check=Q(has_garden=models.Value(True)) & Q(garden_description__isnull=True), name="garden_description_if_has_garden", ) ] The problem is that when I run my migrations I get the following error: django.db.utils.IntegrityError: check constraint "garden_description_if_has_garden" is violated by some row But I don't understand how the constraint is being violated if no User has a has_garden, the field is just being created and also its default value is False π€. I'm using django 3.2 with postgresql. What is the proper way to add this constraint? If it's of any use here's the autogenerated migration: # Generated by Django 3.2.25 on 2025-01-15 23:52 import django.db.models.expressions from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ ("some_app", "0066_user"), ] operations = [ migrations.AddField( model_name="user", name="garden_description", field=models.CharField(blank=True, max_length=32, null=True), ), migrations.AddField( model_name="user", name="has_garden", field=models.BooleanField(default=False), ), migrations.AddConstraint( model_name="user", constraint=models.CheckConstraint( check=models.Q( ("has_garden", django.db.models.expressions.Value(True)), ("garden_description__isnull", True), ), name="garden_description_if_has_garden", ), ), ] | Far as I can see your constraint is trying to make sure that all instances of user have has_garden=True, which would cause a violation if a user has has_garden=False. Here we add a constraint to either check if has_garden is true and garden_description is not null, or if has_garden is false and garden_descriptionn is null class User(models.Model): username = models.CharField(max_length=32) # New fields ################################## has_garden = models.BooleanField(default=False) garden_description = models.CharField( max_length=32, null=True, blank=True, ) class Meta: constraints = [ models.CheckConstraint( check=( Q(has_garden=models.Value(True)) & Q(garden_description__isnull=False) | ( Q(has_garden=models.Value(False)) & Q(garden_description__isnull=True) ) ), name="garden_description_if_has_garden", ) ] | 1 | 2 |
79,360,975 | 2025-1-16 | https://stackoverflow.com/questions/79360975/how-to-color-nodes-in-network-graph-based-on-categories-in-networkx-python | I am trying to create a network graph on correlation data and would like to color the nodes based on categories. Data sample view: Data: import pandas as pd links_data = pd.read_csv("https://raw.githubusercontent.com/johnsnow09/network_graph/refs/heads/main/links_filtered.csv") graph code: import networkx as nx G = nx.from_pandas_edgelist(links_data, 'var1', 'var2') # Plot the network: nx.draw(G, with_labels=True, node_color='orange', node_size=200, edge_color='black', linewidths=.5, font_size=2.5) All the nodes in this network graph is colored as orange but I would like to color them based on Category variable. I have looked for more examples but not sure how to do it. I am also open to using other python libraries if required. Appreciate any help here !! | Since you have a unique relationship from var1 to Category, you could build a list of colors for all the nodes using: import matplotlib as mpl cmap = mpl.colormaps['Set3'].colors # this has 12 colors for 11 categories cat_colors = dict(zip(links_data['Category'].unique(), cmap)) colors = (links_data .drop_duplicates('var1').set_index('var1')['Category'] .map(cat_colors) .reindex(G.nodes) ) nx.draw(G, with_labels=True, node_color=colors, node_size=200, edge_color='black', linewidths=.5, font_size=2.5) If you also want a legend: import matplotlib.patches as mpatches plt.legend(handles=[mpatches.Patch(color=c, label=label) for label, c in cat_colors.items()], bbox_to_anchor=(1, 1)) Output: | 2 | 0 |
79,360,171 | 2025-1-16 | https://stackoverflow.com/questions/79360171/is-there-a-better-way-to-use-zip-with-an-arbitrary-number-of-iters | With a set of data from an arbitrary set of lists (or dicts or other iter), I want to create a new list or tuple that has all the first entries, then all the 2nd, and so on, like an hstack. If I have a known set of data, I can zip them together like this: data = {'2015': [2, 1, 4, 3, 2, 4], '2016': [5, 3, 3, 2, 4, 6], '2017': [3, 2, 4, 4, 5, 3]} hstack = sum(zip(data['2015'], data['2016'], data['2017']), ()) print(hstack) # hstack: (2, 5, 3, 1, 3, 2, 4, 3, 4, 3, 2, 4, 2, 4, 5, 4, 6, 3) But what if I don't know how many entries (or the keys) of the dict? For processing an arbitrary set of iterators, I tried: combined_lists = sum(zip(data[val] for val in data.keys()), ()) # combined_lists: ([2, 1, 4, 3, 2, 4], [5, 3, 3, 2, 4, 6], [3, 2, 4, 4, 5, 3]) And also: nums = sum(zip(num for val in data.keys() for num in data[val]), ()) # nums: (2, 1, 4, 3, 2, 4, 5, 3, 3, 2, 4, 6, 3, 2, 4, 4, 5, 3) But both of these just keep the same order I could get from adding the sequences together. I was able to get it to work with: counts = [] entry = list(data.keys())[0] for idx, count in enumerate(data[entry]): for val in list(data.keys()): counts.append(data[val][idx]) # counts: [2, 5, 3, 1, 3, 2, 4, 3, 4, 3, 2, 4, 2, 4, 5, 4, 6, 3] Which is fine, but it's a bit bulky. Seems like there should be a better way. Is there a way with list comprehension or some feature of zip that I missed? No imports preferred. | While @Tinyfold's answer works, it should be pointed out that using sum to concatenate a series of iterables is rather inefficient as noted in the documentation: For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To add floating-point values with extended precision, see math.fsum(). To concatenate a series of iterables, consider using itertools.chain(). This is because sum() at its core performs a series of additions, each of which is a binary operation that "adds" two operands into one value. This is fine for numeric values, but for tuples as in your example, each addition creates a new tuple by copying all items in tuples of both operands, so a series of additions of tuples becomes quadratic in time complexity. To keep the time complexity linear you can use itertools.chain instead as suggested by the documentation: from itertools import chain hstack = tuple(chain.from_iterable(zip(*data.values()))) or if you absolutely don't want any import (which should not be a problem because itertools is part of the standard library), use a generator expression: hstack = tuple(i for t in zip(*data.values()) for i in t) Note that itertools.chain is faster than a generator expression/comprehension because the former is written purely in C while the latter has to be interpreted. | 2 | 2 |
79,360,261 | 2025-1-16 | https://stackoverflow.com/questions/79360261/why-does-list-call-len | The setup code: class MyContainer: def __init__(self): self.stuff = [1, 2, 3] def __iter__(self): print("__iter__") return iter(self.stuff) def __len__(self): print("__len__") return len(self.stuff) mc = MyContainer() Now, in my shell: >>> i = iter(mc) __iter__ >>> [x for x in i] [1, 2, 3] >>> list(mc) __iter__ __len__ [1, 2, 3] Why is __len__() getting called by list()? And where is that documented? | The behavior of calling __len__ of the given iterable during initialization of a new list is an implementation detail and is meant to help pre-allocate memory according to the estimated size of the result list, as opposed to naively and inefficiently grow the list as it is iteratively extended with items produced by a given generic iterable. You can find the logics in Objects/listobject.c of CPython, where it defaults the pre-allocation of memory to a size of 8 if the iterable has neither __len__ nor __length_hint__, which is documented in PEP-424: static int list_extend_iter_lock_held(PyListObject *self, PyObject *iterable) { PyObject *it = PyObject_GetIter(iterable); if (it == NULL) { return -1; } PyObject *(*iternext)(PyObject *) = *Py_TYPE(it)->tp_iternext; /* Guess a result list size. */ Py_ssize_t n = PyObject_LengthHint(iterable, 8); ... } | 7 | 15 |
79,358,709 | 2025-1-15 | https://stackoverflow.com/questions/79358709/parallelisation-optimisation-of-integrals-in-python | I need to compute a large set of INDEPENDENT integrals and I have currently written the following code for it in Python: # We define the integrand def integrand(tau,k,t,z,config, epsilon=1e-7): u = np.sqrt(np.maximum(epsilon,tau**2 - z**2)) return np.sin(config.omega * (t - tau))*k/2, np.sin(config.omega * (t - tau)) * j1(k * u) / u NON-VECTORISED partial_integral = np.zeros((len(n_values),len(t_values),len(z_values))) for n in range(1, len(n_values)): # We skip the n=0 case as it is trivially = 0 for j in range(1, len(z_values)): # We skip the z=0 case as it is trivially = 0 for i in range(1, len(t_values)): # We skip the t=0 case as it is trivially = 0 partial_integral[n,i,j],_ = quad(integrand, x_min[n,i,j], x_max[n,i,j], args=(k_n_values[n],t_values[i],z_values[j],config), limit=np.inf) # We use quad Now, all of len(n_values),len(t_values),len(z_values) are big numbers, so I would like to accelerate the code as much as possible. Any suggestions? Apart from playing with different integration libraries (among which Scipy.quad seems the best), I have thought about vectorising the code: # VECTORISED def compute_integral(n,i,j): quad(integrand, x_min[n,i,j], x_max[n,i,j], args=(k_n_values[n],t_values[i],z_values[j],config, epsilon), limit=10000) # We use quad # Use np.meshgrid to create the index grids for n, i, j (starting from 1 to avoid 0-index) n_grid, i_grid, j_grid = np.meshgrid(np.arange(0, len(n_values)), np.arange(0, len(t_values)), np.arange(0, len(z_values)), indexing='ij') # Flatten the grids to vectorize the loop over n, i, j indices = np.vstack([n_grid.ravel(), i_grid.ravel(), j_grid.ravel()]).T # Vectorize the integral computation using np.vectorize vectorized_integral = np.vectorize(lambda n, i, j: compute_integral(n, i, j)) # Apply the vectorized function to all combinations of (n, i, j) partial_integral = np.empty((len(n_values),len(t_values),len(z_values))) partial_integral[tuple(indices.T)] = vectorized_integral(*indices.T) But it is not clear that this wins me much... I have also tried to use Numba (with numba-scipy for the j1) to JIT the integrand in the hope of gaining a bit of performance, and indeed this has served me to get a x5 performance! To do this I just changed my function to the following: from numba import njit # Remember to also have numba-scipy installed!!!!! # We define the integrand and JIT it with Numba (and numba-scipy) for a faster performance @njit def integrand(tau,k,t,z,omega, epsilon): u = np.sqrt(np.maximum(0,tau**2 - z**2)) if u < epsilon: return np.sin(omega * (t - tau))*k/2 else: return np.sin(omega * (t - tau)) * j1(k * u) / u PS: As an extra, I am currently running this script in my PC, but I would probably be able to run it on a cluster. Full code: import os import numpy as np from scipy.special import j1 from scipy.integrate import quad from numba import njit # Remember to also have numba-scipy installed!!!!! 
from tqdm import tqdm def perform_integrals(config): ''' We perform the integrals using quad ''' # We store the range of n, kn and gn in arrays of length N_max n_values = np.linspace(0, config.N_max-1, config.N_max, dtype=int) k_n_values = 2 * np.pi * n_values # We store the range of t, z in the arrays of dimension (N_t) and (N_z) t_values = np.linspace(0, config.N_t*config.delta_t, config.N_t) z_values = np.linspace(0, config.N_z*config.delta_z, config.N_z) # Preallocate the result arrays (shape: len(n_values) x len(z_values)) x_min = np.zeros((len(t_values), len(z_values))) x_max = np.empty((len(t_values), len(z_values))) # Compute the values t1_values = np.roll(t_values, 1) t1_values[0] = 0. x_min = np.maximum(z_values[None, :], t1_values[:, None]) # Max between z_values[j] and t_values[i-1] x_max = np.maximum(z_values[None, :], t_values[:, None]) # Max between z_values[j] and t_values[i] # We define the integrand and JIT it with Numba (and numba-scipy) for a faster performance @njit def integrand(tau,k,t,z,omega, epsilon=1e-7): u = np.sqrt(np.maximum(0,tau**2 - z**2)) if u < epsilon: return np.sin(omega * (t - tau))*k/2 else: return np.sin(omega * (t - tau)) * j1(k * u) / u # NON-VECTORISED partial_integral = np.zeros((len(n_values),len(t_values),len(z_values))) for n in tqdm(range(1, len(n_values))): # We skip the n=0 case as it is trivially = 0 for i in range(1, len(t_values)): # We skip the t=0 case as it is trivially = 0 for j in range(1, len(z_values)): # We skip the z=0 case as it is trivially = 0 partial_integral[n,i,j],_ = quad(integrand, x_min[i,j], x_max[i,j], args=(k_n_values[n],t_values[i],z_values[j],config.omega, 1e-7), limit=10000, epsabs=1e-7, epsrel=1e-4) # We use quad return partial_integral class TalbotConfig: def __init__(self): self.A = 1. # Amplitude of signal self.c = 1. # Speed of light self.d = 1. # Distance between gratings we fix it = 1 self._lambda = self.d / 10. # Wavelength self.w = 2 * self._lambda # Width of the gratings # Other relevant magnitudes self.omega = 2 * np.pi * self.c / self._lambda # Frequency of the signal self.z_T = self._lambda/(1. - np.sqrt(1.-(self._lambda/self.d) ** 2)) # Talbot distance = 2 d^2/Ξ» # Simulation parameters self.N_x = 27*2 -1 # Number of samples in x direction self.N_z = 192*2 -1 # Number of samples in z direction self.N_t = 100 -1 # Number of samples in time self.N_max = int(self.d / self._lambda * 4) # Number of terms in the series # Other relevant magnitudes self.last_t_zT = 1. 
# Final time / Z_t self.delta_t = self.z_T/self.c/self.N_t * self.last_t_zT # Time between photos self.delta_x = self.d/2/self.N_x # X-Distance between points self.delta_z = self.z_T/self.N_z # Z-Distance between points def __str__(self): params = { "Amplitude of signal (A)": self.A, "Speed of light (c)": self.c, "Distance between gratings (d)": self.d, "Wavelength (lambda)": self._lambda, "Width of the gratings (w)": self.w, "Frequency of the signal (omega)": self.omega, "Talbot distance (z_T)": self.z_T, "Number of samples in x direction (N_x)": self.N_x, "Number of samples in z direction (N_z)": self.N_z, "Number of samples in time (N_t)": self.N_t, "Number of terms in the series (N_max)": self.N_max, "Time between photos (delta_t)": self.delta_t, "X-Distance between points (delta_x)": self.delta_x, "Z-Distance between points (delta_z)": self.delta_z, "Final time / z_T": self.last_t_zT } print("{:<45} {:<40}".format('\nParameter', 'Value')) print("-" * 65) for key, value in params.items(): print("{:<45} {:<40}".format(key, value)) return "" def debugging(self): if self.Debugging: # Simulation parameters self.N_x = 5 # Number of samples in x direction self.N_z = 5 # Number of samples in z direction self.N_t = 5 # Number of samples in time self.N_max = int(self.d / self._lambda)*2 # Number of terms in the series return if __name__ == "__main__": config = TalbotConfig() # Are we debugging? config.Debugging = False config.debugging() print(config) integral = perform_integrals(config) | A significant fraction of the time is spent in the Bessel function J1 and this function is relatively well optimized when numba-scipy is installed and the integrand function is compiled with numba (i.e. no need to call it from the slow CPython interpreter). That being said, in practice, this function is implemented in the Cephes library which is apparently not the fastest one. The GSL seems to be a bit faster. Still, using the GSL from Python should be slow too. Using it from Numba is possible but a bit cumbersome (the library file needs to be manually loaded with modules like ctypes/cffi and then you need to define the function prototype before calling it). Moreover, the quad function of Scipy seems to spend about 20~30% of the time in CPython overheads (e.g. calling CPython functions, checking CPython types, converting them, etc.). These overheads are usually reduced with scipy.LowLevelCallable. However, this is pretty hard to use it here since you need many arguments. Indeed, this requires the arguments to be wrapped in the LowLevelCallable objects as a ctypes pointer to a Numpy/ctypes array and also the items to be modified every time in the innermost loop. Setting the items takes some time due to the slow interpreter and inefficient scalar Numpy access... On top of that, the pointer of type void* passed to the integrand function needs to be converted to a double* and this turns out to be surprisingly complicated in Numba... I tried to do all of that and in the and I got a frustrating "segmentation fault" and found meanwhile weird Numba bugs... I do not recommend this solution at all. An alternative solution is simply to rewrite this in a native language like C or C++, typically using the GSL (which turns out to be pretty fast for this kind of operation). Only the nested loops needs to be rewritten. 
The main downside of this solution is that all Python variables required for the computation need to be provided to the native function and wrapped as native types (which is a bit tedious since there are quite a lot of arguments needed). That being said, the native C++ code can be trivially and efficiently parallelized with OpenMP and it is much faster! Here is the C++ code computing the nested loops: #include <cstdlib> #include <cstdint> #include <cstdio> #include <cmath> #include <gsl/gsl_integration.h> #include <gsl/gsl_sf_bessel.h> double integrand(double tau, void* generic_params) { const double* params = (double*)generic_params; const double k = params[0]; const double t = params[1]; const double z = params[2]; const double omega = params[3]; const double epsilon = params[4]; const double u = sqrt(std::max(0.0, tau*tau - z*z)); if(u < epsilon) return sin(omega * (t - tau)) * k * 0.5; else return sin(omega * (t - tau)) * gsl_sf_bessel_J1(k * u) / u; } extern "C" void compute(double* partial_integral, double* x_min, double* x_max, double* k_n_values, double* t_values, double* z_values, int n_size, int t_size, int z_size, int limit, double epsabs, double epsrel, double omega, double epsilon) { #pragma omp parallel for collapse(2) schedule(dynamic) for (int n = 1; n < n_size; ++n) { for (int t = 1; t < t_size; ++t) { gsl_integration_workspace* workspace = gsl_integration_workspace_alloc(1024*1024); double params[5]; double err [[maybe_unused]]; gsl_function func; func.function = &integrand; func.params = ¶ms; for (int z = 1; z < z_size; ++z) { params[0] = k_n_values[n]; params[1] = t_values[t]; params[2] = z_values[z]; params[3] = omega; params[4] = epsilon; gsl_integration_qags(&func, x_min[t*z_size+z], x_max[t*z_size+z], epsabs, epsrel, limit, workspace, &partial_integral[(n*t_size+t)*z_size+z], &err); } gsl_integration_workspace_free(workspace); } } } Here is the Python code to load the C++ library and its main function: import ctypes # Load the compiled library with ctypes kernel = ctypes.CDLL('./libkernel.so') int_t = ctypes.c_int double_t = ctypes.c_double ptr_t = ctypes.POINTER(double_t) kernel.compute.argtypes = [ ptr_t, ptr_t, ptr_t, ptr_t, ptr_t, ptr_t, int_t, int_t, int_t, int_t, double_t, double_t, double_t, double_t ] kernel.compute.restype = None # Utility function ptr = lambda array: array.ctypes.data_as(ptr_t) Here is the Python code to call the C++ function (instead of the nested Python loop): kernel.compute( ptr(partial_integral), ptr(x_min), ptr(x_max), ptr(k_n_values), ptr(t_values), ptr(z_values), len(n_values), len(t_values), len(z_values), 10000, 1e-7, 1e-4, config.omega, 1e-7 ) You can compile this code on Linux with g++ -O3 kernel.cpp -lgsl --shared -fPIC -o libkernel.so -fopenmp. You may need to install On Windows, the standard compiler is MSVC, but it is certainly simpler to use Clang-CL for this (so you need to change the executable name of the previous command). Moreover, the name of the output library needs to be changed to kernel.dll (and so the name of the loaded file in the ctypes section of the Python code). For Mac, using Clang is also good idea too (AFAIK the output file should be kernel.dylib). On my i5-9600KF CPU (6 cores) on Linux, the optimized code is 17 times faster! It only takes 3.3 seconds! If for some reasons you cannot use a native language or do not want to, then I think the only simple solution is to parallelize the code with libraries like joblib (or possibly multiprocessing). 
This will clearly not be as fast as the native version but it should scale well as long as the Numba function is cached (using the cache=True compilation flag). Otherwise, the function will certainly be recompiled in each worker process (pretty expensive). The native version should be about 3 times faster than this Python version. Distributed computing with MPI PS: As an extra, I am currently running this script in my PC, but I would probably be able to run it on a cluster. The standard way to do that in the scientific community is to use MPI. For Python codes, there is the mpi4py module. You can also use it in C/C++ quite easily. MPI needs to be installed and setup on the cluster (which is the case on nearly all clusters used for scientific computations on nearly all supercomputers nowadays). To support MPI, you need to first initialize/finalize it in the Python script (at the beginning/end) and then you can adapt the C++ function so to use MPI. Each MPI process can compute a part of the n loop (you can compute the range to compute thanks to MPI_Comm_rank and MPI_Comm_size functions). Each process can compute a slice of the partial_integral array and can then send it to the master process (the one with the rank 0). Note you need to send all the arrays to the other processes using MPI_Bcast (or to recompute them in each process). Regarding OpenMP, you can move the #pragma to the next t-based loop (since the former now distributed with MPI), while taking care to remove the collapse(2). You should also take care of spawning 1 MPI process per node (so to avoid creating far too many threads). This MPI code can scale up to n_size nodes (i.e. ~40 here) and using all the available cores on each nodes. Alternatively, you can use 1 MPI process per core and not use OpenMP at all, but this requires the t-based loop to be also distributed for the code to scale (which is more complex than just using OpenMP). | 2 | 2 |
79,359,839 | 2025-1-15 | https://stackoverflow.com/questions/79359839/return-a-different-class-based-on-an-optional-flag-in-the-arguments-without-fact | I am implementing a series of classes in Equinox to enable taking derivatives with respect to the class parameters. Most of the time, the user will be instantiating class A and using the fn function to generate some data, the details of which are unimportant. However, in cases where we are interested in gradients, it is beneficial to represent param_c in terms of a sigmoid function to ensure that it remains clamped in the range (0,1). However, I don't want the user to notice a difference in how the class behaves if they do this. As such, I implement another class A_sigmoid that has param_c as a property and use A_abstract to ensure that both classes inherit the fn method, which will call param_c in its logic. While I could simply have the user instantiate an A_sigmoid object with a _param_c_sigmoid instead of param_c I don't want to force the user to have to make this distinction. Rather, I would want them to pass in the same kwargs dictionary no matter the class and have conversion happen behind the scenes. I also wanted to make it so that when making a new A one could simply pass an optional flag to direct the program to use the sigmoid version of the code. To do so, I implemented the following MWE: class A_abstract(eqx.Module): param_a: jax.Array param_b: jax.Array param_c: eqx.AbstractVar[jax.Array] def fn(self,*args,**kwargs): pass class A_sigmoid(A_abstract): _param_c_sigmoid: jax.Array @property def param_c(self): return 1 / (1 + jnp.exp(-self._param_c_sigmoid)) class A(A_abstract): param_c: jax.Array def __new__(cls, **kwargs): sigmoid_flag = kwargs.pop('use_sigmoid_c',False) if sigmoid_flag == True: param_c = kwargs.pop('param_c') _param_c_sigmoid = jnp.log(param_c / (1 - param_c)) kwargs['_param_c_sigmoid'] = _param_c_sigmoid instance = A_sigmoid.__new__(A_sigmoid) instance.__init__(**kwargs) print(type(instance)) return instance else: return super(A,cls).__new__(cls) classA = A(param_a = 1.,param_b = 2.,param_c = 0.5,use_sigmoid_c=True) print(type(classA)) The code correctly says that instance has type A_sigmoid when print is called in the __new__ method. However, when I print type(classA), it is of type A and has no attribute param_c, though it does have a value for _param_c_sigmoid. Why is this the case? Am I missing something in my use of __new__ that is causing this error? While I know that in principle a factory would be the best way to do this, there are other classes of types B, C, etc. that don't have this need for a sigmoid implementation and that I would like to behave exactly the same way as A to enable them to be easily swapped. Thus, I don't want some custom method to instantiate A that would be different from calling the default constructor on the other classes. 
I am running this on a Jupyter notebook with the following package versions: Python : 3.12.4 IPython : 8.30.0 ipykernel : 6.29.5 jupyter_client : 8.6.3 jupyter_core : 5.7.2 | If you were using a normal class, what you did is perfectly reasonable: class A_abstract: pass class A_sigmoid(A_abstract): pass class A(A_abstract): def __new__(cls, flag, **kwds): if flag: instance = A_sigmoid.__new__(A_sigmoid) else: instance = super().__new__(cls) instance.__init__(**kwds) return instance print(type(A(True))) # <class '__main__.A_sigmoid'> However, eqx.Module includes a bunch of metaclass logic that overrides how __new__ works, and this seems to collide with the __new__ overrides that you're making. Notice here the only difference is that A_abstract inherits from eqx.Module, and the result is A rather than A_sigmoid: import equinox as eqx class A_abstract(eqx.Module): pass class A_sigmoid(A_abstract): pass class A(A_abstract): def __new__(cls, flag, **kwds): if flag: instance = A_sigmoid.__new__(A_sigmoid) else: instance = super().__new__(cls) instance.__init__(**kwds) return instance print(type(A(True))) # <class '__main__.A'> I dug-in for a few minutes to try and find the exact cause of this change, but wasn't able to pin it down. If you're trying to do metaprogramming during class construction, you'll have to modify it to work within the construction-time metaprogramming that equinox is already doing. | 2 | 0 |
79,359,697 | 2025-1-15 | https://stackoverflow.com/questions/79359697/why-does-python-tuple-unpacking-work-on-sets | Sets don't have a deterministic order in Python. Why then can you do tuple unpacking on a set in Python? To demonstrate the problem, take the following in CPython 3.10.12: a, b = {"foo", "bar"} # sets `a = "bar"`, `b = "foo"` a, b = {"foo", "baz"} # sets `a = "foo"`, `b = "baz"` I recognize that the literal answer is that Python tuple unpacking works on any iterable. For example, you can do the following: def f(): yield 1 yield 2 a, b = f() But why is there not a check used by tuple unpacking that the thing being unpacked has deterministic ordering? | The core "why" is: Because all features start at -100 points, and nobody thought it was worth preventing sets from being used in this context. Every new feature costs developer resources to write it, write tests for it, code review it, and then maintain it forever. There has to be a significant benefit to the feature to justify it. "Preventing people from doing something that is potentially useful in niche contexts to avoid (possibly accidental) misuse in other niche contexts" is essentially neutral on pros and cons. You could propose a feature that would enable this. If someone came up with a significant benefit that would not only cancel out the -100 points all features start at, but also cancel out the negative points applied because this would definitely break existing code in use right now, then they might deprecate (with a warning) iterable unpacking using sets and other unordered iterables, and in a year or three some new version of Python could eventually forbid it. I don't see it happening. Fundamentally: It is useful for sets to be iterable (even if unordered iteration is bad in your opinion, sorted(someset) relies on being able to iterate sets to produce the list that it then sorts). So that's not going away. Iterable unpacking applies to all iterables; you'd need to special-case unordered iterables to explicitly block it. You'll never prevent all forms of the misuse you seem to dislike (something as simple as a, b = list(theset) will prevent the "misuse" from being detected) There are always going to be valid use cases this needlessly blocks, e.g. checking for a single element set by unpacking, [obj] = theset (with try/except used to handle when it's more than one), or processing elements in a destructive fashion one at a time, when you just need one arbitrary element at a time, e.g. first, *collection = collection (where collection starts as set but becomes a list as a side-effect here). Even if they put in an explicit means of detecting unordered iterables, e.g. an __unordered__ attribute on the class with C level support so it can be checked efficiently, that's still slowing down a highly optimized code path for little to no benefit. So it's a feature that will never handle all cases of "misuse", slows down uses to prevent the misuse, breaks existing code, and is only an arguable benefit in the first place. So they haven't done it, and almost certainly never will do it. | 1 | 3 |
79,359,204 | 2025-1-15 | https://stackoverflow.com/questions/79359204/specify-the-position-of-the-legend-title-variable-name-with-legend-on-top-with | When I move the legend of a Seaborn plot from its default position to the top of the plot, I want to have the variable (hue) name in the same row as the possible variable values. Starting from this plot: import seaborn as sns penguins = sns.load_dataset("penguins") ax = sns.relplot(penguins, x="bill_length_mm",y="flipper_length_mm" , hue="species") I want to move the legend on top of the plot (e.g. like this): ax = sns.relplot(penguins, x="bill_length_mm",y="flipper_length_mm" , hue="species") sns.move_legend(ax, "lower center", bbox_to_anchor=(.5, 1), ncol=4) But I want the variable name (species in the example) to be to the left of its values instead of on top of them. When also setting the style argument, this works fine (ignoring that the legend is too wide in this example): ax = sns.relplot(penguins, x="bill_length_mm",y="flipper_length_mm" , hue="species", style="sex") sns.move_legend(ax, "lower center", bbox_to_anchor=(.5, 1), ncol=7) | You could retrieve the legend, get its bbox and move the title down/left. Here I moved the title left by half of the bbox width plus a fixed extra offset, and down by a fixed offset: import seaborn as sns penguins = sns.load_dataset('penguins') g = sns.relplot(penguins, x='bill_length_mm', y='flipper_length_mm', hue='species') sns.move_legend(g, 'lower center', bbox_to_anchor=(0.5, 1), ncol=7) bbox = g.legend.get_window_extent() g.legend.get_title().set_position((-bbox.width/2-20, -20)) You might have to tweak the fixed values a bit (here 20/20); it is otherwise relatively stable to changes of the figure size. Here with 6x3 inches as an example: | 2 | 1
79,358,829 | 2025-1-15 | https://stackoverflow.com/questions/79358829/how-can-i-make-ruff-check-assume-a-specific-python-version-for-allowed-syntax | I am on Linux, created a Python 3.9 venv, installed ruff in the venv, wrote this code: def process_data(data: list[int]) -> str: match data: case []: return "No data" case [first, *_] if (average := lambda: sum(data) / len(data)) and average() > 50: return f"Data average is high: {average():.2f}, starting with {first}" case _: return f"Processed {len(data)} items." The match syntax is not available in Python 3.9, so running ruff check, I would expect an error. I have tried to set project.requires-python and ruff.target-version but the latter seems to be used only for the formatter according to the docs. What am I missing? | Ruff doesn't support that (yet?). There's an open issue for precisely this problem. Our parser doesn't take into account the Python version aka target-version setting while parsing the source code. This means that we would allow having a match statement when the target Python version is 3.9 or lower. We want to signal this to the user. | 3 | 3 |
79,358,567 | 2025-1-15 | https://stackoverflow.com/questions/79358567/polars-replace-letter-in-string-with-uppercase-letter | Is there any way in polars to replace character just after the _ with uppercase using regex replace? So far I have achieved it using polars.Expr.map_elements. Is there any alternative using native expression API? import re import polars as pl # Initialize df = pl.DataFrame( { "id": [ "accessible_bidding_strategy.id", "accessible_bidding_strategy.name", "accessible_bidding_strategy.owner_customer_id", ] } ) # Transform df = df.with_columns( pl.col("id") .map_elements( lambda val: re.sub(r"_\w", lambda match: match.group(0)[1].upper(), val), return_dtype=pl.String, ) .alias("parsed_id") ) print(df) Output shape: (3, 2) βββββββββββββββββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββββββββββ β id β parsed_id β β --- β --- β β str β str β βββββββββββββββββββββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββββββββββββ‘ β accessible_bidding_strategy.id β accessibleBiddingStrategy.id β β accessible_bidding_strategy.name β accessibleBiddingStrategy.name β β accessible_bidding_strategy.owner_customer_id β accessibleBiddingStrategy.ownerCustomerId β βββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββ | I don't think it's possible to "dynamically" modify the replacement in any of the Polars replace methods. You could create all possible mappings and use .str.replace_many() import string pl.Config(fmt_table_cell_list_len=10, fmt_str_lengths=120) df.with_columns( pl.col("id").str.replace_many( [f"_{c}" for c in string.ascii_lowercase], [f"_{c}" for c in string.ascii_uppercase], ) .str.replace_all("_", "") .alias("parsed_id") ) shape: (3, 2) βββββββββββββββββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββββββββββ β id β parsed_id β β --- β --- β β str β str β βββββββββββββββββββββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββββββββββββ‘ β accessible_bidding_strategy.id β accessibleBiddingStrategy.id β β accessible_bidding_strategy.name β accessibleBiddingStrategy.name β β accessible_bidding_strategy.owner_customer_id β accessibleBiddingStrategy.ownerCustomerId β βββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββ Otherwise you'd probably need some form of .str.split() and .list.eval() df.with_columns( pl.col("id").str.split("_").list.eval( pl.element().first() + ( pl.element().slice(1).str.slice(0, 1).str.to_uppercase() + pl.element().slice(1).str.slice(1) ) .str.join() ) .list.first() .alias("parsed_id") ) | 2 | 3 |
79,358,141 | 2025-1-15 | https://stackoverflow.com/questions/79358141/create-row-for-each-data-in-list-python | I'm trying to write some code to show me some stock stats. For that I need to iterate through a list of stocks in Python and for each one show some details. So far I have this code: import yfinance as yf import pandas as pd tickerlist = ['AAPL', 'MSFT', 'AMZN'] stock_data = [] for stock in tickerlist: info = yf.Ticker(stock).info stock_data.append(info.get("symbol")) stock_data.append(info.get("bookValue")) df = pd.DataFrame(stock_data) df But it shows only one column with all data instead of one line per stock, with data headers. I tweaked the code a little bit and got one row per item, but it repeats the last item only, instead of adding each entry as a row with its values. import yfinance as yf import pandas as pd tickerlist = ['AAPL', 'MSFT', 'AMZN'] for stock in tickerlist: info = yf.Ticker(stock).info df = pd.DataFrame({ 'symbol': info.get("symbol"), 'bookValue': info.get("bookValue")}, index=('0', '1', '2')) df | Instead of adding each piece of info you need as a separate element to stock_data, you should have a list of iterables, where each element of the list contains all of the relevant data for that stock. Note that you can also explicitly provide the column names to get a "nicer" output: import yfinance as yf import pandas as pd tickerlist = ['AAPL', 'MSFT', 'AMZN'] stock_data = [] for stock in tickerlist: info = yf.Ticker(stock).info stock_data.append((info.get("symbol"), info.get("bookValue"))) df = pd.DataFrame(stock_data, columns=("symbol", "bookValue")) print(df) Output: symbol bookValue 0 AAPL 3.767 1 MSFT 38.693 2 AMZN 24.655 | 3 | 3
79,392,937 | 2025-1-28 | https://stackoverflow.com/questions/79392937/django-5-1-postgresql-debian-server | Trying to connect to posgresql base as Django wrote in its docs: https://docs.djangoproject.com/en/5.1/ref/databases/#postgresql-notes DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql", "OPTIONS": { "service": "my_service", "passfile": ".my_pgpass", }, } } I've created 2 files in my home directory .pg_service.conf [my_service] host=/var/run/postgresql user=dbms dbname=dbms_db port=5432 .pgpass /var/run/postgresql:5432:dbms_db:dbms:my_password Such command as test of .pgpass: psql -h localhost -U dbms dbms_db is working. But the connect doesn't work: DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql", "OPTIONS": { "service": "my_service", "passfile": ".pgpass", }, } } with such error Traceback (most recent call last): File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/django/db/backends/base/base.py", line 279, in ensure_connection self.connect() File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/django/db/backends/base/base.py", line 256, in connect self.connection = self.get_new_connection(conn_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/django/utils/asyncio.py", line 26, in inner return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 332, in get_new_connection connection = self.Database.connect(**conn_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/psycopg/connection.py", line 119, in connect raise last_ex.with_traceback(None) psycopg.OperationalError: connection failed: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: Peer authentication failed for user "dbms" ... File "/home/www/projects/amodulesu/venv/lib/python3.11/site-packages/psycopg/connection.py", line 119, in connect raise last_ex.with_traceback(None) django.db.utils.OperationalError: connection failed: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: Peer authentication failed for user "dbms" whats wrong with my code? Trying to use export vars in debian - but it doesnt work too.. Here is pg_hba.conf | Cant use 'service': 'my_service', 'passfile': '.pgpass', So use decouple and tradional DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'dbms_db', 'USER': 'dbms', 'PASSWORD': DB_PASS, 'HOST': '127.0.0.1', 'PORT': '5432', } } | 1 | 0 |
79,394,429 | 2025-1-28 | https://stackoverflow.com/questions/79394429/performance-and-memory-duplication-when-using-multiprocessing-with-a-multiple-ar | I am having difficulties understanding logic of optimizing multip-rocessing in Python 3.11. Context: I am running on Ubuntu 22.04, x86 12 cores / 32 threads, 128 GB memory Concerns: (Please refer to code and result below). Both multiprocessing function using a local df (using map+partial or starmap) take a lot more time than multiprocessing using the global df. It seemed ok to me but ... ... by activating the sleep(10) in both fun, I noticed that every spawned process takes around 1.2 GB of memory (using system monitor in Ubuntu), so I guess the df must have been duplicated even when declared global. My question (eventually :-)) Why multi-processings using local df take so much time compared to the one using a global df if memory is copied in each process (whether it is local or global) ? Or maybe system monitor in Ubuntu report the memory accessible to a process and not necessarily its intrinsic footprint ? (shared memory would then appear duplicated in several processes and the sum of all these processes could be greater than my system memory) ? Thank you for your enlightenment Code from time import sleep from multiprocessing import Pool from functools import partial from time import time import numpy as np import pandas as pd def _fun_local(df, imax: int): # sleep(10) df_temp = df[:imax] return df_temp def _fun_global(imax: int): global mydf # sleep(10) df_temp = mydf[:imax] return df_temp def dummy(): # Create a 1 GB df global mydf n_rows = 13_421_773 n_cols = 10 data = np.random.rand(n_rows, n_cols) mydf = pd.DataFrame(data, columns=[f'col_{i}' for i in range(n_cols)]) # check memory footprint print('mydf', mydf.memory_usage(deep=True).sum() / 1024 / 1024) imaxs = range(1, 33) # With local df, call function with partial start = time() with Pool() as pool: fun = partial(_fun_local, mydf) results_partial = pool.map(fun, imaxs) print(f'local-partial took: {time()-start}') # With local df, call function with starmap start = time() with Pool() as pool: results_starmap = pool.starmap(_fun_local, [(mydf, imax) for imax in imaxs]) print(f'local-starmap took: {time()-start}') # With global mydf start = time() with Pool() as pool: results_global = pool.map(_fun_global, imaxs) print(f'global took: {time()-start}') return results_local, results_global if __name__ == '__main__': results = dummy() Result: mydf (MB): 1024.0001411437988 local-partial took: 89.05407881736755 local-starmap took: 88.06274890899658 global took: 0.09803605079650879 | When you fork a child process (in this case you are forking multiprocessing.count() processes) it is true that the child process inherits the memory of the forking process. But copy-on-write semantics is used such that when that inherited memory is modified, it is first copied resulting in increased memory utilization. Even though the child process is not explicitly updating the global dataframe, once it is referenced it gets copied because Python is using reference counts for memory management and the reference count for the dataframe will get incremented. Now consider the following code. 
from multiprocessing import Pool import os import time SLEEP = False some_data = [0, 1, 2, 3, 4, 5, 6, 7] def worker(i): if SLEEP: time.sleep(.5) return some_data[i], os.getpid() def main(): with Pool(8) as pool: results = pool.map(worker, range(8)) print('results =', results) if __name__ == '__main__': main() Prints: results = [(0, 35988), (1, 35988), (2, 35988), (3, 35988), (4, 35988), (5, 35988), (6, 35988), (7, 35988)] We see that it a single pool process (PID=35988) processed all 8 submitted tasks. This is because the task was incredibly short running resulting in a single pool process being able to pull from the pool's task queue all 8 tasks before the other pool processes finished starting and attempted to process tasks. This also means that global some_data was only referenced by a single pool process and therefore only copied once. If now we change SLEEP to True, the output is now: results = [(0, 45324), (1, 4828), (2, 19760), (3, 8420), (4, 41944), (5, 58220), (6, 46340), (7, 21628)] Each of the 8 pool processes processed one task and therefore some_data was copied 8 times. What Does This Mean? Your _fun_global worker function is also short running if it is not sleeping and probably only a single pool process is processing all submitted tasks resulting in the dataframe being copied only once. But if you do sleep, then each pool process will get to process a task and the dataframe will be copied N times where N is the size of the pool (os.cpu_count()). But even when the inherited memory must be copied, this is a relatively fast operation compared to the situation where you are passing a local dataframe as an argument, in which case the copying is done by using pickle to first serialize the dataframe in the main process and pickle again to de-serialize the dataframe in the child process. Summary Using a local dataframe is is slower because copying a dataframe using pickle is slower than just doing a byte-for-byte copy. Using a global dataframe will result in lower memory utilization if your worker function is short-running so that a single pool process handles all submitted tasks resulting in the copy-on-write occurring only once. Use Shared Memory To Reduce Memory Utilization You can reduce memory by sharing a single copy of the dataframe among the processes: Construct your data numpy array. Copy a flattened version of the data array into a shared memory array shared_array. Construct a dataframe mydf based on the shared array. The processing pool is created specifying an initializer function that is executed once for each pool process. The initializer is passed the shared array from which it can reconstruct the sharable datframe. from time import sleep import multiprocessing import numpy as np import pandas as pd def np_array_to_shared_array(np_array, lock=False): """Construct a sharable array from a numpy array. 
Specify lock=True if multiple processes might be updating the same array element.""" shared_array = multiprocessing.Array('B', np_array.nbytes, lock=lock) buffer = shared_array.get_obj() if lock else shared_array arr = np.frombuffer(buffer, np_array.dtype) arr[:] = np_array.flatten(order='C') return shared_array def shared_array_to_np_array(shared_array, shape, dtype): """Reconstruct a numpy array from a shared array.""" buffer = ( shared_array.get_obj() if getattr(shared_array, 'get_obj', None) else shared_array ) return np.ndarray(shape, dtype=dtype, buffer=buffer) def df_from_shared_array(shared_array: multiprocessing.Array, shape: tuple[int], dtype: str): """Construct the dataframe based on a sharable array.""" np_array = shared_array_to_np_array(shared_array, shape, dtype) return pd.DataFrame(np_array, columns=[f'col_{i}' for i in range(shape[1])]) def init_pool(shared_array: multiprocessing.Array, shape: tuple[int], dtype: str): """This is executed for each pool process and for each creates a global variable mydf.""" global mydf mydf = df_from_shared_array(shared_array, shape, dtype) def _fun(imax: int): sleep(1) return mydf[:imax] def make_data(): n_rows = 13_421_773 n_cols = 10 data = np.random.rand(n_rows, n_cols) shared_array = np_array_to_shared_array(data) mydf = df_from_shared_array(shared_array, data.shape, data.dtype) shape, dtype = data.shape, data.dtype del data # We no longer need this return shared_array, shape, dtype, mydf def dummy(): shared_array, shape, dtype, mydf = make_data() imaxs = range(1, shape[1] + 1) with multiprocessing.Pool( shape[1], initializer=init_pool, initargs=(shared_array, shape, dtype) ) as pool: results = pool.map(_fun, imaxs) for result in results: print(result, end='\n\n') if __name__ == '__main__': dummy() | 1 | 1 |
79,394,750 | 2025-1-28 | https://stackoverflow.com/questions/79394750/how-to-enable-seperate-action-for-double-click-and-drag-and-drop-for-python-file | I have a simple Python script and I want to enable dragging and dropping files onto the Python script in Windows 11. This works, but only if I set the Python launcher as the default application for Python file types. I want to launch my editor when double clicking. My idea is to create a batch file as the default application for Python scripts, launch my editor if no parameters are passed, and launch it with the Python launcher if it has parameters. I cannot get it to fully work: when opening the editor, a command window is created that doesn't close automatically. test drag.py: import sys import os if __name__ == "__main__": test_file_name = os.path.join(os.path.split(sys.argv[0])[0], "test.txt") if len(sys.argv) > 1: with open(test_file_name, 'w', newline='\n') as f: f.write("file names:") f.write("\n") for file_path in sys.argv[1:]: f.write(file_path) f.write("\n") else: with open(test_file_name, 'w', newline='\n') as f: f.write("no file names") f.write("\n") C:\drag\python_drag.cmd: @echo off if not "%~2"=="" ( :: Arguments detected, so files were dragged on it. Calling it with python. START /B python.exe %* ) else ( :: No arguments detected, so double clicked. Opening it with editor (vscode) START /B code %1 ) Open command key is updated via command line: reg add "HKEY_CLASSES_ROOT\Python.File\shell\open\command" /ve /t REG_EXPAND_SZ /d "\"C:\drag\python_drag.cmd\" \"%L\" %*" /f Set the drop handler if needed (if installing the latest Python version with the py launcher it should be set). enable_python_drag_and_drop.reg: Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler] @="{60254CA5-953B-11CF-8C96-00AA00B8708C}" When starting Python, a window briefly opens (you can only see the outline) and then it closes automatically. When starting VS Code, it opens an additional window that doesn't close. How can I fix it? Edit: I figured out how to at least minimize the window: START /w /min cmd /c "call code %1" or: START /w /min cmd /c "call code.cmd %1" | code.exe has been moved out of the bin folder; inside the bin folder there are only code and code.cmd, and code.exe is now one folder up. Running code.exe directly solves the problem: no extra window is created. Here is C:\drag\python_drag.cmd: @echo off if not "%~2"=="" ( rem Arguments detected, so files were dragged on it. Calling it with python. START "Python Script Execution" python.exe %* ) else ( rem No arguments detected, so double clicked. Opening it with editor (vscode) START "" "%LOCALAPPDATA%\Programs\Microsoft VS Code\code.exe" %1 ) And it's added with this command: reg add "HKEY_CLASSES_ROOT\Python.File\shell\open\command" /ve /t REG_SZ /d "\"C:\drag\python_drag.cmd\" \"%L\" %*" /f | 1 | 1
79,394,708 | 2025-1-28 | https://stackoverflow.com/questions/79394708/how-to-convert-an-imagehash-to-a-numpy-array | I use python / imagehash.phash to search for photos similar to a given one, i.e. photos so that the hamming distance is zero. Now I also would like to detect photos that are flipped or rotated copies - the hamming distance is > 0. Instead of flipping or rotating the photo and then calculating an new pHash I would like to derive the pHash of a rotated / flipped photo directly from the original pHash. My idea is to convert the hash into an numpy array, use np.flip etc and convert the array back into a pHash. Questions: How can I convert the hash into an numpy array and back? is there a better way to search for rotated / flipped images given the hash of the original photo? | From reading the source code, I found that you can convert between an imagehash.ImageHash object and a binary array using .hash and the imagehash.ImageHash() constructor. from PIL import Image import imagehash img = Image.open("house.jpg") image_hash_obj = imagehash.phash(img) print(image_hash_obj) hash_array = image_hash_obj.hash print(hash_array) hash_array[0, 0] = False image_hash_obj2 = imagehash.ImageHash(hash_array) print(image_hash_obj2) Output: fde1921f6c20b34c [[ True True True True True True False True] [ True True True False False False False True] [ True False False True False False True False] [False False False True True True True True] [False True True False True True False False] [False False True False False False False False] [ True False True True False False True True] [False True False False True True False False]] 7de1921f6c20b34c | 1 | 1 |
79,393,979 | 2025-1-28 | https://stackoverflow.com/questions/79393979/distribute-value-to-fill-and-unfill-based-on-a-given-condition | The problem: I want to distribute a value that can be positive or negative from one row into multiple rows, where each row can only contain a specific amount, if the value to distribute is positive it fills rows, if it is negative it "unfills". It is possible to fill/unfill partially a row and the order of filling do matter, the first empty or not totally full row is the first you fill, the last filled or partially filled is the first you unfill or partially unfill. The expected behavior: Here is an example of what I expect: first comes the table of payments where there will be a single record per acc and its respective val to distribute. | acc | val | |-----|-------| | 1 | -100 | | 2 | -123 | | 3 | -75 | | 4 | -300 | | 5 | -77 | | 6 | -500 | | 7 | 111 | | 8 | 123 | | 9 | 300 | | 10 | 75 | then here is the table of bills to fill or unfill: | acc | val | sum | |-----|------|------| | 1 | 100 | 100 | | 1 | 100 | 50 | | 1 | 100 | 0 | | 2 | 123 | 123 | | 2 | 150 | 0 | | 2 | 123 | 0 | | 3 | 70 | 70 | | 3 | 70 | 5 | | 4 | 150 | 150 | | 4 | 100 | 100 | | 4 | 150 | 150 | | 5 | 77 | 78 | | 6 | 500 | 500 | | 6 | 500 | 500 | | 6 | 500 | 500 | | 6 | 500 | 500 | | 7 | 100 | 10 | | 7 | 100 | 0 | | 7 | 100 | 0 | | 8 | 123 | 0 | | 8 | 123 | 0 | | 8 | 123 | 0 | | 9 | 100 | 0 | | 9 | 150 | 0 | | 9 | 200 | 0 | | 10 | 75 | 75 | | 10 | 75 | 0 | while this is my expected output: | acc | val | sum | |-----|------|------| | 1 | 100 | 50 | | 1 | 100 | 0 | | 1 | 100 | 0 | | 2 | 123 | 0 | | 2 | 150 | 0 | | 2 | 123 | 0 | | 3 | 70 | 0 | | 3 | 70 | 0 | | 4 | 150 | 100 | | 4 | 100 | 0 | | 4 | 150 | 0 | | 5 | 77 | 1 | | 6 | 500 | 500 | | 6 | 500 | 500 | | 6 | 500 | 500 | | 6 | 500 | 0 | | 7 | 100 | 100 | | 7 | 100 | 22 | | 7 | 100 | 0 | | 8 | 123 | 123 | | 8 | 123 | 0 | | 8 | 123 | 0 | | 9 | 100 | 100 | | 9 | 150 | 150 | | 9 | 200 | 50 | | 10 | 75 | 75 | | 10 | 75 | 75 | What I have tried: this is my current approach but i have been struggling to get it to work, column res is where i am storing the expected sum import polars as pl pl.Config.set_tbl_rows(100) pl.Config.set_tbl_cols(20) left = pl.DataFrame({ "acc": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "val": [-100, -123, -75, -300, -77, -500, 111, 123, 300, 75] }) right = pl.DataFrame({ "acc": [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 6, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10], "val": [100, 100, 100, 123, 150, 123, 70, 70, 150, 100, 150, 77, 500, 500, 500, 500, 100, 100, 100, 123, 123, 123, 100, 150, 200, 10, 10], "sum": [100, 50, 0, 123, 0, 0, 70, 5, 150, 100, 150, 78, 500, 500, 500, 500, 10, 0, 0, 123, 0, 0, 0, 0, 0, 0, 0] }) df = right.join(left.rename({'val': 'bill'}), on='acc') bill = pl.col('bill') available = pl.col('val').cum_sum().over('acc') covered = pl.col('sum').cum_sum(reverse=True).over('acc') val = pl.when(bill < 0).then(covered + bill).otherwise(available - bill) results = df.with_columns(bill=bill, available=available, covered=covered, res=val) print(results) | For each group defined by acc, we can compute the total value as the sum of all payments (sum column in right) and the new payment (val column in left) in the group. Given this total amount, we can distribute it across rows in the group. Particularly, we take the total value and subtract the amount that could've already been distributed across previous rows in the group (computed as used_expr). Clipping is used to ensure a row's value is always between 0 and val. 
For instructive purposes, I've also added the columns total and used to the dataframe below. However, these are optional to include and can be dropped by removing the first .with_columns(...)-block. total_expr = pl.col("sum").sum() + pl.col("val_new") used_expr = pl.col("val").cum_sum().shift(1).fill_null(0) ( right .join(left, on="acc", suffix="_new") .with_columns( total_expr.over("acc").alias("total"), used_expr.over("acc").alias("used"), ) .with_columns( (total_expr - used_expr).clip(0, "val").over("acc").alias("res") ) ) Note. The output differs from the expected outcome only in cases where the input dataframes don't match the tables shown in the question. shape: (27, 7) βββββββ¬ββββββ¬ββββββ¬ββββββββββ¬ββββββββ¬βββββββ¬ββββββ β acc β val β sum β val_new β total β used β res β β --- β --- β --- β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β i64 β i64 β i64 β βββββββͺββββββͺββββββͺββββββββββͺββββββββͺβββββββͺββββββ‘ β 1 β 100 β 100 β -100 β 50 β 0 β 50 β β 1 β 100 β 50 β -100 β 50 β 100 β 0 β β 1 β 100 β 0 β -100 β 50 β 200 β 0 β β 2 β 123 β 123 β -123 β 0 β 0 β 0 β β 2 β 150 β 0 β -123 β 0 β 123 β 0 β β 2 β 123 β 0 β -123 β 0 β 273 β 0 β β 3 β 70 β 70 β -75 β 0 β 0 β 0 β β 3 β 70 β 5 β -75 β 0 β 70 β 0 β β 4 β 150 β 150 β -300 β 100 β 0 β 100 β β 4 β 100 β 100 β -300 β 100 β 150 β 0 β β 4 β 150 β 150 β -300 β 100 β 250 β 0 β β 5 β 77 β 78 β -77 β 1 β 0 β 1 β β 6 β 500 β 500 β -500 β 1500 β 0 β 500 β β 6 β 500 β 500 β -500 β 1500 β 500 β 500 β β 6 β 500 β 500 β -500 β 1500 β 1000 β 500 β β 6 β 500 β 500 β -500 β 1500 β 1500 β 0 β β 7 β 100 β 10 β 111 β 121 β 0 β 100 β β 7 β 100 β 0 β 111 β 121 β 100 β 21 β β 7 β 100 β 0 β 111 β 121 β 200 β 0 β β 8 β 123 β 123 β 123 β 246 β 0 β 123 β β 8 β 123 β 0 β 123 β 246 β 123 β 123 β β 8 β 123 β 0 β 123 β 246 β 246 β 0 β β 9 β 100 β 0 β 300 β 300 β 0 β 100 β β 9 β 150 β 0 β 300 β 300 β 100 β 150 β β 9 β 200 β 0 β 300 β 300 β 250 β 50 β β 10 β 10 β 0 β 75 β 75 β 0 β 10 β β 10 β 10 β 0 β 75 β 75 β 10 β 10 β βββββββ΄ββββββ΄ββββββ΄ββββββββββ΄ββββββββ΄βββββββ΄ββββββ | 2 | 1 |
79,392,129 | 2025-1-27 | https://stackoverflow.com/questions/79392129/how-to-make-an-asynchronous-python-call-from-within-scala | I am trying to access the Python Client V4 for DyDx from within a scala project. I integrated the previous V3 using scalapy library. But V4 contains asynchronous calls which I don't know how I should handle. So for example the short term order composite example's test function begins with the following lines: node = await NodeClient.connect(TESTNET.node) indexer = IndexerClient(TESTNET.rest_indexer) I can access the connect call as follows in my scala code: val network = py.module("dydx_v4_client.network") val client = py.module("dydx_v4_client.node.client") If I print out the value for client it gives me <coroutine object NodeClient.connect at 0x7fe18e87a340> The connect call in NodeClient is defined as asynchronous: async def connect(config: NodeConfig) -> Self: ... How can I actually execute this coroutine object? | A solution I found is to use the asyncio.run function which can be used to run coroutine: val asyncio = py.module("asyncio") val node = asyncio.run(client.NodeClient.connect(network.TESTNET.node)) | 3 | 0 |
79,393,859 | 2025-1-28 | https://stackoverflow.com/questions/79393859/get-pathlib-path-with-symlink | Let's say I open a python console inside /home/me/symlink/dir, which is actually a shortcut to /home/me/path/to/the/dir. In this console I execute the following code: from pathlib import Path path = Path.cwd() print(path) # "/home/me/path/to/the/dir" So cwd() actually resolves the symlinks and get the absolute path automatically and I am searching how to avoid this behavior. If the path with symlink is provided at the Path object declaration, it seems to work as I would like: from pathlib import Path path = Path("/home/me/symlink/dir") print(path) # "/home/me/symlink/dir" Is there a way to get the current working directory keeping the symlink other than providing the path at object instantiation ? | you'll need to use os.getcwd() . Path.cwd() internally uses os.getcwdb() which resolves symlinks, while os.getcwd() preserves them. from pathlib import Path import os path = Path(os.getcwd()) print(path) If you need work with symlinks in general. path.is_symlink() - check if a path is a symlink path.resolve() - resolve symlinks to get the actual path path.readlink() - get the target of a symlink path = Path(os.getcwd()) if path.is_symlink(): real_path = path.resolve() # "/home/me/path/to/the/dir" symlink_target = path.readlink() # Gets the symlink target Edit 1: For your test case, The PWD environment variable often contains the path with preserved symlinks, as it's typically maintained by the shell. import os from pathlib import Path def get_symlinked_cwd(): return Path(os.getenv("PWD", os.getcwd())) symlink_path = get_symlinked_cwd() print(symlink_path) | 2 | 2 |
79,391,105 | 2025-1-27 | https://stackoverflow.com/questions/79391105/calculate-a-monthly-minimum-mean-and-maximum-on-daily-temperature-data-for-febr | I'm fairly new to Python and I'm trying to calculate the minimum, average and maximum monthly temperature from daily data for February. I'm having a bit of trouble applying my code from other months to February. Here is my code for the 31-day months : import xarray as xr import numpy as np import copernicusmarine DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m", minimum_longitude = -1.68, maximum_longitude = -1.56, minimum_latitude = 49.63, maximum_latitude = 49.67, minimum_depth = 0, maximum_depth = 0) var_arr = np.zeros((341,len(DS['depth']),len(DS['latitude']),len(DS['longitude']))) ind_time = -1 for y in range(2010,2021): ind_time += 1 print(y) start_rangedate = "%s"%y+"-01-01" end_rangedate = "%s"%y+"-01-31" subset_thetao = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[31*ind_time:31*(ind_time+1),:,:,:] = subset_thetao.data minimum = np.nanmin(var_arr) print(minimum) moyenne = np.mean(var_arr) print(moyenne) maximum = np.nanmax(var_arr) print(maximum) # 31 * 11 (years) = 341 It works well. For February I first tried this : DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m", minimum_longitude = -1.68, maximum_longitude = -1.56, minimum_latitude = 49.63, maximum_latitude = 49.67, minimum_depth = 0, maximum_depth = 0) years_feb_28 = [2010,2011,2013,2014,2015,2017,2018,2019] years_feb_29 = [2012,2016,2020] var_arr = np.zeros((311,len(DS['depth']),len(DS['latitude']),len(DS['longitude']))) ind_time_28 = -1 ind_time_29 = -1 for y in range(2010,2021): print(y) start_rangedate = "%s"%y+"-02-01" if y in years_feb_28: ind_time_28 += 1 end_rangedate = "%s"%y+"-02-28" subset_thetao1 = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[28*ind_time_28:28*(ind_time_28+1),:,:,:] = subset_thetao1.data if y in years_feb_29: ind_time_29 += 1 end_rangedate = "%s"%y+"-02-29" subset_thetao2 = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[29*ind_time_29:29*(ind_time_29+1),:,:,:] = subset_thetao2.data minimum = np.nanmin(var_arr) print(minimum) maximum = np.nanmax(var_arr) print(maximum) moyenne = np.mean(var_arr) print(moyenne) # (8 x 28) + (3 x 29) = 311 It works, but the values seem wrong to me. The result is : minimum : 0.0 mean : 10.118808567523956 maximum :6.510576634161725 I tried with a single ind_time. 
DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m", minimum_longitude = -1.68, maximum_longitude = -1.56, minimum_latitude = 49.63, maximum_latitude = 49.67, minimum_depth = 0, maximum_depth = 0) years_feb_28 = [2010,2011,2013,2014,2015,2017,2018,2019] years_feb_29 = [2012,2016,2020] var_arr = np.zeros((311,len(DS['depth']),len(DS['latitude']),len(DS['longitude']))) ind_time = -1 for y in range(2010,2021): print(y) start_rangedate = "%s"%y+"-02-01" if y in years_feb_28: ind_time += 1 end_rangedate = "%s"%y+"-02-28" subset_thetao1 = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[28*ind_time:28*(ind_time+1),:,:,:] = subset_thetao1.data if y in years_feb_29: ind_time += 1 end_rangedate = "%s"%y+"-02-29" subset_thetao2 = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[29*ind_time:29*(ind_time+1),:,:,:] = subset_thetao2.data minimum = np.nanmin(var_arr) print(minimum) maximum = np.nanmax(var_arr) print(maximum) moyenne = np.mean(var_arr) print(moyenne) But I get this error message without understanding where the value 21 comes from : Cell In[7], line 38 var_arr[29*ind_time:29*(ind_time+1),:,:,:] = subset_thetao2.data ValueError: could not broadcast input array from shape (29,1,1,2) into shape (21,1,1,2) Someone told me that the data taken into account may stop at year-02-28 T:00:00:00 (for 28-day years) and year-02-29 T:00:00:00 (for 29-day years) and that the code doesn't take the last day into account. So I tried to extend the end_rangedate to year-03-01 but I get this : Cell In[8], line 33 var_arr[28*ind_time:28*(ind_time+1),:,:,:] = subset_thetao1.data ValueError: could not broadcast input array from shape (29,1,1,2) into shape (28,1,1,2) Could someone explain to me what I'm doing wrong ? | As I said in my comments, the problems in your different attempts come from the indexes you use for var_arr. In the 1st case, with 2 different ind_time_.. indexes, the data is superposed at the start of var_arr, like in the following figure; this both causes lost data and many zeroes left at the end of the array, which affects the minimum and average. In the 2nd case, the same index is used for 28-day and 29-days months, which creates an offset between months for leap and non leap years, causing both superpositions and gaps (see the rough figure below); but the main problem is that too many "slots" (for days) are consumed, which explains the 8 missing days for feb 2020. 
Here's a fix consisting of calculating for each year the start and end indexes: DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m", minimum_longitude = -1.68, maximum_longitude = -1.56, minimum_latitude = 49.63, maximum_latitude = 49.67, minimum_depth = 0, maximum_depth = 0) years_feb_28 = [2010,2011,2013,2014,2015,2017,2018,2019] years_feb_29 = [2012,2016,2020] var_arr = np.zeros((311,len(DS['depth']),len(DS['latitude']),len(DS['longitude']))) end_index = 0 for y in range(2010,2021): print(y) start_index = end_index start_rangedate = "%s"%y+"-02-01" end_index = start_index + 28 end_rangedate = "%s"%y+"-02-28" if y in years_feb_29: end_index = start_index + 29 end_rangedate = "%s"%y+"-02-29" subset_thetao = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[start_index:end_index,:,:,:] = subset_thetao.data minimum = np.nanmin(var_arr) print(minimum) maximum = np.nanmax(var_arr) print(maximum) moyenne = np.mean(var_arr) print(moyenne) And a shorter version getting rid of the if ... else: DS = copernicusmarine.open_dataset(dataset_id="cmems_mod_glo_phy_my_0.083deg_P1D-m", minimum_longitude = -1.68, maximum_longitude = -1.56, minimum_latitude = 49.63, maximum_latitude = 49.67, minimum_depth = 0, maximum_depth = 0) var_arr = np.zeros((311,len(DS['depth']),len(DS['latitude']),len(DS['longitude']))) end_index = 0 for y in range(2010,2021): print(y) start_index = end_index feb_days = 28 + ((y % 4 == 0 and y % 100 != 0) or (y % 400 == 0)) start_rangedate = "%s"%y+"-02-01" end_index = start_index + feb_days end_rangedate = f"{y}-02-{feb_days}" subset_thetao = DS.thetao.sel(time = slice(start_rangedate, end_rangedate)) var_arr[start_index:end_index,:,:,:] = subset_thetao.data minimum = np.nanmin(var_arr) print(minimum) maximum = np.nanmax(var_arr) print(maximum) moyenne = np.mean(var_arr) print(moyenne) | 1 | 1
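A note on the leap-year test in the shorter version: the standard library can compute February's length directly, which avoids the hand-written divisibility logic. A minimal sketch (only the loop bounds from the answer are reused; everything else is illustrative):

```python
import calendar

for y in range(2010, 2021):
    # monthrange returns (weekday_of_first_day, number_of_days) for the given month
    feb_days = calendar.monthrange(y, 2)[1]   # 28 or 29
    start_rangedate = f"{y}-02-01"
    end_rangedate = f"{y}-02-{feb_days}"
    print(start_rangedate, end_rangedate)
```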
79,393,186 | 2025-1-28 | https://stackoverflow.com/questions/79393186/convolving-with-a-gaussian-kernel-vs-gaussian-blur | While looking for a way to generate spatially varying noise, I came across this answer, which is able to do what I wanted. But I am getting confused about how the code works. From what I understand, the first step of the code generates a gaussian kernel: import numpy as np import scipy.signal import matplotlib.pyplot as plt # Compute filter kernel with radius correlation_scale (can probably be a bit smaller) correlation_scale = 150 x = np.arange(-correlation_scale, correlation_scale) y = np.arange(-correlation_scale, correlation_scale) X, Y = np.meshgrid(x, y) print(X.shape,Y.shape) dist = np.sqrt(X*X + Y*Y) filter_kernel = np.exp(-dist**2/(2*correlation_scale)) which when visualized looks as follows: The second step of the code generates a random noise grid: n = 512 noise = np.random.randn(n, n) that looks like: The third step convolves the random noise generated in step 2 with the filter generated in step 1. noise1 = scipy.signal.fftconvolve(noise, filter_kernel, mode='same') and the output of this step looks like this: My question is how does the output of step 3 end up looking like this instead of a smoothed out version of the random noise? Isn't convolution with a Gaussian kernel the same as applying a Gaussian blur? For instance, if I apply a Gaussian filter on the random noise generated, the output would look like this: from scipy.ndimage import gaussian_filter noise = gaussian_filter(noise , sigma=1, radius=10) Why are the last two images so different from each other? | I've played a bit with the code you provided, and this seems to be related to the fact that you use standard deviations for your gaussian kernel that are very different, your correlation_scale is 150 in the first example whereas sigma is one in your second example. If I take similar values for both, I get similar results. Unrelated, but feel free to truncate your gaussian kernel a bit (noise_trunc=noise[len(X)//2-50:len(X)//2+50, len(Y)//2-50:len(Y)//2+50]), or at least to set the very low coeffs to 0 (noise[noise<1e-5]=0), to greatly increase the speed of your computations | 1 | 1 |
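To make the sigma mismatch concrete, here is a small sketch that builds an explicit Gaussian kernel with the same standard deviation that gaussian_filter is given (sigma = sqrt(150), since the question's kernel is exp(-dist**2 / (2*150))) and normalizes it so both results are on the same scale; the two smoothed fields should then look essentially the same away from the image edges. Array sizes are illustrative.

```python
import numpy as np
import scipy.signal
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
noise = rng.standard_normal((512, 512))

sigma = np.sqrt(150)                       # the question's kernel has variance 150
x = np.arange(-150, 150)
X, Y = np.meshgrid(x, x)
kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
kernel /= kernel.sum()                     # normalize so amplitudes are comparable

smoothed_conv = scipy.signal.fftconvolve(noise, kernel, mode='same')
smoothed_filt = gaussian_filter(noise, sigma=sigma)

# Correlation between the two results (edge handling differs, so high but not exactly 1)
print(np.corrcoef(smoothed_conv.ravel(), smoothed_filt.ravel())[0, 1])
```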
79,393,295 | 2025-1-28 | https://stackoverflow.com/questions/79393295/filter-dataframe-by-nearest-date | I am trying to filter my Polars DataFrame for dates that are nearest to a given date. For example: import polars import datetime data = { "date": ["2025-01-01", "2025-01-01", "2025-01-01", "2026-01-01"], "value": [1, 2, 3, 4], } df = polars.DataFrame(data).with_columns([polars.col("date").cast(polars.Date)]) shape: (4, 2) ββββββββββββββ¬ββββββββ β date β value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2025-01-01 β 1 β β 2025-01-01 β 2 β β 2025-01-01 β 3 β β 2026-01-01 β 4 β ββββββββββββββ΄ββββββββ Given a date, say: date = datetime.date(2024, 12, 31) I want to filter the DataFrame for rows where the date column only includes records that are closest to my required date. I know that I can do the following: result = df.with_columns( diff=(polars.col("date") - date).abs() ).filter( polars.col("diff") == polars.min("diff") ) shape: (3, 3) ββββββββββββββ¬ββββββββ¬βββββββββββββββ β date β value β diff β β --- β --- β --- β β date β i64 β duration[ms] β ββββββββββββββͺββββββββͺβββββββββββββββ‘ β 2025-01-01 β 1 β 1d β β 2025-01-01 β 2 β 1d β β 2025-01-01 β 3 β 1d β ββββββββββββββ΄ββββββββ΄βββββββββββββββ Is there a more succinct way to achieve this (without creating a new column, for example)? | You don't need to add the temporary column, just filter directly: df.filter((m:=(pl.col('date')-date).abs()).min() == m) Or, without the walrus operator: diff = (pl.col('date')-date).abs() df.filter(diff.min() == diff) Output: ββββββββββββββ¬ββββββββ β date β value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2025-01-01 β 1 β β 2025-01-01 β 2 β β 2025-01-01 β 3 β ββββββββββββββ΄ββββββββ | 4 | 3 |
79,393,122 | 2025-1-28 | https://stackoverflow.com/questions/79393122/transforming-data-with-implicit-categories-in-header-with-pandas | I have a table like: | 2022 | 2022 | 2021 | 2021 class | A | B | A | B -----------|------|------|------|------ X | 1 | 2 | 3 | 4 Y | 5 | 6 | 7 | 8 How can I transform it to following form? year | category | class | value ---------------------------------- 2022 | A | X | 1 2022 | A | Y | 5 2022 | B | X | 2 2022 | B | Y | 6 2021 | A | X | 3 2021 | A | Y | 7 2021 | B | X | 4 2021 | B | Y | 8 I tried various combinations of pd.melt with no success.. Thx in advance! | You could melt with ignore_index=False and rename_axis/rename: out = (df.rename_axis(columns=['year', 'category']) .melt(ignore_index=False) .reset_index() ) Or: out = (df.melt(ignore_index=False) .rename(columns={'variable_0': 'year', 'variable_1': 'category'}) .reset_index() ) Output: class year category value 0 X 2022 A 1 1 Y 2022 A 5 2 X 2022 B 2 3 Y 2022 B 6 4 X 2021 A 3 5 Y 2021 A 7 6 X 2021 B 4 7 Y 2021 B 8 Reproducible input: df = pd.DataFrame.from_dict({'index': ['X', 'Y'], 'columns': [('2022', 'A'), ('2022', 'B'), ('2021', 'A'), ('2021', 'B')], 'data': [[1, 2, 3, 4], [5, 6, 7, 8]], 'index_names': ['class'], 'column_names': [None, None]}, orient='tight') | 2 | 2 |
79,392,980 | 2025-1-28 | https://stackoverflow.com/questions/79392980/is-there-a-method-to-check-generic-type-on-instancing-time-in-pyhon | I suppose how it can be check what type specified on Generic types in __init__ ? On python 3.10, it could not be. On first, I found this page: Instantiate a type that is a TypeVar and make some utility for get __orig_class__. And test on instancing time, there is no __orig_class__ attribute in that timing on self. Then I searched again I found this issue page: https://github.com/python/typing/issues/658. I recognized that any python might have this limit. Is there any other way? The following is sample code (not original one). This code cannot be executed for the reasons stated above. (I haven't run it, so there may be some bugs too.) from abc import ABC, abstractmethod from typing import Iterable, Any, TypeVar, Generic class BaseLoader(ABC): """ `BaseLoader` loads something and return some expected object. - load a file from path(str) - convert a raw data to some expected data (list like) - implement `load` in inherited class. """ def __init__(self, path: str): self._path = path @abstractmethod def load(self) -> Iterable: pass class SimpleLineLoader(BaseLoader): """ `SimpleTextBaseLoader` loads text data and return some converted lines. - load a utf-8 text file. - convert to a list of text lines. - implement `_convert_line` in inherited class. """ def load(self): with open(self._path, "r") as f: lines = f.readlines() converted_lines = [self._convert_line(line) for line in lines] return converted_lines @abstractmethod def _convert_line(self, line: str) -> Any: """ `_convert_line` convert line string to another one. """ pass class RawLineLoader(SimpleLineLoader): """ `RawLineLoader` loads raw data. nothing to do. """ def _convert_line(self, line): """ nothing to do. just return raw value. """ return line class ElementTrimmedLineLoader(SimpleLineLoader): """ `ElementTrimmedLineLoader` loads lines and regards each line as comma(,) separated data. it trims each tokens in the line and get back to the line. - trims each element separated with comma in each line. - sample: >>> from tempfile import TemporaryDirectory >>> data = ["a, b, d ", "d , e , f"] >>> with TemporaryDirectory() as dname: >>> with open(dname, "w") as f: >>> f.writelines(data) >>> loader = ElementTrimmedLineLoader(f.name) >>> print(loader.load()) [['a', 'b', 'd'], ['d', 'e', 'f']] """ def __init__(self, path): super().__init__(path) self._delim = "," def _convert_line(self, line): """ split with the delim and trim each element, combine them to one with the delim again. """ delim = self._delim converted = delim.join(elem.strip(' ') for elem in line.split(delim)) return converted def get_delim(self): """ get a delimiter in use """ return self._delim class BaseProcessor(ABC): """ `BaseProcessor` process `Iterable` and return a list of result of `_process_element`. - implement `_process_element` in inherited class. 
""" def __init__(self, loader: BaseLoader, another_param: object): self._loader = loader self._another_param = another_param def process(self, elements: Iterable) -> list[Any]: data = self._loader.load() result = [] for element in data: output = self._process_element(element) result.append(output) # print(output) return result @abstractmethod def _process_element(self, element: object) -> Any: pass options = {filename: "c:/some/constant/path.txt"} T = TypeVar(T, bound=BaseLoader) class SimpleBaseProcessor(BaseProcessor, Generic[T]): """ `SimpleBaseProcessor` process a constant file, and return raw values in situ. - indicate `BaseLoader` class. (what data is expected as prerequisite) - override `_process_element` in inherited class. """ def __init__(self, another_param: object): loader_class = self.__orig_class__.__args__[0] loader = loader_class(options["filename"]) super().__init__(loader, another_param) def _process_element(self, element): """ nothing to do. just return raw value. (default) """ return element class LineCountPrefixedProcessor(SimpleBaseProcessor[RawLineLoader]): """ `LineCountPrefixedProcessor` return a list of line count prefixed line strings. """ def __init__(self, another_param, prefix = "{0} | "): super().__init__(another_param) self._prefix = prefix self._count = 0 def _process_element(self, element): processed = self._prefix.format(self._count) + element self._count += 1 return processed class ElementQuoteProcessor(SimpleBaseProcessor[ElementTrimmedLineLoader]): """ `ElementQuoteProcessor` process trimmed elements and quote each of them. """ def __init__(self, another_param, quote_char = "'"): super().__init__(another_param) self._quote_char = quote_char def _quote(self, target): q = self._quote return q + target + q def _process_element(self, element): delim = self._loader.get_delim() return delim.join(self._quote(elem) for elem in element.split(delim)) | When you write something like: from typing import Generic, TypeVar T = TypeVar('T') class MyGeneric(Generic[T]): def __init__(self): print(getattr(self, '__orig_class__', None)) x = MyGeneric[int]() You may see x.orig_class appear on some Python versions and not on others. That is an internal artifact of how the CPython interpreter (and the typing module) tracked Generic information in older versions. It was never specified in any PEP as a stable, user-facing API for βinspect your type parameters at runtime.β Since standard Python discards generic parameters at runtime, why not just store them yourself from typing import Generic, TypeVar, Type T = TypeVar('T') class MyGenericClass(Generic[T]): def __init__(self, t: Type[T]): self._t = t x - MyGenericClass(int) print(x._t) Or just pass the value class MyGenericClass(Generic[T]): def __init(self, examplevalue: T): self._t = type(examplevalue) | 1 | 1 |
79,391,458 | 2025-1-27 | https://stackoverflow.com/questions/79391458/surprising-lack-of-speedup-in-caching-numpy-calculations | I need to do a lot of calculations on numpy arrays, with some of the calculations being repeated. I had the idea of caching the results, but observe that In most cases, the cached version is slower than just carrying out all calculations. Not only is the cached version slower, line profiling also indicates that the absolute time spent on numpy operations increase, even though there are fewer of them. I can accept the first observation by some combined magic of numpy and the python interpreter, but the second observation makes no sense to me. I also see similar behavior when operating on scipy sparse matrices. The full application is complex, but the behavior can be reproduced by the following: import numpy as np from time import time def numpy_comparison(do_cache: bool, array_size: int, num_arrays: int, num_iter: int): # Create random arrays arrays: dict[int, np.ndarray] = {} for i in range(num_arrays): arrays[i] = np.random.rand(array_size) if do_cache: # Set up the cache if needed - I cannot use lru_cache or similar in practice cache: dict[tuple[int, int], np.ndarray] = {} for _ in range(num_iter): # Loop over random pairs of array, add, store if relevant i, j = np.random.randint(num_arrays, size=2) if do_cache and (i, j) in cache: a = cache[(i, j)] # a is not used further here, but would be in the real case else: a = arrays[i] + arrays[j] if do_cache: cache[(i, j)] = a Now running (with no multithreading) %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) gives the following results num_iter No caching With caching 100 10.3ms 13.7ms 1000 28.8ms 62.7ms 10000 225ms 392ms 100000 2.12s 1.62s Varying the array size and number of arrays give similar behavior. When num_iter is sufficiently high, retrieving from cache is most efficient, but in the regime relevant for my application, num_iter=1000 when the average chance of hitting a cached value is about 5%. Line profiling indicates this is not caused by working on cache, but on the addition of the arrays being slow. Can anyone give a hint of what is going on here? | TL;DR: page faults explain why the cache-based version is significantly slower than the one without a cache when num_iter is small. This is a side effect of creating many new Numpy arrays and deleted only at the end. When num_iter is big, the cache becomes more effective (as explained in the JonSG's answer). Using another system allocator like TCMalloc can strongly reduce this overhead. When you create many new temporary arrays, Numpy requests some memory to the system allocator which request relatively large buffers to the operating system (OS). The first touch to memory pages causes a page fault enabling the OS to actually setup the pages (actual page fault): the virtual pages are mapped to a physical one and the target pages are filled with zeros for security reasons (information should not leak from one process to another). When all arrays are deleted, Numpy free the memory space and the underlying memory allocator has a good chance to give the memory back to the OS (so other processes can use it). Page faults are very expensive because the CPU needs to switch from the user-land to kernel one (with elevated privileges). The kernel needs to setup many data structures and call a lot of functions to do that. 
Profiling & Analysis To prove page faults are the issue and how bad page faults are performance-wise, here is the low-level breakdown of the time when the cache is enabled (using perf): 13,75% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_add_AVX2 7,47% [kernel] [k] __irqentry_text_end 6,94% _mt19937.cpython-311-x86_64-linux-gnu.so [.] __pyx_f_5numpy_6random_8_mt19937_mt19937_double 3,63% [kernel] [k] clear_page_erms 3,22% [kernel] [k] error_entry 2,98% [kernel] [k] native_irq_return_iret 2,88% libpython3.11.so.1.0 [.] _PyEval_EvalFrameDefault 2,35% [kernel] [k] sync_regs 2,28% [kernel] [k] __list_del_entry_valid_or_report 2,27% [kernel] [k] unmap_page_range 1,62% [kernel] [k] __handle_mm_fault 1,45% [kernel] [k] __mod_memcg_lruvec_state 1,43% mtrand.cpython-311-x86_64-linux-gnu.so [.] random_standard_uniform_fill 1,10% [kernel] [k] do_anonymous_page 1,10% _mt19937.cpython-311-x86_64-linux-gnu.so [.] mt19937_gen 0,98% [kernel] [k] mas_walk 0,93% libpython3.11.so.1.0 [.] PyObject_GenericGetAttr 0,91% [kernel] [k] get_mem_cgroup_from_mm 0,89% [kernel] [k] get_page_from_freelist 0,79% libpython3.11.so.1.0 [.] _PyObject_Malloc 0,77% [kernel] [k] lru_gen_add_folio 0,72% [nvidia] [k] _nv039919rm 0,65% [kernel] [k] lru_gen_del_folio.constprop.0 0,63% [kernel] [k] blk_cgroup_congested 0,62% [kernel] [k] handle_mm_fault 0,59% [kernel] [k] __alloc_pages_noprof 0,57% [kernel] [k] lru_add 0,57% [kernel] [k] folio_batch_move_lru 0,56% [kernel] [k] __rcu_read_lock 0,52% [kernel] [k] do_user_addr_fault [...] (many others functions taking <0.52% each) As we can see, there are a lot of [kernel] functions called and most of them are due to page faults. For example, __irqentry_text_end and native_irq_return_iret (taking ~10% of the time) is caused by CPU interrupts triggered when the CPython process access to pages for the first time. clear_page_erms is the function clearing a memory page during a first touch. Several functions are related to the virtual to physical memory mapping (e.g. AFAIK ones related to the LRU cache). Note that DOUBLE_add_AVX2 is the internal native Numpy function actually summing the two arrays. In comparison, here is the breakdown with the cache disabled: 20,85% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] DOUBLE_add_AVX2 17,39% _mt19937.cpython-311-x86_64-linux-gnu.so [.] __pyx_f_5numpy_6random_8_mt19937_mt19937_double 5,69% libpython3.11.so.1.0 [.] _PyEval_EvalFrameDefault 3,35% mtrand.cpython-311-x86_64-linux-gnu.so [.] random_standard_uniform_fill 2,46% _mt19937.cpython-311-x86_64-linux-gnu.so [.] mt19937_gen 2,15% libpython3.11.so.1.0 [.] PyObject_GenericGetAttr 1,76% [kernel] [k] __irqentry_text_end 1,46% libpython3.11.so.1.0 [.] _PyObject_Malloc 1,07% libpython3.11.so.1.0 [.] PyUnicode_FromFormatV 1,03% libc.so.6 [.] printf_positional 0,93% libpython3.11.so.1.0 [.] _PyObject_Free 0,88% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] NpyIter_AdvancedNew 0,79% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] ufunc_generic_fastcall 0,77% [kernel] [k] error_entry 0,74% _bounded_integers.cpython-311-x86_64-linux-gnu.so [.] __pyx_f_5numpy_6random_17_bounded_integers__rand_int64 0,72% [nvidia] [k] _nv039919rm 0,69% [kernel] [k] native_irq_return_iret 0,66% [kernel] [k] clear_page_erms 0,55% libpython3.11.so.1.0 [.] PyType_IsSubtype 0,55% [kernel] [k] sync_regs 0,52% mtrand.cpython-311-x86_64-linux-gnu.so [.] __pyx_pw_5numpy_6random_6mtrand_11RandomState_31randint 0,48% [kernel] [k] unmap_page_range 0,46% libpython3.11.so.1.0 [.] 
PyDict_GetItemWithError 0,43% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] PyArray_NewFromDescr_int 0,40% libpython3.11.so.1.0 [.] _PyFunction_Vectorcall 0,38% [kernel] [k] __mod_memcg_lruvec_state 0,38% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] promote_and_get_ufuncimpl 0,37% libpython3.11.so.1.0 [.] PyDict_SetItem 0,36% libpython3.11.so.1.0 [.] PyObject_RichCompareBool 0,36% _multiarray_umath.cpython-311-x86_64-linux-gnu.so [.] PyUFunc_GenericReduction [...] (many others functions taking <0.35% each) There are clearly less kernel function called in the top 30 most expensive functions. We can still see the interrupt related functions, but note that this low-level profiling is global to my whole machine and so these interrupt-related function likely comes from other processes (e.g. perf itself, the graphical desktop environment, and Firefox running as I write this answer). There is another reason proving page faults are the main culprit: during my tests, the system allocator suddenly changed its behavior because I allocated many arrays before running the same commands, and this results in only few kernel calls (in perf) as well as very close timings between the two analyzed variants (with/without cache): # First execution: In [3]: %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 20.2 ms Β± 63.7 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10 loops each) 46.4 ms Β± 81.9 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10 loops each) # Second execution: In [4]: %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 19.8 ms Β± 43.4 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) 46.3 ms Β± 155 ΞΌs per loop (mean Β± std. dev. of 7 runs, 10 loops each) # After creating many Numpy arrays (not deleted since) with no change of `numpy_comparison`: In [95]: %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 18.4 ms Β± 15.4 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) 19.9 ms Β± 26.6 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) <----- We can clearly see that the overhead of using a cache is now pretty small. This also means the performance could be very different in a real-world application (because of a different allocator state), or even on different platforms. Solution to mitigate page faults You can use alternative system allocators like TCMalloc which requests a big chunk of memory to the OS at startup time so not to often pay page faults. Here are results with it (using the command line LD_PRELOAD=libtcmalloc_minimal.so.4 ipython): In [3]: %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 16.9 ms Β± 51.6 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) 19.5 ms Β± 29.9 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) The overhead related to the cache which was actually due the creation of many temporary arrays and more specifically page faults is now >10 times smaller! 
I think the remaining overhead is due to the larger memory working set as pointed out by NickODell in the comments (this causes more cache misses due to cache thrashing and more data to be loaded from the slow DRAM). Put shortly, the cache version is simply less cache friendly. Here are results with num_iter=100_000 with/without TCMalloc: # Default system allocator (glibc) In [97]: %timeit numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 1.31 s ± 4.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 859 ms ± 3.41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # TCMalloc allocator In [3]: %timeit -n 1 numpy_comparison(do_cache=False, array_size=10000, num_arrays=100, num_iter=num_iter) ...: %timeit -n 1 numpy_comparison(do_cache=True, array_size=10000, num_arrays=100, num_iter=num_iter) 1.28 s ± 13.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 774 ms ± 83.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) This behavior is expected: the cache starts to be useful with more hits. Note that TCMalloc makes the overall execution faster in most cases! | 10 | 10
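To check the page-fault explanation on your own machine (Linux/Unix only), the resource module can report the process's minor-fault counter around each run. This sketch reuses the numpy_comparison function from the question:

```python
import resource

def minor_page_faults() -> int:
    # Soft (minor) page faults of the current process so far; Unix/Linux only.
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

for do_cache in (False, True):
    before = minor_page_faults()
    numpy_comparison(do_cache=do_cache, array_size=10000, num_arrays=100, num_iter=1000)
    print(f"do_cache={do_cache}: {minor_page_faults() - before} minor page faults")
```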
79,392,330 | 2025-1-27 | https://stackoverflow.com/questions/79392330/is-there-an-add-element-to-a-set-method-returning-whether-the-element-was-actual | I would like to achieve in Python the following semantics which is present in Kotlin: myset = set() for elem in arr: if not myset.add(elem): return 'duplicate found' I am looking for a way to get a slight performance boost by 'hijacking' add-to-set operation by analysing the returned value. It should be better if compared to the following approach I already know in Python: myset = set() for elem in arr: if elem in myset: return 'duplicate found' myset.add(elem) Yet as far as I can observe add() method of set returns None no matter what the outcome of adding is. Is there such a method in Python3? Looking into the documentation I have not managed to find such a result returning add-to-set method yet. | No, and it has been rejected. | 1 | 1 |
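If you want the Kotlin-style return value anyway, a tiny wrapper gives the same effect, and for the plain duplicate check, comparing len(set(arr)) with len(arr) is the usual idiom. A sketch:

```python
def add_new(s: set, elem) -> bool:
    """Add elem to s; return True if it was not already present (like Kotlin's Set.add)."""
    before = len(s)
    s.add(elem)
    return len(s) != before

arr = [1, 2, 3, 2]
seen = set()
print(any(not add_new(seen, x) for x in arr))   # True -> a duplicate was found (stops at the first one)
print(len(set(arr)) != len(arr))                # same answer, usually the fastest check
```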
79,392,068 | 2025-1-27 | https://stackoverflow.com/questions/79392068/multiple-facet-plots-with-python | I have data from different locations and the following code to plot rainfall and river discharge for each location separately. import pandas as pd import matplotlib.pyplot as plt def hydrograph_plot(dates, rain, river_flow): # figure and subplots fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True, gridspec_kw={'height_ratios': [1, 2]}) # bar graph rainfall (inverse y axis) ax1.bar(dates, rain, color='blue', alpha=0.6, label='Chuva (mm)') ax1.set_ylabel('Rainfall (mm)', color='blue') ax1.set_ylim(max(rain), 0) # y axis inverted ax1.set_title('Hydrograph: Rainfall at ' + rain.name + ' and Flow at ' + river_flow.name) # Concatenate colname to the title ax1.legend(loc='upper left') ax1.grid(True, linestyle='--', alpha=0.5) # line graph for river flow ax2.plot(dates, river_flow, color='green', marker='o', label='NΓvel do Rio (m)') ax2.set_xlabel('Data') ax2.set_ylabel('StreamFlow (m)', color='green') ax2.legend(loc='upper left') ax2.grid(True, linestyle='--', alpha=0.5) # layout and show fig.autofmt_xdate() plt.tight_layout() plt.show() # Example data for multiple locations data = { 'Time': pd.date_range(start='2023-10-01', periods=24, freq='D'), # Time in hours 'Rainfall_Loc1': [0, 2, 5, 10, 8, 4, 2, 0, 0, 1, 3, 6, 9, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 1 'Streamflow_Loc1': [10, 12, 15, 50, 80, 70, 60, 50, 40, 35, 30, 25, 20, 18, 16, 14, 12, 11, 10, 10, 10, 10, 10, 10], # Streamflow for Location 1 'Rainfall_Loc2': [0, 1, 3, 7, 12, 10, 6, 3, 1, 0, 0, 0, 2, 4, 6, 8, 7, 5, 3, 1, 0, 0, 0, 0], # Rainfall for Location 2 'Streamflow_Loc2': [15, 18, 20, 60, 90, 85, 75, 65, 55, 50, 45, 40, 35, 30, 25, 20, 18, 16, 15, 15, 15, 15, 15, 15], # Streamflow for Location 2 'Rainfall_Loc3': [0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 10, 8, 6, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 3 'Streamflow_Loc3': [20, 22, 25, 70, 100, 95, 85, 75, 65, 60, 55, 50, 45, 40, 35, 30, 25, 22, 20, 20, 20, 20, 20, 20], # Streamflow for Location 3 'Rainfall_Loc4': [0, 5, 8, 0, 1, 3, 14, 6, 5, 20, 1, 10, 0, 16, 2, 21, 10, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 3 'Streamflow_Loc4': [20, 22, 25, 70, 100, 95, 85, 75, 65, 60, 55, 50, 45, 40, 35, 30, 25, 22, 20, 20, 20, 20, 20, 20] # Streamflow for Location 3 } df = pd.DataFrame(data) I would like to plot multiple graphs, as shown in the example of the image. 
| You can use subplots2grid import pandas as pd import matplotlib.pyplot as plt def hydrograph_plot(dates, rain, river_flow, posx, posy): # figure and subplots ax1 = plt.subplot2grid((6,2), (posy*3, posx), colspan=1, rowspan=1) ax2 = plt.subplot2grid((6,2), (posy*3+1, posx), colspan=1, rowspan=2, sharex=ax1) # bar graph rainfall (inverse y axis) ax1.bar(dates, rain, color='blue', alpha=0.6, label='Chuva (mm)') ax1.set_ylabel('Rainfall (mm)', color='blue') ax1.set_ylim(max(rain), 0) # y axis inverted ax1.set_title('Hydrograph: Rainfall at ' + rain.name + ' and Flow at ' + river_flow.name) # Concatenate colname to the title ax1.legend(loc='upper left') ax1.grid(True, linestyle='--', alpha=0.5) # line graph for river flow ax2.plot(dates, river_flow, color='green', marker='o', label='NΓvel do Rio (m)') ax2.set_xlabel('Data') ax2.set_ylabel('StreamFlow (m)', color='green') ax2.legend(loc='upper left') ax2.grid(True, linestyle='--', alpha=0.5) data = { 'Time': pd.date_range(start='2023-10-01', periods=24, freq='D'), # Time in hours 'Rainfall_Loc1': [0, 2, 5, 10, 8, 4, 2, 0, 0, 1, 3, 6, 9, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 1 'Streamflow_Loc1': [10, 12, 15, 50, 80, 70, 60, 50, 40, 35, 30, 25, 20, 18, 16, 14, 12, 11, 10, 10, 10, 10, 10, 10], # Streamflow for Location 1 'Rainfall_Loc2': [0, 1, 3, 7, 12, 10, 6, 3, 1, 0, 0, 0, 2, 4, 6, 8, 7, 5, 3, 1, 0, 0, 0, 0], # Rainfall for Location 2 'Streamflow_Loc2': [15, 18, 20, 60, 90, 85, 75, 65, 55, 50, 45, 40, 35, 30, 25, 20, 18, 16, 15, 15, 15, 15, 15, 15], # Streamflow for Location 2 'Rainfall_Loc3': [0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 10, 8, 6, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 3 'Streamflow_Loc3': [20, 22, 25, 70, 100, 95, 85, 75, 65, 60, 55, 50, 45, 40, 35, 30, 25, 22, 20, 20, 20, 20, 20, 20], # Streamflow for Location 3 'Rainfall_Loc4': [0, 5, 8, 0, 1, 3, 14, 6, 5, 20, 1, 10, 0, 16, 2, 21, 10, 0, 0, 0, 0, 0, 0, 0], # Rainfall for Location 3 'Streamflow_Loc4': [20, 22, 25, 70, 100, 95, 85, 75, 65, 60, 55, 50, 45, 40, 35, 30, 25, 22, 20, 20, 20, 20, 20, 20] # Streamflow for Location 3 } df = pd.DataFrame(data) fig=plt.figure() hydrograph_plot(df.Time, df.Rainfall_Loc1, df.Streamflow_Loc1, 0,0) hydrograph_plot(df.Time, df.Rainfall_Loc2, df.Streamflow_Loc2, 1,0) hydrograph_plot(df.Time, df.Rainfall_Loc3, df.Streamflow_Loc3, 0,1) hydrograph_plot(df.Time, df.Rainfall_Loc4, df.Streamflow_Loc4, 1,1) # layout and show fig.autofmt_xdate() plt.tight_layout() plt.show() (I do not argue that it is the best way to do it, with hydrograph_plot needing to be aware of how many total plots there are. Maybe total shape could be an argument. But that is the way that left most of your code unchanged) Note the sharex argument, which ensure that ax1 and ax2, even if they are, each time, just 2 among 8 plots, do share x-axis. | 2 | 1 |
79,387,290 | 2025-1-25 | https://stackoverflow.com/questions/79387290/get-click-to-not-expand-variables-in-argument | I have a simple Click app like this: import click @click.command() @click.argument('message') def main(message: str): click.echo(message) if __name__ == '__main__': main() When you pass an environment variable in the argument, it expands it: β Desktop python foo.py '$M0/.viola/2025-01-25-17-20-23-307878' M:/home/ramrachum/.viola/2025-01-25-17-20-23-307878 Note that I used single quotes above, so my shell is not expanding the environment variable, Click does. How do I get Click to not expand it? | The solution is to pass windows_expand_args=False when calling the main command. | 3 | 1 |
79,392,038 | 2025-1-27 | https://stackoverflow.com/questions/79392038/pandas-batch-update-account-string | My organization has account numbers that are comprised of combining multiple fields. The last field is always 4 characters (typically 0000) Org Account 01 01-123-0000 01 01-456-0000 02 02-789-0000 02 02-456-0000 03 03-987-0000 03 03-123-1234 I also have a dictionary mapping of how many characters the last component should be. MAP = {'01': 4, '02': 3, '03': 3} However there also special mappings for Org 03: D03_SPECIAL_MAP = {'0000': '012', '1234': '123'} My code to update the last component is: for i,r in df.iterrows(): updated = False # Keep track if we have updated this row # Split off last component from the rest of the account Acct, last_comp = r['Account'].rsplit('-',1) # Check if we need to update code length and the code length does not match if r['Org'] in MAP and len(last_comp) != MGMT_MAP[r['Org']]: df.at[i,'Account'] = '-'.join(Acct) + "-" + last_comp.zfill(MAP[r['Org']]) updated = True # Special mapping for Org 03 if r['Org'] =='03' and last_comp in D03_SPECIAL_MAP.keys(): df.at[i,'Account'] = '-'.join(Acct) + "-" + D03_SPECIAL_MAP[last_comp] updated = True if not updated: # Join Default if we have not hit either of he conditions above df.at[i,'Account'] = '-'.join(Acct) + "-" + last_comp The output of this will be: Org Account 01 01-123-0000 01 01-456-0000 02 02-789-000 02 02-456-000 03 03-987-012 03 03-123-123 My code works as expected except this process is a little slow to check every record. Is there a way to perform the same operation without using df.iterrows()? | Since working with strings is hardly vectorizable, just use a simple python function and a list comprehension: def replace(org, account): a, b = account.rsplit('-', maxsplit=1) if org == '03': return f'{a}-{D03_SPECIAL_MAP[b]}' return f'{a}-{b[:MAP[org]]}' df['Account'] = [replace(o, a) for o, a in zip(df['Org'], df['Account'])] Output: Org Account 0 01 01-123-0000 1 01 01-456-0000 2 02 02-789-000 3 02 02-456-000 4 03 03-987-012 5 03 03-123-123 | 1 | 2 |
79,389,136 | 2025-1-26 | https://stackoverflow.com/questions/79389136/how-to-create-a-python-module-in-c-that-multiprocessing-does-not-support | I am trying and failing to reproduce and understand a problem I saw where multiprocessing failed when using a python module written in C++. My understanding was that the problem is that multiprocessing needs to pickle the function it is using. So I made my_module.cpp as follows: #include <pybind11/pybind11.h> int add(int input_number) { return input_number + 10; } PYBIND11_MODULE(my_module, m) { m.doc() = "A simple module implemented in C++ to add 10 to a number."; m.def("add", &add, "Add 10 to a number"); } After pip install pybind11 I compiled with: c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) my_module.cpp -o my_module$(python3-config --extension-suffix) I can import my_module and it works as expected. I can test if it can be pickled with: import my_module import pickle # Use the add function print(my_module.add(5)) # Outputs: 15 # Attempt to pickle the module try: pickle.dumps(my_module) except TypeError as e: print(f"Pickling error: {e}") # Expected error which outputs Pickling error: cannot pickle 'module' object as expected. Now I tested multiprocessing and was surprising that it worked. I was expecting it to give a pickling error. import my_module from multiprocessing import Pool # A wrapper function to call the C++ add function def parallel_add(number): return my_module.add(number) if __name__ == "__main__": numbers = [1, 2, 3, 4, 5] try: # Create a pool of worker processes with Pool(processes=2) as pool: results = pool.map(parallel_add, numbers) print(results) # If successful, prints the results except Exception as e: print(f"Multiprocessing error: {e}") How can I make a Python module in C++ with pybind11 which fails with multiprocessing because of a pickling error? I am using Linux | I don't think your code tries to pickle a module as-is? If you redefine parallel_add to take a module as an argument, then use a partial to pass my_module into it, you can force Python to do that. 
import my_module from functools import partial from multiprocessing import Pool # Same wrapper, but now takes a module as an argument def parallel_add(module, number): return module.add(number) if __name__ == "__main__": numbers = [1, 2, 3, 4, 5] with Pool(processes=2) as pool: results = pool.map(partial(parallel_add, my_module), numbers) print(results) This throws the error you were expecting: Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/home/anerdw/stackoverflow/unpickling.py", line 13, in <module> results = pool.map(partial(parallel_add, my_module), numbers) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/pool.py", line 367, in map return self._map_async(func, iterable, mapstar, chunksize).get() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/pool.py", line 774, in get raise self._value File "/usr/lib/python3.11/multiprocessing/pool.py", line 540, in _handle_tasks put(task) File "/usr/lib/python3.11/multiprocessing/connection.py", line 205, in send self._send_bytes(_ForkingPickler.dumps(obj)) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) TypeError: cannot pickle 'module' object You can also get a pickling-related multiprocessing error much more quickly by cutting out the wrapper and trying to pickle the function directly. import my_module from multiprocessing import Pool if __name__ == "__main__": numbers = [1, 2, 3, 4, 5] with Pool(processes=2) as pool: results = pool.map(my_module.add, numbers) print(results) Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/home/anerdw/stackoverflow/unpickling.py", line 8, in <module> results = pool.map(my_module.add, numbers) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/pool.py", line 367, in map return self._map_async(func, iterable, mapstar, chunksize).get() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/pool.py", line 774, in get raise self._value File "/usr/lib/python3.11/multiprocessing/pool.py", line 540, in _handle_tasks put(task) File "/usr/lib/python3.11/multiprocessing/connection.py", line 205, in send self._send_bytes(_ForkingPickler.dumps(obj)) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) TypeError: cannot pickle 'PyCapsule' object | 1 | 3 |
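A quick way to see why the original multiprocessing example succeeds: a top-level Python function is pickled by reference (module name plus qualified name), so the C++ extension module itself is never serialized; each worker process simply re-imports it. A small sketch, assuming the compiled my_module from the question is importable:

```python
import pickle
import my_module

def parallel_add(number):
    return my_module.add(number)

payload = pickle.dumps(parallel_add)   # stores only a reference to the function
print(len(payload), payload[:40])      # a short byte string naming the module and function
print(pickle.loads(payload)(5))        # 15 -- unpickling re-imports and looks the name up
```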
79,381,851 | 2025-1-23 | https://stackoverflow.com/questions/79381851/whats-the-fastest-way-of-skipping-tuples-with-a-certain-structure-in-a-itertool | I have to process a huge number of tuples made by k integers, each ranging from 1 to Max_k. Each Max can be different. I need to skip the tuples where an element has reached is max value, in that case keeping only the tuple with "1" in the remaining position. The max is enforced by design, so it cannot be that some item is > of its max For example, if the max of the second element of a triple is 4, i need to keep (1,4,1) but skip (1,4,2) , (1,4,3) ... (2,4,1) etc. I am pretty sure I am missing a much faster way to do that. My typical scenario is tuples with 16 to 20 elements, with maxes in the 50-70 mark. What would be the recommended approach ? In Python, as a toy example with hardcoded Maxes (5,4,2), is the following: from itertools import * def filter_logic(y): if y[0]==5: if y[1] > 1 or y[2] >1: return True if y[1]==4: if y[0] > 1 or y[2] >1: return True if y[2]==2: if y[0] > 1 or y[1] >1: return True return False def tuples_all(max_list): my_iterables = [] for limit in max_list: my_iterables.append(range(1, limit+1)) return product(*my_iterables) def tuples_filtered(max_list): return filterfalse(filter_logic, tuples_all(max_list)) max_list = [5,4,2] print("Original list") for res in tuples_all(max_list): print(res) print("After filtering") for fil in tuples_filtered(max_list): print(fil) Output of the filtered tuples: After filtering (1, 1, 1) (1, 1, 2) (1, 2, 1) (1, 3, 1) (1, 4, 1) (2, 1, 1) (2, 2, 1) (2, 3, 1) (3, 1, 1) (3, 2, 1) (3, 3, 1) (4, 1, 1) (4, 2, 1) (4, 3, 1) (5, 1, 1) | Since you commented that order between tuples isn't important, we can simply produce the tuples with max value and then the tuples without max value: from itertools import * def tuples_direct(max_list): n = len(max_list) # Special case if 1 in max_list: yield (1,) * n return # Tuples with a max. for i, m in enumerate(max_list): yield (1,) * i + (m,) + (1,) * (n-i-1) # Tuples without a max. yield from product(*(range(1, m) for m in max_list)) max_list = [5,4,2] for tup in tuples_direct(max_list): print(tup) Output (Attempt This Online!): (5, 1, 1) (1, 4, 1) (1, 1, 2) (1, 1, 1) (1, 2, 1) (1, 3, 1) (2, 1, 1) (2, 2, 1) (2, 3, 1) (3, 1, 1) (3, 2, 1) (3, 3, 1) (4, 1, 1) (4, 2, 1) (4, 3, 1) | 2 | 0 |
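As a quick sanity check of the output size, this reuses tuples_direct from the answer: for three positions with max 50 each, the generator yields the 3 "maxed" tuples plus the 49**3 tuples in which no element is at its max.

```python
max_list = [50, 50, 50]
print(sum(1 for _ in tuples_direct(max_list)))   # 117652 == 3 + 49**3
```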
79,389,115 | 2025-1-26 | https://stackoverflow.com/questions/79389115/how-to-map-values-from-a-3d-tensor-to-a-1d-tensor-in-pytorch | I'm stuck with a Pytorch problem and could use some help: I've got two tensors: A 3D tensor (shape: i, j, j) with integer values from 0 to n A 1D tensor (shape: n) I need to create a new tensor that's the same shape as the first one (i, j, j), but where each value is replaced by the corresponding value from the second tensor. Like using the values in the first tensor as indices for the second one. Here's a quick example: import torch # My tensors big_tensor = torch.randint(0, 256, (10, 25, 25)) small_tensor = torch.rand(256) # What I'm trying to do result = magic_function(big_tensor, small_tensor) # How it should work print(big_tensor[0, 0, 0]) # Let's say this outputs 42 print(small_tensor[42]) # This might output 0.7853 print(result[0, 0, 0]) # This should also be 0.7853 I'm looking for a performant way to do this, preferably not using loops, as both tensors can be quite big. Is there an efficient Pytorch operation or method I can use for this? Any help would be awesome - thanks in advance! | This should work: small_tensor[big_tensor] Take note that the type of the big_tensor must be long/int. Edit: In response to the comment of @simon, I wrote a colab notebook that shows how this solution works without the need to perform any other operation. | 2 | 2 |
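A self-contained check of the indexing behaviour (torch.randint already produces int64 indices, and advanced indexing preserves the index tensor's shape):

```python
import torch

big_tensor = torch.randint(0, 256, (10, 25, 25))   # int64 values in [0, 256)
small_tensor = torch.rand(256)

result = small_tensor[big_tensor]
print(result.shape)                                          # torch.Size([10, 25, 25])
print(result[0, 0, 0] == small_tensor[big_tensor[0, 0, 0]])  # tensor(True)
```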
79,390,708 | 2025-1-27 | https://stackoverflow.com/questions/79390708/understanding-type-variance-in-python-protocols-with-generic-types | I'm trying to understand how type variance works with Python protocols and generics. My test cases seem to contradict what I expect regarding invariant, covariant, and contravariant behavior. Here's a minimal example demonstrating the issue: from typing import TypeVar, Protocol # Type variables T = TypeVar('T') T_co = TypeVar('T_co', covariant=True) T_contra = TypeVar('T_contra', contravariant=True) # Class hierarchy class Animal: pass class Dog(Animal): pass # Protocols class Feeder(Protocol[T]): def feed(self, animal: T) -> T: ... class Adopter(Protocol[T_co]): def adopt(self) -> T_co: ... class Walker(Protocol[T_contra]): def walk(self, animal: T_contra) -> None: ... # Implementations class AnimalFeeder: def feed(self, animal: Animal) -> Animal: ... class DogFeeder: def feed(self, animal: Dog) -> Dog: ... class AnimalAdopter: def adopt(self) -> Animal: ... class DogAdopter: def adopt(self) -> Dog: ... class AnimalWalker: def walk(self, animal: Animal) -> None: ... class DogWalker: def walk(self, animal: Dog) -> None: ... When testing type assignments, some cases behave differently than expected: # Test cases with expected vs actual behavior
feeder1: Feeder[Dog] = DogFeeder() # Expected ✅ Actual ✅ (exact match)
feeder2: Feeder[Dog] = AnimalFeeder() # Expected ❌ Actual ❌ (invariant)
feeder3: Feeder[Animal] = DogFeeder() # Expected ❌ Actual ✅ (Why does this work?)
adopter1: Adopter[Dog] = DogAdopter() # Expected ✅ Actual ✅ (exact match)
adopter2: Adopter[Dog] = AnimalAdopter() # Expected ❌ Actual ❌ (return type mismatch)
adopter3: Adopter[Animal] = DogAdopter() # Expected ✅ Actual ✅ (covariant, correct)
walker1: Walker[Dog] = DogWalker() # Expected ✅ Actual ✅ (exact match)
walker2: Walker[Dog] = AnimalWalker() # Expected ✅ Actual ❌ (Should work with contravariance?)
walker3: Walker[Animal] = DogWalker() # Expected ❌ Actual ✅ (Why does this work?)
Questions: Why does feeder3 type-check when it seemingly violates invariance? Why does walker2 fail when it should be valid with contravariance? Are these behaviors correct according to Python's type system, or is this a limitation in PyCharm's type checking? I'm using Python 3.10 and PyCharm 2024.3.1. | 1. feeder3 works because Python's structural typing checks method return type covariance (accepting Dog as Animal), but incorrectly ignores parameter contravariance (should reject Dog input for Animal parameter). This is a type checker limitation (PyCharm/mypy may differ). 2. walker2 fails due to insufficient contravariance support in some type checkers. AnimalWalker (handles supertype Animal) should satisfy Walker[Dog] (needs subtype input) via contravariance, but PyCharm doesn't recognize it. 3. Behaviors are partially incorrect - true variance rules (per PEP 544) would reject feeder3 and accept walker2. PyCharm's checker has limitations in enforcing variance for protocols, unlike stricter tools like mypy. | 2 | 4
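A stripped-down version of the walker2 case that can be fed to a stricter checker; under PEP 544's contravariance rules a conformant checker such as mypy is expected to accept the assignment, and at runtime nothing is enforced either way:

```python
from typing import Protocol, TypeVar

T_contra = TypeVar("T_contra", contravariant=True)

class Animal: ...
class Dog(Animal): ...

class Walker(Protocol[T_contra]):
    def walk(self, animal: T_contra) -> None: ...

class AnimalWalker:
    def walk(self, animal: Animal) -> None:
        print("walking", animal)

# Contravariance: something that can walk any Animal can in particular walk Dogs.
w: Walker[Dog] = AnimalWalker()
w.walk(Dog())
```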
79,386,410 | 2025-1-25 | https://stackoverflow.com/questions/79386410/scrapy-bench-errors-with-assertionerror-on-execution | I ran this command to install conda install -c conda-forge scrapy pylint autopep8 -y then I ran scrapy bench to get the below error. The same thing is happening on global installation via pip command. Please help as I can't understand the reason for this error scrapy bench 2025-01-25 13:52:30 [scrapy.utils.log] INFO: Scrapy 2.12.0 started (bot: scrapybot) 2025-01-25 13:52:30 [scrapy.utils.log] INFO: Versions: lxml 5.3.0.0, libxml2 2.13.5, cssselect 1.2.0, parsel 1.10.0, w3lib 2.2.1, Twisted 24.11.0, Python 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:06:27) [MSC v.1942 64 bit (AMD64)], pyOpenSSL 25.0.0 (OpenSSL 3.4.0 22 Oct 2024), cryptography 44.0.0, Platform Windows-11-10.0.26100-SP0 2025-01-25 13:52:31 [scrapy.addons] INFO: Enabled addons: [] 2025-01-25 13:52:31 [scrapy.extensions.telnet] INFO: Telnet Password: 1d038a25605956ac 2025-01-25 13:52:31 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.closespider.CloseSpider', 'scrapy.extensions.logstats.LogStats'] 2025-01-25 13:52:31 [scrapy.crawler] INFO: Overridden settings: {'CLOSESPIDER_TIMEOUT': 10, 'LOGSTATS_INTERVAL': 1, 'LOG_LEVEL': 'INFO'} 2025-01-25 13:52:32 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2025-01-25 13:52:32 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2025-01-25 13:52:32 [scrapy.middleware] INFO: Enabled item pipelines: [] 2025-01-25 13:52:32 [scrapy.core.engine] INFO: Spider opened 2025-01-25 13:52:32 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2025-01-25 13:52:32 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2025-01-25 13:52:32 [scrapy.core.scraper] ERROR: Spider error processing <GET http://localhost:8998?total=100000&show=20> (referer: None) Traceback (most recent call last): File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\utils\defer.py", line 327, in iter_errback yield next(it) ^^^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\utils\python.py", line 368, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\utils\python.py", line 368, in __next__ return next(self.data) ^^^^^^^^^^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync yield from 
iterable File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 379, in <genexpr> return (self._set_referer(r, response) for r in result) ^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync yield from iterable File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 57, in <genexpr> return (r for r in result if self._filter(r, spider)) ^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync yield from iterable File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 54, in <genexpr> return (r for r in result if self._filter(r, response, spider)) ^^^^^^ File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\core\spidermw.py", line 106, in process_sync yield from iterable File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\commands\bench.py", line 70, in parse assert isinstance(Response, TextResponse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError 2025-01-25 13:52:32 [scrapy.core.engine] INFO: Closing spider (finished) 2025-01-25 13:52:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 241, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 1484, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.140934, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2025, 1, 25, 8, 22, 32, 389327, tzinfo=datetime.timezone.utc), 'items_per_minute': None, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'response_received_count': 1, 'responses_per_minute': None, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/AssertionError': 1, 'start_time': datetime.datetime(2025, 1, 25, 8, 22, 32, 248393, tzinfo=datetime.timezone.utc)} 2025-01-25 13:52:32 [scrapy.core.engine] INFO: Spider closed (finished) | This is a bug on Scrapy introduced on 2.12.0. It's passing the wrong param to isinstance(). This function expects the first param to be the object to be verified (see the docs), but it's currently passing Response class, which leads to the AssertionError we can see in your logs: File "C:\Users\Risha\anaconda3\envs\scrapy\Lib\site-packages\scrapy\commands\bench.py", line 70, in parse assert isinstance(Response, TextResponse) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError I submitted a PR with a fix here replacing the Response class passed as param with the response object. The PR was merged, but a new version wasn't yet released. Therefore, to move forward, you can choose one of the options below: a) Clone the Scrapy repository and install it based on the latest master b) Downgrade your scrapy version to 2.11.2 c) Wait until Scrapy officially releases the fix (likely on 2.13 version) | 2 | 3 |
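The argument-order point is easy to reproduce with stand-in classes (these are toy classes mirroring Scrapy's Response/TextResponse hierarchy, not the real ones):

```python
class Response: ...
class TextResponse(Response): ...

response = TextResponse()
print(isinstance(response, TextResponse))   # True  -- an instance checked against a class
print(isinstance(Response, TextResponse))   # False -- the class object itself is not an instance,
                                            # which is why the assert in bench.py 2.12.0 always fails
```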
79,385,456 | 2025-1-24 | https://stackoverflow.com/questions/79385456/create-background-for-a-3d-and-2d-plots | Can someone help me create a layout for plots with the following structure: A CD A/B EF B GH where A,B are 3D plots (fig.add_subplot(241, projection='3d')) and C,D,E,F,G,H are regular 2D plots. A/B represents a shared space where half of plot A and half of plot B appear. | You could use a subplot_mosaic: f, grid = plt.subplot_mosaic('''\ ACD ACD AEF BEF BGH BGH''', per_subplot_kw={('A', 'B'): {'projection': '3d'}} ) plt.tight_layout() Then plot on grid['A']/grid['B']/grid['C']/... Output: If you need a more flexible, play with the gridspec: f, grid = plt.subplot_mosaic('''\ ACD ACD AEF BEF BGH BGH''', per_subplot_kw={('A', 'B'): {'projection': '3d'}}, gridspec_kw={'width_ratios':[3, 1, 1], 'height_ratios':[3, 1, 1, 1, 1, 1], }) plt.tight_layout() Output: | 1 | 3 |
79,389,718 | 2025-1-27 | https://stackoverflow.com/questions/79389718/python-queue-not-updated-outside-of-thread | I've created a Flask app that retrieves data from a queue that is updated in a separate thread. I'm not sure why the Queue is empty when I retrieve it from the Flask GET endpoint, and am a bit ignorant of what Queues being thread-safe is supposed to mean, since my example doesn't appear to reflect that. In the example below, the queue in the Flask @app.route('/measurements') measurements route is empty even though it's updated in the TCP message handler. If anyone can enlighten me I'd appreciate it. I am running this on Ubuntu with python3 in case that's relevant. from flask import Flask, render_template import socket from threading import Thread import os from wyze_sdk import Client from dotenv import load_dotenv from wyze_sdk.errors import WyzeApiError import time from queue import Queue load_dotenv() app = Flask(__name__) start_time = time.time() response = Client().login( email=os.environ['WYZE_EMAIL'], password=os.environ['WYZE_PASSWORD'], key_id=os.environ['WYZE_KEY_ID'], api_key=os.environ['WYZE_API_KEY'] ) client = Client(token=response['access_token']) HOST = '192.168.1.207' # Listen on all network interfaces PORT = 9000 # The same port as in your ESP8266 sketch PORT_UI = 7001 MIN_WATER_DIST = 20 # minimum distance from sensor to water in cm MAX_WATER_DIST = 45 # maximum distance from sensor to water in cm MAX_TIME_PLUG_ON = 600 # maximum amount of time plug should be on # Initialize state variables plug_on_time = None # Track when the plug was turned on measurements = Queue(maxsize=86400) @app.route('/measurements') def measurements_api(): current_time = time.time() recent_measurements = [m for m in list(measurements.queue) if current_time - m['timestamp'] <= 86400] return {'measurements': recent_measurements} # empty queue, no measurements returned # This function will handle incoming TCP messages def handle_tcp_connection(client_socket, client_address, measurements): try: data = client_socket.recv(1024) # Buffer size of 1024 bytes if data: distance_str = data.decode('utf-8') dist = int(distance_str) print(f"Received message: {distance_str}") timestamp = time.time() print(len(measurements.queue)) # prints the correct number of measurements measurements.get() measurements.put({'value': value, 'timestamp': timestamp}) client_socket.close() except Exception as e: print(f"Error: {e}") client_socket.close() # This function runs the TCP server in a separate thread def run_tcp_server(measurements): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server_socket: server_socket.bind((HOST, PORT)) server_socket.listen(5) print(f"Listening on {HOST}:{PORT} for TCP connections...") while True: client_socket, client_address = server_socket.accept() # Handle each incoming client connection in a new thread Thread(target=handle_tcp_connection, args=(client_socket, client_address, measurements)).start() # Start the TCP server in a separate thread tcp_server_thread = Thread(target=run_tcp_server, daemon=True, args=(measurements,)) tcp_server_thread.start() @app.route('/') def index(): return render_template('index.html') if __name__ == "__main__": app.run(host='0.0.0.0', port=PORT_UI, debug=True) | Queue is thread-safe but not process-shared. Could you please show your uvicorn command for running server? Also, I see you using debug=True. This command involves reloading, which can create two processes. 
I can suggest the following: run with debug=False, i.e. app.run(host='0.0.0.0', port=PORT_UI, debug=False), and confirm that a single process is serving requests by printing os.getpid() in the route handlers. | 2 | 2
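A minimal way to see the reloader effect described above (port and route names are illustrative): with debug=True the module-level print runs twice because Werkzeug's reloader starts a second process, and each process then holds its own Queue.

```python
import os
from flask import Flask

app = Flask(__name__)
print("startup pid:", os.getpid())     # printed twice when the debug reloader is active

@app.route('/pid')
def pid():
    return {"pid": os.getpid()}        # compare with the pid logged by the TCP handler thread

if __name__ == "__main__":
    app.run(port=7001, debug=True)
```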
79,387,822 | 2025-1-26 | https://stackoverflow.com/questions/79387822/how-to-set-major-x-ticks-in-matplotlib-boxplot-time-series | I'm working on creating a time series box plot that spans multiple years, and I want the x-axis to display only the month and year for the 1st of each month. I would like to limit the labels to every month, or maybe every couple of months. import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import matplotlib.dates as mdates import random Dates = pd.date_range(start="2020-01-01", end="2020-02-15", freq='D').date data = {'Date': [], 'Values': []} for d in Dates: data['Date'].extend([d] * 10) data['Values'].extend(random.sample(range(1, 101), 10)) df = pd.DataFrame(data) df['Date'] = pd.to_datetime(df['Date']) sns.boxplot(x='Date', y='Values', data=df) plt.xticks(rotation=45) plt.title('Time Series Box Plot') plt.ylabel('Values') plt.xlabel('Date') plt.tight_layout() plt.show() I tried adding this to the end, but for some reason it makes the year 1970. ax = plt.gca() ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y')) plt.show() result | You were almost there, just need one more instruction: import matplotlib.dates as mdates mdates.set_epoch('2020-01-01T00:00:00') It is because in Matplotlib the 0 date is the 01/01/1970. This instruction has to be put at the start of your code and if you are in Jupyter you will need to restart your kernel. | 2 | 2 |
79,387,857 | 2025-1-26 | https://stackoverflow.com/questions/79387857/django-5-1-usercreationform-wont-allow-empty-passwords | I'm upgrading a Django 3.0 app to 5.1 and have been moving slowly through each minor release. So far so good. However, once I went from Django 5.0 to 5.1, I saw changed behavior with my "Create New User" page which uses a UserCreationForm form that allows empty passwords. If no password is supplied, a random one is generated. Now, if I submit the form with an empty password I get "required field" errors on the password fields, even though they are both explicitly set as required=False. I saw there were UserCreationForm changes in Django 5.1.0 and 5.1.1. I tried using AdminUserCreationForm and setting the usable_password field to None, but it still won't allow empty passwords like before. Any ideas? Environment Python 3.12.8 Django 5.1.5 Crispy Forms 2.3 Simplified Code from django.contrib.auth.forms import AdminUserCreationForm from crispy_forms.helper import FormHelper class SignupForm(AdminUserCreationForm): # previously using UserCreationForm usable_password = None # Newly added # Form fields sharedaccountflag = forms.ChoiceField( label = 'Cuenta compartida', required = True ) # Constructor def __init__(self, *args, **kwargs): # Call base class constructor super(SignupForm, self).__init__(*args, **kwargs) # Set password fields as optional self.fields['password1'].required = False self.fields['password2'].required = False # Set form helper properties self.helper = FormHelper() self.helper.form_tag = False # Specify model and which fields to include in form class Meta: model = get_user_model() fields = ('password1', 'password2', 'sharedaccountflag') Screenshot Update I used Serhii's example and modified it so the normal validation is called when appropriate. I also changed back to use UserCreationForm: class SignupForm(UserCreationForm): # Override default validation to allow empty password (change in Django 5.1) def validate_passwords( self, password1_field_name = "password1", password2_field_name = "password2" ): # Store password values password1 = self.cleaned_data.get(password1_field_name) password2 = self.cleaned_data.get(password2_field_name) # Do nothing if passwords are not required and no value is provided if ( not self.fields[password1_field_name].required and not self.fields[password2_field_name].required and not password1.strip() and not password2.strip() ): pass # Call default validation if password is required OR a value is provided else: super().validate_passwords(password1_field_name, password2_field_name) | Yes. In new versions of Django, the source code has changed and the behaviour of the BaseUserCreationForm class has changed accordingly. The password1 and password2 fields are now created using the static method SetPasswordMixin.create_password_fields(), and they default to required=False. This can be easily checked here. But even though the fields are optional, the validate_passwords method is always called, in the clean method and checks that the fields are not empty. For example, when you call something like this form.is_valid(), clean will be called. 
If you need behaviour where empty passwords are allowed (given required=False), you can define a custom validate_passwords method like the one in the code below; this will allow you to create users with empty passwords:

from django.contrib.auth import forms
from django.core.exceptions import ValidationError


class CustomUserCreationForm(forms.UserCreationForm):
    def validate_passwords(
        self,
        password1_field_name: str = "password1",
        password2_field_name: str = "password2",
    ):
        def is_password_field_required_and_not_valid(field_name: str) -> bool:
            is_required = self.fields[field_name].required
            cleaned_value = self.cleaned_data.get(field_name)
            is_field_has_errors = field_name in self.errors
            return (
                is_required
                and not cleaned_value
                and not is_field_has_errors
            )

        if is_password_field_required_and_not_valid(password1_field_name):
            error = ValidationError(
                self.fields[password1_field_name].error_messages["required"],
                code="required",
            )
            self.add_error(password1_field_name, error)

        if is_password_field_required_and_not_valid(password2_field_name):
            error = ValidationError(
                self.fields[password2_field_name].error_messages["required"],
                code="required",
            )
            self.add_error(password2_field_name, error)

        password1 = self.cleaned_data.get(password1_field_name)
        password2 = self.cleaned_data.get(password2_field_name)
        if password1 != password2:
            error = ValidationError(
                self.error_messages["password_mismatch"],
                code="password_mismatch",
            )
            self.add_error(password2_field_name, error)

P.S. I'm not sure if that's the way it was intended; maybe there is just a mistake in validate_passwords, where the required flag is simply not taken into account. | 1 | 2
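To pair this with the behaviour described in the question ("if no password is supplied, a random one is generated"), a save() override along these lines could be added; this is an assumption layered on top of the answer's form, not part of it:

from django.utils.crypto import get_random_string


class SignupForm(CustomUserCreationForm):
    def save(self, commit=True):
        user = super().save(commit=False)
        if not self.cleaned_data.get("password1"):
            # No password entered: fall back to an unguessable random one
            user.set_password(get_random_string(32))
        if commit:
            user.save()
        return user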
79,388,211 | 2025-1-26 | https://stackoverflow.com/questions/79388211/pip-install-fails-looking-into-var-private-folders | pip install . fails, while python3 ./setup.py install succeeds. I am trying to compile and install a module that is purely local for now, written in C++. The name of the module is tcore. The module uses python C API and numpy C API. pip install complains it cannot find numpy which is imported by my setup.py, however the import is actually correct and the setup.py import is very trivial. numpy is correctly installed and can be imported: pierre@pierre-maxm3 tcore % pip install numpy Requirement already satisfied: numpy in /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages (1.26.2) pierre@pierre-maxm3 tcore % python3 Python 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy_path = numpy.get_include() >>> print(numpy_path) /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include Of course python3 setup.py install works and install my module properly. But pip install fails to load numpy. It looks into /var/private/folders directory that doesn't exist... this is pip 24.3.1. Any idea? What is the problem with pip? pierre@pierre-maxm3 tcore % pip install . DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330 Processing /Users/pierre/src/trade/t-bot/c/modules/tcore Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. 
β exit code: 1 β°β> [33 lines of output] 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] system paths: /Users/pierre/src/trade/t-bot/c/modules/tcore /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/site /Library/Frameworks/Python.framework/Versions/3.12/lib/python312.zip /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12 /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/lib-dynload /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/normal/lib/python3.12/site-packages /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages/setuptools/_vendor *******will import numpy Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "/private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires self.run_setup() File "/private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 522, in run_setup super().run_setup(setup_script=setup_script) File "/private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip-build-env-1m0z9fie/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 320, in run_setup exec(code, locals()) File "<string>", line 13, in <module> ModuleNotFoundError: No module named 'numpy' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
pierre@pierre-maxm3 tcore % ls -la /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip* zsh: no matches found: /private/var/folders/9g/wy6m2s4d705fpzyr78yx0s9w0000gn/T/pip* pierre@pierre-maxm3 tcore % pip setup.py install : pierre@pierre-maxm3 tcore % python3 setup.py install ['/Users/pierre/src/trade/t-bot/c/modules/tcore', '/Library/Frameworks/Python.framework/Versions/3.12/lib/python312.zip', '/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12', '/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages', '/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg'] 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] /Users/pierre/src/trade/t-bot/c/modules/tcore /Library/Frameworks/Python.framework/Versions/3.12/lib/python312.zip /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12 /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/lib-dynload /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg *******will import numpy done numpy root_path /Users/pierre/src/trade/t-bot/c/modules/tcore src_path /Users/pierre/src/trade/t-bot/c/modules/tcore main() root_path /Users/pierre/src/trade/t-bot/c/modules/tcore running install /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated. !! ******************************************************************************** Please avoid running ``setup.py`` directly. Instead, use pypa/build, pypa/installer or other standards-based tools. See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. ******************************************************************************** !! self.initialize_options() /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/setuptools/_distutils/cmd.py:66: EasyInstallDeprecationWarning: easy_install command is deprecated. !! ******************************************************************************** Please avoid running ``setup.py`` and ``easy_install``. Instead, use pypa/build, pypa/installer or other standards-based tools. See https://github.com/pypa/setuptools/issues/917 for details. ******************************************************************************** !! 
self.initialize_options() running bdist_egg running egg_info writing tcore.egg-info/PKG-INFO writing dependency_links to tcore.egg-info/dependency_links.txt writing top-level names to tcore.egg-info/top_level.txt dependency /Users/pierre/src/trade/t-bot/c/src/ticker_database/src/TickerDatabase.h won't be automatically included in the manifest: the path must be relative dependency /Users/pierre/src/trade/t-bot/c/src/ticker_database/src/Tickert.h won't be automatically included in the manifest: the path must be relative dependency /Users/pierre/src/trade/t-bot/c/src/logger/src/TmonLogger.h won't be automatically included in the manifest: the path must be relative reading manifest file 'tcore.egg-info/SOURCES.txt' writing manifest file 'tcore.egg-info/SOURCES.txt' installing library code to build/bdist.macosx-10.9-universal2/egg running install_lib running build_ext building 'tcore' extension clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c /Users/pierre/src/trade/t-bot/c/src/logger/src/TmonLogger.cpp -o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/logger/src/TmonLogger.o -O3 -std=c++23 -I/Users/pierre/src/trade/t-bot/c/src/ticker_database/src -I/Users/pierre/src/trade/t-bot/c/src/logger/src -I/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c /Users/pierre/src/trade/t-bot/c/src/ticker_database/src/Ticker.cpp -o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/Ticker.o -O3 -std=c++23 -I/Users/pierre/src/trade/t-bot/c/src/ticker_database/src -I/Users/pierre/src/trade/t-bot/c/src/logger/src -I/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c /Users/pierre/src/trade/t-bot/c/src/ticker_database/src/TickerDatabase.cpp -o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/TickerDatabase.o -O3 -std=c++23 -I/Users/pierre/src/trade/t-bot/c/src/ticker_database/src -I/Users/pierre/src/trade/t-bot/c/src/logger/src -I/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c /Users/pierre/src/trade/t-bot/c/src/ticker_database/src/day_offsets.cpp -o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/day_offsets.o -O3 -std=c++23 -I/Users/pierre/src/trade/t-bot/c/src/ticker_database/src -I/Users/pierre/src/trade/t-bot/c/src/logger/src -I/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g 
-I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c tcore.cpp -o build/temp.macosx-10.9-universal2-cpython-312/tcore.o -O3 -std=c++23 -I/Users/pierre/src/trade/t-bot/c/src/ticker_database/src -I/Users/pierre/src/trade/t-bot/c/src/logger/src -I/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/numpy/core/include clang++ -bundle -undefined dynamic_lookup -arch arm64 -arch x86_64 -g build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/logger/src/TmonLogger.o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/Ticker.o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/TickerDatabase.o build/temp.macosx-10.9-universal2-cpython-312/Users/pierre/src/trade/t-bot/c/src/ticker_database/src/day_offsets.o build/temp.macosx-10.9-universal2-cpython-312/tcore.o -o build/lib.macosx-10.9-universal2-cpython-312/tcore.cpython-312-darwin.so -std=c++23 creating build/bdist.macosx-10.9-universal2/egg copying build/lib.macosx-10.9-universal2-cpython-312/tcore.cpython-312-darwin.so -> build/bdist.macosx-10.9-universal2/egg creating stub loader for tcore.cpython-312-darwin.so byte-compiling build/bdist.macosx-10.9-universal2/egg/tcore.py to tcore.cpython-312.pyc creating build/bdist.macosx-10.9-universal2/egg/EGG-INFO copying tcore.egg-info/PKG-INFO -> build/bdist.macosx-10.9-universal2/egg/EGG-INFO copying tcore.egg-info/SOURCES.txt -> build/bdist.macosx-10.9-universal2/egg/EGG-INFO copying tcore.egg-info/dependency_links.txt -> build/bdist.macosx-10.9-universal2/egg/EGG-INFO copying tcore.egg-info/top_level.txt -> build/bdist.macosx-10.9-universal2/egg/EGG-INFO writing build/bdist.macosx-10.9-universal2/egg/EGG-INFO/native_libs.txt zip_safe flag not set; analyzing archive contents... 
__pycache__.tcore.cpython-312: module references __file__ creating 'dist/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg' and adding 'build/bdist.macosx-10.9-universal2/egg' to it removing 'build/bdist.macosx-10.9-universal2/egg' (and everything under it) Processing tcore-1.0.0-py3.12-macosx-10.9-universal2.egg removing '/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg' (and everything under it) creating /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg Extracting tcore-1.0.0-py3.12-macosx-10.9-universal2.egg to /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages Adding tcore 1.0.0 to easy-install.pth file Installed /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tcore-1.0.0-py3.12-macosx-10.9-universal2.egg Processing dependencies for tcore==1.0.0 Finished processing dependencies for tcore==1.0.0 pierre@pierre-maxm3 tcore % Here is my setup.py: from setuptools import setup, Extension import os import sys import site print(sys.version) print("system paths:") for p in sys.path: print(p) print("\n*******will import numpy") import numpy print("done numpy") root_path = os.path.abspath(".") print(f"root_path {root_path}") p = root_path # look for top path of the .git repo while ".git" not in os.listdir(p): p = os.path.dirname(p) src_path = os.path.join(p, "c", "src") print(f"src_path {root_path}") src_files = ["tcore.cpp",\ os.path.join(src_path, "ticker_database", "src", "TickerDatabase.cpp"),\ os.path.join(src_path, "ticker_database", "src", "day_offsets.cpp"),\ os.path.join(src_path, "ticker_database", "src", "Ticker.cpp"),\ os.path.join(src_path, "logger", "src", "TmonLogger.cpp"),\ ] h_files = [os.path.join(src_path, "ticker_database", "src", "TickerDatabase.h"),\ os.path.join(src_path, "ticker_database", "src", "Tickert.h"),\ os.path.join(src_path, "logger", "src", "TmonLogger.h"),\ ] sp = site.getsitepackages()[0] numpy_path = numpy.get_include() include_path = [os.path.join(src_path, "ticker_database", "src"),\ os.path.join(src_path, "logger", "src"), numpy_path] include_options = [f"-I{p}" for p in include_path] compile_options = ["-O3", "-std=c++23"] compile_options += include_options def main(): print(f"main() root_path {root_path}") src_path = "../" setup(name="tcore", version="1.0.0", description="Python interface for tcore C++", author="Pierre Vandwalle", include_package_data=True, ext_modules=[Extension("tcore", src_files, extra_compile_args=compile_options, extra_link_args=["-std=c++23"], depends=h_files, )], setup_requires=['numpy'] ) if __name__ == "__main__": main() Added pyproject.toml: [project] name = "tcore" dependencies = [ "numpy==1.26.2", ] | Modern pip uses build isolation, it uses a transient virtual env to build a wheel and then installs the wheel into the target environment; this transient virtual env is your absent temporary directory; pip removes it after success or failure so you cannot find it. There are two ways to work around the problem: Install numpy into the current environment and install the source code without build isolation: pip install numpy pip install --no-build-isolation . Create pyproject.toml and declare numpy as a build-time dependency: [build-system] build-backend = "setuptools.build_meta" requires = ["setuptools", "numpy==1.26.2"] | 2 | 1 |
79,387,073 | 2025-1-25 | https://stackoverflow.com/questions/79387073/unionlistnode-none-vs-optionallistnode-vs-optionallistnode | It seems we can use the following types for hinting LinkedLists in Python: Union[ListNode, None] or Union['ListNode', None] Optional[ListNode] Optional['ListNode'] ListNode | None ... Which of these should we prefer? Feel free to share any other relevant insights as well. Attempt Here we simply want to merge K sorted LinkedLists: from typing import Union, List, Any, Optional import unittest import gc class ListNode: def __init__(self, val: int = 0, next: Union['ListNode', None] = None): self.val = val self.next = next class Solution: def mergeKLists(self, linkedlists: Optional[List[ListNode]]) -> Union[ListNode, None]: if not linkedlists: return None interval = 1 while interval < len(linkedlists): for i in range(0, len(linkedlists) - interval, interval << 1): linkedlists[i] = self.merge_two_linkedlists(linkedlists[i], linkedlists[i + interval]) interval <<= 1 return linkedlists[0] def merge_two_linkedlists(self, l1: Union[ListNode, None], l2: Union[ListNode, None]) -> Union[ListNode, None]: sentinel = curr = ListNode() while l1 and l2: if l1.val <= l2.val: curr.next = l1 l1 = l1.next else: curr.next = l2 l2 = l2.next curr = curr.next if l1: curr.next = l1 else: curr.next = l2 return sentinel.next def main_test_merge_k_linkedlists(test_case_class): suite = unittest.TestLoader().loadTestsFromTestCase(test_case_class) runner = unittest.TextTestRunner(verbosity=2, failfast=1) return runner.run(suite) class TestMergeKLists(unittest.TestCase): def test_merge_k_sorted_linkedlists(self): l1 = ListNode(10) l1.next = ListNode(20) l2 = ListNode(40) l2.next = ListNode(50) l3 = ListNode(80) l3.next = ListNode(150) self.assertEqual(Solution().mergeKLists([l1, l2, l3]), l1) l1 = l2 = l3 = None gc.collect() if __name__ == '__main__': main_test_merge_k_linkedlists(TestMergeKLists) | typing.Optional[T] is just shorthand for typing.Union[T, None]. Would always prefer the former to the latter for its succinctness. Union is of course still useful when using it with something else than None. After Python 3.10, a union of types can simply be written with the | operator (Optional[T] becomes T | None). So nowadays it's unnecessary to import/use either Union or Optional. Also, as of Python 3.9, the built-in collection types support the subscripting [] operator, so a generic typing.List[T] can now just be written as list[T]. The reason for sometimes needing to have a type written in quotes, is that if the type does not exist yet (e.g. using the class as a type within its own body), the string format can still be understood by type checkers and it doesn't cause a runtime NameError. This stringification need can be avoided by using the from __future__ import annotations statement at the top of a module. That statement explicitly opts you into PEP 563 β Postponed Evaluation of Annotations. (That PEP has actually been superseded PEP 649 β Deferred Evaluation Of Annotations Using Descriptors, and it's coming "mandatory" in Python 3.14.) | 3 | 5 |
79,387,911 | 2025-1-26 | https://stackoverflow.com/questions/79387911/how-to-make-values-into-rows-instead-of-columns-when-using-pivot-table-in-pandas | Say I have this data frame: import pandas as pd x = pd.DataFrame([[1, 'step', 'id', 22, 33], [2, 'step', 'id', 55, 66]], columns=['time', 'head_1', 'head_2', 'value_1', 'value_2']) print(x) time head_1 head_2 value_1 value_2 0 1 step id 22 33 1 2 step id 55 66 Then I use pivot table like below print(x.pivot_table(values=['value_1', 'value_2'], columns='time', index=['head_1', 'head_2'])) value_1 value_2 time 1 2 1 2 head_1 head_2 step id 22 55 33 66 However, I really want to have value_1 and value_2 in rows instead of columns like below (a new header as head_3). That is, put value_1 and value_2 in rows and only time as column. How do I do that? time 1 2 head_1 head_2 head_3 step id value_1 22 55 step id value_2 33 66 | One straightforward way is to melt your dataframe so value_1 and value_2 become labels in single column and then pivot on that. Like: import pandas as pd x = pd.DataFrame( [ [1, 'step', 'id', 22, 33], [2, 'step', 'id', 55, 66] ], columns=['time','head_1', 'head_2', 'value_1', 'value_2'] ) melted = x.melt( id_vars=['time','head_1', 'head_2'], value_vars=['value_1','value_2'], var_name='head_3', value_name='val' ) result = melted.pivot_table( index=['head_1','head_2','head_3'], columns='time', values='val', aggfunc='first' ) Youβll get: time 1 2 head_1 head_2 head_3 step id value_1 22 55 value_2 33 66 This places value_1 and `value_2 in rows as you wanted with only time going across columns | 1 | 3 |
79,387,755 | 2025-1-26 | https://stackoverflow.com/questions/79387755/how-to-resolve-incompatible-types-in-assignment-when-converting-dictionary-val | I am working with a dictionary in Python where a key ("expiryTime") initially holds a str value in ISO 8601 format (e.g., "2025-01-23T12:34:56"). At some point in my code, I convert this string into a datetime object using datetime.strptime. However, I encounter a mypy error during type checking: error: Incompatible types in assignment (expression has type "datetime", target has type "str") [assignment] Here is a simplified version of the code: from datetime import datetime # Initial dictionary with string value token_dict: dict[str, str] = {"expiryTime": "2025-01-23T12:34:56"} # Convert the string to datetime expiry_time_dt = datetime.strptime(token_dict["expiryTime"], "%Y-%m-%dT%H:%M:%S") token_dict["expiryTime"] = expiry_time_dt # Error here: incompatible types I understand that mypy complains because the dictionary's value was initially declared as a str, and assigning a datetime object violates the declared type. However, I need to store the datetime object for further processing in my code. My Question: What is the best way to handle this situation while maintaining type safety with mypy? Should I: Suppress the type-checking error with # type: ignore? Refactor my code to use a Union[str, datetime] type for the dictionary values, despite the strptime error? Utilize an alternate approach I may not be aware of? The goal is to ensure the code remains type-safe while handling the necessary conversions between str and datetime. Any advice or recommendations for improving this workflow would be appreciated! What I Have Tried: Suppressed the Error: I used # type: ignore to bypass the issue temporarily: token_dict["expiryTime"] = expiry_time_dt # type: ignore While this works, it's not a clean or type-safe solution. Using Union[str, datetime]: I updated the type hint of the dictionary: token_dict: dict[str, Union[str, datetime]] = {"expiryTime": "2025-01-23T12:34:56"} This was the only viable solution until I encountered the following mypy error: error: Argument 1 to "strptime" of "datetime" has incompatible type "Union[str, datetime]"; expected "str" [arg-type] The error occurs when I try to pass a Union[str, datetime] value to strptime, which expects a str. This creates additional complexity because mypy requires explicit type checks for every access to token_dict["expiryTime"], making it harder to maintain type correctness. Conversion to str for Mocking: When mocking requests that interact with this dictionary, I must ensure the value is serialized back to a string (since datetime is not JSON-serializable). This back-and-forth conversion is unavoidable, and I need to ensure type correctness in my code. Problem Context (for Reference): The key "expiryTime" is part of a token dictionary that represents metadata for authentication. When mocking requests, I serialize the dictionary using JSON, which does not support datetime objects. This necessitates the conversion of datetime values back to strings before serialization. The back-and-forth conversions, combined with type annotations, have made this problem challenging to address cleanly. | For your options... 
You probably don't want to suppress the error, it's there for a reason :) The Union type doesn't enforce that your dictionary is all of one type or another, meaning it is hard to consume that variable and having to add handling for both types [I might need more context on your mocking to fully understand what you're trying to do] I recommend not mutating the object at all, and processing it into a new dictionary: new_token_dict: dict[str, datetime] = {k: datetime.strptime(v, "%Y-%m-%dT%H:%M:%S") for k,v in token_dict.items()} Mutating the variable (and even worse, its type), adds extra complexity to your code since you now need to handle that variable differently depending on whether you've performed your manipulation or not. You've lost the benefits of having the static types. By creating a separate variable, you can keep the concepts separate with their own purpose. | 2 | 3 |
79,387,246 | 2025-1-25 | https://stackoverflow.com/questions/79387246/how-to-loop-through-all-distinct-triplets-of-an-array-such-that-they-are-of-the | As stated above, I need to efficiently count the number of distinct triplets of the form (a, b, b). In addition, the triplet is only valid if and only if it can be formed by deleting some integers from the array, only leaving behind that triplet in that specific ordering. What this is saying is that the triplets need to be in chronological order, I believe, but don't have to consist of consecutive elements. The solution needs to be really efficient as N (the length of the array) can go upto 10^6 (or a million). For example, if the array was [5, 6, 7, 3, 3, 3], then the answer would be 3 as the triplets would be: (5, 3, 3), (6, 3, 3), and (7, 3, 3). This was my first brute force (just to start off, O(n^3)): n = int(input()) arr = list(map(int, input().split())) ans = set() for i in range(n): for j in range(i + 1, n): if arr[i] != arr[j]: for k in range(j + 1, n): if arr[j] == arr[k]: ans.add((arr[i], arr[j], arr[k])) print(len(ans)) Then, I unsuccessfully tried optimizing this to an O(n^2), which is still too slow, but I can't even seem to get this right: def solve(): n = int(input()) arr = list(map(int, input().split())) freq = Counter(arr) ans = set() for a in freq: if freq[a] < 1: continue for b in freq: if b != a and freq[b] >= 2: ans.add((a, b, b)) return len(ans) print(solve()) I can't fix the logic for the O(n^2) and optimize this further to fully solve the problem under the given constraints. Assistance would be much appreciated. | At the second-to-last occurrence of each b-value, add the number of different values that came before it. Takes about 1.5 seconds for array length 10^6. from collections import Counter def linear(arr): ctr = Counter(arr) A = set() result = 0 for b in arr: if ctr[b] == 2: result += len(A) - (b in A) ctr[b] -= 1 A.add(b) return result Testing your small example and five larger arrays: import random from time import time def original(arr): n = len(arr) ans = set() for i in range(n): for j in range(i + 1, n): if arr[i] != arr[j]: for k in range(j + 1, n): if arr[j] == arr[k]: ans.add((arr[i], arr[j], arr[k])) return len(ans) def test(arr): expect = original(arr) result = linear(arr) print(result == expect, expect, result) # Correctness test([5, 6, 7, 3, 3, 3]) for _ in range(5): test(random.choices(range(100), k=100)) # Speed n = 10**6 arr = random.choices(random.sample(range(10**9), n), k=n) t = time() print(linear(arr)) print(time() - t) Sample output (Attempt This Online!): True 3 3 True 732 732 True 1038 1038 True 629 629 True 754 754 True 782 782 80414828386 1.4968228340148926 | 3 | 0 |
79,386,829 | 2025-1-25 | https://stackoverflow.com/questions/79386829/how-to-provide-tomllib | Since Python 3.11 we are able to use the builtin library tomllib; before that we had access to the third-party library tomli and a few others. I have not analyzed both packages deeply, but came to the conclusion that I am able to replace tomli with tomllib for my purposes. My issue with the situation: how to handle the change while also supporting down to 3.9? Is there a solution to provide different dependencies in pyproject.toml for different versions of Python (with additions to codebase changes accordingly)? This question could probably be generalized, but I think the example gives a good picture of the general issue I face. | You can use PEP 496 - Environment Markers and PEP 508 - Dependency specification for Python Software Packages; they're usable in setup.py, setup.cfg, pyproject.toml, requirements.txt. In particular see PEP 631 - Dependency specification in pyproject.toml for pyproject.toml:

[project]
dependencies = [
    'tomli ; python_version < "3.11"'
]

After installing or skipping tomli you test the presence of tomllib/tomli in the code:

import sys

try:
    import tomllib
except ImportError:
    try:
        import tomli
    except ImportError:
        sys.exit('Error: This program requires either tomllib or tomli but neither is available') | 2 | 1
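A common follow-up pattern (an addition, not part of the answer) is to alias the backport so the rest of the code only ever sees one name; tomllib and tomli expose the same load/loads API and both require the file to be opened in binary mode:

import sys

try:
    import tomllib  # Python 3.11+
except ImportError:
    try:
        import tomli as tomllib  # backport for Python 3.9 / 3.10
    except ImportError:
        sys.exit('Error: This program requires either tomllib or tomli but neither is available')

with open("pyproject.toml", "rb") as f:
    config = tomllib.load(f)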
79,379,114 | 2025-1-22 | https://stackoverflow.com/questions/79379114/performance-optimization-for-minimax-algorithm-in-tic-tac-toe-with-variable-boar | I'm implementing a Tic-Tac-Toe game with an AI using the Minimax algorithm (with Alpha-Beta Pruning) to select optimal moves. However, I'm experiencing performance issues when the board size increases or the number of consecutive marks required to win (k) grows. The current implementation works fine for smaller board sizes (e.g., 3x3), but as I scale up the board size to larger grids (e.g., 8x12), the performance drops significantly, especially with the depth of the search. The algorithm is running too slowly on larger boards (e.g., 6x6 or 8x12) with more complex win conditions (e.g., 4 in a row, 5 in a row). import random import time # Check if a player has won def check_win(board, player, k): rows = len(board) cols = len(board[0]) # Check horizontally, vertically, and diagonally for r in range(rows): for c in range(cols): if c <= cols - k and all(board[r][c+i] == player for i in range(k)): # Horizontal return True if r <= rows - k and all(board[r+i][c] == player for i in range(k)): # Vertical return True if r <= rows - k and c <= cols - k and all(board[r+i][c+i] == player for i in range(k)): # Diagonal return True if r >= k - 1 and c <= cols - k and all(board[r-i][c+i] == player for i in range(k)): # Anti-diagonal return True return False # Heuristic function to evaluate the board state def evaluate_board(board, player, k): # Check if the player has already won if check_win(board, player, k): return 1000 # Win elif check_win(board, 3-player, k): # Opponent return -1000 # Loss return 0 # No winner yet # Minimax algorithm with Alpha-Beta Pruning def minimax(board, depth, alpha, beta, is_maximizing_player, player, k): # Evaluate the current board state score = evaluate_board(board, player, k) if score == 1000 or score == -1000: return score if depth == 0: return 0 # Depth limit reached if is_maximizing_player: best = -float('inf') for r in range(len(board)): for c in range(len(board[0])): if board[r][c] == 0: # Empty cell board[r][c] = player # Make move best = max(best, minimax(board, depth-1, alpha, beta, False, player, k)) board[r][c] = 0 # Undo move alpha = max(alpha, best) if beta <= alpha: break return best else: best = float('inf') for r in range(len(board)): for c in range(len(board[0])): if board[r][c] == 0: # Empty cell board[r][c] = 3 - player # Opponent's move best = min(best, minimax(board, depth-1, alpha, beta, True, player, k)) board[r][c] = 0 # Undo move beta = min(beta, best) if beta <= alpha: break return best # Function to find the best move for the AI def best_move(board, player, k): best_val = -float('inf') best_move = (-1, -1) for r in range(len(board)): for c in range(len(board[0])): if board[r][c] == 0: # Empty cell board[r][c] = player # Make move move_val = minimax(board, 4, -float('inf'), float('inf'), False, player, k) # Depth 4 for searching board[r][c] = 0 # Undo move if move_val > best_val: best_move = (r, c) best_val = move_val return best_move # Function to play the game def play_game(board_size=(3, 3), k=3): board = [[0 for _ in range(board_size[1])] for _ in range(board_size[0])] player = 1 # 1 - X, 2 - O while True: # Display the board for row in board: print(' '.join(str(x) if x != 0 else '.' 
for x in row)) print() # Check if someone has won if check_win(board, player, k): print(f"Player {player} wins!") break # AI's move if player == 1: row, col = best_move(board, player, k) print(f"AI (X) move: {row}, {col}") else: # Simple random move for player O row, col = random.choice([(r, c) for r in range(board_size[0]) for c in range(board_size[1]) if board[r][c] == 0]) print(f"Player O move: {row}, {col}") board[row][col] = player player = 3 - player # Switch player # Check for a draw if all(board[r][c] != 0 for r in range(board_size[0]) for c in range(board_size[1])): print("It's a draw!") break # Speed test function def test_speed(board_size=(3, 3), k=3): start_time = time.time() play_game(board_size, k) print(f"Game duration: {time.time() - start_time} seconds") # Example: play on a 8x8 board, with 4 in a row to win test_speed((8,8), 4) I tried limiting the depth of the Minimax search to reduce the number of possibilities being evaluated. For example, I reduced the search depth to 2 for larger boards: move_val = minimax(board, depth=2, alpha=-float('inf'), beta=float('inf'), is_maximizing_player=False, player=player, k=k) However, this only partially mitigates the issue, and the AI's decision-making still seems too slow. Also i explored parallelizing the move evaluation using the ThreadPoolExecutor from Pythonβs concurrent.futures module: from concurrent.futures import ThreadPoolExecutor def parallel_minimax(board, depth, alpha, beta, is_maximizing_player, player, k): with ThreadPoolExecutor() as executor: futures = [] for r in range(len(board)): for c in range(len(board[0])): if board[r][c] == 0: board[r][c] = player futures.append(executor.submit(minimax, board, depth-1, alpha, beta, False, player, k)) board[r][c] = 0 # Undo move for future in futures: result = future.result() # [process the result] This approach improved parallelism, but it still does not solve the underlying issue of time complexity for large boards. Please help to make the algorithm as fast as possible | Minimax, even with alpha-beta pruning, will have to look at an exponentially growing number of states. For larger board sizes this will mean you can only perform shallow searches. I would suggest to switch to the Monte Carlo Search algorithm. It uses random sampling, making decisions whether to explore new branches in the search tree or to deepen existing ones. You can check out the Wikipedia page on Monte Carlo tree search for more information. You mention 4-in-a-row, but realise that on large boards (like 8x12), that is an easy win for the first player. Below is an implementation I made for answering your question. It defines a TicTacToe class with methods like best_move, play_row_col, ... But it inherits from a more generic class which provides the core Monte Carlo search functionality with its mc_search method. That superclass has no knowledge of the game logic; it only assumes two players and that it is turn-based with at each turn a finite number of moves. It will only refer to these moves with a sequential number, and will depend on the subclass methods to perform the corresponding moves. It is responsible for driving the search directions. The TicTacToe class does not have search logic. It depends on the superclass for that. 
Although it is not necessary, I decided to "help" find winning lines more effectively, and still add some logic in TicTacToe that narrows the list of moves that the algorithm should consider, in two ways: Only moves that are neighboring (possibly diagonally) occupied cells are considered. For the very first move only the center move is considered. This is a limitation that you might consider too strong, as in the beginning of the game on a large board, strong players may prefer to place their pieces further away from the other pieces. Still, I found that this restriction for the search algorithm still allows for reasonable good play (it's all relative to what strength you expect). If there is a winning move for the player who's turn it is, this is detected, and only that move will be in the move list. If the last played piece created a "threat" to win on their next move, this also leads to a move list with just one move for the opponent, since they will want to defend. These restrictions are only applicable for the search algorithm, not for the human player (of course). As on larger boards the game may need a lot of moves to come to an end, I also added a parameter to limit the search depth in the Monte Carlo rollout phase, and consider the outcome a draw when there is no win within that number of moves counting from the start of the rollout. It could be set to 20 or 30 for example. Here is the code: from __future__ import annotations from math import log from time import perf_counter_ns from random import shuffle, randrange from abc import abstractmethod from typing import Callable from enum import StrEnum from copy import deepcopy # Helper function to get best list entry based on callback function def best_index(lst: [any], evaluate: Callable[[any], int]) -> int: results = list(map(evaluate, lst)) return results.index(max(results)) # Rather generic, abstract class for 2-player game class MonteCarloGame: class States(StrEnum): FIRST_PLAYERS_TURN = "X" SECOND_PLAYERS_TURN = "O" FIRST_PLAYER_WON = "Player X won" SECOND_PLAYER_WON = "Player O won" DRAW = "It's a draw" TerminalStates = (States.FIRST_PLAYER_WON, States.DRAW, States.SECOND_PLAYER_WON) TurnStates = (States.FIRST_PLAYERS_TURN, States.SECOND_PLAYERS_TURN) # Node in the Monte Carlo search tree has the sum of scores over a number of games: class Node: def __init__(self, num_children: int, last_player: MonteCarloGame.States): self.last_player = last_player self.score = 0 self.max_score = 0 self.children: [MonteCarloGame.Node] = [None] * num_children self.unvisited = list(range(num_children)) shuffle(self.unvisited) # To get a random child (i.e. move to next state) def pick_child(self) -> int: if self.unvisited: # If there are unvisited child nodes, choose one of those first return self.unvisited.pop() # = a randomly chosen Node to expand if not self.children: raise ValueError("Cannot call pick_child on a terminal node") # Use UCB1 formula to choose which node to explore further log_n = log(self.max_score) return best_index(self.children, lambda child: child.score / child.max_score + (log_n / child.max_score)**0.5) def update_score(self, score: int): self.max_score += 2 # Every simulated game counts for 2 points (i.e. the maximum score) # Make the score relative to the player who made the last move self.score += 2 - score if self.last_player == MonteCarloGame.States.FIRST_PLAYERS_TURN else score # A score can be 0, 1 or 2 (loss, draw, win). 
def select_or_expand(self, index: int, num_children: int, last_player: MonteCarloGame.States) -> MonteCarloGame.Node: child = self.children[index] or MonteCarloGame.Node(num_children, last_player) self.children[index] = child return child def __init__(self): self.mc_root = None @abstractmethod def copy(self) -> MonteCarloGame: return self # must implement @abstractmethod def size_of_move_list(self) -> int: return 0 # must implement @abstractmethod def state(self) -> States: return self.States.FIRST_PLAYERS_TURN # must implement @abstractmethod def play_from_move_list(self, move_index: int) -> States: return self.States.FIRST_PLAYERS_TURN # must implement # To be called when a move is played from the subclass, so # that the root can "follow" to the corresponding child node def mc_update(self, move_index: int): if not self.mc_root or move_index == -1: self.mc_root = MonteCarloGame.Node(self.size_of_move_list(), self.state()) else: self.mc_root = self.mc_root.select_or_expand(move_index, self.size_of_move_list(), self.state()) # Main Monte Carlo Search algorithm: def mc_search(self, timeout_ms: int, rollout_depth: int) -> int: if self.state() in self.TerminalStates: return -1 # Nothing to do if not self.mc_root: self.mc_root = MonteCarloGame.Node(self.size_of_move_list(), self.state()) expiry = perf_counter_ns() + timeout_ms * 1_000_000 while perf_counter_ns() < expiry: # Start at root (current game state) game = self.copy() node = self.mc_root state = game.state() path = [node] # Traverse down until we have expanded a node, or reached a terminal state while state in self.TurnStates and node.max_score: move_index = node.pick_child() next_state = game.play_from_move_list(move_index) node = node.select_or_expand(move_index, game.size_of_move_list(), state) path.append(node) state = next_state # Rollout score = 1 # Default outcome is a draw (when depth limit is reached) for depth in range(rollout_depth): # Limit the depth if state not in self.TurnStates: score = self.TerminalStates.index(state) # Absolute score (good for player #2) break state = game.play_from_move_list(randrange(game.size_of_move_list())) # Back propagate the score up the search tree for node in path: node.update_score(score) # Choose the most visited move return best_index(self.mc_root.children, lambda n: n.max_score if n else 0) class TicTacToe(MonteCarloGame): class Contents(StrEnum): INVALID = "#" EMPTY = "." FIRST_PLAYER = MonteCarloGame.States.FIRST_PLAYERS_TURN SECOND_PLAYER = MonteCarloGame.States.SECOND_PLAYERS_TURN def __init__(self, num_rows: int=3, num_cols: int=3, win_length: int=3, *, source: TicTacToe=None): super().__init__() if source: # copy self.__dict__ = deepcopy(source.__dict__) return self.num_rows = num_rows self.num_cols = num_cols self.win_length = win_length width = num_cols + 1 # include dummy column bottom = width * (num_rows + 1) # Surround board with dummy row and column, and create a flat representation: self.board = [ (self.Contents.INVALID if i < width or i >= bottom or i % width == 0 else self.Contents.EMPTY) for i in range(width * (num_rows + 2) + 1) # flat structure ] self.free_cells_count = num_cols * num_rows # Maintain a reasonable move list (not ALL free cells) to keep search efficient # At the start we only consider one move (can adapt to allow some more...) 
# This list is not relevant when a "human" player plays a move (any free cell is OK) self.move_list = [self.row_col_to_cell(num_rows//2, num_cols//2)] self._state = self.States.FIRST_PLAYERS_TURN # Add some information about current threats on the board (to narrow rollout paths) self.open_threats: [int] = [] # board cell indices where the last player has opportunity to win self.immediate_win = 0 # board cell index where the next move wins the game self.forced_cell = 0 # derived from previous two attributes def copy(self) -> TicTacToe: return TicTacToe(source=self) def size_of_move_list(self) -> int: if self._state in self.TerminalStates: return 0 # If we are on a forced line to a win or to avoid a loss, consider one move only if self.forced_cell: return 1 return len(self.move_list) def state(self) -> MonteCarloGame.States: return self._state def play_from_move_list(self, move_index: int) -> MonteCarloGame.States: return self.play_at_cell(self.move_index_to_cell(move_index)) def play_at_cell(self, cell: int) -> MonteCarloGame.States: n = len(self.board) turn = self._state self._state = self.TurnStates[1-self.TurnStates.index(turn)] # toggle self.board[cell] = self.Contents(turn) # Type cast to silence warning self.free_cells_count -= 1 if cell in self.move_list: self.move_list.remove(cell) # Add surrounding cells to move_list for next turn w = self.num_cols for neighbor in (cell-1,cell+1,cell-w-2,cell-w-1,cell-w,cell+w,cell+w+1,cell+w+2): if self.board[neighbor] == self.Contents.EMPTY and neighbor not in self.move_list: self.move_list.append(neighbor) threats = set() # Collect cells where the move creates a threat (where opponent must play at) # check if this move makes a direct win or a threat for step in (1, self.num_cols, self.num_cols + 1, self.num_cols + 2): span = self.win_length * step i = next(k for k in range(cell - step, -1, -step) if self.board[k] != turn) j = next(k for k in range(cell + step, n, step) if self.board[k] != turn) if j - i > span: # it's an immediately winning move self._state = (self.States.FIRST_PLAYER_WON if turn == self.States.FIRST_PLAYERS_TURN else self.States.SECOND_PLAYER_WON) return self._state # look for a single gap in a line, such that this gap is a threat if self.board[i] == self.Contents.EMPTY: i2 = next(k for k in range(i - step, -1, -step) if self.board[k] != turn) if j - i2 > span: threats.add(i) if self.board[j] == self.Contents.EMPTY: j2 = next(k for k in range(j + step, n, step) if self.board[k] != turn) if j2 - i > span: threats.add(j) if not self.free_cells_count: self._state = self.States.DRAW self.immediate_win = next((c for c in self.open_threats if c != cell), 0) self.open_threats = list(threats) self.forced_cell = self.immediate_win or next(iter(self.open_threats), 0) return self._state def play_row_col(self, row: int, col: int) -> MonteCarloGame.States: if not 0 <= row < self.num_rows or not 0 <= col < self.num_cols: raise ValueError("row/col out of range") cell = self.row_col_to_cell(row, col) if self.board[cell] != TicTacToe.Contents.EMPTY: raise ValueError("cell is already occupied") move_index = self.cell_to_move_index(cell) state = self.play_at_cell(cell) self.mc_update(move_index) # must be called after the move is played return state # Some methods that convert between different ways to identify a move def move_index_to_cell(self, move_index: int) -> int: # Convert index in the move list to cell in the flattened board # If we are on a forced line to a win or to avoid a loss, consider that move only return self.forced_cell or 
self.move_list[move_index] def cell_to_move_index(self, cell: int) -> int: # Convert cell in the flattened board to index in the move list cells = [self.forced_cell] if self.forced_cell else self.move_list try: return cells.index(cell) except ValueError: return -1 def row_col_to_cell(self, row: int, col: int) -> int: # Convert row / col to the cell in the flattened board return (row + 1) * (self.num_cols + 1) + col + 1 def cell_to_row_col(self, cell: int) -> tuple[int, int]: # Convert cell in flat board structure back to row/col: return cell // (self.num_cols + 1) - 1, cell % (self.num_cols + 1) - 1 # The method that starts the search for the best move, and returns it def best_move(self, timeout_ms: int, rollout_depth: int=15) -> tuple[int, int]: return self.cell_to_row_col(self.move_index_to_cell( self.mc_search(timeout_ms, rollout_depth) )) def __repr__(self) -> str: return "#" * (self.num_cols * 2 + 2) + " ".join( (content, "#\n#")[content == self.Contents.INVALID] for content in self.board[self.num_cols+1:-self.num_cols-2] ) + " #\n" + "#" * (self.num_cols * 2 + 3) def main(): # Create the game instance and set parameters game = TicTacToe(num_rows=8, num_cols=12, win_length=5) timeout_ms = 2000 # milliseconds that tree search may spend for determining a good move rollout_depth = 20 # max number of moves in a rollout game: if reached it counts as a draw human = (MonteCarloGame.States.SECOND_PLAYERS_TURN, ) # Game loop state = game.state() while state in MonteCarloGame.TurnStates: print(game) print(f"Player {state} to move...") if state in human: try: state = game.play_row_col(int(input(f"Enter row (0-{game.num_rows-1}): ")), int(input(f"Enter column (0-{game.num_cols-1}): "))) except ValueError: print("Invalid move. Try again:") else: state = game.play_row_col(*game.best_move(timeout_ms, rollout_depth)) print("Computer has played:") print(game) print(f"Game over: {state}") main() The above can be run as-is and will play a computer-vs-human game on a 8x12 board, with a 5-in-a-row target, and the computer getting 2 seconds time per move, and a rollout depth of 20. Have a go at it, and try to beat it. It is possible. This is not an end-product, but I think this program plays reasonable games. For smaller boards, like 3x3, you don't need 2 seconds, and could set it to just 100 milliseconds. Several things could be improved, like: The search could keep going during the time that the human player is to enter their move. The search time could be more dynamic so that it would return a best move faster when it becomes clear that one move stands out among the rest. More logic could be added to the TicTacToe class so that it would detect forced lines at an earlier stage and limit the move list (for the Monte Carlo search) to the relevant moves to follow those lines. Think of the creation of double threats, ...etc. The depth limit for the rollout phase could be made more dynamic, depending on the size of the board, the time still available,... I hope this meets some of your requirements and this gives some leads on how to further improve it. At least I was happy with the result that less than 300 lines of code could give. | 2 | 1 |
79,386,521 | 2025-1-25 | https://stackoverflow.com/questions/79386521/polars-top-k-by-with-over-k-1-bug | Given the following DataFrame:

df = pl.DataFrame({
    'A': ['a0', 'a0', 'a1', 'a1'],
    'B': ['b1', 'b2', 'b1', 'b2'],
    'x': [0, 10, 5, 1]
})

I want to take the value of column B with the max value of column x within the same value of A (taken from this question). I know there's a solution with pl.Expr.get() and pl.Expr.arg_max(), but I wanted to use pl.Expr.top_k_by() instead, and for some reason it doesn't work for me with k = 1:

df.with_columns(
    pl.col.B.top_k_by("x", 1).over("A").alias("y")
)

ComputeError: the length of the window expression did not match that of the group
Error originated in expression: 'col("B").top_k_by([dyn int: 1, col("x")]).over([col("A")])'

It does work for k = 2 though. Do you think it's a bug? | The error message produced when running your code without the window function is a bit more explicit and hints at a solution.

df.with_columns(
    pl.col("B").top_k_by("x", 1)
)

InvalidOperationError: Series B, length 1 doesn't match the DataFrame height of 4
If you want expression: col("B").top_k_by([dyn int: 1, col("x")]) to be broadcasted, ensure it is a scalar (for instance by adding '.first()').

Especially, pl.Expr.first can be used to allow for proper broadcasting here.

df.with_columns(
    pl.col("B").top_k_by("x", 1).first().over("A").alias("y")
)

shape: (4, 4)
┌─────┬─────┬─────┬─────┐
│ A   ┆ B   ┆ x   ┆ y   │
│ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ str │
╞═════╪═════╪═════╪═════╡
│ a0  ┆ b1  ┆ 0   ┆ b2  │
│ a0  ┆ b2  ┆ 10  ┆ b2  │
│ a1  ┆ b1  ┆ 5   ┆ b1  │
│ a1  ┆ b2  ┆ 1   ┆ b1  │
└─────┴─────┴─────┴─────┘ | 3 | 4
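For comparison, the pl.Expr.get / pl.Expr.arg_max route the question mentions yields the same column without needing .first(), since get() already returns one scalar per group (a sketch):

df.with_columns(
    y=pl.col("B").get(pl.col("x").arg_max()).over("A")
)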
79,383,867 | 2025-1-24 | https://stackoverflow.com/questions/79383867/streaming-multiple-videos-through-fastapi-to-web-browser-causes-http-requests-to | I have a large FastAPI application. There are many different endpoints, including one that is used to proxy video streams. The usage is something like this: the endpoint receives the video stream URL, opens it and returns it through streaming response. If I proxy 5 video streams, then everything is fine. If I proxy 6 streams, then my FastAPI app stops accepting any further requests, until I close one of the opened streams in the browser. I will need to proxy a much larger number of video streams, but the problem occurs already when proxying 6. How fix this? I am attaching a minimally reproducible example, but I cannot attach the links to the video streams themselves. I will note that there are no problems with the original streams, they work directly in any quantity. import uvicorn import httpx from fastapi import FastAPI from fastapi.responses import StreamingResponse app = FastAPI() async def proxy_mjpeg_stream(url: str): async with httpx.AsyncClient() as client: try: async with client.stream("GET", url) as stream: async for chunk in stream.aiter_bytes(): if chunk: yield chunk except httpx.ReadTimeout as ER: print(f'ERROR ReadTimeout - {url=} - {ER}') @app.get("/get_stream") async def get_stream(url: str): if url: headers = { "Content-Type": 'multipart/x-mixed-replace; boundary=frame', } return StreamingResponse(proxy_mjpeg_stream(url), headers=headers) if __name__ == "__main__": uvicorn.run(app) | As you noted that you are testing the application through a web browser, you should be aware that every browser has a specific limit for parallel connections to a given hostname (as well as in general). That limit is hard codedβin Chrome and FireFox, for instance, that is 6βhave a look here. So, if one opened 6 parallel connections to the same domain name through Chrome/FireFox, all subsequent requests would get stalled and queued, until a connection is available (that is, when the request is completed/a response is received). You could use a Python client using httpx, as demonstrated in this answer, in order to test your application instead. Also, as noted in the linked answer, when opening a new connection to the same endpoint, you should do that from a tab that is isolated from the browser's main session; otherwise, succeeding requests might be blocked by the browser (i.e., on client side), as the browser might be waiting for a response to the previous request from the server, before sending the next request (e.g., for caching purposes). Finally, please have a look at this answer on how to create a reusable HTTP client at application startup, instead of creating a new instance every time the endpoint is called. | 1 | 3 |
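A sketch of the reusable-client suggestion applied to the proxy endpoint from the question (the header value and endpoint name are taken from the question; the lifespan wiring is one possible way to do it). Note that the six-connections-per-host cap lives in the browser, so testing with an httpx-based client avoids it entirely:

from contextlib import asynccontextmanager

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse


@asynccontextmanager
async def lifespan(app: FastAPI):
    # One client (and one connection pool) for the whole application
    app.state.client = httpx.AsyncClient(timeout=None)
    yield
    await app.state.client.aclose()


app = FastAPI(lifespan=lifespan)


@app.get("/get_stream")
async def get_stream(url: str):
    client: httpx.AsyncClient = app.state.client

    async def relay():
        async with client.stream("GET", url) as upstream:
            async for chunk in upstream.aiter_bytes():
                if chunk:
                    yield chunk

    return StreamingResponse(
        relay(),
        headers={"Content-Type": "multipart/x-mixed-replace; boundary=frame"},
    )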
79,385,534 | 2025-1-24 | https://stackoverflow.com/questions/79385534/python-adaptor-for-working-around-relative-imports-in-my-qgis-plugin | I am writing a QGIS plugin. During early development, I wrote and tested the Qt GUI application independently of QGIS. I made use of absolute imports, and everything worked fine. Then, I had to adapt everything to the quirks of QGIS. I can't explain why and haven't been able to find any supporting documentation, but nonetheless: apparently, QGIS needs or strongly prefers relative imports. The majority of other plugins I've looked at (completely anecdotal of course) have all used a flat plugin directory and relative imports. My team decided to keep the hierarchical structure. The plugin now works in QGIS, with the same hierarchical structure, using relative imports. My goal: I would like to still be able to run the GUI independently of QGIS, as it does not (yet) depend on any aspects of QGIS. With the relative imports, this is completely broken. My project directory has the following hierarchy: . βββ app β βββ main.py βββ __init__.py βββ justfile βββ metadata.txt βββ plugin.py βββ README.md βββ resources β βββ name_resolver.py β βββ response_codes.py βββ processing β βββ B_processor.py β βββ A_processor.py β βββ processor_core.py β βββ processor_interface.py β βββ processor_query_ui.py β βββ processor_query_ui.ui βββ tests βββ __init__.py βββ test_A_processor.py βββ test_processor_core.py The app directory and its main.py module are where I'm trying to run the GUI independently of QGIS. The GUI is in processing/processor_query_ui.py. app/main.py is as follows: if __name__ == "__main__": import sys from PyQt5 import QtWidgets from processing.processor_query_ui import UI_DataFinderUI app = QtWidgets.QApplication(sys.argv) ui = UI_DataFinderUI() ui.show() sys.exit(app.exec_()) When running main from the top level, all imports within main.py work: $ python app/main.py What does NOT work are the subsequent imports: Traceback (most recent call last): File "/path/to/app/main.py", line 4, in <module> from processing.processor_query_ui import UI_DataFinderUI File "/path/to/processing/processor_query_ui.py", line 2, in <module> from .A_processor import Aprocessor File "/path/to/processing/A_processor.py", line 5, in <module> from ..resources.response_codes import RESPONSE_CODES ImportError: attempted relative import beyond top-level package This shows that all imports in main.py are working correctly. But when processor_query_ui tries to do its imports, those ones fail. I have tried adding __init__.py files to all first level directories as well (e.g. {app,resources,processing}/__init__.py) to no vail. Running python -m app/main{.py} doesn't work, though I didn't really expect it to. For pytest to work, the tests directory must have an init.py file; then, pytest works as either pytest or python -m pytest. My goal is to be able to run the GUI in processing/processor_query_ui.py as a standalone app, by writing some kind of adaptor such that I do not have to change the current directory structure or relative imports (which, as above, make QGIS happy). Any advice is greatly appreciated. | What does your python path look like? For those relative imports to work, you need the directory containing your repository directory (not just the repository directory itself) to be on the python path. 
The reason the relative imports work when you run the code as a QGIS plugin is that the directory containing your repo β- i.e., the QGIS plugin directory β- is already on the python path in the QGIS python environment. To avoid that issue, I would suggest restructuring your repo as follows, with all of the plugin code in a subdirectory off of the top-level directory (Iβd suggest a more descriptive name than βpluginβ, but I donβt know what your plugin is called). . βββ app β βββ main.py βββ justfile βββ README.md βββ plugin β βββ __init__.py β βββ plugin.py β βββ metadata.txt β βββ resources β β βββ name_resolver.py β β βββ response_codes.py β βββ processing β βββ B_processor.py β βββ A_processor.py β βββ processor_core.py β βββ processor_interface.py β βββ processor_query_ui.py β βββ processor_query_ui.ui βββ tests βββ __init__.py βββ test_A_processor.py βββ test_processor_core.py This has a couple of advantages: First, it lets you distribute just the plugin subdirectory as your plugin. Your plugin users donβt need your unit tests or the app directory or the README. Second, if you now add your top level repository directory to the python path (which I assume it already is or your absolute imports wouldnβt work), you should be able to do the absolute imports from outside the plugin folder (youβll have to change those to import plugin.processing and plugin.resources and their contents instead of just processing etc), and you should then be able to do relative imports within plugin directory. Your tests will also need to use absolute imports if they donβt already. This structure has worked for me for QGIS plugins with multiple levels of submodules/subfolders. | 1 | 1 |
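Building on the restructuring proposed in the answer above, here is a sketch of what `app/main.py` could look like so the GUI runs standalone: it puts the repository root (the directory that contains the `plugin` package) on `sys.path`, mirroring what the QGIS plugin directory provides at runtime. The package name `plugin` and the class name are taken from the answer and question; adjust to the real names.

```python
# app/main.py -- sketch assuming the restructured layout from the answer above
import sys
from pathlib import Path

# Repository root is the parent of the "app" directory and contains "plugin".
REPO_ROOT = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(REPO_ROOT))

if __name__ == "__main__":
    from PyQt5 import QtWidgets
    from plugin.processing.processor_query_ui import UI_DataFinderUI

    app = QtWidgets.QApplication(sys.argv)
    ui = UI_DataFinderUI()
    ui.show()
    sys.exit(app.exec_())
```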
79,385,866 | 2025-1-24 | https://stackoverflow.com/questions/79385866/numpy-array-boolean-indexing-to-get-containing-element | Given a (3,2,2) array how do I get second dimension elements given a single value on the third dimension import numpy as np arr = np.array([ [[31., 1.], [41., 1.]], [[63., 1.],[73., 3.]], [[ 95., 1.], [100., 1]] ] ) ref = arr[(arr[:,:,0] > 41.) & (arr[:,:,0] <= 63)] print(ref) Result [[63. 1.]] Expected result [[63., 1.],[73., 3.]] The input value is 63 so I don't know in advance 73 exists but I want to return it as well. In other words, if value exists return the whole parent array without reshaping. Another example ref = arr[(arr[:,:,0] <= 63)] Returns [[31. 1.] [41. 1.] [63. 1.]] But should return [[[31. 1.] [41. 1.]] [[63. 1.] [73. 1.]]] | I think you want arr[((arr[:,:,0]>41)&(arr[:,:,0]<=63)).any(axis=1)] and arr[(arr[:,:,0] <= 63).any(axis=1)] Some explanation. First of all, a 3D array, is also a 2D array of 1D array, or a 1D array of 2D array. So, if the expected answer is an array of "whole parent", that is an array of 2D arrays (so a 3D array, with only some subarrays in it), you need a 1D array of booleans as index. Such as [True, False, False] to select only the 1st row, [[[31., 1.], [41., 1.]]], which is a bit the same as arr[[0]]. arr[:,:,0]>41 is a 2D array of booleans. And therefore would pick individually some pairs (that is select elements along the 1st two axis). For example arr[[[True, False], [False, False], [False, False]]] selects only the 1st pair of the 1st subarray, [[31,1]]. A bit like arr[[0],[0]] would do. So, since you want something like [[[31., 1.], [41., 1.]]], not something like [[31,1]], we need to produce a 1D array of booleans, telling for each line (each subarray along axis 0) whether we want it or not. Now, the comments were about how to decide whether we want a subarray or not. If we start from ref=arr[:,:,0]<=63 That is ref = array([[ True, True], [ True, False], [False, False]]) getting arr[ref] would select 3 pairs, which is not what you want (again, we don't want a 2D array of booleans as selector, since we want whole parents). Your attempt answer, is to use ref[:,0], which is a 1D-array of 3 booleans (the first of each row) : [True, True, False], which would select the 3 first rows. This answer, is to use ref[:,0], which is also a 1D-array of 3 booleans, each True iff one boolean at least of the row is True. So, also [True, True, False] Difference between our two answers shows with another example, used in comment. ref=arr[:,:,0]>97 array([[False, False], [False, False], [False, True]]) if we use ref[:,0], that is [False, False, False], then answer is an empty array. Even the last row is not selected, tho it contains a value over 97. But we are only interested (if we say ref[:,0]) in rows whose 1st value of 1st pair is > 97 If we use ref.any(axis=1), as in this answer, that is [False, False, True], we get the last row. Because this means that we are interested in any row whose at least one pair has a 1st value>97. We could also select rows whose only second pair has a 1st value>97 (arr[ref[:,1]]). Or rows whose one pair, but not both, has a 1st value>97 (arr[ref[:,0]^ref[:,1]]). Etc. Everything is possible. Point is, if we want to get a list of whole rows (subarrays along axis 0), then we need to build a 1D array of 3 booleans, deciding for each row if we want it all (True) or nothing (false) | 2 | 2 |
79,382,645 | 2025-1-23 | https://stackoverflow.com/questions/79382645/fastapi-why-does-synchronous-code-do-not-block-the-event-loop | Iβve been digging into FastAPIβs handling of synchronous and asynchronous endpoints, and Iβve come across a few things that Iβm trying to understand more clearly, especially with regards to how blocking operations behave in Python. From what I understand, when a synchronous route (defined with def) is called, FastAPI offloads it to a separate thread from the thread pool to avoid blocking the main event loop. This makes sense, as the thread can be blocked (e.g., time.sleep()), but the event loop itself doesnβt get blocked because it continues handling other requests. But hereβs my confusion: If the function is truly blocking (e.g., itβs waiting for something like time.sleep()), how is the event loop still able to execute other tasks concurrently? Isnβt the Python interpreter supposed to execute just one thread at a time? Here an example: from fastapi import APIRouter import os import threading import asyncio app = APIRouter() @app.get('/sync') def tarefa_sincrona(): print('Sync') total = 0 for i in range(10223424*1043): total += i print('Sync task done') @app.get('/async') async def tarefa_sincrona(): print('Async task') await asyncio.sleep(5) print('Async task done') If I make two requests β the first one to the sync endpoint and the second one to the async endpoint β almost at the same time, I expected the event loop to be blocked. However, in reality, what happens is that the two requests are executed "in parallel." | If the function is truly blocking (e.g., itβs waiting for something like time.sleep()), how is the event loop still able to execute other tasks concurrently? Isnβt the Python interpreter supposed to execute just one thread at a time? Only one thread is indeed executed at a time. The flaw in the quoted question is to assume that time.sleep() keeps the thread active - as another answerer has pointed out, it does not. The TL;DR is that time.sleep() does block the thread, but it contains a C macro that periodically releases its lock on the global interpreter. Concurrency in Python (with GIL) A thread can acquire a lock on the global interpreter, but only if the interpreter isn't already locked A lock cannot be forcibly removed, it has to be released by the thread that has it CPython will periodically release the running thread's GIL if there are other threads waiting for execution time Functions can also voluntarily release their locks Voluntarily releasing locks is pretty common. In C-extensions, it's practically mandatory: Py_BEGIN_ALLOW_THREADS is a macro for { PyThreadState *_save; _save = PyEval_SaveThread(); PyEval_SaveThread() releases GIL. time.sleep() voluntarily releases the lock on the global interpreter with the macro mentioned above. Synchronous threading: As mentioned earlier, Python will regularly try to release the GIL so that other threads can get a bit of execution time. For threads with a varied workload, this is smart. If a thread is waiting for I/O but the code doesn't voluntarily release GIL, this method will still result in the GIL being swapped to a new thread. For threads that are entirely or primarily CPU-bound, it works... but it doesn't speed up execution. I'll include code that proves this at the end of the post. 
The reason it doesn't provide a speed-up in this case is that CPU-bound operations aren't waiting on anything, so sleeping func_1 to give execution time to func_2 just means that func_1 is idle for no reason - with the result that func_1's potential completion time gets staggered by the amount of execution time is granted to func_2. Inside of an event loop: asyncio's event loop is single-threaded, which is to say that it doesn't spawn new threads. Each coroutine that runs, uses the main thread (the same thread the event loop lives in). The way this works is that the event loop and its coroutines work together to pass the GIL among themselves. But why aren't coroutines offloaded to threads, so that CPython can step in and release the GIL to to other threads? Many reasons, but the easiest to grasp is maybe this: In practice that would have meant running the risk of significantly lagging the event loop. Because instead of immediately resuming its own tasks (which is to spawn a new coroutine) when the current coroutine finishes, it now possibly has to wait for execution time due to the GIL having been passed off elsewhere. Similarly, coroutines would take longer to finish due to constant context-switching. Which is a long-winded way of saying that if time.sleep() didn't release its lock, or if you were running a long CPU-bound thing, a single thread would indeed block the entire event loop (by hogging the GIL). So what now? Inside of GIL-bound Python, whether it's sync or async, the only way to execute CPU-binding code (that doesn't actively release its lock) with true concurrency is at the process-level, so either multiprocessing or concurrent.futures.ProcessPoolExecutor, as each process will have its own GIL. So: async functions running CPU-bound code (with no voluntary yields) will run to completion before yielding GIL. sync functions in separate threads running CPU-bound code with no voluntary yields will get paused periodically, and the GIL gets passed off elsewhere. (For clarity:) sync functions in the same thread will have no concurrency whatsoever. multiprocessing docs also hint very clearly at the above descriptions: The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. As well as threading docs: threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously Reading between the lines, this is much the same as saying that tasks bound by anything other than I/O won't achieve any noteworthy concurrency through threading. Testing it yourself: # main.py from fastapi import FastAPI import time import os import threading app = FastAPI() def bind_cpu(id: int): thread_id = threading.get_ident() print(f"{time.perf_counter():.4f}: BIND GIL for ID: {id}, internals: PID({os.getpid()}), thread({thread_id})") start = time.perf_counter() total = 0 for i in range(100_000_000): total += i end = time.perf_counter() print(f"{time.perf_counter():.4f}: REL GIL for ID: {id}, internals: PID({os.getpid()}), thread({thread_id}). 
Duration: {end-start:.4f}s") return total def endpoint_handler(method: str, id: int): print(f"{time.perf_counter():.4f}: Worker reads {method} endpoint with ID: {id} - internals: PID({os.getpid()}), thread({threading.get_ident()})") result = bind_cpu(id) print(f"{time.perf_counter():.4f}: Worker finished ID: {id} - internals: PID({os.getpid()}), thread({threading.get_ident()})") return f"ID: {id}, {result}" @app.get("/async/{id}") async def async_endpoint_that_gets_blocked(id: int): return endpoint_handler("async", id) @app.get("/sync/{id}") def sync_endpoint_that_gets_blocked(id: int): return endpoint_handler("sync", id) if __name__ == "__main__": import uvicorn uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True, workers=1) # test.py import asyncio import httpx import time async def send_requests(): async with httpx.AsyncClient(timeout=httpx.Timeout(25.0)) as client: tasks = [] for i in range(1, 5): print(f"{time.perf_counter():.4f}: Sending HTTP request for id: {i}") if i % 2 == 0: tasks.append(client.get(f"http://localhost:8000/async/{i}")) else: tasks.append(client.get(f"http://localhost:8000/sync/{i}")) responses = await asyncio.gather(*tasks) for response in responses: print(f"{time.perf_counter():.4f}: {response.text}") asyncio.run(send_requests()) Launch FastAPI (python main.py) Fire off some requests (python test.py) You will get results looking something like this: [...] INFO: Waiting for application startup. INFO: Application startup complete. 10755.6897: Sending HTTP request for id: 1 10755.6900: Sending HTTP request for id: 2 10755.6902: Sending HTTP request for id: 3 10755.6904: Sending HTTP request for id: 4 10755.9722: Worker reads async endpoint with ID: 4 - internals: PID(24492), thread(8972) 10755.9725: BIND GIL for ID: 4, internals: PID(24492), thread(8972) 10759.4551: REL GIL for ID: 4, internals: PID(24492), thread(8972). Duration: 3.4823s 10759.4554: Worker finished ID: 4 - internals: PID(24492), thread(8972) INFO: 127.0.0.1:56883 - "GET /async/4 HTTP/1.1" 200 OK 10759.4566: Worker reads async endpoint with ID: 2 - internals: PID(24492), thread(8972) 10759.4568: BIND GIL for ID: 2, internals: PID(24492), thread(8972) 10762.6428: REL GIL for ID: 2, internals: PID(24492), thread(8972). Duration: 3.1857s 10762.6431: Worker finished ID: 2 - internals: PID(24492), thread(8972) INFO: 127.0.0.1:56884 - "GET /async/2 HTTP/1.1" 200 OK 10762.6446: Worker reads sync endpoint with ID: 3 - internals: PID(24492), thread(22648) 10762.6448: BIND GIL for ID: 3, internals: PID(24492), thread(22648) 10762.6968: Worker reads sync endpoint with ID: 1 - internals: PID(24492), thread(9144) 10762.7127: BIND GIL for ID: 1, internals: PID(24492), thread(9144) 10768.9234: REL GIL for ID: 3, internals: PID(24492), thread(22648). Duration: 6.2784s 10768.9338: Worker finished ID: 3 - internals: PID(24492), thread(22648) INFO: 127.0.0.1:56882 - "GET /sync/3 HTTP/1.1" 200 OK 10769.2121: REL GIL for ID: 1, internals: PID(24492), thread(9144). 
Duration: 6.4835s 10769.2124: Worker finished ID: 1 - internals: PID(24492), thread(9144) INFO: 127.0.0.1:56885 - "GET /sync/1 HTTP/1.1" 200 OK 10769.2138: "ID: 1, 4999999950000000" 10769.2141: "ID: 2, 4999999950000000" 10769.2143: "ID: 3, 4999999950000000" 10769.2145: "ID: 4, 4999999950000000" Interpretation Going over the timestamps and the durations, two things are immediately clear: The async endpoints are executing de-facto synchronously The sync endpoints are executing concurrently and finish nearly at the same time BUT each request takes twice as long to complete compared to the async ones Both of these results are expected, re: the explanations earlier. The async endpoints become de-facto synchronous because the function we built hoards the GIL, and so the event loop gets no execution time until the coroutine returns. The sync endpoints become faux-asynchronous because Python's context manager is swapping between them every ~5ms, which means that the first request increments by x%, then the second request increments by x% - repeat until both finish ~ish at the same time. | 3 | 4 |
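The answer above concludes that CPU-bound work needs process-level parallelism to escape the GIL. A minimal sketch of that idea in FastAPI follows, offloading the same kind of loop to a `ProcessPoolExecutor`; the route name and worker count are illustrative choices, not from the original post.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI

app = FastAPI()
pool = ProcessPoolExecutor(max_workers=2)

def bind_cpu(n: int) -> int:
    # Same style of CPU-bound loop as in the benchmark above.
    total = 0
    for i in range(n):
        total += i
    return total

@app.get("/offloaded/{n}")
async def offloaded(n: int):
    loop = asyncio.get_running_loop()
    # The event loop stays responsive while a worker process runs under its own GIL.
    result = await loop.run_in_executor(pool, bind_cpu, n)
    return {"result": result}
```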
79,385,676 | 2025-1-24 | https://stackoverflow.com/questions/79385676/filter-with-expression-expansion | Is it possible to convert the following filter, which uses two conditions, to something that uses expression expansion or a custom function in order to apply the DRY priciple (avoid the repetition)? Here is the example: import polars as pl df = pl.DataFrame( { "a": [1, 2, 3, 4, 5], "val1": [1, None, 0, 0, None], "val2": [1, None, None, 0, 1], } ) df.filter((~pl.col("val1").is_in([None, 0])) | (~pl.col("val2").is_in([None, 0]))) Results in: βββββββ¬βββββββ¬βββββββ β a β val1 β val2 β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺβββββββͺβββββββ‘ β 1 β 1 β 1 β β 5 β null β 1 β βββββββ΄βββββββ΄βββββββ | .any_horizontal() and .all_horizontal() can be used to build | and & chains. .not_() can also be used instead of ~ if you prefer. df.filter( pl.any_horizontal( pl.col("val1", "val2").is_in([None, 0]).not_() ) ) shape: (2, 3) βββββββ¬βββββββ¬βββββββ β a β val1 β val2 β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺβββββββͺβββββββ‘ β 1 β 1 β 1 β β 5 β null β 1 β βββββββ΄βββββββ΄βββββββ | 2 | 2 |
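As a small complement to the accepted answer above: since `~A | ~B` equals `~(A & B)`, the same filter can also be written with `all_horizontal` and a single negation on the outside. A sketch on the same data:

```python
import polars as pl

df = pl.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "val1": [1, None, 0, 0, None],
    "val2": [1, None, None, 0, 1],
})

out = df.filter(
    pl.all_horizontal(pl.col("val1", "val2").is_in([None, 0])).not_()
)
print(out)  # keeps the rows with a == 1 and a == 5, same as the any_horizontal form
```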
79,382,803 | 2025-1-23 | https://stackoverflow.com/questions/79382803/i-am-trying-to-cause-race-condition-for-demonstration-purposes-but-fail-to-fail | I am actively trying to get race condition and cause problem in calculation for demonstration purposes but i can't achieve such problem simply. My tought process was to create a counter variable, reach it from diffrent threads and async functions (i did not tried mp since it pauses process) and increase it by one. Running 3 instances of for loops in range x and increasing the counter by 1 in each loop i expected to get value lower then 3x due to race condition. I did not use locks, but i still get 3x value each run. Does it due to GIL update? i have tried with python versions 3.10 / 3.11 / 3.13. What should i do to get race condition simple structure my code to get race condition import threading import asyncio def multithreading_race_condition(): counter2 = 0 def increment(): nonlocal counter2 for _ in range(10000): counter2 = counter2 + 1 threads = [threading.Thread(target=increment) for _ in range(3)] for t in threads: t.start() for t in threads: t.join() print(f"Multithreading Final Counter: {counter2}") async def asyncio_race_condition(): counter3 = 0 async def increment(): nonlocal counter3 for _ in range(10000): counter3 = counter3 + 1 tasks = [asyncio.create_task(increment()) for _ in range(3)] await asyncio.gather(*tasks) print(f"Asyncio Final Counter: {counter3}") def main(): print("\nMultithreading Example:") multithreading_race_condition() print("\nAsyncio Example:") asyncio.run(asyncio_race_condition()) if __name__ == "__main__": main() my output is Multithreading Example: Multithreading Final Counter: 30000 Asyncio Example: Asyncio Final Counter: 30000 | The time between fetching the value of counter, incrementing it and reassigning it back is very short. counter = counter + 1 In order to force a race condition for demonstration purposes, you should extend that window perhaps with sleep tmp = counter time.sleep(random.random()) counter = tmp + 1 I would also increase the number of concurrent tasks to give more chance for an issue to pop up. This should dramatically illustrate things. import threading import asyncio import time import random def multithreading_race_condition(): counter = 0 iteration_count = 10 task_count = 10 def increment(): nonlocal counter for _ in range(iteration_count): tmp = counter time.sleep(random.random()) counter = tmp + 1 threads = [threading.Thread(target=increment) for _ in range(task_count)] for t in threads: t.start() for t in threads: t.join() print(f"Multithreading Final Counter was {counter} expected {iteration_count * task_count}") async def asyncio_race_condition(): counter = 0 iteration_count = 10 task_count = 10 async def increment(): nonlocal counter for _ in range(iteration_count): tmp = counter await asyncio.sleep(random.random()) counter = tmp + 1 tasks = [asyncio.create_task(increment()) for _ in range(task_count)] await asyncio.gather(*tasks) print(f"Asyncio Final Counter was {counter} expected {iteration_count * task_count}") def main(): print("\nMultithreading Example:") multithreading_race_condition() print("\nAsyncio Example:") asyncio.run(asyncio_race_condition()) if __name__ == "__main__": main() That should likely give you something like: Multithreading Example: Multithreading Final Counter was 10 expected 100 Asyncio Example: Asyncio Final Counter was 10 expected 100 Dramatically illustrating a race condition. 
Here is a second way to demonstrate a race using code a little more like what you have. Note that if counter is loaded first you more likely will find a race and if loaded last you are likely not to find one. You can use dis to see how that might be true: import dis test = """ counter = counter + 1 + (time.sleep(random.random()) or 0) """ print(dis.dis(test)) test2 = """ counter = 1 + (time.sleep(random.random()) or 0) + counter """ print(dis.dis(test2)) So, here are two examples that difffer only in the order of operations on either side of a long opperation. The first is very likely to demonstrate a race condition while the second is unlikely (though not impossible) to do so. import threading import time import random def multithreading_race_condition(): counter = 0 def increment(): nonlocal counter for _ in range(10): counter = counter + 1 + (time.sleep(random.random()) or 0) #counter = 1 + (time.sleep(random.random()) or 0) + counter threads = [threading.Thread(target=increment) for _ in range(10)] for t in threads: t.start() for t in threads: t.join() print(f"Multithreading Final Counter: {counter}") def multithreading_no_race_condition(): counter = 0 def increment(): nonlocal counter for _ in range(10): #counter = counter + 1 + (time.sleep(random.random()) or 0) counter = 1 + (time.sleep(random.random()) or 0) + counter threads = [threading.Thread(target=increment) for _ in range(10)] for t in threads: t.start() for t in threads: t.join() print(f"Multithreading Final Counter: {counter}") def main(): print("\nMultithreading Example with race:") multithreading_race_condition() print("\nMultithreading Example with (probably) no race:") multithreading_no_race_condition() if __name__ == "__main__": main() I am guessing that will give you: Multithreading Example with race: Multithreading Final Counter: 10 Multithreading Example with (probably) no race: Multithreading Final Counter: 100 | 3 | 1 |
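For completeness, a lock-based fix for the demonstration above: with the read-modify-write wrapped in a `threading.Lock`, the counter always reaches the expected value even with the artificially widened race window. The shortened sleep is just to keep the run time reasonable.

```python
import threading
import time
import random

def multithreading_with_lock():
    counter = 0
    lock = threading.Lock()

    def increment():
        nonlocal counter
        for _ in range(10):
            # The read-modify-write is now atomic with respect to the other threads.
            with lock:
                tmp = counter
                time.sleep(random.random() / 100)
                counter = tmp + 1

    threads = [threading.Thread(target=increment) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"Final Counter with lock: {counter}")  # always 100

multithreading_with_lock()
```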
79,383,301 | 2025-1-24 | https://stackoverflow.com/questions/79383301/is-there-a-way-to-use-list-of-indices-to-simultaneously-access-the-modules-of-nn | Is there a way to use list of indices to simultaneously access the modules of nn.ModuleList in python? I am working with pytorch ModuleList as described below, decision_modules = nn.ModuleList([nn.Linear(768, 768) for i in range(10)]) Our input data is of the shape x=torch.rand(32,768). Here 32 is the batch size and 768 is the feature dimension. Now for each input data point in a minibatch of 32 datapoints, we want to select 4 decision modules from the list of decision_modules. The 4 decision engines from decision_engine are selected using an index list as explained below. I have a index matrix of dimensions ind. The ind matrix is of dimension torch.randint(0,10,(32,4)). I want to us a solution without use of loops as loops slows down the xecution significantly. But the following code throws and error. import torch import torch.nn as nn linears = nn.ModuleList([nn.Linear(768, 768) for i in range(10)]) ind=torch.randint(0,10,(32,4)) input=torch.rand(32,768) out=linears[ind](input) The following error was observed File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\nn\modules\container.py:334, in ModuleList.getitem(self, idx) 332 return self.class(list(self._modules.values())[idx]) 333 else: --> 334 return self._modules[self._get_abs_string_index(idx)] File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\nn\modules\container.py:314, in ModuleList._get_abs_string_index(self, idx) 312 def _get_abs_string_index(self, idx): 313 """Get the absolute index for the list of modules.""" --> 314 idx = operator.index(idx) 315 if not (-len(self) <= idx < len(self)): 316 raise IndexError(f"index {idx} is out of range") TypeError: only integer tensors of a single element can be converted to an index The expected output shape is (32,4,768). Any help will be highly useful. | The ind tensor is of size (bs, n_decisions), which means we're choosing a different set of experts for each item in the batch. With this setup, the most efficient way to compute the output is to compute all experts for all batch items, then gather the desired choices after. This will be more performant in GPU compared to looping over the individual experts. Since we're looking at a linear layer, you can compute all the experts using a single linear layer of size n_experts * dim. d_in = 768 n_experts = 10 bs = 32 n_choice = 4 # Create a single large linear layer fused_linear = nn.Linear(d_in, d_in * n_experts) indices = torch.randint(0, n_experts, (bs, n_choice)) x = torch.randn(bs, d_in) # Forward pass through the fused layer y = fused_linear(x) # Shape: [bs, d_in * n_experts] # Reshape to separate the experts dimension ys = y.reshape(bs, n_experts, d_in) # Shape: [bs, n_experts, d_in] # Gather the chosen experts ys = torch.gather(ys, 1, indices.unsqueeze(-1).expand(-1, -1, d_in)) The output ys will be of shape (bs, n_choice, d_in) | 1 | 2 |
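A small verification sketch for the gather step in the answer above, using random data with the shapes from the question: it checks that the gathered tensor picks out exactly the per-sample expert outputs that plain indexing would.

```python
import torch

bs, n_experts, n_choice, d_in = 32, 10, 4, 768

# Stand-in for the outputs of all experts for every sample.
all_outputs = torch.randn(bs, n_experts, d_in)
indices = torch.randint(0, n_experts, (bs, n_choice))

gathered = torch.gather(all_outputs, 1, indices.unsqueeze(-1).expand(-1, -1, d_in))

# Compare against plain indexing, sample by sample.
for b in range(bs):
    for c in range(n_choice):
        assert torch.equal(gathered[b, c], all_outputs[b, indices[b, c]])
print(gathered.shape)  # torch.Size([32, 4, 768])
```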
79,385,026 | 2025-1-24 | https://stackoverflow.com/questions/79385026/pandas-groupby-with-tag-style-list | I have a dataset with 'tag-like' groupings: Id tags 0 item1 ['friends','family'] 1 item2 ['friends'] 2 item3 [] 3 item4 ['family','holiday'] So a row can belong to several groups. I want to create an object similar to groupby, so that I can use agg etc. df.groupby('tags').count() expected result tags count 0 'friends' 2 1 'family' 2 2 'holiday' 1 But of course it won't work because it treats the whole list as the key, rather than the individual tags. Here's an attempt tagset = set(df.tags.explode()) grpby = { t: df.loc[df.tags.str.contains(t, regex=False)] for t in tagset } From what I understand, groupby objects are structured a bit like this. But how to make it a groupby object? So that I can do things like grpby.year.mean() etc? | You can't have a row belong to multiple groups like your grpby object. Thus what you want to do is impossible in pure pandas, unless you duplicate the rows with explode, then you will be able to groupby.agg: out = (df.explode('tags') .groupby('tags', as_index=False) .agg(**{'count': ('tags', 'size')}) ) Output: tags count 0 family 2 1 friends 2 2 holiday 1 With a more meaningful aggregation: out = (df.explode('tags') .groupby('tags', as_index=False) .agg({'Id': frozenset}) ) Output: tags Id 0 family (item4, item1) 1 friends (item2, item1) 2 holiday (item4) Note however that explode is quite expensive, so if you just want to count the tags, better use pure python: from collections import Counter from itertools import chain out = Counter(chain.from_iterable(df['tags'])) Output: Counter({'friends': 2, 'family': 2, 'holiday': 1}) And if you want to split the DataFrame like your grpby object: tmp = df.assign(group=df['tags']).explode('group') group = tmp.pop('group') out = dict(list(tmp.groupby(group))) Output: {'family': Id tags 0 item1 [friends, family] 3 item4 [family, holiday], 'friends': Id tags 0 item1 [friends, family] 1 item2 [friends], 'holiday': Id tags 3 item4 [family, holiday]} | 2 | 1 |
79,384,811 | 2025-1-24 | https://stackoverflow.com/questions/79384811/issues-with-axes-matplotlib-inheritance | I'm trying to mimic the plt.subplots() behavior, but with custom classes. Rather than return Axes from subplots(), I would like to return CustomAxes. I've looked at the source code and don't understand why I am getting the traceback error below. I'm able to accomplish what I want without inheriting from Axes, but I think long term I would like to inherit from Axes. If you think this is ridiculous and there's a better way, let me know! Code: from matplotlib.figure import Figure from matplotlib.axes import Axes class CustomAxes(Axes): def __init__(self, fig, *args, **kwargs): super().__init__(fig, *args, **kwargs) def create_plot(self, i): self.plot([1, 2, 3], [1, 2, 3]) self.set_title(f'Title {i}') class CustomFigure(Figure): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def subplots(self, *args, **kwargs): axes = super().subplots(*args, **kwargs) axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()] return axes fig, axes = CustomFigure().subplots(nrows=2, ncols=2) for i, ax in enumerate(axes, start=1): ax.create_plot(i=i) fig.tight_layout() fig Traceback: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[60], line 23 20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()] 21 return axes ---> 23 fig, axes = CustomFigure().subplots(nrows=2, ncols=2) 24 for i, ax in enumerate(axes, start=1): 25 ax.create_plot(i=i) Cell In[60], line 20 18 def subplots(self, *args, **kwargs): 19 axes = super().subplots(*args, **kwargs) ---> 20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()] 21 return axes Cell In[60], line 20 18 def subplots(self, *args, **kwargs): 19 axes = super().subplots(*args, **kwargs) ---> 20 axes = [CustomAxes(fig=self, *args, **kwargs) for ax in axes.flatten()] 21 return axes Cell In[60], line 7 6 def __init__(self, fig, *args, **kwargs): ----> 7 super().__init__(fig, *args, **kwargs) File ~/repos/test/venv/lib/python3.11/site-packages/matplotlib/axes/_base.py:656, in _AxesBase.__init__(self, fig, facecolor, frameon, sharex, sharey, label, xscale, yscale, box_aspect, forward_navigation_events, *args, **kwargs) 654 else: 655 self._position = self._originalPosition = mtransforms.Bbox.unit() --> 656 subplotspec = SubplotSpec._from_subplot_args(fig, args) 657 if self._position.width < 0 or self._position.height < 0: 658 raise ValueError('Width and height specified must be non-negative') File ~/repos/test/venv/lib/python3.11/site-packages/matplotlib/gridspec.py:576, in SubplotSpec._from_subplot_args(figure, args) 574 rows, cols, num = args 575 else: --> 576 raise _api.nargs_error("subplot", takes="1 or 3", given=len(args)) 578 gs = GridSpec._check_gridspec_exists(figure, rows, cols) 579 if gs is None: TypeError: subplot() takes 1 or 3 positional arguments but 0 were given Working code without inheritance: from matplotlib.figure import Figure class CustomAxes(): def __init__(self, ax): self.ax = ax def create_plot(self, i): self.ax.plot([1, 2, 3], [1, 2, 3]) self.ax.set_title(f'Title {i}') class CustomFigure(Figure): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def subplots(self, *args, **kwargs): axes = super().subplots(*args, **kwargs) axes = [CustomAxes(ax) for ax in axes.flatten()] return self, axes fig, axes = CustomFigure().subplots(nrows=2, ncols=2) for i, ax in enumerate(axes, start=1): 
ax.create_plot(i=i) fig.tight_layout() fig | I think the error is because you are not passing any position information (i.e. args) to Axes.__init__ when you call it via super. However, you can do this more simply without subclassing Figure, since subplots lets you specify an Axes subclass: import matplotlib.pyplot as plt from matplotlib.axes import Axes class CustomAxes(Axes): def __init__(self, fig, *args, **kwargs): super().__init__(fig, *args, **kwargs) def create_plot(self, i): self.plot([1, 2, 3], [1, 2, 3]) self.set_title(f'Title {i}') fig, ax_arr = plt.subplots(nrows=2, ncols=2, subplot_kw={'axes_class': CustomAxes}) for i, ax in enumerate(ax_arr.flat, start=1): ax.create_plot(i=i) fig.tight_layout() plt.show() | 1 | 2 |
79,384,228 | 2025-1-24 | https://stackoverflow.com/questions/79384228/batch-insert-data-using-psycopg2-vs-psycopg3 | Currently i am inserting to postgres database using psycopg2. Data is large and also the write frequency is high, so my database has WAL disabled and few other optimizations for faster writes. When i use psycopg2 with execute_values, i am able to write batch of 1000 rows in 0.1-0.15 seconds. from psycopg2.extras import execute_values self.engine = create_engine(f'postgresql+psycopg2://postgres:password@localhost/postgres', pool_size=DB_POOL_SIZE,max_overflow=20) def insert_data_todb(self, table_name, batch_data): try: t1 = time.perf_counter() insert_sql = f"""INSERT INTO {table_name} ({self._market_snapshot_columns_str}) VALUES %s;""" with self.engine.connect() as conn, conn.connection.cursor() as cur: execute_values(cur, insert_sql, batch_data) t2 = time.perf_counter() logger.info(f"Inserted {len(batch_data)} records in {t2 - t1} seconds") except Exception as ex: logger.error(f"Error inserting batch data into {table_name}:") logger.exception(ex) I uninstalled psycopg2 and installed psycopg 3.2. and used psycopg3's executemany function like this: import psycopg self.engine = create_engine(f'postgresql+psycopg://postgres:password@localhost/postgres', pool_size=DB_POOL_SIZE,max_overflow=20) def insert_data_todb(self, table_name, batch_data): try: t1 = time.perf_counter() placeholders = ', '.join(['%s'] * len(batch_data[0])) insert_sql = f"""INSERT INTO {table_name} ({self._market_snapshot_columns_str}) VALUES ({placeholders});""" # stored variable with self.engine.connect() as conn: with conn.cursor() as cur: cur.executemany(insert_sql, batch_data) # Pass the batch data directly t2 = time.perf_counter() logger.info(f"Inserted {len(batch_data)} records in {t2 - t1} seconds") except Exception as ex: logger.error(f"Error inserting batch data into {table_name}:") logger.exception(ex) My psycopg3 code is way slower! it takes 8-20 seconds to insert the same batches. | These examples aren't equivalent and it's not about the psycopg version: Your cur.executemany() in the second example runs one insert per row. The execute_values() in the first example can construct inserts with longer values lists, which is typically more effective. These both lose to a very simple batch insert that uses copy instead of insert. Quoting the result summary from an example benchmark: Function Time (seconds) Memory (MB) insert_one_by_one() 128.8 0.08203125 insert_executemany() 124.7 2.765625 insert_executemany_iterator() 129.3 0.0 insert_execute_batch() 3.917 2.50390625 insert_execute_batch_iterator(page_size=1) 130.2 0.0 insert_execute_batch_iterator(page_size=100) 4.333 0.0 insert_execute_batch_iterator(page_size=1000) 2.537 0.2265625 insert_execute_batch_iterator(page_size=10000) 2.585 25.4453125 insert_execute_values() 3.666 4.50390625 insert_execute_values_iterator(page_size=1) 127.4 0.0 insert_execute_values_iterator(page_size=100) 3.677 0.0 insert_execute_values_iterator(page_size=1000) 1.468 0.0 insert_execute_values_iterator(page_size=10000) 1.503 2.25 copy_stringio() 0.6274 99.109375 copy_string_iterator(size=1024) 0.4536 0.0 copy_string_iterator(size=8192) 0.4596 0.0 copy_string_iterator(size=16384) 0.4649 0.0 copy_string_iterator(size=65536) 0.6171 0.0 | 1 | 3 |
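Since the benchmark quoted in the answer above puts COPY well ahead of every insert variant, here is a hedged sketch of a COPY-based batch writer using psycopg 3's copy API. The connection string and column list are placeholders and must be adapted to the real schema.

```python
import psycopg

# Placeholder connection string and column list -- adapt to your database and table.
DSN = "postgresql://postgres:password@localhost/postgres"
COLUMNS = ["col_a", "col_b", "col_c"]

def copy_batch(table_name: str, batch_data: list[tuple]) -> None:
    cols = ", ".join(COLUMNS)
    with psycopg.connect(DSN) as conn:
        with conn.cursor() as cur:
            with cur.copy(f"COPY {table_name} ({cols}) FROM STDIN") as copy:
                for row in batch_data:
                    copy.write_row(row)
        # exiting the connection context commits on success
```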
79,383,692 | 2025-1-24 | https://stackoverflow.com/questions/79383692/how-to-get-start-indices-of-regions-of-empty-intervals | I have sorted start indices (included) and end indices (excluded) of intervals (obtained by using seachsorted), for instance: import numpy as np # Both arrays are of same size, and sorted. # Size of arrays is number of intervals. # Intervals do not overlap. # interval indices: 0 1 2 3 4 5 interval_start_idxs = np.array([0, 3, 3, 3, 6, 7]) interval_end_excl_idxs = np.array([2, 4, 4, 4, 7, 9]) An empty interval is identified when interval_start_idxs[interval_idx] == interval_end_excl_idxs[interval_idx]-1 I would like to identify the starts and ends of each region where intervals are empty. A region is made with one or several intervals sharing the same start indices and end excluded indices. With previous data, expected result would then be: empty_interval_starts = [1, 4] # start is included empty_intervals_ends_excl = [4, 5] # end is excluded This result is to be understood as: intervals from index 1 to 3, these intervals are a same region of empty intervals and interval at index 4 is a separate region on its own | import numpy as np interval_start_idxs = np.array([0, 3, 3, 3, 6, 7]) interval_end_excl_idxs = np.array([2, 4, 4, 4, 7, 9]) is_region_start = np.r_[True, np.diff(interval_start_idxs) != 0] is_region_end = np.roll(is_region_start, -1) is_empty = (interval_start_idxs == interval_end_excl_idxs - 1) empty_interval_starts = np.nonzero(is_region_start & is_empty)[0] empty_interval_ends_excl = np.nonzero(is_region_end & is_empty)[0] + 1 Explanation: is_region_start marks the starts of all potential regions, i.e. indices where the current index differs from its predecessor the index of the end of a potential region is right before the start of a new region, which is why we roll back all markers in is_region_start by one to get is_region_end; the rollover in the roll-back from index 0 to index -1 works in our favor here: the marker, previously at index 0, which is always True, used to mark the start of the first potential region in is_region_start and now marks the end of the last potential region in is_region_end is_empty marks all indices that are actually empty, according to your definition empty_interval_starts is the combination of two criteria: start of a potential region and actually being empty (since np.nonzero() returns tuples, we need to extract the first element, β¦[0], to get to the actual array of indices) empty_interval_ends_excl, likewise, is the combination of two criteria: end of a potential region and actually being empty; however, since empty_interval_ends_excl should be exclusive, we need to add 1 to get the final result At present, the results (empty_interval_starts and empty_interval_ends_excl) are Numpy arrays. If you prefer them as lists, as written in the question, you might want to convert them with empty_interval_starts.tolist() and empty_interval_ends_excl.tolist(), respectively. | 1 | 2 |
79,384,474 | 2025-1-24 | https://stackoverflow.com/questions/79384474/polars-get-column-value-at-another-columns-min-max-value | Given the following polars dataframe: pl.DataFrame({'A': ['a0', 'a0', 'a1', 'a1'], 'B': ['b1', 'b2', 'b1', 'b2'], 'x': [0, 10, 5, 1]}) shape: (4, 3) βββββββ¬ββββββ¬ββββββ β A β B β x β β --- β --- β --- β β str β str β i64 β βββββββͺββββββͺββββββ‘ β a0 β b1 β 0 β β a0 β b2 β 10 β β a1 β b1 β 5 β β a1 β b2 β 1 β βββββββ΄ββββββ΄ββββββ I want to add a column y which groups by A and selects the value from B with the maximum corresponding x. The following dataframe should be the result: βββββββ¬ββββββ¬ββββββ¬ββββββ β A β B β x β y β β --- β --- β --- β --- β β str β str β i64 β str β βββββββͺββββββͺββββββͺββββββ‘ β a0 β b1 β 0 β b2 β β a0 β b2 β 10 β b2 β β a1 β b1 β 5 β b1 β β a1 β b2 β 1 β b1 β βββββββ΄ββββββ΄ββββββ΄ββββββ I've tried various versions of df.with_columns(y=pl.col('B').?.over('A')) without any luck. | You can use pl.Expr.get and pl.Expr.arg_max to obtain the value of B with maximum corresponding value of x. This can be combined with the window function pl.Expr.over to perform the operation separately for each group defined by A. df.with_columns( pl.col("B").get(pl.col("x").arg_max()).over("A").alias("y") ) shape: (4, 4) βββββββ¬ββββββ¬ββββββ¬ββββββ β A β B β x β y β β --- β --- β --- β --- β β str β str β i64 β str β βββββββͺββββββͺββββββͺββββββ‘ β a0 β b1 β 0 β b2 β β a0 β b2 β 10 β b2 β β a1 β b1 β 5 β b1 β β a1 β b2 β 1 β b1 β βββββββ΄ββββββ΄ββββββ΄ββββββ | 4 | 6 |
79,383,889 | 2025-1-24 | https://stackoverflow.com/questions/79383889/summing-columns-of-pandas-dataframe-in-a-systematic-way | I have a pandas dataframe which looks like this: 1_2 1_3 1_4 2_3 2_4 3_4 1 5 2 8 2 2 4 3 4 5 8 5 8 8 8 9 3 3 4 3 4 4 8 3 8 0 7 4 2 2 where the columns are the 4C2 combinations of 1,2,3,4. And I would like to generate 4 new columns f_1, f_2, f_3, f_4 where the values of the columns are defined to be df['f_1'] = df['1_2']+df['1_3']+df['1_4'] df['f_2'] = df['1_2']+df['2_3']+df['2_4'] df['f_3'] = df['1_3']+df['2_3']+df['3_4'] df['f_4'] = df['1_4']+df['2_4']+df['3_4'] In other words, the column f_i are defined to be the sum of columns i_j and k_i. So I can brute force my way in this case. However, my original dataframe is a lot bigger and there are 20C2 = 190 columns instead and hence a brute force method wouldn't work. So the desired outcome looks like 1_2 1_3 1_4 2_3 2_4 3_4 f_1 f_2 f_3 f_4 1 5 2 8 2 2 8 11 15 6 4 3 4 5 8 5 11 17 13 17 8 8 8 9 3 3 24 20 20 14 4 3 4 4 8 3 11 16 10 15 8 0 7 4 2 2 15 14 6 11 Thank you so much. | Build a dictionary of the columns with str.split+explode+Index.groupby, and process them in a loop: s = df.columns.to_series().str.split('_').explode() d = s.index.groupby(s) for k, v in d.items(): df[f'f_{k}'] = df[v].sum(axis=1) You could also use eval instead of the loop once you have d: query = '\n'.join(f'f_{k} = {"+".join(map("`{}`".format, v))}' for k,v in d.items()) out = df.eval(query) Output: 1_2 1_3 1_4 2_3 2_4 3_4 f_1 f_2 f_3 f_4 0 1 5 2 8 2 2 8 11 15 6 1 4 3 4 5 8 5 11 17 13 17 2 8 8 8 9 3 3 24 20 20 14 3 4 3 4 4 8 3 11 16 10 15 4 8 0 7 4 2 2 15 14 6 11 Intermediate d: {'1': ['1_2', '1_3', '1_4'], '2': ['1_2', '2_3', '2_4'], '3': ['1_3', '2_3', '3_4'], '4': ['1_4', '2_4', '3_4'], } Pure python approach to build d: d = {} for c in df: for k in c.split('_'): d.setdefault(k, []).append(c) You could also imagine a pure pandas approach based on reshaping with melt+pivot_table, but this is most likely much less efficient: out = df.join(df .set_axis(df.columns.str.split('_'), axis=1) .melt(ignore_index=False).explode('variable') .reset_index() .pivot_table(index='index', columns='variable', values='value', aggfunc='sum') .add_prefix('f_') ) | 4 | 3 |
79,383,355 | 2025-1-24 | https://stackoverflow.com/questions/79383355/custom-comparison-for-pandas-series-and-dictionary | I have a series with four categories A,B,C,D and their current value s1 = pd.Series({"A": 0.2, "B": 0.3, "C": 0.3, "D": 0.9}) And a threshold against which I need to compare the categories, threshold = {"custom": {"A, B": 0.6, "C": 0.3}, "default": 0.4} But the threshold has two categories summed together: A & B And it has a "default" threshold to apply to each category that hasn't been specifically named. I can't quite work out how to do this in a general way I can solve two separate sub problems, but not the whole problem. I can solve the problem with single categories and a default threshold. Or, I can solve combined categories, but not apply the default threshold. What I need to evaluate is: s1[A]+s1[B] < threshold["custom"]["A,B"] :: 0.2 + 0.3 < 0.6 s1[C] < threshold["custom"]["C"] :: 0.3 < 0.3 s1[D] < threshold["default"] :: 0.9 < 0.4 And return this Series: # A,B True # C False # D False Here is what I've got for the subproblems 1. To apply the default threshold, I reindex and fillna with the default value: aligned_threshold = ( pd.Series(threshold.get("custom")) .reindex(s1.index) .fillna(threshold.get("default")) ) # A 0.4 # B 0.4 # C 0.3 # D 0.4 then I can compare: s1 < aligned_threshold # A True # B True # C False # D False # dtype: bool 2. To combine categories threshold_s = pd.Series(threshold.get("custom")) s1_combined = pd.Series(index=threshold_s.index) for category, threshold in threshold["custom"].items(): s1_combined[category] = sum([s1.get(k, 0) for k in category.split(", ")]) # now s1_combined is: # A,B 0.6 # C 0.3 s1_combined < threshold_s # A,B True # C False # dtype: bool but I've lost category D To recap, what I need is: s1[A]+s1[B] s1[C] s1[D] So that I can compare thus: s1 < threshold And return this Series: # A,B True # C False # D False | You could build a mapper to rename, then groupby.sum and compare to the reference thresholds: mapper = {x: k for k in threshold['custom'] for x in k.split(', ')} # {'A': 'A, B', 'B': 'A, B', 'C': 'C'} s2 = (s1.rename(mapper) .groupby(level=0).sum() ) out = s2.lt(s2.index.to_series() .map(threshold['custom']) .fillna(threshold['default']) ) Alternative for the last step if you don't have NaNs: out = s2.lt(s2.index.map(threshold['custom']).values, fill_value=threshold['default']) Output: A, B True C False D False dtype: bool | 3 | 3 |
79,382,816 | 2025-1-23 | https://stackoverflow.com/questions/79382816/how-to-multiply-2x3x3x3-matrix-by-2x3-matrix-to-get-2x3-matrix | I am trying to compute some derivatives of neural network outputs. To be precise I need the jacobian matrix of the function that is represented by the neural network and the second derivative of the function with respect to its inputs. I want to multiply the derivative of the jacobian with a vector of same size as the input for every sample. I got the result tht I need with this implementation: import torch x_1 = torch.tensor([[1.,1.,1.]], requires_grad=True) x_2 = torch.tensor([[2.,2.,2.]], requires_grad=True) # Input to the network with dim 2x3 --> 2 Samples 3 Feature x = torch.cat((x_1,x_2),dim=0) def calculation(x): c = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]).float() return (x@c)**2 c = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]).float() #output of the network with dimension 2x3 --> 3 outputs per Sample y = calculation(x) #Calculation of my jacobian with dimension 2x3x3 (one for each sample) g = torch.autograd.functional.jacobian(calculation,(x),create_graph=True) jacobian_summarized = torch.sum(g, dim=0).transpose(0,1) #Calculation of my second order derivative 2x3x3x3 (On Tensor for each Sample) gg = torch.autograd.functional.jacobian(lambda x: torch.sum(torch.autograd.functional.jacobian(calculation, (x), create_graph=True), dim=0).transpose(0,1), (x), create_graph=True) second_order_derivative = torch.sum(gg, dim=0).transpose(1,2).transpose(0,1) print('x:', x) print('c:',c) print('y:',y) print('First Order Derivative:',jacobian_summarized) print('Second Order Derivative:',second_order_derivative) # Multiplication with for loop result = torch.empty(0) for i in range(y.shape[0]): result_row = torch.empty(0) for ii in range(y.shape[1]): result_value = (x[i].unsqueeze(0))@second_order_derivative[i][ii]@(x[i].unsqueeze(0).T) result_row = torch.cat((result_row, result_value), dim=1) result = torch.cat((result, result_row)) print(result) I would like to know if there is a way to get the same result of the multiplication without having to use 2 for loops but rather some simple multiplication of the matrices | It seems like you're looking for einsum. Should be something like: result = torch.einsum('bi,bijk,bk->bj', x, second_order_derivative, x) | 2 | 2 |
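A small sketch clarifying what the einsum expression in the answer above computes, checked against an explicit loop on random data with the shapes from the question. Note that the right index string depends on how the axes of the derivative tensor end up ordered after the transposes; this sketch only shows the contraction that `'bi,bijk,bk->bj'` performs.

```python
import torch

bs, n = 2, 3
x = torch.randn(bs, n)
d = torch.randn(bs, n, n, n)  # stand-in for the (2, 3, 3, 3) second-derivative tensor

res = torch.einsum('bi,bijk,bk->bj', x, d, x)

# Explicit loop: for each batch b and output j, compute x[b] @ d[b, :, j, :] @ x[b].
expected = torch.stack([
    torch.stack([x[b] @ d[b, :, j, :] @ x[b] for j in range(n)])
    for b in range(bs)
])
assert torch.allclose(res, expected)
print(res.shape)  # torch.Size([2, 3])
```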
79,382,572 | 2025-1-23 | https://stackoverflow.com/questions/79382572/condensing-a-python-method-that-does-a-different-comparison-depending-on-the-ope | I am trying to write a method that evaluates a statement, but the operator (>, <, =) is sent by the user. I am wondering if there is a easy way to write a more concise method. The simplified version of the code is: def comparsion(val1: int, val2: int, operator: str): if operator == ">": if val1 > val2: return True elif operator == "=": if val1 == val2: return True elif operator == "<": if val1 < val2: return True The only thing that changes in each case is the operator, I am wondering if there is a more efficient way to write this. | Use the operator module, which provides named functions for all the standard operators. Then you can use a dictionary to map from the operator string to the corresponding function. from operator import lt, gt, eq ops = {'<': lt, '>': gt, '=': eq} def comparsion(val1: int, val2: int, operator: str): return ops[operator](val1, val2) | 1 | 8 |
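A slightly extended sketch of the accepted answer above, covering the remaining comparison operators and raising a clear error for an unsupported symbol:

```python
from operator import lt, le, gt, ge, eq, ne

OPS = {"<": lt, "<=": le, ">": gt, ">=": ge, "=": eq, "!=": ne}

def comparison(val1: int, val2: int, op: str) -> bool:
    try:
        return OPS[op](val1, val2)
    except KeyError:
        raise ValueError(f"unsupported operator: {op!r}") from None

print(comparison(3, 5, "<"))   # True
print(comparison(3, 5, "="))   # False
```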
79,381,028 | 2025-1-23 | https://stackoverflow.com/questions/79381028/why-cant-subfigures-be-nested-in-gridspecs-to-keep-their-suptitles-separate-in | I would expect this code: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 6)) fig_gridspec = fig.add_gridspec(1, 1) top_subfig = fig.add_subfigure(fig_gridspec[(0, 0)]) top_subfig.suptitle("I am the top subfig") top_subfig_gridspec = top_subfig.add_gridspec(1, 1, top=.7) nested_subfig = top_subfig.add_subfigure(top_subfig_gridspec[(0, 0)]) nested_subfig.suptitle("I am the nested subfig") plt.show() to generate two suptitles on different lines. Instead, they are overlapping. Can anyone explain why? Also, is there a way to achieve such separation with nested subfigures? Edit: To be clear, I mean, without changing the dimensions of the grid in the gridspec. I know I can do this, and it might be what I wind up doing: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 6)) fig_gridspec = fig.add_gridspec(1, 1) top_subfig = fig.add_subfigure(fig_gridspec[(0, 0)]) top_subfig.suptitle("I am the top subfig") top_subfig_gridspec = top_subfig.add_gridspec(2, 1, height_ratios=[.1, 1]) nested_subfig = top_subfig.add_subfigure(top_subfig_gridspec[(1, 0)]) nested_subfig.suptitle("I am the nested subfig") plt.show() I just don't understand why my first code block doesn't work, and why there seems to be no way to adjust a nested subfigures's position within another subfigure. Second edit: I don't have a minimum reproducible example for this, but relatedly, it also doesn't seem to be than hspace is doing anything for a gridspec that contains subfigures that also contains subfigures. I am starting to conclude that gridspec keyword arguments simply do not work when what the gridspec contains is subfigures, when the gridspec is associated with a subfigure, or both. I don't yet know the boundaries of the phenomenon. Yet another edit: I should have said to begin with that I find in my larger context, not discussed here, that constrained_layout causes a bunch of problems even as it solves some others, so I can't address my issue that way. I'm really wondering about why it is I can't just get a subfigure to respect the gridspec it's in, and if the answer is "this is a bug in matplotlib" and you should not use subfigures for your use case, I'd accept it. Or if the answer is, "here is the fully justified reason for subfigures to behave this way" and you should not use subfigures for your use case, I'd accept that too. | Answering my own question. Subfigures are not meant to respect Gridspec keyword arguments. Further answering the question of what it is they do respect: they respect arguments passed to the subfigures method. It would be nice if this method also took top, left keywords, etc., but as of now it doesn't. However, this works to get my intended effect: import matplotlib.pyplot as plt fig = plt.figure(figsize=(8, 6)) top_subfig = fig.subfigures(1, 1) top_subfig.suptitle("I am the top subfig") nested_subfigs = top_subfig.subfigures(2, 1, height_ratios=[.1, 1]) nested_subfigs[1].suptitle("I am the nested subfig") plt.show() While very similar to the "add a row in gridspec strategy" that's in my second code block in my question, I find this solution more satisfying, a) because I think it's the intended usage, and b) because understanding that if I make my subfigures this way, I can get the subfigures to respect some of the arguments I'd otherwise pass to a gridspec is going to solve other layout problems. | 2 | 1 |
79,381,804 | 2025-1-23 | https://stackoverflow.com/questions/79381804/identify-broken-xml-files-inside-a-zipped-archive | I am trying to read a large number of zipped files (.zip or .docx) in a loop, each again containing a large number of embedded XML (.xml) files inside them. However some of the embedded XML files are broken/corrupted. I can create a parser which ignores the errors and loads the XML contents. However, I want to know which XML file is corrupted and which particular element inside it is broken/failing. I have tried the below code: import re import os import zipfile from lxml import etree for file in os.listdir(filepath): if file.endswith('.zip') or file.endswith('.docx'): ext = os.path.splitext(file)[1] newfile = f"{os.path.splitext(os.path.basename(file))[0]}_new{ext}" zippedin = zipfile.ZipFile(os.path.join(filepath, file), 'r') recovering_parser = etree.XMLParser(recover=True) matched_items = [] for item in zippedin.infolist(): xmltree = etree.fromstring(zippedin.read(item.filename), parser=recovering_parser) for node in xmltree.iter(tag=etree.Element): if re.search('XXXXXXX', str(node)) or re.search('YYYYYYYY', str(node.attrib)): matched_items.append(item) with zipfile.ZipFile(os.path.join(filepath, newfile), 'w') as zippedout: for element in matched_items: zippedout.writestr(element, zippedin.read(element.filename)) zippedin.close() This code snippet works perfectly fine and bypasses the broken XML files inside the zipped archives. However, I require to identify which files are failing and also the individual components. If I remove the recovering_parser portion, I receive error messages of the following sort: lxml.etree.XMLSyntaxError: xmlns: 'ABCDEFGHXXXX' is not a valid URI, line 5, column 45 It does not show which XML is corrupted. Can someone help me identify the broken XMLs and a proper way of exception handling and error scraping/extracting the faulty component name. | Using recovering_parser = etree.XMLParser(recover=True) is preventing you from being able to catch which files are broken. In order to catch those errors, you can use a try/except block. import re import os import zipfile from lxml import etree try: # xml parsing code here except Exception as e: # Debugging code here print(file) # print the file name | 1 | 2 |
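Fleshing out the try/except pseudocode from the answer above, here is a self-contained sketch that parses each XML member strictly and records the member name together with the parser's message (which already carries the failing line and column). The file path and the `.xml` filter are illustrative assumptions.

```python
import zipfile
from lxml import etree

def find_broken_members(zip_path: str) -> list[tuple[str, str]]:
    """Return (member name, parser message) for every XML member that fails strict parsing."""
    broken = []
    with zipfile.ZipFile(zip_path, "r") as zf:
        for item in zf.infolist():
            if not item.filename.endswith(".xml"):
                continue
            try:
                etree.fromstring(zf.read(item.filename))  # strict parse, raises on bad XML
            except etree.XMLSyntaxError as e:
                # str(e) already includes the failing location, e.g.
                # "xmlns: '...' is not a valid URI, line 5, column 45"
                broken.append((item.filename, str(e)))
    return broken

for name, msg in find_broken_members("example.docx"):  # placeholder path
    print(f"{name}: {msg}")
```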
79,380,546 | 2025-1-23 | https://stackoverflow.com/questions/79380546/zero-pad-a-numpy-n-dimensional-array | Not a duplicate of Zero pad numpy array (that I posted 9 years ago, ouch!) because here it's about n-dimensional arrays. How to zero pad a numpy n-dimensional array, if possible in one line? Example: a = np.array([1, 2, 3]) zeropad(a, 8) # [1, 2, 3, 0, 0, 0, 0, 0] b = np.array([[1, 2], [3, 4], [5, 6]]) zeropad(b, (5, 2)) # [[1, 2], [3, 4], [5, 6], [0, 0], [0, 0]] When using b.resize((5, 2)), here it works, but in some real cases, it gives: ValueError: cannot resize this array: it does not own its data How to zero pad numpy nd arrays no matter if it owns its data or not? | Instead of using pad, since you want to pad after, you could create an array of zeros and assign the existing values: out = np.zeros(pad, dtype=arr.dtype) out[np.indices(arr.shape, sparse=True)] = arr Or, if you only want to pad the first dimension, with resize. Just ensure that the array owns its data with copy: out = arr.copy() out.resize(pad) Output: array([[1, 2], [3, 4], [5, 6], [0, 0], [0, 0]]) I really want a one-liner resize: IMO there is no good reason for that, but you could always use an assignment expression (python β₯ 3.8): (out:=arr.copy()).resize(pad) Output for a different pad arr = np.array([[1, 2], [3, 4], [5, 6]]) pad = (5, 3) # output zeros + assignment array([[1, 2, 0], [3, 4, 0], [5, 6, 0], [0, 0, 0], [0, 0, 0]]) # output resize array([[1, 2, 3], [4, 5, 6], [0, 0, 0], [0, 0, 0], [0, 0, 0]]) | 2 | 4 |
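The approaches in the answer above can be wrapped into the `zeropad` helper named in the question. A small sketch using slice-based assignment so it works for any number of dimensions and regardless of whether the array owns its data:

```python
import numpy as np

def zeropad(arr: np.ndarray, shape) -> np.ndarray:
    """Return a zero array of `shape` with `arr` copied into its leading corner."""
    out = np.zeros(shape, dtype=arr.dtype)
    out[tuple(slice(0, s) for s in arr.shape)] = arr
    return out

print(zeropad(np.array([1, 2, 3]), 8))
# [1 2 3 0 0 0 0 0]
print(zeropad(np.array([[1, 2], [3, 4], [5, 6]]), (5, 2)))
# rows: [1 2], [3 4], [5 6], [0 0], [0 0]
```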
79,379,447 | 2025-1-22 | https://stackoverflow.com/questions/79379447/getting-form-results-using-python-request | Admittedly, this is my first time using requests. The form I'm attempting to use is here: https://trade.cbp.dhs.gov/ace/liquidation/LBNotice/ Here is my code: import requests headers = { 'Accept': 'application/json, text/javascript, */*; q=0.01', 'Content-Type': 'application/json; charset=UTF-8', 'User-Agent': 'Mozilla/5.0', 'X-Requested-With': 'XMLHttpRequest' } payload = { "dtPageVars": { "draw":1, "columns": [ {"data":"postedDate","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"eventDate","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"voidedDate","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"event","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"basis","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"action","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"entryNumber","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"portOfEntry","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"entryDate","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"entryType","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}}, {"data":"teamNumber","name":"","searchable":"true","orderable":"false","search":{"value":"","regex":"false"}} ], "order":[], "start":0, "length":100, "search": { "value":"", "regex":"false" } }, "searchFields": { "portOfEntry":"0101", "entryType":"01" } } url = 'https://trade.cbp.dhs.gov/ace/liquidation/LBNotice/' searchUrl = 'https://trade.cbp.dhs.gov/ace/liquidation/LBNotice/search' response = requests.post(searchUrl, headers=headers, data=payload) print(response.status_code) I keep getting 500 status code but I am not sure why. Could it be the headers I'm using? I've tried many different combinations of headers without luck. Is it possible to use requests for this form or am I better off finding a different approach? Any help would truly be appreciated. Thank you. | Cause In headers you are telling the server you are sending it a JSON data but in request body, you are sending Form data. There is inconsistency. Solution: send json data response = requests.post(url, json=payload) Remove headers too. Explanation In requests library, if you pass a dictionary to data parameter, your dictionary of data will automatically be form-encoded when the request is made. But based on your header, it seems you want to send a JSON data: 'Content-Type': 'application/json; charset=UTF-8', headers = { 'Accept': 'application/json, text/javascript, */*; q=0.01', 'Content-Type': 'application/json; charset=UTF-8', 'User-Agent': 'Mozilla/5.0', 'X-Requested-With': 'XMLHttpRequest' } So, basically In headers you are telling the server you are sending it a JSON data but in request body, you are sending Form data. If you don't want to send a form data (if you want to send JSON or just string data), you can send a string data directly. If you pass in a string instead of a dict, that data will be posted directly. 
requests.post(url, data=json.dumps(payload)) json.dumps(payload) essentially turns your dictionary into (JSON) string. But it doesn't change/set headers. You need to set headers manually. There is an easier way: If you need a header set to application/json and you donβt want to encode the dict yourself, you can pass the dcit to json parameter of requests.post method instead of data parameter. response = requests.post(url, json=payload) It will both update/set your headers to application/json and stringifies your dictionary. Tip If you do not need to specify custom headers, you don't need to add headers to the request, i.e., you can go with default options. | 1 | 1 |
79,379,633 | 2025-1-23 | https://stackoverflow.com/questions/79379633/why-is-the-accuracy-of-scipy-integrate-solve-ivp-rk45-extremely-poor-compared | What I want to solve I solved a system of ordinary differential equations using scipy.integrate.solve_ivp and compared the results with my homemade 4th-order Runge-Kutta (RK4) implementation. Surprisingly, the accuracy of solve_ivp (using RK45) is significantly worse. Could someone help me understand why this might be happening? Problem Description I simulated uniform circular motion in a 2D plane with an initial velocity of (10, 0) and a centripetal acceleration of magnitude 10. Theoretically, this system should describe a circle with a radius of 10. I plotted the results from scipy.integrate.solve_ivp (method="RK45") in blue and those from my homemade RK4 implementation in red. The resulting plot is shown below: Since RK4 has 4th-order accuracy and RK45 has 5th-order accuracy, I expected RK45 to perform better and follow the circle more closely. However, the results from RK4 are far superior. Relevant Source Code from scipy.integrate import solve_ivp import math import matplotlib.pyplot as plt import numpy as np def get_a(t, r, v): # Accelerates perpendicular to the direction of motion with magnitude 10 x, y = 0, 1 direction_of_motion = math.atan2(v[y], v[x]) direction_of_acceleration = direction_of_motion + math.pi / 2 acceleration_magnitude = 10 a_x = acceleration_magnitude * math.cos(direction_of_acceleration) a_y = acceleration_magnitude * math.sin(direction_of_acceleration) return np.array([a_x, a_y]) def get_v_and_a(t, r_and_v): # r_and_v == np.array([r_x, r_y, v_x, v_y]) # returns np.array([v_x, v_y, a_x, a_y]) r = r_and_v[0:2] v = r_and_v[2:4] a = get_a(t, r, v) return np.concatenate([v, a]) # Simulation settings initial_position = [0, 0] initial_velocity = [10, 0] end_time = 300 time_step = 0.01 # scipy.integrate.solve_ivp simulation (RK45) initial_values = np.array(initial_position + initial_velocity) time_points = np.arange(0, end_time + time_step, time_step) result = solve_ivp( fun=get_v_and_a, t_span=[0, end_time], y0=initial_values, method="RK45", t_eval=time_points ) scipy_ts = result.t scipy_xs = result.y[0] scipy_ys = result.y[1] # Homemade RK4 simulation my_ts, my_xs, my_ys = [], [], [] current_time = 0.0 step_count = 0 r = np.array(initial_position, dtype="float64") v = np.array(initial_velocity, dtype="float64") delta_t = time_step while current_time <= end_time: my_ts.append(current_time) my_xs.append(r[0]) my_ys.append(r[1]) t = current_time + delta_t r_and_v = np.concatenate([r, v]) k0 = get_v_and_a(t - delta_t, r_and_v ) k1 = get_v_and_a(t - 1/2 * delta_t, r_and_v + 1/2 * k0 * delta_t) k2 = get_v_and_a(t - 1/2 * delta_t, r_and_v + 1/2 * k1 * delta_t) k3 = get_v_and_a(t , r_and_v + k2 * delta_t) step_count += 1 current_time = step_count * delta_t r_and_v += (1/6 * k0 + 1/3 * k1 + 1/3 * k2 + 1/6 * k3) * delta_t r = r_and_v[0:2] v = r_and_v[2:4] # Plot results # 1. set up size = max([abs(x) for x in scipy_xs] + [abs(y) for y in scipy_ys])*2.5 fig, ax = plt.subplots(figsize=(10, 10)) ax.set_xlim(-size/2, size/2) ax.set_ylim(-size/2, size/2) ax.set_xlabel("X") ax.set_ylabel("Y") ax.set_aspect("equal", adjustable="box") # 2. plot ax.plot(scipy_xs, scipy_ys, lw=0.1, color="blue", label="scipy.integrate.solve_ivp RK45") ax.plot(my_xs, my_ys, lw=0.1, color="red", label="homemade code RK4") # 3. 
time points for i, t in enumerate(scipy_ts): if t % 5 == 0: ax.text( scipy_xs[i], scipy_ys[i], str(round(t)), fontsize=8, ha = "center", va = "center", color = "blue" ) if t % 1 == 0 and (t <= 5 or 295 <= t): ax.text( my_xs[i], my_ys[i], str(round(t)), fontsize=8, ha = "center", va = "center", color = "red" ) # 4. show leg = ax.legend() leg.get_lines()[0].set_linewidth(1) leg.get_lines()[1].set_linewidth(1) ax.grid(True) plt.show() About the Homemade Code What I Tried I have read the official documentation for solve_ivp but couldn't figure out what might be causing this discrepancy. | I get substantially better results if I lower the tolerances on solve_ivp(). e.g. result = solve_ivp( fun=get_v_and_a, t_span=[0, end_time], y0=initial_values, method="RK45", rtol=1e-5, atol=1e-6, t_eval=time_points ) The default value of rtol is 1e-3, and changing the value to 1e-5 makes the simulation more accurate. Smaller values of rtol make solve_ivp more accurate, at the cost that it takes more evaluations of your function. Here's what rtol=1e-5 looks like: Without this change, the homemade integrator makes 120,000 calls to your function, but SciPy makes only 1,226 calls. The homemade integrator is simulating your function at a much smaller timestep, so it ends up being more accurate. Solving this problem with an rtol of 1e-5 requires 3,512 calls, so you can get accuracy comparable to the homemade solution in less time. | 1 | 4 |
79,378,514 | 2025-1-22 | https://stackoverflow.com/questions/79378514/force-altair-chart-to-display-years | Using a data frame of dates and values starting from 1 Jan 2022: import datetime as dt import altair as alt import polars as pl import numpy as np alt.renderers.enable("browser") dates = pl.date_range(dt.date(2022, 1, 1), dt.date(2025, 1, 22), "1d", eager = True) values = np.random.uniform(size = len(dates)) df = pl.DataFrame({"dates": dates, "values": values}) alt.Chart(df).mark_point().encode(alt.X("dates:T"), alt.Y("values:Q")).show() But if I start the data frame from 2020 and filter it for dates > 1 Jan 2022: dates_b = pl.date_range(dt.date(2020, 1, 1), dt.date(2025, 1, 22), "1d", eager = True) values_b = np.random.uniform(size = len(dates_b)) df_b = pl.DataFrame({"dates": dates, "values": values}) alt.Chart(df_b.filter(pl.col("dates") > dt.date(2022, 1, 1))).mark_point().encode(alt.X("dates:T"), alt.Y("values:Q")).show() How can I specify that years must be shown? Note that I do get the right result if I filter using >= to include 1 Jan 2022, but that's besides the point. I always need years. | You can use labelExpr to build your own logic for setting tick labels. For example, this gives the year if the month is January and the month otherwise. dates_b = pl.date_range(dt.date(2020, 1, 1), dt.date(2025, 1, 22), "1d", eager=True) values_b = np.random.uniform(size=len(dates_b)) df_b = pl.DataFrame({"dates": dates, "values": values}) alt.Chart(df_b.filter(pl.col("dates") > dt.date(2022, 1, 1))).mark_point().encode( alt.X("dates:T").axis( labelExpr="timeFormat(datum.value, '%m') == '01' ? timeFormat(datum.value, '%Y') : timeFormat(datum.value, '%b')", ), alt.Y("values:Q"), ) | 2 | 2 |
79,376,790 | 2025-1-22 | https://stackoverflow.com/questions/79376790/problem-with-return-when-calling-a-python-function-within-matlab | i encountered the problem that when calling a python function within the matlab environment, the return value from python is not recognized in matlab - i always get a py.NoneType as an output. Here my code, it is a code to send E-mails. I can see the print commands as an output in matlab but cannot capture the return values properly (always a py.NoneType in Matlab). When calling the function directly from python everything works fine. What could the problem be ? import smtplib from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email import encoders import os def send_email(gmail_user, gmail_password, to_email, subject, body, attachment_path): """ Send an email with an attachment using Gmail's SMTP server. Parameters: gmail_user (str): Your Gmail email address gmail_password (str): Your Gmail app password to_email (str): Recipient's email address subject (str): Subject of the email body (str): Body text of the email attachment_path (str): Full path to the attachment file """ try: # Create the email email = MIMEMultipart() email['From'] = gmail_user email['To'] = to_email email['Subject'] = subject # Add the email body email.attach(MIMEText(body, 'plain')) # Normalize the file path attachment_path = attachment_path.encode('utf-8').decode('utf-8') attachment_path = os.path.normpath(attachment_path) # attachment_path = attachment_path.replace("\\", "\\\\") print(f"Processed attachment path: {attachment_path}") # Check if the attachment file exists if not os.path.exists(attachment_path): print(f"Error: The file {attachment_path} does not exist.") return 'Name in Email not correct!'; # Open the file in binary mode and encode it with open(attachment_path, 'rb') as attachment_file: attachment = MIMEBase('application', 'octet-stream') # Generic MIME type, could be more specific attachment.set_payload(attachment_file.read()) encoders.encode_base64(attachment) # Extract filename from the path filename = os.path.basename(attachment_path) print(f"Attachment found: {filename}") # Add header for the attachment attachment.add_header('Content-Disposition', 'attachment', filename=filename) # Attach the file to the email email.attach(attachment) print(f"Attachment '{filename}' added successfully.") return 'yes' # Send the email using Gmail's SMTP server with smtplib.SMTP('smtp.gmail.com', 587) as server: server.starttls() # Secure the connection server.login(gmail_user, gmail_password) server.sendmail(gmail_user, to_email, email.as_string()) print("Email sent successfully!") return 'Email sent!' except Exception as e: print(f"Failed to send email: {e}") return 'Email failed fatal!' return "Unknown error occurred." Thanks in advance I tried a "mini example" with a simple python function (just adding two numbers) and it worked, so I have no idea why the more advanced code does not work. No errors are showing up in Matlab and also no details from the python code. My expectation was that when calling the function from Matlab like output=py.send_email.send_email(Arguments) The variable "output" holds the string 'Email sent!', but instead is always py.NoneType. I built in some print statement just to check if Matlab prints it properly, and it does ! The email is being sent as it should. I just want an output variable for some extra coding in Matlab. 
I also get no return when i use the wrong path where I should get 'Name in Email not correct!', and this is just at the beginning. | I solved the problem by a simple restart of matlab, I was not aware of this apparently simple "issue". So when changing the python code one has to restart matlab to execute the updated python code. Maybe an other solution would be to somehow terminate and reboot the python environment in matlab. | 2 | 3 |
79,377,681 | 2025-1-22 | https://stackoverflow.com/questions/79377681/is-it-a-bug-to-use-subprocess-run-in-a-multithreaded-script | I have a long build script which is a mix of "real" python code and lengthy subprocess.run() calls to things like debootstrap, wget, apt-get or build.sh. Everything is parallelized. One thread does debootstrap followed by apt-get, while another does build.sh, then both are joined and we start more threads for combining the results and so on. Each thread logs to a dedicated file. While looking for a generic way to log the output of subprocess.run() started by one of these threads, I came upon the following answer: https://stackoverflow.com/a/31868783/876832 Is it a bug to call subprocess.run() in a python script with more than one thread? | Is it a bug to call subprocess.run() in a python script with more than one thread? No. I do that all the time. I think the answer you linked is misguided. After all, you don't even know whether subprocess.Popen() and its facades like subprocess.run() use the fork syscall (especially on Windows, they certainly don't, since that call doesn't exist there). Sure, if you were to manually os.fork(), I'd exercise caution in a threaded program (and indeed, as the docs say, since 3.12, if Python detects you're doing that, it'll warn you). Under the hood, though, on POSIX systems, subprocess.Popen() uses os.posix_spawn when possible, and otherwise a very careful fork_exec implementation. Oh, and as an aside, the question you linked actually uses the Python 2.7-era low-level thread module, these days aptly called _thread, the low-level threading API. That's a different low-level beast than threading.Threads. | 2 | 5 |
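For illustration, a minimal version of the pattern described in that question — several threads, each driving its own subprocess.run() and writing to its own log file; the wget/make commands are just stand-ins for the real build steps:

```python
import subprocess
import threading

def run_step(name, cmd, logfile):
    # each thread owns its command and its log file, so nothing is shared
    with open(logfile, "w") as log:
        result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
    print(f"{name} finished with exit code {result.returncode}")

steps = [
    ("fetch", ["wget", "--version"], "fetch.log"),
    ("build", ["make", "--version"], "build.log"),
]
threads = [threading.Thread(target=run_step, args=step) for step in steps]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each subprocess.run() call blocks only its own thread, which is exactly the behaviour the answer describes as safe.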
79,376,490 | 2025-1-22 | https://stackoverflow.com/questions/79376490/using-regex-in-python-to-parse-a-string-for-variables-names-and-values | I need some help with python regex. My "payload" string look like this "status=0/eStopStatus=2/version=1.0.16/runTime=005320" My code uses the following parse out a list of variables and values: variables = re.findall(r'([\w]+)=', payload) values = re.findall(r"[-+]?(?:\d*\.*\d+)", payload) My variables are parsed correctly as: status eStopStatus version runTime But the double . in my version value 1.0.16 trips up my values and I get 0 2 1.0 .16 005320 I must admit I am battling to understand the syntax of the parameters using regex for this. Any assistance appreciated | This will work perfectly: values = re.findall(r"[-+]?(\d+(?:\.\d+)*)", payload) Here is a link to the pattern and test string: https://regex101.com/r/YbcMUR/1 TEST CODE: import re payload = "status=0/eStopStatus=2/version=1.0.16/runTime=005320" keywords_pattern = r'([\w]+)=' values_pattern = r'[-+]?(\d+(?:\.\d+)*)' variables = re.findall(keywords_pattern, payload) values = re.findall(values_pattern, payload) [print(variable) for variable in variables] [print(value) for value in values] RESULTS: PS C:\..\regex_payload.py status eStopStatus version runTime 0 2 1.0.16 005320 | 1 | 0 |
79,376,634 | 2025-1-22 | https://stackoverflow.com/questions/79376634/how-to-merge-dataframes-over-multiple-columns-and-split-rows | I have two datafames: df1 = pd.DataFrame({ 'from': [0, 2, 8, 26, 35, 46], 'to': [2, 8, 26, 35, 46, 48], 'int': [2, 6, 18, 9, 11, 2]}) df2 = pd.DataFrame({ 'from': [0, 2, 8, 17, 34], 'to': [2, 8, 17, 34, 49], 'int': [2, 6, 9, 17, 15]}) I want to create a new dataframe that looks like this: df = pd.DataFrame({ 'from': [0, 2, 8, 17, 26, 34, 35, 46, 48], 'to': [2, 8, 17, 26, 34, 35, 46, 48, 49], 'int': [2, 6, 9, 9, 8, 1, 11, 2, 1]}) I have tried standard merging scripts but have not been able to split the rows containing higher 'from' and 'to' numbers in either df1 or df2 into smaller ones. How can I merge my dataframes over multiple columns and split rows? | Frirst, combine all unique from and to values from both df1 and df2 to create a set of breakpoints: breakpoints = set(df1['from']).union(df1['to']).union(df2['from']).union(df2['to']) breakpoints = sorted(breakpoints) In the example, this is [0, 2, 8, 17, 26, 34, 35, 46, 48, 49]. Now, create a new dataframe with these from and to values, then compute the intervals: new_df = pd.DataFrame({'from': breakpoints[:-1], 'to': breakpoints[1:]}) new_df['int'] = new_df['to'] - new_df['from'] Result: from to int 0 0 2 2 1 2 8 6 2 8 17 9 3 17 26 9 4 26 34 8 5 34 35 1 6 35 46 11 7 46 48 2 8 48 49 1 | 1 | 2 |
79,376,281 | 2025-1-22 | https://stackoverflow.com/questions/79376281/mod-operator-in-free-pascal-gives-a-different-result-than-expected | The mod operator in Free Pascal does not produce the results I would expect. This can be demonstrated by the program below whose output does not agree with the result of the same calculation in Python (or Google). program test(output); var a, b, c: longint; begin a := -1282397916; b := 2147483647; c := a mod b; writeln (c:16); end. -1282397916 Compare this to the output of the Python script below which gives the result I expected. #!/usr/bin/python a = -1282397916 b = 2147483647 c = a % b print (c) 865085731 This is the same as the result as that obtained by pasting the following text into Google (and when using the mod operator in VAXβPascal). (-1282397916 % 2147483647) The question is: Why does Free Pascal behave differently? And how do I get the same result as that obtained when using the mod operator in Python? | With respect to a nonβnegative integral dividend and positive integral divisor there is no ambiguity. All programming languages do the same. Once you use other values though, programming languages differ. Pascal uses a Euclideanβlike definition of the modulus operator. In ISO standard 7185 (βStandard Pascalβ), page 48, it is defined as follows: A term of the form i mod j shall be an error if j is zero or negative; otherwise, the value of i mod j shall be that value of (i β (k Γ j)) for integral k such that 0 β€ i mod j < j. In other words: Evaluation of a term of the form x mod y is an error if y is less than or equal to zero; otherwise there is an integer k such that x mod y satisfies the following relation: 0 <= x mod y = x β k * y < y. Source: Jensen, Kathleen; Wirth, Niklaus. Pascal β user manual and report (4th revised ed.). p. 168. doi:10.1007/978β1β4612β4450β9. ISBN 978β0β387β97649β5. Thus the result of the mod operator is guaranteed to be non-negative. Unfortunately, as you have already observed, the FreePascal Compiler does not adhere to the ISO standards. The FPC will only return the proper result if {$modeSwitch isoMod+} is set: program moduloConfusion(output); {$modeSwitch isoMod+} type integer = ALUSInt; var dividend, divisor: integer; begin dividend := -1282397916; divisor := 2147483647; writeLn(dividend mod divisor:16) end. Note, this affects the definition of the mod operator per compilation unit, so the RTL and everything else β unless recompiled β continues using the other definition internally. Rest assured, however, Delphi and the GPC (GNU Pascal Compiler) do work correctly without making jump through loops. If you want to get the same result as in Python, you need to define and use your own function (here in Extended Pascal): function modulo(protected dividend, divisor: integer): integer; begin modulo := (abs(dividend) mod abs(divisor)) * -1 pow ord(divisor < 0) end; There is no magic switch to make FreePascalβs mod behave exactly like Pythonβs %. | 3 | 4 |
79,412,275 | 2025-2-4 | https://stackoverflow.com/questions/79412275/pandas-performance-while-iterating-a-state-vector | I want to make a pandas dataframe that describes the state of a system at different times I have the initial state which describes the first row Each row correspond to a time I have reveserved the first two columns for "household" / statistics The following columns are state parameters At each iteration/row a number of parameters change - this could be just one or many I have created a somewhat simplified version that simulates my change data : df_change Question 1 Can you think of a more efficient way of generating the matrix than what i do in this code? i have a state that i update in a loop and insert Question 2 This is what i discovered while trying to write the sample code for this discussion. I see 20 fold performanne boost in loop iteration performance if i do the assignments to the "household" columns after the loop. Why is this? I am using python = 3.12.4 and pandas 2.2.2. df["product"] ="some_product" #%% import numpy as np import pandas as pd from tqdm import tqdm num_cols =600 n_changes = 40000 # simulate changes extra_colums = ["n_changes","product"] columns = [chr(i+65) for i in range(num_cols)] state = { icol : np.random.random() for icol in columns} change_index = np.random.randint(0,4,n_changes).cumsum() change_col = [columns[np.random.randint(0,num_cols)] for i in range(n_changes)] change_val= np.random.normal(size=n_changes) # create change matrix df_change=pd.DataFrame(index= change_index ) df_change['col'] = change_col df_change['val'] = change_val index = np.unique(change_index) # %% # Slow approach gives 5 iterations/s df = pd.DataFrame(index= index, columns=extra_colums + columns) df["product"] ="some_product" for i in tqdm(index): state.update(zip(df_change.loc[[i],"col"] , df_change.loc[[i],"val"])) df.loc[i,columns] = pd.Series(state) # %% # Fast approach gives 1000 iterations/sec df2 = pd.DataFrame(index= index, columns=extra_colums + columns) for i in tqdm(index): state.update(zip(df_change.loc[[i],"col"] , df_change.loc[[i],"val"])) df2.loc[i,columns] = pd.Series(state) df2["product"] ="some_product" Edit I marked the answer by ouroboros1 as theaccepted solution - it works really well and answered Question 1. I am still curios about Question 2 : the difference in pandas performance using the two methods where i iterate through the rows. I found that I can also get a performance similar to the original "df2" method depending on how i assign the value before the loop. The interesting point here is that pre assignment changes the performance in loop that follows. # Fast approach gives 1000 iterations/sec df3 = pd.DataFrame(index=index, columns=extra_colums + columns) #df3.loc[index,"product"] = "some_product" # Fast #df3["product"] = "some_product" # Slow df3.product = "some_product" # Fast for i in tqdm(index): state.update(zip(df_change.loc[[i], "col"], df_change.loc[[i], "val"])) df3.loc[i, columns] = np.array(list(state.values())) | Here's one approach that should be much faster: Data sample num_cols = 4 n_changes = 6 np.random.seed(0) # reproducibility # setup ... 
df_change col val 1 C 0.144044 4 A 1.454274 5 A 0.761038 7 A 0.121675 7 C 0.443863 10 B 0.333674 state {'A': 0.5488135039273248, 'B': 0.7151893663724195, 'C': 0.6027633760716439, 'D': 0.5448831829968969} Code out = (df_change.reset_index() .pivot_table(index='index', columns='col', values='val', aggfunc='last') .rename_axis(index=None, columns=None) .assign(product='some_product') .reindex(columns=extra_colums + columns) .fillna(pd.DataFrame(state, index=[index[0]])) .ffill() ) Output n_changes product A B C D 1 NaN some_product 0.548814 0.715189 0.144044 0.544883 4 NaN some_product 1.454274 0.715189 0.144044 0.544883 5 NaN some_product 0.761038 0.715189 0.144044 0.544883 7 NaN some_product 0.121675 0.715189 0.443863 0.544883 10 NaN some_product 0.121675 0.333674 0.443863 0.544883 # note: # A updated in 4, 5, 7 # B updated in 10 # C updated in 1, 7 Explanation / Intermediates Use df.reset_index to access 'index' inside df.pivot_table. For the aggfunc use 'last'. I.e., we only need to propagate the last value in case of duplicate 'col' values per index value. Cosmetic: use df.rename_axis to reset index and columns names to None. # df_chagne.reset_index().pivot_table(...).rename_axis(...) A B C 1 NaN NaN 0.144044 4 1.454274 NaN NaN 5 0.761038 NaN NaN 7 0.121675 NaN 0.443863 10 NaN 0.333674 NaN Use df.assign to add column 'product' with a scalar ('some_product'). Use df.reindex to get the columns in the desired order (with extra_columns up front). Not yet existing column 'n_changes' will be added with NaN values. Now, apply df.fillna and use a pd.DataFrame with state for only the first index value (index[0]), to fill the first row (alternatively, use df.combine_first). # after .fillna(...) n_changes product A B C D 1 NaN some_product 0.548814 0.715189 0.144044 0.544883 4 NaN some_product 1.454274 NaN NaN NaN 5 NaN some_product 0.761038 NaN NaN NaN 7 NaN some_product 0.121675 NaN 0.443863 NaN 10 NaN some_product NaN 0.333674 NaN NaN Finally, we want to forward fill: df.ffill. Performance comparison: num_cols = 100 n_changes = 100 np.random.seed(0) # reproducibility # out: 7.01 ms Β± 435 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) # df2 (running this *afterwards*, as you are updating `state` 93.7 ms Β± 3.79 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) Equality check: df2.astype(out.dtypes).equals(out) # True | 1 | 2 |
79,403,046 | 2025-1-31 | https://stackoverflow.com/questions/79403046/unable-to-run-any-code-after-launching-the-app-in-flask | My question is that are you able to run any code after the if __name__ == "__main__": app.run() in flask. I tried to print something after the above syntax to check and it only appeared after I exited the process. I was unable to find answer to this question. If yes, please guide me on how to accomplish this. I need to run a block of code after the website has been launched. I am running on localhost if that helps. PS: I am creating a website that downloads an image from the user's computer, runs a python script and predicts what the thing in the image is and tells whether the object is recyclable or not. This is for a school project. I tried to print("check") and, as I use pycharm, stopped the process using the stop button and the following appeared:- WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on http://127.0.0.1:5000 Press CTRL+C to quit check Process finished with exit code 0 | As said by @JonSG and @Chris, I needed to create a route method before the app.run() to make the code work. Thanks! | 1 | 0 |
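A minimal sketch of the pattern the accepted answer points at — the route name, upload field and return value are illustrative assumptions: the per-image work lives in a route registered before app.run(), because app.run() blocks until the server shuts down.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # runs on every request while the site is up
    image = request.files["image"]
    # ... run the classification model on `image` here ...
    return {"recyclable": True}  # placeholder result

if __name__ == "__main__":
    app.run()
    # code placed here only executes after the server stops
```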
79,413,005 | 2025-2-4 | https://stackoverflow.com/questions/79413005/i-have-a-time-in-string-format-which-is-of-new-york-i-want-the-time-to-be-conv | Input (New York Time in string format) = '2024-11-01 13:00:00' Output (UTC Time in string format) = '2024-11-01 17:00:00' | Parse the time. It won't be "time zone aware", so apply local time zone, convert to UTC and format again: import datetime as dt import zoneinfo as zi zone = zi.ZoneInfo('America/New_York') fmt = '%Y-%m-%d %H:%M:%S' s = '2024-11-01 13:00:00' print(s) # parse the time and apply the local time zone nyt = dt.datetime.strptime(s, fmt).replace(tzinfo=zone) # convert to UTC and format string utc = nyt.astimezone(dt.UTC).strftime(fmt) print(utc) Output: 2024-11-01 13:00:00 2024-11-01 17:00:00 | 1 | 4 |
79,412,706 | 2025-2-4 | https://stackoverflow.com/questions/79412706/whats-going-on-with-the-chaining-in-pythons-string-membership-tests | I just realized I had a typo in my membership test and was worried this bug had been causing issues for a while. However, the code had behaved just as expected. Example: "test" in "testing" in "testing" in "testing" This left me wondering how this membership expression works and why it's allowed. I tried applying some order of operations logic to it with parentheses but that just breaks the expression. And the docs don't mention anything about chaining. Is there a practical use case for this I am just not aware of? | in is a comparison operator. As described at the top of the section in the docs you linked to, all comparison operators can be chained: Formally, if a, b, c, β¦, y, z are expressions and op1, op2, β¦, opN are comparison operators, then a op1 b op2 c ... y opN z is equivalent to a op1 b and b op2 c and ... y opN z, except that each expression is evaluated at most once. So: "test" in "testing" in "testing" in "testing" Is equivalent to: "test" in "testing" and "testing" in "testing" and "testing" in "testing" | 2 | 2 |
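A quick demonstration of that expansion — and of why adding parentheses "breaks the expression", as observed in the question:

```python
chained = "test" in "testing" in "testing"
expanded = ("test" in "testing") and ("testing" in "testing")
print(chained, expanded)  # True True -- the two forms are equivalent

try:
    ("test" in "testing") in "testing"  # parentheses defeat the chaining
except TypeError as err:
    print(err)  # 'in <string>' requires string as left operand, not bool
```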
79,412,615 | 2025-2-4 | https://stackoverflow.com/questions/79412615/understanding-and-fixing-the-regex | I have a regex on my input parameter: r"^(ABC-\d{2,9})|(ABz?-\d{3})$" Ideally it should not allow parameters with ++ or -- at the end, but it does. Why is the regex not working in this case but works in all other scenarios? ABC-12 is a valid. ABC-123456789 is a valid. AB-123 is a valid. ABz-123 is a valid. | The problem is that your ^ and $ anchors don't apply to the entire pattern. You match ^ only in the first alternative, and $ only in the second alternative. So if the input matches (ABC-\d{2,9}) at the beginning, the match will succeed even if there's more after this. You can put a non-capturing group around everything except the anchors to fix this. r"^(?:(ABC-\d{2,9})|(ABz?-\d{3}))$" | 1 | 8 |
79,410,015 | 2025-2-3 | https://stackoverflow.com/questions/79410015/i-am-not-able-to-login-using-selenium-to-www-moneycontrol-com | The main issue I am not able to properly navigate to the login form and fill email and password to enable login button. I tried below code to fill the login form into www.moneycontrol.com but it seems little bit complicated. Login form is multi-layered, hover or click "Hello, Login". click Log-in button inside floating form Login with password tab then fill email and password to enable login button. Help is appreciated, code is from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys import time import pandas as pd from io import StringIO from bs4 import BeautifulSoup chrome_options = Options() chrome_options.add_argument('--headless') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(options=chrome_options) driver.get('https://www.moneycontrol.com') email = "[email protected]" password = "Qxxxxxxx" # Click on the "Login" button print(driver.page_source) login_button = driver.find_element(By.XPATH, '//a[contains(text(), "Hello, Login")]') login_button.click() time.sleep(3) # Wait for login popup to appear # Click on the "Login" button login_button = driver.find_element(By.XPATH, '//a[contains(text(), "Log-in")]') login_button.click() time.sleep(3) # Wait for login popup to appear # Click on the "Login" button login_button = driver.find_element(By.XPATH, '//a[contains(text(), "Login with Password")]') login_button.click() time.sleep(3) # Wait for login popup to appear # Switch to login iframe if present try: iframe = driver.find_element(By.TAG_NAME, "iframe") driver.switch_to.frame(iframe) time.sleep(2) except: pass # If no iframe, continue normally # Enter email/username email_field = driver.find_element(By.CSS_SELECTOR, '#mc_login > form > div:nth-child(1) > div > input[type=text]') email_field.send_keys(email) # Enter password password_field = driver.find_element(By.CSS_SELECTOR, '#mc_login > form > div:nth-child(2) > div > input[type=password]') password_field.send_keys(password) # Click on Login button submit_button = driver.find_element(By.CSS_SELECTOR, '#mc_login button[type="submit"]') submit_button.click() # Wait for login to process time. Sleep(5) website has too much ad, use ad blocker to avoid frustration | As others in the comment have mentioned, this site is plagued with popup ads. Instead of logging in from the main page, try the dedicated login URL. https://accounts.moneycontrol.com/mclogin You'll be able to log in easily using this URL. Then go to the main page. Your code has another problem. The site detects headless chrome and blocks access. You have to set the window size and user agent explicitly to avoid detection. Also, use explicit waits instead of sleep. That way you have to wait a minimum time. The site loads a promo URL first, then redirects to the main site. You have to navigate through the site using page URLs, instead of clicking on menus. That way you'll be able to avoid handling those annoying ads. 
Try the following code: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys import time import pandas as pd from io import StringIO from bs4 import BeautifulSoup chrome_options = Options() chrome_options.add_argument('--headless') chrome_options.add_argument('----window-size=1980,1080') #set window size chrome_options.add_argument("--user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36") # Set user agent driver = webdriver.Chrome(options=chrome_options) wait = WebDriverWait(driver, 60) # Add explicit wait for minimum wait # First go to the login page driver.get('https://accounts.moneycontrol.com/mclogin') email = "[email protected]" password = "Qxxxxxxx" # Enter email/username email_field = driver.find_element(By.CSS_SELECTOR, "form#login_form input#email") email_field.send_keys(email) # Enter password password_field = driver.find_element(By.CSS_SELECTOR, "form#login_form input#pwd") password_field.send_keys(password) # Click on Login button submit_button = driver.find_element(By.CSS_SELECTOR, "form#login_form #ACCT_LOGIN_SUBMIT") submit_button.click() # Wait for authenticating screen wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.signinblock[style='display: block;']"))) print("logged in") # Go to main site driver.get('https://www.moneycontrol.com/') # wait for main site to load wait.until(EC.url_to_be('https://www.moneycontrol.com/')) print(driver.page_source) driver.quit() | 1 | 4 |
79,411,167 | 2025-2-4 | https://stackoverflow.com/questions/79411167/pandas-apply-function-return-a-list-to-new-column | I have a pandas dataframe: import pandas as pd import numpy as np np.random.seed(150) df = pd.DataFrame(np.random.randint(0, 10, size=(10, 2)), columns=['A', 'B']) I want to add a new column "C" whose values ββare the combined-list of every three rows in column "B". So I use the following method to achieve my needs, but this method is slow when the data is large. >>> df['C'] = [df['B'].iloc[i-2:i+1].tolist() if i >= 2 else None for i in range(len(df))] >>> df A B C 0 4 9 None 1 0 2 None 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, 1, 9] 9 3 7 [1, 9, 7] When I try to use the df.apply function, I get an error message: df['C'] = df['B'].rolling(window=3).apply(lambda x: list(x), raw=False) TypeError: must be real number, not list I remember that pandas apply doesn't seem to return a list, so is there a better way? I searched the forum, but couldn't find a suitable topic about apply and return. | You can use numpy's sliding_window_view: from numpy.lib.stride_tricks import sliding_window_view as swv N = 3 df['C'] = pd.Series(swv(df['B'], N).tolist(), index=df.index[N-1:]) Output: A B C 0 4 9 NaN 1 0 2 NaN 2 4 5 [9, 2, 5] 3 7 9 [2, 5, 9] 4 8 3 [5, 9, 3] 5 8 1 [9, 3, 1] 6 1 4 [3, 1, 4] 7 4 1 [1, 4, 1] 8 1 9 [4, 1, 9] 9 3 7 [1, 9, 7] | 5 | 6 |
79,409,259 | 2025-2-3 | https://stackoverflow.com/questions/79409259/how-does-hydra-partial-interact-with-seeding | In the configuration management library Hydra, it is possible to only partially instantiate classes defined in configuration using the _partial_ keyword. The library explains that this results in a functools.partial. I wonder how this interacts with seeding. E.g. with pytorch torch.manual_seed() lightnings seed_everything etc. My reasoning is, that if I use the _partial_ keyword while specifying all parameters for __init__, then I would essentially obtain a factory which could be called after specifying the seed to do multiple runs. But this assumes that _partial_ does not bake the seed in already. To my understanding that should not be the case. Is that correct? | Before using hydra.utils.instantiate no third party code is not run by hydra. So you can set your seeds before each use of instantiate; or if a partial before each call to the partial. Here a complete toy example, based on Hydra's doc overview, which creates a partial to instantiate an optimizer or a model, that takes a callable optim_partial as an argument. # config.yaml model: _target_: "__main.__.MyModel" optim_partial: _partial_: true _target_: __main__.MyOptimizer algo: SGD lr: 0.01 from functools import partial from typing import Callable import random from pprint import pprint import hydra from omegaconf import DictConfig, OmegaConf class MyModel: def __init__(self, lr, optim_partial: Callable[..., "MyOptimizer"]): self.optim_partial = optim_partial self.optim1 = self.optim_partial() self.optim2 = self.optim_partial() class MyOptimizer: def __init__(self, algo): print(algo, random.randint(0, 10000)) @hydra.main(config_name="config", config_path="./", version_base=None) def main(cfg: DictConfig): # Check out the config pprint(OmegaConf.to_container(cfg, resolve=False)) print(type(cfg.model.optim_partial)) # Create the functools.partial optim_partial: partial[MyOptimizer] = hydra.utils.instantiate(cfg.model.optim_partial) # Set the seed before you call the a partial random.seed(42) optimizer1: MyOptimizer = optim_partial() optimizer2: MyOptimizer = optim_partial() random.seed(42) optimizer1b: MyOptimizer = optim_partial() optimizer2b: MyOptimizer = optim_partial() # model is not a partial; use seed before creation random.seed(42) model: MyModel = hydra.utils.instantiate(cfg.model) if __name__ == "__main__": main() # Output {'model': {'_target_': '__main__.MyModel', 'lr': 0.01, 'optim_partial': {'_partial_': True, '_target_': '__main__.MyOptimizer', 'algo': 'SGD'}}} type of cfg.model.optim_partial <class 'omegaconf.dictconfig.DictConfig'> SGD 1824 SGD 409 SGD 1824 SGD 409 SGD 1824 SGD 409 | 2 | 1 |
79,404,210 | 2025-2-1 | https://stackoverflow.com/questions/79404210/how-to-cancel-trigonometric-expressions-in-sympy | I have a bunch of expressions deque([-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3]). Then I run them through some code that iteratively takes a derivative, adds, and then divides by sin(th): import sympy as sp th = sp.symbols('th') order = 4 for nu in range(order + 1, 2*order): # iterate order-1 more times to reach the constants q = 0 for mu in range(1, nu): # Terms come from the previous derivative, so there are nu - 1 of them here. p = exprs.popleft() term = q + sp.diff(p, th) exprs.append(sp.cancel(term/sp.sin(th))) q = p exprs.append(sp.cancel(q/sp.sin(th))) print(nu, exprs) The output is a bunch of junk: 5 deque([18*cos(th)**2 + 9, (-22*sin(th)**2*cos(th) + 5*cos(th)**3 - 5*cos(th))/sin(th), 6*sin(th)**2 - cos(th)**2 + 4, -3*sin(th)*cos(th), sin(th)**2]) 6 deque([-36*cos(th), (22*sin(th)**4 - 19*sin(th)**2*cos(th)**2 + 14*sin(th)**2 - 5*cos(th)**4 + 5*cos(th)**2)/sin(th)**3, (-8*sin(th)**2*cos(th) + 5*cos(th)**3 - 5*cos(th))/sin(th)**2, (9*sin(th)**2 - 4*cos(th)**2 + 4)/sin(th), -cos(th), sin(th)]) 7 deque([36, (24*sin(th)**4*cos(th) + 39*sin(th)**2*cos(th)**3 - 24*sin(th)**2*cos(th) + 15*cos(th)**5 - 15*cos(th)**3)/sin(th)**5, (30*sin(th)**4 - 34*sin(th)**2*cos(th)**2 + 19*sin(th)**2 - 15*cos(th)**4 + 15*cos(th)**2)/sin(th)**4, (9*sin(th)**2*cos(th) + 9*cos(th)**3 - 9*cos(th))/sin(th)**3, (10*sin(th)**2 - 4*cos(th)**2 + 4)/sin(th)**2, 0, 1]) I expect the formulas to become simpler over the time steps and end up as a bunch of constants. Here's a correct output: 5 deque([9*cos(2*th) + 18, -27*sin(2*th)/2, 7*sin(th)**2 + 3, -3*sin(2*th)/2, sin(th)**2]) 6 deque([-36*cos(th), 36*sin(th), -13*cos(th), 13*sin(th), -cos(th), sin(th)]) 7 deque([36, 0, 49, 0, 14, 0, 1]) I can add various .simplify() calls and get it to work for order = 4, but I'm trying to get it to work with more complicated expressions with higher orders, and I'm finding SymPy is just not reliably figuring out how to cancel a sin(th) at each stage (even though I know it's possible to do so). How can I coax it in the right direction? I'm finding .trigsimp() and .simplify() sometimes create factors of higher frequency, like sin(2th), and then .cancel() can't figure out how to eliminate a lower-frequency sin(th) from those. Yet if I don't simplify at all, or try to simplify only at the end, sympy does markedly worse, both in terms of runtime and final complexity. Here are the expression deques for higher orders. The example above comes from order 4 only for the sake of simplicity while explaining the setup. The real problem is that I can't find a solution that works beyond order 6. 
I'd like a solution that works on all of these: 5: deque([-24*cos(th)**4 - 72*cos(th)**2 - 9, (50*cos(th)**3 + 55*cos(th))*sin(th), (-35*cos(th)**2 - 10)*sin(th)**2, 10*sin(th)**3*cos(th), -sin(th)**4]) 6: deque([-120*cos(th)**5 - 600*cos(th)**3 - 225*cos(th), (274*cos(th)**4 + 607*cos(th)**2 + 64)*sin(th), (-225*cos(th)**3 - 195*cos(th))*sin(th)**2, (85*cos(th)**2 + 20)*sin(th)**3, -15*sin(th)**4*cos(th), sin(th)**5]) 7: deque([-720*cos(th)**6 - 5400*cos(th)**4 - 4050*cos(th)**2 - 225, (1764*cos(th)**5 + 6552*cos(th)**3 + 2079*cos(th))*sin(th), (-1624*cos(th)**4 - 2842*cos(th)**2 - 259)*sin(th)**2, (735*cos(th)**3 + 525*cos(th))*sin(th)**3, (-175*cos(th)**2 - 35)*sin(th)**4, 21*sin(th)**5*cos(th), -sin(th)**6]) 8: deque([-5040*cos(th)**7 - 52920*cos(th)**5 - 66150*cos(th)**3 - 11025*cos(th), (13068*cos(th)**6 + 73188*cos(th)**4 + 46575*cos(th)**2 + 2304)*sin(th), (-13132*cos(th)**5 - 38626*cos(th)**3 - 10612*cos(th))*sin(th)**2, (6769*cos(th)**4 + 9772*cos(th)**2 + 784)*sin(th)**3, (-1960*cos(th)**3 - 1190*cos(th))*sin(th)**4, (322*cos(th)**2 + 56)*sin(th)**5, -28*sin(th)**6*cos(th), sin(th)**7]) If you're curious where these come from, you can see my full notebook here, which is part of a larger project with a lot of math. | Rewriting to exp and doing the simplifications seems to work in this case: from collections import deque from sympy import * import sympy as sp th = sp.symbols('th') order = 4 exprs = deque([i.simplify() for i in [-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3]]) def simpler(e): return e.rewrite(exp).simplify().rewrite(cos).cancel() for nu in range(order + 1, 2*order): # iterate order-1 more times to reach the constants q = 0 for mu in range(1, nu): # Terms come from the previous derivative, so there are nu - 1 of them here. p = exprs.popleft() term = q + sp.diff(p, th) exprs.append(simpler(term/sp.sin(th))) q = p this = simpler(q/sp.sin(th)) exprs.append(this) print(nu,exprs) outputs 5 deque([9*cos(2*th) + 18, -27*cos(2*th - pi/2)/2, 13/2 - 7*cos(2*th)/2, -3*cos(2*th - pi/2)/2, cos(th - pi/2)**2]) 6 deque([-36*cos(th), 36*sin(th), -13*cos(th), 13*sin(th), -cos(th), cos(th - pi/2)]) 7 deque([36, 0, 49, 0, 14, 0, 1]) If you just want to get the constants at the end, don't simplify until the end. 
Here is a complete working example: from collections import deque import sympy as sp th = sp.symbols('th') E = {4:deque([-6*cos(th)**3 - 9*cos(th), (11*cos(th)**2 + 4)*sin(th), -6*sin(th)**2*cos(th), sin(th)**3]), 5: deque([-24*cos(th)**4 - 72*cos(th)**2 - 9, (50*cos(th)**3 + 55*cos(th))*sin(th), (-35*cos(th)**2 - 10)*sin(th)**2, 10*sin(th)**3*cos(th), -sin(th)**4]), 6: deque([-120*cos(th)**5 - 600*cos(th)**3 - 225*cos(th), (274*cos(th)**4 + 607*cos(th)**2 + 64)*sin(th), (-225*cos(th)**3 - 195*cos(th))*sin(th)**2, (85*cos(th)**2 + 20)*sin(th)**3, -15*sin(th)**4*cos(th), sin(th)**5]), 7: deque([-720*cos(th)**6 - 5400*cos(th)**4 - 4050*cos(th)**2 - 225, (1764*cos(th)**5 + 6552*cos(th)**3 + 2079*cos(th))*sin(th), (-1624*cos(th)**4 - 2842*cos(th)**2 - 259)*sin(th)**2, (735*cos(th)**3 + 525*cos(th))*sin(th)**3, (-175*cos(th)**2 - 35)*sin(th)**4, 21*sin(th)**5*cos(th), -sin(th)**6]), 8: deque([-5040*cos(th)**7 - 52920*cos(th)**5 - 66150*cos(th)**3 - 11025*cos(th), (13068*cos(th)**6 + 73188*cos(th)**4 + 46575*cos(th)**2 + 2304)*sin(th), (-13132*cos(th)**5 - 38626*cos(th)**3 - 10612*cos(th))*sin(th)**2, (6769*cos(th)**4 + 9772*cos(th)**2 + 784)*sin(th)**3, (-1960*cos(th)**3 - 1190*cos(th))*sin(th)**4, (322*cos(th)**2 + 56)*sin(th)**5, -28*sin(th)**6*cos(th), sin(th)**7])} from time import time for order in E: t=time() exprs = E[order] for nu in range(order + 1, 2*order): # iterate order-1 more times to reach the constants q = 0 for mu in range(1, nu): # Terms come from the previous derivative, so there are nu - 1 of them here. p = exprs.popleft() term = q + sp.diff(p, th) exprs.append(sp.cancel(term/sp.sin(th))) q = p exprs.append(sp.cancel(q/sp.sin(th))) t1=time()-t t=time() print(order,round(t1,2),[i.rewrite(sp.exp).simplify() for i in exprs],round(time()-t,2)) 4 0.06 [36, 0, 49, 0, 14, 0, 1] 0.16 5 0.13 [-576, 0, -820, 0, -273, 0, -30, 0, -1] 0.4 6 0.29 [14400, 0, 21076, 0, 7645, 0, 1023, 0, 55, 0, 1] 0.68 7 0.59 [-518400, 0, -773136, 0, -296296, 0, -44473, 0, -3003, 0, -91, 0, -1] 1.14 8 0.97 [25401600, 0, 38402064, 0, 15291640, 0, 2475473, 0, 191620, 0, 7462, 0, 140, 0, 1] 1.7 | 2 | 2 |
79,407,738 | 2025-2-3 | https://stackoverflow.com/questions/79407738/poetry-installed-tensorflow-but-python-says-modulenotfounderror-no-module-nam | tensorflow python package installed using poetry is not recognised within python. (poetry version - 2.0.1) Anyone else facing this issue? and how did you solve them? C:\Users\username\Documents\folder>poetry add tensorflow Using version ^2.18.0 for tensorflow Updating dependencies Resolving dependencies... (0.9s) Package operations: 18 installs, 0 updates, 0 removals Installing markupsafe (3.0.2) Installing grpcio (1.70.0) Installing markdown (3.7) Installing protobuf (5.29.3) Installing setuptools (75.8.0) Installing tensorboard-data-server (0.7.2) Installing werkzeug (3.1.3) Installing wheel (0.45.1) Installing astunparse (1.6.3) Installing gast (0.6.0) Installing libclang (18.1.1) Installing opt-einsum (3.4.0) Installing google-pasta (0.2.0) Installing flatbuffers (25.1.24) Installing termcolor (2.5.0) Installing tensorboard (2.18.0) Installing wrapt (1.17.2) Installing tensorflow (2.18.0) Writing lock file C:\Users\username\Documents\folder>poetry run python Python 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'tensorflow' | tensorflow-intel has not been installed, which since you are on windows it should be. Please go to https://github.com/tensorflow/tensorflow/issues/75415 and encourage the tensorflow folk to put consistent metadata in all of their wheels so that cross-platform resolvers like poetry and uv can reliably derive accurate information. | 1 | 1 |
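A commonly suggested stop-gap (not part of the answer above, and the pinned version is only an example — it should match the tensorflow release in your lock file) is to add the Windows wheel explicitly so the resolver cannot skip it, then verify the import:

```
poetry add "tensorflow-intel==2.18.0"
poetry run python -c "import tensorflow as tf; print(tf.__version__)"
```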
79,408,932 | 2025-2-3 | https://stackoverflow.com/questions/79408932/im-having-issues-with-logging-is-there-anyway-to-fix-this | is there away to make this code not spam in the second file without interfering with any other functions? "Anomaly.log" is being spammed although "Bandit.log" is not being spammed, what am I doing wrong? import pyautogui import schedule import time import logging # Initialize a counter call_count = 0 # Example calls def logger(): logging.basicConfig(level=logging.INFO, filename="bandit.log", filemode="w", format="%(asctime)s - %(levelname)s - %(message)s") logger = logging.getLogger(__name__) logger.propagate = False handler = logging.FileHandler('anomaly.log') formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") handler.setFormatter(formatter) logger.addHandler(handler) try: location = pyautogui.locateOnScreen('1.png', confidence=0.9) if pyautogui.locateOnScreen('1.png', confidence=0.9): global call_count call_count += 1 logging.info(call_count) except pyautogui.ImageNotFoundException: pass def bandit_cycle(): try: location = pyautogui.locateOnScreen('1.png', confidence=0.9) logging.info('x1 Data found') time.sleep(1.5) except: pass # schedule for main working code schedule.every(1).seconds.do(bandit_main) # schedule for main working code schedule.every(1).seconds.do(logger) while True: schedule.run_pending() time.sleep(1) Anomaly.log 2025-02-03 14:51:19,773 - main - INFO - 1 2025-02-03 14:51:19,773 - main - INFO - 1 2025-02-03 14:51:19,773 - main - INFO - 1 2025-02-03 14:51:19,773 - main - INFO - 1 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 2025-02-03 14:51:57,076 - main - INFO - 2 Bandit.log 2025-02-03 14:51:15,548 - INFO - x1 Data found 2025-02-03 14:51:19,773 - INFO - 1 2025-02-03 14:51:52,877 - INFO - x1 Data found 2025-02-03 14:51:57,076 - INFO - 2 2025-02-03 14:52:30,495 - INFO - x5 Data found | When you call this function the first time: logger = logging.getLogger(__name__) It creates a new logger object. But if you call it again within the same program execution, it remembers the logger object you already created and just fetches it, instead of creating a new one. And then each time you call this: logger.addHandler(handler) It adds another handler to the logger object. So you end up with a logger that has dozens, or even hundreds, of handlers. And messages are logged by each handler on the logger. You should rearrange your logging setup/initialization code so it isn't called multiple times. | 1 | 2 |
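A minimal sketch of that rearrangement — logger names and messages are placeholders: configure each logger once at module level, guarded so a repeat call cannot attach a second handler, and let the scheduled functions only emit records:

```python
import logging

def make_logger(name, filename):
    logger = logging.getLogger(name)
    if not logger.handlers:  # guard: never add a second handler
        handler = logging.FileHandler(filename, mode="w")
        handler.setFormatter(logging.Formatter(
            "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        logger.propagate = False  # keep records out of the root logger
    return logger

bandit_log = make_logger("bandit", "bandit.log")
anomaly_log = make_logger("anomaly", "anomaly.log")

def logger_job():
    # scheduled every second; no handler setup happens in here
    anomaly_log.info("anomaly event")

def bandit_cycle():
    bandit_log.info("x1 Data found")
```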