Columns: question_id (int64, 59.5M to 79.4M); creation_date (string, length 8 to 10); link (string, length 60 to 163); question (string, length 53 to 28.9k); accepted_answer (string, length 26 to 29.3k); question_vote (int64, 1 to 410); answer_vote (int64, -9 to 482)
78,797,526
2024-7-26
https://stackoverflow.com/questions/78797526/is-using-a-pandas-dataframe-as-a-read-only-table-scalable-in-a-flask-app
I'm developing a small website in Flask that relies on data from a CSV file to output data to a table on the frontend using JQuery. The user would select an ID from a drop-down on the front-end, then a function would run on the back-end where the ID would be used as a filter on the table to return data. The data returned would usually just be a single column from the dataframe as well. The usual approach, from my understanding, would be to load the CSV data into a SQLite DB on startup and query using SQL methods in python at runtime. However, in my case, the table is 15MB in size (214K rows) and will never grow past that point. All the data will be as is for the duration of the Apps lifecycle. As such, would it be easier and less hassle to just load the dataframe table into memory and just filter on a copy of it when requests come in? Is that scalable or am I just kicking a can down the road? Example: app = Flask(__name__) dir_path = os.path.abspath(os.path.dirname(__file__)) with app.app_context(): print("Writing DB on startup") query_df = pd.read_csv(dir_path+'/query_file.csv') @app.route('/getData', methods=["POST"]) def get_data(): id = request.get_json() print("Getting rows....") data_list = sorted(set(query_df[query_df['ID'] == id]['Name'].tolist())) return jsonify({'items': data_list, 'ID': id}) This may be a tad naive on my end but I could not find a straight answer for my particular use-case.
This line of code can be made much faster without adding any new dependencies, just by using the tools that Pandas gives you. data_list = sorted(set(query_df[query_df['ID'] == id]['Name'].tolist())) The following optimizations can be made: sorted() can be replaced by pre-sorting the dataframe. set() can be replaced by dropping duplicates with the same ID and Name. query_df[query_df['ID'] == id] requires searching the entire dataframe for matching ID values, and can be replaced with an index. To prepare the dataframe, on the startup of your program, after reading the dataframe with read_csv(), you would do the following: name_lookup_series = query_df \ .sort_values(['ID', 'Name']) \ .drop_duplicates(['ID', 'Name']) \ .set_index('ID')['Name'] To look up any particular value, you would do the following: name_lookup_series.loc[[id_to_look_up]].tolist() Benchmarking this, it is roughly 100x faster, using the following benchmark program: import pandas as pd import numpy as np np.random.seed(92034) N = 200000 df = pd.DataFrame({ 'ID': np.random.randint(0, N, size=N), 'Name': np.random.randint(0, N, size=N), }) df['ID'] = 'ID' + df['ID'].astype('str') df['Name'] = 'N' + df['Name'].astype('str') print("Test dataframe") print(df) id_to_look_up = np.random.choice(df['ID']) print("Looking up", id_to_look_up) print("Result, method 1", sorted(set(df[df['ID'] == id_to_look_up]['Name'].tolist()))) %timeit sorted(set(df[df['ID'] == id_to_look_up]['Name'].tolist())) name_lookup_series = df.copy() \ .sort_values(['ID', 'Name']) \ .drop_duplicates(['ID', 'Name']) \ .set_index('ID')['Name'] print("Result, method 2", name_lookup_series.loc[[id_to_look_up]].tolist()) %timeit name_lookup_series.loc[[id_to_look_up]].tolist()
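For completeness, here is a minimal sketch of how the prepared lookup could be wired into the Flask route from the question; the CSV name and route are taken from the question, the rest mirrors the approach above.
import os
import pandas as pd
from flask import Flask, request, jsonify

app = Flask(__name__)
dir_path = os.path.abspath(os.path.dirname(__file__))

# Build the lookup series once at startup, as described above.
query_df = pd.read_csv(dir_path + '/query_file.csv')
name_lookup_series = (
    query_df
    .sort_values(['ID', 'Name'])
    .drop_duplicates(['ID', 'Name'])
    .set_index('ID')['Name']
)

@app.route('/getData', methods=["POST"])
def get_data():
    id = request.get_json()
    # .loc with a one-element list keeps the result a Series even for a single match.
    # Note: this raises KeyError if the ID is absent; guard for that if it can happen.
    data_list = name_lookup_series.loc[[id]].tolist()
    return jsonify({'items': data_list, 'ID': id})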
2
2
78,809,821
2024-7-30
https://stackoverflow.com/questions/78809821/why-does-gekko-not-provide-optimal-commands-even-though-the-output-does-not-matc
The following is related to this question: predictive control model using GEKKO I am trying to apply the MPC to maintain the temperature of a room within a defined range, but GEKKO gives me null commands even if the output diverges. I run the corrected code from my previous question: # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} # i have used only 200 mes of T_externel T_externel = np.linspace(9.51,9.78,200) m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) # lower,heigh bound for MV TL = m.Param(value = 16) TH = m.Param(value = 18) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 2 # the objective is an l2-norm (squared error) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 1 # APOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 1 # use measured value m.beta.DMAX = 1.0 # Delta MV maximum step per horizon interval m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Lower bound m.beta.LOWER = 0.0 m.beta.MEAS = 0 # set u=0 # Controlled variables m.T.STATUS = 1 # drive to set point m.T.FSTATUS = 1 # receive measurement m.T.SP = 17 # set point TL.value = np.ones(len(T_externel))*16 TH.value = np.ones(len(T_externel))*18 m.T.value = 17 # Temprature starts at 17 for i in range(len(T_externel)): m.solve(disp = False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 import matplotlib.pyplot as plt # Plot the results plt.figure(figsize=(12,6)) plt.subplot(2,1,1) plt.plot(m.time,m.T.value,'r-',label=r'$T_{int}$') plt.plot(m.time,TL.value,'k--',label='Lower Bound') plt.plot(m.time,TH.value,'k--',label='Upper Bound') plt.ylabel('Temperature (Β°C)') plt.legend() plt.subplot(2,1,2) plt.plot(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend() plt.show() And is the output and the optimal control obtained by GEKKO: I need to to maintain the temperature of a room within a defined range.
The gain is listed as K = array([[ 0.93705481, -12.24012156]]) so an increase in beta by +1 leads to a decrease in the T by -12.24. An increase in T_ext by +1 leads to an increase in T by +0.937. The steady-state value of T is 13.32 so to get a starting value of 17, a common practice is to create an additive (or multiplicative bias value that adjusts the starting value. An additional variable Tb is added, although Gekko does this internally. When investigating a controller response, a good first step is to confirm the model gain between the MV and CV with a step response. Here is a step response on beta. # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} # i have used only 200 mes of T_externel T_externel = np.linspace(9.51,9.78,200) m = GEKKO(remote=True) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=True) # set up MPC #m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 0 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DMAX = 0.2 # Delta MV maximum step per horizon interval m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound m.beta.MEAS = 0.0 # Measured value # step test m.beta.value = np.zeros_like(m.time) m.beta.value[20:] = 1 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 1 # receive measurement m.Tb.SPHI = 21 # set point high level m.Tb.SPLO = 19 # set point low level T_MEAS = 17 # Temperature starts at 17 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.solve(disp=True) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 import matplotlib.pyplot as plt # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(2,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{biased}$') plt.plot(m.time,m.T.value,'r--',label=r'$T_{unbiased}$') plt.plot(m.time,m.d.value,'b:',label=r'T_{external}') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.ylabel('Temperature (Β°C)') plt.legend(); plt.grid() plt.subplot(2,1,2) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(); plt.grid() plt.savefig('results.png',dpi=300) plt.show() The next step to investigate the controller response is to add the disturbance and set the appropriate set point range. 
The controller is switched to m.options.CV_TYPE=1 to use the l1-norm objective that gives m.Tb.SPHI and m.Tb.SPLO options to specify the range. # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} # i have used only 200 mes of T_externel T_externel = np.linspace(9.51,9.78,200) m = GEKKO(remote=True) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=True) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DMAX = 1.0 # Delta MV maximum step per horizon interval m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 15 # set point high level m.Tb.SPLO = 13 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 17 # Temperature starts at 17 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.solve(disp=True) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 import matplotlib.pyplot as plt # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(2,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{biased}$') plt.plot(m.time,m.T.value,'r--',label=r'$T_{unbiased}$') plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.ylabel('Temperature (Β°C)') plt.legend(loc=1); plt.grid() plt.subplot(2,1,2) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.savefig('results.png',dpi=300) plt.show() The controller keeps the Tb value within the setpoint range by adjusting beta. It is also possible to make the beta value an integer if the cooling system only has an on/off state instead of continuous values between 0 and 1. 
# Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} # i have used only 200 mes of T_externel T_externel = np.linspace(9.51,9.78,200) m = GEKKO(remote=True) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] m.free(m.beta) m.bint = m.MV(0,lb=0,ub=1,integer=True) m.Equation(m.beta==m.bint) # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=True) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.bint.STATUS = 1 # calculated by the optimizer m.bint.FSTATUS = 0 # use measured value m.bint.DCOST = 0.0 # Delta cost penalty for MV movement m.bint.UPPER = 1.0 # Upper bound m.bint.LOWER = 0.0 # Lower bound m.bint.MV_STEP_HOR = 10 m.bint.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 15 # set point high level m.Tb.SPLO = 13 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 17 # Temperature starts at 17 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 1 m.solve(disp=True) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 import matplotlib.pyplot as plt # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(2,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{biased}$') plt.plot(m.time,m.T.value,'r--',label=r'$T_{unbiased}$') plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.ylabel('Temperature (Β°C)') plt.legend(loc=1); plt.grid() plt.subplot(2,1,2) plt.step(m.time,m.bint.value,'b--',label=r'$\beta_{int}$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.savefig('results.png',dpi=300) plt.show() Solution time for binary variables is significantly longer so many approximate a 0.37 as 37% ON with some post-processing such as off 3.7 minutes and on 6.3 minutes in 10-minute interval blocks. 
Iter: 71 I: 0 Tm: 16.99 NLPi: 10 Dpth: 4 Lvs: 87 Obj: 3.04E+02 Gap: 6.24E-01 Iter: 72 I: 0 Tm: 11.37 NLPi: 3 Dpth: 4 Lvs: 88 Obj: 3.04E+02 Gap: 6.24E-01 --Integer Solution: 3.04E+02 Lowest Leaf: 3.04E+02 Gap: 9.61E-09 Iter: 73 I: 0 Tm: 13.62 NLPi: 3 Dpth: 13 Lvs: 88 Obj: 3.04E+02 Gap: 9.61E-09 Successful solution --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 1064.45290000000 sec Objective : 304.259454634496 Successful solution ---------------------------------------------------
2
1
78,792,688
2024-7-25
https://stackoverflow.com/questions/78792688/unpickling-error-magic-number-pickle-module-loadf-pickle-load-args-pick
When I try to load a .pt file, I see the following error. str1='Dataset/ALL_feats_cgqa.pt' m = torch.load(str1) The error is as follows: File "/home/Storage1/pythonCodeArea/train.py", line 21, in load_embeddings m = torch.load(str1) File "/home/.local/lib/python3.10/site-packages/torch/serialization.py", line 1040, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/.local/lib/python3.10/site-packages/torch/serialization.py", line 1262, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, 'v'. I have no idea about this error; any help will be highly appreciated. I have gone through these references without finding a solution: [1], [2], [3], [4].
My guess is: your .pt file is most likely broken/corrupted. Most of the references that you posted in your question hint at the same cause. One can reproduce the problem as follows (that isn't to say that your .pt file was produced this way, but rather is intended to show that a corrupted file can trigger exactly the message that you saw): import torch from pathlib import Path # TODO: Adjust location as suitable pt_file = Path("~/test.pt").expanduser() pt_file.write_text("v") # Write nothing but 'v' to .pt file torch.load(pt_file) # Raises "UnpicklingError: invalid load key, 'v'." My suggestions for further diagnoses and steps would be: Check the size of your .pt file: Is it reasonably large? For example, if it holds the weights of some relatively modern neural network, its size should probably be on the order of megabytes or gigabytes, rather than bytes or kilobytes. Can you open the .pt file with an archive manager? Using torch.save() with recent PyTorch versions (β‰₯1.6) saves .pt files as zip archives by default. This means if the file is not corrupted and if it was created this way, you should be able to open it with an archive manager (maybe you need to change the suffix to .zip first). Look at the contents of your .pt file with a text editor or hex editor. What do you see? Finally, try to get the .pt file again from its source (re-download, re-copy, etc.) or from a backup if this is possible; or else try to recreate it yourself (by re-training your model).
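Following on from those suggestions, a small diagnostic sketch (the path is the one from the question; adjust as needed) that checks the file size, whether it is a zip archive, and what its first bytes look like:
import zipfile
from pathlib import Path

pt_file = Path("Dataset/ALL_feats_cgqa.pt")  # path from the question

print("File size (bytes):", pt_file.stat().st_size)
# .pt files written by torch.save() with PyTorch >= 1.6 are zip archives
print("Looks like a zip archive:", zipfile.is_zipfile(pt_file))

# A zip-based .pt starts with b'PK'. A Git LFS pointer file (which starts with
# b'version ...') or an HTML error page from a broken download are common
# culprits for this kind of unpickling error.
with open(pt_file, "rb") as f:
    print("First bytes:", f.read(8))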
3
1
78,795,722
2024-7-25
https://stackoverflow.com/questions/78795722/gcloud-mistakes-event-trigger-for-storage-trigger
There's a cloud function in Python that processes some data when a file is uploaded to firebase's bucket: @storage_fn.on_object_finalized(bucket = "my-bucket", timeout_sec = timeout_sec, memory = memory, cpu = cpu, region='us-central1') def validate_file_upload(event: storage_fn.CloudEvent[storage_fn.StorageObjectData]): process(event) When it's deployed via firebase cli the function works properly firebase deploy --only functions:validate_file_upload However, when the same function is deployed via gcloud gcloud functions deploy validate_file_upload --gen2 --region=us-central1 ....... --entry-point=validate_file_upload --trigger-event-filters="type=google.cloud.storage.object.v1.finalized" --trigger-event-filters="bucket=my-bucket" When function is triggered, it fails with TypeError: validate_file_upload() takes 1 positional argument but 2 were given The reason is that when function is deployed viafirebase, GCP sends 'Eventarc' object as single argument to cloud function, but if it's deployed via gcloud it sends two: (data, context) and naturally it causes exception Even in the documentation it states there should be only 1 argument: A Cloud Storage trigger is implemented as a CloudEvent function, in which the Cloud Storage event data is passed to your function in the CloudEvents format https://cloud.google.com/functions/docs/calling/storage How to make sure that gcloud deployment uses correct function prototype?
The answer is to use double decorators: import functions_framework from cloudevents.http import CloudEvent from firebase_functions import storage_fn, options @functions_framework.cloud_event @storage_fn.on_object_finalized(bucket = "my-bucket", timeout_sec = timeout_sec, memory = memory, cpu = cpu, region='us-central1') def validate_file_upload(event: CloudEvent): process(event) This way it works regardless of whether it is deployed via gcloud or firebase. I've also noticed that if the function was originally deployed with an HTTP trigger as opposed to a "bucket" trigger, in GCP it stays marked as HTTP until you delete it and redeploy anew. A simple redeployment leaves the function marked as HTTP (although the Triggers tab will show the bucket event as expected).
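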
3
0
78,813,619
2024-7-30
https://stackoverflow.com/questions/78813619/regex-to-interpret-awkward-scientific-notation
Ok, so I'm working with this ENDF data, see here. Sometimes in the files they have what is quite possibly the most annoying encoding of scientific notation floating point numbers I have ever seen1. There it is often used that instead of 1.234e-3 it would be something like 1.234-3 (omitting the "e"). Now I've seen a library that simply changes - into e- or + into e+ by a simple substitution. But that doesn't work when some of the numbers can be negative. You end up getting some nonsense like e-5.122e-5 when the input was -5.122-5. So, I guess I need to move onto regex? I'm open to another solution that's simpler but its the best I can think of right now. I am using the re python library. I can do a simple substitution where I look for [0-9]-[0-9] and replace that like this: import re str1='-5.634-5' x = re.sub('[0-9]-[0-9]','4e-5',str1) print(x) But obviously this won't work generally because I need to get the numerals before and after the - to be what they were, not just something I made up... I've used capturing groups before but what would be the fastest way in this context to use a capturing group for the digits before and after the - and feed it back into the substitution using the Python regex library import re? 1 Yes, I know, fortran...80 characters...save space...punch cards...nobody cares anymore.
Probably wouldn't reach for regex for this, when some simple string ops should work: s.replace("-", "e-").replace("+", "e+").lstrip("e")
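If a regex is preferred after all (to answer the capturing-group part of the question), a lookbehind that only matches a sign preceded by a digit sidesteps the leading-minus problem. A sketch — the function name is made up for the example:
import re

def parse_endf_float(s: str) -> float:
    # Insert 'e' only where a sign directly follows a digit, so the
    # leading sign of a negative number is left untouched.
    return float(re.sub(r'(?<=\d)([+-])', r'e\1', s))

print(parse_endf_float('-5.122-5'))  # -5.122e-05
print(parse_endf_float('1.234+3'))   # 1234.0
print(parse_endf_float('-7.5'))      # -7.5 (no exponent, unchanged)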
2
3
78,808,725
2024-7-29
https://stackoverflow.com/questions/78808725/python-regex-to-match-multiple-words-in-a-line-without-going-to-the-next-line
I'm writing a parser to parse the below output: admin@str-s6000-on-5:~$ show interface status Ethernet4 Interface Lanes Speed MTU Alias Vlan Oper Admin Type Asym PFC --------------- ----------- ------- ----- ------------ ------ ------ ------- -------------- ---------- Ethernet4 29,30,31,32 40G 9100 fortyGigE0/4 trunk up up QSFP+ or later off PortChannel0001 N/A 40G 9100 N/A routed up up N/A N/A PortChannel0002 N/A 40G 9100 N/A routed up up N/A N/A PortChannel0003 N/A 40G 9100 N/A routed up up N/A N/A PortChannel0004 N/A 40G 9100 N/A routed up up N/A N/A I have made an attempt to write a regex to match all the fields as below (\S+)\s+([\d,]+)\s+(\S+)\s+(\d+)\s+(\S+)\s+(\S+)\s+([up|down])+\s+([up|down]+)\s+([\w\s+?]+)\s+(\S+) I'm able to get upto Admin column correctly. The column Type contains multiple words so i have used the pattern ([\w\s+?]+) hoping it will match multiple workds seperated by one space with + being optional followed by (\S+) to match the last column. The problem that I face is, regex ([\w\s+?]+) spawns over multiple lines and it gives me an output as below Ethernet4 29,30,31,32 40G 9100 fortyGigE0/4 trunk up up QSFP+ or later off PortChannel0001 N/A I see that \s matches the new line as well. how to restrict that not to match the new line? could someone pls help me to clarify. I looked at this space Regex for one or more words separated by spaces but that is not helping me either. can someone help me to understand this better?
Suppose, for simplicity, the data were as follows. str = """ admin@str-s6000-on-5:~$ show interface status Ethernet4 Interface Lanes MTU Alias Ad Type Asym PFC --------------- -------- ---- ------ ---- ----------- ---------- Ethernet4 29,30,31 9100 fG0/4 up Q+ or later off PortChannel0001 N/A 9100 N/A up N/A N/A """ I would suggest you use both verbose (a.k.a. free spacing) mode (re.VERBOSE) and named capture groups to make the regular expression self-documenting: import re rgx = r""" ^ # match beginning of line [ ]* # match zero or more spaces (?P<Interface>\S+) # match one or more non-whitespaces and # save to capture group 'Interface' [ ]+ (?P<Lanes>\d+(?:,\d+)*) # match one or more strings of two or # more digits separated by a comma and # save to capture group 'Lanes' [ ]+ (?P<MTU>\d+) # match one or more digits and save to # capture group 'MTU' [ ]+ (?P<Alias>\S+) # match one or more non-whitespaces and # save to capture group 'Alias' [ ]+ (?P<Ad>up|down) # match 'up' or 'down' and save to # capture group 'Ad' [ ]+ (?P<Type>\S+(?:[ ]\S+)*) # match one or more groups of # non-whitespaces separated by one # space and save to capture group 'Type' [ ]* (?P<Asym_PFC>off|on) # match 'up' or 'down' and save to # capture group 'Asym_PFC' [ ]* $ # match end of line """ Note I have assumed the whitespaces in the text are simply spaces (and not tabs, for one), in which case it is preferable to use spaces in the expression. Also, I've written each space to be in a capture group ([ ]); else it would be stripped out along with spaces that are not part of the expression. There are other ways to protect spaces, one being to escape them (\ ). You may then extract the contents of the capture groups as follows. match = re.search(rgx, str, re.VERBOSE | re.MULTILINE) if match: print("capture group 'Interface': ", match.group('Interface')) print("capture group 'Lanes': ", match.group('Lanes')) print("capture group 'MTU': ", match.group('MTU')) print("capture group 'Alias': ", match.group('Alias')) print("capture group 'Ad': ", match.group('Ad')) print("capture group 'Type': ", match.group('Type')) print("capture group 'Asym_PFC': ", match.group('Asym_PFC')) else: print('did not find') which displays capture group 'Interface': Ethernet4 capture group 'Lanes': 29,30,31 capture group 'MTU': 9100 capture group 'Alias': fG0/4 capture group 'Ad': up capture group 'Type': Q+ or later capture group 'Asym_PFC': off Demo Note that it may be sufficient to determine if each line is a keeper, and if it is simply split the line on the regular expression {2,}; that is, split on two or more spaces (after stripping off spaces at the beginning and/or end of the string). If, for example, lines of interest begin 'Ethernet', possibly padded left with spaces, we could use the regular expression r'^ *Ethernet.*' to identify lines of interest and r' {2,}' to split those lines and assign the pieces to variables. Demo
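A sketch of that simpler split-based alternative, using the same simplified line format as above:
import re

line = "  Ethernet4   29,30,31   9100   fG0/4   up   Q+ or later   off"

if re.match(r'^ *Ethernet', line):
    # Split on runs of two or more spaces; the single spaces inside
    # 'Q+ or later' are preserved.
    fields = re.split(r' {2,}', line.strip())
    interface, lanes, mtu, alias, ad, type_, asym_pfc = fields
    print(fields)
    # ['Ethernet4', '29,30,31', '9100', 'fG0/4', 'up', 'Q+ or later', 'off']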
3
3
78,798,267
2024-7-26
https://stackoverflow.com/questions/78798267/hysteresis-modelling-as-a-control-constraint-for-mpc-in-python-gekko
I am trying to introduce a hysteresis constraint in an MPC optimization problem for control signal dispatch using Python GEKKO. This has become a daunting task as I am unable to transform the following problem into equations that GEKKO understands. The problem: If ON time < minimum ON time, control dispatch for a given asset should not be able to turn it OFF. If OFF time < minimum OFF time, control dispatch for the same asset should not be able to turn it ON. An example of what I'm trying to do Where: engine = GEKKO(remote = False) control = engine.Param(value = control_signal) key = engine.MV(value = 0) key.STATUS = 1 key.LOWER = -1 key.UPPER = 1 engine.Equation(hysteresis_equation(key)) Manipulated variables in this case are dispatch percentiles of the control signal called 'key' which will influence the problem dynamics. Where the key is the manipulated variable and hysteresis_equation is a function of the key value that should emulate a time dependent hysteresis. I have not given more details because there is no point, the problem resides in the implmentation of a non-linear hysteresis constraint in a GEKKO model. I have tried looking at binary variables, however, I do not understand how to get them to change value throughout the optimization using GEKKO. Tried calling an external function that returns True or False is not supported and yields @error: Equation Definition Equation without an equality (=) or inequality (>,<) I have also tried introducing booleans in an equation that resembles power == (can_on * key + (1-can_on) *b0 + can_off * 0 + (1-can_off) * key)/2. The booleans are controlled variables in the hysteresis_equation and are set to 1 or 0 depending on the state of hysteresis, but are not GEKKO variables. Thank you in advance for your help.
The easiest way to prevent the controller from turning off or on MVs too frequently is to use the u.MV_STEP_HOR option to specify the number of steps that it must be constant before it can move again. More details are in the documentation. Implementing a time-based condition is more complicated, but still possible. Below is an example script that has a minimum on-time of 2 cycles and a minimum off-time of 3 cycles. It uses logical conditions that switch the on_allow and off_allow states that control the inequality constraints on the MV derivative. from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator m = GEKKO() m.time = np.linspace(0,10,11) min_on_time = 2 min_off_time = 3 # Manipulated variable u = m.MV(0, lb=0, ub=1, integer=True) uc = m.Var(0); m.Equation(uc==u) # copy variable u.STATUS = 1 # allow optimizer to change is_on = m.Intermediate(u) # m.if3(u-0.5,0,1) is_off = m.Intermediate(1-is_on) on_time,off_time = m.Array(m.Var,2,value=0) m.Equation(on_time.dt()==is_on) m.Equation(off_time.dt()==is_off) on_td,off_td = m.Array(m.Var,2,value=0) m.delay(on_time,on_td,min_on_time+1) m.delay(off_time,off_td,min_off_time+1) off_allow = m.if3(on_time -on_td -min_on_time +0.5,0,1) on_allow = m.if3(off_time-off_td-min_off_time+0.5,0,1) m.Equation(off_allow *(uc.dt()+1)+ (1-off_allow) * uc.dt()>=0) m.Equation( on_allow *(uc.dt()-1)+ (1-on_allow) * uc.dt()<=0) # Process model x = m.Var(value=0,lb=0,ub=5) m.Equation(5*x.dt() == -x + 3*u) # Objective m.Minimize((x-2)**2) m.options.IMODE = 6 # control m.options.SOLVER = 1 m.solve(disp=True) plt.figure(figsize=(6,3.5)) plt.subplot(3,1,1) plt.step(m.time,u.value,'b-',label='u Profile') plt.plot(m.time,x.value,'r--',label='x Response') plt.legend(); plt.grid() ax = plt.gca() ax.xaxis.set_major_locator(MaxNLocator(integer=True)) plt.subplot(3,1,2) plt.plot(m.time,on_allow.value,'r--',label='on allow') plt.plot(m.time,off_allow.value,'k:',label='off allow') ax = plt.gca() ax.xaxis.set_major_locator(MaxNLocator(integer=True)) plt.legend(); plt.grid() plt.subplot(3,1,3) plt.plot(m.time,on_time.value,'r--',label='on time') plt.plot(m.time,off_time.value,'k:',label='off time') ax = plt.gca() ax.xaxis.set_major_locator(MaxNLocator(integer=True)) plt.legend(); plt.grid() plt.xlabel('Time'); plt.tight_layout() plt.savefig('results.png',dpi=300) plt.show()
2
1
78,813,399
2024-7-30
https://stackoverflow.com/questions/78813399/using-apply-in-polars
I am trying to create a new column using apply in polars. For this, I tried the following operation: df = df.with_columns( pl.col("AH_PROC_REALIZADO") .apply(get_procedure_description) .alias("proced_descr") ) But I'm getting the error: AttributeError: 'Expr' object has no attribute 'apply' The function I am trying to apply looks as follows. def get_procedure_description(cod): proceds = { '41612': 'CIRURGIA ONCOLOGICA', '30401': 'RADIOTERAPIA', ... } for proced in proceds.keys(): if cod[: len(proced)] == proced: return proceds[proced] return None
pl.Expr.apply was deprecated in favour of pl.Expr.map_elements in Polars release 0.19.0. Recently, pl.Expr.apply was removed in the release of Polars 1.0.0. You can adapt your code to the new version as follows. df.with_columns( pl.col("AH_PROC_REALIZADO") .map_elements(get_procedure_description, return_dtype=pl.String) .alias("proced_descr") )
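A self-contained example of the fix, using a trimmed-down version of the question's mapping and made-up codes:
import polars as pl

# trimmed-down version of the question's mapping, for illustration only
def get_procedure_description(cod):
    proceds = {"41612": "CIRURGIA ONCOLOGICA", "30401": "RADIOTERAPIA"}
    for proced in proceds:
        if cod[: len(proced)] == proced:
            return proceds[proced]
    return None

df = pl.DataFrame({"AH_PROC_REALIZADO": ["4161201", "3040155", "9999999"]})

print(
    df.with_columns(
        pl.col("AH_PROC_REALIZADO")
        .map_elements(get_procedure_description, return_dtype=pl.String)
        .alias("proced_descr")
    )
)
# "4161201" -> "CIRURGIA ONCOLOGICA", "3040155" -> "RADIOTERAPIA", "9999999" -> null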
2
6
78,810,432
2024-7-30
https://stackoverflow.com/questions/78810432/how-to-explode-multiple-list-columns-with-missing-values-in-python-polars
Given a Polars dataframe like below, how can I call explode() on both columns while expanding the null entry to the correct length to match up with its row?
shape: (3, 2)
┌───────────┬─────────────────────┐
│ x         ┆ y                   │
│ ---       ┆ ---                 │
│ list[i64] ┆ list[bool]          │
╞═══════════╪═════════════════════╡
│ [1]       ┆ [true]              │
│ [1, 2]    ┆ null                │
│ [1, 2, 3] ┆ [true, false, true] │
└───────────┴─────────────────────┘
Currently calling df.explode(["x", "y"]) will result in this error: polars.exceptions.ShapeError: exploded columns must have matching element counts I'm assuming there's not a built-in way. But I can't find/think of a way to convert that null into a list of correct length, such that the explode will work. Here, the required length is not known statically upfront. I looked into passing list.len() expressions into repeat_by(), but repeat_by() doesn't support null.
You were on the right track, trying to fill the missing values with a list of null values of the correct length. To make pl.Expr.repeat_by work with null, we need to ensure that the base expression is of a non-null type. This can be achieved by setting the dtype argument of pl.lit explicitly. Then, the list column of (lists of) nulls can be used to fill the null values in y. From there, exploding x and y simultaneously works as usual. ( df .with_columns( pl.col("y").fill_null( pl.lit(None, dtype=pl.Boolean).repeat_by(pl.col("x").list.len()) ) ) )
shape: (3, 2)
┌───────────┬─────────────────────┐
│ x         ┆ y                   │
│ ---       ┆ ---                 │
│ list[i64] ┆ list[bool]          │
╞═══════════╪═════════════════════╡
│ [1]       ┆ [true]              │
│ [1, 2]    ┆ [null, null]        │
│ [1, 2, 3] ┆ [true, false, true] │
└───────────┴─────────────────────┘
From here, df.explode("x", "y") should work as expected. Note: If there are more than two columns, which all might contain null values, one can combine the answer above with this answer to have a valid solution.
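Putting it together as a runnable snippet, reconstructing the example frame from the question:
import polars as pl

df = pl.DataFrame(
    {
        "x": [[1], [1, 2], [1, 2, 3]],
        "y": [[True], None, [True, False, True]],
    }
)

out = (
    df.with_columns(
        pl.col("y").fill_null(
            pl.lit(None, dtype=pl.Boolean).repeat_by(pl.col("x").list.len())
        )
    )
    .explode("x", "y")
)
print(out)  # 6 rows; the second input row becomes (1, null) and (2, null)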
7
5
78,813,024
2024-7-30
https://stackoverflow.com/questions/78813024/fastest-way-to-process-an-expanding-linear-sequence-in-python
I have the following conditions: The number u(0) = 1 is the first one in u. For each x in u, then y = 2 * x + 1 and z = 3 * x + 1 must be in u also. There are no other numbers in u. No duplicates should be present. The numbers must be in ascending sequential order Ex: u = [1, 3, 4, 7, 9, 10, 13, 15, 19, 21, 22, 27, ...] The program wants me to give the value of a member at a given index. I've already found ways to solve this using insort and I hacked together a minimal binary search tree, as well. Unfortunately, this process needs to be faster than what I have and I'm at a loss as to what to do next. I thought the BST would do it, but it is not fast enough. Here's my BST code: class BSTNode: def __init__(self, val=None): self.left = None self.right = None self.val = val def insert(self, val): if not self.val: self.val = val return if self.val == val: return if val < self.val: if self.left: self.left.insert(val) return self.left = BSTNode(val) return if self.right: self.right.insert(val) return self.right = BSTNode(val) def inorder(self, vals): if self.left is not None: self.left.inorder(vals) if self.val is not None: vals.append(self.val) if self.right is not None: self.right.inorder(vals) return vals and here's my function: from sys import setrecursionlimit def twotimeslinear(n): #setrecursionlimit(2000) i = 0 u = [1] ended = False bst = BSTNode() bst.insert(1) while i < n and not ended: for j in range(2, 4): k = 1 cur = j * bst.inorder([])[i] + 1 bst.insert(cur) if len(u) == n: ended = True break i+= 1 return bst.inorder([])[n] I just need directions as to what I could do to make the process faster. I can solve the problem if I only knew what I was missing. I'm probably overlooking some data structure that would work better, but I don't even know what to look for. Thank you for any help.
Generating and merging the ys and zs: from heapq import merge from itertools import groupby def twotimeslinear(n): u = [1] ys = (2*x+1 for x in u) zs = (3*x+1 for x in u) for x, _ in groupby(merge(ys, zs)): u.append(x) if n < len(u): return u[n] print(*map(twotimeslinear, range(20))) Attempt This Online! Takes ~0.05 seconds for your limit index 60,000 and ~0.7 seconds for index 1,000,000. Alternative basic implementation: def twotimeslinear(n): u = [1] i = j = 0 while n >= len(u): y = 2*u[i] + 1 z = 3*u[j] + 1 m = min(y, z) u.append(m) if y == m: i += 1 if z == m: j += 1 return u[n] print(*map(twotimeslinear, range(20))) Attempt This Online! And another version, throwing tons of batteries at it :-) from heapq import merge from itertools import groupby, tee, chain, islice, repeat from operator import itemgetter, mul, add def twotimeslinear(n): parts = [[1]] whole = chain.from_iterable(parts) output, feedback1, feedback2 = tee(whole, 3) ys = map(1 .__add__, map(2 .__mul__, feedback1)) zs = map(add, map(mul, repeat(3), feedback2), repeat(1)) # different way just for fun merged = map(itemgetter(0), groupby(merge(ys, zs))) parts.append(merged) return next(islice(output, n, None)) print(*map(twotimeslinear, range(20))) After setting it up with Python code, this runs entirely in C code. It's a silly hobby of mine. Attempt This Online!
2
5
78,802,454
2024-7-27
https://stackoverflow.com/questions/78802454/static-files-not-loading-in-production-in-django-react-application
I'm running a Django application in a Docker container, and I'm having trouble serving static files in production. Everything works fine locally, but when I deploy to production, the static files don't load, and I get 404 errors. Here are the relevant parts of my setup: Django settings.py: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'build')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] STATIC_URL = '/static/' MEDIA_URL = '/media/' STATIC_ROOT = '/vol/web/static' STATICFILES_DIRS = [os.path.join(BASE_DIR, 'build', 'static')] The build folder was generated by npm run build command in a react application. After running collectstatic, the volume /vol/web/static is correctly populated. However, the browser shows 404 errors for the static files, e.g., GET https://site/static/js/main.db771bdd.js [HTTP/2 404 161ms] GET https://site/static/css/main.4b763604.css [HTTP/2 404 160ms] Loading failed for the <script> with source β€œhttps://mysite/static/js/main.db771bdd.js”. These files exist in the build/static directory, but I thought the browser should use the static files collected into /vol/web/static. Nginx Configuration: server { listen ${LISTEN_PORT}; location /static { alias /vol/static; } location / { uwsgi_pass ${APP_HOST}:${APP_PORT}; include /etc/nginx/uwsgi_params; client_max_body_size 10M; } } Dockerfile: FROM python:3.9-alpine ENV PYTHONUNBUFFERED 1 ENV PATH="/scripts:${PATH}" RUN pip install --upgrade "pip<24.1" COPY ./requirements.txt /requirements.txt RUN apk add --update --no-cache postgresql-client jpeg-dev \ && apk add --update --no-cache --virtual .tmp-build-deps \ gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev libffi-dev \ && pip install -r /requirements.txt \ && apk del .tmp-build-deps RUN mkdir -p /app /vol/web/media /vol/web/static RUN adduser -D user RUN chown -R user:user /vol /app COPY ./app /app COPY ./scripts /scripts COPY ./requirements.txt /requirements.txt RUN chmod -R 755 /vol/web /app /scripts \ && chmod +x /scripts/* USER user WORKDIR /app VOLUME /vol/web CMD ["entrypoint.sh"] For further context, I deployed the Django application and the proxy in separated containers inside a ECS task: [ { "name": "api", "image": "${app_image}", "essential": true, "memoryReservation": 256, "environment": [ {"name": "DJANGO_SECRET_KEY", "value": "${django_secret_key}"}, {"name": "DB_HOST", "value": "${db_host}"}, {"name": "DB_NAME", "value": "${db_name}"}, {"name": "DB_USER", "value": "${db_user}"}, {"name": "DB_PASS", "value": "${db_pass}"}, {"name": "ALLOWED_HOSTS", "value": "${allowed_hosts}"}, {"name": "S3_STORAGE_BUCKET_NAME", "value": "${s3_storage_bucket_name}"}, {"name": "S3_STORAGE_BUCKET_REGION", "value": "${s3_storage_bucket_region}"} ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "${log_group_name}", "awslogs-region": "${log_group_region}", "awslogs-stream-prefix": "api" } }, "portMappings": [ { "containerPort": 9000, "hostPort": 9000 } ], "mountPoints": [ { "readOnly": false, "containerPath": "/vol/web", "sourceVolume": "static" } ] }, { "name": "proxy", "image": "${proxy_image}", "essential": true, "portMappings": [ { "containerPort": 8000, "hostPort": 8000 } ], "memoryReservation": 256, "environment": [ {"name": "APP_HOST", "value": 
"127.0.0.1"}, {"name": "APP_PORT", "value": "9000"}, {"name": "LISTEN_PORT", "value": "8000"} ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "${log_group_name}", "awslogs-region": "${log_group_region}", "awslogs-stream-prefix": "proxy" } }, "mountPoints": [ { "readOnly": true, "containerPath": "/vol/static", "sourceVolume": "static" } ] } ] The entrypoint.sh script called by the Dockerfile is given by #!/bin/sh set -e python manage.py collectstatic --noinput --settings=app.settings.staging python manage.py wait_for_db --settings=app.settings.staging python manage.py wait_for_es --settings=app.settings.staging python manage.py migrate --settings=app.settings.staging python manage.py search_index --rebuild --settings=app.settings.staging -f uwsgi --socket :9000 --workers 4 --master --enable-threads --module app.wsgi --env DJANGO_SETTINGS_MODULE=app.settings.staging In terraform, my code is essentially equal to the configuration found here I suspect there might be an issue with file permissions, but after I change the permission the errors continue. Any insights on what might be going wrong or how to debug this further? Any help would be greatly appreciated!
I think the error lies in your nginx config. You are setting the alias for the static route to /vol/static, instead of /vol/web/static: server { listen ${LISTEN_PORT}; location /static { alias /vol/web/static; } ... }
2
1
78,811,382
2024-7-30
https://stackoverflow.com/questions/78811382/random-samplepopulation-x-sometimes-not-contained-in-random-samplepopulation
EDIT: Edited original post replacing my code with an MRE from user @no comment. I noticed a seemingly non-intuitive behaviour using a seeded random.sample(population, k) call to sample from a list of files in a directory. I would expect that sampling 100 items with k=100 while using a seed, guarantees that when I subsequently sample with k=800, the 100 sampled items are guaranteed to be within the list of 800 sampled items. Below is an example for a number of population sizes and seeds: import random for n in 1000, 2000, 5000: print(end=f'{n=}: ') for i in range(10): random.seed(i) x = random.sample(range(n), 100) random.seed(i) y = random.sample(range(n), 800) print(sum(i==j for i,j in zip(x,y[:100])), end=' ') print() The output shows the expected perfect match when sampling from a population of 1000 or 5000, but somehow randomly only a partial match when sampling from 2000: n=1000: 100 100 100 100 100 100 100 100 100 100 n=2000: 93 61 38 44 68 36 39 33 71 86 n=5000: 100 100 100 100 100 100 100 100 100 100 Similarly, after running my sampling script in some directories, I get mixed results. From some directories, the 100 are within the 800, but for others there are differences for a few files, some are here and not there and vice versa. Are there any sources of randomness that I am not aware of? I tried sorting the os.listdir() call that lists directory contents but this didn't change anything. I know I can refactor the script to work by first sampling a greater value and slicing the sample list, but I would expect my original script to work in the same way too.
The exact algorithm isn't documented/guaranteed. You're likely using CPython, which chooses one of two slightly different algorithms for efficiency, depending on whether you want a small or large percentage of the population. Both algorithms select one element at a time and append it to the sample, but they differ in how to select the next element and how to keep track of the selected or remaining elements. Each of the two algorithms actually has the property you desire. But what can happen is that you sample 100 with one algorithm and 800 with the other, and then they don't match. With n=1000, sampling 100 or 800 both use the first algorithm (the one for large percentages). Hence the perfect match of the first 100. With n=2000, sampling 100 or 800 use different algorithms (sampling 100 is now a small percentage). Hence the difference. With n=5000, sampling 100 or 800 both use the second algorithm (the one for small percentages). Hence the perfect match of the first 100. (If you noticed and are wondering why 100 out of 1000 is a "large percentage" while 800 out of 5000 is a "small percentage": It's not a fixed percentage, see the implementation for the formula.)
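Given that, the workaround the question already mentions (draw one sample with the largest k and slice it) is the reliable way to get guaranteed-nested samples. A sketch for the directory use case — the helper name is made up:
import os
import random

def nested_samples(directory, seed, sizes):
    """Return {k: sample of size k}, where each smaller sample
    is a prefix of every larger one."""
    population = sorted(os.listdir(directory))  # sort for a reproducible population
    random.seed(seed)
    biggest = random.sample(population, max(sizes))
    return {k: biggest[:k] for k in sizes}

# e.g. nested_samples("my_dir", seed=0, sizes=[100, 800])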
3
5
78,812,183
2024-7-30
https://stackoverflow.com/questions/78812183/split-strings-in-a-series-convert-to-array-and-average-the-values
I have a Pandas Series that has these unique values: array(['17', '19', '21', '20', '22', '23', '12', '13', '15', '24', '25', '18', '16', '14', '26', '11', '10', '12/16', '27', '10/14', '16/22', '16/21', '13/17', '14/19', '11/15', '10/15', '15/21', '13/19', '13/18', '32', '28', '12/15', '29', '42', '30', '31', '34', '46', '11/14', '18/25', '19/26', '17/24', '19/24', '17/23', '13/16', '11/16', '15/20', '36', '17/25', '19/25', '17/22', '18/26', '39', '41', '35', '50', '9/13', '33', '10/13', '9/12', '93/37', '14/20', '10/16', '14/18', '16/23', '37', '9/11', '37/94', '20/54', '22/31', '22/30', '23/33', '44', '40', '50/95', '38', '16/24', '15/23', '15/22', '18/23', '16/20', '37/98', '19/27', '38/88', '23/31', '14/22', '45', '39/117', '28/76', '33/82', '15/19', '23/30', '47', '46/115', '14/21', '17/18', '25/50', '12/18', '12/17', '21/28', '20/27', '26/58', '22/67', '22/47', '25/51', '35/83', '39/86', '31/72', '24/56', '30/80', '32/85', '42/106', '40/99', '30/51', '21/43', '52', '56', '25/53', '34/83', '30/71', '27/64', '35/111', '26/62', '32/84', '39/95', '18/24', '22/29', '42/97', '48', '55', '58', '39/99', '49', '43', '40/103', '22/46', '54/133', '25/54', '36/83', '29/72', '28/67', '35/109', '25/62', '14/17', '42/110', '52/119', '20/60', '46/105', '25/56', '27/65', '25/74', '21/49', '29/71', '26/59', '27/62'], dtype=object) The ones that have the '/', I want to split these into arrays and then average their values. One simpler but a flawed approach is to simply extract the first value: master_data["Cmb MPG"].str.split('/').str[0].astype('int8') However, what I truly require is the two values being averaged. I have tried several commands and this one: np.array(master_data["Cmb MPG"].str.split('/')).astype('int8').mean() Should ideally do the job, but I get a ValueError followed by a TypeError: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list' The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) Cell In[88], line 1 ----> 1 np.array(master_data["Cmb MPG"].str.split('/')).astype('int8') ValueError: setting an array element with a sequence. The slice() method returns a Series but it won't proceed either with the splitting of strings. What is required is: '18/25' ---> [18, 25] ---> 22 (rounded)
I would use extractall and groupby.mean: s = pd.Series(['10', '12/16', '27', '10/14', '16/22', '16/21', '13/17']) out = (s.str.extractall(r'(\d+)')[0].astype(int).groupby(level=0).mean() .round().astype(int) ) You could also go with split and mean, but this generates a more expensive intermediate and will not scale as well if you have many items (1/2/3/4/5): out = (s.str.split('/', expand=True).astype(float).mean(axis=1) .round().astype(int) ) Output: 0 10 1 14 2 27 3 12 4 19 5 18 6 15 dtype: int64
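If the goal is to attach the rounded averages back to the original frame, the same expression can be assigned directly, since the groupby result is indexed by the original row labels (column names as in the question; this assumes every row contains at least one number):
master_data["Cmb MPG avg"] = (
    master_data["Cmb MPG"]
    .str.extractall(r"(\d+)")[0]
    .astype(int)
    .groupby(level=0)
    .mean()
    .round()
    .astype("int16")
)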
3
2
78,811,781
2024-7-30
https://stackoverflow.com/questions/78811781/rolling-kpi-calculations-in-polars-index-not-visible
How to add rolling KPI's to original dataframe in polars? when I do group by, I am not seeing an index and so cant join? I want to keep all original columns in dataframe intact but add rolling kpi to the dataframe? Pandas code: groups_df = df[mask_for_filter].groupby(['group_identifier']) rolling_kpi = groups_df[['col_1', 'col_2']].rolling(15, min_periods=1, center=True).median().reset_index(level='group_identifier').sort_index() df.loc[mask_for_filter, 'col_1_median'] = rolling_kpi['col_1'] df.loc[mask_for_filter, 'col_2_median'] = rolling_kpi['col_2'] Polars: df = df.filter(mask_for_filter).group_by('group_identifier').agg( col_1_median=pl.col('col_1').rolling_median(15, min_periods=1, center=True), col_2_median=pl.col('col_2').rolling_median(15, min_periods=1, center=True)) Code: result_df should be same as df, except that with extra rolling median columns which is not happening in above....plus there is no index so can't merge/join import polars as pl import numpy as np np.random.seed(0) data = { 'group_identifier': np.random.choice(['A', 'B', 'C'], 100), 'col_1': np.random.randn(100).round(2), 'col_2': np.random.randn(100).round(2), 'other_col': np.random.randn(100).round(2) } df = pl.DataFrame(data) mask_for_filter = df['col_1'] > 0 result_df = df.filter(mask_for_filter).group_by('group_identifier').agg( col_1_median=pl.col('col_1').rolling_median(15, min_periods=1, center=True), col_2_median=pl.col('col_2').rolling_median(15, min_periods=1, center=True) )
It looks like you don't need to group by, but to run rolling_median() over window instead. over() to limit calculation to be within group_identifier. name.suffix() to assign names to the new columns. If you only need filtered rows: ( df .filter(mask_for_filter) .with_columns( pl.col("col_1", "col_2") .rolling_median(15, min_periods=1, center=True) .over("group_identifier") .name.suffix("_median") ) ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ group_identifier ┆ col_1 ┆ col_2 ┆ other_col ┆ col_1_median ┆ col_2_median β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════════β•ͺ══════════════β•ͺ══════════════║ β”‚ B ┆ 0.01 ┆ 1.68 ┆ 1.12 ┆ 0.83 ┆ -0.46 β”‚ β”‚ B ┆ 0.37 ┆ -0.26 ┆ 0.04 ┆ 0.85 ┆ -0.66 β”‚ β”‚ A ┆ 0.72 ┆ -0.38 ┆ 0.47 ┆ 0.93 ┆ -0.44 β”‚ β”‚ A ┆ 0.36 ┆ -0.51 ┆ -0.4 ┆ 0.86 ┆ -0.5 β”‚ ... β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Or, if you need this in your original DataFrame when/then() twice - top one to only assign rolling median to rows which has col_1 > 0 and second one to not include rows to be filtered into the calculation of rolling median. ( df .with_columns( pl.when(pl.col("col_1") > 0).then( pl.when(pl.col("col_1") > 0).then(pl.col("col_1", "col_2")) .rolling_median(15, min_periods=1, center=True) .over("group_identifier") ) .name.suffix("_median") ) ) If you want to add more aggregates you could generalize it (although I'm not sure if it's readable enough to go to production): ( df .with_columns( pl.when(pl.col("col_1") > 0).then( transform( pl.when(pl.col("col_1") > 0).then(pl.col("col_1", "col_2")), 15, min_periods=1, center=True ) .over("group_identifier") ) .name.suffix(suffix) for transform, suffix in [ (pl.Expr.rolling_median, "_median"), (pl.Expr.rolling_mean, "_mean"), ] ) )
4
1
78,811,505
2024-7-30
https://stackoverflow.com/questions/78811505/why-does-order-of-the-data-matter-for-neural-network
Recently, I discovered the really weird behaviour of my an AI model. I wanted to build AI model that would try and guess the implicit functions based on the data I gave it. For example, the equation of the flower is: And this is how I wrote it in the Numpy array: K = 0.75 a = 9 def f(x): return x - 2 - 2*np.floor((x - 1)/2) def r1(x): return abs(6 * f(a * K * x / 2 / np.pi)) - a/2 t = np.linspace(0, 17 * math.pi, 1000) x = np.cos(t) * r1(t) y = np.sin(t) * r1(t) points = np.vstack((x, y)).T After that, I tried to experiment a bit and allowed my AI to actually try and guess the shape of this flower! At first try, it actually got it written. Here it is: Well, I got a great example. After that, I tried to experiment and checked what would happen if I shuffled the point array, and I completely got devasting results! And I couldn't explain why the order of cartesian coordinates mattered to the approximate flower implicit function. Can anyone explain it? Here is the code for AI. # Define the neural network model model = Sequential() model.add(Dense(128, input_dim=2, activation='relu')) model.add(Dense(64, activation='relu')) model.add(Dense(2, activation='linear')) # Compile the model model.compile(optimizer='adam', loss='mean_squared_error') # Train the model model.fit(points, points, epochs=100, batch_size=32)
Actually, it's a plotting issue. The order of the points matters for a line plot, but not for a scatter plot: import matplotlib.pyplot as plt shuffled_points = np.random.permutation(points) plt.scatter(*points.T) # .scatter() draws individual points plt.scatter(*shuffled_points.T) # so the order does NOT matter plt.plot(*points.T, color="r") # .plot() connects the points in the given order plt.plot(*shuffled_points.T, color="r") # creating a sequence of lines between consecutive points
4
1
78,811,969
2024-7-30
https://stackoverflow.com/questions/78811969/small-algorithm-to-create-a-list-of-y-axes-in-python
On python, I have a function called get_list_axes which takes 2 parameters : A list of unique captors - let's call them ["A", "B", "C", D"]. A list of captors from the unique captors above which should share the same y-axis - For instance ["A", "D"] or ["A", "C"] or ["A", "B", "D"], etc. The goal of the function is to return a list of y-axis depending on the list of captors to merge on the same y-axis. When there is nothing to merge, it should then return ["y1", "y2", "y3", "y4"]. The function is as follows : def get_list_axes(list_ana_captors: list, merge_captors: list) -> list: counter_no_merge, counter_merge = 1, 1 list_axes = ["y1"] for captor in list_ana_captors[1:]: if captor not in merge_captors or captor == merge_captors[0]: if captor == merge_captors[0]: counter_merge = counter_no_merge + 1 counter_no_merge = counter_no_merge + 1 if captor not in merge_captors: list_axes.append(f"y{counter_no_merge}") else: list_axes.append(f"y{counter_merge}") return list_axes Now, let's define our list of unique captors list_captors = ["A", "B", "C", "D"]. Basically, my problem is that whenever the list of captors to merge is in the same order than the list of captors list_captors, everything works, for instance : get_list_axes(list_captors, ["A", "B", "C", "D"]) = ['y1', 'y1', 'y1', 'y1'] get_list_axes(list_captors, ["B", "C", "D"]) = ['y1', 'y2', 'y2', 'y2'] get_list_axes(list_captors, ["A", "B"]) = ['y1', 'y1', 'y2', 'y3'] get_list_axes(list_captors, ["A", "C"]) = ['y1', 'y2', 'y1', 'y3'] get_list_axes(list_captors, ["A", "D"]) = ['y1', 'y2', 'y3', 'y1'] get_list_axes(list_captors, ["B", "D"]) = ['y1', 'y2', 'y3', 'y2'] However, if we try in reverse : get_list_axes(list_captors, ["D", "A"]) = ['y1', 'y2', 'y3', 'y4'] instead of ['y1', 'y2', 'y3', 'y1'] get_list_axes(list_captors, ["D", "C"]) = ['y1', 'y2', 'y1', 'y3'] instead of ['y1', 'y2', 'y3', 'y3'] get_list_axes(list_captors, ["D", "B"]) = ['y1', 'y1', 'y2', 'y3'] instead of ['y1', 'y2', 'y3', 'y2'] get_list_axes(list_captors, ["D", "C", "A", "B"]) = ['y1', 'y1', 'y1', 'y2'] instead of ['y1', 'y1', 'y1', 'y1'] Is there an easier way than what I am currently doing ? Session Info : ----- session_info 1.0.0 ----- IPython 8.12.3 jupyter_client 8.6.2 jupyter_core 5.7.2 ----- Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:57:54) [MSC v.1924 64 bit (AMD64)] Windows-10-10.0.19041-SP0 ----- Session information updated at 2024-07-30 15:15
Your original get_list_axes function fails when merge_captors is not in the same order as list_ana_captors. This is because it relies on the specific order of captors, causing incorrect y-axis assignments. To fix this, we map each captor to its y-axis, ensuring captors in merge_captors share the same y-axis regardless of their order. Code: def get_list_axes(list_ana_captors: list, merge_captors: list) -> list: list_axes = [] captor_to_axis = {} current_y = 1 for captor in list_ana_captors: if captor in merge_captors: if merge_captors[0] not in captor_to_axis: captor_to_axis[merge_captors[0]] = f"y{current_y}" current_y += 1 captor_to_axis[captor] = captor_to_axis[merge_captors[0]] else: captor_to_axis[captor] = f"y{current_y}" current_y += 1 for captor in list_ana_captors: list_axes.append(captor_to_axis[captor]) return list_axes Hope it will help, if you have any question feel free to ask.
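For example, running this version on the reversed merge lists from the question now gives the expected assignments:

list_captors = ["A", "B", "C", "D"]
print(get_list_axes(list_captors, ["D", "A"]))            # ['y1', 'y2', 'y3', 'y1']
print(get_list_axes(list_captors, ["D", "C", "A", "B"]))  # ['y1', 'y1', 'y1', 'y1']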
2
1
78,810,300
2024-7-30
https://stackoverflow.com/questions/78810300/pythonic-approach-to-avoid-nested-loops-for-string-concatenation
I want to find all 5-digit strings for which the first three digits are in my first list, the second through fourth are in my second, and the third through fifth are in my last list: l0=["123","567","451"] l1=["234","239","881"] l2=["348","551","399"] should thus yield: ['12348', '12399']. I have therefore written a function is_successor(a,b) that tests if a and b overlap: def is_successor(a:str,b:str)->bool: """tests if strings a and b overlap""" return a[1:]==b[:2] I can then achieve my goal by writing this nested loop/check structure, which basically appends strings back to front, resulting in all valid strings: pres=[] for c in l2: for b in l1: if is_successor(b,c): for a in l0: if is_successor(a,b): pres.append(a[0]+b[0]+c) pres I know I could write it as a list comprehension, but for my original data I have more nested lists and I lose readability even in a list comprehension. I start from l2 -> l0 because in my original data the lists become longer the lower the index is, and thus I can filter out more cases early this way. A single loop through all combinations of l0,l1,l2 and checking the succession of all items a,b,c simultaneously would work, but it tests way more unnecessary combinations than my current construct does. Question: How can this nested loop and conditional checking be abstracted? Is there a pythonic way to capture the repetition of for -> is_successor()? A larger input might be: primes = [2, 3, 5, 7, 11, 13, 17] lsts=[ [ str(j).zfill(3) for j in range(12,988) if not j%prime ] for prime in primes ]
I would use itertools.product to define a function that will perform the check and append the data: import itertools def get_list(l1,l2): #THIS FUNCTION WILL TEST ALL THE PRODUCTS AND RETURN ONLY THE CORRECT ONES return [a+b[2:] for a,b in list(itertools.product(l1,l2)) if a[1:] == b[:2]] then you can nest the function like this: get_list(l0,get_list(l1,l2)) you will get the expected result: ['12348', '12399'] EDIT: for a long list use reduce. Let's call l the list of input lists. In your specific case l = list([l0,l1,l2]) Reverse the list to apply from the end till the beginning l.reverse() and the final code: from functools import reduce reduce(lambda x,y:get_list(x,y),l) Note: in this case you want to change the inputs of the "get_list" function as it will go l2 then l1 then l0. Just change the def of get_list like this: def get_list(l2,l1)
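Putting both pieces together, here is one possible self-contained sketch of the same approach; extend here is just the get_list idea with renamed, swapped arguments so that reduce can walk the lists from the last one back to the first:

from functools import reduce
from itertools import product

def extend(acc, nxt):
    # prepend the first character of every string in `nxt` that overlaps a partial result in `acc`
    return [a[0] + b for a, b in product(nxt, acc) if a[1:] == b[:2]]

l0 = ["123", "567", "451"]
l1 = ["234", "239", "881"]
l2 = ["348", "551", "399"]

lsts = [l0, l1, l2]
result = reduce(extend, reversed(lsts[:-1]), lsts[-1])
print(result)  # ['12348', '12399']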
3
1
78,802,780
2024-7-28
https://stackoverflow.com/questions/78802780/python-beautiful-soup-not-loading-table-values
I am unsure how to extract the 'Settle' column for each state (New South Wales, Victoria, Queensland, South Australia) from this website: https://www.asxenergy.com.au/futures_au It seems the numerical data isn't showing. My starting code is: from bs4 import BeautifulSoup from urllib.request import urlopen url = "https://www.asxenergy.com.au/futures_au" page = urlopen(url) html = page.read().decode("utf-8") soup = BeautifulSoup(html, "html.parser") print(soup.get_text())
As stated, the data comes from asxenergy.com.au/futures_au/dataset Use pandas to get the <table> tags. It's not ideally structured, so need a bit of processing here. import pandas as pd url = 'https://www.asxenergy.com.au/futures_au/dataset' dfs = pd.read_html(url) states = dfs[0].iloc[0].to_list() table_names = list(dfs[1].columns[0]) data_dfs = dfs[2:] output = {} for count, state in enumerate(states): start_idx = count * 6 end_idx = start_idx + 6 set_idx = list(range(start_idx, end_idx)) temp_table_names = [table_names[i] for i in set_idx] temp_data_dfs = [data_dfs[i] for i in set_idx] data_dict = dict(zip(temp_table_names, temp_data_dfs)) output[state] = data_dict Output: for state, tables in output.items(): for table_name, df in tables.items(): print(state, '-', table_name) print(df, '\n') New South Wales - BMth BMth Bid Ask Last +/- Vol Settle 0 Jul24 - - - - - 130.75 1 Aug24 - - - - - 117.74 2 Sep24 - - - - - 97.43 3 Oct24 - - - - - 110.00 New South Wales - BQtr BQtr Bid Ask Last +/- Vol Settle 0 Q324 113.50 115.50 115.50 - 40 115.50 1 Q424 109.50 110.00 110.00 - 16 110.00 2 Q125 126.10 126.70 126.10 - 16 126.10 3 Q225 130.15 130.50 130.15 - 21 130.15 4 Q325 125.00 126.85 126.18 - 23 126.18 New South Wales - BStr BStr Bid Ask Last +/- Vol Settle 0 FY25 120.29 121.00 - - - 120.38 1 CY25 121.00 122.50 122.00 - 14 122.00 2 FY26 120.00 122.00 121.00 -0.19 4 121.19 3 CY26 120.50 121.25 121.50 +0.69 7 120.81 4 FY27 120.25 121.00 - - - 120.43 New South Wales - Caps Caps Bid Ask Last +/- Vol Settle 0 Q324 10.00 11.00 10.00 - 17 10.00 1 Q424 20.25 21.50 20.50 +0.25 10 20.25 2 Q125 41.00 43.00 - - - 41.46 3 Q225 25.50 27.00 - - - 26.66 4 Q325 25.00 - - - - 26.54 New South Wales - CapStr CapStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 24.49 1 CY25 - - - - - 28.96 2 FY26 29.40 - - - - 29.40 3 CY26 28.50 30.00 29.00 -0.35 1 29.35 4 FY27 29.00 30.00 - - - 29.93 New South Wales - PQtr PQtr Bid Ask Last +/- Vol Settle 0 Q324 116.00 160.00 - - - 160.00 1 Q424 125.00 160.00 - - - 145.00 2 Q125 - - - - - 170.73 3 Q225 - - - - - 150.00 4 Q325 - - - - - 140.00 Victoria - PkStr PkStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 156.47 1 CY25 - - - - - 144.42 2 FY26 - - - - - 147.97 3 CY26 - - - - - 160.00 4 FY27 - - - - - 148.94 Victoria - BMth BMth Bid Ask Last +/- Vol Settle 0 Jul24 - - - - - 142.58 1 Aug24 - - - - - 108.21 2 Sep24 - 80.00 - - - 71.28 3 Oct24 - - - - - 58.95 Victoria - BQtr BQtr Bid Ask Last +/- Vol Settle 0 Q324 107.75 108.50 107.75 - 18 107.75 1 Q424 58.50 58.95 59.00 +0.05 14 58.95 2 Q125 73.40 74.00 73.50 - 13 73.50 3 Q225 93.00 93.80 93.80 - 10 93.80 4 Q325 88.00 89.25 88.50 - 5 88.50 Victoria - BStr BStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 83.53 1 CY25 76.25 76.85 76.65 +0.29 2 76.36 2 FY26 70.00 73.30 - - - 72.22 3 CY26 65.50 70.24 69.80 +0.25 8 69.55 4 FY27 - - - - - 70.17 Victoria - Caps Caps Bid Ask Last +/- Vol Settle 0 Q324 6.75 7.5 8.15 +0.65 18 7.5 1 Q424 5.00 7.0 - - - 6.3 2 Q125 27.75 29.0 28.00 - 20 28.0 3 Q225 8.25 11.5 - - - 10.0 4 Q325 10.50 11.5 - - - 10.5 Victoria - CapStr CapStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 12.88 1 CY25 13.00 15.25 - - - 14.25 2 FY26 13.85 15.25 - - - 14.49 3 CY26 14.50 16.00 - - - 14.81 4 FY27 15.40 - - - - 15.42 Queensland - PQtr PQtr Bid Ask Last +/- Vol Settle 0 Q324 111.11 - - - - 140.0 1 Q424 - - - - - 64.5 2 Q125 - - - - - 103.0 3 Q225 - - - - - 120.0 4 Q325 - - - - - 119.0 Queensland - PkStr PkStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 107.07 1 CY25 - - - - - 102.24 2 FY26 - - - - - 77.04 3 CY26 - - - - 
- 60.00 4 FY27 - - - - - 60.00 Queensland - BMth BMth Bid Ask Last +/- Vol Settle 0 Jul24 - - - - - 108.07 1 Aug24 - - - - - 89.86 2 Sep24 - - - - - 89.86 3 Oct24 - - - - - 97.00 Queensland - BQtr BQtr Bid Ask Last +/- Vol Settle 0 Q324 96.00 101.75 - - - 96.00 1 Q424 96.75 97.50 97.00 - 12 97.00 2 Q125 133.00 133.50 133.50 - 18 133.50 3 Q225 99.80 100.45 99.80 - 31 99.80 4 Q325 95.00 96.75 96.01 - 15 96.01 Queensland - BStr BStr Bid Ask Last +/- Vol Settle 0 FY25 106.50 109.75 - - - 106.45 1 CY25 104.00 104.50 104.00 -0.22 7 104.22 2 FY26 99.50 100.00 99.75 -0.12 4 99.87 3 CY26 97.00 98.00 97.40 -0.02 2 97.42 4 FY27 - 94.70 94.50 +0.11 2 94.39 Queensland - Caps Caps Bid Ask Last +/- Vol Settle 0 Q324 7.00 9.00 8.25 - 15 8.25 1 Q424 17.25 18.90 19.00 +0.10 1 18.90 2 Q125 42.00 45.25 45.00 - 1 45.00 3 Q225 - 17.00 - - - 17.00 4 Q325 - - - - - 16.84 South Australia - CapStr CapStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 22.18 1 CY25 - 23.85 - - - 23.74 2 FY26 - 23.30 - - - 23.19 3 CY26 21.50 23.50 22.99 - 10 22.99 4 FY27 - - - - - 22.78 South Australia - PQtr PQtr Bid Ask Last +/- Vol Settle 0 Q324 95.0 - - - - 95.0 1 Q424 100.0 - - - - 100.0 2 Q125 160.0 - - - - 160.0 3 Q225 100.0 - - - - 100.0 4 Q325 90.0 - - - - 90.0 South Australia - PkStr PkStr Bid Ask Last +/- Vol Settle 0 FY25 - - - - - 113.53 1 CY25 - - - - - 107.21 2 FY26 - - - - - 69.79 3 CY26 - - - - - 54.00 4 FY27 - - - - - 54.00 South Australia - BMth BMth Bid Ask Last +/- Vol Settle 0 Jul24 - - - - - 129.78 1 Aug24 - - - - - 129.78 2 Sep24 - - - - - 129.78 3 Oct24 - - - - - 67.32 South Australia - BQtr BQtr Bid Ask Last +/- Vol Settle 0 Q324 - 154.75 - - 2 129.78 1 Q424 - - - - 2 67.32 2 Q125 108.50 120.00 - - 2 117.08 3 Q225 - 123.50 - - 2 116.12 4 Q325 - - - - - 119.00 South Australia - BStr BStr Bid Ask Last +/- Vol Settle 0 FY25 - - 107.50 - 2 107.50 1 CY25 - - - - - 105.90 2 FY26 - 108.00 - - - 104.38 3 CY26 - - - - - 103.45 4 FY27 100.00 105.00 - - - 103.14
2
1
78,810,121
2024-7-30
https://stackoverflow.com/questions/78810121/python-regex-to-match-the-first-repetition-of-a-digit
Examples: For 0123123123, 1 should be matched since the 2nd 1 appears before the repetition of any other digit. For 01234554321, 5 should be matched since the 2nd 5 appears before the repetition of any other digit. Some regexes that I have tried: The below works for the 1st but not the 2nd example. It matches 1 instead because 1 is the first digit that appears in the string which is subsequently repeated. import re m = re.search(r"(\d).*?\1", string) print(m.group(1)) The below works for the 2nd but not the 1st example. It matches 3 instead - in particular the 2nd and 3rd occurrence of the digit. I do not know why it behaves that way. import re m = re.search(r"(\d)(?!(\d).*?\2).*?\1", string) print(m.group(1))
One idea: capture the end of the string and add it in the negative lookahead (group 2 here): (\d)(?=.*?\1(.*))(?!.*?(\d).*?\3.+?\2$) This way you can control where the subpattern .*?(\d).*?\3 in the negative lookahead ends. If .+?\2$ succeeds, that means there's another digit that is repeated before the one in group 1. I anchored the pattern for the regex101 demo with ^.*?, but you don't need to do that with the re.search method. Another way: reverse the string and find the last repeated digit: re.search(r'^.*(\d).*?\1', string[::-1]).group(1)
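To illustrate, here is a quick check of the reversed-string idea against the two examples from the question (a minimal sketch; it assumes at least one digit in the string repeats, otherwise re.search returns None):

import re

def first_repeated_digit(s):
    # reverse the string and find the last repeated digit (the second approach above)
    return re.search(r'^.*(\d).*?\1', s[::-1]).group(1)

print(first_repeated_digit("0123123123"))   # 1
print(first_repeated_digit("01234554321"))  # 5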
19
12
78,807,662
2024-7-29
https://stackoverflow.com/questions/78807662/how-to-implement-pixel-shuffle
In TensorFlow, there is a pixel-shuffle method called depth_to_space. What it does is the following: Suppose we have an image (an array) with dimensions (4,4,4). The above method shuffles the values of this array so that we get an array of size (16,16,1), in the way depicted in the image below: I have now tried for a few hours to recreate this method in NumPy using plain NumPy functions like reshape, transpose, etc.; however, I was not able to succeed. Does anyone know how to implement it? A very similar problem can be found in How to implement tf.space_to_depth with numpy?. However, this question considers the space_to_depth method, which is the inverse operation.
Here is a "channels-first" solution (i.e. assuming your array dimensions are ordered channelsΓ—heightΓ—width): import numpy as np # Create some data ("channels-first" version) a = (np.ones((1, 3, 3), dtype=int) * np.arange(1,5)[:, np.newaxis, np.newaxis]) # 4Γ—3Γ—3 c, h, w = a.shape # channels, height, width p = int(np.sqrt(c)) # height and width of one "patch" assert p * p == c # Sanity-check a_flat = a.reshape(p, p, h, w).transpose(2, 0, 3, 1).reshape(h * p, w * p) # 6Γ—6 print(a) # [[[1 1 1] # [1 1 1] # [1 1 1]] # [[2 2 2] # [2 2 2] # [2 2 2]] # [[3 3 3] # [3 3 3] # [3 3 3]] # [[4 4 4] # [4 4 4] # [4 4 4]]] print(a_flat) # [[1 2 1 2 1 2] # [3 4 3 4 3 4] # [1 2 1 2 1 2] # [3 4 3 4 3 4] # [1 2 1 2 1 2] # [3 4 3 4 3 4]] And here is the corresponding "channels-last" version (i.e. assuming your array dimensions are ordered heightΓ—widthΓ—channels): import numpy as np # Create some data ("channels-last" version) a = np.ones((3, 3, 1), dtype=int) * np.arange(1, 5) # 3Γ—3Γ—4 h, w, c = a.shape # height, width, channels p = int(np.sqrt(c)) # height and width of one "patch" assert p * p == c # Sanity-check a_flat = a.reshape(h, w, p, p).transpose(0, 2, 1, 3).reshape(h * p, w * p) # 6Γ—6 print(a) # [[[1 2 3 4] # [1 2 3 4] # [1 2 3 4]] # [[1 2 3 4] # [1 2 3 4] # [1 2 3 4]] # [[1 2 3 4] # [1 2 3 4] # [1 2 3 4]]] print(a_flat) # [[1 2 1 2 1 2] # [3 4 3 4 3 4] # [1 2 1 2 1 2] # [3 4 3 4 3 4] # [1 2 1 2 1 2] # [3 4 3 4 3 4]] In both cases, the idea is the same: With the first reshape, we split the channel dimension (or "depth") of length c into what will become a pΓ—p patch (note that pΒ·p=c, where p and c correspond to t and tΒ² in the question). With transpose, we place the patch height behind the current image height and the patch width behind the current image width. With the second reshape, we fuse the current image height and patch height into the new image height, and the current image width and patch width into the new image width. Update: Using einops Using rearrange() from the einops package, as suggested in Mercury's comment, corresponding solutions could look as follows: import numpy as np from einops import rearrange # Channels first a = (np.ones((1, 3, 3), dtype=int) * np.arange(1,5)[:, np.newaxis, np.newaxis]) # 4Γ—3Γ—3 p = int(np.sqrt(a.shape[0])) # height and width of one "patch" a_flat = rearrange(a, "(hp wp) h w -> (h hp) (w wp)", hp=p) # Channels last a = np.ones((3, 3, 1), dtype=int) * np.arange(1, 5) # 3Γ—3Γ—4 p = int(np.sqrt(a.shape[-1])) # height and width of one "patch" a_flat = rearrange(a, "h w (hp wp) -> (h hp) (w wp)", hp=p)
2
4
78,807,798
2024-7-29
https://stackoverflow.com/questions/78807798/mypy-1-10-reports-error-when-functools-wraps-is-used-on-a-generic-function
TLDR; I have a decorator that: changes the function signature the wrapped function uses some generic type arguments Other than the signature I would like to use funtools.wraps to preserve the rest of the information. Is there any way to achieve that without mypy complaining? More context A minimal working example would look like this: from functools import wraps from typing import Callable, TypeVar B = TypeVar('B', bound=str) def str_as_int_wrapper(func: Callable[[int], int]) -> Callable[[B], B]: WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__',) WRAPPER_UPDATES = ('__dict__', '__annotations__') @wraps(func, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES) def _wrapped_func(val: B) -> B: num = int(val) result = func(num) return val.__class__(result) return _wrapped_func @str_as_int_wrapper def add_one(val: int) -> int: return val + 1 This seems to work alright, but mypy (version 1.10.0) does not like it. Instead, it complains with test.py:17: error: Incompatible return value type (got "_Wrapped[[int], int, [Never], Never]", expected "Callable[[B], B]") [return-value] test.py:17: note: "_Wrapped[[int], int, [Never], Never].__call__" has type "Callable[[Arg(Never, 'val')], Never]" If I either remove the @wraps decorator or replace the B type annotations by str, the error disappears. Question Am I missing something? Is this some already reported bug or limitation from mypy (couldn't find anything)? Should it be reported? Thanks!
This is, in some sense, a regression, though it's more of a limitation. Your code should have worked as-is, and Mypy 1.9 does pass your code, which means the behaviour change was added in 1.10. According to this similar issue (which was reported as a bug three months ago, but hasn't been triaged), the cause is this PR, in which the definitions of wraps and related symbols in Mypy's copy of typeshed were changed from (simplified): class IdentityFunction(Protocol): def __call__(self, x: _T, /) -> _T: ... _AnyCallable: TypeAlias = Callable[..., object] def wraps( wrapped: _AnyCallable, # ... ) -> IdentityFunction: ... ...to (also simplified): class _Wrapped(Generic[_PWrapped, _RWrapped, _PWrapper, _RWrapper]): __wrapped__: Callable[_PWrapped, _RWrapped] def __call__(self, *args: _PWrapper.args, **kwargs: _PWrapper.kwargs) -> _RWrapper: ... class _Wrapper(Generic[_PWrapped, _RWrapped]): def __call__(self, f: Callable[_PWrapper, _RWrapper]) -> _Wrapped[_PWrapped, _RWrapped, _PWrapper, _RWrapper]: ... def wraps( wrapped: Callable[_PWrapped, _RWrapped], # ... ) -> _Wrapper[_PWrapped, _RWrapped]: ... Apparently, this was done in an attempt to fix this typeshed issue. Solution If you are not interested in the explanation, apply these general solutions and you are done: return _wrapped_func # type: ignore[return-value] return cast(Callable[[B], B], _wrapped_func) if TYPE_CHECKING: _T = TypeVar('_T') class IdentityFunction(Protocol): def __call__(self, x: _T, /) -> _T: ... _AnyCallable = Callable[..., object] def wraps( wrapped: _AnyCallable, # Default values omitted here for brevity assigned: Sequence[str] = (...), updated: Sequence[str] = (...), ) -> IdentityFunction: ... Explanation Originally, wraps(func) (where the type of func is entirely irrelevant) returned IdentityFunction, a non-generic Protocol whose __call__ is generic over _T: class IdentityFunction(Protocol): def __call__(self, x: _T, /) -> _T: ... Thus, in the following (implicit) call wraps(func)(_wrapped_func), _wrapped_func retained its original type: a generic function. Mypy got this correctly, and it still does. (playgrounds: 1.9, 1.11) def str_as_int_wrapper(func: Callable[[int], int]) -> Callable[[B], B]: @wraps(func, ...) def _wrapped_func(val: B) -> B: ... reveal_type(_wrapped_func) # def [B <: builtins.str] (val: B`-1) -> B`-1 return _wrapped_func After the change, wraps(func) now returns _Wrapper[<P_of_func>, <R_of_func>], whose __call__ returns _Wrapped[<P_of_func>, <R_of_func>, <P_of_wrapped_func>, <R_of_wrapped_func>], a different type than the original type of _wrapped_func. This is where Mypy took the wrong step, just as it would before 1.10 if the typeshed copy were not modified. As noted by STerliakov: The problem originates from trying to resolve B too early: it's bound to _Wrapped['s] generic argument and resolved, B does not survive after that (hence Never in the output).
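Applied to the decorator from the question, the cast workaround above is essentially a one-line change at the return statement (a sketch; the wrapper body is shortened here and the custom assigned/updated arguments to wraps are omitted):

from functools import wraps
from typing import Callable, TypeVar, cast

B = TypeVar('B', bound=str)

def str_as_int_wrapper(func: Callable[[int], int]) -> Callable[[B], B]:
    @wraps(func)
    def _wrapped_func(val: B) -> B:
        return val.__class__(func(int(val)))
    # the cast tells the type checker to treat the _Wrapped object as the promised generic callable
    return cast(Callable[[B], B], _wrapped_func)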
2
2
78,808,868
2024-7-29
https://stackoverflow.com/questions/78808868/how-to-download-a-geopackage-file-from-geodataframe-in-a-shiny-app
I have a shiny app that displays a number of layers on a map using folium. I want to give the user the possibility to download one of the layers (a linestring geodataframe) as a geopackage file. Here is the code I have so far: # relevant imported packages from shiny import App, render, ui, reactive import pandas as pd import geopandas as gpd # code in the ui for the download button ui.download_button(id="downloadRoute", label="Get the route file"), # code in the server function @output @render.download def downloadRoute(): result_value = result() route = result_value['route_detailed'] with io.BytesIO() as buffer: route.to_file(buffer, driver='GPKG') buffer.seek(0) return ui.download(filename="my_route.gpkg", file=buffer.read(), content_type="application/geopackage+sqlite3") I have verified that route is actually a valid geodataframe. If I download it outside shiny, it is a valid geopackage. In shiny, clicking the download button doesn't do anything on the UI. It only prints this error in the console: 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "pyogrio\_io.pyx", line 1268, in pyogrio._io.ogr_create File "pyogrio\_err.pyx", line 177, in pyogrio._err.exc_wrap_pointer pyogrio._err.CPLE_OpenFailedError: sqlite3_open(<_io.BytesIO object at 0x000002494BFCE3E0>) failed: unable to open database file What could be the thing I am doing wrong? Are there other ways I can achieve this?
As a side note, the @output decorator is no longer necessary since v0.6.0. And regarding the download issue, you (must?) yield the value of the buffer in the render.download, something like below : from io import BytesIO from shiny import App, render, ui import geopandas as gpd from shapely import LineString app_ui = ui.page_fluid( ui.download_button(id="downloadRoute", label="Download as GPKG") ) route = gpd.GeoDataFrame(geometry=[LineString([(0, 0), (1, 0)])]) def server(input): @render.download( filename="file.gpkg", # media_type="application/geopackage+sqlite3", # optional ) def downloadRoute(): with BytesIO() as buff: route.to_file(buff, driver="GPKG") yield buff.getvalue() app = App(app_ui, server) Demo at shinylive:
2
1
78,807,256
2024-7-29
https://stackoverflow.com/questions/78807256/pandas-merge-without-copying-the-data
I have a dataframe (df1) with unique IDs (col1) but also duplicated IDs for each row (col2). On another dataframe (df2) I have the unique IDs of col2. I want to somehow link the two dataframes together without copying the data of df2 (col3) (linking instead of copying). Here is a simplified example of what I am trying to achieve: df1 = pd.read_clipboard() col1 col2 0 a A 1 b A 2 c B 3 d C 4 e B df2 = pd.read_clipboard() col2 col3 0 A Albert is the best 1 B Bernard is cool 2 C Conrad is ok df3 # col3 being the values of df2, not copied col1 col2 col3 0 a A Albert is the best 1 b A Albert is the best 2 c B Bernard is cool 3 d C Conrad is ok 4 e B Bernard is cool I've been using pd.merge and it creates the df3 I want, but it uses too much memory; I think it copies the values into the new table, which is wasteful since the values only take a few distinct states. df3 = pd.merge(df1, df2, on='col2', how='left') There is a copy=False attribute I can add, but it doesn't change anything; it still seems to copy the data.
I would suggest to convert the non-key columns to Categorical, which should save a lot of memory if you have many duplicates: out = pd.merge(df1.astype({'col1': 'category'}), df2.astype({'col3': 'category'}), on='col2', how='left') Or, for simplicity: out = pd.merge(df1.astype('category'), df2.astype('category'), on='col2', how='left') But col2 won't be kept as a categorical. Output: col1 col2 col3 0 a A Albert is the best 1 b A Albert is the best 2 c B Bernard is cool 3 d C Conrad is ok 4 e B Bernard is cool If you want to convert all object columns automatically and leave the numeric ones unchanged: # example let's add numeric columns df1['num1'] = 1 df2['num2'] = 2 out = pd.merge(df1.astype(dict.fromkeys(df1.select_dtypes('object'), 'category')), df2.astype(dict.fromkeys(df2.select_dtypes('object'), 'category')), on='col2', how='left') out.dtypes # col1 category # col2 object # num1 int64 # col3 category # num2 float64 # dtype: object
2
3
78,807,069
2024-7-29
https://stackoverflow.com/questions/78807069/how-to-implement-in-pytorch-numpys-unique-with-return-index-true
In numpy.unique there is an option return_index=True - which returns positions of unique elements (first occurrence - if several). Unfortunately, there is no such option in torch.unique ! Question: What are the fast and torch-style ways to get indexes of the unique elements ? ===================== More generally my issue is the following: I have two vectors v1, v2, and I want to get positions of those elements in v2, which are not in v1 and also for repeated elements I need only one position. Numpy's unique with return_index = True immediately gives the solution. How to do it in torch ? If we know that vector v1 is sorted, can it be used to speed up the process ?
You can achieve this in PyTorch with the following approach: def get_unique_elements_first_idx(tensor): # sort tensor sorted_tensor, indices = torch.sort(tensor) # find position of jumps unique_mask = torch.cat((torch.tensor([True]), sorted_tensor[1:] != sorted_tensor[:-1])) return indices[unique_mask] Example usage: v1 = torch.tensor([2, 3, 3]) v2 = torch.tensor([1, 2, 6, 2, 3, 10, 4, 6, 4]) # Mask to find elements in v2 that are not in v1 mask = ~torch.isin(v2, v1) v2_without_v1 = v2[mask] # Get unique elements and their first indices unique_indices = get_unique_elements_first_idx(v2_without_v1) print(unique_indices) #[0, 3, 1, 2] print(v2[mask][unique_indices]) #[1, 4, 6, 10] P.S. On my computer, the function processes a vector of size 10 million in about (1.1 Β± 0.1)s.
3
4
78,798,514
2024-7-26
https://stackoverflow.com/questions/78798514/typeerror-the-first-argument-must-be-callable-self-job-func-functools-parti
I seem to be experiencing an issue with the following Python Scheduler tasks. I have the below code which has a function to refresh all workbooks in a set bunch of directories. I have attempted to test the code only refreshing the one directory before moving onto the others and I am receiving the titled error. Is anyone able to help? import time import schedule import win32com.client as win32 import os OD = 'C:/DirectoryString/' # Sub Directory Dir1 = 'STR - Doc/Mer Reps/' Dir2 = 'UK & IE ST - ICS/DB/' Dir3 = 'UK & IE ST - Docs/DB/2024/' Dir4 = 'UK & IE RDF - Docs/Rep/DB/Aut/' Dir5 = 'STR - Doc/RMS/' Dir6 = 'STR - Doc/Int DB/' directory_1 = OD + Dir1 directory_2 = OD + Dir2 directory_3 = OD + Dir3 directory_4 = OD + Dir4 directory_5 = OD + Dir5 directory_6 = OD + Dir6 def excel_update(directory): for filename in os.listdir(directory): if filename.endswith('.xlsx') or filename.endswith('.xlsm'): print(f"{directory} {filename}") file = filename xlapp = win32.DispatchEx('Excel.Application') xlapp.DisplayAlerts = False xlapp.Visible = True xlbook = xlapp.Workbooks.Open(directory + file) xlbook.RefreshAll() xlbook.Save() xlbook.Close() del xlbook xlapp.Quit() del xlapp schedule.every(30).minutes.do(excel_update(directory_2)) while True: schedule.run_pending() time.sleep(1) The initial job runs, but, when it comes to the first repetition, I get the following error. Traceback (most recent call last): File "C:\Users\gat_mbr\OneDrive - PurmoGroup\Projects\pythonProject\MerchantReports\LoopFunction.py", line 45, in <module> schedule.every(30).minutes.do(excel_update(directory_2)) File "C:\Users\gat_mbr\OneDrive - PurmoGroup\Projects\pythonProject\MerchantReports\Lib\site-packages\schedule\__init__.py", line 655, in do self.job_func = functools.partial(job_func, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: the first argument must be callable
do() accepts a function object and its arguments. Your code calls it with the result of the excel_update(directory_2) call, which is not a function but good old None. Here's how to fix it: schedule.every(30).minutes.do(excel_update, directory_2) See Pass arguments to a job
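For reference, a minimal sketch of the corrected scheduling loop (the body of excel_update and the directory_2 variable are the ones from the question, unchanged):

import time
import schedule

def excel_update(directory):
    ...  # refresh logic from the question, unchanged

# pass the callable and its argument separately; schedule calls it later
schedule.every(30).minutes.do(excel_update, directory_2)

while True:
    schedule.run_pending()
    time.sleep(1)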
2
2
78,806,818
2024-7-29
https://stackoverflow.com/questions/78806818/join-same-name-dataframe-column-values-into-list
I create a dataframe with distinct column names and use rename to create columns with same name. import pandas as pd df = pd.DataFrame({ "a_1": ["x", "x"], "a_2": ["", ""], "b_1": ["", ""], "a_3": ["", "y"], "c_1": ["z", "z"], }) names = { "a_1": "a", "a_2": "a", "a_3": "a", "b_1": "b", "c_1": "c", } df2 = df.rename(columns=names) This produces a a b a c 0 x z 1 x y z How to join values in the columns with same name into a list? out = pd.DataFrame({ "a": [["x"], ["x", "y"]], "b": [[""], [""]], "c": [["z"], ["z"]], }) a b c 0 [x] [] [z] 1 [x, y] [] [z] Bad attempt I suspect this can be resolved with lambda for c in ['a', 'b', 'c']: df2[c] = df2[c].apply(lambda x: x.tolist() if x.any() else [], axis=1) This however produces TypeError: <lambda>() got an unexpected keyword argument 'axis' Any idea?
You could transpose and groupby.agg with a custom lambda: df2.T.groupby(level=0).agg(lambda x: x[x!=''].tolist()).T Output: a b c 0 [x] [] [z] 1 [x, y] [] [z]
2
1
78,787,332
2024-7-24
https://stackoverflow.com/questions/78787332/selecting-default-search-engine-is-needed-for-chrome-version-127
All of my Selenium scripts are raising errors after Chrome updated to version 127 because I always have to select a default search engine when the browser is being launched. I use ChromeDriver 127.0.6533.72. How to fix it?
You need to add this Chrome Option to disable the 'choose your search engine' screen: options.addArguments("--disable-search-engine-choice-screen"); If you are using selenium with Python, you'll have to use: options.add_argument("--disable-search-engine-choice-screen")
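In a Python script, that option is passed in when the driver is created — a minimal sketch, assuming Selenium 4 and a matching ChromeDriver 127 available on the PATH:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--disable-search-engine-choice-screen")  # skip the default-search-engine prompt
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
driver.quit()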
33
61
78,805,127
2024-7-29
https://stackoverflow.com/questions/78805127/how-extract-regex-with-variable-from-string-in-pandas
I have a dataframe column containing text, and I'd like to make a new column which contains the sentences with names, but no other sentences. Hoping for an end result that looks like this: I am able to identify cells containing names from the list of names, but I'm stumbling on the part that extracts the sentence containing the name. import re import pandas as pd import numpy as np df = pd.DataFrame({ 'ColumnA': ['Lorum ipsum. This is approved. Lorum Ipsum.', 'Lorum Ipsum. Send the contract to May. Lorum Ipsum.', 'Junk Mail from Brian.'] }) last_names_list = ['May','Brian'] df['last_names'] = '' for x in last_names_list: df['last_names'] = np.where(df['ColumnA'].str.contains(x),x,df['last_names']) def f(x,y): return re.findall(fr'[^.]{x}[^.]',y) df['col_3'] = df.apply(lambda x: f(x['last_names'],x['ColumnA']), axis=1) print(df) When I print the dataframe, every row with a name in df[col_3'] produces an empty list. Any help appreciated.
Code pat = '|'.join(last_names_list) df['col_3'] = df['ColumnA'].str.extract(rf'([^.]*?\b(?:{pat})\b.*?\.)') The capture group grabs everything from the start of the sentence (no '.' allowed before the name) up to and including the period that ends the sentence containing the first matching name. df:
2
1
78,804,321
2024-7-28
https://stackoverflow.com/questions/78804321/run-ps1-scripts-without-specifying-the-executor
In Python, we can use subprocess to run bat (cmd) scripts natively, like import subprocess as sp sp.run(["D:/Temp/hello.bat"]) works fine. However, it cannot run ps1 scripts natively; code like import subprocess as sp sp.run(["D:/Temp/hello.ps1"]) will cause "WinError 193: %1 is not a valid Win32 application" to be raised. I thought it was because .PS1 was not showing up in PATHEXT and added it there, but it failed again. I also tried the way provided in this answer, which seems to be about setting the file execution policy, and it still wouldn't work. I know it would work if I added the executor to the call like sp.run(["pwsh", "D:/Temp/hello.ps1"]) or used a bat script as an intermediate, but that's not desired. I am using a program which is badly ported from *nix to Windows and it just calls whatever I provide as a single executable (on *nix we can use a shebang, but not on Windows).
Batch files are special in that they are the only type of interpreter-based script files directly recognized as executables by the system. For any other script files, including PowerShell scripts (*.ps1), the only way you can achieve their execution when directly invoked is: Redefine the command that is used by the Windows shell to open *.ps1 files (i.e. the command associated with the Open shell verb) so as to pass the file path at hand to the PowerShell CLI (powershell.exe for Windows PowerShell, pwsh for PowerShell (Core) 7) for execution. Note: The default behavior when opening *.ps1 files from outside PowerShell (which includes invoking them from cmd.exe and double-clicking in File Explorer) is to open them for editing. Changing the behavior to executing PowerShell scripts may be unexpected by other users, so it's best to limit this change to your own user account.. The SuperUser answer you link to shows how do to that, via the registry; note that the linked code requires elevation, i.e. running with administrative privileges. However, given the above, you may prefer to define the command for the current user only, in which case elevation isn't required - see the second-to-last section of this answer. In order for subprocess.run() to use the redefined command, shell=True must be passed (which calls via cmd.exe, which automatically invokes the default Windows shell operation on non-executable files): import subprocess as sp # Performs the "Open" Window shell operation on the *.ps1 file. sp.run(["D:/Temp/hello.ps1"], shell=True) If you don't want to redefine the command associated with the Open shell verb, your only option is indeed to call *.ps1 files via the PowerShell CLI; e.g.: import subprocess as sp sp.run(["powershell.exe", "-NoProfile", "-File", "D:/Temp/hello.ps1"])
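For reference, a per-user version of that registry change could look roughly like the Python sketch below. It assumes .ps1 files on your machine use the standard Microsoft.PowerShellScript.1 ProgID (check the default value under HKEY_CLASSES_ROOT\.ps1 first) and that redirecting the Open verb to the PowerShell CLI is really what you want:

import winreg

# per-user override: HKCU\Software\Classes shadows the machine-wide class registrations
key_path = r"Software\Classes\Microsoft.PowerShellScript.1\Shell\Open\Command"
command = r'"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -File "%1"'

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, command)  # set the (Default) value of the Open command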
2
2
78,799,973
2024-7-26
https://stackoverflow.com/questions/78799973/how-to-use-pydantic-model-as-query-parameter-in-litestar-get-route
I’m trying to create a GET route with Litestar that utilizes a Pydantic model as a query parameter. However, the serialization does not work as expected. Here is a minimal example that reproduces my issue: from pydantic import BaseModel from litestar import Litestar, get, Controller class Input(BaseModel): foo: str bar: str class RootController(Controller): path = "/" @get() def input(self, input: Input) -> str: return input.foo + input.bar app = Litestar(route_handlers=[RootController]) And the following GET request: import httpx import json params = { "input": { "foo": "test", "bar": "this" } } def prepare_encode(params: dict) -> dict: for key, value in params.items(): if isinstance(value, dict): params[key] = json.dumps(value, indent=None) return params params = prepare_encode(params) response = httpx.get("http://localhost:8000/", params=params) response.json() The GET request results in the following error: { "status_code": 400, "detail": "Validation failed for GET /?input=%7B%22foo%22%3A%20%22test%22%2C%20%22bar%22%3A%20%22this%22%7D", "extra": [ { "message": "Input should be a valid dictionary or instance of Input" } ] } It seems that the query parameter is not being properly serialized into the Input Pydantic model. What I've Tried: Using json.dumps to encode the dictionary before sending it as a parameter. Debugging the Litestar model implementation where the query parameter is provided as string into the msgspec conversion. This does obviously not fit to the required type. Expected Behavior: I expect the input query parameter to be correctly parsed and serialized into the Input model, allowing the GET request to succeed without validation errors. Question: How can I correctly pass a Pydantic model as a query parameter in a Litestar GET route? What am I missing in the serialization process? Is it possible at all? Additional Context: Litestar version: 2.10.0 Pydantic version: 2.8.2 httpx version: 0.27.0 Any help or guidance would be greatly appreciated.
There are actually problems both with how you make the request and with how you process it. First, I wasn't able to find in the docs a way to use a Pydantic model for query params the way FastAPI does. However, you can implement similar logic yourself via the DI mechanism (Provide comes from litestar.di): def build_input(foo: str, bar: str) -> Input: """Prepare the input model from query params.""" return Input(foo=foo, bar=bar) class RootController(Controller): path = "/" @get(dependencies={"input": Provide(build_input)}) def input(self, input: Input) -> str: return input.foo + input.bar Secondly, if you'd like to use a nested structure as an input, you should use the POST method instead. { "input": { "foo": "test", "bar": "this" } } Otherwise, the above dictionary will be converted to the following string when used as a query: %7B%22foo%22%3A%20%22test%22%2C%20%22bar%22%3A%20%22this%22%7D I assume what you wanted to do was the following: response = httpx.get( "http://localhost:8000/", params={ "foo": "test", "bar": "this" } ) With these changes everything seems to work!
2
1
78,802,415
2024-7-27
https://stackoverflow.com/questions/78802415/web-scraping-for-getting-data-with-python-nonetype-error
I'm trying to get dollar prices for my school project, so I decided to use web scraping, but I have a problem with it. When I try to run my code on a server it gives me a NoneType ERROR. It works on Google Colab, but I can't use it on my PC or server. How can I solve this, guys? Web scrape code: def dolar(): headers = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Safari/605.1.15' url = f'https://finance.yahoo.com/quote/TRY=X/' r = requests.get(url) soup = bs(r.text, 'html.parser') dolar = soup.find("div", {"class": "container yf-mgkamr"}).find_all("span")[0].text return dolar ERROR: Traceback (most recent call last): File "/Users/user/Desktop/API/main.py", line 38, in <module> dolar() File "/Users/user/Desktop/API/main.py", line 35, in dolar dolar = soup.find("div", {"class": "container yf-mgkamr"}).find_all("span")[0].text ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'find_all' (.venv) user@192 API % I tried to change the main website and tried without the ".find_all" method. It doesn't change anything.
You should probably use this import requests import time def dolar(): now = time.time() - 10 headers = { "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Safari/605.1.15" } url = f"https://query1.finance.yahoo.com/v8/finance/chart/TRY=X?period1={int(now)}&period2={int(now)}" response = requests.get(url, headers=headers) return response.json()['chart']['result'][0]['meta']['regularMarketPrice'] print(dolar())
3
3
78,802,365
2024-7-27
https://stackoverflow.com/questions/78802365/how-to-filter-pandas-dataframe-for-strings-that-contain-all-substrings-in-a-give
I'm trying to filter out rows of a dataframe where a column of strings under the name 'question' contains all the substrings in a given list. That is, if the given list of substrings is ['King', 'England'], then I need to retain all the rows in the dataframe where the string in the df.question contains both King and England. This code executes without any problems and prints out a boolean value: print(all([word in df.question[0] for word in ['King', 'England']])) But this code results in the following error: print(df[all([word in df.question for word in ['King', 'England']])]) --------------------------------------------------------------------------- KeyError Traceback (most recent call last) File ~\anaconda3\Lib\site-packages\pandas\core\indexes\base.py:3805, in Index.get_loc(self, key) 3804 try: -> 3805 return self._engine.get_loc(casted_key) 3806 except KeyError as err: File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc() File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc() File pandas\\_libs\\hashtable_class_helper.pxi:7081, in pandas._libs.hashtable.PyObjectHashTable.get_item() File pandas\\_libs\\hashtable_class_helper.pxi:7089, in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: False The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) Cell In[9], line 2 1 print(all([word in df.question[0] for word in ['King', 'England']])) ----> 2 print(df[all([word in df.question for word in ['King', 'England']])]) File ~\anaconda3\Lib\site-packages\pandas\core\frame.py:4102, in DataFrame.__getitem__(self, key) 4100 if self.columns.nlevels > 1: 4101 return self._getitem_multilevel(key) -> 4102 indexer = self.columns.get_loc(key) 4103 if is_integer(indexer): 4104 indexer = [indexer] File ~\anaconda3\Lib\site-packages\pandas\core\indexes\base.py:3812, in Index.get_loc(self, key) 3807 if isinstance(casted_key, slice) or ( 3808 isinstance(casted_key, abc.Iterable) 3809 and any(isinstance(x, slice) for x in casted_key) 3810 ): 3811 raise InvalidIndexError(key) -> 3812 raise KeyError(key) from err 3813 except TypeError: 3814 # If we have a listlike key, _check_indexing_error will raise 3815 # InvalidIndexError. Otherwise we fall through and re-raise 3816 # the TypeError. 3817 self._check_indexing_error(key) KeyError: False How do I filter rows in a Dataframe based on this condition?
You can use .str.contains to check each word and then np.all over axis=0 to combine all boolean values for each row: import numpy as np import pandas as pd df = pd.DataFrame( { "question": [ "Who is the King of England?", "Where is England?", "England King", "King", ] } ) df2 = df[ np.all([df['question'].str.contains(word) for word in ["King", "England"]], axis=0) ] print(df2) Output: question 0 Who is the King of England? 2 England King
2
3
78,802,108
2024-7-27
https://stackoverflow.com/questions/78802108/literal-type-hint-unavailable-outside-of-the-init-method
I am using VScode 1.91.1. It seems like it doesn't want to recognize Literal outside of the __init__ method. Is there a way to have the right type throughout the class? class MyDict(TypedDict): my_var: Literal["a", "b"] class MyClass: def __init__(self, my_dict: MyDict): self.my_var = my_dict["my_var"] # hovering over `my_var` here shows Literal["a", "b"] def other_method(self): self.my_var # hovering over it here shows "str"
Another way you could try is the following: from typing import Literal, TypedDict class MyDict(TypedDict): my_var: Literal["a", "b"] class MyClass: def __init__(self, my_dict: MyDict): self._my_var: Literal["a", "b"] = my_dict["my_var"] @property def my_var(self) -> Literal["a", "b"]: return self._my_var def other_method(self): # Now, type checkers should recognize `self.my_var` as `Literal["a", "b"]` print(self.my_var) # Hovering over this should correctly show `Literal["a", "b"]` where you expose the attribute through a property (decorated with @property), which helps maintain the Literal type. By annotating _my_var with the specific Literal type and then exposing it through a property, the type information is maintained consistently throughout the class. This approach ensures that type checkers and IDEs correctly recognize self.my_var as Literal["a", "b"], keeping the type information intact and correctly propagated. I hope this is useful; if not, I will try my best again to help you. Happy coding :)
4
1
78,801,520
2024-7-27
https://stackoverflow.com/questions/78801520/why-is-pycharm-asking-for-old-version-of-pandas-and-how-can-i-make-it-stop
I am trying to learn Pandas, and am doing a simple project in PyCharm, but PyCharm is saying there is an error because I am not using specific out-of-date versions of Pandas, NumPy, and Faker. I do not know what Faker is. You can see the error message and my code below, though my code doesn't matter much to my question. The code still runs as expected, but I want to understand if I'm setting up problems for the future. I tried googling this, but I don't know what keywords I should be searching.
From what I've encountered in the past, this usually means that there is either a requirements.txt or a pyproject.toml, or a similar file in your project directory. These files define what versions of required packages the project was built for, to let the user know that if they have a version different from that, the developer cannot guarantee the code will work. At least for the commonly used styles, regardless of the style of requirements file a project has, there will be lines included in it such as what you are seeing in that warning banner from PyCharm. These will look like numpy == 1.18.4 pandas == 1.0.4 ... other package requirements ... The reason it is showing a warning is because these requirements are listed as strong equalities, meaning that the developer is indicating that these versions of these packages are the ONLY versions that they guarantee the code will work with. But this isn't the only requirement type that can be listed - there is also >= meaning it needs to be at least the listed version, <= for at most, etc. From what I've seen though, the syntax of the requirements file may vary a bit between styles, though I don't know this for certain. I would recommend looking up the syntax of whatever specific requirements file you have. Are you importing a custom package? I don't have time right now to test it to make sure, but I believe it's possible that an imported package that wasn't installed through pip or an environment manager such as conda can also trigger these kinds of warnings (such as in the case of downloading source code from GitHub and including it in your project) (As a side note, it looks like there are a few different file conventions that can be used, though I don't know which of the less common ones PyCharm will autodetect. From what I was able to find in the PyCharm docs, it seems that you can point PyCharm to whatever requirements file you have if it fails to autodetect it.)
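For illustration, a hypothetical requirements.txt that only sets lower bounds (reusing the versions from the example above as minimums) would not warn about newer installed versions:

# requirements.txt — lower bounds instead of exact pins
numpy >= 1.18.4
pandas >= 1.0.4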
2
1
78,795,606
2024-7-25
https://stackoverflow.com/questions/78795606/ffprobe-not-reflecting-mp4-dimension-edits
I'm trying to edit MP4 width & height without scaling. I'm doing that by editing tkhd & stsd boxes of the MP4 header. exiftool will show the new width & height but ffprobe will not. Before editing: Exif: $ exiftool $f | egrep -i 'width|height' Image Width : 100 Image Height : 100 Source Image Width : 100 Source Image Height : 100 FFprobe: $ ffprobe -v quiet -show_streams $f | egrep 'width|height' width=100 height=100 coded_width=100 coded_height=100 After editing the above sizes I then get this new following python file output: [ftyp] size:32 [mdat] size:196933 [moov] size:2057 - [mvhd] size:108 - [trak] size:1941 - - [tkhd] size:92 Updated tkhd box: Width: 100 -> 300, Height: 100 -> 400 - - [mdia] size:1841 - - - [mdhd] size:32 - - - [hdlr] size:44 - - - [minf] size:1757 - - - - [vmhd] size:20 - - - - [dinf] size:36 - - - - - [dref] size:28 - - - - [stbl] size:1693 - - - - - [stsd] size:145 Updated stsd box #1: Width: 100 -> 300, Height: 100 -> 400 - - - - - [stts] size:512 - - - - - [stss] size:56 - - - - - [stsc] size:28 - - - - - [stsz] size:924 - - - - - [stco] size:20 Then running EXIFtool & FFprobe again: $ exiftool $f egrep -i 'width|height' Image Width : 300 Image Height : 400 Source Image Width : 300 Source Image Height : 400 $ ffprobe -v quiet -show_streams $f | egrep 'width|height' width=100 height=100 coded_width=100 coded_height=100 This is my Python code: import sys, struct def read_box(f): offset = f.tell() header = f.read(8) if len(header) < 8: return None, offset size, box_type = struct.unpack(">I4s", header) box_type = box_type.decode("ascii") if size == 1: size = struct.unpack(">Q", f.read(8))[0] elif size == 0: size = None return {"type": box_type, "size": size, "start_offset": offset}, offset def edit_tkhd_box(f, box_start, new_width, new_height, depth): f.seek(box_start + 84, 0) # Go to the width/height part in tkhd box try: old_width = struct.unpack('>I', f.read(4))[0] >> 16 old_height = struct.unpack('>I', f.read(4))[0] >> 16 f.seek(box_start + 84, 0) # Go back to write f.write(struct.pack('>I', new_width << 16)) f.write(struct.pack('>I', new_height << 16)) print(f"{' ' * depth} Updated tkhd box: Width: {old_width} -> {new_width}, Height: {old_height} -> {new_height}") except struct.error: print(f" Error reading or writing width/height to tkhd box") def edit_stsd_box(f, box_start, new_width, new_height, depth): f.seek(box_start + 12, 0) # Skip to the entry count in stsd box try: entry_count = struct.unpack('>I', f.read(4))[0] for i in range(entry_count): entry_start = f.tell() f.seek(entry_start + 4, 0) # Skip the entry size format_type = f.read(4).decode("ascii", "ignore") if format_type == "avc1": f.seek(entry_start + 32, 0) # Adjust this based on format specifics try: old_width = struct.unpack('>H', f.read(2))[0] old_height = struct.unpack('>H', f.read(2))[0] f.seek(entry_start + 32, 0) # Go back to write f.write(struct.pack('>H', new_width)) f.write(struct.pack('>H', new_height)) print(f"{' ' * depth} Updated stsd box #{i + 1}: Width: {old_width} -> {new_width}, Height: {old_height} -> {new_height}") except struct.error: print(f" Error reading or writing dimensions to avc1 format in entry {i + 1}") else: f.seek(entry_start + 8, 0) # Skip to the next entry except struct.error: print(f" Error reading or writing entries in stsd box") def parse_and_edit_boxes(f, new_width, new_height, depth=0, parent_size=None): while True: current_pos = f.tell() if parent_size is not None and current_pos >= parent_size: break box, box_start = read_box(f) if not box: break 
box_type, box_size = box["type"], box["size"] print(f'{"- " * depth}[{box_type}] size:{box_size}') if box_type == "tkhd": edit_tkhd_box(f, box_start, new_width, new_height, depth) elif box_type == "stsd": edit_stsd_box(f, box_start, new_width, new_height, depth) # Recursively parse children if it's a container box if box_type in ["moov", "trak", "mdia", "minf", "stbl", "dinf", "edts"]: parse_and_edit_boxes(f, new_width, new_height, depth + 1, box_start + box_size) if box_size is None: f.seek(0, 2) # Move to the end of file else: f.seek(box_start + box_size, 0) if __name__ == '__main__': if len(sys.argv) != 4: print("Usage: python script.py <input_file> <new_width> <new_height>") else: with open(sys.argv[1], 'r+b') as f: parse_and_edit_boxes(f, int(sys.argv[2]), int(sys.argv[3])) It seems related to ff_h264_decode_seq_parameter_set
FFprobe analyzes at the stream level (e.g. H.264), but you are editing at the container level (e.g. MP4). You would need to edit the SPS (Sequence Parameter Set) bytes. Specifically you'll be editing: pic_width_in_mbs_minus1 and pic_height_in_map_units_minus1. Double-check the following using a hex editor. Try some manual editing first, then write code to achieve the same result. You also need to research how Golomb and Exp-Golomb codes (numbers) work, because the information you need to edit is stored in this same bit format. You can find the SPS bytes in the avcC box, which is inside the MP4's stsd box. The avcC has the following values (hex digits): 61 76 63 43. Keep going forward per byte until you hit an FF (or 255) which is followed by E1 (or 225). Now begins the SPS... two bytes for length, then the SPS bytes themselves (starting with byte 67 which means "SPS data"). Read this blog entry (Chinese) for more info. Note: If you use the Chrome browser then you can get automatic page translation from Chinese into English. The structure of the SPS is shown in the images further below. For example, if your bytes look like: FF E1 00 19 67 42 C0 0D 9E 21 82 83 then... After the FF E1 is where the SPS packet begins. 00 19 is the SPS bytes length (hex 0x0019 is equal to decimal 25). Byte value 0x67 signals that the actual SPS data begins here... Profile IDC 0x42 here is set at decimal 66. You can see (from the below image) that Profile IDC uses 8 bits, and since an array slot holds 8 bits, this value will be the entire slot's value. Next is C0, which is four 1-bit values and four reserved zeros. The total is 8 bits, so this fills the next array slot as C0 (where the C0 bits look like this: 1100 0000). constraint_set0_flag = 1 constraint_set1_flag = 1 constraint_set2_flag = 0 constraint_set3_flag = 0 reserved_zero_4bits = 0 0 0 0 Next is 0D which is the Level IDC. Next is 9E which is bits 1001 1110. In the ue(v) format, if the first bit is a 1 then the answer == 0 (e.g. we stop any time a 1 bit is found, then the answer is how many 0 bits were counted before reaching this 1 bit). seq_parameter_set_id = 0 (since the first bit is a 1, we counted zero 0-bits to reach it). Here the IF statement can be skipped since our Profile IDC is 66 (not 100 or more). There are still 7 other bits left in that byte 0x9E as ...001 1110 log2_max_pic_order_cnt_lsb_minus4 = 3 Because we stop at any next 1, we use the count of previous zeroes to read a bit-length of the data value. So here 001 11 is to be read as: 00 {1} 11 where that {1} is the stop-counting signal. There are two zeroes (before the 1) so we know to read two bits after the 1 stop signal. Hopefully it's enough to get you and other readers started. You must reach pic_width_in_mbs_minus1. The images of the data structure of the SPS:
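To make the ue(v) counting rule above concrete, here is a small sketch of an unsigned Exp-Golomb reader over a plain list of bits (just an illustration of the standard decoding formula, not a full SPS parser):

def read_ue(bits, pos):
    # count leading zero bits up to the first 1 (the stop bit)
    zeros = 0
    while bits[pos] == 0:
        zeros += 1
        pos += 1
    pos += 1  # skip the stop bit
    # read `zeros` more bits as the suffix, then apply value = 2**zeros - 1 + suffix
    suffix = 0
    for _ in range(zeros):
        suffix = (suffix << 1) | bits[pos]
        pos += 1
    return (1 << zeros) - 1 + suffix, pos

# 0x9E -> bits 1 0 0 1 1 1 1 0
bits = [1, 0, 0, 1, 1, 1, 1, 0]
value, pos = read_ue(bits, 0)  # first ue(v): a leading 1 bit means value 0 (seq_parameter_set_id)
print(value, pos)              # 0 1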
2
2
78,799,576
2024-7-26
https://stackoverflow.com/questions/78799576/how-can-i-save-quantum-gates-as-a-graphic-in-png-svg-format-using-qiskit
I am working with Qiskit for programming quantum circuits. Everything works fine but there is one thing which I didn't find out. Here is my Python code: from qiskit import QuantumCircuit qc = QuantumCircuit(2) qc.h(0) qc.cx(0, 1) qc.draw(output = "mpl") This is my output: I am only interested into the graphical representation of the Hadamard gate (red one). I need it for a description. I don't need the wires and the qubits. How can I save the Hadamard gate as png/svg file using Qiskit?
This worked for me: from qiskit import QuantumCircuit from qiskit.visualization import circuit_drawer qc = QuantumCircuit(1) qc.h(0) circuit_img = circuit_drawer(qc, output='mpl', scale=2) circuit_img.savefig('hadamard_gate.png') # save figure as PNG circuit_img.savefig('hadamard_gate.svg') # save figure as SVG cf. https://docs.quantum.ibm.com/api/qiskit/qiskit.visualization.circuit_drawer for additional info
3
0
78,797,590
2024-7-26
https://stackoverflow.com/questions/78797590/adding-a-flag-to-the-end-of-a-bar-chart-in-python
I was trying to follow an example outlined here https://stackoverflow.com/a/61973946 The code, after small adjustments, looks like this: def pos_image(x, y, pays, haut): pays = countries.get(pays).alpha2.lower() fichier = "iso-flag-png" fichier += f"/{pays}.png" im = mpimg.imread(fichier) ratio = 4 / 3 w = ratio * haut print(w) ax.imshow(im, extent=(x - w, x, y, y + haut), zorder=2) plt.style.use('seaborn') fig, ax = plt.subplots() liste_pays = [('France', 10), ('USA', 9), ('Spain', 5), ('Italy', 5)] X = [p[1] for p in liste_pays] Y = [p[0] for p in liste_pays] haut = .8 r = ax.barh(y=Y, width=X, height=haut, zorder=1) y_bar = [rectangle.get_y() for rectangle in r] for pays, y in zip(liste_pays, y_bar): pos_image(pays[1], y, pays[0], haut) plt.show() However, the resulting plot looks like this, showing only the last value of the list. What am I doing wrong? Thank you
I can reproduce your result. I am not exactly sure what's going wrong here: Either the answer that you refer to never worked in the first place or some matplotlib internals have changed since it was posted. That said, here is the problem: Once you call pos_image(), the internal call to ax.imshow() has the effect that the x and y limits are adjusted to the region where the flag is placed. The accepted answer to the same question actually also mentions this: Afterwards, the plt.xlim and plt.ylim need to be set explicitly (also because imshow messes with them.) And here is a solution (or workaround): Remember the x and y limits before calling pos_image(), and adjust them again afterwards: x_lim, y_lim = ax.get_xlim(), ax.get_ylim() for pays, y in zip(liste_pays, y_bar): pos_image(pays[1], y, pays[0], haut) ax.set_xlim(x_lim) ax.set_ylim(y_lim) Produces (I used the default style rather than 'seaborn'):
2
1
78,798,250
2024-7-26
https://stackoverflow.com/questions/78798250/plotly-updatemenus-only-update-specific-parameters
I'm looking for a way to have two updatemenu buttons on a plotly figure. One changes the data and the y-axis title and one switches from linear to log scale. Broadly the code below works but I lose something depending on which method I use. If the buttons method is update then when I switch parameters it defaults back to linear scale. If I use restyle the y-axis title stops updating. Is there a method to only partially update I've missed? import plotly.graph_objects as go import pandas as pd def main(): figure = go.Figure() df = pd.DataFrame([['sample 1', 1, 5, 10], ['sample 2', 2, 20, 200], ], columns=['sample id', 'param1', 'param2', 'param3'] ) options = ['param1', 'param2', 'param3'] figure.add_trace(go.Bar(x=df['sample id'], y=df[options[0]])) figure.update_yaxes(title=options[0]) buttons = [] for option in options: buttons.append({'method': 'restyle', ### this can be update or restyle, each has different issues 'label': option, 'args': [{'y': [df[option]], 'name': [option]}, {'yaxis': {'title': option}}, [0]] }) scale_buttons = [{'method': 'relayout', 'label': 'Linear Scale', 'args': [{'yaxis': {'type': 'linear'}}] }, {'method': 'relayout', 'label': 'Log Scale', 'args': [{'yaxis': {'type': 'log'}}] } ] figure.update_layout(yaxis_title=options[0], updatemenus=[dict(buttons=buttons, direction='down', x=0, xanchor='left', y=1.2, ), dict(buttons=scale_buttons, direction='down', x=1, xanchor='right', y=1.2, )], ) figure.show() if __name__ == '__main__': main()
You need to use the update method for the first dropdown (restyle only updates data, update updates both data and layout - nb. those methods refer to their respective Plotly.js function, see Plotly.relayout and Plotly.update). Now to fix the issue, the key is to use "attribute strings", that is, 'yaxis.title': option instead of 'yaxis': {'title': option} in the arguments, otherwise the update will reset the yaxis params other than title to their defaults (same applies for the yaxis type) : The term attribute strings is used to mean flattened (e.g., {marker: {color: 'red'}} vs. {'marker.color': red}). When you pass an attribute string to restyle inside the update object, it’s assumed to mean update only this attribute. Therefore, if you wish to replace and entire sub-object, you may simply specify one less level of nesting. buttons = [] for option in options: buttons.append({ 'method': 'update', 'label': option, 'args': [{ 'y': [df[option]], 'name': [option] }, { 'yaxis.title': option }] }) scale_buttons = [{ 'method': 'relayout', 'label': 'Linear Scale', 'args': [{'yaxis.type': 'linear'}], }, { 'method': 'relayout', 'label': 'Log Scale', 'args': [{'yaxis.type': 'log'}] }]
2
1
78,796,974
2024-7-26
https://stackoverflow.com/questions/78796974/shap-values-for-linear-model-different-from-those-calculated-manually
I train a linear model to predict house price, and then I compare the Shapley values calculation manually vs the values returned by the SHAP library and they are slightly different. My understanding is that for linear models the Shapley value is given by: coeff * features for obs - coeffs * mean(features in training set) Or as stated in the SHAP documentation: coef[i] * (x[i] - X.mean(0)[i]), where i is one feature. The question is, why does SHAP return different values from the manual calculation? Here is the code: import pandas as pd from sklearn.datasets import fetch_california_housing from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler import shap X, y = fetch_california_housing(return_X_y=True, as_frame=True) X = X.drop(columns = ["Latitude", "Longitude", "AveBedrms"]) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0, ) scaler = MinMaxScaler().set_output(transform="pandas").fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) linreg = LinearRegression().fit(X_train, y_train) coeffs = pd.Series(linreg.coef_, index=linreg.feature_names_in_) X_test.reset_index(inplace=True, drop=True) obs = 6188 # manual shapley calculation effect = coeffs * X_test.loc[obs] effect - coeffs * X_train.mean() Which returns: MedInc 0.123210 HouseAge -0.459784 AveRooms -0.128162 Population 0.032673 AveOccup -0.001993 dtype: float64 And the SHAP library returns something slightly different: explainer = shap.LinearExplainer(linreg, X_train) shap_values = explainer(X_test) shap_values[obs] Here the result: .values = array([ 0.12039244, -0.47172515, -0.12767778, 0.03473923, -0.00251017]) .base_values = 2.0809714707337523 .data = array([0.25094137, 0.01960784, 0.06056066, 0.07912217, 0.00437137]) It is set to ignore interactions: explainer.feature_perturbation returning 'interventional'
TL;DR: The definition of training set matters. Longer answer: Your understanding is right. However, what's not quite right is SHAP's hidden data transformations silently applied behind the scenes, which can be traced like this: import pandas as pd from sklearn.datasets import fetch_california_housing from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler import shap X, y = fetch_california_housing(return_X_y=True, as_frame=True) X = X.drop(columns = ["Latitude", "Longitude", "AveBedrms"]) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=0, ) scaler = MinMaxScaler().set_output(transform="pandas").fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) linreg = LinearRegression().fit(X_train, y_train) coeffs = pd.Series(linreg.coef_, index=linreg.feature_names_in_) X_test.reset_index(inplace=True, drop=True) obs = 6188 explainer = shap.LinearExplainer(linreg, X_test) shap_values = explainer(X_test) shap_values[obs] .values = array([ 0.15757575, -0.45065211, -0.12948118, 0.03568408, -0.00211654]) .base_values = 2.023180048641746 .data = array([0.25094137, 0.01960784, 0.06056066, 0.07912217, 0.00437137]) vs: y_train.mean() 2.0682462451550387 which is already alarming. To shed some light on what's going on: X_train.mean(0).values array([0.23218765, 0.54154317, 0.03475851, 0.03985979, 0.00382413]) but explainer.masker.data.mean(0) array([0.22695687, 0.53117647, 0.03449285, 0.03624149, 0.00379029]) which should hint you they applied masker and the masker data mean is surprisingly similar to what is actually used in SHAP values calculations (source code): explainer.mean array([0.22695687, 0.53117647, 0.03449285, 0.03624149, 0.00379029]) So, to reconcile what you see as outcome to SHAP value calculations you should account for the use of masker: #expected result (X_test.loc[obs] - explainer.mean) * coeffs MedInc 0.157576 HouseAge -0.450652 AveRooms -0.129481 Population 0.035684 AveOccup -0.002117 dtype: float64 or simply use less than 100 datapoints from the very beginning to avoid use of masker altogether.
2
2
78,794,966
2024-7-25
https://stackoverflow.com/questions/78794966/how-to-customize-the-caption-text-in-folium-colorbar-i-want-to-increase-the-fon
I am trying to use colorbar for an output variable circle plot in Folium colormap = cm.LinearColormap(colors=['green','red'], index=[min(df['output']), max(df['output'])], vmin=min(df['output']),vmax=max(df['output']), caption='output in units') folium.Circle(location=[row['Latitude'], row['Longitude']], radius=800*row["output"], fill=True, # opacity=0.8, # fill_opacity=1, color=colormap(row["output"])).add_to(m) When I use this code, the caption text "output in units" is appearing very small. How to increase its font size?
AFAIK, there is no direct way to do it, but you can inject CSS using the font-size property. By inspecting the folium map's HTML (with CTRL+SHIFT+I) in Chrome, the caption's text seems to be at #legend > g > text:
import folium
import pandas as pd
import branca.colormap as cm

df = pd.DataFrame({"output": range(10, 110, 10)})
agg = df["output"].agg(vmin="min", vmax="max")

m = folium.Map([39.5, -98.35], zoom_start=4)
colormap = cm.LinearColormap(
    colors=["green", "red"],
    index=agg.to_list(),
    caption="output in units",
    **agg.to_dict(),
)

CSS = "font-size: 16px;"  # add more props if needed
m.get_root().header.add_child(
    folium.Element(f"<style>#legend > g > text.caption {{{CSS}}}</style>")
)

colormap.add_to(m)
2
1
78,797,421
2024-7-26
https://stackoverflow.com/questions/78797421/python-pandas-difference-in-boolean-indexing-between-and
I am confused about different results of boolean indexing when using ~ after != versus when using just == I have a pandas df with 4 columns: dic = { "a": [1,1,1,0,0,1,1], "b": [0,0,1,1,0,0,0], "c": [1,0,1,0,0,1,0], "d": [0,0,1,0,0,1,0], } df = pd.DataFrame(data=dic) print(df) a b c d 0 1 0 1 0 1 1 0 0 0 2 1 1 1 1 3 0 1 0 0 4 0 0 0 0 5 1 0 1 1 6 1 0 0 0 I want to subset the whole df dataframe: I want to remove all rows which have all elements zero, but just on the columns b c d, and not on a. If I use ~ (not) operator after == I get the desired result: names = ["b","c","d"] df_A = df.loc[~(df[names] == 0.0).all(axis=1)] print(df_A) a b c d 0 1 0 1 0 2 1 1 1 1 3 0 1 0 0 5 1 0 1 1 But when I use just == I get different result: names = ["b","c","d"] df_B = df.loc[(df[names] != 0.0).all(axis=1)] print(df_B) a b c d 2 1 1 1 1 Do you have any idea why is this the case? Should these two not be the same? Thank you.
You're not correctly following De Morgan's law. not (A or B) = (not A) and (not B) not (A and B) = (not A) or (not B) If you use the opposite condition as input, you have to replace all (AND) by any (OR): df_B = df.loc[(df[names] != 0.0).any(axis=1)] In English this would be: I want to REMOVE (~) rows for which ALL values are 0 (== 0) I want to KEEP rows for which ANY value is NOT 0 (!= 0) In short the rules for inverting input conditions are: swapping strict/equal operators (e.g. >= becomes < ; > becomes <=) AND become OR and OR becomes AND adding/removing NOT (~) Output: a b c d 0 1 0 1 0 2 1 1 1 1 3 0 1 0 0 5 1 0 1 1
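As a quick self-contained check of these rules on the example frame from the question:
import pandas as pd

df = pd.DataFrame({
    "a": [1, 1, 1, 0, 0, 1, 1],
    "b": [0, 0, 1, 1, 0, 0, 0],
    "c": [1, 0, 1, 0, 0, 1, 0],
    "d": [0, 0, 1, 0, 0, 1, 0],
})
names = ["b", "c", "d"]

# "NOT (ALL values == 0)" is equivalent to "ANY value != 0".
keep_not_all_zero = ~(df[names] == 0).all(axis=1)
keep_any_nonzero = (df[names] != 0).any(axis=1)

print(keep_not_all_zero.equals(keep_any_nonzero))  # True
print(df.loc[keep_any_nonzero])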
3
3
78,796,866
2024-7-26
https://stackoverflow.com/questions/78796866/polars-when-then-conditions-form-dict
I would like to have a function that accept list of conditions as parameter and filter given dataframe by all of them. Pseudocode should look like this: def Filter(df, conditions = ["a","b"]): conditions_dict = { "a": pl.col("x") < 5, "b": pl.col("x") > -3, "c": pl.col("z") < 7 } return df.with_columns( pl.when( any [conditions_dict[c] for c in conditions]) .then(pl.lit(False)) .otherwise(pl.lit(True)) .alias("bool") ) How to do it?
In general, you can create a generator for the filter expressions of interest and pass it to pl.any_horizontal to check whether any of them evaluates to True. This can be used within a pl.when().then().otherwise() construct. def custom_filter(df: pl.DataFrame, conditions: list[str]) -> pl.DataFrame: conditions_dict = { "a": pl.col("x") < 5, "b": pl.col("x") > -3, "c": pl.col("z") < 7 } return df.with_columns( pl.when( pl.any_horizontal(*(conditions_dict[c] for c in conditions)) ).then( pl.lit(False) ).otherwise( pl.lit(True) ).alias("bool") ) custom_filter(df, ["a"]) Note. In the specific example provided, you can simplify and drop the pl.when().then().otherwise() construct and replace it with the negation of the horizontal aggregation ~pl.any_horizontal(...). shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ z ┆ bool β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ bool β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════║ β”‚ -4 ┆ -2 ┆ false β”‚ β”‚ -2 ┆ -3 ┆ false β”‚ β”‚ 1 ┆ 8 ┆ false β”‚ β”‚ 3 ┆ 4 ┆ false β”‚ β”‚ 5 ┆ 2 ┆ true β”‚ β”‚ 7 ┆ 1 ┆ true β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
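As mentioned in the note, the pl.when().then().otherwise() construct can be dropped entirely; here is a minimal runnable sketch of that simplification (the example frame is reconstructed from the output shown above):
import polars as pl

df = pl.DataFrame({"x": [-4, -2, 1, 3, 5, 7], "z": [-2, -3, 8, 4, 2, 1]})

conditions_dict = {
    "a": pl.col("x") < 5,
    "b": pl.col("x") > -3,
    "c": pl.col("z") < 7,
}

def custom_filter(df: pl.DataFrame, conditions: list[str]) -> pl.DataFrame:
    # "bool" is False as soon as any of the selected conditions holds.
    return df.with_columns(
        (~pl.any_horizontal(*(conditions_dict[c] for c in conditions))).alias("bool")
    )

print(custom_filter(df, ["a"]))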
2
3
78,795,749
2024-7-25
https://stackoverflow.com/questions/78795749/how-can-i-invert-dataclass-astuple
I am trying to construct a hierarchy of dataclasses from a tuple, like so: from dataclasses import astuple, dataclass @dataclass class Child: name: str @dataclass class Parent: child: Child # this is what I want p = Parent(Child("Tim")) print(p) # this is what I get t = astuple(p) print(Parent(*t)) However, while this constructs a Parent as expected, its child is not of type Child: Parent(child=Child(name='Tim')) Parent(child=('Tim',)) Is there a way to construct Parent and child from the tuple t, or by some other means?
You can use a helper function that constructs an instance of a dataclass from a given tuple of values and recursively constructs child instances for fields that are dataclasses: from dataclasses import astuple, dataclass, fields, is_dataclass @dataclass class Child: name: str @dataclass class Parent: child: Child def from_tuple(cls, data): return cls(*( from_tuple(type, obj) if is_dataclass(type := field.type) else obj for obj, field in zip(data, fields(cls)) )) p = Parent(Child("Tim")) t = astuple(p) print(from_tuple(Parent, t)) This outputs: Parent(child=Child(name='Tim')) Demo here
2
4
78,795,739
2024-7-25
https://stackoverflow.com/questions/78795739/speed-up-parallelize-multivariate-normal-pdf
I have multiple Nx3 points, and I sequentially generate a new value for each from its corresponding multivariate Gaussian, each with 1x3 mean and 3x3 cov. So, together, I have arrays: Nx3 array of points, Nx3 array of means and Nx3x3 array of covs. I only see how to do it with the classic for-loop: import numpy as np from scipy.stats import multivariate_normal # Generate example data N = 5 # Small number for minimal example, can be increased for real use case points = np.random.rand(N, 3) means = np.random.rand(N, 3) covs = np.array([np.eye(3) for _ in range(N)]) # Identity matrices as example covariances # Initialize an array to store the PDF values pdf_values = np.zeros(N) # Loop over each point, mean, and covariance matrix for i in range(N): pdf_values[i] = multivariate_normal.pdf(points[i], mean=means[i], cov=covs[i]) print("Points:\n", points) print("Means:\n", means) print("Covariances:\n", covs) print("PDF Values:\n", pdf_values) Is there any way to speed this up? I tried to pass all directly to multivariate_normal.pdf, but also from the docs that does not seem to be supported (unlike a simpler case of generating values for Nx3 points, but with same mean and covariance. Maybe some implementation not from scipy? I might be too hopeful, but somehow I hope there is a simpler way to speed this up and avoid iterating with this for-loop in a big array of data directly with Pythonic loop.
Here is a solution involving Cholesky decomposition. method 1: import numpy as np x = points - means p = x.shape[1] res = np.exp(-0.5*(x*np.linalg.solve(covs, x)).sum(1)) res/(2*np.pi)**(p/2)/np.linalg.det(covs)**0.5 array([0.03356053, 0.03167125, 0.08042358, 0.04351325, 0.1328082 ]) method 2: y = np.linalg.solve(LU := np.linalg.cholesky(covs), points - means) np.exp(-0.5* (y**2+np.log(2*np.pi)).sum(1))/np.einsum('kii->ki',LU).prod(1) array([0.03356053, 0.03167125, 0.08042358, 0.04351325, 0.1328082 ]) method 3: multivariate_normal.pdf(y, np.zeros(p))/np.einsum('kii->ki',LU).prod(1) array([0.03356053, 0.03167125, 0.08042358, 0.04351325, 0.1328082 ]) method 4: Using svd U,S,V = np.linalg.svd(covs) res_svd = np.einsum('ki,kij,kj,kjl,kl->k', x, U, 1/S, V, x) np.exp(-0.5 * (res_svd + np.log(2 * np.pi * S).sum(1))) array([0.03356053, 0.03167125, 0.08042358, 0.04351325, 0.1328082 ]) Generate Data: import numpy as np from scipy.stats import multivariate_normal, wishart # Generate example data N = 5 # Small number for minimal example, can be increased for real use case p = 3 # dimension np.random.seed(10) points = np.random.rand(N, p) means = np.random.rand(N, p) covs = wishart.rvs(p, np.eye(p),N, 2) # Identity matrices as example covariances # Initialize an array to store the PDF values pdf_values = np.zeros(N) # Loop over each point, mean, and covariance matrix for i in range(N): pdf_values[i] = multivariate_normal.pdf(points[i], mean=means[i], cov=covs[i]) pdf_values array([0.03356053, 0.03167125, 0.08042358, 0.04351325, 0.1328082 ])
2
3
78,795,741
2024-7-25
https://stackoverflow.com/questions/78795741/python-detecting-a-letter-pattern-without-iterating-through-all-possible-combin
Apologies for the possibly not-very-useful title; I couldn't figure out how to summarise this problem into one sentence. I'm trying to count how many "units" long a word is in Python 3.10. One "unit" is (C for consonant and V for vowel) either CV or VC or C or V (the latter two only being used when no pair can be made). For example, "piece" would be three units (pi-e-ce or pi-ec-e), "queue" would be four units (qu-e-u-e), and "lampshade" would be six units (la-m-p-s-ha-de). What I'm struggling with is how exactly I would detect these units without iterating through every combination of every vowel and consonant for each pair of letters. It'd be hugely inefficient to do this with iteration, but I can't think of anything better with my current knowledge of Python. What would be an efficient way of solving this problem? As an extra (optional) problem, what if digraphs are introduced, like "gh" and "th"? This would make words like "thigh" (four units, t-hi-g-h) into only two units (thi-gh), but also complicates the working-out.
Just convert everything to 'V' or 'C', respectively, in a list. Then check the first 2 letters in a loop. If it's 'CV' or 'VC' pop the list, and no matter what, pop it again. Count the iterations it takes to exhaust the list. def units(word) -> int: word = ["V" if c in "aeiou" else "C" for c in word] cnt = 0 while word: if ''.join(word[:2]) in ('CV', 'VC'): word.pop(0) word.pop(0) cnt += 1 return cnt for word in ('queue', 'piece', 'lampshade'): print(word, units(word))
2
1
78,794,752
2024-7-25
https://stackoverflow.com/questions/78794752/how-to-re-order-duplicates-answers-on-polars-dataframe
I have a Polars dataframe that contains multiple questions and answers. The problem is that each answer is contained in its own column, which means that I have a lot of redundant information. Therefore, I would like to have only one column for the questions and another for the answers. Here is an example of the data: data = { "ID" : [1,1,1], "Question" : ["A","B","C"], "Answer A" : ["Answer A", "Answer A", "Answer A"], "Answer B" : ["Answer B", "Answer B", "Answer B"], "Answer C" : ["Answer C", "Answer C", "Answer C"] } df = pl.DataFrame(data) df My approach is to create other filter dataframes and then concact them, however i would like a fancier approach to this problem My current approach: A_df = ( df .drop(["Answer B","Answer C"]) .filter(pl.col("Question") == "A") .rename({"Answer A" : "Answer"}) ) B_df = ( df .drop(["Answer A","Answer C"]) .filter(pl.col("Question") == "B") .rename({"Answer B" : "Answer"}) ) C_df = ( df .drop(["Answer A","Answer B"]) .filter(pl.col("Question") == "C") .rename({"Answer C" : "Answer"}) ) df_final = pl.concat([A_df,B_df,C_df])
TLDR. import polars.selectors as cs ( df .unpivot( on=cs.starts_with("Answer"), index=["ID", "Question"], variable_name="Source", value_name="Answer", ) .filter( pl.col("Question") == pl.col("Source").str.strip_prefix("Answer ") ) .drop("Source") ) shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ID ┆ Question ┆ Answer β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════════β•ͺ═══════════════════║ β”‚ 1 ┆ A ┆ Some answer β”‚ β”‚ 1 ┆ B ┆ Some other answer β”‚ β”‚ 1 ┆ C ┆ Another answer β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Explanation. An approach that is a bit more general is to melt (pl.DataFrame.unpivot) the dataframe on the answer columns. This gives you a long format dataframe, which for each original row contains one row for each answer column. import polars.selectors as cs ( df .unpivot( on=cs.starts_with("Answer"), index=["ID", "Question"], variable_name="Source", value_name="Answer", ) ) shape: (9, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ID ┆ Question ┆ Source ┆ Answer β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════════β•ͺ══════════β•ͺ═══════════════════║ β”‚ 1 ┆ A ┆ Answer A ┆ Some answer β”‚ β”‚ 1 ┆ B ┆ Answer A ┆ Some answer β”‚ β”‚ 1 ┆ C ┆ Answer A ┆ Some answer β”‚ β”‚ 1 ┆ A ┆ Answer B ┆ Some other answer β”‚ β”‚ 1 ┆ B ┆ Answer B ┆ Some other answer β”‚ β”‚ 1 ┆ C ┆ Answer B ┆ Some other answer β”‚ β”‚ 1 ┆ A ┆ Answer C ┆ Another answer β”‚ β”‚ 1 ┆ B ┆ Answer C ┆ Another answer β”‚ β”‚ 1 ┆ C ┆ Answer C ┆ Another answer β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ From here, it is easy to (after some transformation) filter for rows in which the Source column matches the question (see TLDR).
3
3
78,793,639
2024-7-25
https://stackoverflow.com/questions/78793639/why-my-np-gradient-calculation-in-r2-doesnt-fit-with-the-analytical-gradient-c
I'm trying to compute a gradient on a map using np.gradient, but I'm encountering issues. To simplify my problem I am trying on an analytical function z = f(x,y) = -(x - 2)**2 - (y - 2)**2 np.gradient is not providing the expected results; the vectors should point towards the center. What am I doing wrong? Here is the code that I am running: import numpy as np import matplotlib.pyplot as plt # Define the grids for x and y x = np.linspace(0, 4, 100) # 100 points between 0 and 4 y = np.linspace(0, 4, 100) # 100 points between 0 and 4 X, Y = np.meshgrid(x, y) # Create a 2D grid # Define the function f(x, y) Z = -(X - 2)**2 - (Y - 2)**2 # Compute gradients numerically dz_dx, dz_dy = np.gradient(Z, x, y) # Downsampling to reduce the density of arrows step = 10 plt.figure(figsize=(10, 8)) contour = plt.contourf(X, Y, Z, cmap='viridis', levels=50, alpha=0.8) plt.colorbar(contour, label='f(x, y)') plt.quiver(X[::step, ::step], Y[::step, ::step], dz_dx[::step, ::step], dz_dy[::step, ::step], color='r', headlength=3, headwidth=4) plt.title('Function $f(x, y) = -(x - 2)^2 - (y - 2)^2$ and its gradients (numerical)') plt.xlabel('x') plt.ylabel('y') plt.grid(True) plt.show()
The problem is not in the gradient function; it is in the different indexing order of np.meshgrid and np.gradient. By default, np.gradient assumes the indexing follows the order of its arguments, i.e.:
Z[x,y] -> np.gradient(Z, x, y)
whereas np.meshgrid's default indexing results in the opposite order:
Z[y,x] -> np.meshgrid(x,y)  # default indexing = 'xy'
You didn't notice this bug because x and y are identical here; if x and y had different numbers of points you would get an error. I like to use Z[y,x], so just swap the order of the arguments and of the returned values of np.gradient and you will get the correct result:
dz_dy, dz_dx = np.gradient(Z, y, x)  # Z[y,x]
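A quick way to see the mismatch is to give x and y different lengths. The sketch below shows the swap suggested above, plus an alternative using meshgrid's indexing='ij' (my addition, not required by the fix), reusing the function from the question:
import numpy as np

x = np.linspace(0, 4, 100)
y = np.linspace(0, 4, 80)   # different length to expose the indexing mismatch

# Option 1: keep meshgrid's default 'xy' indexing (Z[y, x]) and swap gradient's args.
X, Y = np.meshgrid(x, y)
Z = -(X - 2)**2 - (Y - 2)**2
dz_dy, dz_dx = np.gradient(Z, y, x)

# Option 2: use 'ij' indexing (Z[x, y]) so the original call order is correct.
Xi, Yi = np.meshgrid(x, y, indexing="ij")
Zi = -(Xi - 2)**2 - (Yi - 2)**2
dz_dx_i, dz_dy_i = np.gradient(Zi, x, y)

print(dz_dx.shape, dz_dx_i.shape)  # (80, 100) (100, 80)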
4
4
78,793,287
2024-7-25
https://stackoverflow.com/questions/78793287/how-to-compare-lists-in-two-pandas-dataframes-to-get-the-common-elements
I want to compare lists from columns set_1 and set_2 in df_2 with ins column in df_1 to find all common elements. I've started doing it for one row and one column but I have no idea how to compare all rows between two dfs to get the desired result. Here is my code comparing set_1 and ins in the first row: import pandas as pd d1 = {'chr': [1, 1], 'start': [64, 1000], 'end': [150, 2000], 'family': ['a', 'b'], 'ins': [['P1-12', 'P1-22', 'P1-25', 'P1-28', 'P1-90'], ['P1-6', 'P1-89', 'P1-92', 'P1-93']]} df1 = pd.DataFrame.from_dict(data=d1) d2 = {'set_1': [['P1-12', 'P1-25', 'P1-28'], ['P1-6', 'P1-89', 'P1-93']], 'set_2': [['P1-89', 'P1-92', 'P1-93'], ['P1-25', 'P1-28', 'P1-90']]} df2 = pd.DataFrame.from_dict(data=d2) matches = [x for x in df2.iloc[0, 0] if x in df1.iloc[0, 4]] There is a tiny part of my input data (in original input, df1 contains ~13k rows and df2 ~90): df1: chr start end family ins 0 1 64 150 a [P1-12, P1-22, P1-25, P1-28, P1-90] 1 1 1000 2000 b [P1-6, P1-89, P1-92, P1-93] df2: set_1 set_2 0 [P1-12, P1-25, P1-28] [P1-89, P1-92, P1-93] 1 [P1-6, P1-89, P1-93] [P1-25, P1-28, P1-90] The desired output should look like this: chr start end family df2_index ins_set1 ins_set2 0 1 64 150 a 0 [P1-12, P1-25, P1-28] [] 1 1 64 150 a 1 [] [P1-25, P1-28, P1-90] 2 1 1000 2000 b 0 [] [P1-89, P1-92, P1-93] 3 1 1000 2000 b 1 [P1-6, P1-89, P1-93] []
Since you have objects you'll need to loop. I would first perform a cross-merge, then use a set for efficiency: out = df1.merge(df2, how='cross') cols = list(df2) ins = out.pop('ins').apply(set) for c in cols: out[c] = [[x for x in lst if x in ref] for ref, lst in zip(ins, out[c])] Variant that should be a bit more efficient: out = df1.assign(ins=df1['ins'].apply(set)).merge(df2, how='cross') cols = list(df2) ins = out.pop('ins') for c in cols: out[c] = [[x for x in lst if x in ref] for ref, lst in zip(ins, out[c])] Output: chr start end family set_1 set_2 0 1 64 150 a [P1-12, P1-25, P1-28] [] 1 1 64 150 a [] [P1-25, P1-28, P1-90] 2 1 1000 2000 b [] [P1-89, P1-92, P1-93] 3 1 1000 2000 b [P1-6, P1-89, P1-93] []
2
5
78,792,386
2024-7-25
https://stackoverflow.com/questions/78792386/get-cumulative-weight-of-edges-without-repeating-already-traversed-paths
I have a water pipe network where each node is a house and each edge is a pipe connecting houses. The edges have a water volume as an attribute. I want to calculate the total volume reaching node 13. It should sum to 5+2+1+6+0+3+14+4+12+5+8+10+6+9=85 I've tried something like this, but it will repeat already traversed paths. For example I dont want to sum the value from node 1 to 2 (the weight 5) more than once: import networkx as nx from itertools import combinations G = nx.DiGraph() edges = [(1,2,5), (2,5,0), (3,2,2), (3,4,1), (4,5,6), (5,6,3), (7,8,14), (8,6,4), (6,9,12), (9,11,8), (10,9,5),(10,12,10), (11,12,6),(12,13,9)] for edge in edges: n1, n2, weight = edge G.add_edge(n1, n2, volume=weight) for n1, n2 in combinations(G.nodes,2): paths = list(nx.all_simple_paths(G=G, source=n1, target=n2)) for path in paths: total_weight = nx.path_weight(G=G, path=path, weight="volume") print(f"From node {n1} to {n2}, there's the path {'-'.join([str(x) for x in path])} \n with the total volume: {total_weight}") # From node 1 to 2, there's the path 1-2 # with the total volume: 5 # From node 1 to 5, there's the path 1-2-5 # with the total volume: 5 # From node 1 to 6, there's the path 1-2-5-6 # ...
IIUC, just get a subgraph of all ancestors of 13 (and including 13), then compute the weighted size (= sum of all edge weights): target = 13 G.subgraph(nx.ancestors(G, target)|{target}).size(weight='volume') Output: 85
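For completeness, a runnable sketch that rebuilds the graph from the question and applies this one-liner:
import networkx as nx

edges = [(1, 2, 5), (2, 5, 0), (3, 2, 2), (3, 4, 1), (4, 5, 6), (5, 6, 3),
         (7, 8, 14), (8, 6, 4), (6, 9, 12), (9, 11, 8), (10, 9, 5),
         (10, 12, 10), (11, 12, 6), (12, 13, 9)]

G = nx.DiGraph()
for n1, n2, weight in edges:
    G.add_edge(n1, n2, volume=weight)

target = 13
# Keep only the target and everything upstream of it, then sum the edge volumes.
total = G.subgraph(nx.ancestors(G, target) | {target}).size(weight="volume")
print(total)  # 85.0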
3
1
78,789,622
2024-7-24
https://stackoverflow.com/questions/78789622/get-max-date-column-name-on-polars
I'm trying to get the column name containing the maximum date value in my Polars DataFrame. I found a similar question that was already answered here. However, in my case, I have many columns, and adding them manually would be tedious. I would like to use column selectors cs.datetime() and have tried the following: import polars as pl from datetime import datetime import polars.selectors as cs data = { "ID" : [1,2,3], "Values_A" : [datetime(1,1,2),datetime(1,1,3),datetime(1,1,4)], "Values_B" : [datetime(1,1,4),datetime(1,1,7),datetime(1,1,2)] } df = pl.DataFrame(data) def arg_max_horizontal(*columns: pl.Expr) -> pl.Expr: return ( pl.concat_list(columns) .list.arg_max() .replace_strict({i: col_name for i, col_name in enumerate(columns)}) ) ( df .with_columns( Largest=arg_max_horizontal(pl.select(cs.datetime())) ) )
You were one step away and simply needed to select the column names of interest using df.select(cs.datetime()).columns. Then, we can unpack the list in the function call. Note. I've adapted the type hint of arg_max_horizontal accordingly. Moreover, (thanks to @Cameron Riddell), we can simplify conversion to a string representation using a cast to a suitable pl.Enum. def arg_max_horizontal(*columns: str) -> pl.Expr: return ( pl.concat_list(columns) .list.arg_max() .cast(pl.Enum(columns)) ) ( df .with_columns( Largest=arg_max_horizontal(*df.select(cs.datetime()).columns) ) ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ID ┆ Values_A ┆ Values_B ┆ Largest β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ datetime[ΞΌs] ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═════════════════════β•ͺ══════════║ β”‚ 1 ┆ 0001-01-02 00:00:00 ┆ 0001-01-04 00:00:00 ┆ Values_B β”‚ β”‚ 2 ┆ 0001-01-03 00:00:00 ┆ 0001-01-07 00:00:00 ┆ Values_B β”‚ β”‚ 3 ┆ 0001-01-04 00:00:00 ┆ 0001-01-02 00:00:00 ┆ Values_A β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
3
78,789,944
2024-7-24
https://stackoverflow.com/questions/78789944/how-do-you-sort-column-names-in-date-in-descending-order-in-pandas
I have this DataFrame: Node Interface Speed Band_In carrier Date Server1 wan1 100 80 ATT 2024-05-09 Server1 wan1 100 50 Sprint 2024-06-21 Server1 wan1 100 30 Verizon 2024-07-01 Server2 wan1 100 90 ATT 2024-05-01 Server2 wan1 100 88 Sprint 2024-06-02 Server2 wan1 100 22 Verizon 2024-07-19 I need to convert Date field to this format 1-May, 2-Jun, 19-July, place them on each column in descending order. In this to look like this: Node Interface Speed Band_In carrier 1-July 9-May 21-Jun Server1 wan1 100 80 ATT 80 50 30 I tried this: df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%d-%b') df['is'] = df['Band_In'] / df['Speed'] * 100 df = df.pivot_table(index=['Node', 'Interface', 'carrier'], columns='Date', values='is').reset_index() I need Date values in the column names to be sorted in descending order 9-May 21-Jun 1-July. Any ideas how?
Don't convert your dates to string until after the pivot_table, so can do so easily with rename: df['Date'] = pd.to_datetime(df['Date']) df['is'] = df['Band_In'] / df['Speed'] * 100 out = (df.pivot_table(index=['Node', 'Interface', 'carrier'], columns='Date', values='is') .rename(columns=lambda x: x.strftime('%-d-%b')) .reset_index().rename_axis(columns=None) ) Output: Node Interface carrier 1-May 9-May 2-Jun 21-Jun 1-Jul 19-Jul 0 Server1 wan1 ATT NaN 80.0 NaN NaN NaN NaN 1 Server1 wan1 Sprint NaN NaN NaN 50.0 NaN NaN 2 Server1 wan1 Verizon NaN NaN NaN NaN 30.0 NaN 3 Server2 wan1 ATT 90.0 NaN NaN NaN NaN NaN 4 Server2 wan1 Sprint NaN NaN 88.0 NaN NaN NaN 5 Server2 wan1 Verizon NaN NaN NaN NaN NaN 22.0
2
2
78,786,324
2024-7-24
https://stackoverflow.com/questions/78786324/how-to-plot-justify-bar-labels-to-the-right-side-and-add-a-title-to-the-bar-labe
I have created a chart in matplotlib in python, but the last line in the following code doesn't allow alignment of the bar labels outside of the graph. import matplotlib.pyplot as plt g=df.plot.barh(x=name,y=days) g.set_title("Days people showed up") g.bar_label(g.containers[0], label_type='edge') I get a graph that looks like: Days people showed up ----------------------- Amy |+++ 1 | Bob |+++++++++++++++ 4 | Jane |+++++++ 2 | ---|---|---|---|---|--- 1 2 3 4 5 Instead I want something like this: Days people showed up ----------------------- Count Amy |+++ | 1 Bob |+++++++++++++++ | 4 Jane |+++++++ | 2 ---|---|---|---|---|--- 1 2 3 4 5 Is it possible to do this? It doesn't seem like it is native in matplotlib as the only options for label_type is edge or center. Is it possible to add a label to the bar labels as well?
You can use a "secondary y axis" (this is similar to the more often used "twin" axis, but is only used to add ticks and labels, not for plotting). The example below uses ax as the name for the return value of df.plot.barh() to make the code easier to match with matplotlib documentation and tutorials. For more fine-tuning, ax.set_ylabel() has parameters to change its position and alignment.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'name': ['Amy', 'Bob', 'Jane'], 'days': [1, 4, 2]})

ax = df.plot.barh(x='name', y='days', legend=False)
ax.set_title("Days people showed up")
# ax.bar_label(ax.containers[0], label_type='edge')

secax = ax.secondary_yaxis(location='right', functions=(lambda x: x, lambda x: x))
secax.set_yticks(range(len(df)), df['days'])
secax.tick_params(length=0)
secax.set_ylabel('Counts')

plt.tight_layout()
plt.show()
3
1
78,786,100
2024-7-24
https://stackoverflow.com/questions/78786100/how-can-i-simplify-this-method-to-replace-punctuation-while-keeping-special-word
I am making a modular function that will take keywords with special characters (@&\*%) and keep them intact while all other punctuation is deleted from a sentence. I have devised a solution, but it is very bulky and probably more complicated than it needs to be. Is there a way to do this in a much simpler way? In short, my code matches all instances of the special words to find their spans. I then match the punctuation characters to find their spans, and I exclude any matches that fall inside the span of a protected word before deleting the rest from the sentence. Code:
import re
from string import punctuation

sentence = "I am going to run over to Q&A and ask them a ton of questions about this & that & that & this while surfacing the internet! with my raccoon buddy @ the bar."

# my attempt to remove punctuation
class SentenceHolder:
    sentence = None
    protected_words = ["Q&A"]

    def __init__(self, sentence):
        self.sentence = sentence

    def remove_punctuation(self):
        for punct in punctuation:
            # escape the character so regex metacharacters like '*' are matched literally
            symbol_matches: list[re.Match] = [i for i in re.finditer(re.escape(punct), self.sentence)]
            remove_able_matches = self._protected_word_overlap(symbol_matches)
            for word in reversed(remove_able_matches):
                self.sentence = (self.sentence[:word.start()] + " " + self.sentence[word.end():])

    def _protected_word_overlap(self, symbol_matches):
        protected_word_locations = []
        for protected_word in self.protected_words:
            protected_word_locations.extend([i for i in re.finditer(protected_word, self.sentence)])
        protected_matches = []
        for protected_word in protected_word_locations:
            for symbol_inst in symbol_matches:
                symbol_range: range = range(symbol_inst.start(), symbol_inst.end())
                protected_word_set = set(range(protected_word.start(), protected_word.end()))
                if len(protected_word_set.intersection(symbol_range)) != 0:
                    protected_matches.append(symbol_inst)
        remove_able_matches = [sm for sm in symbol_matches if sm not in protected_matches]
        return remove_able_matches
The output of the code:
my_string = SentenceHolder(sentence)
my_string.remove_punctuation()
Result:
"I am going to run over to Q&A and ask them a ton of questions about this that that this while surfacing the internet with my raccoon buddy the bar"
I tried using a regex pattern to identify all the locations of the punctuation, but the pattern I use in re.sub does not work the same way in re.match.
probably not the best, but really simple protected = ["Q&A", "stack@exchange"] protected_dict = {f'protected{i}': p_word for i, p_word in enumerate(protected)} sentence = "I am going to run over to Q&A stack@exchange and ask them a ton of questions about this & that & that & this while surfacing the internet! with my raccoon buddy @ the bar." # protect for k, v in protected_dict.items(): sentence = sentence.replace(v, k) # replace stuff sentence = sentence.replace('&', '') sentence = sentence.replace('@', '') # revert back protected words for k, v in protected_dict.items(): sentence = sentence.replace(k, v) print(sentence) # I am going to run over to Q&A stack@exchange and ask them a ton of questions about this that that this while surfacing the internet! with my raccoon buddy the bar.
2
1
78,786,208
2024-7-24
https://stackoverflow.com/questions/78786208/polars-replace-elements-in-list-of-list-column
Consider the following example series. s = pl.Series('s', [[1, 2, 3], [3, 4, 5]]) I'd like to replace all 3s with 10s to obtain the following. res = pl.Series('s', [[1, 2, 10], [10, 4, 5]]) Is it possible to efficiently replace elements in the lists of a List column in polars? Note. I've already tried converting to a dataframe and using pl.when().then(), but pl.when() fails for input of type List[bool]. Moreover, I've experimented with pl.Expr.list.eval, but couldn't get much further than the original mask.
In the case of a simple replacement, you can also consider using pl.Expr.replace within a pl.Series.list.eval context. This approach is a bit less general, but more concise than a pl.when().then().otherwise() construct. s.list.eval(pl.element().replace({3: 10})) shape: (2,) Series: 's' [list[i64]] [ [1, 2, 10] [10, 4, 5] ]
3
3
78,765,604
2024-7-18
https://stackoverflow.com/questions/78765604/how-to-enable-vi-mode-for-the-python-3-13-interactive-interpreter
One of the new features in python 3.13 is a more sophisticated interactive interpreter. However, it seems that this interpreter is no longer based on readline, and therefore no longer respects ~/.inputrc. I particularly miss the set editing-mode vi behavior. Is there a way to get this same "vi mode" behavior with the python 3.13 interpreter (other than just disabling the new interpreter completely with PYTHON_BASIC_REPL=1)? Caveat: The specific version of python I'm using is 3.13.0b3. This is a beta release, so I'd be satisfied if the answer is "just wait for the actual release of python 3.13; this feature is planned but hasn't been implemented yet".
According to this CPython issue and this comment on another issue, vi editing mode is not coming any time soon to the new REPL. It is still possible to use the old REPL by setting the PYTHON_BASIC_REPL=1 environment variable.
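In a shell you would typically just prefix the interpreter invocation with the variable (PYTHON_BASIC_REPL=1 python3.13). If you prefer to do it from Python, here is a small convenience sketch using subprocess (just one possible way to launch the interpreter, not anything provided by CPython itself):
import os
import subprocess
import sys

# Launch the interpreter with the classic readline-based REPL,
# so ~/.inputrc (including "set editing-mode vi") is honoured again.
# Only has an effect when the interpreter being launched is 3.13+.
env = {**os.environ, "PYTHON_BASIC_REPL": "1"}
subprocess.run([sys.executable], env=env)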
3
4
78,767,823
2024-7-19
https://stackoverflow.com/questions/78767823/how-to-immediately-cancel-an-asyncio-task-that-uses-the-ollama-python-library-to
I'm using Ollama to generate answers from large language models (LLMs) with the Ollama Python API. I want to cancel the response generation by clicking the stop button. The problem is that the task cancellation works only if the response generation has already started printing. If the task is still processing and getting ready to print, the cancellation does not work, and the response gets printed regardless. To be more specific, this function prompt_mistral("Testing") still executes and prints the response even after clicking the button. My code: import ollama import asyncio import threading from typing import Optional import tkinter as tk # Create the main window root = tk.Tk() root.title("Tkinter Button Example") worker_loop: Optional[asyncio.AbstractEventLoop] = None task_future: Optional[asyncio.Future] = None async def get_answer_from_phi3(): print("Trying") messages = [ {"role": "system", "content": "Hello"} ] client = ollama.AsyncClient() stream = await client.chat( model='phi3', messages=messages, stream=True, options= { "top_k": 1}) try: async for chunk in stream: # Store generated answer print(chunk['message']['content'], end='', flush=True) except asyncio.exceptions.CancelledError as e: print("Cancelled") pass except Exception as e: print(e) return "Sorry,vv an error occurred while processing your request." async def prompt_mistral(query): messages = [] messages.append({"role": "assistant", "content": "Write a song that celebrates the beauty, diversity, and importance of our planet, Earth. The song should evoke vivid imagery of the natural world, from lush forests and majestic mountains to serene oceans and vast deserts. It should capture the essence of Earth as a living, breathing entity that sustains all forms of life. Incorporate themes of harmony, unity, and interconnectedness, emphasizing how all elements of nature are intertwined and how humanity is an integral part of this complex web. The lyrics should reflect a sense of wonder and appreciation for the planet's resources and ecosystems, highlighting the delicate balance that sustains life. Include references to various landscapes, climates, and wildlife, painting a picture of Earth's diverse environments. The song should also touch on the responsibility we have to protect and preserve the planet for future generations, addressing issues like climate change, deforestation, pollution, and conservation efforts. Use poetic language and metaphors to convey the grandeur and fragility of Earth, and infuse the song with a hopeful and inspiring tone that encourages listeners to take action in safeguarding our shared home. The melody should be uplifting and emotionally resonant, complementing the powerful message of the lyrics"}) generated_answer = '' try: client = ollama.AsyncClient() stream = await client.chat( model='mistral', messages=messages, stream=True, options= { "top_k": 1} ) async for chunk in stream: # Store generated answer generated_answer += chunk['message']['content'] print(chunk['message']['content']) except asyncio.exceptions.CancelledError as e: print("Cancelled reponse") return except Exception as e: print(e) return "Sorry,vv an error occurred while processing your request." 
def prompt_llama(message): async def prompt(): messages = [] messages.append({"role": "assistant", "content": message}) try: client = ollama.AsyncClient() stream = await client.chat( model='llama2', messages=messages, stream=True, options= { "top_k": 1} ) generated_answer = '' async for chunk in stream: # Store generated answer generated_answer += chunk['message']['content'] print(chunk['message']['content']) if "help" in generated_answer: await prompt_mistral("Testing") else: print(generated_answer) except asyncio.exceptions.CancelledError as e: print("Cancelled") return except Exception as e: print(e) return "Sorry,vv an error occurred while processing your request." def mistral_worker_function(): global worker_loop, task_future worker_loop = asyncio.new_event_loop() task_future = worker_loop.create_task(prompt()) worker_loop.run_until_complete(task_future) print("Starting thread") thread = threading.Thread(target=mistral_worker_function) thread.start() client = ollama.AsyncClient() # Define the function to be called when the button is pressed def on_button_click(): global worker_loop, task_future # the loop and the future are not threadsafe worker_loop.call_soon_threadsafe( lambda: task_future.cancel() ) def phi3_worker_function(): global worker_loop, task_future worker_loop = asyncio.new_event_loop() task_future = worker_loop.create_task(get_answer_from_phi3()) worker_loop.run_until_complete(task_future) print("Starting thread") thread = threading.Thread(target=phi3_worker_function()) thread.start() # Create the button button = tk.Button(root, text="Stop", command=on_button_click) # Place the button on the window button.pack(pady=20) prompt_llama("Hi") # Start the Tkinter event loop root.mainloop()
Update Ollama to the newest version (on Linux: curl https://ollama.ai/install.sh | sh). That automatically fixes this issue; the code in the question then works exactly as it is.
2
0
78,760,560
2024-7-17
https://stackoverflow.com/questions/78760560/how-to-read-restore-a-checkpointed-dataframe-across-batches
I need to "checkpoint" certain information during my batch processing with pyspark that is needed in the next batches. For this use case, DataFrame.checkpoint seems to fit. While I found many places that explain how to create one, I did not find any that explain how to restore or read a checkpoint. To test this, I created a simple test class with two (2) tests. The first reads a CSV and creates a sum. The second one should just retrieve that sum and continue summing:
import pytest
from pyspark.sql import functions as f

class TestCheckpoint:
    @pytest.fixture(autouse=True)
    def init_test(self, spark_unit_test_fixture, data_dir, tmp_path):
        self.spark = spark_unit_test_fixture
        self.dir = data_dir("")
        self.checkpoint_dir = tmp_path

    def test_first(self):
        df = (self.spark.read.format("csv")
              .option("pathGlobFilter", "numbers.csv")
              .load(self.dir))
        sum = df.agg(f.sum("_c1").alias("sum"))
        sum.checkpoint()
        assert 1 == 1

    def test_second(self):
        df = (self.spark.read.format("csv")
              .option("pathGlobFilter", "numbers2.csv")
              .load(self.dir))
        sum = # how to get back the sum?
Creating the checkpoint in the first test works fine (tmp_path is set as the checkpoint dir) and I see a folder created with a file. But how do I read it? And how do you handle multiple checkpoints? For example, one checkpoint for the sum and another for the average? Are there better approaches to storing state across batches? For the sake of completeness, the CSV looks like this:
1719228973,1
1719228974,2
And this is only a minimal example to get it running - my real scenario is more complex.
While in theory, checkpoints are retained across Spark jobs and can be accessed from other Spark jobs by reading the files directly without having to recompute the entire lineage, they have not made it easy to read checkpoints directly from the stored files from other Spark jobs. If you are interested, here is an answer where it's shown how the file names look when checkpointing happens. So in your case, I would advise storing to disk by yourself and reading the file(s) when you need it in another job. You can use a storage mechanism (like parquet that is efficient depending on your data and nature of processing). Something like this: import pytest from pyspark.sql import functions as f class TestCheckpoint: @pytest.fixture(autouse=True) def init_test(self, spark_unit_test_fixture, data_dir, tmp_path): self.spark = spark_unit_test_fixture self.dir = data_dir("") self.checkpoint_dir = tmp_path def test_first(self): df = (self.spark.read.format("csv") .option("pathGlobFilter", "numbers.csv") .load(self.dir)) sum_df = df.agg(f.sum("_c1").alias("sum")) sum_df.write.mode("overwrite").parquet(str(self.checkpoint_dir / "sum")) assert 1 == 1 def test_second(self): previous_sum = self.spark.read.parquet(str(self.checkpoint_dir / "sum")) previous_sum_value = previous_sum.collect()[0]["sum"] df = (self.spark.read.format("csv") .option("pathGlobFilter", "numbers2.csv") .load(self.dir)) new_sum = df.agg(f.sum("_c1").alias("sum")) total_sum = previous_sum_value + new_sum.collect()[0]["sum"] assert 1 == 1 That said, if you need to access the checkpointed data within the same Spark job, you can just keep a reference to the dataframe like so sum = df.agg(f.sum("_c1").alias("sum")) sum = sum.checkpoint() # hold on to this reference to access the checkpointed data Alternatively, you also have df.persist(StorageLevel.DISK_ONLY) which also allows you to store to disk while also preserving data lineage. However, once the job ends, the data is purged.
4
1
78,778,926
2024-7-22
https://stackoverflow.com/questions/78778926/creating-a-metaclass-that-inherits-from-abcmeta-and-qobject
I am building an app in PySide6 that will involve dynamic loading of plugins. To facilitate this, I am using ABCMeta to define a custom metaclass for the plugin interface, and I would like this custom metaclass to inherit from ABC and from QObject so that I can abstract as much of the behavior as possible, including things like standard signals and slots that will be common to all subclasses. I have set up a MWE that shows the chain of logic that enabled me to get this setup working, but the chain of inheritance goes deeper than I thought it would, and the @abstractmethod enforcement of ABC does not seem to carry through (in the sense that not overriding print_value() does not cause an error). Is it possible to shorten this while still having the desired inheritance? In the end, my goal is to have the MetaTab class as an abstract base class that inherits from ABC so that I can define @abstractmethods inside it, and then subclass that for individual plugins. Do I really need both QABCMeta and QObjectMeta to make this work, or is there a way to clean it up that eliminates one of the links in this inheritance chain? from abc import ABC, ABCMeta, abstractmethod from PySide6.QtCore import QObject class QABCMeta(ABCMeta, type(QObject)): pass class QObjectMeta(ABC, metaclass=QABCMeta): pass class MetaTab(QObject, QObjectMeta): def __init__(self, x): print('initialized') self.x = x @abstractmethod def print_value(self): pass class Tab(MetaTab): def __init__(self, x): super().__init__(x) def print_value(self): print(self.x) def main(): obj = Tab(5) for b in obj.__class__.__bases__: print("Base class name:", b.__name__) print("Class name:", obj.__class__.__name__) obj.print_value() if __name__=='__main__': main()
Doing some tests here: PySide6 QObject inheritance modifies the usual Python behavior for classes, including attribute lookup - and this renders the mechanisms for abstract methods in Python's ABC inoperative. Metaclasses are a complicated subject, and cooperative metaclasses between different projects are even more complicated. The workaround I found is to re-implement some of the behavior of ABCMeta in the metaclass that combines the metaclass for QObjects and ABCMeta itself - with the changes below, the @abstractmethod behavior is restored:
import abc
from PySide6.QtCore import QObject

class QABCMeta(abc.ABCMeta, type(QObject)):
    def __new__(mcls, name, bases, ns, **kw):
        cls = super().__new__(mcls, name, bases, ns, **kw)
        abc._abc_init(cls)
        return cls

    def __call__(cls, *args, **kw):
        if cls.__abstractmethods__:
            raise TypeError(f"Can't instantiate abstract class {cls.__name__} without an implementation for abstract methods {set(cls.__abstractmethods__)}")
        return super().__call__(*args, **kw)
Testing on the REPL with a class derived from "MetaTab" which doesn't implement print_value will raise as expected:
In [112]: class T2(MetaTab):
     ...:     pass
     ...:

In [113]: T2(1)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[113], line 1
----> 1 T2(1)

Cell In[105], line 11, in QABCMeta.__call__(cls, *args, **kw)
(...)
Can't instantiate abstract class T2 without an implementation for abstract methods : {'print_value'})
As for the hierarchy you build: you probably could skip some of the intermediate classes, but what I'd change there is to avoid using the name "Meta" for classes that are not metaclasses (i.e. classes used to build the classes themselves, and that are passed to the metaclass= named parameter) - both QObjectMeta and MetaTab are confusing names due to that - you could use an infix like "Base" or "Mixin" instead. That said, the class you call QObjectMeta could be just:
class AQObjectBase(QObject, metaclass=QABCMeta):
    pass
And then your "MetaTab" would not need to inherit explicitly from QObject (explicit inheritance from ABC with the metaclass behavior restored isn't needed):
class TabBase(AQObjectBase):
    def __init__(...):
        ...

    @abstractmethod
    ....
2
2
78,783,799
2024-7-23
https://stackoverflow.com/questions/78783799/pylance-reports-code-is-unreachable-when-my-test-shows-otherwise
Pylance claims that code under if not dfs: is unreachable and greys it out in VS Code. import pandas as pd def concat_dfs(dfs: tuple[pd.DataFrame | None], logger) -> pd.DataFrame | None: # filter out dfs that are None, if any dfs = [df for df in dfs if df is not None] # concatenate dfs, if any are not None: if not dfs: logger.warning("All elements of dfs are None, so no inputs remain to be analyzed") return None # greyed out as unreachable else: return pd.concat(dfs).sort_index() I tried returning dfs alone, and not pd.concat(dfs).sort_index(), as there are/were some pd.concat related bugs in Pylance. I also tried making a little test. If Pylance is right, I would have expected not dfs = False, and it is True instead: dfs = (None, None) dfs = [df for df in dfs if df is not None] # dfs is an empty list not dfs # returns True --> code should be reachable when every dataframe in dfs is None to begin with...
The reason the code is considered unreachable is that the type hint for the argument dfs is tuple[pd.DataFrame | None]. Type hints take precedence over code interpretation, and this type hint means that the tuple has one element, so not dfs can never be true. Perhaps the type you really want to express is tuple[pd.DataFrame | None, ...] (but you should avoid assigning an object of a different type than the annotation to a variable. In other words, you should not assign [df for df in dfs if df is not None], which is a list, to dfs.).
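A minimal sketch of the suggested fix (the logger argument is dropped for brevity, and the filtered list gets its own name so the annotation stays accurate):
import pandas as pd

def concat_dfs(dfs: tuple[pd.DataFrame | None, ...]) -> pd.DataFrame | None:
    present = [df for df in dfs if df is not None]
    if not present:
        # No longer flagged as unreachable: the tuple may now have any length.
        return None
    return pd.concat(present).sort_index()

print(concat_dfs((None, None)))                      # None
print(concat_dfs((pd.DataFrame({"a": [1]}), None)))  # the single frame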
3
2
78,785,590
2024-7-23
https://stackoverflow.com/questions/78785590/numpy-apply-mask-to-values-then-take-mean-but-in-parallel
I have an 1d numpy array of values: v = np.array([0, 1, 4, 0, 5]) Furthermore, I have a 2d numpy array of boolean masks (in production, there are millions of masks): m = np.array([ [True, True, False, False, False], [True, False, True, False, True], [True, True, True, True, True], ]) I want to apply each row from the mask to the array v, and then compute the mean of the masked values. Expected behavior: results = [] for mask in m: results.append(np.mean(v[mask])) print(results) # [0.5, 3.0, 2.0] Easy to do sequentially, but I am sure there is a beautiful version in parallel? One solution, that I've found: mask = np.ones(m.shape) mask[~m] = np.nan np.nanmean(v * mask, axis=1) # [0.5, 3.0, 2.0] Is there another solution, perhaps using np.ma module? I am looking for a solution that is faster than my current two solutions.
Faster Numpy code
A faster Numpy way to do that is to perform a matrix multiplication:
(m @ v) / m.sum(axis=1)
We can optimize this further by avoiding an implicit conversion and performing the summation with 8-bit integers (this is safe only because m.shape[1], i.e. the size of v, is small -- at most 127):
(m @ v) / m.view(np.int8).sum(dtype=np.int8, axis=1)
Even faster code with Numba multithreading
We can use Numba to write a similar function and even use multiple threads:
import numba as nb

@nb.njit('(float32[::1], bool_[:,::1])', parallel=True)
def compute(v, m):
    si, sj = m.shape
    res = np.empty(si, dtype=np.float32)
    for i in nb.prange(si):
        s = np.float32(0)
        count = 0
        for j in range(sj):
            if m[i, j]:
                s += v[j]
                count += 1
        if count > 0:
            res[i] = s / count
        else:
            res[i] = np.nan
    return res
Results
Here are results on my i5-9600KF CPU (with 6 cores):
Initial vectorized code: 136 ms
jakevdp's solution: 74 ms
Numpy matmul: 28 ms
Numba sequential code: 21 ms
Optimized Numpy matmul: 20 ms <-----
Numba parallel code: 4 ms <-----
The sequential Numba implementation is unfortunately not much better than the Numpy one (Numba does not generate very good code in this case), but it scales well, so the parallel version is significantly faster. The Numba parallel version is 34 times faster than the initial vectorized code and 18 times faster than jakevdp's solution. The optimized Numpy matrix-multiplication-based solution is 3.7 times faster than jakevdp's solution. Note that the Numba code is also a bit better than jakevdp's solution and the optimized Numpy one, since it supports the case where the mean is invalid (NaN) without printing a warning about a division by 0.
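A quick correctness check of the matrix-multiplication variant against the loop from the question (tiny random data, illustration only):
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(5).astype(np.float32)
m = rng.random((1000, 5)) > 0.5
m[:, 0] = True  # make sure no row is completely empty

loop = np.array([v[mask].mean() for mask in m])
fast = (m @ v) / m.sum(axis=1)
print(np.allclose(loop, fast))  # True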
5
3
78,762,876
2024-7-18
https://stackoverflow.com/questions/78762876/numba-indexing-on-record-type-structured-array-in-numpy
I have a numpy structured array and pass one element in it to a function as below. from numba import njit import numpy as np dtype = np.dtype([ ("id", "i4"), ("qtrnm0", "S4"), ("qtr0", "f4"), ]) a = np.array([(1, b"24q1", 1.0)], dtype=dtype) @njit def upsert_numba(a, sid, qtrnm, val): a[1] = qtrnm a[2] = val #i = 0 #a[i+1] = qtrnm #a[i+2] = val return a x = (1, b"24q2", 3.0) print(upsert_numba(a[0].copy(), *x)) The above code works without problems. But if the updating is through the codes commented out, i.e. i=0;a[i+1]=qtrnm;a[i+2]=val, numba gives the following error. No implementation of function Function(<built-in function setitem>) found for signature: >>> setitem(Record(id[type=int32;offset=0],qtrnm0[type=[char x 4];offset=4],qtr0[type=float32;offset=8];12;False), int64, readonly bytes(uint8, 1d, C)) It seems like indexing is only allowed by a constant, which can be an integer or a CharSeq, known at compile time, but not an expresson on the constant, which is also known at compile time though. May I know what is happening under the hood? I have tried other constant as index like "j=i; a[j]", which also works. But unsurprisingly "j=i+1;a[j]" fails.
Consider the following functions. from numba import njit @njit def func(): i = 777 t = i return t @njit def func2(): i = 776 t = i + 1 return t You can check how each variable's type is inferred using the following method. func() func.inspect_types() This is the key lines: # i = const(int, 777) :: Literal[int](777) # t = i :: Literal[int](777) The part after :: is the type of the variable. This indicates that both i and t are of integer literal type. Next, for func2: func2() func2.inspect_types() # i = const(int, 776) :: Literal[int](776) # t = i + $const10.2 :: int64 Compared to func, you can see that t is inferred as int64 rather than an integer literal type. This means, numba performs type inference on the code before optimization. This is a reasonable choice. Typed code is required for optimization, but type inference is required to generate typed code. So first type inference is performed on Python bytecode, and then optimization is performed based on the inferred types. For more accurate and detailed information on this flow, please refer to the official documentation. In summary, you need a constant variable at the Python bytecode phase. As an additional note, numba does not support indexing records with non-literal variables. However, it is somehow possible by explicitly defining the mapping via overloading. from operator import setitem import numpy as np from numba import njit, types from numba.core.extending import overload a_dtype = np.dtype([("id", "i4"), ("qtrnm0", "S4"), ("qtr0", "f4")]) @overload(setitem) def setitem_overload_for_a(a, index, value): if getattr(a, "dtype", None) != a_dtype: return None if isinstance(value, (types.Integer, types.Float)): def numeric_impl(a, index, value): # You need to map these indexes correctly according to the dtype. if index == 0: a[0] = value elif index == 2: a[2] = value else: raise ValueError() return numeric_impl elif isinstance(value, (types.Bytes, types.CharSeq)): def bytes_impl(a, index, value): if index == 1: a[1] = value else: raise ValueError() return bytes_impl else: raise TypeError(f"Unsupported type: {index=}, {value=}, {a.dtype=}") @njit def upsert_numba(a, sid, qtrnm, val): i = 0 a[i + 1] = qtrnm a[i + 2] = val return a x = (1, b"24q2", 3.0) a = np.array([(1, b"24q1", 1.0)], dtype=a_dtype) print(upsert_numba(a[0].copy(), *x)) # (1, b'24q2', 3.) Note that this is an ad hoc strategy that requires you to hardcode setitem for each record type, and may not work in some cases. That said, it should work unless you're doing something very tricky.
2
1
78,778,792
2024-7-22
https://stackoverflow.com/questions/78778792/plotting-star-maps-with-equatorial-coordinates-system
I'm trying to generate star maps with the equatorial coordinates system (RAJ2000 and DEJ2000). However, I only get a grid system where meridians and parallels are in parallel, while parallels should be curved and meridians should converge to the north celestial pole and the south ceestial pole. I'm using some Python modules: matplotlib, skyfield (for the stereographic projection), astroquery (so I can target any object in the deep space) and astropy. This is my code: #!/usr/bin/env python # -*- coding: utf-8 -*- """Generate a skymap with equatorial grid""" import numpy as np from matplotlib import pyplot as plt from matplotlib.collections import LineCollection from skyfield.api import Star, load from skyfield.data import hipparcos, stellarium from skyfield.projections import build_stereographic_projection from astroquery.simbad import Simbad from astropy.coordinates import SkyCoord import astropy.units as u from astropy.wcs import WCS from astropy.visualization.wcsaxes import WCSAxes # Design plt.style.use("dark_background") plt.rcParams['font.family'] = 'serif' plt.rcParams['font.serif'] = ['Times New Roman'] # Query object from Simbad OBJECT = "Alioth" FOV = 30.0 MAG = 6.5 TABLE = Simbad.query_object(OBJECT) RA = TABLE['RA'][0] DEC = TABLE['DEC'][0] COORD = SkyCoord(f"{RA} {DEC}", unit=(u.hourangle, u.deg), frame='fk5') print("RA is", RA) print("DEC is", DEC) ts = load.timescale() t = ts.now() # An ephemeris from the JPL provides Sun and Earth positions. eph = load('de421.bsp') earth = eph['earth'] # Load constellation outlines from Stellarium url = ('https://raw.githubusercontent.com/Stellarium/stellarium/master' '/skycultures/modern_st/constellationship.fab') with load.open(url) as f: constellations = stellarium.parse_constellations(f) edges = [edge for name, edges in constellations for edge in edges] edges_star1 = [star1 for star1, star2 in edges] edges_star2 = [star2 for star1, star2 in edges] # The Hipparcos mission provides our star catalog. with load.open(hipparcos.URL) as f: stars = hipparcos.load_dataframe(f) # Center the chart on the specified object's position. center = earth.at(t).observe(Star(ra_hours=COORD.ra.hour, dec_degrees=COORD.dec.degree)) projection = build_stereographic_projection(center) # Compute the x and y coordinates that each star will have on the plot. star_positions = earth.at(t).observe(Star.from_dataframe(stars)) stars['x'], stars['y'] = projection(star_positions) # Create a True/False mask marking the stars bright enough to be included in our plot. bright_stars = (stars.magnitude <= MAG) magnitude = stars['magnitude'][bright_stars] marker_size = (0.5 + MAG - magnitude) ** 2.0 # The constellation lines will each begin at the x,y of one star and end at the x,y of another. 
xy1 = stars[['x', 'y']].loc[edges_star1].values xy2 = stars[['x', 'y']].loc[edges_star2].values lines_xy = np.rollaxis(np.array([xy1, xy2]), 1) # Define the limit for the plotting area angle = np.deg2rad(FOV / 2.0) limit = np.tan(angle) # Calculate limit based on the field of view # Build the figure with WCS axes fig = plt.figure(figsize=[6, 6]) wcs = WCS(naxis=2) wcs.wcs.crpix = [1, 1] wcs.wcs.cdelt = np.array([-FOV / 360, FOV / 360]) wcs.wcs.crval = [COORD.ra.deg, COORD.dec.deg] wcs.wcs.ctype = ["RA---STG", "DEC--STG"] ax = fig.add_subplot(111, projection=wcs) # Draw the constellation lines ax.add_collection(LineCollection(lines_xy, colors='#ff7f2a', linewidths=1, linestyle='-')) # Draw the stars ax.scatter(stars['x'][bright_stars], stars['y'][bright_stars], s=marker_size, color='white', zorder=2) ax.scatter(RA, DEC, marker='*', color='red', zorder=3) angle = np.pi - FOV / 360.0 * np.pi limit = np.sin(angle) / (1.0 - np.cos(angle)) # Set plot limits ax.set_xlim(-limit, limit) ax.set_ylim(-limit, limit) ax.set_aspect('equal') # Add RA/Dec grid lines ax.coords.grid(True, color='white', linestyle='dotted') # Set the coordinate grid ax.coords[0].set_axislabel('RA (hours)') ax.coords[1].set_axislabel('Dec (degrees)') ax.coords[0].set_major_formatter('hh:mm:ss') ax.coords[1].set_major_formatter('dd:mm:ss') # Title ax.set_title(f'Sky map centered on {OBJECT}', color='white', y=1.04) # Save the image FILE = "chart.png" plt.savefig(FILE, dpi=100, facecolor='#1a1a1a') And this is the resulting image: As you can see, the grid (parallels and meridians) are totally parallel. However, my goal is to achieve this grid: In this case, I got the right WCS from a FITS image in the DSS survey. It's automatic. However, for plotting star maps I need to create a simulation of that, working fine with the labels and the coordinates system, not as a background image or something else.
I found the solution in another forum, so I'm posting the answer here. It was easier than it seems! The essential problem was in my WCS coordinate scales: they were defined incorrectly. In fact, with FOV = 30 the image didn't actually have a field of view of 30 degrees; it was smaller. That point helped to find the answer: there was a problem with the units used. While I was plotting everything in radians, WCSAxes works in pixels, so I needed to define CDELT to be 1 radian per pixel. The corrected line is this one: wcs.wcs.cdelt = np.array([-360 / np.pi, 360 / np.pi]) This is the final image: I'm very, very happy with this!
2
0
78,776,035
2024-7-21
https://stackoverflow.com/questions/78776035/nlst-times-out-while-connecting-ftps-server-with-python
I can login to with Total Commander to server: ftps://publishedprices.co.il username: "XXXX" password empty And with lftp -u XXXX: publishedprices.co.il But when I tried to login and get the file list with Python on the same machine the nlst function returns time out. Code: from ftplib import FTP_TLS ftp_server = "publishedprices.co.il" username = 'XXXX' password = "" ftps = FTP_TLS() ftps.set_debuglevel(2) ftps.connect(ftp_server,timeout=30) print('connected') ftps.login(username, password) ftps.prot_p() print('log in') file_list = ftps.nlst() debug print: *get* '220-Welcome to Public Published Prices Server\n' *get* '220- Created by NCR L.T.D\n' *get* '220-\n' *get* '220-\n' *get* '220 ** The site is open! Have a good day.\n' *resp* '220-Welcome to Public Published Prices Server\n220- Created by NCR L.T.D\n220-\n220-\n220 ** The site is open! Have a good day.' connected *cmd* 'AUTH TLS' *put* 'AUTH TLS\r\n' *get* '234 Authentication method accepted\n' *resp* '234 Authentication method accepted' *cmd* 'USER XXXX' *put* 'USER XXXX\r\n' *get* '331 User XXXX, password please\n' *resp* '331 User XXXX, password please' *cmd* 'PASS ' *put* 'PASS \r\n' *get* '230 Password Ok, User logged in\n' *resp* '230 Password Ok, User logged in' *cmd* 'PBSZ 0' *put* 'PBSZ 0\r\n' *get* '200 PBSZ=0\n' *resp* '200 PBSZ=0' *cmd* 'PROT P' *put* 'PROT P\r\n' *get* '200 PROT P OK, data channel will be secured\n' *resp* '200 PROT P OK, data channel will be secured' log in *cmd* 'TYPE A' *put* 'TYPE A\r\n' *get* '200 Type ASCII\n' *resp* '200 Type ASCII' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (194,90,26,21,47,54)\n' *resp* '227 Entering Passive Mode (194,90,26,21,47,54)' Exception: Traceback (most recent call last): File "c:\project\test.py", line 18, in <module> file_list = ftps.nlst() ^^^^^^^^^^^ File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 553, in nlst self.retrlines(cmd, files.append) File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 462, in retrlines with self.transfercmd(cmd) as conn, \ ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 393, in transfercmd return self.ntransfercmd(cmd, rest)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 793, in ntransfercmd conn, size = super().ntransfercmd(cmd, rest) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\ftplib.py", line 354, in ntransfercmd conn = socket.create_connection((host, port), self.timeout, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 851, in create_connection raise exceptions[0] File "C:\Users\USER\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 836, in create_connection sock.connect(sa) TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond When I set ftps.set_pasv(False) I got *get* '220-Welcome to Public Published Prices Server\n' [27/11310]*get* '220- Created by NCR L.T.D\n' *get* '220-\n' *get* '220-\n' *get* '220 ** The site is open! Have a good day.\n' *resp* '220-Welcome to Public Published Prices Server\n220- Created by NCR L.T.D\n220-\n220-\n220 ** The site is open! Have a good day.' 
connected *cmd* 'AUTH TLS' *put* 'AUTH TLS\r\n' *get* '234 Authentication method accepted\n' *resp* '234 Authentication method accepted' *cmd* 'USER XXXX' *put* 'USER XXXX\r\n' *get* '331 User XXXX, password please\n' *resp* '331 User XXXX, password please' *cmd* 'PASS ' *put* 'PASS \r\n' *get* '230 Password Ok, User logged in\n' *resp* '230 Password Ok, User logged in' *cmd* 'PBSZ 0' *put* 'PBSZ 0\r\n' *get* '200 PBSZ=0\n' *resp* '200 PBSZ=0' *cmd* 'PROT P' *put* 'PROT P\r\n' *get* '200 PROT P OK, data channel will be secured\n' *resp* '200 PROT P OK, data channel will be secured' log in *cmd* 'TYPE A' *put* 'TYPE A\r\n' *get* '200 Type ASCII\n' *resp* '200 Type ASCII' *cmd* 'PORT 172,17,118,200,159,233' *put* 'PORT 172,17,118,200,159,233\r\n' *get* '500 Port command invalid\n' *resp* '500 Port command invalid' lftp log: lftp :~>debug -o log1.txt -c -t 9 lftp :~>set ftp:ssl-force true lftp :~>set ssl:verify-certificate no lftp :~>set ftp:use-feat false lftp :~>connect -u XXXX: publishedprices.co.il lftp :~>ls log file: 2024-07-22 02:22:22 publishedprices.co.il ---- Resolving host address... 2024-07-22 02:22:22 publishedprices.co.il ---- IPv6 is not supported or configured 2024-07-22 02:22:22 publishedprices.co.il ---- 1 address found: 194.90.26.22 2024-07-22 02:22:26 publishedprices.co.il ---- Connecting to publishedprices.co.il (194.90.26.22) port 21 2024-07-22 02:22:26 publishedprices.co.il <--- 220-Welcome to Public Published Prices Server 2024-07-22 02:22:26 publishedprices.co.il <--- 220- Created by NCR L.T.D 2024-07-22 02:22:26 publishedprices.co.il <--- 220- 2024-07-22 02:22:26 publishedprices.co.il <--- 220- 2024-07-22 02:22:26 publishedprices.co.il <--- 220 ** The site is open! Have a good day. 2024-07-22 02:22:26 publishedprices.co.il ---> AUTH TLS 2024-07-22 02:22:26 publishedprices.co.il <--- 234 Authentication method accepted 2024-07-22 02:22:26 publishedprices.co.il ---> USER XXXX 2024-07-22 02:22:26 Certificate: CN=*.publishedprices.co.il 2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Checking against: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 Certificate: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Issued by: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Checking against: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 Certificate: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Comodo CA Limited,CN=AAA Certificate Services 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 publishedprices.co.il <--- 331 User XXXX, password please 2024-07-22 02:22:26 publishedprices.co.il ---> PASS 2024-07-22 02:22:26 publishedprices.co.il <--- 230 Password Ok, User logged in 2024-07-22 02:22:26 publishedprices.co.il ---> PWD 2024-07-22 02:22:26 publishedprices.co.il <--- 257 "/" is the current directory 2024-07-22 02:22:26 publishedprices.co.il ---> PBSZ 0 2024-07-22 02:22:26 publishedprices.co.il <--- 200 PBSZ=0 2024-07-22 02:22:26 publishedprices.co.il ---> PROT P 2024-07-22 02:22:26 publishedprices.co.il <--- 
200 PROT P OK, data channel will be secured 2024-07-22 02:22:26 publishedprices.co.il ---> PASV 2024-07-22 02:22:26 publishedprices.co.il <--- 227 Entering Passive Mode (194,90,26,21,48,206) 2024-07-22 02:22:26 publishedprices.co.il ---- Connecting data socket to (194.90.26.21) port 12494 2024-07-22 02:22:26 publishedprices.co.il ---- Data connection established 2024-07-22 02:22:26 publishedprices.co.il ---> LIST 2024-07-22 02:22:26 publishedprices.co.il <--- 150 Opening data connection 2024-07-22 02:22:26 Certificate: CN=*.publishedprices.co.il 2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Checking against: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 Certificate: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA 2024-07-22 02:22:26 Issued by: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Checking against: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 Certificate: C=US,ST=New Jersey,L=Jersey City,O=The USERTRUST Network,CN=USERTrust RSA Certification Authority 2024-07-22 02:22:26 Issued by: C=GB,ST=Greater Manchester,L=Salford,O=Comodo CA Limited,CN=AAA Certificate Services 2024-07-22 02:22:26 Trusted 2024-07-22 02:22:26 publishedprices.co.il <--- 226 Transfer complete 2024-07-22 02:22:26 publishedprices.co.il ---- Got EOF on data connection 2024-07-22 02:22:26 publishedprices.co.il ---- Closing data socket 2024-07-22 02:22:29 publishedprices.co.il ---> QUIT 2024-07-22 02:22:29 publishedprices.co.il <--- 221 Goodbye 2024-07-22 02:22:29 publishedprices.co.il ---- Closing control socket
The server is very unusual in using a different IP address for the primary FTP port (...22) than for the data connections (...21). As it is common for servers to return invalid IP addresses in PASV responses, ftplib in recent versions of Python (3.6 and newer) ignores the returned IP address and always connects to the primary address for the data connections. See Cannot list FTP directory using ftplib – but FTP client works. With your unusual server, however, you actually want the opposite. Set FTP.trust_server_pasv_ipv4_address to make ftplib connect to the actual IP address the server returns: ftps.trust_server_pasv_ipv4_address = True file_list = ftps.nlst()
2
5
78,785,661
2024-7-23
https://stackoverflow.com/questions/78785661/parsing-formulas-efficiently-using-regex-and-polars
I am trying to parse a series of mathematical formulas and need to extract variable names efficiently using Polars in Python. Regex support in Polars seems to be limited, particularly with look-around assertions. Is there a simple, efficient way to parse symbols from formulas? Here's the snippet of my code: import re import polars as pl # Define the regex pattern FORMULA_DECODER = r"\b[A-Za-z][A-Za-z_0-9_]*\b(?!\()" # \b # Assert a word boundary to ensure matching at the beginning of a word # [A-Za-z] # Match an uppercase or lowercase letter at the start # [A-Za-z0-9_]* # Match following zero or more occurrences of valid characters (letters, digits, or underscores) # \b # Assert a word boundary to ensure matching at the end of a word # (?!\() # Negative lookahead to ensure the match is not followed by an open parenthesis (indicating a function) # Sample formulas formulas = ["3*sin(x1+x2)+A_0", "ab*exp(2*x)"] # expected result pl.Series(formulas).map_elements(lambda formula: re.findall(FORMULA_DECODER, formula), return_dtype=pl.List(pl.String)) # Series: '' [list[str]] # [ # ["x1", "x2", "A_0"] # ["ab", "x"] # ] # Polars does not support this regex pattern pl.Series(formulas).str.extract_all(FORMULA_DECODER) # ComputeError: regex error: regex parse error: # \b[A-Za-z][A-Za-z_0-9_]*\b(?!\() # ^^^ # error: look-around, including look-ahead and look-behind, is not supported Edit Here is a small benchmark: import random import string import re import polars as pl def generate_symbol(): """Generate random symbol of length 1-3.""" characters = string.ascii_lowercase + string.ascii_uppercase return ''.join(random.sample(characters, random.randint(1, 3))) def generate_formula(): """Generate random formula with 2-5 unique symbols.""" op = ['+', '-', '*', '/'] return ''.join([generate_symbol()+random.choice(op) for _ in range(random.randint(2, 6))])[:-1] def generate_formulas(num_formulas): """Generate random formulas.""" return [generate_formula() for _ in range(num_formulas)] # Sample formulas # formulas = ["3*sin(x1+x2)+(A_0+B)", # "ab*exp(2*x)"] def parse_baseline(formulas): """Baseline serves as performance reference. It will not detect function names.""" FORMULA_DECODER_NO_LOOKAHEAD = r"\b[A-Za-z][A-Za-z_0-9_]*\b\(?" return pl.Series(formulas).str.extract_all(FORMULA_DECODER_NO_LOOKAHEAD) def parse_lookahead(formulas): FORMULA_DECODER = r"\b[A-Za-z][A-Za-z_0-9_]*\b(?!\()" return pl.Series(formulas).map_elements(lambda formula: re.findall(FORMULA_DECODER, formula), return_dtype=pl.List(pl.String)) def parse_no_lookahead_and_filter(formulas): FORMULA_DECODER_NO_LOOKAHEAD = r"\b[A-Za-z][A-Za-z_0-9_]*\b\(?" return ( pl.Series(formulas) .str.extract_all(FORMULA_DECODER_NO_LOOKAHEAD) # filter for matches not containing an open parenthesis .list.eval(pl.element().filter(~pl.element().str.contains("(", literal=True))) ) formulas = generate_formulas(1000) %timeit parse_lookahead(formulas) %timeit parse_no_lookahead_and_filter(formulas) %timeit parse_baseline(formulas) # 10.7 ms Β± 387 ΞΌs per loop (mean Β± std. dev. of 7 runs, 100 loops each) # 1.31 ms Β± 76.1 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # 708 ΞΌs Β± 6.43 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1,000 loops each)
As mentioned in the comment, you could drop the negative lookahead and optionally include the open parenthesis in the match. In a post-processing step, you could then filter out any matches containing an open parenthesis (using pl.Series.list.eval). This could look as follows. # avoid negative lookahead and optionally match open parenthesis FORMULA_DECODER_NO_LOOKAHEAD = r"\b[A-Za-z][A-Za-z_0-9_]*\b\(?" ( pl.Series(formulas) .str.extract_all(FORMULA_DECODER_NO_LOOKAHEAD) # filter for matches not containing an open parenthesis .list.eval(pl.element().filter(~pl.element().str.contains("(", literal=True))) ) shape: (2,) Series: '' [list[str]] [ ["x1", "x2", "A_0"] ["ab", "x"] ]
3
2
78,784,964
2024-7-23
https://stackoverflow.com/questions/78784964/how-to-add-legend-to-df-plot-legend-not-showing-up-df-plot
I am currently creating a scatter plot with the results of some evaluation I am doing. To get a dataframe of the same structure as mine you can run: import pandas as pd models = ["60000_25_6", "60000_26_6"] results = [] for i in range(10): for model in models: results.append({"simulation": i, "model_id": model, "count_at_1": 1, "count_at_5": 5, "count_at_10": 10}) df = pd.DataFrame(results) You will end up with a pandas dataframe that looks like so, just with default values (this is a smaller dataframe, note that the size is variable and much larger depending on the settings I use): simulation model_id count_at_1 count_at_5 count_at_10 0 0 60000_25_6 60 77 84 1 0 60000_26_6 60 76 83 2 1 60000_25_6 69 80 82 ... 18 9 60000_25_6 1 70 79 19 9 60000_26_6 1 68 74 I then use the following code to add colors to each point: import matplotlib.pyplot as plt colors = plt.get_cmap('hsv') colors = [colors(i) for i in np.linspace(0,0.95, len(models))] cmap = {model: colors[i] for i, model in enumerate(models)} df['color'] = df.apply(lambda row: cmap[row['model_id']], axis=1) And df is now: simulation model_id count_at_1 count_at_5 count_at_10 color 0 0 60000_25_6 74 81 83 (1.0, 0.0, 0.0, 1.0) 1 0 60000_26_6 75 80 83 (1.0, 0.0, 0.5, 1.0) 2 1 60000_25_6 71 84 89 (1.0, 0.0, 0.0, 1.0) ... 18 9 60000_25_6 2 69 79 (1.0, 0.0, 0.0, 1.0) 19 9 60000_26_6 2 72 78 (1.0, 0.0, 0.5, 1.0) However when I run: df.plot.scatter('count_at_1', 'count_at_5', c='color', legend=True) plt.show() No legend appears I just get a normal plot like so: How can I add a legend where it looks something like: [model_id] [color] ... But in the normal matplotlib format, I'll take it anywhere on the plot.
The data needs a label assigned for the legend, so one option is: fig, ax = plt.subplots() for model in df['model_id'].unique(): df[df["model_id"].eq(model)].plot.scatter('count_at_1', 'count_at_5', c='color', label=model, ax=ax) ax.legend(markerfirst=False) With your sample data:
4
1
78,784,292
2024-7-23
https://stackoverflow.com/questions/78784292/use-list-comprehension-with-if-else-and-for-loop-while-only-keeping-list-items
I use list comprehension to load only the images from a folder that meet a certain condition. In the same operation, I would also like to keep track of those that do not meet the condition. This is where I am having trouble. If the if and else conditions are at the beginning, each iteration yields a resulting element. The items that are caught by else are replaced by None rather than being excluded from the result list. I can't figure out how to add an else condition such that an operation can be done on the items caught by else without including them in the resulting list. here is a simplified and generalized version of the code: exclude_imgs = [1, 3] final = [ n for ix, n in enumerate(sorted(["img1", "img4", "img3", "img5", "img2"])) if ix + 1 not in exclude_imgs ] [ins] In [4]: final Out[4]: ['img2', 'img4', 'img5'] adding else condition to store excluded images: exclude_imgs = [1, 3] excluded = [] final = [ n if ix + 1 not in exclude_imgs else excluded.append(n) for ix, n in enumerate(sorted(["img1", "img4", "img3", "img5", "img2"])) ] [ins] In [6]: final Out[6]: [None, 'img2', None, 'img4', 'img5'] [ins] In [7]: excluded Out[7]: ['img1', 'img3'] How can I write this so that the results are as follows: final: ['img2', 'img4', 'img5'] excluded: ['img1', 'img3'] ?
Consider not using a list comprehension at all, precisely because you want to create two lists, not just one. exclude_imgs = [1, 3] excluded = [] final = [] for ix, n in enumerate(sorted(["img1", "img4", "img3", "img5", "img2"]), start=1): (excluded if ix in exclude_imgs else final).append(n)
1
5
78,783,365
2024-7-23
https://stackoverflow.com/questions/78783365/expand-numpy-array-to-be-able-to-broadcast-with-second-array-of-variable-depth
I have a function which can take an np.ndarray of shape (3,) or (3, N), or (3, N, M), etc.. I want to add to the input array an array of shape (3,). At the moment, I have to manually check the shape of the incoming array and if neccessary, expand the array that is added to it, so that I don't get a broadcasting error. Is there a function in numpy that can expand my array to allow broadcasting for an input array of arbitrary depth? def myfunction(input_array): array_to_add = np.array([1, 2, 3]) if len(input_array.shape) == 1: return input_array + array_to_add elif len(input_array.shape) == 2: return input_array + array_to_add[:, None] elif len(input_array.shape) == 3: return input_array + array_to_add[:, None, None] ...
One option would be to transpose before and after the addition: (input_array.T + array_to_add).T You could also use expand_dims to add the extra dimensions: (np.expand_dims(array_to_add, tuple(range(1, input_array.ndim))) + input_array ) Alternatively with broadcast_to on the reversed shape of input_array + transpose: (np.broadcast_to(array_to_add, input_array.shape[::-1]).T + input_array ) Or with a custom reshape: (array_to_add.reshape(array_to_add.shape+(1,)*(input_array.ndim-1)) + input_array )
3
1
78,783,405
2024-7-23
https://stackoverflow.com/questions/78783405/appending-multiple-dictionaries-to-specific-key-of-a-nested-dictionary
I want to store meals in a nested dictionary: the outer key is the date, and the value for each date is an inner dictionary whose keys are meal names and whose values are (total_calories, total_protein) tuples. Adding a meal for a date should not overwrite the meals already stored under that date. I tried using the code below to add the new entry, but it overwrites the previous one. date = input("Enter the date in (Day-Month-Year) format: ") name = input("Enter the name of the meal: ") current_meal[name] = (total_calories, total_protein) meal_data[date] = current_meal This is what the dictionary looks like: {'23-07-2024': {'Smoothie': (245.0, 17.0)}} Suppose I want to add a new meal to the existing date without overwriting the Smoothie dictionary. Is this possible?
You can use an if-else statement to check whether the current date is already in the dictionary. date = input("Enter the date in (Day-Month-Year) format: ") name = input("Enter the name of the meal: ") if date in meal_data: # If date is already in meal_data then set current_meal to the existing meal dict current_meal = meal_data[date] else: # if not then set it to an empty dict current_meal = {} current_meal[name] = (total_calories, total_protein) # Adding new meal to the current_meal meal_data[date] = current_meal OUTPUT
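If you prefer to avoid the explicit if-else, dict.setdefault expresses the same idea in one line (a small sketch using the same variable names as above):

meal_data.setdefault(date, {})[name] = (total_calories, total_protein)

setdefault returns the existing inner dictionary for date if there is one, and otherwise inserts an empty dict and returns it, so meals already stored under that date are never overwritten.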
2
2
78,766,494
2024-7-18
https://stackoverflow.com/questions/78766494/predictive-control-model-using-gekko
I am modeling an MPC to maintain the temperature in a building within a given interval while minimizing the energy consumption. I am using GEKKO to model my algorithm. I wrote the following code. First, I identified my model using data with the input (the disturbance: external temperature and the control), and the output y , which is the temperature. Then, I built an ARX model (using the arx function in GEKKO. This is my code : # Import library import numpy as np import pandas as pd import time # Initialize Model ts = 300 t = np.arange(0,len(data_1)*ts, ts) u_id = data_1[['T_ext','beta']] y_id = data_1[['T_int']] # system identification #meas : the time-series next step is predicted from prior measurements as in ARX na=5; nb=5 # ARX coefficients print('Identify model') start = time.time() yp,p,K = m.sysid(t,u_id,y_id,na,nb,objf=100,scale=False,diaglevel=0,pred='meas') print('temps de prediction :'+str(time.time()-start)+'s') #%% create control ARX model T_externel = np.array([5.450257,5.448852,5.447447,5.446042,5.444637,5.443232,5.441826,5.440421,5.439016, 5.440421,5.437610,5.436205,5.434799,5.433394,5.431988,5.430583,5.429177,5.427771, 5.426365, 5.424959, 5.423553 ]) m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel) # lower,heigh bound for MV TL = m.Param(values = 16) TH = m.Param(values = 18) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.options.IMODE = 6 # MPC m.options.CV_TYPE = 2 # the objective is an l2-norm (squared error) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 1 # APOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 1 # use measured value m.beta.DMAX = 1.0 # Delta MV maximum step per horizon interval m.beta.DCOST = 2.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Lower bound m.beta.LOWER = 0.0 m.beta.MEAS = 0 # set u=0 # Controlled variables m.T.STATUS = 1 # drive to set point m.T.FSTATUS = 1 # receive measurement m.T.options.CV_TYPE=2 # the objective is an l2-norm (squared error) m.T.SP = 17 # set point TL.values = np.ones(len(T_externel))*16 TH.values = np.ones(len(T_externel))*18 m.T.value = 17 # Temprature starts at 17 for i in range(len(T_externel)): m.d = T_externel[i] m.solve(disp = False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 I get the following error : File c:\algoritmempc\gekko_mpc.py:84 m.arx(p,m.y,m.u) ValueError: operands could not be broadcast together with shapes (2,) (0,) I also have another question. Since one of my inputs is a disturbance, I'm not sure if the way I declared my variables is correct or not (I want to provide the disturbances myself).
When creating a question, please include example data that reproduces the error. It appears that the code is correct with this randomly generated data: # Example data for data_1 np.random.seed(42) # For reproducibility n = 100 # Number of data points # Generating example data T_ext = np.random.uniform(low=5, high=25, size=n) beta = np.random.uniform(low=0, high=1, size=n) T_int = T_ext * 0.5 + beta * 10 + np.random.normal(loc=0, scale=2, size=n) # Create DataFrame data_1 = pd.DataFrame({ 'T_ext': T_ext, 'beta': beta, 'T_int': T_int }) I've also made minor corrections so that T_externel is correctly defined for the steady-state initialization (one value) and for the model predictive control application. # Import library import numpy as np import pandas as pd import time from gekko import GEKKO # Example data for data_1 np.random.seed(42) # For reproducibility n = 100 # Number of data points # Generating example data T_ext = np.random.uniform(low=5, high=25, size=n) beta = np.random.uniform(low=0, high=1, size=n) T_int = T_ext * 0.5 + beta * 10 + np.random.normal(loc=0, scale=2, size=n) # Create DataFrame data_1 = pd.DataFrame({ 'T_ext': T_ext, 'beta': beta, 'T_int': T_int }) # Initialize Model ts = 300 t = np.arange(0,len(data_1)*ts, ts) u_id = data_1[['T_ext','beta']] y_id = data_1[['T_int']] # system identification m = GEKKO() #meas : the time-series next step is predicted from prior measurements as in ARX na=5; nb=5 # ARX coefficients print('Identify model') start = time.time() yp,p,K = m.sysid(t,u_id,y_id,na,nb,objf=100,scale=False,diaglevel=0,pred='meas') print('temps de prediction :'+str(time.time()-start)+'s') #%% create control ARX model T_externel = np.array([5.450257,5.448852,5.447447,5.446042,5.444637,5.443232,5.441826,5.440421,5.439016, 5.440421,5.437610,5.436205,5.434799,5.433394,5.431988,5.430583,5.429177,5.427771, 5.426365, 5.424959, 5.423553 ]) m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) # lower,heigh bound for MV TL = m.Param(value = 16) TH = m.Param(value = 18) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 2 # the objective is an l2-norm (squared error) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 1 # APOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 1 # use measured value m.beta.DMAX = 1.0 # Delta MV maximum step per horizon interval m.beta.DCOST = 2.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Lower bound m.beta.LOWER = 0.0 m.beta.MEAS = 0 # set u=0 # Controlled variables m.T.STATUS = 1 # drive to set point m.T.FSTATUS = 1 # receive measurement m.T.SP = 17 # set point TL.value = np.ones(len(T_externel))*16 TH.value = np.ones(len(T_externel))*18 m.T.value = 17 # Temprature starts at 17 for i in range(len(T_externel)): m.d = T_externel[i] m.solve(disp = False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 I recommend the Temperature Control Lab (TCLab) for learning about model predictive control with ARX models. 
This is a heat transfer application, similar to the home application where there is an external temperature (T2) that can be used as a disturbance, and internal temperature (T1) that can be used as the internal temperature. This code uses TCLabModel() (digital twin) that can be switched to TCLab() if you have the device. It demonstrates MIMO MPC, but you could switch T2 to the disturbance. import numpy as np import time import matplotlib.pyplot as plt import pandas as pd import json # get gekko package with: # pip install gekko from gekko import GEKKO # get tclab package with: # pip install tclab from tclab import TCLab # Connect to Arduino a = TCLabModel() # Make an MP4 animation? make_mp4 = False if make_mp4: import imageio # required to make animation import os try: os.mkdir('./figures') except: pass # Final time tf = 10 # min # number of data points (every 2 seconds) n = tf * 30 + 1 # Percent Heater (0-100%) Q1s = np.zeros(n) Q2s = np.zeros(n) # Temperatures (degC) T1m = a.T1 * np.ones(n) T2m = a.T2 * np.ones(n) # Temperature setpoints T1sp = T1m[0] * np.ones(n) T2sp = T2m[0] * np.ones(n) # Heater set point steps about every 150 sec T1sp[3:] = 50.0 T2sp[40:] = 35.0 T1sp[80:] = 30.0 T2sp[120:] = 50.0 T1sp[160:] = 45.0 T2sp[200:] = 35.0 T1sp[240:] = 60.0 ######################################################### # Initialize Model ######################################################### # load data (20 min, dt=2 sec) and parse into columns url = 'http://apmonitor.com/do/uploads/Main/tclab_2sec.txt' data = pd.read_csv(url) t = data['Time'] u = data[['H1','H2']] y = data[['T1','T2']] # generate time-series model m = GEKKO() ################################################################## # system identification na = 2 # output coefficients nb = 2 # input coefficients print('Identify model') yp,p,K = m.sysid(t,u,y,na,nb,objf=10000,scale=False,diaglevel=1) ################################################################## # plot sysid results plt.figure() plt.subplot(2,1,1) plt.plot(t,u) plt.legend([r'$H_1$',r'$H_2$']) plt.ylabel('MVs') plt.subplot(2,1,2) plt.plot(t,y) plt.plot(t,yp) plt.legend([r'$T_{1meas}$',r'$T_{2meas}$',\ r'$T_{1pred}$',r'$T_{2pred}$']) plt.ylabel('CVs') plt.xlabel('Time') plt.savefig('sysid.png') plt.show() ################################################################## # create control ARX model y = m.Array(m.CV,2) u = m.Array(m.MV,2) m.arx(p,y,u) # rename CVs TC1 = y[0] TC2 = y[1] # rename MVs Q1 = u[0] Q2 = u[1] # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # Objective type m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time=np.linspace(0,120,61) # Manipulated variables Q1.STATUS = 1 # manipulated Q1.FSTATUS = 0 # not measured Q1.DMAX = 50.0 Q1.DCOST = 0.1 Q1.UPPER = 100.0 Q1.LOWER = 0.0 Q2.STATUS = 1 # manipulated Q2.FSTATUS = 0 # not measured Q2.DMAX = 50.0 Q2.DCOST = 0.1 Q2.UPPER = 100.0 Q2.LOWER = 0.0 # Controlled variables TC1.STATUS = 1 # drive to set point TC1.FSTATUS = 1 # receive measurement TC1.TAU = 20 # response speed (time constant) TC1.TR_INIT = 2 # reference trajectory TC1.TR_OPEN = 0 TC2.STATUS = 1 # drive to set point TC2.FSTATUS = 1 # receive measurement TC2.TAU = 20 # response speed (time constant) TC2.TR_INIT = 2 # dead-band TC2.TR_OPEN = 1 ################################################################## # Create plot plt.figure(figsize=(10,7)) plt.ion() plt.show() # Main Loop start_time = time.time() prev_time = start_time tm = 
np.zeros(n) try: for i in range(1,n-1): # Sleep time sleep_max = 2.0 sleep = sleep_max - (time.time() - prev_time) if sleep>=0.01: time.sleep(sleep-0.01) else: time.sleep(0.01) # Record time and change in time t = time.time() dt = t - prev_time prev_time = t tm[i] = t - start_time # Read temperatures in Celsius T1m[i] = a.T1 T2m[i] = a.T2 # Insert measurements TC1.MEAS = T1m[i] TC2.MEAS = T2m[i] # Adjust setpoints db1 = 1.0 # dead-band TC1.SPHI = T1sp[i] + db1 TC1.SPLO = T1sp[i] - db1 db2 = 0.2 TC2.SPHI = T2sp[i] + db2 TC2.SPLO = T2sp[i] - db2 # Adjust heaters with MPC m.solve() if m.options.APPSTATUS == 1: # Retrieve new values Q1s[i+1] = Q1.NEWVAL Q2s[i+1] = Q2.NEWVAL # get additional solution information with open(m.path+'//results.json') as f: results = json.load(f) else: # Solution failed Q1s[i+1] = 0.0 Q2s[i+1] = 0.0 # Write new heater values (0-100) a.Q1(Q1s[i]) a.Q2(Q2s[i]) # Plot plt.clf() ax=plt.subplot(3,1,1) ax.grid() plt.plot(tm[0:i+1],T1sp[0:i+1]+db1,'k-',\ label=r'$T_1$ target',lw=3) plt.plot(tm[0:i+1],T1sp[0:i+1]-db1,'k-',\ label=None,lw=3) plt.plot(tm[0:i+1],T1m[0:i+1],'r.',label=r'$T_1$ measured') plt.plot(tm[i]+m.time,results['v1.bcv'],'r-',\ label=r'$T_1$ predicted',lw=3) plt.plot(tm[i]+m.time,results['v1.tr_hi'],'k--',\ label=r'$T_1$ trajectory') plt.plot(tm[i]+m.time,results['v1.tr_lo'],'k--') plt.ylabel('Temperature (degC)') plt.legend(loc=2) ax=plt.subplot(3,1,2) ax.grid() plt.plot(tm[0:i+1],T2sp[0:i+1]+db2,'k-',\ label=r'$T_2$ target',lw=3) plt.plot(tm[0:i+1],T2sp[0:i+1]-db2,'k-',\ label=None,lw=3) plt.plot(tm[0:i+1],T2m[0:i+1],'b.',label=r'$T_2$ measured') plt.plot(tm[i]+m.time,results['v2.bcv'],'b-',\ label=r'$T_2$ predict',lw=3) plt.plot(tm[i]+m.time,results['v2.tr_hi'],'k--',\ label=r'$T_2$ range') plt.plot(tm[i]+m.time,results['v2.tr_lo'],'k--') plt.ylabel('Temperature (degC)') plt.legend(loc=2) ax=plt.subplot(3,1,3) ax.grid() plt.plot([tm[i],tm[i]],[0,100],'k-',\ label='Current Time',lw=1) plt.plot(tm[0:i+1],Q1s[0:i+1],'r.-',\ label=r'$Q_1$ history',lw=2) plt.plot(tm[i]+m.time,Q1.value,'r-',\ label=r'$Q_1$ plan',lw=3) plt.plot(tm[0:i+1],Q2s[0:i+1],'b.-',\ label=r'$Q_2$ history',lw=2) plt.plot(tm[i]+m.time,Q2.value,'b-', label=r'$Q_2$ plan',lw=3) plt.plot(tm[i]+m.time[1],Q1.value[1],color='red',\ marker='.',markersize=15) plt.plot(tm[i]+m.time[1],Q2.value[1],color='blue',\ marker='X',markersize=8) plt.ylabel('Heaters') plt.xlabel('Time (sec)') plt.legend(loc=2) plt.draw() plt.pause(0.05) if make_mp4: filename='./figures/plot_'+str(i+10000)+'.png' plt.savefig(filename) # Turn off heaters and close connection a.Q1(0) a.Q2(0) a.close() # Save figure plt.savefig('tclab_mpc.png') # generate mp4 from png figures in batches of 350 if make_mp4: images = [] iset = 0 for i in range(1,n-1): filename='./figures/plot_'+str(i+10000)+'.png' images.append(imageio.imread(filename)) if ((i+1)%350)==0: imageio.mimsave('results_'+str(iset)+'.mp4', images) iset += 1 images = [] if images!=[]: imageio.mimsave('results_'+str(iset)+'.mp4', images) # Allow user to end loop with Ctrl-C except KeyboardInterrupt: # Turn off heaters and close connection a.Q1(0) a.Q2(0) a.close() print('Shutting down') plt.savefig('tclab_mpc.png') # Make sure serial connection still closes when there's an error except: # Disconnect from Arduino a.Q1(0) a.Q2(0) a.close() print('Error: Shutting down') plt.savefig('tclab_mpc.png') raise
2
1
78,780,961
2024-7-22
https://stackoverflow.com/questions/78780961/using-getattr-with-sqlalchemy-orm-leads-to-recursionerror
Simple self-contained example below, presupposing SQLite. I'm using the SQLAlchemy (v1.3) ORM, where I have a table of world-given whatsits that should not be changed. I also have another table with the same whatsits, in a form more usable to the developer. For instance, this table of dev-whatsits has fields to keep cached results of complex calculations made on data in the raw whatsits. The dev-whatsits table is connected to the raw-whatsits table through its ID as a foreign key; this is also modelled as a (one-way) relationship in SQLAlchemy. This works fine. Now, frequently while interacting with a dev-whatsit, the developer will want to look at attributes in the underlying raw version. This is simple enough: result = dev_instance.raw_whatsit.some_attribute However, since it's the same real-world object that is represented, it would be more convenient and intuitive to be able to skip the middle bit and write: result = dev_instance.some_attribute I thought this would be reasonably simple using __getattr__, e.g. like this: def __getattr__(self, item): try: getattr(self.raw_whatsit, item) except AttributeError as e: # possibly notify here? raise e However, this leads to a RecursionError: maximum recursion depth exceeded after going back and forth between the getattr here and the line return self.impl.get(instance_state(instance), dict_) in InstrumentedAttribute.__get__ in sqlalchemy\orm\attributes.py. Is there a better way of "redirecting" attribute access in the way I want? Or is there a simple fix I have not yet found? Self-contained code giving RecursionError follows. Comment out AppWhatsit.__getattr__ and the very last print statement to make it work. from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, sessionmaker, scoped_session from sqlalchemy import create_engine, Column, ForeignKey, Integer, String Base = declarative_base() class RawWhatsit(Base): '''Data for whatsit objects given by the world, with lots and lots of attributes. This table should not be changed in any way.''' __tablename__ = 'raw_whatsits' whatsit_id = Column(Integer, primary_key=True) one_of_many_attributes = Column(Integer) another_attribute = Column(String(16)) def __init__(self, wid, attr, attr2): '''Just a helper for this demonstration.''' self.whatsit_id, self.one_of_many_attributes, self.another_attribute = wid, attr, attr2 class AppWhatsit(Base): '''A model of a whatsit intended to be used by the developer. It has a separate db table and, for instance, caches results of lengthy calculations. Has foreign key back to corresponding raw whatsit.''' __tablename__ = 'app_whatsits' whatsit_id = Column(Integer, ForeignKey('raw_whatsits.whatsit_id'), primary_key=True) result_of_complex_calc = Column(Integer) raw_whatsit = relationship('RawWhatsit') def __init__(self, raw_instance): self.whatsit_id = raw_instance.whatsit_id def do_complex_calc(self): self.result_of_complex_calc = (self.raw_whatsit.one_of_many_attributes + len(self.raw_whatsit.another_attribute)) # Attempt at making attributes of the raw whatsits more easily accessible. Leads to bottomless recursion. # (Comment out this and the very last print statement below, and the code works.) def __getattr__(self, item): try: getattr(self.raw_whatsit, item) except AttributeError as e: # possibly notify here? 
raise e def run(): # Set up database stuff: engine = create_engine('sqlite:///:memory:', echo=False) Base.metadata.create_all(engine) Session = scoped_session(sessionmaker(bind=engine)) # a class session = Session() # Populate raw table (in reality, this table is given by the world): raw_instance = RawWhatsit(1, 223, 'hello') session.add(raw_instance) session.commit() print(session.query(RawWhatsit).first().__dict__) # ... 'whatsit_id': 1, 'one_of_many_attributes': 223, ... # Later: Create a developer-friendly whatsit object associated with the raw one: raw_instance_from_db = session.query(RawWhatsit).first() dev_instance = AppWhatsit(raw_instance_from_db) session.add(dev_instance) session.commit() dev_instance.do_complex_calc() print(session.query(AppWhatsit).first().__dict__) # ... 'result_of_complex_calc': 228, 'whatsit_id': 1, ... # All is good. Now I want to see some of the basic data: print(dev_instance.raw_whatsit.another_attribute) # hello # ...but I'd prefer to be able to write: print(dev_instance.another_attribute) if __name__ == '__main__': run()
SQLAlchemy keeps the session state of an instance in the _sa_instance_state attribute, which is set on an instance if the instance doesn't yet have one. To test if an instance has the attribute, however, it has to call __getattr__ of the instance to query the name _sa_instance_state, so your overridden __getattr__ should raise an AttributeError in this case to allow the initialization logic to instantiate a state object for the instance: def __getattr__(self, item): if item == '_sa_instance_state': raise AttributeError return getattr(self.raw_whatsit, item) Demo: https://replit.com/@blhsing1/AwareMerryPatches
2
1
78,780,896
2024-7-22
https://stackoverflow.com/questions/78780896/customtkinter-grid-buttons-not-centered
I am trying to make a control panel GUI using customtkinter which essentially consits of different pages with different buttons, however when I go to a different page, the buttons aren't centered. As you can see, the first page is centered: However when I navigate to the Music Player, the buttons are aligned to the left for some reason: When the Music Controls button is pressed, I delete all home screen widgets using for widget in self.winfo_children(): widget.destroy() And then I configure the columns using self.grid_columnconfigure((0, 2), weight=1) However, the buttons on the Music Player screen seem to be aligned to the left despite grid_columnconfigure being set. Here is the code: import customtkinter from PIL import Image from time import strftime customtkinter.set_default_color_theme("dark-theme.json") playerOpen = False class App(customtkinter.CTk): def __init__(self): super().__init__() self.title("Cai's Bar Controls") self.geometry("1024x600") self.home_screen() def home_screen(self): for widget in self.winfo_children(): widget.destroy() self.grid_columnconfigure((0, 3), weight=1) global playerOpen playerOpen = False self.bullseye = customtkinter.CTkImage(light_image=Image.open("images/bullseye.png"),size=(100, 100)) self.lightbulb = customtkinter.CTkImage(light_image=Image.open("images/lightbulb.png"),size=(100, 100)) self.microphone = customtkinter.CTkImage(light_image=Image.open("images/cog.png"),size=(100, 100)) self.play = customtkinter.CTkImage(light_image=Image.open("images/play.png"),size=(100, 100)) self.nexttrack = customtkinter.CTkImage(light_image=Image.open("images/next.png"),size=(100, 100)) self.previoustrack = customtkinter.CTkImage(light_image=Image.open("images/previous.png"),size=(100, 100)) self.house = customtkinter.CTkImage(light_image=Image.open("images/home.png"),size=(100, 100)) self.power = customtkinter.CTkImage(light_image=Image.open("images/power.png"),size=(100, 100)) self.quit = customtkinter.CTkImage(light_image=Image.open("images/quit.png"),size=(100, 100)) self.playlist = customtkinter.CTkImage(light_image=Image.open("images/playlist.png"),size=(100, 100)) self.singalongindiehits = customtkinter.CTkImage(light_image=Image.open("images/singalongindiehits.jpg"),size=(200, 200)) self.the80shits = customtkinter.CTkImage(light_image=Image.open("images/80shits.jpg"),size=(200, 200)) self.massivedancehits = customtkinter.CTkImage(light_image=Image.open("images/massivedancehits.jpg"),size=(200, 200)) self.label = customtkinter.CTkLabel(self, text="Cai's Bar Controls", fg_color="transparent", font=("Roboto", 75, 'bold')) self.label.grid(row=0, column=0, padx=20, pady=20, columnspan=4) self.clock = customtkinter.CTkLabel(self, text="Time", fg_color="transparent", font=("Roboto", 150)) self.clock.grid(row=1, column=0, padx=20, pady=20, columnspan=4) self.music_controls = customtkinter.CTkButton(self, text="Music Controls", command=self.open_music_controls, image=self.play, width=200, height=200, compound='top') self.music_controls.grid(row=2, column=0, padx=20, pady=20) self.dart_counter = customtkinter.CTkButton(self, text="Dart Counter", command=self.open_dart_counter, image=self.bullseye, width=200, height=200, compound='top') self.dart_counter.grid(row=2, column=1, padx=20, pady=20) self.karaoke = customtkinter.CTkButton(self, text="Settings", command=self.open_karaoke, image=self.microphone, width=200, height=200, compound='top') self.karaoke.grid(row=2, column=2, padx=20, pady=20) self.light_controls = customtkinter.CTkButton(self, text="Light 
Controls", command=self.open_light_controls, image=self.lightbulb, width=200, height=200, compound='top') self.light_controls.grid(row=2, column=3, padx=20, pady=20) self.time() def time(self): string = strftime('%#I:%M %p') self.clock.configure(text=string) self.clock.after(1000, self.time) def open_music_controls(self): for widget in self.winfo_children(): widget.destroy() self.grid_columnconfigure((0, 2), weight=1) self.label = customtkinter.CTkLabel(self, text="Music Controls", fg_color="transparent", font=("Arial", 75)) self.label.grid(row=0, column=0, padx=20, pady=20, columnspan=3) self.previous = customtkinter.CTkButton(self, text="", command=self.previous_track, image=self.previoustrack, width=200, height=200, compound='top', font=("Arial", 25)) self.previous.grid(row=1, column=0, padx=20, pady=20) self.pause = customtkinter.CTkButton(self, text="", command=self.play_pause, image=self.play, width=200, height=200, compound='top', font=("Arial", 25)) self.pause.grid(row=1, column=1, padx=20, pady=20) self.skip = customtkinter.CTkButton(self, text="", command=self.next_track, image=self.nexttrack, width=200, height=200, compound='top', font=("Arial", 25)) self.skip.grid(row=1, column=2, padx=20, pady=20) self.home = customtkinter.CTkButton(self, text="", command=self.home_screen, image=self.house, width=50, height=50, compound='top', font=("Arial", 25)) self.home.grid(row=3, column=1, padx=20, pady=20) self.library = customtkinter.CTkButton(self, text="", command=self.open_library, image=self.playlist, width=50, height=50, compound='top', font=("Arial", 25)) self.library.grid(row=3, column=2, padx=20, pady=20) self.nowplaying = customtkinter.CTkLabel(self, text="Loading track...", fg_color="transparent", font=("Arial", 40)) self.nowplaying.grid(row=2, column=0, padx=20, pady=20, columnspan=3) global playerOpen playerOpen = True
The fourth column still has a weight of 1, so it still takes up space, causing the widgets to appear aligned to the left. Set its weight to 0 when showing the music player: def open_music_controls(self): for widget in self.winfo_children(): widget.destroy() self.grid_columnconfigure((0, 2), weight=1) self.grid_columnconfigure(3, weight=0) # add this
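If you would rather not keep track of which columns the previous page configured, another option is to reset every column weight before configuring the new page (a sketch, assuming the four columns used by the home screen):

for col in range(4):
    self.grid_columnconfigure(col, weight=0)
self.grid_columnconfigure((0, 2), weight=1)

That way each page starts from a clean grid configuration regardless of which page was shown before.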
2
4
78,780,567
2024-7-22
https://stackoverflow.com/questions/78780567/why-are-nodes-not-found-in-a-graph-in-osmnx-when-the-graph-is-all-of-new-york-an
Here is my code: import osmnx as ox # Use the following commands to download the graph file of NYC G = ox.graph_from_place('New York City', network_type='drive', simplify=True) # Coordinates for origin and destination orig_x = 40.6662 orig_y = -73.9340 dest_x = 40.6576 dest_y = -73.9208 # Find the nearest nodes orig_node = ox.distance.nearest_nodes(G, orig_x, orig_y, return_dist=True) dest_node = ox.distance.nearest_nodes(G, dest_x, dest_y, return_dist=True) print(f"Origin node: {orig_node}, Destination node: {dest_node}") # Calculate the shortest path route = ox.shortest_path(G, orig_node, dest_node, weight='length') travel_secs = sum(G[u][v][0]['length'] for u, v in zip(route[:-1], route[1:])) * 186.411821 print(f"Travel time (seconds): {travel_secs}") I'm trying to find the shortest path between these two points but I get networkx.exception.NodeNotFound: Either source 15056632.490169104 or target 15056625.267485507 is not in G Sorry if this is something obvious, I'm new to this library and haven't found any good documentation on this issue.
That's because of two problems: you swapped the x/y (i.e., New York is at -73.935242/40.730610) and treated the (osmid, dist) pair returned with return_dist=True as a pair of coordinates when asking for the shortest_path: import osmnx as ox import networkx as nx G = ox.graph_from_place("New York City", network_type="drive", simplify=True) orig_x, orig_y = -73.9340, 40.6662 dest_x, dest_y = -73.9208, 40.6576 orig_node = ox.distance.nearest_nodes(G, orig_x, orig_y) dest_node = ox.distance.nearest_nodes(G, dest_x, dest_y) route = ox.shortest_path(G, orig_node, dest_node, "length") travel_secs = nx.path_weight(G, route, "length") * 186.411821 # or if route is not needed # nx.shortest_path_length(G, orig_node, dest_node, "length") * 186.411821 print(f"Travel time (seconds): {travel_secs}") # 314517.75262762
2
1
78,780,070
2024-7-22
https://stackoverflow.com/questions/78780070/how-do-i-fix-this-reg-ex-so-that-it-matches-hyphenated-words-where-the-final-seg
I want to match all cases where a hyphenated string (which could be made up of one or multiple hyphenated segments) ends in a consonant that is not the letter m. In other words, it needs to match strings such as 'crack-l', 'crac-ken', 'cr-ca-cr-cr', etc., but not 'crack' (not hyphenated), 'br-oom' (ends in m), 'br-oo' (last segment ends in vowel) or 'cr-ca-cr-ca' (last segment ends in vowel). It is mostly successful, except for cases where there is more than one hyphen: then it returns part of the string, such as 'cr-ca-cr' out of 'cr-ca-cr-ca', even though that string should not be matched at all. Here is the code I have tried with example data: import re dummy_data = """ broom br-oom br-oo crack crack-l crac-ken crack-ed cr-ca-cr-ca cr-ca-cr-cr cr-ca-cr-cr-cr """ pattern = r'\b(?:\w+-)+\w*[bcdfghjklnpqrstvwxyz](?<!m)\b' final_consonant_hyphenated = [ m.group(0) for m in re.finditer(pattern, dummy_data, flags=re.IGNORECASE) ] print(final_consonant_hyphenated) expected output: ['crack-l', 'crac-ken', 'crack-ed', 'cr-ca-cr-cr', 'cr-ca-cr-cr-cr'] current output: ['crack-l', 'crac-ken', 'crack-ed', **'cr-ca-cr'**, 'cr-ca-cr-cr', 'cr-ca-cr-cr-cr'] (the bold string is an incorrect match, as it's part of the cr-ca-cr-ca string where the final segment ends in a vowel, not a consonant).
You could add a negative lookahead to prevent a hyphen from following the match, and, as a further idea, shorten [bcdfghjklnpqrstvwxyz](?<!m) to [a-z](?<![aeioum]). Update: As @Thefourthbird mentioned in the comments, putting the lookbehind after the word boundary \b will also result in better performance (fewer steps). \b(?:\w+-)+\w*[a-z]\b(?<![aeioum])(?!-) See this demo at regex101, or even \b(?:\w+-)+\w+\b(?<![aeioum\d_])(?!-) (without the [a-z], using \w+ instead of \w*, and also disallowing digits and underscore from \w in the lookbehind). With a possessive quantifier (using the third-party regex module from PyPI) it can be reduced further: \b(?:\w+-)+\w++(?<![aeioum\d_])(?!-)
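As a quick check, plugging that last non-possessive pattern into the finditer call from the question (reusing the same import re, dummy_data and IGNORECASE flag) produces the expected output:

pattern = r'\b(?:\w+-)+\w+\b(?<![aeioum\d_])(?!-)'
final_consonant_hyphenated = [
    m.group(0) for m in re.finditer(pattern, dummy_data, flags=re.IGNORECASE)
]
print(final_consonant_hyphenated)
# ['crack-l', 'crac-ken', 'crack-ed', 'cr-ca-cr-cr', 'cr-ca-cr-cr-cr']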
2
5
78,780,093
2024-7-22
https://stackoverflow.com/questions/78780093/trouble-figuring-out-input-arguments-to-scipy-regulargridinterpolator-for-2d-int
I'm trying to interpolate 2D data to find the value of Z at point (X,Y) as if it were a 2D lookup table. import numpy as np import pandas as pd from scipy.interpolate import RegularGridInterpolator import io xi, yi = 100, 100; # test points I want to evaluate the interpolator object at # the 2D data copied from google sheets and pasted into Jupyter notebook generates this dataframe: values = pd.read_csv(io.StringIO(''' 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,221,220,219,218,217,217,215,215,215,217,217,218,218,219,220,221,222,222,222 223,223,223,223,223,223,223,223,223,219,218,217,215,215,210,210,210,215,217,217,217,218,219,220,221,222,222 223,223,223,223,223,223,223,223,223,218,217,217,215,210,207,207,207,210,215,215,217,218,219,219,220,221,222 223,223,223,223,223,223,223,223,218,217,223,215,210,207,205,205,207,207,210,210,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,218,217,216,215,207,205,203,203,205,207,207,210,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,218,217,216,215,207,205,203,203,205,207,210,212,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,223,218,217,217,215,207,205,205,207,210,212,215,217,218,219,220,221,222,223 223,223,223,223,223,223,223,223,223,223,218,218,217,215,207,207,215,215,215,217,218,219,220,221,222,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 '''), header=None); # these are the X and Y values corresponding to the X and Y dimensions of the data (values). X = np.linspace(0, 2600, values.shape[1]); Y = np.linspace(0, 1700, values.shape[0]); Yi, Xi = np.meshgrid(Y, X); interp = RegularGridInterpolator((Y, X), values); # X and Y are switched here because Y corresponds to the rows of the dataframe and X corresponds to the columns but I'm matching the SciPy syntax. interp(np.array([[xi], [yi]]).T) # this is the portion I'm having trouble with I don't understand what the format of the arguments I'm putting into the interp object. From the documentation, it says that the input is a tuple of ndarrays. But this example shows evaluating the interp object on a np.array and on a tuple of lists. I've tried those and a combination of everything between but I'm still not getting it right... thanks for your help. The code as described above returns the following error: InvalidIndexError: (array([1]), array([1])) I'm trying to return the interpolation for a single point (every iteration of a for loop, for example). Or for all points at once after the for loop if it's necessary to do it that way.
The documentation you point to is using an array. You are using a dataframe. Based on 'How to perform interpolation on 2d grid from dataframe in python?', you can use the values attribute of the dataframe: import numpy as np import pandas as pd from scipy.interpolate import RegularGridInterpolator import io xi, yi = 100, 100; # test points I want to evaluate the interpolator object at # the 2D data copied from google sheets and pasted into Jupyter notebook generates this dataframe: values = pd.read_csv(io.StringIO(''' 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,221,220,219,218,217,217,215,215,215,217,217,218,218,219,220,221,222,222,222 223,223,223,223,223,223,223,223,223,219,218,217,215,215,210,210,210,215,217,217,217,218,219,220,221,222,222 223,223,223,223,223,223,223,223,223,218,217,217,215,210,207,207,207,210,215,215,217,218,219,219,220,221,222 223,223,223,223,223,223,223,223,218,217,223,215,210,207,205,205,207,207,210,210,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,218,217,216,215,207,205,203,203,205,207,207,210,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,218,217,216,215,207,205,203,203,205,207,210,212,215,217,218,219,220,221,222 223,223,223,223,223,223,223,223,223,218,217,217,215,207,205,205,207,210,212,215,217,218,219,220,221,222,223 223,223,223,223,223,223,223,223,223,223,218,218,217,215,207,207,215,215,215,217,218,219,220,221,222,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223,223 '''), header=None); # these are the X and Y values corresponding to the X and Y dimensions of the data (values). X = np.linspace(0, 2600, values.shape[1]); Y = np.linspace(0, 1700, values.shape[0]); Yi, Xi = np.meshgrid(Y, X); interp = RegularGridInterpolator((Y, X), values.values); # X and Y are switched here because Y corresponds to the rows of the dataframe and X corresponds to the columns but I'm matching the SciPy syntax. Alternatively, you can convert the DataFrame to a numpy array and then. transpose the array to match the expected shape, (Y, X), with the addition of values_array = values.to_numpy().T and then adjusting the interpolation call too interp = RegularGridInterpolator((X, Y), values_array).
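To actually evaluate the interpolator at the test point, the query has to use the same axis order as the grid, which is (Y, X) in the first variant above. A minimal sketch, assuming the interp object and the xi, yi values from the question:

point = np.array([[yi, xi]])  # shape (n_points, 2), ordered (Y, X) to match the grid axes
z = interp(point)             # one interpolated value per query point
print(z[0])

With the grid values passed as a NumPy array as shown above, this returns the interpolated Z at that (X, Y) location, and more points can be evaluated in one call by adding rows to the array.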
2
0
78,780,118
2024-7-22
https://stackoverflow.com/questions/78780118/how-can-i-stop-the-alembic-logger-from-deactivating-my-own-loggers-after-using-a
I'm using alembic in my code to apply database migrations at application start. I'm also using Python's builtin logging lib to log to the terminal. After applying the migrations (or running any alembic command that prints to stdout it seems) my loggers stop working, though. Code: import logging import alembic.command from alembic.config import Config as AlembicConfig logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger("app") logger.debug("Applying alembic migrations.") alembic_config = AlembicConfig("alembic.ini") alembic_config.attributes["sqlalchemy.url"] = connection_string alembic.command.upgrade(alembic_config, "head", tag="from_app") logger.debug("Terminating app.") Expected output: DEBUG:app:Applying alembic migrations. INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. DEBUG:app:Terminating app. Actual output: DEBUG:app:Applying alembic migrations. INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. The last line is missing. I've tried setting the log level again after applying the migrations (I thought maybe it changed the root logger log level): ... alembic.command.upgrade(alembic_config, "head", tag="from_app") logger.setLevel(logging.DEBUG) logger.debug("Terminating app.") In fact, even logger.critical("blah") won't log anything anymore. I've also tried applying the basic config again and getting the logger again: ... alembic.command.upgrade(alembic_config, "head", tag="from_app") logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger("app") logger.debug("Terminating app.") But to no avail. Even the root logger isn't logging anymore: ... alembic.command.upgrade(alembic_config, "head", tag="from_app") logging.basicConfig(level=logging.DEBUG) logging.debug("Terminating app.") Is there anything I can do to make sure that my loggers are logging? I'd like to keep using the builtin logging functionality, but I'm also open to using some lib for that.
The answer will be found in your alembic.ini file. You're running a command which was intended to be a script, not an API call, so alembic is configuring its own logging using logging.fileConfig, which by default uses disable_existing_loggers=True. That option will disable any existing non-root loggers unless they or their ancestors are explicitly named in the logging configuration file. So, the path of least resistance will be to set up your logging configuration in there too. There will be a section with the logging configuration in alembic.ini - look for a [loggers] section header. You'll want to modify the content so that your own loggers remain visible - add a [logger_app] section with the desired handlers, formatters etc. For consistency's sake, you may want to switch to using fileConfig from your own script too, instead of the logging.basicConfig. An alternative option would be to run alembic scripts in a subprocess, so that their logging configuration doesn't concern you. The stdout/stderr of the subprocess can always be captured and re-emitted as log events from your main process.
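A minimal sketch of that subprocess alternative (the exact alembic arguments depend on your project, and capturing the output is optional):

import logging
import subprocess

logger = logging.getLogger("app")

result = subprocess.run(
    ["alembic", "upgrade", "head"],
    capture_output=True, text=True, check=True,
)
if result.stdout or result.stderr:
    logger.info("alembic output:\n%s%s", result.stdout, result.stderr)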
5
3
78,778,894
2024-7-22
https://stackoverflow.com/questions/78778894/how-to-make-a-clickable-url-in-shiny-for-python
I tried this: app_ui = ui.page_fluid( ui.output_text("the_txt") ) def server(input, output, session): @render.text def the_txt(): url = 'https://stackoverflow.com' clickable_url = f'<a> href="{url}" target="_blank">Click here</a>' return ui.HTML(clickable_url) But the displayed text is the raw HTML: <a> href="https://stackoverflow.com" target="_blank">Click here</a> How do I display a clickable link in a Shiny for Python app?
In your app you have a small syntax error within the tag, you need <a ...> instead of <a> .... Below are two variants, depending on a static case or a situation where you have to render something. Static case You don't need a render function here since this is only HTML. Below are two alternatives: Either use ui.tags for creating an a tag or use ui.HTML for passing the HTML directly: from shiny import ui, App url = 'https://stackoverflow.com' app_ui = ui.page_fluid( ui.tags.a("Click here", href=url, target='_blank'), ui.p(), ui.HTML(f'<a href="{url}" target="_blank">Click here</a>') ) def server(input, output, session): return app=App(app_ui, server) Dynamic case Here is an example where we have an input and render a link dynamically into a text. The output here is set by ui.output_ui and contains a div with the text and the url. from shiny import ui, App, render, reactive app_ui = ui.page_fluid( ui.input_select("pageSelect", label="Pages", choices=['StackOverflow', 'Google']), ui.p(), ui.output_ui("text") ) def server(input, output, session): @reactive.calc def url(): if (input.pageSelect() == "StackOverflow"): url = 'https://stackoverflow.com' else: url = 'https://google.com' return ui.tags.a("click here", href=url, target='_blank') @render.ui def text(): return ui.div("You can ", url(), " for going to ", input.pageSelect(), ".") app=App(app_ui, server)
2
2
78,776,268
2024-7-21
https://stackoverflow.com/questions/78776268/how-can-i-efficiently-fill-null-only-certain-columns-of-a-dataframe
For example, let us say I want to fill_null(strategy="zero") only the numeric columns of my DataFrame. My current strategy is to do this: import polars as pl import polars.selectors as cs df = pl.DataFrame( [ pl.Series("id", ["alpha", None, "gamma"]), pl.Series("xs", [None, 100, 2]), ] ) final_df = df.select(cs.exclude(cs.numeric())) final_df = final_df.with_columns( df.select(cs.numeric()).fill_null(strategy="zero") ) print(final_df) shape: (3, 2) ┌───────┬─────┐ │ id ┆ xs │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═════╡ │ alpha ┆ 0 │ │ null ┆ 100 │ │ gamma ┆ 2 │ └───────┴─────┘ Are there alternative, either more idiomatic or more efficient methods to achieve what I'd like to do?
pl.DataFrame.select returns a dataframe that contains only the columns listed as arguments. In contrast, pl.DataFrame.with_columns adds columns to the dataframe (and replaces columns with the same name). In particular, this gives you the tools to perform the filling without an intermediate dataframe. You can simply use pl.DataFrame.with_columns to fill missing values only in the numeric columns (i.e. replace them with their filled versions). df.with_columns( cs.numeric().fill_null(strategy="zero") ) shape: (3, 2) ┌───────┬─────┐ │ id ┆ xs │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═════╡ │ alpha ┆ 0 │ │ null ┆ 100 │ │ gamma ┆ 2 │ └───────┴─────┘
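The same idea extends to mixing selectors if, say, string columns should get a different fill value; a small sketch beyond the original question:

df.with_columns(
    cs.numeric().fill_null(strategy="zero"),
    cs.string().fill_null("missing"),
)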
2
3
78,757,169
2024-7-17
https://stackoverflow.com/questions/78757169/python-pandas-read-sas-with-chunk-size-option-fails-with-value-error-on-index-mi
I have a very large SAS file that won't fit in memory of my server. I simply need to convert to parquet formatted file. To do so, I am reading it in chunks using the chunksize option of the read_sas method in pandas. It is mostly working / doing its job. Except, it fails with the following error after a while. This particular SAS file has 79422642 rows of data. It is not clear why it fails in the middle. import pandas as pd filename = 'mysasfile.sas7bdat' SAS_CHUNK_SIZE = 2000000 sas_chunks = pd.read_sas(filename, chunksize = SAS_CHUNK_SIZE, iterator = True) for sasDf in sas_chunks: print(sasDf.shape) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) (2000000, 184) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 340, in __next__ da = self.read(nrows=self.chunksize or 1) File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 742, in read rslt = self._chunk_to_dataframe() File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 795, in _chunk_to_dataframe rslt[name] = pd.Series(self._string_chunk[js, :], index=ix) File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/series.py", line 461, in __init__ com.require_length_match(data, index) File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/common.py", line 571, in require_length_match raise ValueError( ValueError: Length of values (2000000) does not match length of index (1179974) I just tested the same logic of the code on a smaller SAS file with fewer rows using a smaller chunk size as follows, and it seems to work fine without any errors, and also handles the last remaining chunk that is smaller than the chunk size parameter: filename = 'mysmallersasfile.sas7bdat' SAS_CHUNK_SIZE = 1000 sas_chunks = pd.read_sas(filename, chunksize = SAS_CHUNK_SIZE, iterator = True) for sasDf in sas_chunks: print(sasDf.shape) (1000, 5) (1000, 5) (1000, 5) (1000, 5) (983, 5)
Perhaps try this code: import pandas as pd import pyarrow as pa import pyarrow.parquet as pq filename = 'mysasfile.sas7bdat' output_filename = 'output.parquet' SAS_CHUNK_SIZE = 2000000 writer = None # initialize writer sas_chunks = pd.read_sas(filename, chunksize=SAS_CHUNK_SIZE, iterator=True) for i, sasDf in enumerate(sas_chunks): print(f"Processing chunk {i+1} with shape {sasDf.shape}") table = pa.Table.from_pandas(sasDf) # convert pandas DF to Arrow table if writer is None: # Create new Parquet file with 1st chunk writer = pq.ParquetWriter(output_filename, table.schema) writer.write_table(table) # write Arrow Table to Parquet file if writer: writer.close() It reads in chunks using the pd.read_sas function. pyarrow.parquet.ParquetWriter writes the data to a Parquet file while allowing appending data in chunks, which is suitable for such large datasets. Each chunk is converted to a pyarrow.Table and written to the Parquet file.
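If the resulting Parquet file also has to be consumed without loading it all at once, pyarrow can read it back in batches as well; a sketch reusing the output file name from above:

import pyarrow.parquet as pq

pf = pq.ParquetFile('output.parquet')
for batch in pf.iter_batches(batch_size=500_000):
    chunk_df = batch.to_pandas()
    # ... process chunk_df here ...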
4
2
78,778,698
2024-7-22
https://stackoverflow.com/questions/78778698/random-stratified-sampling-in-pandas
I have created a pandas dataframe as follows: import pandas as pd import numpy as np ds = {'col1' : [1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,3,4,4,4,4,4,4,4,4,4], 'col2' : [12,3,4,5,4,3,2,3,4,6,7,8,3,3,65,4,3,2,32,1,2,3,4,5,32], } df = pd.DataFrame(data=ds) The dataframe looks as follows: print(df) col1 col2 0 1 12 1 1 3 2 1 4 3 1 5 4 1 4 5 1 3 6 1 2 7 2 3 8 2 4 9 2 6 10 2 7 11 3 8 12 3 3 13 3 3 14 3 65 15 3 4 16 4 3 17 4 2 18 4 32 19 4 1 20 4 2 21 4 3 22 4 4 23 4 5 24 4 32 Based on the values of column col1, I need to extract: 3 random records where col1 == 1 2 random records such that col1 = 2 2 random records such that col1 = 3 3 random records such that col1 = 4 Can anyone help me please?
I would shuffle the whole input with sample(frac=1), then compute a groupby.cumcount to select the first N samples per group (with map and boolean indexing) where N is defined in a dictionary: # {col1: number of samples} n = {1: 3, 2: 2, 3: 2, 4: 3} out = df[df[['col1']].sample(frac=1) .groupby('col1').cumcount() .lt(df['col1'].map(n))] Shorter code, but probably less efficient, using a custom groupby.apply with a different sample for each group: n = {1: 3, 2: 2, 3: 2, 4: 3} out = (df.groupby('col1', group_keys=False) .apply(lambda g: g.sample(n=n[g.name])) ) Example output: col1 col2 0 1 12 3 1 5 4 1 4 7 2 3 8 2 4 11 3 8 13 3 3 17 4 2 18 4 32 24 4 32
2
2
78,777,910
2024-7-22
https://stackoverflow.com/questions/78777910/how-to-make-import-statements-to-import-from-another-path
I want to "hack" python's import so that it will first search in the path I specified and fallback to the original if not found. The traditional way of using sys.path will not work if there is a __init__.py, see below. CASE 1: The following works: Files: . β”œβ”€β”€ a β”‚ β”œβ”€β”€ b.py # content: x='x' β”‚ └── c.py # content: y='y' β”œβ”€β”€ hack β”‚ └── a β”‚ └── b.py # content: x='hacked' └── test.py # content: see below # test.py import sys sys.path.insert(0, 'hack') # insert the hack path from a.b import x from a.c import y print(x, y) Running test.py gives hacked y as desired, where x is hacked :) CASE 2: However, if there is a __init__.py in a, it will not work. Files: . β”œβ”€β”€ a β”‚ β”œβ”€β”€ b.py β”‚ β”œβ”€β”€ c.py β”‚ └── __init__.py # <- NOTE THIS β”œβ”€β”€ hack β”‚ └── a β”‚ └── b.py └── test.py Running test.py gives x y, where x is not hacked :( CASE 3: To fix case 2, I tried adding __init__.py to the hack path, but this disables the fallback behavior. Files: . β”œβ”€β”€ a β”‚ β”œβ”€β”€ b.py β”‚ β”œβ”€β”€ c.py β”‚ └── __init__.py β”œβ”€β”€ hack β”‚ └── a β”‚ β”œβ”€β”€ b.py β”‚ └── __init__.py # <- NOTE THIS └── test.py Running test.py raises the following error as there is no c.py in the hack path and it fails to fallback to the original path. ModuleNotFoundError: No module named 'a.c' My question is, how to make case 2 work? Additional background: The above cases are just simplified examples. In the real situation, Both a and hack/a are large repos with many subfolders and files. The imported modules (e.g. a.b) may also contain import statements that need to be hacked. Therefore, ideally the solution would be to only add a few lines of code at the top of test.py rather than modifying exisiting code. UPDATE: I have come up with a solution below (I cannot accept my own answer in 2 days). If you have better solutions or suggestions, please feel free to discuss.
The solution is to overwrite the default __import__ function (which is used by import statements) so that it first tries to import from the hack folder. __import = __import__ # save the original def _import(name, *a, **b): try: return __import('hack.'+name, *a, **b) except ImportError: return __import(name, *a, **b) __builtins__.__import__ = _import # overwrite with our own
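A short usage sketch, assuming the snippet above is placed at the top of test.py from case 2 (with the hack folder importable, e.g. sitting next to test.py):

from a.b import x   # found in hack/a/b.py -> 'hacked'
from a.c import y   # no hack/a/c.py, so it falls back to a/c.py -> 'y'
print(x, y)         # hacked y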
2
0
78,773,004
2024-7-20
https://stackoverflow.com/questions/78773004/why-do-numpy-scalars-multiply-with-custom-sequences-but-not-with-lists
I have a question to NumPy experts. Consider a NumPy scalar: c = np.arange(3.0).sum(). If I try to multiply it with a custom sequence like e.g. class S: def __init__(self, lst): self.lst = lst def __len__(self): return len(self.lst) def __getitem__(self, s): return self.lst[s] c = np.arange(3.0).sum() s = S([1, 2, 3]) print(c * s) it works and I get: array([3., 6., 9.]). However, I can't do so with a list. For instance, if I inherit S from list and try it, this does not work anymore class S(list): def __init__(self, lst): self.lst = lst def __len__(self): return len(self.lst) def __getitem__(self, s): return self.lst[s] c = np.arange(3.0).sum() s = S([1, 2, 3]) print(c * s) and I get "can't multiply sequence by non-int of type 'numpy.float64'". So how does NumPy distinguish between the two cases? I am asking because I want to prevent such behavior for my "S" class without inheriting from list. UPD Since the question has been misunderstood a few times, I try to stress more exactly what the problem is. It's not about why the list does not cope with multiplication by float and the error is raised. It is about why in the first case (when S is NOT inherited from list) the multiplication is performed by the object "c" and in the second case (when S is inherited from list) the multiplication is delegated to "s". Apparently, the method c.__mul__ does some checks which pass in the first case, but fail in the second case, so that s.__rmul__ is called. The question is essentially: What are those checks? (I strongly doubt that this is anything like isinstance(other, list)).
As pointed out by @hpaulj (thanks a lot!) this question was already around: Array and __rmul__ operator in Python Numpy. The mechanics of the problem were explained very well in this answer: stackoverflow.com/a/38230576/901925. However, the proposed solution of inheriting from np.ndarray is certainly a "dirty" one. In the reply immediately after, a solution based on the function __numpy_ufunc__ is proposed. The latter, however, is called __array_ufunc__ in modern NumPy. This attribute can simply be set to None in the definition of the class "S". This leads to delegation of the multiplication to s.__rmul__ without attempts to perform it via c.__mul__.
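A minimal sketch of what that looks like for the class from the question; the __rmul__ shown here is a hypothetical handler, because once NumPy backs off, S itself has to define what c * s should mean:

import numpy as np

class S:
    __array_ufunc__ = None              # ask NumPy not to handle ufuncs involving S

    def __init__(self, lst):
        self.lst = lst
    def __len__(self):
        return len(self.lst)
    def __getitem__(self, s):
        return self.lst[s]
    def __rmul__(self, other):          # hypothetical: scale the elements ourselves
        return S([other * v for v in self.lst])

c = np.arange(3.0).sum()
s = S([1, 2, 3])
print((c * s).lst)                      # handled by S.__rmul__, not NumPy broadcasting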
2
1
78,776,597
2024-7-22
https://stackoverflow.com/questions/78776597/using-callable-iterator-re-finditer-causes-python-to-freeze
I have a function that is called for every line of a text. def tokenize_line(line: str, cmd = ''): matches = re.finditer(Patterns.SUPPORTED_TOKENS, line) tokens_found, not_found, start_idx = [], [], 0 print(matches) for match in matches: pass # Rest of code The result of print(matches) is something like: <callable_iterator object at 0x0000021201445000> However, when I convert the iterator into a list: matches = list(re.finditer(Patterns.SUPPORTED_TOKENS, line)) or when I iterate with for: for match in matches: print(match) ...Python freezes. This issue occurs inconsistently. For example: tokenize_line('$color AS $length') # Works fine tokenize_line('FALSE + $length IS GT 7 + $length IS 4') # Freezes So, the problem arises when converting the callable_iterator into a list or iterating over it. Here is the pattern (Patterns.SUPPORTED_TOKENS) I'm using: (Β°p\d+Β°|Β°a\d+Β°|Β°m\d+Β°)|((?<!\S)(?:!\'(?:\\.|[^\'\n\\])*\'|!"(?:\\.|[^\n"\\])*")(?!\S))|((?:\'(?:\\.|[^\'\n\\])*\'|"(?:\\.|[^\n"\\])*"))|((\{(.*)\}))|((?<!\S)([@$][\w]*(?:\.[\w]*)*)(?!\S))|((?<!\d)-?\d*\.?\d+)|(\*\*|[\+\-\*\(\)/%\^]|==|&&|\|\||!=|>=|<=|>|<|~~|!~~|::|!::)|([\:/])|(\b(?:AS|AND|AT|:|BETWEEN|BY|FROM|IN|INTO|ON|OF|OR|THAN|TO|USING|WITH)\b)|(\b[a-zA-Z_][a-zA-Z0-9_]* *((?:[^;()\'""]*|"(?:[^"\\]|\\.)*"|\'(?:[^\'\\]|\\.)*\'|\([^)]*\))*?;))|((\b(?:EMPTY|STRING|NUMBER|BOOL|ARRAY|MAP|TRUE|FALSE|NULL|UNKNOWN|DOTALL|IGNORECASE|MULTILINE|ARRAY_ARRAY|ARRAY_STRING|ARRAY_MAP|ARRAY_NUMBER|ARRAY_NULL|DOT|SPACE|NEWLINE|SEMICOLON|COLON|HASH|COMMA|TAB)\b)|(\b(?:IS NOT LT|IS NOT GT|IS NOT GEQ|IS NOT LEQ|IS NOT|IS LT|IS GT|IS GEQ|IS LEQ|IS|NOT IN|NOT|IN|HAS NOT|HAS|AND|OR)\b)) Explanation of the Regular Expression Pattern: Custom Tokens: Matches specific custom tokens that start with particular characters and are followed by digits. Quoted Strings: Matches both single and double-quoted strings, including those with escape characters. Curly Braces Content: Matches anything enclosed in curly braces. Variables: Matches variables that start with specific characters (like @ or $) and can include dots for nested properties. Numbers: Matches both integers and floating-point numbers, including negative numbers. Operators: Matches various mathematical and logical operators. Colons and Slashes: Matches specific punctuation characters like colons and slashes. Keywords: Matches certain keywords that are reserved in the language. Function Definitions: Matches function definitions or similar structures, ensuring they follow specific syntax rules. Data Types and Modifiers: Matches keywords that represent data types or modifiers. Logical Operators: Matches complex logical operators used in conditional expressions. 
Example: import re SUPPORTED_TOKENS = r'(Β°p\d+Β°|Β°a\d+Β°|Β°m\d+Β°)|((?<!\S)(?:!\'(?:\\.|[^\'\n\\])*\'|!"(?:\\.|[^\n"\\])*")(?!\S))|((?:\'(?:\\.|[^\'\n\\])*\'|"(?:\\.|[^\n"\\])*"))|((\{(.*)\}))|((?<!\S)([@$][\w]*(?:\.[\w]*)*)(?!\S))|((?<!\d)-?\d*\.?\d+)|(\*\*|[\+\-\*\(\)/%\^]|==|&&|\|\||!=|>=|<=|>|<|~~|!~~|::|!::)|([\:/])|(\b(?:AS|AND|AT|:|BETWEEN|BY|FROM|IN|INTO|ON|OF|OR|THAN|TO|USING|WITH)\b)|(\b[a-zA-Z_][a-zA-Z0-9_]* *((?:[^;()\'""]*|"(?:[^"\\]|\\.)*"|\'(?:[^\'\\]|\\.)*\'|\([^)]*\))*?;))|((\b(?:EMPTY|STRING|NUMBER|BOOL|ARRAY|MAP|TRUE|FALSE|NULL|UNKNOWN|DOTALL|IGNORECASE|MULTILINE|ARRAY_ARRAY|ARRAY_STRING|ARRAY_MAP|ARRAY_NUMBER|ARRAY_NULL|DOT|SPACE|NEWLINE|SEMICOLON|COLON|HASH|COMMA|TAB)\b)|(\b(?:IS NOT LT|IS NOT GT|IS NOT GEQ|IS NOT LEQ|IS NOT|IS LT|IS GT|IS GEQ|IS LEQ|IS|NOT IN|NOT|IN|HAS NOT|HAS|AND|OR)\b))' def tokenize_line(line: str, cmd = ''): if not line: return [], [] matches = list(re.finditer(SUPPORTED_TOKENS, line)) print(list) lines = [ '$color AS $length', 'EMPTY + $length IS GT 7 + $length IS 4' ] for x in lines: tokenize_line(x) Any help to understand why this happens and how to fix it would be greatly appreciated!
As pointed out in the comments, the hang is caused by a catastrophic backtracking, as you're wrapping the pattern [^;()\'""]*, which can match the entirety of your input or any part of it, in a group that can repeat zero to many times, followed by a ;, which is not matching your input. Upon failure, the regex engine backtracks a character, still matching [^;()\'""]*, but with the outer * it repeats the inner pattern to match the leftover character, only to fail again with the following ;. It would backtrack again with now 2 leftover characters to allow the inner pattern to match them both or one at a time. In other words, the backtracking goes on until the regex engine exhausts all possible ways to partition the input, amounting to a rapidly out-of-hand 2 ^ (<number of characters> + 1) number of combinations because every gap position in the input can be either a partition or not. In this case the catastrophic backtracking can be fixed by removing the inner quantifier * from (?:[^;()\'""]*|...)*?; because the outer quantifier *? already allows a zero-to-many repetition of the inner pattern. That is, change: (?:[^;()\'""]*|...)*?; to: (?:[^;()\'""]|...)*?; Demo: https://regex101.com/r/MszbVf/1 If you're using Python 3.11 or later, you can also avoid catastrophic backtracking by making the group an atomic group so that the regex engine would not attempt to backtrack once the atomic group itself successfully matches, even if the match then fails with the following ;. That is, change: (?:[^;()\'""]*|...)*?; to: (?>[^;()\'""]*|...)*?; Demo: https://regex101.com/r/MszbVf/2
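If you want to feel the failure mode without the full pattern, a classic toy regex with the same nested-quantifier shape behaves the same way (this is an illustration, not the original pattern):

import re

# 'a+' nested inside another '+', followed by something that can never match:
# the engine ends up trying every way of splitting the run of 'a's before failing.
evil = re.compile(r'(a+)+$')
evil.search('a' * 25 + 'b')   # already noticeably slow; roughly doubles per extra 'a'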
3
6
78,775,206
2024-7-21
https://stackoverflow.com/questions/78775206/how-to-plot-a-line-on-the-second-axis-over-a-horizontal-not-vertical-bar-chart
I know how to plot a line on the second axis over a VERTICAL bar chart. Now I want the bars HORIZONTAL and the line top-to-bottom, just like the whole chart rotates 90°. If I simply replace bar with barh, the line is still left-to-right... Can I do this with Matplotlib? Here is a sample for VERTICAL bar chart: import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame( { "A": [2, 4, 8], "B": [3, 5, 7], "C": [300, 100, 200], }, index=["a", "b", "c"], ) ax0 = df.plot(y=df.columns[:2], kind="bar", figsize=(10.24, 5.12)) ax1 = df["C"].plot( kind="line", secondary_y=True, color="g", marker=".", ) plt.tight_layout() plt.show() Let me stress: I do see those questions related to VERTICAL bar charts. Now I'm asking about HORIZONTAL bar charts. So this is not a duplicate question.
Pandas plotting doesn't readily support a secondary x axis. Instead, you can directly plot via matplotlib. (Note that df.plot(...) plots via pandas. Pandas plotting is a pandas specific interface towards matplotlib, and only supports a subset of matplotlib's functionality.) import matplotlib.pyplot as plt import pandas as pd df = pd.DataFrame( { "A": [2, 4, 8], "B": [3, 5, 7], "C": [300, 100, 200], }, index=["a", "b", "c"], ) ax0 = df.plot(y=df.columns[:2], kind="barh", figsize=(10.24, 5.12)) ax1 = ax0.twiny() ax1.plot(df["C"], df.index, color="g", marker=".", ) plt.tight_layout() plt.show() PS: To exchange the top and bottom axes, you can use: ax0 = df.plot(y=df.columns[:2], kind="barh", figsize=(10.24, 5.12)) ax1 = ax0.twiny() ax1.plot(df["C"], df.index, color="g", marker=".", ) # change the direction of the y-axis ax0.invert_yaxis() # set the x-axis for ax0 at the top ax0.tick_params(top=True, bottom=False, labeltop=True, labelbottom=False) # for the ticks and tick labels ax0.xaxis.set_label_position('top') # for the axis label # set the x-axis for ax1 at the bottom ax1.tick_params(top=False, bottom=True, labeltop=False, labelbottom=True) ax1.xaxis.set_label_position('bottom')
2
3
78,774,606
2024-7-21
https://stackoverflow.com/questions/78774606/exit-on-click-no-longer-works-after-using-clearscreen
I'm working on a Python Turtle graphics program and I'm trying to use the exitonclick method to close the window when it's clicked. However, it doesn't seem to be working. from turtle import Turtle, Screen rem = Turtle() screen = Screen() rem.fd(70) def clear(): screen.clearscreen() screen.listen() screen.onkey(fun=clear,key = "c") screen.exitonclick() When I run it and try to exit the program by clicking on the screen, it works fine and quits, but after I press c and the screen is cleared, clicking no longer closes the window.
screen.clearscreen() completely resets the window. That includes removing all modifications to it made through exitonclick. The easiest solution is just to call exitonclick again in your custom function: from turtle import Turtle, Screen rem = Turtle() screen = Screen() rem.fd(70) def clear(): screen.clearscreen() screen.exitonclick() screen.listen() screen.onkey(fun=clear,key = "c") screen.exitonclick()
3
3
78,770,763
2024-7-19
https://stackoverflow.com/questions/78770763/how-to-compare-rows-within-the-same-csv-file-faster
I have a csv file containing 720,000 rows with and 10 columns, the columns that are relevant to the problem are ['timestamp_utc', 'looted_by__name', 'item_id', 'quantity'] This file is logs of items people loot of the ground in a game, the problem is that sometimes the loot logger of the ground bugs and types in the person looting the same item twice in two different rows (those two rows could be separated by up to 5 rows) with a slight difference in the timestamp_utc column otherwise ['looted_by__name', 'item_id', 'quantity'] are the same, and example of this would be: 2024-06-23T11:40:43.2187312Z,Georgeeto,T4_SOUL,2 2024-06-23T11:40:43.4588316Z,Georgeeto,T4_SOUL,2 where in this example here the 2024-06-23T11:40:43.2187312Z would be the timestamp_utc, 'Georgeeto' would be the looted_by__name, T4_SOUL would be the item_id, and 2 would be the quantity. What am trying to do here is see if ['looted_by__name', 'item_id', 'quantity'] are equal in both rows and if they are subtract both rows time stamps from one another , and if it is less that 0.5 secs I copy both corrupted lines into a Corrupted.csv file and only put one of the lines in a Clean.csv file The way I went about doing this is the following import pandas as pd import time from datetime import datetime start_time = time.time() combined_df_3 = pd.read_csv("Processing/combined_file_refined.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) combined_df_4 = pd.read_csv("Processing/combined_file_refined.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) bugged_item_df = pd.DataFrame() clean_item_df = pd.DataFrame() bugged_item_list = [] clean_item_list = [] date_format = '%Y-%m-%dT%H:%M:%S.%f' for index1,row1 in combined_df_3.iterrows(): n = 0 time_stamp_1 = datetime.strptime(row1['timestamp_utc'][:26], date_format) name_1 = row1['looted_by__name'] item_id_1 = row1['item_id'] quantity_1 = row1['quantity'] for index2, row2 in combined_df_4.iterrows(): print(str(n)) n += 1 if n > 5: break time_stamp_2 = datetime.strptime(row2['timestamp_utc'][:26], date_format) name_2 = row2['looted_by__name'] item_id_2 = row2['item_id'] quantity_2 = row2['quantity'] if time_stamp_1 == time_stamp_2 and name_1 == name_2 and item_id_1 == item_id_2 and quantity_2 == quantity_2: break # get out of for loop here elif name_1 == name_2 and item_id_1 == item_id_2 and quantity_1 == quantity_2: if time_stamp_1 > time_stamp_2: date_diff = abs(time_stamp_1 - time_stamp_2) date_diff_sec = date_diff.total_seconds() elif time_stamp_1 < time_stamp_2: date_diff = abs(time_stamp_2 - time_stamp_1) date_diff_sec = date_diff.total_seconds() if date_diff_sec < 0.5: bugged_item_df = bugged_item_df._append(row1 ,ignore_index=True) bugged_item_df = bugged_item_df._append(row2 ,ignore_index=True) #add both lines into a csv file and not write 1 of them into the final csv file elif date_diff_sec > 0.5: pass # type line into a csv file normally else: pass # type line into a csv file normally bugged_item_df.to_csv("test.csv", index=False) clean_item_df.to_csv('test2.csv', index=False) end_time = time.time() execution_time = end_time - start_time print(f"Execution time: {execution_time} seconds") The way am Doing it 'Technically' works , but it takes about 6-13hrs to go threw the entire file I came to ask if there is a way to optimize it to run faster note: code is not finished yet but you can get the idea from it update:Thanks to the advice of AKZ (i love you man) i was able to reduce the time from 13.4hrs 
to 32mins, and i realised that the code i posted was done wrong in the for loop as well so i went with the following answer import time import pandas as pd from datetime import datetime #orgnizing the rows df = pd.read_csv("processing/combined_file_refined.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) df = df.groupby(['looted_by__name', 'timestamp_utc']).sum().reset_index() df.to_csv("test.csv", index=False) bugged_item_df = pd.DataFrame() clean_item_df = pd.DataFrame() df1 =pd.read_csv("test.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) date_format = '%Y-%m-%dT%H:%M:%S.%f' n = 0 num_of_runs = 0 start_time = time.time() for index1,row1 in df.iterrows(): num_of_runs += 1 n += 1 try: row2 = df1.iloc[n] except IndexError: clean_item_df = clean_item_df._append(row1 ,ignore_index=True) break time_stamp_1 = datetime.strptime(row1['timestamp_utc'][:26], date_format) name_1 = row1['looted_by__name'] item_id_1 = row1['item_id'] quantity_1 = row1['quantity'] time_stamp_2 = datetime.strptime(row2['timestamp_utc'][:26], date_format) name_2 = row2['looted_by__name'] item_id_2 = row2['item_id'] quantity_2 = row2['quantity'] if name_1 != name_2 or item_id_1 != item_id_2 or quantity_1 != quantity_2: #add row 1 to df continue elif time_stamp_1 > time_stamp_2: date_diff_1 = abs(time_stamp_1 - time_stamp_2) date_diff_sec_1 = date_diff_1.total_seconds() if date_diff_sec_1 < 0.5: #donot add row 1 to df and add row 1 and row 2 to bugged item list bugged_item_df = bugged_item_df._append(row1 ,ignore_index=True) bugged_item_df = bugged_item_df._append(row2 ,ignore_index=True) pass elif date_diff_sec_1 > 0.5: clean_item_df = clean_item_df._append(row1 ,ignore_index=True) #add row 1 to df continue elif time_stamp_1 < time_stamp_2: date_diff_2 = abs(time_stamp_2 - time_stamp_1) date_diff_sec_2 = date_diff_2.total_seconds() if date_diff_sec_2 < 0.5: bugged_item_df = bugged_item_df._append(row1 ,ignore_index=True) bugged_item_df = bugged_item_df._append(row2 ,ignore_index=True) #donot add row 1 to df and add row 1 and row 2 to bugged item list pass elif date_diff_sec_2 > 0.5: clean_item_df = clean_item_df._append(row1 ,ignore_index=True) #add row 1 to df continue bugged_item_df.to_csv("bugged.csv", index=False) clean_item_df.to_csv("clean.csv", index=False) end_time = time.time() execution_time = end_time - start_time print(f"Execution time: {execution_time} seconds") if someone has a better answer than the one i did please post it i will greatly appreciate it update 2: i edited the code again and realised i could just remove the bugged lines faster now it does it in 60secs import time import pandas as pd from datetime import datetime #orgnizing the rows combined_df_3 = pd.read_csv("processing/combined_file_refined.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) combined_df_3 = combined_df_3.groupby(['looted_by__name', 'timestamp_utc']).sum().reset_index() combined_df_3.to_csv("processing/combined_file_orgnized.csv", index=False) bugged_item_df = pd.DataFrame() bugged_item_2df = pd.DataFrame() combined_df_4 =pd.read_csv("processing/combined_file_orgnized.csv", delimiter= ',', usecols=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity']) date_format = '%Y-%m-%dT%H:%M:%S.%f' num_of_runs = 0 for index1,row1 in combined_df_3.iterrows(): num_of_runs += 1 try: row2 = combined_df_4.iloc[num_of_runs] except IndexError: break time_stamp_1 = datetime.strptime(row1['timestamp_utc'][:26], 
date_format) name_1 = row1['looted_by__name'] item_id_1 = row1['item_id'] quantity_1 = row1['quantity'] time_stamp_2 = datetime.strptime(row2['timestamp_utc'][:26], date_format) name_2 = row2['looted_by__name'] item_id_2 = row2['item_id'] quantity_2 = row2['quantity'] if name_1 != name_2 or item_id_1 != item_id_2 or quantity_1 != quantity_2: continue elif time_stamp_1 > time_stamp_2: date_diff_1 = abs(time_stamp_1 - time_stamp_2) date_diff_sec_1 = date_diff_1.total_seconds() if date_diff_sec_1 < 0.5: #donot add row 1 to df and add row 1 and row 2 to bugged item list bugged_item_df = bugged_item_df._append(row1 ,ignore_index=True) bugged_item_df = bugged_item_df._append(row2 ,ignore_index=True) bugged_item_2df = bugged_item_2df._append(row1,ignore_index=True) elif time_stamp_1 < time_stamp_2: date_diff_2 = abs(time_stamp_2 - time_stamp_1) date_diff_sec_2 = date_diff_2.total_seconds() if date_diff_sec_2 < 0.5: bugged_item_df = bugged_item_df._append(row1 ,ignore_index=True) bugged_item_df = bugged_item_df._append(row2 ,ignore_index=True) bugged_item_2df = bugged_item_2df._append(row1,ignore_index=True) #donot add row 1 to df and add row 1 and row 2 to bugged item list bugged_item_df.to_csv("bugged.csv", index=False) print('here') clean_item_df = combined_df_3.merge(bugged_item_2df, on=['timestamp_utc', 'looted_by__name', 'item_id', 'quantity'], how='left', indicator=True).query('_merge == "left_only"').drop('_merge', axis=1) clean_item_df.to_csv("clean.csv", index=False) If someone knows how to improve it beyond 30 secs feel free to add another way
Pandas was not designed to iterate rows: Don't iterate rows in Pandas. That answer really goes down the rabbit hole in terms of performance and alternatives, but I think a good takeaway for you would be that you need a better tool for the job. Enter Python's csv module and its humble but very fast reader: nothing beats the reader in terms of row-reading performance (and probably will remain that way as the Python contributors have optimized this in C over the years). reader = csv.reader(open("input.csv", newline="")) clean_w = csv.writer(open("output-clean.csv", "w", newline="")) corrupt_w = csv.writer(open("output-corrupt.csv", "w", newline="")) Granted, a row is just a list of strings, but this actually works to your advantage because for this problem you only need to parse one field, the timestamp; the other three fields work fine just as strings because you use them for their identity, not their value: "2" or 2? doesn't matter, this problem doesn't require doing math with the quantity, you only care about the quantity "two". I bring up this idea of identity-vs-value because your Pandas code spends some time parsing "2" → 2, when "2" works just fine. I expanded on your sample rows: 2024-06-23T11:40:43.2187312Z,Georgeeto,T4_SOUL,2 2024-06-23T11:40:43.3962342Z,Alicechen,XXYY,3 2024-06-23T11:40:43.4588316Z,Georgeeto,T4_SOUL,2 2024-06-23T11:40:44.5634358Z,Bobbiejjj,AABB,1 With that, I get a clean CSV like: 2024-06-23T11:40:43.2187312Z,Georgeeto,T4_SOUL,2 2024-06-23T11:40:43.3962342Z,Alicechen,XXYY,3 2024-06-23T11:40:44.5634358Z,Bobbiejjj,AABB,1 I ran your final (to date) Pandas code against that input and got a clean CSV like: Alicechen,2024-06-23T11:40:43.3962342Z,XXYY,3 Bobbiejjj,2024-06-23T11:40:44.5634358Z,AABB,1 Georgeeto,2024-06-23T11:40:43.4588316Z,T4_SOUL,2 Similar. In yours, Georgeeto's row is out of chronological sort. My corrupt CSV looks like yours: 2024-06-23T11:40:43.2187312Z,Georgeeto,T4_SOUL,2 2024-06-23T11:40:43.4588316Z,Georgeeto,T4_SOUL,2 My program uses a last_seen dict to keep track of a row and its timestamp, keyed to a tuple of username, item, count (or, in your own terms, looted_by__name, item_id, quantity). I made a lightweight dataclass to hold the timestamp and the complete row, and created a type alias for the key: @dataclass class Entry: ts: datetime row: list[str] Key = tuple[str, str, str] # combo of username,item,quantity then, the dict looks like: last_seen: dict[Key, Entry] = {} As the reader loops through the rows, it records every row by its key. If the reader has the current row, and that row's key already exists in the dict, there could be a possible duplication, which will be determined by subtracting the two timestamps. If the two rows represent a duplicate (corrupt) entry, the current row gets marked as unclean, and then a final check of the clean flag determines whether to write to the clean or corrupt CSV. 
This allows the program to only have to loop over the input once: clean rows are written for every row with a new key, or if the key hasn't been seen in the last 500ms corrupt rows are written for every pair of rows with the same key, within 500ms of each other max_delta = timedelta(milliseconds=500) last_seen: dict[Key, Entry] = {} for row in reader: this_ts = datetime.strptime(row[0], "%Y-%m-%dT%H:%M:%S.%f") name = row[1] item = row[2] count = row[3] key = (name, item, count) clean = True last = Entry(datetime(1, 1, 1), []) # get around "possibly unbound" error if key in last_seen: last = last_seen[key] delta = this_ts - last.ts if delta < max_delta: clean = False if clean: clean_w.writerow(row) else: corrupt_w.writerow(last.row) corrupt_w.writerow(row) last_seen[key] = Entry(this_ts, row) I created a 720K-row test file and ran both of our programs against it. Yours ran in 28s and used about 239MB of memory; mine ran in under 4s and used about 14MB of memory. If I swap datetime.strptime(row[0], "%Y-%m-%dT%H:%M:%S.%f") for datetime.fromisoformat(row[0]), that shaves another 2s off mine... down to under 2s (probably because it doesn't have to interpet the format string 720K times). My complete program: import csv from dataclasses import dataclass from datetime import datetime, timedelta reader = csv.reader(open("big.csv", newline="")) clean_w = csv.writer(open("output-clean.csv", "w", newline="")) corrupt_w = csv.writer(open("output-corrupt.csv", "w", newline="")) # Copy header from reader to output writers header = next(reader) clean_w.writerow(header) corrupt_w.writerow(header) # Create small class, and a separate type; for cleaner code w/ # type safety. @dataclass class Entry: ts: datetime row: list[str] Key = tuple[str, str, str] # username,item,quantity # Precompute delta (saves about .2s over 720K iterations) max_delta = timedelta(milliseconds=500) # Initialize dict last_seen: dict[Key, Entry] = {} # Iterate reader, parse row, check for previous key in # last_seen and determine clean status, write accordingly, # save current row to key in last_seen. for row in reader: this_ts = datetime.fromisoformat(row[0]) name = row[1] item = row[2] count = row[3] key = (name, item, count) clean = True last = Entry(datetime(1, 1, 1), []) # get around "possibly unbound" error if key in last_seen: last = last_seen[key] delta = this_ts - last.ts if delta < max_delta: clean = False if clean: clean_w.writerow(row) else: corrupt_w.writerow(last.row) corrupt_w.writerow(row) last_seen[key] = Entry(this_ts, row)
3
2
78,774,364
2024-7-21
https://stackoverflow.com/questions/78774364/why-does-pyautogui-throw-an-imagenotfoundexception-instead-of-returning-none
I am trying to make image recognition program using pyautogui. I want the program to print "I can see it!" when it can see a particular image and "Nope nothing there" when it can't. I'm testing for this by using pyautogui.locateOnScreen. If the call returns anything but None, I print the first message. Otherwise, I print the second. However, my code is giving me an ImageNotFoundException instead of printing in the second case. Why is this happening? import pyautogui while True: if pyautogui.locateOnScreen('Dassault Falcon 7X.png', confidence=0.8, minSearchTime=5) is not None: print("I can see it!") time.sleep(0.5) else: print("Nope nothing there")
From the pyautogui docs: NOTE: As of version 0.9.41, if the locate functions can't find the provided image, they'll raise ImageNotFoundException instead of returning None. Your program assumes pre-0.9.41 behavior. To update it for the most recent version, replace your if-else blocks with try-except: import time from pyautogui import locateOnScreen, ImageNotFoundException while True: try: locateOnScreen('Dassault Falcon 7X.png') print("I can see it!") time.sleep(0.5) except ImageNotFoundException: print("Nope nothing there")
2
2
78,773,189
2024-7-20
https://stackoverflow.com/questions/78773189/pulling-labels-for-def-defined-functions-from-a-list-of-strings-in-python
I would like to create functions using the normal def procedure in Python with labels assigned to the namespace that are pulled from a list of strings. How can this be achieved? An example problem: given an arbitrary list of strings exampleList=['label1','label2','label3',...] of length k, initialize k def defined functions of the form exampleList=['label1','label2','label3','label4'] def label1(arg: str): if len(arg)>exampleList[0] print('the word '+arg+' has more letters than the word '+exampleList[0]) def label2(arg: str): if len(arg)>exampleList[1] print('the word '+arg+' has more letters than the word '+exampleList[1]) # ...etc. programmatically. If the functions to be defined only involve operations implementable within lambda functions, then this can be done with lambda functions by updating the globals() dict. E.g., one can write something like exampleList=['label1','label2','label3'] exampleOperation= lambda j,k: len(j)-len(k) globals().update(zip(exampleList,[lambda k:exampleOperation(j,k) for j in exampleList])) The above example initialized some lambda functions with labels assigned from the list exampleList. I would, however, like to accomplish the analogous task with normal def defined functions rather than lambda functions.
you could use a class with a factory function class MyLabels: def __init__(self, label_list): for label in label_list: setattr(self, label, self.label_factory(label)) def label_factory(self, label): def my_func(arg: str): if len(arg) > len(label): print(f'the word {arg} has more letters than the word {label}') return my_func exampleList=['label1','label2','label3','label4'] ml = MyLabels(exampleList) ml.label1('stackoverflow') #the word stackoverflow has more letters than the word label1
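If you specifically want module-level def-style functions bound to those names, as the question asks, the same factory idea can feed globals() in the spirit of the question's own globals().update approach; a sketch:

def make_label_func(label):
    def func(arg: str):
        if len(arg) > len(label):
            print(f'the word {arg} has more letters than the word {label}')
    func.__name__ = label     # cosmetic, but gives nicer reprs and tracebacks
    return func

exampleList = ['label1', 'label2', 'label3', 'label4']
globals().update({label: make_label_func(label) for label in exampleList})

label1('stackoverflow')   # the word stackoverflow has more letters than the word label1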
2
1
78,770,796
2024-7-19
https://stackoverflow.com/questions/78770796/how-do-i-get-variable-length-slices-of-values-using-pandas
I have data that includes a full name and first name, and I need to make a new column with the last name. I can assume full - first = last. I've been trying to use slice with an index the length of the first name + 1. But that index is a series, not an integer. So it's returning NaN. The commented lines show the things I tried. It took me a while to realize what the series/integer issue was. It seems this shouldn't be so difficult. Thanks import pandas as pd columns = ['Full', 'First'] data = [('Joe Smith', 'Joe'), ('Bobby Sue Ford', 'Bobby Sue'), ('Current Resident', 'Current Resident'), ('', '')] df = pd.DataFrame(data, columns=columns) #first_chars = df['First'].str.len() + 1 #last = df['Full'].str[4:] #last = df['Full'].str[first_chars:] #last = df['Full'].str.slice(first_chars) #last = df.Full.str[first_chars:] #pd.DataFrame.insert(df, loc=2, column='Last', value=last) #df['Last'] = df.Full.str[first_chars:] #df['Last'] = str(df.Full.str[first_chars:]) #first_chars = int(first_chars) #df['Last'] = df['Full'].apply(str).apply(lambda x: x[first_chars:]) df['Last'] = df['Full'].str.slice(df['First'].str.len() + 1) print(df)
Edit: Use removeprefix instead of replace to deal with cases where first and last names are the same: df['Last'] = df.apply(lambda row: row['Full'].removeprefix(row['First']).strip(), axis=1) Full First Last 0 Joe Smith Joe Smith 1 Bobby Sue Ford Bobby Sue Ford 2 Current Resident Current Resident 3 4 Joe Joe Joe Joe Original answer: Use apply on axis=1 to replace each name: df['Last'] = df.apply(lambda row: row['Full'].replace(row['First'], '').strip(), axis=1) Full First Last 0 Joe Smith Joe Smith 1 Bobby Sue Ford Bobby Sue Ford 2 Current Resident Current Resident 3
3
2
78,770,134
2024-7-19
https://stackoverflow.com/questions/78770134/why-is-the-limit-limit-of-maximum-recursion-depth-in-python-232-2-31
In Python, some programs create the error RecursionError: maximum recursion depth exceeded in comparison or similar. This is because there is a limit set for how deep the recursion can be nested. To get the current value for maximum recursion depth, use import sys sys.getrecursionlimit() The default value seems to be 1000 or 1500 depending on the specific system/python/... combination. The value can be increased to have access to deeper recursion. import sys sys.setrecursionlimit(LIMIT) where LIMIT is the new limit. This is the error that I get when I exceed the 'limit' limit: OverflowError: Python int too large to convert to C int Why is the 'limit' limit int(2**32 / 2) - 31? I would expect that it maybe is int(2**32 / 2) - 1 as it makes sense that integers are signed (32-bit) numbers, but why - 31? The question first came up as I was testing with 3.9.13 (64-bit). I also tested with 3.12 (64-bit). Edit: Here is a demo on IDLE with Python 3.9.13. Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license()" for more information. >>> import sys >>> v = 2**31-1 >>> print(v) 2147483647 >>> sys.setrecursionlimit(v) Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> sys.setrecursionlimit(v) OverflowError: Python int too large to convert to C int >>> v = 2**31-30 >>> sys.setrecursionlimit(v) Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> sys.setrecursionlimit(v) OverflowError: Python int too large to convert to C int >>> v = 2**31-31 >>> sys.setrecursionlimit(v) >>>
Usually the limit "limit" would be 2**31 - 1, i.e. the max value for a signed C integer (although the highest possible limit is documented as being platform-dependent). The lower usable range you get within IDLE is because it wraps the built-in function to impose a lower limit: >>> import sys >>> sys.setrecursionlimit <function setrecursionlimit at 0xcafef00d> >>> sys.setrecursionlimit.__wrapped__ <built-in function setrecursionlimit> This is specific to IDLE, and you should see the "full" 2**31-1 depth when using the same interpreter on the same platform otherwise. You can find the monkeypatch here in CPython sources: Lib/idlelib/run.py:install_recursionlimit_wrappers Usable values are reduced by RECURSIONLIMIT_DELTA=30, and a comment on wrapper function references bpo-26806 (IDLE not displaying RecursionError tracebacks and hangs), where Terry J. Reedy mentions the reason for patching a lower limit: IDLE adds it own calls to the call stack before (and possibly after) running user code You should also see that in the docstring:
2
4
78,765,462
2024-7-18
https://stackoverflow.com/questions/78765462/how-to-get-the-available-python-versions-on-remote-node
I have some machines with different properties and different installed versions of Python. Now I want to write a task that returns all available Python versions on every machine (some have 2.7.x, some 3.8.x and others are in between). Depending on that I want to register the highest version for later tasks. I tried to filter the output of ansible_facts.packages but I fail at using filters in YAML. - name: Gather the package facts ansible.builtin.package_facts: manager: auto strategy: all - name: Check whether a package called python is installed ansible.builtin.debug: msg: "{{ ansible_facts.packages['python*'] }} versions of foobar are installed!" I need it to return the machine and every Python version on that specific machine.
On a RHEL 9.4 System with Python 3.9.18 a minimal example playbook --- - hosts: localhost become: true gather_facts: true pre_tasks: - package_facts: tasks: - debug: var: ansible_python_version - debug: msg: "{{ ansible_facts.packages[item] }}" loop: "{{ ansible_facts.packages | select('search', regex) }}" vars: regex: 'python*' will result for ansible_python_version simply into an output of TASK [debug] ******************* ok: [localhost] => ansible_python_version: 3.9.18 from Ansible Interpreter Discovery. For ansible_facts.packages it will result into an output of TASK [debug] ************************************************ ok: [localhost] => (item=python3-setuptools-wheel) => msg: - arch: noarch epoch: null name: python3-setuptools-wheel release: 12.el9 source: rpm version: 53.0.0 ok: [localhost] => (item=python3-setuptools) => msg: - arch: noarch epoch: null name: python3-setuptools release: 12.el9 source: rpm version: 53.0.0 ok: [localhost] => (item=python3-dbus) => msg: - arch: x86_64 epoch: null name: python3-dbus release: 2.el9 source: rpm version: 1.2.18 ok: [localhost] => (item=python3-six) => msg: - arch: noarch epoch: null name: python3-six release: 9.el9 source: rpm version: 1.15.0 ok: [localhost] => (item=python3-gobject-base-noarch) => msg: - arch: noarch epoch: null name: python3-gobject-base-noarch release: 6.el9 source: rpm version: 3.40.1 ok: [localhost] => (item=python3-gobject-base) => msg: - arch: x86_64 epoch: null name: python3-gobject-base release: 6.el9 source: rpm version: 3.40.1 ok: [localhost] => (item=python3-iniparse) => msg: - arch: noarch epoch: null name: python3-iniparse release: 45.el9 source: rpm version: '0.4' ok: [localhost] => (item=python3-inotify) => msg: - arch: noarch epoch: null name: python3-inotify release: 25.el9 source: rpm version: 0.9.6 ok: [localhost] => (item=python3-distro) => msg: - arch: noarch epoch: null name: python3-distro release: 7.el9 source: rpm version: 1.5.0 ok: [localhost] => (item=python3-libcomps) => msg: - arch: x86_64 epoch: null name: python3-libcomps release: 1.el9 source: rpm version: 0.1.18 ok: [localhost] => (item=python3-chardet) => msg: - arch: noarch epoch: null name: python3-chardet release: 5.el9 source: rpm version: 4.0.0 ok: [localhost] => (item=python3-decorator) => msg: - arch: noarch epoch: null name: python3-decorator release: 6.el9 source: rpm version: 4.4.2 ok: [localhost] => (item=python3-ethtool) => msg: - arch: x86_64 epoch: null name: python3-ethtool release: 2.el9 source: rpm version: '0.15' ok: [localhost] => (item=python3-pysocks) => msg: - arch: noarch epoch: null name: python3-pysocks release: 12.el9 source: rpm version: 1.7.1 ok: [localhost] => (item=python3-pyyaml) => msg: - arch: x86_64 epoch: null name: python3-pyyaml release: 6.el9 source: rpm version: 5.4.1 ok: [localhost] => (item=python3-systemd) => msg: - arch: x86_64 epoch: null name: python3-systemd release: 18.el9 source: rpm version: '234' ok: [localhost] => (item=python3-gpg) => msg: - arch: x86_64 epoch: null name: python3-gpg release: 6.el9 source: rpm version: 1.15.1 ok: [localhost] => (item=python3-ptyprocess) => msg: - arch: noarch epoch: null name: python3-ptyprocess release: 12.el9 source: rpm version: 0.6.0 ok: [localhost] => (item=python3-pexpect) => msg: - arch: noarch epoch: null name: python3-pexpect release: 7.el9 source: rpm version: 4.8.0 ok: [localhost] => (item=python3-dateutil) => msg: - arch: noarch epoch: 1 name: python3-dateutil release: 7.el9 source: rpm version: 2.8.1 ok: [localhost] => 
(item=python3-pyparsing) => msg: - arch: noarch epoch: null name: python3-pyparsing release: 1.el9ap source: rpm version: 3.0.9 ok: [localhost] => (item=python3-packaging) => msg: - arch: noarch epoch: null name: python3-packaging release: 2.el9ap source: rpm version: '21.3' ok: [localhost] => (item=python3-pycparser) => msg: - arch: noarch epoch: null name: python3-pycparser release: 2.el9pc source: rpm version: '2.21' ok: [localhost] => (item=python3-cffi) => msg: - arch: x86_64 epoch: null name: python3-cffi release: 3.el9ap source: rpm version: 1.15.0 ok: [localhost] => (item=python3-cryptography) => msg: - arch: x86_64 epoch: null name: python3-cryptography release: 1.el9ap source: rpm version: 38.0.4 ok: [localhost] => (item=python3-resolvelib) => msg: - arch: noarch epoch: null name: python3-resolvelib release: 5.el9 source: rpm version: 0.5.4 ok: [localhost] => (item=python3-xmltodict) => msg: - arch: noarch epoch: null name: python3-xmltodict release: 15.el9 source: rpm version: 0.12.0 ok: [localhost] => (item=python3-pip-wheel) => msg: - arch: noarch epoch: null name: python3-pip-wheel release: 8.el9 source: rpm version: 21.2.3 ok: [localhost] => (item=python3-sssdconfig) => msg: - arch: noarch epoch: null name: python3-sssdconfig release: 6.el9_4 source: rpm version: 2.9.4 ok: [localhost] => (item=python3-file-magic) => msg: - arch: noarch epoch: null name: python3-file-magic release: 16.el9 source: rpm version: '5.39' ok: [localhost] => (item=python3-libselinux) => msg: - arch: x86_64 epoch: null name: python3-libselinux release: 1.el9 source: rpm version: '3.6' ok: [localhost] => (item=python3-libsemanage) => msg: - arch: x86_64 epoch: null name: python3-libsemanage release: 1.el9 source: rpm version: '3.6' ok: [localhost] => (item=python3-setools) => msg: - arch: x86_64 epoch: null name: python3-setools release: 1.el9 source: rpm version: 4.4.4 ok: [localhost] => (item=python3-urllib3) => msg: - arch: noarch epoch: null name: python3-urllib3 release: 5.el9 source: rpm version: 1.26.5 ok: [localhost] => (item=python3-requests) => msg: - arch: noarch epoch: null name: python3-requests release: 8.el9 source: rpm version: 2.25.1 ok: [localhost] => (item=python3-cloud-what) => msg: - arch: x86_64 epoch: null name: python3-cloud-what release: 1.el9 source: rpm version: 1.29.40 ok: [localhost] => (item=python3-audit) => msg: - arch: x86_64 epoch: null name: python3-audit release: 2.el9 source: rpm version: 3.1.2 ok: [localhost] => (item=python3-sss) => msg: - arch: x86_64 epoch: null name: python3-sss release: 6.el9_4 source: rpm version: 2.9.4 ok: [localhost] => (item=python3-librepo) => msg: - arch: x86_64 epoch: null name: python3-librepo release: 2.el9 source: rpm version: 1.14.5 ok: [localhost] => (item=python3-libdnf) => msg: - arch: x86_64 epoch: null name: python3-libdnf release: 8.el9 source: rpm version: 0.69.0 ok: [localhost] => (item=python3-hawkey) => msg: - arch: x86_64 epoch: null name: python3-hawkey release: 8.el9 source: rpm version: 0.69.0 ok: [localhost] => (item=python3-policycoreutils) => msg: - arch: noarch epoch: null name: python3-policycoreutils release: 2.1.el9 source: rpm version: '3.6' ok: [localhost] => (item=python3-rpm) => msg: - arch: x86_64 epoch: null name: python3-rpm release: 29.el9 source: rpm version: 4.16.1.3 ok: [localhost] => (item=python3-subscription-manager-rhsm) => msg: - arch: x86_64 epoch: null name: python3-subscription-manager-rhsm release: 1.el9 source: rpm version: 1.29.40 ok: [localhost] => (item=python3-dnf) => msg: - arch: 
noarch epoch: null name: python3-dnf release: 9.el9 source: rpm version: 4.14.0 ok: [localhost] => (item=python3-dnf-plugins-core) => msg: - arch: noarch epoch: null name: python3-dnf-plugins-core release: 13.el9 source: rpm version: 4.3.0 ok: [localhost] => (item=python3-pip) => msg: - arch: noarch epoch: null name: python3-pip release: 8.el9 source: rpm version: 21.2.3 ok: [localhost] => (item=python3-idna) => msg: - arch: noarch epoch: null name: python3-idna release: 7.el9_4.1 source: rpm version: '2.10' ok: [localhost] => (item=python3-libs) => msg: - arch: x86_64 epoch: null name: python3-libs release: 3.el9_4.1 source: rpm version: 3.9.18 ok: [localhost] => (item=python-unversioned-command) => msg: - arch: noarch epoch: null name: python-unversioned-command release: 3.el9_4.1 source: rpm version: 3.9.18 ok: [localhost] => (item=python3) => msg: - arch: x86_64 epoch: null name: python3 release: 3.el9_4.1 source: rpm version: 3.9.18 if all keys are selected which begins with python. One may also have a look into the output of - debug: msg: "{{ ansible_facts.packages.keys() }}" The issue with the initial question was ansible_facts.packages['python*'], as globbing in the variable name and the dict will not work. One need to select the dict keys differently and as shown above. Similar Q&A Checking Python version through Ansible and others which might be helpful in this case Ansible: Install Python module only if it not exist Ansible: How to install a package only if same version not installed?
2
3
78,768,306
2024-7-19
https://stackoverflow.com/questions/78768306/compare-strings-from-a-very-large-text-file-over-100-gb-with-a-small-text-file
I have two text files. One contains a very long list of strings (100 GB), the other contains about 30 strings. I need to find which lines in the second file are also in the first file and write them to another,third text file. Manually searching for each line is a pain, so I wanted to write a script to do it automatically. For this I choose Python because it is the only language that I know even a little. Essentially I tried copying this answer since I'm too inexperienced to write my own code: Compare 2 files in Python and extract differences as a strings smallfile = 'smalllist.txt' bigfile = 'biglist.txt' def file_2_list(file): with open(file) as file: lines = file.readlines() lines = [line.rstrip() for line in lines] return lines def diff_lists(lst1, lst2): differences = [] both = [] for element in lst1: if element not in lst2: differences.append(element) else: both.append(element) return(differences, both) listbig = file_2_list(bigfile) listsmall = file_2_list(smallfile) diff, both = diff_lists(listbig, listsmall) print(both) I wanted it to print me the lines that are in both lists. However it gave me a "memory error". But I'm already using a 64-bit version of Python, so the memory limit shouldn't be an issue? (I have 16 GB RAM) So how can you avoid this β€œmemory error”? Or maybe there is a better way to accomplish this task?
The file.readlines method reads the entirety of a file into memory, which you should avoid when the file is that large. You can instead read the lines of the smaller file into a set, and then iterate over the lines of the larger file to find the common lines by testing if a line is in the set: def common_lines(small_file, big_file): small_lines = set(small_file) return [line for line in big_file if line in small_lines] with open(smallfile) as file1, open(bigfile) as file2: both = common_lines(file1, file2)
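Since the goal in the question is to write the matching lines to a third file, a minimal sketch of the same set-based idea that streams the result straight to disk (the output file name here is just a placeholder) could look like this:

def write_common_lines(small_path, big_path, out_path):
    with open(small_path) as small_file:
        # strip newlines so the last line (which may lack one) still matches
        small_lines = {line.rstrip("\n") for line in small_file}
    with open(big_path) as big_file, open(out_path, "w") as out_file:
        for line in big_file:
            if line.rstrip("\n") in small_lines:
                out_file.write(line)

write_common_lines("smalllist.txt", "biglist.txt", "common_lines.txt")

This never holds more than the small file plus a single line of the big file in memory.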
2
2
78,767,542
2024-7-19
https://stackoverflow.com/questions/78767542/is-there-a-way-to-read-sequentially-pretty-printed-json-objects-in-python
Suppose you have a JSON file like this: { "a": 0 } { "a": 1 } It's not JSONL, because each object takes more than one line. But it's not a single valid JSON object either. It's sequentially listed pretty-printed JSON objects. json.loads in Python gives an error about invalid formatting if you attempt to load this, and the documentation indicates it only loads a single object. But tools like jq can read this kind of data without issue. Is there some reasonable way to work with data formatted like this using the core json library? I have an issue where I have some complex objects and while just formatting the data as JSONL works, for readability it would be better to store the data like this. I can wrap everything in a list to make it a single JSON object, but that has downsides like requiring reading the whole file in at once. There's a similar question here, but despite the title the data there isn't JSON at all.
You can partially decode text as JSON with json.JSONDecoder.raw_decode. This method returns a 2-tuple of the parsed object and the ending index of the object in the string, which you can then use as the starting index to partially decode the text for the next JSON object: import json def iter_jsons(jsons, decoder=json.JSONDecoder()): index = 0 while (index := jsons.find('{', index)) != -1: data, index = decoder.raw_decode(jsons, index) yield data so that: jsons = '''\ { "a": 0 } { "a": 1 }''' for j in iter_jsons(jsons): print(j) outputs: {'a': 0} {'a': 1} Demo here Note that the starting index as the second argument to json.JSONDecoder.raw_decode is an implementation detail, and that if you want to stick to the publicly documented API you would have to use the less efficient approach of slicing the string (which involves copying the string) from the index before you pass it to raw_decode: def iter_jsons(jsons, decoder=json.JSONDecoder()): index = 0 while (index := jsons.find('{', index)) != -1: data, index = decoder.raw_decode(jsons := jsons[index:]) yield data
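Applied to a file such as the one described in the question, a small sketch (the file name is hypothetical) would read the text once and iterate over the parsed objects:

with open("objects.json") as f:  # hypothetical file name
    for obj in iter_jsons(f.read()):
        print(obj)

Note that this still loads the whole file into one string, since raw_decode needs the text it scans to be in memory.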
3
7
78,767,142
2024-7-19
https://stackoverflow.com/questions/78767142/dictionary-indexing-with-numpy-jax
I'm writing an interpolation routine and have a dictionary which stores the function values at the fitting points. Ideally, the dictionary keys would be 2D Numpy arrays of the fitting point coordinates, np.array([x, y]), but since Numpy arrays aren't hashable these are converted to tuples for the keys. # fit_pt_coords: (n_pts, n_dims) array # fn_vals: (n_pts,) array def fit(fit_pt_coords, fn_vals): pt_map = {tuple(k): v for k, v in zip(fit_pt_coords, fn_vals)} ... Later in the code I need to get the function values using coordinates as keys in order to do the interpolation fitting. I'd like this to be within @jax.jited code, but the coordinate values are of type <class 'jax._src.interpreters.partial_eval.DynamicJaxprTracer'>, which can't be converted to a tuple. I've tried other things, like creating a dictionary key as (x + y, x - y), but again this requires concrete values, and calling .item() results in an ConcretizationTypeError. At the moment I've @jax.jited all of the code I can, and have just left this code un-jitted. It would be great if I could jit this code as well however. Are there any better ways to do the dictionary indexing (or better Jax-compatible data structures) which would allow all of the code to be jitted? I am new to Jax and still understading how it works, so I'm sure there must be better ways of doing it...
There is no way to use traced JAX values as dictionary keys. The problem is that the key values will not be known until runtime within the XLA compiler, and XLA has no dictionary-like data structure that such lookups can be lowered to. There are imperfect solutions, such as keeping the dictionary on the host and using something like io_callback to do the dict lookups on host, but this approach comes with performance penalties that will likely make it impractical. Unfortunately, your best approach for doing this efficiently under JIT would probably be to switch to a different interpolation algorithm that doesn't depend on hash table lookups.
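As a rough illustration of the host-callback idea (the lookup table, its keys and the dtype below are made-up placeholders, and the performance caveat above still applies), something along these lines is possible:

import numpy as np
import jax
import jax.numpy as jnp
from jax.experimental import io_callback

# hypothetical host-side table keyed by fitting-point coordinates
pt_map = {(0.0, 0.0): 1.0, (1.0, 0.5): 2.5}

def host_lookup(coords):
    # runs on the host with concrete NumPy values, so tuple() is allowed here
    return np.float32(pt_map[tuple(np.asarray(coords).tolist())])

@jax.jit
def lookup(coords):
    return io_callback(host_lookup, jax.ShapeDtypeStruct((), jnp.float32), coords)

print(lookup(jnp.array([1.0, 0.5])))  # 2.5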
2
2
78,766,657
2024-7-18
https://stackoverflow.com/questions/78766657/number-every-first-unique-piece-in-each-group
In each group, each 1st unique item should be given a different number in new column 'num'. I can form the groups but I don't know how to number the unique pieces. Is there a way to do that ? Unique numbers are: AF=1 / CT=2 / RT=4 / CTS=4 data = {'ATEXT': ['AF', 'AF', '', '', 'CT', 'RT', '', 'AF', 'AF', 'CTS', 'AF', 'AF', 'AF', 'CT', 'AF', 'CT', 'AF', 'AF', 'AF', 'AF', 'RT', 'RT', '', '', 'AF', 'CT', 'CT', 'RT', 'AF', 'AF', 'CT']} df = pd.DataFrame(data) df End Result should be: Out[6]: ATEXT num 0 AF 1 1 AF 2 3 4 CT 2 5 RT 3 6 7 AF 1 8 AF 9 CTS 4 10 AF 11 AF 12 AF 13 CT 2 14 AF 15 CT 16 AF 17 AF 18 AF 19 AF 20 RT 3 21 RT 22 23 24 AF 1 25 CT 2 26 CT 27 RT 3 28 AF 29 AF 30 CT my idea (does not yet give a useful result) : nl = df['ATEXT']!=("") df['num'] = (df['ATEXT'].mask(nl) .groupby(nl) .where(~df.ATEXT.duplicated() )
IIUC, I using a few intermediate columns to help with logic. Instead of using factorize you could use map to assign your unique numbers. Try: df['CODE'] = df['ATEXT'].mask(df['ATEXT'] == '').factorize()[0] + 1 df.loc[df['ATEXT'] == '', 'CODE'] = np.nan df['grp'] = df['ATEXT'].eq('').cumsum() df['Num'] = df.groupby('grp', group_keys=False).apply(lambda x: x.drop_duplicates('ATEXT'))['CODE'] df['Num'] = df['Num'].fillna('') df[['ATEXT', 'Num']] Output: ATEXT Num 0 AF 1.0 1 AF 2 3 4 CT 2.0 5 RT 3.0 6 7 AF 1.0 8 AF 9 CTS 4.0 10 AF 11 AF 12 AF 13 CT 2.0 14 AF 15 CT 16 AF 17 AF 18 AF 19 AF 20 RT 3.0 21 RT 22 23 24 AF 1.0 25 CT 2.0 26 CT 27 RT 3.0 28 AF 29 AF 30 CT Or, much like you were doing, let use transform with groupby to mask duplicates with where and duplicated in each group: df['Num_2'] = df.groupby('grp')['CODE'].transform(lambda x: x.where(~x.duplicated(keep='first')).fillna('')) df[['ATEXT', 'Num_2']] Output: ATEXT Num_2 0 AF 1.0 1 AF 2 3 4 CT 2.0 5 RT 3.0 6 7 AF 1.0 8 AF 9 CTS 4.0 10 AF 11 AF 12 AF 13 CT 2.0 14 AF 15 CT 16 AF 17 AF 18 AF 19 AF 20 RT 3.0 21 RT 22 23 24 AF 1.0 25 CT 2.0 26 CT 27 RT 3.0 28 AF 29 AF 30 CT
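For completeness, a rough sketch of the map-based variant mentioned at the start, with df as defined in the question and assuming the fixed numbering AF=1, CT=2, RT=3, CTS=4 implied by the expected output:

code_map = {'AF': 1, 'CT': 2, 'RT': 3, 'CTS': 4}   # assumed fixed numbering
df['grp'] = df['ATEXT'].eq('').cumsum()            # blank rows separate the groups
is_first = ~df.groupby('grp')['ATEXT'].transform(lambda s: s.duplicated())
df['Num'] = df['ATEXT'].map(code_map).where(is_first & df['ATEXT'].ne('')).fillna('')
df[['ATEXT', 'Num']]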
2
2
78,758,516
2024-7-17
https://stackoverflow.com/questions/78758516/where-to-put-checks-on-the-inputs-of-a-class
Where should I put checks on the inputs of a class. Right now I'm putting it in __init__ as follows, but I'm not sure if that's correct. See example below. import numpy as np class MedianTwoSortedArrays: def __init__(self, sorted_array1, sorted_array2): # check inputs -------------------------------------------------------- # check if input arrays are np.ndarray's if isinstance(sorted_array1, np.ndarray) == False or \ isinstance(sorted_array2, np.ndarray) == False: raise Exception("Input arrays need to be sorted np.ndarray's") # check if input arrays are 1D if len(sorted_array1.shape) > 1 or len(sorted_array2.shape) > 1: raise Exception("Input arrays need to be 1D np.ndarray's") # check if input arrays are sorted - note that this is O(n + m) for ind in range(0, len(sorted_array1)-1, 1): if sorted_array1[ind] > sorted_array1[ind + 1]: raise Exception("Input arrays need to be sorted") # end of input checks-------------------------------------------------- self.sorted_array1 = sorted_array1 self.sorted_array2 = sorted_array2
General Validation You generally have two opportunities to inspect the arguments passed to a constructor expression: In the __new__ method, used for instance creation In the __init__ method, used for instance initialization You should generally use __init__ for initialization and validation. Only use __new__ in cases where its unique characteristics are required. So, your checks are in the "correct" place. If you also want to validate any assignments to the instance variables that occur after initialization, you might find this question and its answers helpful. Validation in __new__ One of the distinguishing characteristics of __new__ is that it is called before an instance of the relevant class is created. In fact, the whole purpose of __new__ is to create the instance. (This instance is then passed to __init__ for initialisation.) As stated in its documentation, "__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation." Hence, you would likely include validation logic in __new__, rather than __init__, when subclassing an immutable type. Consider a simple example in which you want to create a subclass of tuple, called Point2D, that only allows the creation of instances containing 2 floats (whether it is sensible to subclass tuple for this purpose is another question): class Point2D(tuple): def __new__(cls, x, y): if not isinstance(x, (int, float)) or not isinstance(y, (int, float)): error = "The coordinates of a 2D point have to be numbers" raise TypeError(error) return super().__new__(cls, (float(x), float(y))) The documentation on __new__ states that "it is also commonly overridden in custom metaclasses in order to customize class creation." This use case, and numerous other use cases, are beyond the scope of this question. If you are still interested in the differences between __new__ and __init__, you might find these sources helpful: When to use __new__ vs. __init__? Python (and Python C API): __new__ versus __init__ What is the downside of using object.__new__ in __new__? Exception Types Unrelated to the main question: If the arguments are invalid, the type of the raised exception should be as specific as possible. In your case: If the arguments do not have the correct types, raise TypeError rather than just Exception. If the arguments do not have the correct values (or shape), raise ValueError rather than just Exception.
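Applied to the class in the question, the placement and exception-type advice above could come together as the following sketch (using NumPy's ndim and diff to keep the checks short):

import numpy as np

class MedianTwoSortedArrays:
    def __init__(self, sorted_array1, sorted_array2):
        # wrong type -> TypeError
        if not isinstance(sorted_array1, np.ndarray) or not isinstance(sorted_array2, np.ndarray):
            raise TypeError("Input arrays need to be np.ndarray's")
        # wrong shape or values -> ValueError
        if sorted_array1.ndim != 1 or sorted_array2.ndim != 1:
            raise ValueError("Input arrays need to be 1D np.ndarray's")
        if np.any(np.diff(sorted_array1) < 0) or np.any(np.diff(sorted_array2) < 0):
            raise ValueError("Input arrays need to be sorted")
        self.sorted_array1 = sorted_array1
        self.sorted_array2 = sorted_array2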
2
3
78,757,191
2024-7-17
https://stackoverflow.com/questions/78757191/multiprocessing-shared-memory-to-pass-large-arrays-between-processes
Context: I need to analyse some weather data for every hour of the year. For each hour, I need to read in some inputs for each hour before performing some calculation. One of these inputs is a very large numpy array x , which does not change and is the same for every hour of the year. The output is then a vector (1D numpy array) y, which contains the calculation result for every hour of the year. Objective: Speed up the calculation time using multiprocessing module. In particular, I am trying to pass x to each process using the shared_memory submodule of multiprocessing. I'm running CPython 3.10.8 on Windows 10, with Spyder 5.3.3 as the IDE. Code (simplified for testing purposes): import multiprocessing import numpy as np from multiprocessing import shared_memory def multiprocessing_function(args): nn, shm_name, y_shm_name, shape, dtype, N = args existing_shm = shared_memory.SharedMemory(name=shm_name) x = np.ndarray(shape, dtype=dtype, buffer=existing_shm.buf) existing_y_shm = shared_memory.SharedMemory(name=y_shm_name) y = np.ndarray((N,), dtype=dtype, buffer=existing_y_shm.buf) y[nn] = 1 existing_shm.close() existing_y_shm.close() if __name__ == '__main__': x = np.random.rand(int(1e7), 16) N = 8760 # Number of hours in a year dtype = x.dtype shm = shared_memory.SharedMemory(create=True, size=x.nbytes) shm_array = np.ndarray(x.shape, dtype=x.dtype, buffer=shm.buf) np.copyto(shm_array, x) y_shm = shared_memory.SharedMemory(create=True, size=N * x.itemsize) y_array = np.ndarray((N,), dtype=x.dtype, buffer=y_shm.buf) args_case = [(nn, shm.name, y_shm.name, x.shape, dtype, N) for nn in range(N)] with multiprocessing.Pool() as pool: pool.map(multiprocessing_function, args_case) y = np.array(y_array) shm.close() y_shm.close() shm.unlink() y_shm.unlink() Issue: When I run the code, it returns the correct vector, but 50% of the time, I get a "Windows fatal exception: access violation" and the kernel crashes. If I then change the size of the array, it might have no issues, but if I restart Spyder and try to rerun the same code with the new array size, the same error would come up and the kernel would crash again. This inconsistent behavior is incredibly strange. I have a feeling this is a memory leakage issue, but I don't know how to fix it.
Either Spyder itself or the IPython shell is trying to access one of your shared numpy arrays after the shared memory file has been closed. My first guess is that Spyder is trying to populate it's "Variable Explorer" pane by enumerating local variables. This causes an access to the numpy array, but the memory location it is pointing to is no longer valid because the SharedMemory has been closed. SharedMemory creates files on the filesystem (so they're sharable) in a way that they will only reside in memory (so they're fast). Then you are given memory-mapped access to that file as a buffer. There are some differences depending on the OS, but in general this holds true. Like any other file, you have a bit more responsibility to clean up after yourself: close() and unlink(). Unfortunately Numpy has no way to know the buffer it's pointing to has been closed, so it will go ahead and try to access the same memory it was previously pointing to. Windows calls it "Access Violation", and everyone else calls it "Segmentation Fault". To solve this problem: You can simply change the settings in Spyder to run scripts in an external terminal and not access the arrays after the shm's are closed. You can call del shm_array and del y_array after you're done with them (right before closing the shm's). This will remove them from the module scope so the IPython kernel doesn't try to access them. You can put the stuff in a function so your numpy arrays go out of scope and get garbage collected automatically when the function returns. You can just not close the shm files: Like other file handles, they get closed as part of the python process exiting. Closing your current shell (and starting a new one) should do the trick. On windows if no process has a handle to any given SHM open, it will automatically be deleted. On Linux (and maybe MacOS?) the file will only be deleted when calling unlink or when the computer is rebooted.
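As a concrete sketch of option 2 applied to the code in the question, the NumPy views are deleted before the shared memory blocks are closed:

# ... after the pool has finished, still inside `if __name__ == '__main__':`
y = np.array(y_array)   # copy the results out first

del shm_array           # drop every view into the shared buffers so nothing
del y_array             # (e.g. Spyder's Variable Explorer) touches freed memory

shm.close()
y_shm.close()
shm.unlink()
y_shm.unlink()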
2
2
78,766,426
2024-7-18
https://stackoverflow.com/questions/78766426/export-pydantic-model-classes-not-instances-to-json
I understand that Pydantic can export models to JSON, see ref. But in practice, this means instances of a model: from datetime import datetime from pydantic import BaseModel class BarModel(BaseModel): whatever: int class FooBarModel(BaseModel): foo: datetime bar: BarModel m = FooBarModel(foo=datetime(2032, 6, 1, 12, 13, 14), bar={'whatever': 123}) print(m.json()) #> {"foo": "2032-06-01T12:13:14", "bar": {"whatever": 123}} My question is parsing the class itself; so that the class could be understood by other languages, for example JavaScript. Here is an example of what I'm envisioning: {BarModel: { 'whatever': 'int'} } {FooBarModel: { 'foo': 'datetime', 'bar': {BarModel: {'whatever': 'int'}} } Is this possible using built-in Pydantic functionality?
Pydantic has built-in functionality to generate the JSON Schema of your models. This is a standardised format that other languages will have tooling to deal with. For example, with your definitions, running: import json print(json.dumps(FooBarModel.model_json_schema(), indent=2)) would output: { "$defs": { "BarModel": { "properties": { "whatever": { "title": "Whatever", "type": "integer" } }, "required": [ "whatever" ], "title": "BarModel", "type": "object" } }, "properties": { "foo": { "format": "date-time", "title": "Foo", "type": "string" }, "bar": { "$ref": "#/$defs/BarModel" } }, "required": [ "foo", "bar" ], "title": "FooBarModel", "type": "object" }
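One caveat worth flagging: model_json_schema() is the Pydantic v2 spelling. If the m.json() call in the question means v1 is in use, the equivalent call should be roughly:

print(FooBarModel.schema_json(indent=2))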
2
3
78,763,342
2024-7-18
https://stackoverflow.com/questions/78763342/compromise-between-quality-and-file-size-how-to-save-a-very-detailed-image-into
I am facing a small (big) problem: I want to generate a high resolution speckle pattern and save it as a file that I can import into a laser engraver. Can be PNG, JPEG, PDF, SVG, or TIFF. My script does a decent job of generating the pattern that I want: The user needs to first define the inputs, these are: ############ # INPUTS # ############ dpi = 1000 # dots per inch dpmm = 0.03937 * dpi # dots per mm widthOfSampleMM = 50 # mm heightOfSampleMM = 50 # mm patternSizeMM = 0.1 # mm density = 0.75 # 1 is very dense, 0 is not fine at all variation = 0.75 # 1 is very bad, 0 is very good ############ After this, I generate the empty matrix and fill it with black shapes, in this case a circle. # conversions to pixels widthOfSamplesPX = int(np.ceil(widthOfSampleMM*dpmm)) # get the width widthOfSamplesPX = widthOfSamplesPX + 10 - widthOfSamplesPX % 10 # round up the width to nearest 10 heightOfSamplePX = int(np.ceil(heightOfSampleMM*dpmm)) # get the height heightOfSamplePX = heightOfSamplePX + 10 - heightOfSamplePX % 10 # round up the height to nearest 10 patternSizePX = patternSizeMM*dpmm # this is the size of the pattern, so far I am going with circles # init an empty image im = 255*np.ones((heightOfSamplePX, widthOfSamplesPX), dtype = np.uint8) # horizontal circle centres numPoints = int(density*heightOfSamplePX/patternSizePX) # get number of patterns possible if numPoints==1: horizontal = [heightOfSamplePX // 2] else: horizontal = [int(i * heightOfSamplePX / (numPoints + 1)) for i in range(1, numPoints + 1)] # vertical circle centres numPoints = int(density*widthOfSamplesPX/patternSizePX) if numPoints==1: vertical = [widthOfSamplesPX // 2] else: vertical = [int(i * widthOfSamplesPX / (numPoints + 1)) for i in range(1, numPoints + 1)] for i in vertical: for j in horizontal: # generate the noisy information iWithNoise = i+variation*np.random.randint(-2*patternSizePX/density, +2*patternSizePX/density) jWithNoise = j+variation*np.random.randint(-2*patternSizePX/density, +2*patternSizePX/density) patternSizePXWithNoise = patternSizePX+patternSizePX*variation*(np.random.rand()-0.5)/2 cv2.circle(im, (int(iWithNoise),int(jWithNoise)), int(patternSizePXWithNoise//2), 0, -1) # add circle After this step, I can get im, here's a low quality example at dpi=1000: And here's one with my target dpi (5280): Now I would like to save im in a handlable way at high quality (DPI>1000). Is there any way to do this? 
Stuff that I have tried so far: plotting and saving the plot image with PNG, TIFF, SVG, PDF with different DPI values plt.savefig() with different dpi's cv2.imwrite() too large of a file, only solution here is to reduce DPI, which also reduces quality SVG write from matrix: I developed this function but ultimately, the files were too large: import svgwrite def matrix_to_svg(matrix, filename, padding = 0, cellSize=1): # get matrix dimensions and extremes rows, cols = matrix.shape minVal = np.min(matrix) maxVal = np.max(matrix) # get a drawing dwg = svgwrite.Drawing(filename, profile='tiny', size = (cols*cellSize+2*padding,rows*cellSize+2*padding)) # define the colormap, in this case grayscale since black and white colorScale = lambda val: svgwrite.utils.rgb(int(255*(val-minVal)/(maxVal-minVal)), int(255*(val-minVal)/(maxVal-minVal)), int(255*(val-minVal)/(maxVal-minVal))) # get the color of each pixel in the matrix and draw it for i in range(rows): for j in range(cols): color = colorScale(matrix[i, j]) dwg.add(dwg.rect(insert=(j * cellSize + padding, i * cellSize + padding), size=(cellSize, cellSize), fill=color)) dwg.save() # save PIL.save(). Files too large The problem could be also solved by generating better shapes. This would not be an obstacle either. I am open to re-write using a different method, would be grateful if someone would just point me in the right direction.
Let's make some observations of the effects of changing the DPI: DPI 1000 Height=1970 Width=1970 # Spots=140625 Raw pixels: 3880900 DPI 10000 Height=19690 Width=19690 # Spots=140625 Raw pixels: 387696100 We can see that while the number of spots drawn remains quite consistent (it does vary due to the various rounding in your calculations, but for all intents and purposes, we can consider it constant), the raw pixel count of a raster image generated increases quadratically. A vector representation would seem desireable, since it is freely scalable (quality depending on the capabilities of a renderer). Unfortunately, the way you generate the SVG is flawed, since you've basically turned it into an extremely inefficient raster representation. This is because you generate a rectangle for each individual pixel (even for those that are technically background). Consider that in an 8-bit grayscale image, such as the PNGs you generate requires 1 byte to represent a raw pixel. On the other hand, your SVG representation of a single pixel looks something like this: <rect fill="rgb(255,255,255)" height="1" width="1" x="12345" y="15432" /> Using ~70 bytes per pixel, when we're talking about tens of megapixels... clearly not the way to go. However, let's recall that the number of spots doesn't depend on DPI. Can we just represent the spots in some efficient way? Well, the spots are actually circles, parametrized by position, radius and colour. SVG supports circles, and their representation looks like this: <circle cx="84" cy="108" fill="rgb(0,0,0)" r="2" /> Let's look at the effects of changing the DPI now. DPI 1000 # Spots=140625 Raw pixels: 3880900 SVG size: 7435966 DPI 10000 # Spots=140625 Raw pixels: 387696100 SVG size: 7857942 The slight increase in size is due to increased range of position/radius values. I somewhat refactored your code example. Here's the result that demonstrates the SVG output. 
import numpy as np import cv2 import svgwrite MM_IN_INCH = 0.03937 def round_int_to_10s(value): int_value = int(value) return int_value + 10 - int_value % 10 def get_sizes_pixels(height_mm, width_mm, pattern_size_mm, dpi): dpmm = MM_IN_INCH * dpi # dots per mm width_px = round_int_to_10s(np.ceil(width_mm * dpmm)) height_px = round_int_to_10s(np.ceil(height_mm * dpmm)) pattern_size_px = pattern_size_mm * dpmm return height_px, width_px, pattern_size_px def get_grid_positions(size, pattern_size, density): count = int(density * size / pattern_size) # get number of patterns possible if count == 1: return [size // 2] return [int(i * size / (count + 1)) for i in range(1, count + 1)] def get_spot_grid(height_px, width_px, pattern_size_px, density): vertical = get_grid_positions(height_px, pattern_size_px, density) horizontal = get_grid_positions(width_px, pattern_size_px, density) return vertical, horizontal def generate_spots(vertical, horizontal, pattern_size, density, variation): spots = [] noise_halfspan = 2 * pattern_size / density; noise_min, noise_max = (-noise_halfspan, noise_halfspan) for i in vertical: for j in horizontal: # generate the noisy information center = tuple(map(int, (j, i) + variation * np.random.randint(noise_min, noise_max, 2))) d = int(pattern_size + pattern_size * variation * (np.random.rand()-0.5) / 2) spots.append((center, d//2)) # add circle params return spots def render_raster(height, width, spots): im = 255 * np.ones((height, width), dtype=np.uint8) for center, radius in spots: cv2.circle(im, center, radius, 0, -1) # add circle return im def render_svg(height, width, spots): dwg = svgwrite.Drawing(profile='tiny', size = (width, height)) fill_color = svgwrite.utils.rgb(0, 0, 0) for center, radius in spots: dwg.add(dwg.circle(center, radius, fill=fill_color)) # add circle return dwg.tostring() # INPUTS # ############ dpi = 100 # dots per inch WidthOfSample_mm = 50 # mm HeightOfSample_mm = 50 # mm PatternSize_mm = 1 # mm density = 0.75 # 1 is very dense, 0 is not fine at all Variation = 0.75 # 1 is very bad, 0 is very good ############ height, width, pattern_size = get_sizes_pixels(HeightOfSample_mm, WidthOfSample_mm, PatternSize_mm, dpi) vertical, horizontal = get_spot_grid(height, width, pattern_size, density) spots = generate_spots(vertical, horizontal, pattern_size, density, Variation) img = render_raster(height, width, spots) svg = render_svg(height, width, spots) print(f"Height={height} Width={width} # Spots={len(spots)}") print(f"Raw pixels: {img.size}") print(f"SVG size: {len(svg)}") cv2.imwrite("timo.png", img) with open("timo.svg", "w") as f: f.write(svg) This generates the following output: PNG | Rendered SVG Note: Since it's not possible to upload SVGs here, I put it on pastebin, and provide capture of it rendered by Firefox. Further improvements to the size of the SVG are possible. For example, we're currently using the same colour over an over. Styling or grouping should help remove this redundancy. Here's an example that groups all the spots in one group with constant fill colour: def render_svg(height, width, spots): dwg = svgwrite.Drawing(profile='tiny', size = (width, height)) dwg_spots = dwg.add(dwg.g(id='spots', fill='black')) for center, radius in spots: dwg_spots.add(dwg.circle(center, radius)) # add circle return dwg.tostring() The output looks the same, but the file is now 4904718 bytes instead of 7435966 bytes. 
An alternative (pointed out by AKX) if you only desire to draw in black, you may omit the fill specification as well as the grouping, since the default SVG fill colour is black. The next thing to notice is that most of the spots have the same radius -- in fact, using your settings at DPI of 1000 the unique radii are [1, 2] and at DPI of 10000 they are [15, 16, 17, 18, 19, 20, 21, 22, 23]. How could we avoid repeatedly specifying the same radius? (As far as I can tell, we can't use groups to specify it) In fact, how can we omit repeatedly specifying it's a circle? Ideally we'd just tell it "Draw this mark at all of those positions" and just provide a list of points. Turns out there are two features of SVG that let us do exactly that. First of all, we can specify custom markers, and later refer to them by an ID. <marker id="id1" markerHeight="2" markerWidth="2" refX="1" refY="1"> <circle cx="1" cy="1" fill="black" r="1" /> </marker> Second, the polyline element can optionally draw markers at every vertex of the polyline. If we draw the polyline with no stroke and no fill, all we end up is with the markers. <polyline fill="none" marker-end="url(#id1)" marker-mid="url(#id1)" marker-start="url(#id1)" points="2,5 8,22 11,26 9,46 8,45 2,70 ... and so on" stroke="none" /> Here's the code: def group_by_radius(spots): radii = set([r for _,r in spots]) groups = {r: [] for r in radii} for c, r in spots: groups[r].append(c) return groups def render_svg_v2(height, width, spots): dwg = svgwrite.Drawing(profile='full', size=(width, height)) by_radius = group_by_radius(spots) dwg_grp = dwg.add(dwg.g(stroke='none', fill='none')) for r, centers in by_radius.items(): dwg_marker = dwg.marker(id=f'r{r}', insert=(r, r), size=(2*r, 2*r)) dwg_marker.add(dwg.circle((r, r), r=r)) dwg.defs.add(dwg_marker) dwg_line = dwg_grp.add(dwg.polyline(centers)) dwg_line.set_markers((dwg_marker, dwg_marker, dwg_marker)) return dwg.tostring() The output SVG still looks the same, but now the filesize at DPI of 1000 is down to 1248852 bytes. With high enough DPI, a lot of the coordinates will be 3, 4 or even 5 digits. If we bin the coordinates into tiles of 100 or 1000 pixels, we can then take advantage of the use element, which lets us apply an offset to the referenced object. Thus, we can limit the polyline coordinates to 2 or 3 digits at the cost of some extra overhead (which is generally worth it). 
Here's an initial (clumsy) implementation of that: def bin_points(points, bin_size): bins = {} for x,y in points: bin = (max(0, x // bin_size), max(0, y // bin_size)) base = (bin[0] * bin_size, bin[1] * bin_size) offset = (x - base[0], y - base[1]) if base not in bins: bins[base] = [] bins[base].append(offset) return bins def render_svg_v3(height, width, spots, bin_size): dwg = svgwrite.Drawing(profile='full', size=(width, height)) by_radius = group_by_radius(spots) dwg_grp = dwg.add(dwg.g(stroke='none', fill='none')) polyline_counter = 0 for r, centers in by_radius.items(): dwg_marker = dwg.marker(id=f'm{r}', insert=(r, r), size=(2*r, 2*r)) dwg_marker.add(dwg.circle((r, r), r=r, fill='black')) dwg.defs.add(dwg_marker) dwg_marker_grp = dwg_grp.add(dwg.g()) marker_iri = dwg_marker.get_funciri() for kind in ['start','end','mid']: dwg_marker_grp[f'marker-{kind}'] = marker_iri bins = bin_points(centers, bin_size) for base, offsets in bins.items(): dwg_line = dwg.defs.add(dwg.polyline(id=f'p{polyline_counter}', points=offsets)) polyline_counter += 1 dwg_marker_grp.add(dwg.use(dwg_line, insert=base)) return dwg.tostring() With bin size set to 100, and DPI of 1000, we get to a file size of 875012 bytes, which means about 6.23 bytes per spot. That's not so bad for XML based format. With DPI of 10000 we need bin size of 1000 to make a meaningful improvement, which yields something like 1349325 bytes (~9.6B/spot).
5
7
78,765,505
2024-7-18
https://stackoverflow.com/questions/78765505/how-to-use-overload-from-the-typing-package-within-a-higher-order-function-in-py
Returning an overloaded function from a higher-order callable does not produce the expected results with respect to type hints: namely, the resulting function behaves as if it were not annotated at all. Here is a minimal example: from typing import Optional, overload def get_f(b: int): @overload def f(x: int) -> str: ... @overload def f(x: int, c: bool) -> int: ... def f(x: int, c: Optional[bool] = None) -> int | str: return 2 * x * b if c is not None else "s" return f d = get_f(2) a = d(2) b = a ** 2 Here mypy does not realize that b = a**2 leads to a TypeError. How can this be remedied?
You can create a type denoting an overloaded callable using typing.Protocol (read the PEP to learn more about structural subtyping): from typing import Optional, Protocol, overload class FProtocol(Protocol): @overload def __call__(self, x: int) -> str: ... @overload def __call__(self, x: int, c: bool) -> int: ... def get_f(b: int) -> FProtocol: @overload def f(x: int) -> str: ... @overload def f(x: int, c: bool) -> int: ... def f(x: int, c: Optional[bool] = None) -> int | str: return 2 * x * b if c is not None else "s" return f d = get_f(2) a = d(2) b = a ** 2 # E: Unsupported operand types for ** ("str" and "int") [operator] mypy playground pyright playground Using protocol is the only way to specify an overloaded callable, as far as I know. When you leave the return type unspecified, it defaults to Any, as PEP484 mandates: For a checked function, the default annotation for arguments and for the return type is Any
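A quick way to confirm the overloads are picked up through the protocol is reveal_type, e.g.:

from typing import reveal_type  # Python 3.11+; use typing_extensions on older versions

reveal_type(d(2))        # mypy should report str
reveal_type(d(2, True))  # mypy should report int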
2
4
78,764,991
2024-7-18
https://stackoverflow.com/questions/78764991/how-do-i-merge-multiple-dataframes-and-sum-common-values-into-column
I have many dataframes like: df1 df2 and so on... gene | counts gene | counts KRAS 136 KRAS 96 DNAH5 3 DNAH5 4 TP53 105 TP53 20 I want to merge them and sum the column 'counts' so that I end up with only one dataframe: merged_df gene | counts KRAS 232 DNAH5 7 TP53 125 I have tried to use pd.merge, but it only accepts 2 dataframes at once, and I have 14 dataframes. I used pd.concat for multiple dataframes but can't sum them afterwards.
Indeed, pd.merge only merges 2 dataframes. But DataFrame.join can join many, if they have the same index: # Some example data. Note the None in `df3`. We want our code to handle that well. df1 = pd.DataFrame({'gene': ['KRAS', 'DNAH5', 'TP53'], 'counts': [136, 3, 105]}) df2 = pd.DataFrame({'gene': ['KRAS', 'DNAH5', 'TP53'], 'counts': [96, 4, 20]}) df3 = pd.DataFrame({'gene': ['KRAS', 'DNAH5', 'TP53'], 'counts': [1000, None, 3000]}) dfs = [df1, df2, df3] # We need the same index for the pd.DataFrame.join to work dfs = [df.set_index('gene') for df in dfs] # All the non-index columns need unique names, so changing `counts` to `counts_0`, `counts_1` dfs = [df.rename(columns={'counts': f'counts_{i}'}) for i, df in enumerate(dfs)] # Actual join. We are joining first with the rest df = dfs[0].join(dfs[1:], how='outer') # Since we don't have any other data, we can just sum all columns. df.sum(axis=1) This prints: gene KRAS 1232.0 DNAH5 7.0 TP53 3125.0
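The pd.concat attempt mentioned in the question can also be finished with a groupby sum. As a short sketch (not part of the accepted answer), this yields the merged_df layout directly:

merged_df = (
    pd.concat([df1, df2, df3])
      .groupby('gene', as_index=False, sort=False)['counts']
      .sum()
)
print(merged_df)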
3
1