question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
76,812,405 | 2023-8-1 | https://stackoverflow.com/questions/76812405/typeerror-stringmethods-rsplit-takes-from-1-to-2-positional-arguments-but-3-w | I'm trying to use Series.str.rsplit to return everything up to the delimiter, but can't seem to get past this TypeError I'm receiving: TypeError: StringMethods.rsplit() takes from 1 to 2 positional arguments but 3 were given I am using pandas 2.0.3. Here is my code: import pandas as pd df1 = pd.DataFrame({'name': ['tom ent', 'nick', 'krish std', 'jack whatever'], 'age': ['5', '6', '7', '8']}) df1['name'] = df1['name'].str.rsplit(' ', 1).str[0] Output should look like: name age tom 5 nick 6 krish 7 jack 8 Not sure what I'm missing here. | In Pandas 1.4 and before, the .rsplit method used ordinary parameters, so a call like this would mean that ' ' is the delimiter and 1 is the maximum number of times to split - just like the built-in method of the str class. However, starting in 1.5, n and expand are keyword-only parameters. To use them, the name needs to be specified explicitly, like: .rsplit(' ', n=1). The idea is that, because the programmer is forced to use names, there is no chance of mistaking the order. | 6 | 11 |
76,806,185 | 2023-7-31 | https://stackoverflow.com/questions/76806185/what-is-the-meaning-of-valueerror-shapes-200-200-and-199-not-aligned-200 | I am trying to simulate an option pricing scheme by defining functions (Black-Scholes and Crank-Nicolson) and comparing them. The implementation of the Crank-Nicolson scheme with boundary conditions is as follows (I include the full code for replication): import numpy as np from scipy.stats import norm from scipy.integrate import solve_ivp import matplotlib.pyplot as plt def bs_call(S, K, r, sigma, T): d1 = (np.log(S/K) + (r + 0.5*sigma**2)*T) / (sigma*np.sqrt(T)) d2 = d1 - sigma*np.sqrt(T) return S*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d2) def crank_nicolson(S0, K, r, sigma, T, Smin, Smax, M, N): # Initialization dt = T/N ds = (Smax - Smin)/M S = np.linspace(Smin, Smax, M+1) tau = np.linspace(0, T, N+1) V = np.zeros((M+1, N+1)) # Boundary conditions V[:, 0] = np.maximum(S-K, 0) V[0, :] = 0 V[-1, :] = Smax-K*np.exp(-r*tau) # Tridiagonal matrix for the implicit step alpha = 0.25*dt*(sigma**2*S**2/ds**2 - r*S/ds) beta = -dt*0.5*(sigma**2*S**2/ds**2 + r) gamma = 0.25*dt*(sigma**2*S**2/ds**2 + r*S/ds) A = np.diag(alpha[2:], -1) + np.diag(1+beta[1:]) + np.diag(gamma[:-2], 1) B = np.diag(-alpha[2:], -1) + np.diag(1-beta[1:]) + np.diag(-gamma[:-2], 1) # Time loop for n in range(1, N+1): b = np.dot(B, V[1:-1, n-1]) b[0] -= alpha[1]*V[0, n] b[-1] -= gamma[-2]*V[-1, n] V[1:-1, n] = np.linalg.solve(A, b) # Interpolation for S0 i = int(round((S0-Smin)/ds)) if i == M+1: i = M elif i == 0: i = 1 a = (V[i+1, -1]-V[i, -1])/ds b = (V[i+1, -1]-2*V[i, -1]+V[i-1, -1])/ds**2 return V[i, -1] + a*(S0-S[i]) + 0.5*b*(S0-S[i])**2 Up to here, so far so good. Afterwards, I run an example as follows (full code also included for replication): # Example usage S0 = 100 K = 100 r = 0.05 sigma = 0.2 T = 1 Smin = 0 Smax = 200 M = 200 N = 1000 option_price = crank_nicolson(S0, K, r, sigma, T, Smin, Smax, M, N) However, I got an error here with the following message: ValueError: shapes (200,200) and (199,) not aligned: 200 (dim 1) != 199 (dim 0) and I do not know if it comes from the function definition (first part) or from the parameters definition (second script) and how can be solved. | The error is caused since B.shape[1] is M, whereas V[1:-1, n-1].shape[0] is M-1, which do not match, thus I'd guess the error is due to the function definition. Changing A = np.diag(alpha[2:], -1) + np.diag(1+beta[1:]) + np.diag(gamma[:-2], 1) B = np.diag(-alpha[2:], -1) + np.diag(1-beta[1:]) + np.diag(-gamma[:-2], 1) to A = np.diag(-alpha[2:M], -1) + np.diag(1-beta[1:M]) + np.diag(-gamma[1:M-1], 1) B = np.diag(alpha[2:M], -1) + np.diag(1+beta[1:M]) + np.diag(gamma[1:M-1], 1) (heavily "inspired" by this very similar implementation) gets rid of the error message and running cn_price = crank_nicolson(S0, K, r, sigma, T, Smin, Smax, M, N) bs_price = bs_call(S0, K, r, sigma, T) print(cn_price, bs_price) (with otherwise unchanged code) prints 10.3201002605872 10.450583572185565 Note that I am not familiar with the Crank-Nicolson method and can therefore only hope that the proximity of those values indicates a fix (i.e. correct adjustment) indeed, as opposed to a mere ridding of the error message. | 3 | 1 |
76,793,045 | 2023-7-29 | https://stackoverflow.com/questions/76793045/python-error-showing-pygame-and-gymnasium-classic-control-not-installed-howev | I have just started learning OpenAI gymnasium and started with CartPole-v1. Being new I was following a YouTube tutorial; video:https://www.youtube.com/watch?v=Mut_u40Sqz4&t=2076s (I am up to 1:08:22) and I also wanted to see a window of the game. However when running the code, Python threw this error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/envs/classic_control/cartpole.py", line 223, in render import pygame ModuleNotFoundError: No module named 'pygame' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/artemnikoan/Documents/CartPole-v1.py", line 9, in <module> state = env.reset() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/wrappers/time_limit.py", line 75, in reset return self.env.reset(**kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/wrappers/order_enforcing.py", line 61, in reset return self.env.reset(**kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/wrappers/env_checker.py", line 57, in reset return env_reset_passive_checker(self.env, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/utils/passive_env_checker.py", line 186, in env_reset_passive_checker result = env.reset(**kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/envs/classic_control/cartpole.py", line 209, in reset self.render() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gymnasium/envs/classic_control/cartpole.py", line 226, in render raise DependencyNotInstalled( gymnasium.error.DependencyNotInstalled: pygame is not installed, run `pip install gymnasium[classic-control]` I made sure pygame and gymnasium classic-control are installed, but it still didn't work. Here is the code: import os import gymnasium as gym from stable_baselines3 import PPO from stable_baselines3.common.vec_env import DummyVecEnv from stable_baselines3.common.evaluation import evaluate_policy env=gym.make('CartPole-v1',render_mode='human') episodes = 5 for episode in range(1, episodes+1): state = env.reset() done = False score = 0 while not done: env.render() action = env.action_space.sample() n_state, reward, done, truncated, info = env.step(action) score+=reward print('Episode:{} Score:{}'.format(episode, score)) env.close() log_path=os.path.join('Training', 'Logs') env = gym.make(environment_name) env = DummyVecEnv([lambda: env]) model = PPO('MlpPolicy', env, verbose = 1) model.learn(total_timesteps=10000) PPO_path = os.path.join('Training', 'Saved Models', 'PPO_model') model.save(PPO_path) model = PPO.load(PPO_path, env=env) evaluate_policy(model, env, n_eval_episodes=10, render=True) I am working on MAC PC with using IDLE. I installed Pytorch and every other required module. What may be the cause? | You have not installed pygame. The error message already shows that you have to do: pip install gymnasium[classic-control]. Considering the video provided in OG was from 6th of June 2021, the Gym version (not Gymnasium) must have been 0.18.3 or lower. 
The syntax has changed a lot across the newer Gym versions, which causes many errors just like the one you have experienced. One thing is for sure: the video provided does not use Gymnasium. I therefore suggest either downgrading Gym to a version <= 0.18.3 (I could not find out from the video exactly which version they used, but it must be in the aforementioned range), or making yourself familiar with Gymnasium or newer versions of Gym and rewriting parts of the code (since Stable Baselines 3 works with either). Might help: All Gym versions: https://pypi.org/project/gym/#history Gym documentation: https://www.gymlibrary.dev/index.html Gymnasium documentation: https://gymnasium.farama.org/ | 2 | 1 |
76,806,529 | 2023-7-31 | https://stackoverflow.com/questions/76806529/find-ranges-of-a-larger-range-not-covered-by-a-list-of-subranges | Apologies for the perhaps confusing title. Suppose you have a range (0,10000) and a list of smaller sub-ranges that may overlap [(20,3460),(200,3000),(2800,8704)] and you want to return the ranges of the original range NOT covered by the sub-ranges, in this case [(0, 19), (8705, 10000)] I am trying to figure out how to do this efficiently to work with larger ranges and more numerous sub-ranges. I thought about using itertools.chain to create a single range from the list of sub-ranges to take care of overlaps and then somehow using that range to slice the larger range, but I'm not sure if this is the best approach. If it helps to know the purpose, I am trying to create a list of coding sequences that are wholly or partially in the unaligned regions of a genomic comparison given the ranges of the aligned regions and ranges of the coding sequences. If the coding sequence is only partially aligned, I want to be able to return the ranges that are unaligned. | intspan can be used to get the complement of the sub ranges. from intspan import intspan start, end = (0,10000) subs = [(20,3460),(200,3000),(2800,8704)] span = intspan.from_ranges(subs) comp = span.complement(low=start, high=end).ranges() print(comp) Prints: [(0, 19), (8705, 10000)] | 2 | 3 |
76,805,028 | 2023-7-31 | https://stackoverflow.com/questions/76805028/scipy-genextreme-fit-returns-different-parameters-from-matlab-gev-fit-function-o | I'm trying to port some code from MATLAB to PYTHON and I realized gevfit function in MATLAB seems to behave differently from scipy genextreme, so I realized this minimal example: MATLAB % Create the MATLAB array x = [0.5700, 0.8621, 0.9124, 0.6730, 0.5524, 0.7608, 0.2150, 0.5787, ... 0.7210, 0.7826, 0.8181, 0.5449, 0.7501, 1.1301, 0.7784, 0.5378, ... 0.9550, 0.9623, 0.6865, 0.6863, 0.6153, 0.4372, 0.5485, 0.6318, ... 0.5501, 0.8333, 0.8044, 0.9111, 0.8560, 0.6178, 1.0688, 0.7535, ... 0.7554, 0.7123, 0.7589, 0.8415, 0.7586, 0.3865, 0.3087, 0.7067]; disp(numbers); parmHat = gevfit(x); disp('Estimated parameters (A, B):'); disp(parmHat); Estimated parameters (A, B): -0.3351 0.1962 0.6466 PYTHON import numpy as np import scipy.stats as stats x = np.array([0.5700, 0.8621, 0.9124, 0.6730, 0.5524, 0.7608, 0.2150, 0.5787, 0.7210, 0.7826, 0.8181, 0.5449, 0.7501, 1.1301, 0.7784, 0.5378, 0.9550, 0.9623, 0.6865, 0.6863, 0.6153, 0.4372, 0.5485, 0.6318, 0.5501, 0.8333, 0.8044, 0.9111, 0.8560, 0.6178, 1.0688, 0.7535, 0.7554, 0.7123, 0.7589, 0.8415, 0.7586, 0.3865, 0.3087, 0.7067]) # Fit the GEV distribution to the data parameters3 = stats.genextreme.fit(x) print("Estimated GEV parameters:", parameters3) Estimated GEV parameters: (1.0872284332032054, 0.534605335200113, 0.6474387313912493) I'd expect the same parameters, but results are totally different. Any help? | The method genextreme.fit is failing to compute the correct result. You can help it generate the correct value by providing initial values for the numerical solver that are better than the default used by genextreme.fit. The initial values are set by providing values for the shape, location and scale parameters: In [29]: from scipy.stats import genextreme In [30]: x = np.array([0.5700, 0.8621, 0.9124, 0.6730, 0.5524, 0.7608, 0.2150, 0.5787, ...: 0.7210, 0.7826, 0.8181, 0.5449, 0.7501, 1.1301, 0.7784, 0.5378, ...: 0.9550, 0.9623, 0.6865, 0.6863, 0.6153, 0.4372, 0.5485, 0.6318, ...: 0.5501, 0.8333, 0.8044, 0.9111, 0.8560, 0.6178, 1.0688, 0.7535, ...: 0.7554, 0.7123, 0.7589, 0.8415, 0.7586, 0.3865, 0.3087, 0.7067]) ...: In [31]: genextreme.fit(x, 0.34, loc=0.65, scale=0.20) # Include initial guess of the parameters Out[31]: (0.33513328610099824, 0.6466250071208526, 0.19615018966970216) Note that the parameter c used by SciPy's genextreme is the negative of the parameter k in Matlab. Also note that the order of the parameters in SciPy is c, location, scale, while in Matlab it is k, scale, location. | 3 | 1 |
76,806,273 | 2023-7-31 | https://stackoverflow.com/questions/76806273/pandas-add-increment-index | I have the following Pandas DataFrame Key Value A 10 A 20 B 30 B 40 C 50 A 60 A 70 A 70 B 80 A 90 And I need to create an index that auto-increments only when a key repeats after a sequence of different keys. So, I need this output: Key Value Index A 10 1 A 20 1 B 30 1 B 40 1 C 50 1 A 60 2 A 70 2 A 70 2 B 80 2 A 90 3 Thanks! I tried groupby with cumcount() + 1, but it does not work. | Using an ordered Categorical and numpy.cumsum: import numpy as np s = pd.Categorical(df['Key'], ordered=True) df['Index'] = np.cumsum(s<s.shift())+1 If you want a custom order, pass categories=['X', 'Z', 'Y']. Or, like @SimonT commented, if your categories are lexicographically sorted: df['Index'] = np.cumsum(df['Key']<df['Key'].shift())+1 Output: Key Value Index 0 A 10 1 1 A 20 1 2 B 30 1 3 B 40 1 4 C 50 1 5 A 60 2 6 A 70 2 7 A 70 2 8 B 80 2 9 A 90 3 | 3 | 1 |
76,805,676 | 2023-7-31 | https://stackoverflow.com/questions/76805676/how-to-replace-loop-with-if-and-elif-but-without-else-by-list-comprehensio | I am trying to use list comprehension for below code # Input values = [1, 2, 3, 4] # Using for loop ans = [] for value in values: if value == 1: ans.append(value + value) elif value == 4: ans.append(value * value) print(f"Ans: {ans}") # Using list comprehension ans = [ (value + value) if value == 1 else (value * value) if value == 4 for value in values ] print(f"Ans: {ans}") But getting error for list comprehension as below: SyntaxError: expected 'else' after 'if' expression I tried defining else part like below, but it did not work: else pass else continue The output should be [2, 16]. Other values are skipped. What I am missing? | I'm guessing that this is a small example of something more complicated, but as it is it is fairly easy to create a comprehension like: values = [1, 2, 3, 4] ans = [ value + value if value == 1 else value * value for value in values if value in [1,4] ] print(ans) | 2 | 6 |
76,802,047 | 2023-7-31 | https://stackoverflow.com/questions/76802047/remove-a-string-from-the-column-when-it-meets-the-condition | I would like to remove the string from a string column when it contains a lower letter (the string column may be NaN or include multiple string in one row) Column1 Column2 Column3 NaN NaN NaN BCDE8ENGUGUETNJN NaN NaN vk4JGFIEh5 NaN NaN t7chJGWYEj BCSTACK BCTENSORFLOW 5hNIE7EEP OVERFLOW NaN The original df is look like the above I have tried the "str.contains" function to locate and replace it when it contains lowerletter Since str function could not be used for NaN value, I replaced the NaN value with string 'nan' first, then aimed to replace all the lower letter with regex. Since 'nan' is also a lower letter, it should be replaced as well df['Column1'].fillna('nan',inplace=True) df['Column2'].fillna('nan',inplace=True) df['Column3'].fillna('nan',inplace=True) lowerletterpattern = r'[a-z]*' mask1 = df['Column1'].str.contains(lowerletterpattern) df.loc[mask1,'Column1'] = np.nan mask2 = df['Column2'].str.contains(lowerletterpattern) df.loc[mask2,'Column2'] = np.nan mask3 = df['Column3'].str.contains(lowerletterpattern) df.loc[mask3,'Column3'] = np.nan but the df returned with all NaN value Below is the expected result: Column1 Column2 Column3 NaN NaN NaN BCDE8ENGUGUETNJN NaN NaN NaN NaN NaN NaN BCSTACK BCTENSORFLOW NaN OVERFLOW NaN | One option, check for [a-z] with str.contains and mask: out = df.mask(df.apply(lambda s: s.str.contains('[a-z]').fillna(True))) Or, with replace: out = df.replace('.*[a-z].*', float('nan'), regex=True) Output: Column1 Column2 Column3 0 NaN NaN NaN 1 BCDE8ENGUGUETNJN NaN NaN 2 NaN NaN NaN 3 NaN BCSTACK BCTENSORFLOW 4 NaN OVERFLOW NaN | 3 | 1 |
76,801,382 | 2023-7-31 | https://stackoverflow.com/questions/76801382/parsing-data-from-bash-table-with-empty-fields | I am currently trying to parse some data from bash tables, and I found strange behavior in parsing data if some columns is empty for example i have data like this containerName ipAddress memoryMB name numberOfCpus status --------------- --------------- ---------- ------- -------------- ---------- TEST_VM 192.168.150.111 8192 TEST_VM 4 POWERED_ON and sometimes like this containerName ipAddress memoryMB name numberOfCpus status --------------- ----------- ---------- ---------------------- -------------- ----------- TEST_VM_second 3072 TEST_VM_second_renamed 1 POWERED_OFF I tried with python and with bash, but same results, I need data "name" but when I am using bash for example awk '{print $4}' in first table it prints expected result: name ------- TEST_VM but in second table in prints: name ---------------------- 1 same results with python: df_info = pd.read_table(StringIO(table), delim_whitespace=True) df_info = df_info.drop(0) pd.set_option('display.max_colwidth', None) print(df_info['name'], df_info['containerName']) Output: 1 TEST_VM Name: name, dtype: object 1 TEST_VM Name: containerName, dtype: object 1 1 Name: name, dtype: object 1 TEST_VM_second Name: containerName, dtype: object Maybe someone knows how to play around if ipaddress is empty field ? | Don't parse the file manually, take advantage of pandas.read_fwf: df_info = pd.read_fwf(StringIO(table), skiprows=[1], delim_whitespace=True) df_info.columns = df_info.columns.str.strip() Output: containerName ipAddre memoryMB name numberOfCpu tatu 0 TEST_VM_second 3072 TEST_VM_second_renamed 1 POWERED_OFF Or, to get NaNs: df_info = pd.read_fwf(StringIO(table), skiprows=[1]) Output: containerName ipAddress memoryMB name numberOfCpus status 0 TEST_VM_second NaN 3072 TEST_VM_second_renamed 1 POWERED_OFF | 4 | 1 |
76,786,670 | 2023-7-28 | https://stackoverflow.com/questions/76786670/what-is-the-difference-between-threshold-and-prominence-in-scipy-signal-find-pe | I'm trying to find the peaks of a noisy signal using scipy.signal.find_peaks and I realised that I don't fully understand the difference between the threshold and prominence arguments. I understand that prominence is equivalent to topographical prominence, i.e. the height of a peak relative to the surrounding terrain. However, I don't quite understand in which ways the threshold argument is different from this. From the above link both arguments seem equivalent to me. What is exactly the difference between threshold and prominence in this case? | Threshold is about vertical distance to the samples right before and after. Prominence is about vertical distance to the deepest valley. Here's a visual explanation of the difference: In this graph of cos(x), the peak has a threshold of 0.191, and a prominence of 2. What implications does this have? Threshold is good at distinguishing peaks which go up for a single sample, then go down. Prominence is good at distinguishing peaks which go up and down over the course of several samples. A single threshold value gets more selective as the sampling rate goes up. However, prominence is independent of sampling rate. (As an example, try changing the number of samples from 21 to 41 in the example below, and see how threshold changes.) The parameter wlen controls how many samples are searched for the valley to determine prominence. If you set wlen=3, then prominence and threshold are the same. By default, it will search until the next larger peak. See documentation. Code used to produce graphic import matplotlib.pyplot as plt import numpy as np import scipy.signal x = np.linspace(0, 4*np.pi, 21) y = np.cos(x) plt.plot(x, y) # Show threshold and prominence for peak print(scipy.signal.find_peaks(y, threshold=0, prominence=0)) | 6 | 6 |
76,777,140 | 2023-7-27 | https://stackoverflow.com/questions/76777140/manage-supabase-as-an-infrastructure-as-a-code-for-version-control | I want to be able to manage my Supabase project as code, with regard to the SQL tables, the functions, auth, et cetera. The purpose is to allow myself to version control the project. I have gone through the official documentation and have not found anything pertaining to this. The closest approach I can find is simply version controlling all the SQL code being used - but that is not reassuring for production purposes. | Everything (except for some auth-related features) you mentioned can be done with the Supabase CLI and its migration capabilities. There is a guide on the website explaining how to manage environments for this sort of setup: https://supabase.com/docs/guides/cli/managing-environments. | 4 | 3 |
76,798,411 | 2023-7-30 | https://stackoverflow.com/questions/76798411/solving-assignment-problem-with-or-tools-with-number-of-tasks | I am trying to understand how OR-TOOLS works and wanted to solve the assignment problem but with number of tasks (2 out of 4) as constraints. But my code does not work as expectected. I have tried: from ortools.sat.python import cp_model costs = [ [90, 80, 75, 70], [35, 85, 55, 65], [125, 95, 90, 95], [45, 110, 95, 115], [50, 100, 90, 100], ] num_workers = len(costs) num_tasks = len(costs[0]) # Model model = cp_model.CpModel() # Variables x = [] for i in range(num_workers): t = [] for j in range(num_tasks): t.append(model.NewBoolVar(f'x[{i},{j}]')) x.append(t) # Constraints # Each worker is assigned to exactly one task. for worker in range(num_workers): model.AddExactlyOne(x[worker][task] for task in range(num_tasks)) # Choosing number of tasks. for worker in range(num_workers): model.Add(sum([x[worker][task] for task in range(num_tasks)]) == 2) # Objective objective_terms = [] for i in range(num_workers): for j in range(num_tasks): objective_terms.append(costs[i][j] * x[i][j]) model.Minimize(sum(objective_terms)) # Solve solver = cp_model.CpSolver() status = solver.Solve(model) # Print solution. if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE: print(f'Total cost = {solver.ObjectiveValue()}') print() for i in range(num_workers): for j in range(num_tasks): if solver.BooleanValue(x[i][j]): print( f'Worker {i} assigned to task {j} Cost = {costs[i][j]}') else: print('No solution found.') So how do I define a number of tasks to use as a constraint? I get "No solution found" as output. But I expect that task 0 and task 3 will be selected and output should look like this Worker 0 assigned to task 1 Cost = 70 Worker 1 assigned to task 0 Cost = 35 Worker 2 assigned to task 1 Cost = 95 Worker 3 assigned to task 0 Cost = 45 Worker 4 assigned to task 0 Cost = 50 If we assign every worker to only one task and choose two tasks, this should be the optimal solution. | This can be achieved using channeling constraint Expressing @AirSquid's idea in CP-SAT syntax: task_assigned = [model.NewBoolVar(f"task_assigned{i}") for i in range(num_tasks)] for task_id in range(num_tasks): sum_expr = sum([x[worker][task_id] for worker in range(num_workers)]) model.Add(sum_expr >= 1).OnlyEnforceIf(task_assigned[task_id]) model.Add(sum_expr == 0).OnlyEnforceIf(task_assigned[task_id].Not()) model.Add(sum(task_assigned) == 2) This gives the output: Total cost = 295.0 Worker 0 assigned to task 2 Cost = 75 Worker 1 assigned to task 0 Cost = 35 Worker 2 assigned to task 2 Cost = 90 Worker 3 assigned to task 0 Cost = 45 Worker 4 assigned to task 0 Cost = 50 | 2 | 1 |
76,797,647 | 2023-7-30 | https://stackoverflow.com/questions/76797647/how-to-filter-part-of-a-multi-index-level-dataframe-with-different-conditions | Here is an original dataframe, country2 year 2017 2018 2019 country1 1 2 1 2 3 6 data_provider indicator prov_1 ind_a 45 30 22 30 30 30 prov_2 ind_a 30 30 30 30 25 30 ind_b 30 32 30 30 30 30 prov_3 ind_b 30 30 30 35 30 28 and I wish to filter the column and finally get a new dataframe, # item country2 # year 2017 2018 2019 # country1 1 2 1 2 3 6 # data_provider indicator # prov_1 ind_a 45.0 NaN 22.0 NaN NaN NaN # prov_2 ind_a NaN NaN NaN NaN NaN NaN # ind_b NaN 32.0 NaN NaN NaN NaN # prov_3 ind_b NaN NaN NaN NaN NaN NaN you can get the original dataframe by, df = pd.DataFrame( data={"data_provider": ["prov_1", "prov_1", "prov_2", "prov_2", "prov_3", "prov_3"], "indicator": ["ind_a", "ind_a", "ind_a", "ind_b", "ind_b", "ind_b"], "unit": ["EUR", "EUR", "EUR", "EUR", "EUR", "EUR"], "year": ["2017", "2018","2019", "2017","2018","2019"], "country1": [1, 1, 3, 2, 2, 6], "country2": [45, 22, 25, 32, 35, 28] } ) df = df.pivot_table( index=['data_provider', 'indicator'], columns=['year', 'country1'], fill_value=30 ) df.columns.names = ['item', 'year', 'country1'] here is how I get the new dataframe, locate the 2 group of target column labels x1 = df.columns[df.columns.get_level_values(level='year')=='2017'] x2 = df.columns[df.columns.get_level_values(level='year')=='2018'] get newdf1 with condition1 df[x1]>30 newdf1 = df[df[x1] > 30] get newdf2 with condition2 df[x2]<30 newdf2 = df[df[x2] < 30] update newdf2 with newdf1 newdf = newdf2.combine_first(newdf1) in my solution, I first get 2 dataframe after filtering the original dataframe with different conditions and then combine them together. I am wondering if there's a one direct way to achieve the goal. | import pandas as pd df = pd.DataFrame( data={ "data_provider": ["prov_1", "prov_1", "prov_2", "prov_2", "prov_3", "prov_3"], "indicator": ["ind_a", "ind_a", "ind_a", "ind_b", "ind_b", "ind_b"], "unit": ["EUR", "EUR", "EUR", "EUR", "EUR", "EUR"], "year": ["2017", "2018","2019", "2017","2018","2019"], "country1": [1, 1, 3, 2, 2, 6], "country2": [45, 22, 25, 32, 35, 28] } ) df = df.pivot_table( index=['data_provider', 'indicator'], columns=['year', 'country1'], fill_value=30 ) df.columns.names = ['item', 'year', 'country1'] cond1 = (df.columns.get_level_values(level='year') == '2017') & df.gt(30) cond2 = (df.columns.get_level_values(level='year') == '2018') & df.lt(30) df_new = df.where(cond1 | cond2) print(df_new) Output : country2 year 2017 2018 2019 country1 1 2 1 2 3 6 data_provider indicator prov_1 ind_a 45.0 NaN 22.0 NaN NaN NaN prov_2 ind_a NaN NaN NaN NaN NaN NaN ind_b NaN NaN NaN NaN NaN NaN prov_3 ind_b NaN NaN NaN NaN NaN NaN | 2 | 5 |
76,793,693 | 2023-7-29 | https://stackoverflow.com/questions/76793693/how-to-simplify-huge-arithmatic-expression | I've a huge expression like: x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52] + FUNC1(z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51] + FUNC0(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49)) + FUNC2(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49), a49, x49) + y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51] + FUNC1(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49) + w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC1(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53]) + RET(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + 
movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53], x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + movr[54] + m1[54]) + RET(h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC0(a49) + FUNC2(a49, x49, y49) + w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50] + FUNC1(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + movr[53] + m1[53]) + RET(a49 + v49 + FUNC1(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]) + RET(x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52], y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + 
RET(v49, t49, z49) ... I need to simplify this expression by replacing some part of repeating expression with other variables. For example: a = RET(v49, t49, z49) b= w49 + h49 + FUNC1(v49) + a + movr[50] + m1[50] and so on... my problem is; this is really huge expression (like 2MB long expression) and doing this manually is near impossible and also without mistakes. now my question is; is there any app that'll do such thing? or any python program to do so? I can program python easily, but I lack of such algorithm knowing. any help appreciated. | The following function extracts all function calls and puts them into variables. def simplify(progstr, variable_prefix='x'): progstr = f' {progstr} ' prog = [] while progstr.count('(') > 0: for i, c in enumerate(progstr): if c == ')': c2, i2 = None, i while c2 != '(': i2 -= 1 c2 = progstr[i2] i2 -= 1 c2 = progstr[i2] while c2 not in [',', ' ', '(', ')']: i2 -= 1 c2 = progstr[i2] variable = progstr[i2+1:i+1] vname = f'{variable_prefix}{str(len(prog))}' progstr = progstr.replace(variable, vname) prog.append(f'{vname} = {variable}') break prog.append(progstr[1:-1]) return '\n'.join(prog) expression = 'x49 + t49 + FUNC1(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51]) + RET(y49 + z49 + FUNC1(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50]) + RET(w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49, t49) + movr[51] + m1[51], w49 + h49 + FUNC1(v49) + RET(v49, t49, z49) + movr[50] + m1[50], v49) + movr[52] + m1[52]' print(simplify(expression, 'x')) prints x0 = FUNC1(v49) x1 = RET(v49, t49, z49) x2 = FUNC1(w49 + h49 + x0 + x1 + movr[50] + m1[50]) x3 = RET(w49 + h49 + x0 + x1 + movr[50] + m1[50], v49, t49) x4 = FUNC1(y49 + z49 + x2 + x3 + movr[51] + m1[51]) x5 = RET(y49 + z49 + x2 + x3 + movr[51] + m1[51], w49 + h49 + x0 + x1 + movr[50] + m1[50], v49) t49 + x4 + x5 + movr[52] + m1[52] Next to making the code more readable, this allows avoiding much repeated computation, which should speed it up a lot (especially if the individual function calls are costly), e.g. here FUNC1(v49) gets executed only once as opposed to 5 times. (Edit): How it works: While there are parentheses in the expression, do the following: Go through the expression from left to right, until you encounter a closing bracket (call this location j), then walk to the left until you encounter an opening bracket, then walk to the left until you encounter a whitespace, comma or bracket (and call this location i). The segment expression[i:j] then marks the first function call. Then simply replace each occurrence expression[i:j] in expression by a variable name x and add x = expression[i:j] to your list of variables. Some remarks on the code: it is only briefly tested on some valid and well-formatted code, e.g. 
it assumes a space after each comma, a space around each operator (+, *, ..., no space), no space before a closing bracket, no space after an opening bracket, ...), it also assumes that the variable prefix does not collide with the variable and function names in the expression (can be made more robust either by knowing about the names and providing some known-to-be safe prefix or by adjusting the code to itself select a prefix that is valid), it further assumes that f(*args) (for a function f and some constant arguments args) always returns the same result, no matter what was computed beforehand, and it also creates variables for sub-expressions with only a single occurrence, which may not be desirable (but this should easily be changeable too, by some small adaptations of the code). | 4 | 3 |
76,793,660 | 2023-7-29 | https://stackoverflow.com/questions/76793660/remove-first-letter-prefix-of-the-command-message | So I am currently making the error handlers for my Discord bot, and when it reports that not all arguments were provided, this is my code: @bot.event async def on_command_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): await ctx.send(f'❓ it seems like your entry `{ctx.message.content}` **does not contain all needed arguments!** try `+help {ctx.message.content.split()[0]}` for more info!') As you can see, I want the user to be able to see a +help [COMMAND EXECUTED WITHOUT PREFIX] in the response. How would I do that? Everything works; it's just the prefix in the ctx.message.content.split()[0] that I don't want to be there. Thanks in advance! (And btw, the response message is attached as a picture!) | You can simply use the ctx.command.name attribute: @bot.event async def on_command_error(ctx, error): if isinstance(error, commands.MissingRequiredArgument): command = ctx.command.name await ctx.send(f'❓ it seems like your entry `{ctx.message.content}` **does not contain all needed arguments!** try `+help {command}` for more info!') | 3 | 2 |
76,794,391 | 2023-7-29 | https://stackoverflow.com/questions/76794391/difference-between-all-and-all-for-checking-if-an-iterable-is-true-everywhe | I believe there are two ways of checking if a torch.Tensor has values all greater than 0: either with .all() or all(). A Minimal Reproducible Example will illustrate my idea: import torch walls = torch.tensor([-1, 0, 1, 2]) result1 = (walls >= 0.0).all() # DIFFERENCE WITH BELOW??? result2 = all(walls >= 0.0) # DIFFERENCE WITH ABOVE??? print(result1) # Output: False print(result2) # Output: False all() is builtin so I think I would prefer using that one, but most code I see on the internet uses .all(), so I'm afraid there is unexpected behaviour. Are they both behaving exactly the same? | all is a Python builtin, meaning that it only works using extremely generic interfaces. In this case, all treats the tensor as an opaque iterable. It proceeds by iterating the elements of the tensor one-by-one, constructing a Python object for each one and then checking the truthiness of that Python object. That is slow, with several added layers of unnecessary inefficiency. In contrast, Tensor.all knows what a Tensor object is, and can operate on it directly. It only needs to directly scan the tensor's internal storage. No iterator protocol function calls, no intermediate Python objects. Tensor.all will always be far more efficient, both in time and memory, than the builtin all. | 2 | 3 |
76,790,233 | 2023-7-28 | https://stackoverflow.com/questions/76790233/how-to-make-pylance-and-pydantic-understand-each-other-when-instantiating-basemo | I am trying to instantiate user = User(**external_data), where User is Pydantic BaseModel, but I am getting error from Pylance, which don't like my external_data dictionary and unable to figure out that data in the dict is actually correct (see first screenshot). I found a workaround by creating TypedDict with the same declaration as for User(BaseModel). Now Pylance is happy, but I am not, because I need to repeat myself (see second screenshot). Any ideas on how to make Pylance and Pydantic understand each other without repetition? from datetime import datetime from pydantic import BaseModel from typing import TypedDict class UserDict(TypedDict, total=False): id: int name: str signup_ts: datetime friends: list[int] class User(BaseModel): id: int name: str = "John Doe" signup_ts: datetime | None = None friends: list[int] = [] external_data: UserDict = { 'id': 123, 'name': 'Vlad', 'signup_ts': datetime.now(), 'friends': [1, 2, 3], } user = User(**external_data) print(user) print(user.id) Pylance error for the case with no UserDict: Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "id" of type "int" in function "__init__" Type "int | str | datetime | list[int]" cannot be assigned to type "int" "datetime" is incompatible with "int"PylancereportGeneralTypeIssues Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "name" of type "str" in function "__init__" Type "int | str | datetime | list[int]" cannot be assigned to type "str" "datetime" is incompatible with "str"PylancereportGeneralTypeIssues Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "signup_ts" of type "datetime | None" in function "__init__"PylancereportGeneralTypeIssues Argument of type "int | str | datetime | list[int]" cannot be assigned to parameter "friends" of type "list[int]" in function "__init__" Type "int | str | datetime | list[int]" cannot be assigned to type "list[int]" "datetime" is incompatible with "list[int]"PylancereportGeneralTypeIssues (variable) external_data: dict[str, int | str | datetime | list[int]] | Why the errors? These warnings are to be expected because the type of your external_data object (without using a TypedDict) will be inferred by Pylance as dict[str, U], where U is the union of all the types it sees in the dictionary values. In your example U = int | str | datetime | list[int]. Normal dictionaries are homogeneous in their key type as well is their value type and thus have no way to distinguish types between items. This means that every key of external_data will be seen as a str (which is fine) and every value will be seen as int | str | datetime | list[int]. Since your Pylance is configured to expect distinct types for the keyword arguments passed to the model's __init__ method, this causes a problem. It sees that you are unpacking a dictionary, where the values are all of the aforementioned union type, yet for example the id argument must be of type int and of course int | str | datetime | list[int] is not a subtype of int. Same for all the other __init__ parameters. TypedDict (sort of) works The main difference between a normal dict and a TypedDict is that the latter can distinguish (value) types between items. But the drawback in this case is the repetition of course. 
Since TypedDict is a construct that exists solely for the benefit of static type checkers like Pylance, it would make no sense to somehow "dynamically" construct it from the field definitions of a model, even though that may get rid of the repetition. The other way around, i.e. construct a model class from a TypedDict, is possible, but has the reverse problem. A type checker will have no way to infer the model members and their types. Workarounds Since you are not using the keyword-argument syntax directly anyway, but constructing a dictionary first, there is really no need to rely on those keyword-argument type checks at all. If you needed/wanted that additional type safety, you would simply initialize the model as User(id=123, ....). Therefore I see essentially two ways around your issue: Use a type: ignore directive. Use an alternative constructor. The first is easy. Simply put a # type: ignore comment next to the constructor call. The second is different depending on the major version of Pydantic you are using, and it is what I usually do in that situation. Pydantic 1.x You have the BaseModel.parse_obj method for this purpose: parse_obj: this is very similar to the __init__ method of the model, except it takes a dict rather than keyword arguments. ... external_data = { 'id': 123, 'name': 'Vlad', # ... } user = User.parse_obj(external_data) Pydantic 2.x You have the BaseModel.model_validate for this purpose: model_validate: this is very similar to the __init__ method of the model, except it takes a dict rather than keyword arguments. ... external_data = { 'id': 123, 'name': 'Vlad', # ... } user = User.model_validate(external_data) | 3 | 2 |
76,788,887 | 2023-7-28 | https://stackoverflow.com/questions/76788887/polars-apply-function-dosent-pass-column-name-to-function | I am migrating my old code from pandas to polars. But i am not able to have a workaround for apply function. I have a function where i do some calculation from the data received from apply function where i have to use some column. My code In pandas for example: import pandas as pd data = {'A': [1, 2, 3], 'B': [4, 5, 6]} df = pd.DataFrame(data) self.var1 = 'A' self.var2 = 'B' def some_calculation(row): var = self.var1 var2 = self.var2 cal1 = row[var] + row[var2] cal2 = row[var] * row[var2] return list(zip(cal1 , cal2)) df['calc_data'] = df.apply(some_calculation, axis=1) Now, after going through the polars documentation and using the apply function i found out that it doesn't pass the column name and due to this I am unable to do the calculations as my dataframe may have varying columns and i was not able to find a solution to this. Please help. | The solution is to stop thinking in terms of apply. You can do something like ( pl.from_pandas(df) .with_columns( calc_data= pl.concat_list( pl.col(var1)+pl.col(var2), pl.col(var1)*pl.col(var2) ) ) ) shape: (3, 3) ┌─────┬─────┬───────────┐ │ A ┆ B ┆ calc_data │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ list[i64] │ ╞═════╪═════╪═══════════╡ │ 1 ┆ 4 ┆ [5, 4] │ │ 2 ┆ 5 ┆ [7, 10] │ │ 3 ┆ 6 ┆ [9, 18] │ └─────┴─────┴───────────┘ although you probably don't actually want the return to be a list so you'd do something more like ( pl.from_pandas(df) .with_columns( calc1=pl.col(var1)+pl.col(var2), calc2=pl.col(var1)*pl.col(var2) ) ) shape: (3, 4) ┌─────┬─────┬───────┬───────┐ │ A ┆ B ┆ calc1 ┆ calc2 │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═══════╪═══════╡ │ 1 ┆ 4 ┆ 5 ┆ 4 │ │ 2 ┆ 5 ┆ 7 ┆ 10 │ │ 3 ┆ 6 ┆ 9 ┆ 18 │ └─────┴─────┴───────┴───────┘ If you are adamant about maintaining the anti-pattern that is rowwise iteration with names then you can use iter_rows(named=True) to get a generator of dicts for each row. Your function, as defined, doesn't work for me so I amended it to this... var1='A' var2='B' def some_calculation(row): cal1 = row[var1] + row[var2] cal2 = row[var1] * row[var2] return [cal1 , cal2] in which case I get... df['calc_data'] = df.apply(some_calculation, axis=1) df A B calc_data 0 1 4 [5, 4] 1 2 5 [7, 10] 2 3 6 [9, 18] Using the generator you can do: pldf=pl.from_pandas(df) pldf.with_columns( calc_data=pl.Series(some_calculation(x) for x in pldf.iter_rows(named=True)) ) shape: (3, 3) ┌─────┬─────┬───────────┐ │ A ┆ B ┆ calc_data │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ list[i64] │ ╞═════╪═════╪═══════════╡ │ 1 ┆ 4 ┆ [5, 4] │ │ 2 ┆ 5 ┆ [7, 10] │ │ 3 ┆ 6 ┆ [9, 18] │ └─────┴─────┴───────────┘ That being said, there's little to be gained by "migrating" to polars if you are going to maintain rowwise iteration. | 3 | 1 |
76,787,417 | 2023-7-28 | https://stackoverflow.com/questions/76787417/in-python-combine-two-iterator-based-on-time | I'd like to combine two iterators and yield the value(s) of the iterator that has the highest timestamp. Minimal example and expectations: # Outputs of these generators are "timestamps" def gen_even(): for x in range(0, 11, 2): yield x def gen_odd(): for x in sorted(list(range(1, 15, 2)) + [6]): yield x Combining these two should result in in the following sequence [0, 1, 2, 3, 4, 5, (6, 6), 7, 8, 9, 10, 11, 13] I tried the following which runs into StopIteration after gen1 has been consumed. gen1 = gen_even() gen2 = gen_odd() def gen_both(gen1, gen2): first = next(gen1) second = next(gen2) while True: if first < second: yield first first = next(gen1) elif first == second: yield first, second first = next(gen1) second = next(gen2) else: yield second second = next(gen2) gen = gen_both(gen1, gen2) for i in gen: print(i) Output: 0 1 2 3 4 5 (6, 6) 7 8 9 10 --------------------------------------------------------------------------- StopIteration Traceback (most recent call last) Cell In[8], line 11, in gen_both(gen1, gen2) 10 yield first ---> 11 first = next(gen1) 12 elif first == second: StopIteration: The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[8], line 21 18 second = next(gen2) 20 gen = gen_both(gen1, gen2) ---> 21 for i in gen: 22 print(i) RuntimeError: generator raised StopIteration How can I do this in Python? | You could avoid the stop iterator by handling it as follows: def gen_both(gen1, gen2): first = next(gen1, None) second = next(gen2, None) while True: if first is None and second is None: break elif first is None: yield second second = next(gen2, None) elif second is None: yield first first = next(gen1, None) else: if first < second: yield first first = next(gen1, None) elif first == second: yield first, second first = next(gen1, None) second = next(gen2, None) else: yield second second = next(gen2, None) | 4 | 2 |
76,781,391 | 2023-7-27 | https://stackoverflow.com/questions/76781391/speed-up-loading-of-multiple-csv-files | I am trying to write a parser/API for the France's public drug database (https://base-donnees-publique.medicaments.gouv.fr/). It consists of eight CSV files (in fact TSV as they use tabs), each from a few kB to 4MB, the biggest having ~20000 lines (each line representing a drug with its name, codes, prices, etc). As these files may chance periodically, I'd like to parse them directly instead of creating a cleaner database (as I would probably have to recreate it regularly anyway). Importing these files took a little time (about one second), so I tried to speed it up a bit and did some benchmarking of several methods, and I was surprised to see that the most basic one seemed to also be the fastest. Here is my test code (sorry it is quite long). Each file is associated with a dedicated class to parse its lines. Basically, these classes are namedtuples with a custom classmethod to parse dates, numbers, etc. import pathlib import enum import datetime from decimal import Decimal from collections import namedtuple import csv def parse_date(date: str) -> datetime.datetime: return datetime.datetime.strptime(date, "%d/%m/%Y").date() def parse_date_bis(date: str) -> datetime.datetime: return datetime.datetime.strptime(date, "%Y%m%d").date() def parse_text(text): if not text: return "" return text.replace("<br>", "\n").strip() def parse_list(raw): return raw.split(";") def parse_price(price: str) -> Decimal: if not price: return None # Handles cases like "4,417,08". price = '.'.join(price.rsplit(",", 1)).replace(',', '') return Decimal(price) def parse_percentage(raw: str) -> int: if not raw: return None return int(raw.replace("%", "").strip()) class StatutAdministratifPresentation(enum.Enum): ACTIVE = "Présentation active" ABROGEE = "Présentation abrogée" class EtatCommercialisation(enum.Enum): DC = "Déclaration de commercialisation" S = "Déclaration de suspension de commercialisation" DAC = "Déclaration d'arrêt de commercialisation" AC = "Arrêt de commercialisation (le médicament n'a plus d'autorisation)" class MotifAvisSMR(enum.Enum): INSCRIPTION = "Inscription (CT)" RENOUVELLEMENT = "Renouvellement d'inscription (CT)" EXT = "Extension d'indication" EXTNS = "Extension d'indication non sollicitée" REEV_SMR = "Réévaluation SMR" REEV_ASMR = "Réévaluation ASMR" REEV_SMR_ASMR = "Réévaluation SMR et ASMR" REEV_ETUDE = "Réévaluation suite à résultats étude post-inscript" REEV_SAISINE = "Réévaluation suite saisine Ministères (CT)" NOUV_EXAM = "Nouvel examen suite au dépôt de nouvelles données" MODIF_COND = "Modification des conditions d'inscription (CT)" AUTRE = "Autre demande" class ImportanceSMR(enum.Enum): IMPORTANT = "Important" MODERE = "Modéré" FAIBLE = "Faible" INSUFFISANT = "Insuffisant" COMMENTAIRES = "Commentaires" NP = "Non précisé" class ImportanceASMR(enum.Enum): COM = "Commentaires sans chiffrage de l'ASMR" I = "I" II = "II" III = "III" IV = "IV" V = "V" NP = "Non précisée" SO = "Sans objet" class Specialite(namedtuple("Specialite", ("cis", "denomation", "forme", "voies_administration", "statut_amm", "type_amm", "commercialisation", "date_amm", "statut_bdm", "numero_autorisation_europeenne", "titulaire", "surveillance_renforcee"))): @classmethod def from_line(cls, line): line[2] = line[2].replace(" ", " ").strip() line[3] = parse_list(line[3]) line[7] = parse_date(line[7]) line[10] = line[10].strip() # There are often leading spaces here (like ' OPELLA HEALTHCARE FRANCE'). 
return cls(*line) class Presentation(namedtuple("Specialite", ("cis", "cip7", "libelle", "statut", "commercialisation", "date_commercialisation", "cip13", "agrement_collectivites", "taux_remboursement", "prix", "prix_hors_honoraires", "montant_honoraires", "indications_remboursement"))): @classmethod def from_line(cls, line): if line[3] == "Présentation active": line[3] = StatutAdministratifPresentation.ACTIVE else: line[3] = StatutAdministratifPresentation.ABROGEE line[4] = { "Déclaration de commercialisation": EtatCommercialisation.DC, "Déclaration de suspension de commercialisation": EtatCommercialisation.S, "Déclaration d'arrêt de commercialisation": EtatCommercialisation.DAC, "Arrêt de commercialisation (le médicament n'a plus d'autorisation)": EtatCommercialisation.AC }.get(line[4]) line[5] = parse_date(line[5]) line[7] = True if line[7] == "oui" else False line[8] = parse_percentage(line[8]) line[9] = parse_price(line[9]) line[10] = parse_price(line[10]) line[11] = parse_price(line[11]) line[12] = parse_text(line[12]) return cls(*line) class Composition(namedtuple("Composition", ("cis", "element", "code", "substance", "dosage", "ref_dosage", "nature_composant", "cle"))): @classmethod def from_line(cls, line): line.pop(-1) return cls(*line) class AvisSMR(namedtuple("AvisSMR", ("cis", "dossier_has", "motif", "date", "valeur", "libelle"))): @classmethod def from_line(cls, line): line[2] = MotifAvisSMR(line[2]) line[3] = parse_date_bis(line[3]) line[4] = ImportanceSMR(line[4]) line[5] = parse_text(line[5]) return cls(*line) class AvisASMR(namedtuple("AvisASMR", ("cis", "dossier_has", "motif", "date", "valeur", "libelle"))): @classmethod def from_line(cls, line): line[2] = MotifAvisSMR(line[2]) line[3] = parse_date_bis(line[3]) line[4] = ImportanceASMR(line[4]) line[5] = parse_text(line[5]) return cls(*line) class AvisCT(namedtuple("AvisCT", ("dossier_has", "lien"))): @classmethod def from_line(cls, line): return cls(*line) FILE_MATCHES = { "CIS_bdpm.txt": Specialite, "CIS_CIP_bdpm.txt": Presentation, "CIS_COMPO_bdpm.txt": Composition, "CIS_HAS_ASMR_bdpm.txt": AvisASMR, "CIS_HAS_SMR_bdpm.txt": AvisSMR, "HAS_LiensPageCT_bdpm.txt": AvisCT } def sequential_import_file_data(filename, cls): result = {cls: []} with (pathlib.Path("data") / filename).open("r", encoding="latin1") as f: rows = csv.reader(f, delimiter="\t") for line in rows: data = cls.from_line(line) result[cls].append(data) return result def import_data_sequential(): results = [] for filename, cls in FILE_MATCHES.items(): results.append(sequential_import_file_data(filename, cls)) from multiprocessing.pool import ThreadPool def import_data_mp_tp(n=2): pool = ThreadPool(n) results = [] for filename, cls in FILE_MATCHES.items(): results.append(pool.apply_async( sequential_import_file_data, (filename, cls) )) results = [r.get() for r in results] from multiprocessing.pool import Pool def import_data_mp_p(n=2): pool = Pool(n) results = [] for filename, cls in FILE_MATCHES.items(): results.append(pool.apply_async( sequential_import_file_data, (filename, cls) )) results = [r.get() for r in results] import asyncio import aiofiles from aiocsv import AsyncReader async def async_import_file_data(filename, cls): results = {cls: []} async with aiofiles.open( (pathlib.Path("data") / filename), mode="r", encoding="latin1" ) as afp: async for line in AsyncReader(afp, delimiter="\t"): data = cls.from_line(line) results[cls].append(data) return results def import_data_async(): results = [] for filename, cls in FILE_MATCHES.items(): 
results.append(asyncio.run(async_import_file_data(filename, cls))) def main(): import timeit print( "Sequential:", timeit.timeit(lambda: import_data_sequential(), number=10) ) print( "Multi ThreadPool:", timeit.timeit(lambda: import_data_mp_tp(), number=10) ) print( "Multi Pool:", timeit.timeit(lambda: import_data_mp_p(), number=10) ) print( "Async:", timeit.timeit(lambda: import_data_async(), number=10) ) if __name__ == "__main__": main() So when I run it, I get the following result. Sequential: 9.821639589001279 Multi ThreadPool: 10.137484730999859 Multi Pool: 12.531487682997977 Async: 30.953154197999538 The most basic solution of iterating through all files and through all their lines seems to also be the fastest. So, did I do anything wrong which would slow the import? Or is it normal/expected to have such time differences? Edit 2023-08-15: I realized that since all the files I need to parse are TSV (and they don't contain tabs in their values), I could still speed up the parsing by using a simple line.strip('\n').split('\t') instead of the CSV module, which saves another 40% in runtime. :) I'll probably post a Gist when I have a complete API for this database. | As usual: run a profiler on your code to see where it's spending its time. (This is PyCharm's, it wraps the stdlib cProfile.) Sequential: 7.865187874995172 Huh, okay. strptime, I can tell that'd get called by datetime.datetime.strptime. Also, weird, getlocale... why do we need locales there? Clicking through to the call graph shows that strptime actually looks up the current locale, and has a bunch of locks and all that – what if we replace those parse_dates with our own implementations? def parse_date(date: str) -> datetime.date: d, m, y = (int(x) for x in date.split("/", 2)) return datetime.date(2000 + y, m, d) def parse_date_bis(date: str) -> datetime.datetime: y = int(date[:4]) m = int(date[4:6]) d = int(date[6:8]) return datetime.datetime(y, m, d) Sequential: 3.8978060420195106 Okay, we're cooking! 52% improvement right there! (It doesn't show up on the screenshot here because I was a silly goose cropping it, but the re stuff that strptime uses under the hood dropped right off too.) Now let's work with the assumption that there'll be a lot of the same date and slap @lru_cache(maxsize=None)s (RAM flex there, unbounded caches) on those hot parse_date_* functions, run the code and also print out the cache info: Sequential: 3.2240814580000006 CacheInfo(hits=358989, misses=6991, maxsize=None, currsize=6991) CacheInfo(hits=221607, misses=513, maxsize=None, currsize=513) Looks pretty good to me, we got another 15% off the last number. parse_price could apparently use a cache too, though: Sequential: 2.928746833000332 CacheInfo(hits=358989, misses=6991, maxsize=None, currsize=6991) CacheInfo(hits=221607, misses=513, maxsize=None, currsize=513) CacheInfo(hits=622064, misses=4096, maxsize=None, currsize=4096) Hey, who knew, there was only 4096 individual price strings in the data. The remaining parsing functions could use a cache too if you have the memory, but with a little bit of profiling and parsing elbow grease, it's now 2.7x faster [when running everything 10 times, which means those caches will be hot – a single run's speedup isn't as dramatic], with no parallel processing required. Magic! 
And just so the playing ground is a bit more even, here's a hyperfine benchmark where the Python interpreter is started from scratch for each import (and each interpreter runs the import only once): $ hyperfine 'python3 so76781391-orig.py' 'python3 so76781391-opt.py' --warmup 5 --min-benchmarking-time 10 Benchmark 1: python3 so76781391-orig.py Time (mean ± σ): 363.0 ms ± 2.7 ms [User: 340.8 ms, System: 20.7 ms] Range (min … max): 358.9 ms … 367.9 ms 27 runs Benchmark 2: python3 so76781391-opt.py Time (mean ± σ): 234.1 ms ± 2.5 ms [User: 215.6 ms, System: 17.0 ms] Range (min … max): 228.2 ms … 238.5 ms 42 runs Summary 'python3 so76781391-opt.py' ran 1.55 ± 0.02 times faster than 'python3 so76781391-orig.py' so, 55% concrete speed boost with a quick look at the profiler (and a couple of additional optimizations such as not creating mapping dicts within from_line functions, etc., etc.). | 2 | 3 |
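A minimal, self-contained sketch of what the hand-written, cached parsers from the answer above end up looking like once `@lru_cache` is applied. The unbounded cache sizes are the answer's suggestion; the dd/mm/yyyy four-digit-year assumption follows the original `%d/%m/%Y` format, and the sample values in the `__main__` block are made up for the demo:

```python
from datetime import date, datetime
from decimal import Decimal
from functools import lru_cache
from typing import Optional


@lru_cache(maxsize=None)
def parse_date(raw: str) -> date:
    # "27/07/2023" -> date(2023, 7, 27); avoids strptime's locale/regex machinery
    d, m, y = (int(x) for x in raw.split("/", 2))
    return date(y, m, d)


@lru_cache(maxsize=None)
def parse_date_bis(raw: str) -> datetime:
    # "20230727" -> datetime(2023, 7, 27)
    return datetime(int(raw[:4]), int(raw[4:6]), int(raw[6:8]))


@lru_cache(maxsize=None)
def parse_price(price: str) -> Optional[Decimal]:
    if not price:
        return None
    # handles values such as "4,417,08" where the last comma is the decimal separator
    price = ".".join(price.rsplit(",", 1)).replace(",", "")
    return Decimal(price)


if __name__ == "__main__":
    print(parse_date("27/07/2023"), parse_date_bis("20230727"), parse_price("4,417,08"))
    print(parse_date.cache_info())
```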
76,780,968 | 2023-7-27 | https://stackoverflow.com/questions/76780968/pytables-install-with-python-3-11-fails-on-macos-m1 | $ python -m pip install tables stops with Error: compiling Cython file Environment (I am within a virtual environment, created with pyenv. ) Only few packages installed atm Package Version ---------- ------- Cython 3.0.0 numpy 1.25.1 pip 23.2.1 setuptools 65.5.0 wheel 0.41.0 My exports export HDF5_DIR="$(brew --prefix hdf5)" export BLOSC_DIR="$(brew --prefix c-blosc)" export C_INCLUDE_PATH=/opt/homebrew/Cellar/lzo/2.10/include/lzo:/opt/homebrew/Cellar/lzo/2.10/include/ export LIBRARY_PATH=/opt/homebrew/lib Here is the complete error message: $ python -m pip install tables ─╯ Collecting tables Using cached tables-3.8.0.tar.gz (8.0 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [51 lines of output] <string>:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html Error compiling Cython file: ------------------------------------------------------------ ... def _dump_h5_backtrace(): cdef object bt = [] if H5Ewalk(H5E_DEFAULT, H5E_WALK_DOWNWARD, e_walk_cb, <void*>bt) < 0: ^ ------------------------------------------------------------ tables/utilsextension.pyx:384:47: Cannot assign type 'herr_t (unsigned int, const H5E_error_t *, void *) except? -1 nogil' to 'H5E_walk_t' cpuinfo failed, assuming no CPU features: 'flags' * Using Python 3.11.4 (main, Jul 27 2023, 14:26:12) [Clang 14.0.3 (clang-1403.0.22.14.1)] * Found cython 3.0.0 * USE_PKGCONFIG: False * Found HDF5 headers at ``/opt/homebrew/opt/hdf5/include``, library at ``/opt/homebrew/opt/hdf5/lib``. * Found LZO 2 headers at ``/opt/homebrew/Cellar/lzo/2.10/include``, the library is located in the standard system search dirs. * Skipping detection of LZO 1 since LZO 2 has already been found. * Found bzip2 headers at ``/opt/homebrew/opt/bzip2/include``, library at ``/opt/homebrew/opt/bzip2/lib``. * Found blosc headers at ``/opt/homebrew/opt/c-blosc/include``, library at ``/opt/homebrew/opt/c-blosc/lib``. * Found blosc2 headers at ``/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/include``, library at ``/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib``. * Copying blosc2 runtime library to 'tables' dir because it was not found in standard locations Compiling tables/utilsextension.pyx because it changed. 
[1/1] Cythonizing tables/utilsextension.pyx Traceback (most recent call last): File "/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires self.run_setup() File "/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup exec(code, locals()) File "<string>", line 928, in <module> File "<string>", line 923, in get_cython_extfiles File "/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1134, in cythonize cythonize_one(*args) File "/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-n4mpp65u/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1301, in cythonize_one raise CompileError(None, pyx_file) Cython.Compiler.Errors.CompileError: tables/utilsextension.pyx [end of output] What I investigated so far H5Ewalk() was renamed to H5Ewalk1() and deprecated in 1.8.0 as stated here: https://docs.hdfgroup.org/hdf5/develop/group___h5_e.html#title29 I use 1.14.1, so I don't really understand that the build seems to to use this deprecated function? $ brew info hdf5 ==> hdf5: stable 1.14.1 (bottled) Test tables 3.7.0 This version does not result in the above error, but in the following step of my requirements.txt file package installation: Building wheels for collected packages: lxml, python-pptx, simplekml, tables, timezonefinder Building wheel for lxml (setup.py) ... done Created wheel for lxml: filename=lxml-4.9.2-cp311-cp311-macosx_13_0_arm64.whl size=1642538 sha256=00fdcd8bd9a533750afc5c27015d13fc99e25cad31f7f8d91b200cd4806ca7ef Stored in directory: /Users/mario.theuermann/Library/Caches/pip/wheels/fb/5b/f7/0a27880b4a007daeff53a196d01901627f640392b7e76e76e5 Building wheel for python-pptx (setup.py) ... done Created wheel for python-pptx: filename=python_pptx-0.6.21-py3-none-any.whl size=470934 sha256=05c7f46456d4ab5df20c574b3b051386e6965af187d1b31ee469b01d5e42f144 Stored in directory: /Users/mario.theuermann/Library/Caches/pip/wheels/f4/c7/af/d1d91f3decfaa7621033f30b69a29bf0b1206005663d233e7a Building wheel for simplekml (setup.py) ... 
done Created wheel for simplekml: filename=simplekml-1.3.6-py3-none-any.whl size=65860 sha256=1c1b2052ef80cfc795ba6a281a718c7a338a10723bd388c18b6910ce40356f0c Stored in directory: /Users/mario.theuermann/Library/Caches/pip/wheels/72/3e/80/c3e5c354c3cbe62d8c5e4fb63d9e7cdccc7f93399997ae465f Building wheel for tables (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for tables (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [296 lines of output] <string>:18: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html cpuinfo failed, assuming no CPU features: No module named 'cpuinfo' * Using Python 3.11.4 (main, Jul 27 2023, 14:26:12) [Clang 14.0.3 (clang-1403.0.22.14.1)] * Found cython 3.0.0 * USE_PKGCONFIG: False * Found HDF5 headers at ``/opt/homebrew/opt/hdf5/include``, library at ``/opt/homebrew/opt/hdf5/lib``. * Found LZO 2 headers at ``/opt/homebrew/Cellar/lzo/2.10/include``, the library is located in the standard system search dirs. * Skipping detection of LZO 1 since LZO 2 has already been found. * Found bzip2 headers at ``/opt/homebrew/opt/bzip2/include``, library at ``/opt/homebrew/opt/bzip2/lib``. * Found blosc headers at ``/opt/homebrew/opt/c-blosc/include``, library at ``/opt/homebrew/opt/c-blosc/lib``. running bdist_wheel running build running build_py creating build creating build/lib.macosx-13.4-arm64-cpython-311 creating build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/link.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/description.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/index.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/attributeset.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/registry.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/leaf.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/carray.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/__init__.py -> build/lib.macosx-13.4-arm64-cpython-311/tables copying tables/unimplemented.py -> build/lib.macosx-13.4-arm64-cpython-311/tables . . . . . 
running build_ext building 'tables.utilsextension' extension creating build/temp.macosx-13.4-arm64-cpython-311/hdf5-blosc creating build/temp.macosx-13.4-arm64-cpython-311/hdf5-blosc/src creating build/temp.macosx-13.4-arm64-cpython-311/src creating build/temp.macosx-13.4-arm64-cpython-311/tables clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/opt/homebrew/opt/bzip2/include -DNDEBUG=1 -DHAVE_LZO2_LIB=1 -DHAVE_BZ2_LIB=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/opt/homebrew/Cellar/lzo/2.10/include/lzo -I/opt/homebrew/Cellar/lzo/2.10/include -I/opt/homebrew/opt/bzip2/include -I/usr/local/include -I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/opt/c-blosc/include -I/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-kvflwhff/overlay/lib/python3.11/site-packages/numpy/core/include -I/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/include -I/Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11 -c hdf5-blosc/src/blosc_filter.c -o build/temp.macosx-13.4-arm64-cpython-311/hdf5-blosc/src/blosc_filter.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/opt/homebrew/opt/bzip2/include -DNDEBUG=1 -DHAVE_LZO2_LIB=1 -DHAVE_BZ2_LIB=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/opt/homebrew/Cellar/lzo/2.10/include/lzo -I/opt/homebrew/Cellar/lzo/2.10/include -I/opt/homebrew/opt/bzip2/include -I/usr/local/include -I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/opt/c-blosc/include -I/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-kvflwhff/overlay/lib/python3.11/site-packages/numpy/core/include -I/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/include -I/Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11 -c src/H5ARRAY.c -o build/temp.macosx-13.4-arm64-cpython-311/src/H5ARRAY.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/opt/homebrew/opt/bzip2/include -DNDEBUG=1 -DHAVE_LZO2_LIB=1 -DHAVE_BZ2_LIB=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/opt/homebrew/Cellar/lzo/2.10/include/lzo -I/opt/homebrew/Cellar/lzo/2.10/include -I/opt/homebrew/opt/bzip2/include -I/usr/local/include 
-I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/opt/c-blosc/include -I/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-kvflwhff/overlay/lib/python3.11/site-packages/numpy/core/include -I/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/include -I/Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11 -c src/H5ATTR.c -o build/temp.macosx-13.4-arm64-cpython-311/src/H5ATTR.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION src/H5ATTR.c:453:8: warning: explicitly assigning value of variable of type 'hid_t' (aka 'long long') to itself [-Wself-assign] loc_id=loc_id; ~~~~~~^~~~~~~ 1 warning generated. clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/opt/homebrew/opt/bzip2/include -DNDEBUG=1 -DHAVE_LZO2_LIB=1 -DHAVE_BZ2_LIB=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/opt/homebrew/Cellar/lzo/2.10/include/lzo -I/opt/homebrew/Cellar/lzo/2.10/include -I/opt/homebrew/opt/bzip2/include -I/usr/local/include -I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/opt/c-blosc/include -I/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-kvflwhff/overlay/lib/python3.11/site-packages/numpy/core/include -I/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/include -I/Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11 -c src/utils.c -o build/temp.macosx-13.4-arm64-cpython-311/src/utils.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION src/utils.c:290:14: warning: variable 'namedtypes' set but not used [-Wunused-but-set-variable] int namedtypes = 0; ^ 1 warning generated. 
clang -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/opt/homebrew/opt/bzip2/include -DNDEBUG=1 -DHAVE_LZO2_LIB=1 -DHAVE_BZ2_LIB=1 -DHAVE_BLOSC_LIB=1 -Ihdf5-blosc/src -I/opt/homebrew/Cellar/lzo/2.10/include/lzo -I/opt/homebrew/Cellar/lzo/2.10/include -I/opt/homebrew/opt/bzip2/include -I/usr/local/include -I/sw/include -I/opt/include -I/opt/local/include -I/usr/include -I/include -I/opt/homebrew/opt/hdf5/include -I/opt/homebrew/opt/c-blosc/include -I/private/var/folders/h2/tcw923v140sbnqcz0p0xp3y80000gp/T/pip-build-env-kvflwhff/overlay/lib/python3.11/site-packages/numpy/core/include -I/Users/mario.theuermann/.pyenv/versions/i4SEE-sandbox/include -I/Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11 -c tables/utilsextension.c -o build/temp.macosx-13.4-arm64-cpython-311/tables/utilsextension.o -Isrc -DH5_USE_18_API -DH5Acreate_vers=2 -DH5Aiterate_vers=2 -DH5Dcreate_vers=2 -DH5Dopen_vers=2 -DH5Eclear_vers=2 -DH5Eprint_vers=2 -DH5Epush_vers=2 -DH5Eset_auto_vers=2 -DH5Eget_auto_vers=2 -DH5Ewalk_vers=2 -DH5E_auto_t_vers=2 -DH5Gcreate_vers=2 -DH5Gopen_vers=2 -DH5Pget_filter_vers=2 -DH5Pget_filter_by_id_vers=2 -DH5Tarray_create_vers=2 -DH5Tget_array_dims_vers=2 -DH5Z_class_t_vers=2 -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION tables/utilsextension.c:8032:52: warning: comparison of integers of different signs: 'hsize_t' (aka 'unsigned long long') and 'long long' [-Wsign-compare] __pyx_t_2 = (((__pyx_v_maxdims[__pyx_v_i]) == -1LL) != 0); ~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~ tables/utilsextension.c:12367:33: warning: comparison of integers of different signs: 'int' and 'hsize_t' (aka 'unsigned long long') [-Wsign-compare] for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { ~~~~~~~~~ ^ ~~~~~~~~~ tables/utilsextension.c:15186:35: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare] for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { ~~~~~~~~~ ^ ~~~~~~~~~ tables/utilsextension.c:15413:52: warning: comparison of integers of different signs: 'hsize_t' (aka 'unsigned long long') and 'long long' [-Wsign-compare] __pyx_t_3 = (((__pyx_v_maxdims[__pyx_v_j]) == -1LL) != 0); ~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~ tables/utilsextension.c:22030:23: error: no member named 'exc_type' in 'struct _err_stackitem' while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && ~~~~~~~~ ^ tables/utilsextension.c:22030:53: error: no member named 'exc_type' in 'struct _err_stackitem' while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && ~~~~~~~~ ^ tables/utilsextension.c:22044:23: error: no member named 'exc_type' in 'struct _err_stackitem' *type = exc_info->exc_type; ~~~~~~~~ ^ tables/utilsextension.c:22046:21: error: no member named 'exc_traceback' in 'struct _err_stackitem' *tb = exc_info->exc_traceback; ~~~~~~~~ ^ tables/utilsextension.c:22060:26: error: no member named 'exc_type' in 'struct _err_stackitem' tmp_type = exc_info->exc_type; ~~~~~~~~ ^ tables/utilsextension.c:22062:24: error: no member named 'exc_traceback' in 'struct _err_stackitem' tmp_tb = exc_info->exc_traceback; ~~~~~~~~ ^ tables/utilsextension.c:22063:15: error: no member named 'exc_type' in 'struct _err_stackitem' exc_info->exc_type = type; ~~~~~~~~ ^ tables/utilsextension.c:22065:15: error: no member named 'exc_traceback' in 'struct _err_stackitem' exc_info->exc_traceback = tb; ~~~~~~~~ ^ 
tables/utilsextension.c:22147:30: error: no member named 'exc_type' in 'struct _err_stackitem' tmp_type = exc_info->exc_type; ~~~~~~~~ ^ tables/utilsextension.c:22149:28: error: no member named 'exc_traceback' in 'struct _err_stackitem' tmp_tb = exc_info->exc_traceback; ~~~~~~~~ ^ tables/utilsextension.c:22150:19: error: no member named 'exc_type' in 'struct _err_stackitem' exc_info->exc_type = local_type; ~~~~~~~~ ^ tables/utilsextension.c:22152:19: error: no member named 'exc_traceback' in 'struct _err_stackitem' exc_info->exc_traceback = local_tb; ~~~~~~~~ ^ tables/utilsextension.c:22201:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations] hash1 = ((PyBytesObject*)s1)->ob_shash; ^ /Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here Py_DEPRECATED(3.11) Py_hash_t ob_shash; ^ /Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ tables/utilsextension.c:22202:43: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations] hash2 = ((PyBytesObject*)s2)->ob_shash; ^ /Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here Py_DEPRECATED(3.11) Py_hash_t ob_shash; ^ /Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED' #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__)) ^ tables/utilsextension.c:22347:26: error: no member named 'exc_type' in 'struct _err_stackitem' tmp_type = exc_info->exc_type; ~~~~~~~~ ^ tables/utilsextension.c:22349:24: error: no member named 'exc_traceback' in 'struct _err_stackitem' tmp_tb = exc_info->exc_traceback; ~~~~~~~~ ^ tables/utilsextension.c:22350:15: error: no member named 'exc_type' in 'struct _err_stackitem' exc_info->exc_type = *type; ~~~~~~~~ ^ tables/utilsextension.c:22352:15: error: no member named 'exc_traceback' in 'struct _err_stackitem' exc_info->exc_traceback = *tb; ~~~~~~~~ ^ tables/utilsextension.c:23020:5: error: incomplete definition of type 'struct _frame' __Pyx_PyFrame_SetLineNumber(py_frame, py_line); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ tables/utilsextension.c:445:62: note: expanded from macro '__Pyx_PyFrame_SetLineNumber' #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) ~~~~~~~^ /Users/mario.theuermann/.pyenv/versions/3.11.4/include/python3.11/pytypedefs.h:22:16: note: forward declaration of 'struct _frame' typedef struct _frame PyFrameObject; ^ 6 warnings and 17 errors generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for tables Building wheel for timezonefinder (pyproject.toml) ... 
done Following the Github issue I tried building and installing the latest version of tables like so: python setup.py build --hdf5=/opt/homebrew/opt/hdf5 --use-pkgconfig=FALSE --blosc=/opt/homebrew/opt/c-blosc --lzo=/opt/homebrew/Cellar/lzo/2.10 --bzip2=/opt/homebrew/Cellar/bzip2/1.0.8 python setup.py install --hdf5=/opt/homebrew/opt/hdf5 Then, the test suite of tables fails with: python3 -m tables.tests.test_all ─╯ Traceback (most recent call last): File "<frozen runpy>", line 189, in _run_module_as_main File "<frozen runpy>", line 112, in _get_module_details File "/Users/xyz/projects-my/PyTables/tables/__init__.py", line 42, in <module> from .utilsextension import get_hdf5_version as _get_hdf5_version ModuleNotFoundError: No module named 'tables.utilsextension' | I managed to reproduce your issue, the problem seems to be installing the package from PyPi. From the error message it seems the Cython cannot find lzo libraries. The solution of the mentioned GitHub thread seems to be installing c-blosc, which also did not fix the problem for me. There are two options that worked for me: install the package from GitHub pip3 install git+https://github.com/PyTables/PyTables install the package using conda conda install -c anaconda pytables | 6 | 6 |
76,785,441 | 2023-7-28 | https://stackoverflow.com/questions/76785441/python-pandas-filter-columns-for-true-false-and-both | I am currently trying to filter a pandas df with certain boolean columns for true, false and both types within a loop. For better explanation let's say we have the following df: df = pd.DataFrame({ 'a': [1, 2, 3, 5, 6, 7, 8, 9, 10], 'b': [True, True, True, False, False, True, False, True, False], 'c': [True, False, False, False, True, True, True, True, False], 'd': [35, 59, 12, 2, 19, 24, 33, 5, 11] }) What I want to do now is loop over both boolean columns like so: loops = [[True, True], [True, any], [any, True], [any, any]] for loop in loops: df[(df['b'] == loop[0]) & (df['c'] == loop[1])]['d'].sum() Unfortunately, using any does not work, and neither does bool. Has anyone an idea how that could be achieved? Thank you! | Using any doesn't make sense here; it is a function, not a generic term that pandas would understand. Why not use a set for your conditions? T = {True} F = {False} ANY = {True, False} NONE = set() loops = [[T, T], [T, ANY], [ANY, T], [ANY, ANY]] for loop in loops: x = df[df['b'].isin(loop[0]) & df['c'].isin(loop[1])]['d'].sum() print(loop, x) Output: [{True}, {True}] 64 [{True}, {False, True}] 135 [{False, True}, {True}] 116 [{False, True}, {False, True}] 200 Alternative using functions If you create custom functions, make sure they are vectorized. def istrue(x): return x == True # or just "return x" if you only have booleans def isfalse(x): return x == False # not using "~ x" to make the function generic def isany(x): return x | True def isnone(x): return x & False loops = [[istrue, istrue], [istrue, isany], [isany, istrue], [isany, isany]] for loop in loops: x = df[loop[0](df['b']) & loop[1](df['c'])]['d'].sum() print([x.__name__ for x in loop], x) Output: ['istrue', 'istrue'] 64 ['istrue', 'isany'] 135 ['isany', 'istrue'] 116 ['isany', 'isany'] 200 | 2 | 4 |
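For reference, a runnable consolidation of the set-based approach above, using the question's own sample frame; the printed totals should match the 64 / 135 / 116 / 200 shown in the answer:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3, 5, 6, 7, 8, 9, 10],
    "b": [True, True, True, False, False, True, False, True, False],
    "c": [True, False, False, False, True, True, True, True, False],
    "d": [35, 59, 12, 2, 19, 24, 33, 5, 11],
})

T = {True}
ANY = {True, False}  # "don't care": matches both values

for cond_b, cond_c in [(T, T), (T, ANY), (ANY, T), (ANY, ANY)]:
    total = df.loc[df["b"].isin(cond_b) & df["c"].isin(cond_c), "d"].sum()
    print(cond_b, cond_c, total)
```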
76,778,911 | 2023-7-27 | https://stackoverflow.com/questions/76778911/faster-way-to-split-a-large-csv-file-evenly-by-groups-into-smaller-csv-files | I'm sure there is a better way for this but I am drawing a blank. I have a CSV file in this format. The ID column is sorted so everything is grouped together at least: Text ID this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text2, BBBB this is sample text2, BBBB this is sample text2, BBBB this is sample text3, CCCC this is sample text4, DDDD this is sample text4, DDDD this is sample text5, EEEE this is sample text5, EEEE this is sample text6, FFFF this is sample text6, FFFF What I want to do is split the CSV fast across X amount of smaller CSV files fast. So if X==3, then AAAA would go into "1.csv", BBBB would go into "2.csv", CCCC would go into "3.csv" and the next group would loop back around and go into "1.csv". The groups vary in size so a hardcoded split by numbers won't work here. Is there a faster way to split these reliably then my current method which just uses Pandas groupby in Python to write them? file_ = 0 num_files = 3 for name, group in df.groupby(by=['ID'], sort=False): file_+=1 group['File Num'] = file_ group.to_csv(file_+'.csv',index=False, header=False, mode='a') if file_ == num_files: file_ = 0 This is a python based solution but I am open to stuff using awk or bash if it gets the job done. EDIT: For clarification, I want the groups split across a fixed amount of files I can set. In this case, 3. (So x = 3). The first group (AAAA) would go into 1.csv, the 2nd into 2.csv, the third into 3.csv and then for the fourth group, it would loop back and insert it into 1.csv. etc. Example output 1.csv: Text ID this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text4, DDDD this is sample text4, DDDD Example output 2.csv: Text ID this is sample text2, BBBB this is sample text2, BBBB this is sample text2, BBBB this is sample text5, EEEE this is sample text5, EEEE Example output 3.csv: Text ID this is sample text3, CCCC this is sample text6, FFFF this is sample text6, FFFF | Using any awk in any shell on every Unix box: $ cat tst.awk NR==1 { hdr = $0 next } $NF != prev { out = (((blockCnt++) % X) + 1) ".csv" if ( blockCnt <= X ) { print hdr > out } prev = $NF } { print > out } $ awk -v X=3 -f tst.awk input.csv $ head [0-9]*.csv ==> 1.csv <== Text ID this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text, AAAA this is sample text4, DDDD this is sample text4, DDDD ==> 2.csv <== Text ID this is sample text2, BBBB this is sample text2, BBBB this is sample text2, BBBB this is sample text5, EEEE this is sample text5, EEEE ==> 3.csv <== Text ID this is sample text3, CCCC this is sample text6, FFFF this is sample text6, FFFF If X was some large enough number that you exceed your system limit for concurrently open files and you start getting a "too many open files" error then you'd need to use GNU awk as it handles that internally or change the code to only have 1 file open at a time: NR==1 { hdr = $0 next } $NF != prev { close(out) out = (((blockCnt++) % X) + 1) ".csv" if ( blockCnt <= X ) { print hdr > out } prev = $NF } { print >> out } or implement your own way of managing how many files are open concurrently. 
EDIT: here's what the suggestion by @PaulHodges in the comments would result in a script like: NR == 1 { for ( i=1; i <= X; i++ ) { print > (i ".csv") } next } $NF != prev { out = (((NR-1) % X) + 1) ".csv" prev = $NF } { print > out } | 5 | 4 |
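Since the question was open to Python as well, here is a rough pure-Python equivalent of the round-robin split using `itertools.groupby`. It relies on the input already being sorted/grouped by the ID column (assumed to be the last field); the `input.csv` name, the default comma dialect and `X = 3` are placeholders mirroring the example, so the delimiter and column index may need adjusting for the real file:

```python
import csv
from itertools import groupby

X = 3  # number of output files, mirroring the awk answer

with open("input.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)

    handles = {}  # output file name -> (file handle, csv writer)
    try:
        # groupby only works because the rows are already grouped by ID
        for i, (_, rows) in enumerate(groupby(reader, key=lambda row: row[-1])):
            name = f"{i % X + 1}.csv"
            if name not in handles:
                fh = open(name, "w", newline="")
                writer = csv.writer(fh)
                writer.writerow(header)
                handles[name] = (fh, writer)
            handles[name][1].writerows(rows)
    finally:
        for fh, _ in handles.values():
            fh.close()
```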
76,784,012 | 2023-7-27 | https://stackoverflow.com/questions/76784012/pip-install-e-fails-with-expected-end-or-semicolon-for-githttps-in-require | I have a requirements.txt that uses git+https URLs like git+https://github.com/myorg/[email protected]#egg=mypackage This installs fine with pip install -r requirements.txt . But if I reference it in my setup.cfg: [options] install_requires = file: requirements.txt and pip install -e . my package as an editable install from a local dir for development, pip fails with: Getting requirements to build editable ... error error: subprocess-exited-with-error × Getting requirements to build editable did not run successfully. │ exit code: 1 [...snip...] setuptools.extern.packaging._tokenizer.ParserSyntaxError: Expected end or semicolon (after name and no valid version specifier) git+https://github.com/myorg/[email protected]#egg=mypackage despite the same requirements.txt being valid for -r . What's up? | It looks like setuptools requires the package name to be supplied as a prefix, then the fetch URL after an @, e.g. # requirements.txt mypackage @ git+https://github.com/myorg/[email protected]#egg=mypackage This form is accepted by pip install -e . with a setup.cfg that indirectly references the requirements.txt, and by pip install -r requirements.txt. It might be relevant that my project uses setuptools-scm. | 5 | 9 |
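A minimal pairing of the two files the answer implies, for orientation; the project/package names and the `v1.2.3` tag are placeholders (the original tag is not shown above):

```ini
# requirements.txt  -  PEP 508 "direct reference": <distribution name> @ <URL>
mypackage @ git+https://github.com/myorg/mypackage@v1.2.3#egg=mypackage

# setup.cfg
[metadata]
name = myproject

[options]
install_requires = file: requirements.txt
```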
76,771,761 | 2023-7-26 | https://stackoverflow.com/questions/76771761/why-does-llama-index-still-require-an-openai-key-when-using-hugging-face-local-e | I am creating a very simple question and answer app based on documents using llama-index. Previously, I had it working with OpenAI. Now I want to try using no external APIs so I'm trying the Hugging Face example in this link. It says in the example in the link: "Note that for a completely private experience, also setup a local embedding model (example here)." I'm assuming the example given below is the example being referred to. So, naturally, I'm trying to copy the example (fuller example here). Here is my code: from pathlib import Path import gradio as gr import sys import logging import os from llama_index.llms import HuggingFaceLLM from llama_index.prompts.prompts import SimpleInputPrompt logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) from llama_index import SimpleDirectoryReader, VectorStoreIndex, ServiceContext, load_index_from_storage, StorageContext storage_path = "storage/" docs_path="docs" def construct_index(directory_path): max_input_size = 4096 num_outputs = 512 #max_chunk_overlap = 20 chunk_overlap_ratio = 0.1 chunk_size_limit = 600 #prompt_helper = PromptHelper(max_input_size, num_outputs, chunk_overlap_ratio, chunk_size_limit=chunk_size_limit) system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version) - StableLM is a helpful and harmless open-source AI language model developed by StabilityAI. - StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. - StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes. - StableLM will refuse to participate in anything that could harm a human. 
""" # This will wrap the default prompts that are internal to llama-index query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>") llm = HuggingFaceLLM( context_window=4096, max_new_tokens=256, generate_kwargs={"temperature": 0.7, "do_sample": False}, system_prompt=system_prompt, query_wrapper_prompt=query_wrapper_prompt, tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b", model_name="StabilityAI/stablelm-tuned-alpha-3b", device_map="auto", stopping_ids=[50278, 50279, 50277, 1, 0], tokenizer_kwargs={"max_length": 4096}, # uncomment this if using CUDA to reduce memory usage # model_kwargs={"torch_dtype": torch.float16} ) #llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs) #llm_predictor = LLMPredictor(llm=llm) service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm) documents = SimpleDirectoryReader(directory_path).load_data() index = VectorStoreIndex.from_documents(documents, service_context=service_context) #index = VectorStoreIndex(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper) index.storage_context.persist(persist_dir=storage_path) return index def chatbot(input_text): index = load_index_from_storage(StorageContext.from_defaults(persist_dir=storage_path)) #index = GPTVectorStoreIndex.load_from_disk('index.json') #query_engine = index.as_query_engine(response_synthesizer=response_synthesizer); query_engine = index.as_query_engine(streaming=True) response = query_engine.query(input_text) print(response.source_nodes) relevant_files=[] for node_with_score in response.source_nodes: print(node_with_score) print(node_with_score.node) print(node_with_score.node.metadata) print(node_with_score.node.metadata['file_name']) file = node_with_score.node.metadata['file_name'] print( file ) # Resolve the full file path for the downloading full_file_path = Path( docs_path, file ).resolve() # See if it's already in the array if full_file_path not in relevant_files: relevant_files.append( full_file_path ) # Add it print( relevant_files ) return response.get_response(), relevant_files iface = gr.Interface(fn=chatbot, inputs=gr.components.Textbox(lines=7, label="Enter your text"), outputs=[ gr.components.Textbox(label="Response"), gr.components.File(label="Relevant Files") ], title="Custom-trained AI Chatbot", allow_flagging="never") index = construct_index(docs_path) iface.launch(share=False) Regardless, the code errors out saying: ValueError: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys Am I not understanding how to set up a local model? | Turns out I had to set the embed_model to "local" on the ServiceContext. ServiceContext.from_defaults(chunk_size=1024, llm=llm, embed_model="local") Also, when I was loading the vector index from disk I wasn't setting the llm predictor again which cause a secondary issue. So I decided to make the vector index a global variable. Here is my final code that works. 
from pathlib import Path import gradio as gr import sys import logging import os from llama_index.llms import HuggingFaceLLM from llama_index.prompts.prompts import SimpleInputPrompt logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) from llama_index import SimpleDirectoryReader, VectorStoreIndex, ServiceContext, load_index_from_storage, StorageContext storage_path = "storage" docs_path="docs" print(storage_path) max_input_size = 4096 num_outputs = 512 #max_chunk_overlap = 20 chunk_overlap_ratio = 0.1 chunk_size_limit = 600 system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version) - StableLM is a helpful and harmless open-source AI language model developed by StabilityAI. - StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. - StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes. - StableLM will refuse to participate in anything that could harm a human. """ # This will wrap the default prompts that are internal to llama-index query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>") llm = HuggingFaceLLM( context_window=4096, max_new_tokens=256, generate_kwargs={"temperature": 0.7, "do_sample": False}, system_prompt=system_prompt, query_wrapper_prompt=query_wrapper_prompt, tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b", model_name="StabilityAI/stablelm-tuned-alpha-3b", device_map="auto", stopping_ids=[50278, 50279, 50277, 1, 0], tokenizer_kwargs={"max_length": 4096}, # uncomment this if using CUDA to reduce memory usage # model_kwargs={"torch_dtype": torch.float16} ) service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm, embed_model="local") documents = SimpleDirectoryReader(docs_path).load_data() index = VectorStoreIndex.from_documents(documents, service_context=service_context) def chatbot(input_text): query_engine = index.as_query_engine() response = query_engine.query(input_text) print(response.source_nodes) relevant_files=[] for node_with_score in response.source_nodes: print(node_with_score) print(node_with_score.node) print(node_with_score.node.metadata) print(node_with_score.node.metadata['file_name']) file = node_with_score.node.metadata['file_name'] print( file ) # Resolve the full file path for the downloading full_file_path = Path( docs_path, file ).resolve() # See if it's already in the array if full_file_path not in relevant_files: relevant_files.append( full_file_path ) # Add it print( relevant_files ) return response.response, relevant_files iface = gr.Interface(fn=chatbot, inputs=gr.components.Textbox(lines=7, label="Enter your text"), outputs=[ gr.components.Textbox(label="Response"), gr.components.File(label="Relevant Files") ], title="Custom-trained AI Chatbot", allow_flagging="never") iface.launch(share=False) | 20 | 25 |
76,780,741 | 2023-7-27 | https://stackoverflow.com/questions/76780741/how-to-read-zipfile-from-stdin | I'm trying to solve reading a zipfile from stdin in python, but I keep getting issues. What I want is to be able to run cat test.xlsx | python3 test.py and create a valid zipfile.ZipFile object without first writing a temporary file if possible. My initial approach was this, but ZipFile complained the file is not seekable, import sys import zipfile zipfile.ZipFile(sys.stdin) so I changed it around, but now it complains that this is not a valid zip file: import io import sys import zipfile zipfile.ZipFile(io.StringIO(sys.stdin.read())) Can this be solved without writing the zip to a temporary file? | Zip files are binary data, not UTF-8 encoded text. You won't be able to read the file into a str with sys.stdin.read() without immediately hitting a UnicodeDecodeError: 'utf-8' codec can't decode byte ... error. Instead, you can access the underlying binary buffer object to read stdin as raw bytes. Pair that with BytesIO to get an in-memory seekable file-like object: zipfile.ZipFile(io.BytesIO(sys.stdin.buffer.read())) Alternatively, if you provide a seekable stdin (for example, by redirecting stdin instead of streaming from a pipe), you can operate on sys.stdin.buffer directly: zipfile.ZipFile(sys.stdin.buffer) paired with something like python3 test.py <test.xlsx If you care to, you can select between the two depending on whether stdin is seekable by querying the IO object's seekable method: if sys.stdin.buffer.seekable(): zip_file = zipfile.ZipFile(sys.stdin.buffer) else: buffer = io.BytesIO(sys.stdin.buffer.read()) zip_file = zipfile.ZipFile(buffer) print(zip_file.filelist) | 2 | 3 |
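A small end-to-end sketch combining both branches of the answer into one helper and listing the archive's contents; run it as, e.g., `cat test.xlsx | python3 test.py` or `python3 test.py <test.xlsx`:

```python
import io
import sys
import zipfile


def open_zip_from_stdin() -> zipfile.ZipFile:
    # Use the raw byte stream; wrap it in BytesIO when it is not seekable (e.g. a pipe)
    stream = sys.stdin.buffer
    if not stream.seekable():
        stream = io.BytesIO(stream.read())
    return zipfile.ZipFile(stream)


if __name__ == "__main__":
    with open_zip_from_stdin() as zf:
        for info in zf.infolist():
            print(info.filename, info.file_size)
```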
76,780,929 | 2023-7-27 | https://stackoverflow.com/questions/76780929/how-do-reverse-the-indexing-order-of-a-sub-loop-every-other-pass | I have a 2 dimensional array (10x10) that represents x & y positions. I'm writing a script that goes to each position, does some stuff, then moves to the next position. The most efficient way would be to start in one corner, scan at a constant x value (moving through y values), then at the end of all of the y values, move over one x value and move through the y values in reverse order. Kind of like a snake pattern if that makes sense. My loop currently looks something like this: for x in x_values: for y in y_values: do_something() move to y + 1 move to x + 1 move back to first y value As you can see, I'm completing a column (in y), advancing one x value, then rewinding back to the first y value. I'd like to run the y values in reverse for even-numbered x values. What is the simplest way to do this? | This should solve your problem: for x in x_values: for y in y_values: do_something() y_values.reverse() After each iteration over y_values, y_values is reversed. | 2 | 3 |
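An alternative sketch that keeps `y_values` untouched and picks the scan direction from the pass number instead of mutating the list; the 10-element ranges and the `do_something` stub are placeholders for the real positions and work:

```python
x_values = list(range(10))
y_values = list(range(10))


def do_something(x, y):
    print(x, y)  # stand-in for the real work at position (x, y)


for i, x in enumerate(x_values):
    # even-numbered passes go forward, odd-numbered passes go backward ("snake" order)
    ys = y_values if i % 2 == 0 else y_values[::-1]
    for y in ys:
        do_something(x, y)
```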
76,779,874 | 2023-7-27 | https://stackoverflow.com/questions/76779874/python-django-typeerror-cannot-unpack-non-iterable-matchall-object | I am facing below error when try to query using 'Q' in a viewset. It will work without any issues if I use this in a management command file. My view. @permission_classes((AllowAny,)) class ClipartViewSet(viewsets.GenericViewSet): serializer_class = ClipartSerializer queryset = Clipart.objects.filter(is_active=True).all() def list(self, request, **kwargs): # Some extra logic # qs = Clipart.objects.filter(name="Apes") #This line will work without any issues qs = Clipart.objects.filter(Q(name="Apes") | Q(name="Dog")) # This line will show error print(qs) return super(ClipartViewSet, self).list(self, request, **kwargs) Error: Internal Server Error: /api/s/configurator/cliparts backend_1 | Traceback (most recent call last): backend_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 47, in inner backend_1 | response = get_response(request) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 181, in _get_response backend_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view backend_1 | return view_func(*args, **kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/viewsets.py", line 125, in view backend_1 | return self.dispatch(request, *args, **kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 509, in dispatch backend_1 | response = self.handle_exception(exc) backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 469, in handle_exception backend_1 | self.raise_uncaught_exception(exc) backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception backend_1 | raise exc backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 506, in dispatch backend_1 | response = handler(request, *args, **kwargs) backend_1 | File "/backend/mycomp/apps/ecommerce/configurator/views/design_views.py", line 109, in list backend_1 | qs = Clipart.objects.filter(Q(name="Apes") | Q(name="Dog")) # This line will show error backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 85, in manager_method backend_1 | return getattr(self.get_queryset(), name)(*args, **kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/safedelete/queryset.py", line 72, in filter backend_1 | return super(SafeDeleteQueryset, queryset).filter(*args, **kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 941, in filter backend_1 | return self._filter_or_exclude(False, args, kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 961, in _filter_or_exclude backend_1 | clone._filter_or_exclude_inplace(negate, args, kwargs) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 968, in _filter_or_exclude_inplace backend_1 | self._query.add_q(Q(*args, **kwargs)) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1391, in add_q backend_1 | clause, _ = self._add_q(q_object, self.used_aliases) backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1413, in _add_q 
backend_1 | split_subq=split_subq, check_filterable=check_filterable, backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1281, in build_filter backend_1 | arg, value = filter_expr backend_1 | TypeError: cannot unpack non-iterable MatchAll object | Likely you implement the wrong Q, you import this with: from django.db.models import Q @permission_classes((AllowAny,)) class ClipartViewSet(viewsets.GenericViewSet): serializer_class = ClipartSerializer queryset = Clipart.objects.filter(is_active=True).all() def list(self, request, **kwargs): qs = Clipart.objects.filter(Q(name='Apes') | Q(name='Dog')) return super(ClipartViewSet, self).list(self, request, **kwargs) that being said, you can simplify the query with an __in lookup [Django-doc]: @permission_classes((AllowAny,)) class ClipartViewSet(viewsets.GenericViewSet): serializer_class = ClipartSerializer queryset = Clipart.objects.filter(is_active=True).all() def list(self, request, **kwargs): # Some extra logic qs = Clipart.objects.filter(name__in=('Apes', 'Dog')) print(qs) return super(ClipartViewSet, self).list(self, request, **kwargs) | 2 | 3 |
76,779,580 | 2023-7-27 | https://stackoverflow.com/questions/76779580/dataframe-max-matching-two-columns | I have a DataFrame and I am looking for a way to find the maximum number of uniquely matched pairs across two columns. If for example I limit my DataFrame to only these two columns: X Y 1 a 1 b 2 c 2 d 3 b 3 a 4 c 4 d 4 e 5 e 5 c The result should be: X Y 1 a 2 c 3 b 4 d 5 e So if I match 1 & a, then I cannot use 1 for column X anymore, nor a for Y anymore. The difficult part is to match the maximum number of pairs possible with this rule. Many thanks. | I would use a linear_sum_assignment on the crosstab: from scipy.optimize import linear_sum_assignment tmp = pd.crosstab(df['X'], df['Y']) idx, col = linear_sum_assignment(tmp, maximize=True) out = pd.DataFrame({'X': tmp.index[idx], 'Y': tmp.columns[col]}) Output: X Y 0 1 a 1 2 c 2 3 b 3 4 d 4 5 e Intermediate crosstab (tmp): Y a b c d e X 1 1 1 0 0 0 2 0 0 1 1 0 3 1 1 0 0 0 4 0 0 1 1 1 5 0 0 1 0 1 Other example # df: here we removed the 3/b and 5/e pairs X Y 0 1 a 1 1 b 2 2 c 3 2 d 4 3 a 5 4 c 6 4 d 7 4 e 8 5 c # out X Y 0 1 b 1 2 d 2 3 a 3 4 e 4 5 c | 3 | 2 |
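The answer's snippet, made runnable end to end with the question's sample data (pandas import added); the output should reproduce the pairing shown above:

```python
import pandas as pd
from scipy.optimize import linear_sum_assignment

df = pd.DataFrame({
    "X": [1, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5],
    "Y": ["a", "b", "c", "d", "b", "a", "c", "d", "e", "e", "c"],
})

counts = pd.crosstab(df["X"], df["Y"])                # X-by-Y co-occurrence matrix
rows, cols = linear_sum_assignment(counts, maximize=True)
pairs = pd.DataFrame({"X": counts.index[rows], "Y": counts.columns[cols]})
print(pairs)
```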
76,778,624 | 2023-7-27 | https://stackoverflow.com/questions/76778624/attributeerror-module-plotly-tools-has-no-attribute-set-credentials-file | I am following an introduction tutorial for plotly and I am facing a problem, using Jupyter. This is the code below: import numpy as np import pandas as pd import cufflinks as cf import chart_studio.plotly as py import plotly.tools as tls import plotly.graph_objs as go tls.set_credentials_file(username = "xxx", api_key = "yyyy") And the error I am getting. -------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[21], line 1 ----> 1 tls.set_credentials_file(username = "xxxx", api_key = "yyy") AttributeError: module 'plotly.tools' has no attribute 'set_credentials_file' All the other questions I am seeing on SO are from some years ago and I am afraid the documentation of the package is updated. Can someone help? | As it was mentioned by @Sotos in the comment, the set_credentials_file file is no longer available in plotly.tools. You should use the chart_studio instead: https://pypi.org/project/chart-studio/ and it should work for you: import chart_studio.tools as tls tls.set_credentials_file(username='xxx', api_key='yyyy') Also have a look at the current Getting Started doc: https://plotly.com/python/getting-started-with-chart-studio/. | 2 | 3 |
76,776,186 | 2023-7-27 | https://stackoverflow.com/questions/76776186/even-with-return-dtype-in-map-rows-i-get-could-not-determine-output-type-i | This works: f = lambda t: {'x': 1, 'y': "abc"} df = pl.DataFrame( {'c': [f(0)] } ) shape: (1, 1) ┌───────────┐ │ c │ │ --- │ │ struct[2] │ ╞═══════════╡ │ {1,"abc"} │ └───────────┘ >>> df.schema # Schema([('c', Struct({'x': Int64, 'y': String}))]) While using map_rows with same f and schema dies with: RuntimeError: BindingsError: "Could not determine output type" (df.with_columns('c') .map_rows(f, return_dtype= pl.Struct([pl.Field('x', pl.Int64), pl.Field('y', pl.String)]) ) ) | There are really two different apply methods that are relevant here. There's DataFrame.apply and there's Expr.apply. When you do: df.with_columns('c') The output of that is a DataFrame that includes c. The with_columns isn't doing anything except making sure c exists and throwing an error if it doesn't. Therefore when you chain an apply to that you're invoking the DataFrame.apply method. That method is equivalent to doing a for loop on df.rows where the input to the function is a tuple of the values of each column one row at a time. It's expecting the output of the function to be a tuple so if your f was: f1 = lambda t: ({'x': 1, 'y': "abc"},) then doing df.apply(f1) would work. On the other hand if you invoke the Expr.apply by doing df.with_columns(pl.col('c').apply(f)) then that works too because the Expr.apply is equivalent to running a for loop on all the values of the column from which it was invoked. Obligatory anti-apply rant: If you're using apply in polars then you're not getting any benefit from the library. At that point, you're just doing a python for-loop with all of the inherent slowness. | 3 | 0 |
76,777,484 | 2023-7-27 | https://stackoverflow.com/questions/76777484/find-the-sum-of-values-in-rows-of-one-column-for-where-the-other-column-has-nan | I have a dataframe with columns A and B. Column A has non continuous data where some of the rows are NAN and B has continuous data. I would like to create a third column where for each set of A rows with NAN it will have the sum of values in those same rows in B + the next valid value in B. All other values in C should be NAN for NAN in A AND the value of B for rows following a valid number in A. Example: data = { 'A': [1, 1, None, None, 2, 5, None, None,3 ,4, 3, None , 5], 'B': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130]} Everything works fine except for the rows where I need the sum of B + next valid value in B. I use the following code. I have this code but is seems it's a mess by now. `result = df.groupby(df['A'].isnull().cumsum())['B'].sum().reset_index() df_result = pd.DataFrame({'C': result['Pumped']}) df_result.loc[1:, 'C'] -= result.loc[0, 'Pumped'] df.loc[~mask, 'C'] = df.loc[~mask, 'Pumped'] valid_rows_after_nan = df['dWL'].notnull() & mask.shift(1).fillna(False) df.loc[valid_rows_after_nan, 'C'] = df_result print(df)` I would like the output to look like this: `data = { 'A': [1, 1, None, None, 2, 5, None, None,3 ,4, 3, None , 5], 'B': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130], 'C': [10, 20, None, None, 120, 60, None, None, 240, 100, 110, None, 5] } | A simple version using groupby.transform: # identify the non-NA and reverse m = df.loc[::-1, 'A'].notna() # group the preceding NA, sum, mask where NA df['C'] = df.groupby(m.cumsum())['B'].transform('sum').where(m) Output: A B C 0 1.0 10 10.0 1 1.0 20 20.0 2 NaN 30 NaN 3 NaN 40 NaN 4 2.0 50 120.0 5 5.0 60 60.0 6 NaN 70 NaN 7 NaN 80 NaN 8 3.0 90 240.0 9 4.0 100 100.0 10 3.0 110 110.0 11 NaN 120 NaN 12 5.0 130 250.0 | 2 | 4 |
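The same `groupby.transform` recipe as the answer, packaged with the question's data so it can be run as-is:

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 1, None, None, 2, 5, None, None, 3, 4, 3, None, 5],
    "B": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130],
})

# Walk the rows bottom-up so each run of NaNs is grouped with the next valid A below it
mask = df.loc[::-1, "A"].notna()
df["C"] = df.groupby(mask.cumsum())["B"].transform("sum").where(mask)
print(df)
```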
76,775,248 | 2023-7-26 | https://stackoverflow.com/questions/76775248/argumentparser-add-argument-raises-attributeerror-str-object-has-no-attribute | When I run my program, I immediately get an AttributeError when it's setting up the argument parser: Traceback (most recent call last): File "blah.py", line 55, in <module> parser.add_argument('--select-equal', '--equal', nargs=2, metavar=[ 'COLUMN', 'VALUE' ], action='append', dest='filter_equality_keys_and_values', default=[], help='Only return rows for which this column is exactly equal to this value.') File "…/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/argparse.py", line 1346, in add_argument chars = self.prefix_chars AttributeError: 'str' object has no attribute 'prefix_chars' Well, that sounds reasonable. strs indeed don't have a prefix_chars attribute. Clearly the exception message is correct. Except I can't parse arguments, or even get to parsing arguments, because of this exception. Why is this throwing an exception? Why is it trying to get prefix_chars of a str in the first place? The exception is coming from inside the Python module; is this a bug in Python or something I'm doing wrong? | It is, in fact, something I was doing wrong. The clue is that the method is trying to access an attribute of an object that doesn't have such an attribute—and it's trying to do so on self. The first argument to my call to add_argument is indeed a string ('--select-equal'). The only way that string could end up being self is if I were calling add_argument on the ArgumentParser class and not an ArgumentParser object. And indeed, scrolling up a couple lines in my own code, there's the problem: import argparse parser = argparse.ArgumentParser I need to instantiate ArgumentParser, which means calling the class (ArgumentParser()), not just stash the class in a variable. For want of a (), the exception was thrown. | 3 | 6 |
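A corrected version of the snippet for reference; the only change that matters is the `()` after `ArgumentParser`, and the `'name'`/`'Alice'` values passed to `parse_args` are made-up demo input:

```python
import argparse

parser = argparse.ArgumentParser()  # instantiate the class, don't just alias it
parser.add_argument(
    "--select-equal", "--equal",
    nargs=2, metavar=("COLUMN", "VALUE"), action="append",
    dest="filter_equality_keys_and_values", default=[],
    help="Only return rows for which this column is exactly equal to this value.",
)

args = parser.parse_args(["--select-equal", "name", "Alice"])
print(args.filter_equality_keys_and_values)  # [['name', 'Alice']]
```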
76,772,127 | 2023-7-26 | https://stackoverflow.com/questions/76772127/problem-with-using-python-via-rs-reticulate-in-docker-container | I create a Docker image based on rocker/shinyverse with the Dockerfile: # File: Dockerfile FROM rocker/shiny-verse:4.2.2 RUN echo "apt-get start" RUN apt-get update && apt-get install -y \ python3 \ python3-pip # install R packages RUN R -e "install.packages('remotes')" RUN R -e "install.packages('reticulate')" RUN R -e "install.packages('tidyverse')" # Install Python packages datetime and zeep RUN python3 -m pip install datetime zeep # Set the environment variable to use Python3 ENV RETICULATE_PYTHON /usr/bin/python3 Then I have the simple R file: library("tidyverse") library(reticulate) # Call a simple Python command to calculate 3*5 py_run_string("z = 3+4") py$z %>% print() After I launched the container, I want to run this R script with the shell command: docker exec shiny_new Rscript /home/shiny/ETL/reticulate_test.R but I get the following error: Error: Python shared library not found, Python bindings not loaded. Use reticulate::install_miniconda() if you'd like to install a Miniconda Python environment. Execution halted It fails when executing the Python call. I am unsure how to set up Python in such a way that I can use Python code via reticulate in my R script. Does anybody have an idea where I go wrong in setting up the Docker image? | Following this issue, it seems pyenv installs Python without the Python shared library. You should try adding reticulate::install_python(), which wraps pyenv and sets the --enable-shared option, or set the --enable-shared variable as suggested here | 3 | 1 |
76,769,872 | 2023-7-26 | https://stackoverflow.com/questions/76769872/how-to-provide-embedding-function-to-a-langchain-vector-store | I am trying to get a simple vector store (chromadb) to embed texts using the add_texts method with langchain, however I get the following error despite successfully using the OpenAI package with a different simple langchain scenario: ValueError: You must provide embeddings or a function to compute them Code: from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma db = Chroma() texts = [ """ One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you. """, """ Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. """, ] db.add_texts(texts, embedding_function=OpenAIEmbeddings()) | embedding_function need to be passed when you construct the object of Chroma. source : Chroma class Class Code so your code would be: from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma db = Chroma(embedding_function=OpenAIEmbeddings()) texts = [ """ One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you. """, """ Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds. """, ] db.add_texts(texts) and result you will see as ['58f12150-2bc4-11ee-9ff5-ac87a32b530e', '58f12240-2bc4-11ee-9ff5-ac87a32b530e'] | 4 | 3 |
76,771,208 | 2023-7-26 | https://stackoverflow.com/questions/76771208/warningrootcan-not-find-chromedriver-for-currently-installed-chrome-version | I use selenium in my python project. Few days ago program worked correctly but today I ran my script and it caused this warning. Script: from selenium import webdriver import chromedriver_autoinstaller chromedriver_autoinstaller.install() options = webdriver.ChromeOptions() prefs = {"profile.default_content_setting_values.notifications": 1} options.add_argument('--ignore-certificate-errors') options.add_argument('--ignore-ssl-errors') options.add_experimental_option("prefs", prefs) options.add_argument("--disable-blink-features=AutomationControlled") options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option("useAutomationExtension", False) driver = webdriver.Chrome(options=options) driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})") driver.execute_cdp_cmd("Network.setUserAgentOverride", {"userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"}) I tried reinstall chromedriver but it didn't help | Selenium Manager is now fully included with selenium 4.10.0, so this is all you need: from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service() options = webdriver.ChromeOptions() driver = webdriver.Chrome(service=service, options=options) # ... driver.quit() If the driver isn't found on your system PATH, Selenium Manager will automatically download it. If you're wondering why you're now seeing this error, it's because https://chromedriver.chromium.org/downloads only goes up to version 114 due to driver restructuring by the Chromium Team for the new Chrome-for-Testing. | 4 | 6 |
76,770,505 | 2023-7-26 | https://stackoverflow.com/questions/76770505/numpy-finding-multiple-occurrence-in-an-array-by-index | Given the following array: array = [-1, -1, -1, -1, -1, -1, 3, 3, -1, 3, -1, -1, 2, 2, -1, -1, 1, -1] indexes 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 I need to find the indexes where the same number appears. In this example this would return a list of lists like this: list(list(), list(16), list(12, 13), list(6, 7, 9), list() etc...) 0 1 \ 2 3 4 ^ \ \ \ the index in the array at which "1" appears \ \ the numbers in the array how would one do this in numpy? the number 1 appears at index 16 the number 2 appears at indexes 12, 13 etc. NOTES based on comments: -1 can be ignored, i'm only interested in the rest array has ~50 elements with values up to int(500) this function will be called 6000+ times. | For a solution in O(n) time*, use a dictionary to collect the indices: # collect positions per value for each item d = {} for i, x in enumerate(array): d.setdefault(x, []).append(i) # sort the output (optional) out = {k: d[k] for k in sorted(d)} Output: {-1: [0, 1, 2, 3, 4, 5, 8, 10, 11, 14, 15, 17], 1: [16], 2: [12, 13], 3: [6, 7, 9]} * + O(k*log(k)) where k is the number of unique values if you need a sorted output For a list of lists: out = [d.get(k, []) for k in range(min(d), max(d)+1)] # or for only positive values out = [d.get(k, []) for k in range(1, max(d)+1)] Output: [[0, 1, 2, 3, 4, 5, 8, 10, 11, 14, 15, 17], [], [16], [12, 13], [6, 7, 9]] alternative Or a very simple approach if you pre initialize the output: out = [[] for i in range(max(array)+1)] for i, x in enumerate(array): out[x].append(i) comparison of all approaches pure python is the fastest here. The initial array was generated using np.random.randint(0, k, size=n).tolist() where n is the length of the array and k the maximum value in the array. With k=4: With k=100: We can now see the quadratic behavior of @TalhaTayyab/@Stitt/@PaulS approaches. With k=10_000: We can note that numpy is a bit faster for relatively small arrays (when the values are likely unique). | 4 | 4 |
76,769,098 | 2023-7-26 | https://stackoverflow.com/questions/76769098/how-to-sort-and-group-on-column-using-pandas-loop | from a data frame df1 = Area Sequence X Y A 2 604582.25 320710 A 1 604590.25 320704.75 A 3 604579.25 320710 B 2 536584.47 176977.83 B 1 536570 176996.43 C 1 509202.13 307995.99 C 2 509205.3 307951.24 Need to generate into df1 = Area XY_by_sequence A 604590.25 320704.75 , 604582.25 320710 , 604579.25 320710 B 536570 176996.43 , 536584.47 176977.83 C 509202.13 307995.99 , 509205.3 307951.24 | Try: x = ( df.set_index(["Area", "Sequence"]) .stack() .groupby(level=[0, 1]) .agg(lambda x: " ".join(map(str, x))) .groupby(level=0) .agg(", ".join) .reset_index(name="XY_by_sequence") ) print(x) Prints: Area XY_by_sequence 0 A 604590.25 320704.75, 604582.25 320710.0, 604579.25 320710.0 1 B 536570.0 176996.43, 536584.47 176977.83 2 C 509202.13 307995.99, 509205.3 307951.24 | 2 | 7 |
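The answer above assumes df already exists; one assumed way to construct the example frame from the question before running the answer's chain is shown below. The per-Area ordering by Sequence falls out because groupby sorts its group keys by default.
import pandas as pd

# assumed reconstruction of the question's example frame
df = pd.DataFrame({
    "Area": ["A", "A", "A", "B", "B", "C", "C"],
    "Sequence": [2, 1, 3, 2, 1, 1, 2],
    "X": [604582.25, 604590.25, 604579.25, 536584.47, 536570.0, 509202.13, 509205.3],
    "Y": [320710.0, 320704.75, 320710.0, 176977.83, 176996.43, 307995.99, 307951.24],
})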
76,767,219 | 2023-7-26 | https://stackoverflow.com/questions/76767219/error-ignored-the-following-versions-that-require-a-different-python-version | I need to migrate a python project from a Ubuntu machine to my local Windows laptop. In Ubuntu I am working in a virtual environment with python version 3.10.6. I created a requirements.txt using pip freeze > requirements.txt and pushed it together with my code to remote repository. Then I pulled the repository on my local Windows machine and tried to set up a virtual environment using the requirements.txt So I created a conda environment with python version 3.10.12 (I know it's not exactly the same, but it was a hassle so I figured it would be okay) and in that I created a virtual environment using python3 -m venv myenv I activate it using myenv\Scripts\activate.bat. So far so good. But now I try to install the packages using pip install -r requirements.txtand it seems to work until at the very end I get ERROR: Ignored the following versions that require a different python version: 0.23.0 Requires-Python >=3.6, <3.10; 0.36.0 Requires-Python >=3.6,<3.10; 0.37.0 Requires-Python >=3.7,<3.10; 0.52.0 Requires-Python >=3.6,<3.9; 0.52.0rc3 Requires-Python >=3.6,<3.9; 0.53.0 Requires-Python >=3.6,<3.10; 0.53.0rc1.post1 Requires-Python >=3.6,<3.10; 0.53.0rc2 Requires-Python >=3.6,<3.10; 0.53.0rc3 Requires-Python >=3.6,<3.10; 0.53.1 Requires-Python >=3.6,<3.10; 0.54.0 Requires-Python >=3.7,<3.10; 0.54.0rc2 Requires-Python >=3.7,<3.10; 0.54.0rc3 Requires-Python >=3.7,<3.10; 0.54.1 Requires-Python >=3.7,<3.10; 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10 ERROR: Could not find a version that satisfies the requirement tensorflow-io-gcs-filesystem==0.32.0 (from versions: 0.23.1, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0) ERROR: No matching distribution found for tensorflow-io-gcs-filesystem==0.32.0 And running pip list afterwards reveals that no packages have been installed: Package Version ---------- ------- pip 23.0.1 setuptools 65.5.0 I don't understand what's going on here, the message tells me I need a python version <3.10 (<3.9 even!), but the code ran under python3.10, I created the requirements.txt there! I tried the same with virtual conda environments with python versions 3.8 and 3.11, but both times I got very similar (if not the same) error messages. What should I do to install the packages from the requirements.txt? | Your error is not Ignored the following versions But Could not find a version that satisfies the requirement tensorflow-io-gcs-filesystem==0.32.0 (from versions: 0.23.1, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0) The first line is simply the output of pip trying to figure out the list of versions that are compatible with your python version and OS, which is printed in the second line in the parantheses. If you look at the list from versions: 0.23.1, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0 you will notice that 0.32 is not on there. The reason is, that this package does not provide windows whls for that version. 
Providing them is an open issue on GitHub. For you this leaves the following options: 1) downgrade tensorflow-io to 0.31 and therefore also tensorflow; 2) switch operating system, either using a dual-boot or WSL on your laptop; 3) try to build tensorflow-io from source, however if that were easily done on Windows they would probably have released whl files. 1 is the least amount of work, but you will have to check if your project is compatible; 2 is most likely to succeed and would ensure future compatibility between your two machines; 3 is unlikely to be achieved easily | 3 | 3 |
76,744,193 | 2023-7-22 | https://stackoverflow.com/questions/76744193/type-hint-for-attribute-that-might-not-exist | How do I do a type hint for attribute that might not exist on the object in Python? For example: class X: __slots__ = ('attr',) attr: int # either int or doesn't exist - what do I put here? So, concretely: >>> x = X() >>> x.attr Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'X' object has no attribute 'attr' >>> x.attr = 1 >>> x.attr 1 ...but there are also cases where __slots__ isn't involved. | The answer currently appears to be "You can't". I've started a discussion thread for this in the Python Typing repo, which may get more visibility. | 3 | 1 |
76,758,415 | 2023-7-24 | https://stackoverflow.com/questions/76758415/pydantic-issue-for-tuple-length | I have the following model in pydantic (Version 2.0.3) from typing import Tuple from pydantic import BaseModel class Model(BaseModel): test_field: Tuple[int] But when I enter model = Model(test_field=(1,2)) I get as error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/Users/tobi/Documents/scraiber/z_legacy/fastapi_test_app/venv/lib/python3.10/site-packages/pydantic/main.py", line 150, in __init__ __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) pydantic_core._pydantic_core.ValidationError: 1 validation error for Model test_field Tuple should have at most 1 item after validation, not 2 [type=too_long, input_value=(1, 2), input_type=tuple] For further information visit https://errors.pydantic.dev/2.0.3/v/too_long Do you know how I can fix that? | Following @Tim Robert's Answer, the linked PR suggests using the Ellipsis ... is the syntax you're after! https://github.com/pydantic/pydantic/pull/512/files class Model(BaseModel): test_field: Tuple[int, ...] >>> Model(test_field=(1,2)) Model(test_field=(1, 2)) Additionally, and though I don't think it's really advisable (prefer codegen or reconsider the design), you can generate and expand given an exact count of the fields count = 5 class Model_Five(BaseModel): test_field: Tuple[*([int]*count)] >>> Model_Five(test_field=(1,2,3,4,5)) Model_Five(test_field=(1, 2, 3, 4, 5)) >>> Model_Five(test_field=(1,2,3,4)) [..] omitted test_field.4 Field required [type=missing, input_value=(1, 2, 3, 4), input_type=tuple] For further information visit https://docs.pydantic.dev/dev/errors/validation_errors/#missing Finally, there might be cases where you need a longer form like *(cls for _ in range(count)) (or the even more troublesome for sustainability *(some_class_factory() for _ in range(count))) to expressly avoid having the inner values refer to the same object | 6 | 9 |
76,750,445 | 2023-7-23 | https://stackoverflow.com/questions/76750445/filling-date-gaps-with-polars | I have a problem I'm trying to solve but can't figure it out. I have something similar to this table: df = pl.from_repr(""" ┌─────┬────────────┬───────┐ │ id ┆ date ┆ sales │ │ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ f64 │ ╞═════╪════════════╪═══════╡ │ 1 ┆ 2023-01-01 ┆ 10.0 │ │ 1 ┆ 2023-02-01 ┆ 20.0 │ │ 1 ┆ 2023-03-01 ┆ 30.0 │ │ 1 ┆ 2023-05-01 ┆ 40.0 │ │ 2 ┆ 2023-02-01 ┆ 50.0 │ │ 2 ┆ 2023-03-01 ┆ 60.6 │ │ 2 ┆ 2023-04-01 ┆ 70.2 │ │ 3 ┆ 2023-01-01 ┆ 80.5 │ │ 3 ┆ 2023-02-01 ┆ 90.0 │ └─────┴────────────┴───────┘ """) as you can see, for each id i have dates and sales. I want to get for every id all dates from the minimum date in the data frame up to the maximum(included). in addition, i want to fill in the sales column with 0 and the id column with the matching id, so it looks like this: ┌─────┬────────────┬───────┐ │ id ┆ date ┆ sales │ │ --- ┆ --- ┆ --- │ │ i32 ┆ date ┆ f64 │ ╞═════╪════════════╪═══════╡ │ 1 ┆ 2023-01-01 ┆ 10.0 │ │ 1 ┆ 2023-01-02 ┆ 20.0 │ │ 1 ┆ 2023-01-03 ┆ 30.0 │ │ 1 ┆ 2023-01-04 ┆ 0.0 │ │ 1 ┆ 2023-01-05 ┆ 40.0 │ │ 2 ┆ 2023-01-01 ┆ 0.0 │ │ 2 ┆ 2023-01-02 ┆ 50.0 │ │ 2 ┆ 2023-01-03 ┆ 60.6 │ │ 2 ┆ 2023-01-03 ┆ 70.2 │ │ 2 ┆ 2023-01-04 ┆ 0.0 │ │ 2 ┆ 2023-01-05 ┆ 0.0 │ │ 3 ┆ 2023-01-01 ┆ 80.5 │ │ 3 ┆ 2023-01-02 ┆ 90.0 │ │ 3 ┆ 2023-01-03 ┆ 0.0 │ │ 3 ┆ 2023-01-04 ┆ 0.0 │ │ 3 ┆ 2023-01-05 ┆ 0.0 │ └─────┴────────────┴───────┘ so on and so forth. i've tried to create a new dataframe by using the pl.date_range function and then to join it against the main data, by using outer or cross, but to no avail, since it doesn't compute against each id. maybe you have any ideas on how to go about it? many thanks in advance for any input! | Try: def fn(x, r): print(pl.DataFrame({"id": x["id"][0], "date": r})) print(x) return ( pl.DataFrame({"id": x["id"][0], "date": r}) .join(x, on="date", how="left")[["id", "date", "sales"]] .with_columns(pl.col("sales").fill_null(strategy="zero")) ) # convert to date df = df.with_columns(pl.col("date").str.to_date("%Y-%d-%m")) # get min, max date mn, mx = df["date"].min(), df["date"].max() # construct the range r = pl.date_range(mn, mx, "1d", eager=True) # group by "id" and fill the missing dates df = df.group_by("id", maintain_order=True).map_groups(lambda x: fn(x, r)) with pl.Config(tbl_rows=-1): print(df) Prints: shape: (15, 3) ┌─────┬─────────────────────┬───────┐ │ id ┆ date ┆ sales │ │ --- ┆ --- ┆ --- │ │ i64 ┆ datetime[μs] ┆ f64 │ ╞═════╪═════════════════════╪═══════╡ │ 1 ┆ 2023-01-01 00:00:00 ┆ 10.0 │ │ 1 ┆ 2023-01-02 00:00:00 ┆ 20.0 │ │ 1 ┆ 2023-01-03 00:00:00 ┆ 30.0 │ │ 1 ┆ 2023-01-04 00:00:00 ┆ 0.0 │ │ 1 ┆ 2023-01-05 00:00:00 ┆ 40.0 │ │ 2 ┆ 2023-01-01 00:00:00 ┆ 0.0 │ │ 2 ┆ 2023-01-02 00:00:00 ┆ 50.0 │ │ 2 ┆ 2023-01-03 00:00:00 ┆ 60.6 │ │ 2 ┆ 2023-01-04 00:00:00 ┆ 70.2 │ │ 2 ┆ 2023-01-05 00:00:00 ┆ 0.0 │ │ 3 ┆ 2023-01-01 00:00:00 ┆ 80.5 │ │ 3 ┆ 2023-01-02 00:00:00 ┆ 90.0 │ │ 3 ┆ 2023-01-03 00:00:00 ┆ 0.0 │ │ 3 ┆ 2023-01-04 00:00:00 ┆ 0.0 │ │ 3 ┆ 2023-01-05 00:00:00 ┆ 0.0 │ └─────┴─────────────────────┴───────┘ | 3 | 1 |
76,762,540 | 2023-7-25 | https://stackoverflow.com/questions/76762540/pandas-eval-replacement-in-polars | Suppose I have an expression like "col3 = col2 + col1". In pandas we can directly call pandas.DataFrame.eval(), but in polars I cannot find such a method. Polars has series.eval, but that does not help, as I want to evaluate a user-given expression on a dataframe. | Accepting strings You can pass SQL to pl.sql_expr. df = pl.DataFrame({ "col1": [1, 2], "col2": [1, 2], }) df.select( pl.sql_expr("col2 + col1 as col3") ) Or you can run a complete SQL query with pl.sql pl.sql("SELECT col2 + col1 as col3 FROM df").collect() shape: (2, 1) ┌──────┐ │ col3 │ │ --- │ │ i64 │ ╞══════╡ │ 2 │ │ 4 │ └──────┘ Accept expressions directly "I want to evaluate a user-given expression on a dataframe." I would accept a pl.Expr directly instead of strings. This gives more type safety than strings and probably also a better user experience, as you can have autocomplete and the IDE may show available methods/arguments. | 4 | 4 |
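A short sketch of the "accept expressions directly" option; the helper name apply_user_expr is made up for illustration, and only pl.col, alias and DataFrame.select are used:
import polars as pl

def apply_user_expr(df: pl.DataFrame, expr: pl.Expr) -> pl.DataFrame:
    # run a caller-supplied expression against the frame
    return df.select(expr)

df = pl.DataFrame({"col1": [1, 2], "col2": [1, 2]})
print(apply_user_expr(df, (pl.col("col1") + pl.col("col2")).alias("col3")))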
76,737,940 | 2023-7-21 | https://stackoverflow.com/questions/76737940/i-was-trying-to-build-an-django-project-it-shows-got-an-unexpected-keyword-a | The project when I previously build on my laptop it works but when I was trying to build this project on my pc it shows docker.errors.DockerException: Error while fetching server API version: HTTPConnection.request() got an unexpected keyword argument 'chunked'. My docker-compose.yml file looks like version: "3.3" services: db: image: postgres:14.0-alpine3.14 restart: always environment: POSTGRES_DB: ${DB_NAME} POSTGRES_USER: ${DB_USER} POSTGRES_PASSWORD: ${DB_PASSWORD} volumes: - korean_mbc_appointment:/var/lib/postgresql/data web: build: ./ restart: always command: "python manage.py runserver 0.0.0.0:8000" volumes: - ./:/code expose: - 421 # ports: # - "420:8000" depends_on: - db nginx: build: ./nginx restart: always volumes: - ./static:/code/static - ./media:/code/media ports: - "420:80" depends_on: - web volumes: korean_mbc_appointment: The issue when I was trying to build the project looks like sudo docker-compose -f docker-compose.yml up -d --build Traceback (most recent call last): File "/usr/lib/python3/dist-packages/docker/api/client.py", line 214, in _retrieve_server_version return self.version(api_version=False)["ApiVersion"] File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version return self._result(self._get(url), json=True) File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner return f(self, *args, **kwargs) File "/usr/lib/python3/dist-packages/docker/api/client.py", line 237, in _get return self.get(url, **self._set_request_timeout(kwargs)) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 602, in get return self.request("GET", url, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 790, in urlopen response = self._make_request( File "/usr/local/lib/python3.10/dist-packages/urllib3/connectionpool.py", line 496, in _make_request conn.request( TypeError: HTTPConnection.request() got an unexpected keyword argument 'chunked' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/bin/docker-compose", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.10/dist-packages/compose/cli/main.py", line 81, in main command_func() File "/usr/local/lib/python3.10/dist-packages/compose/cli/main.py", line 200, in perform_command project = project_from_options('.', options) File "/usr/local/lib/python3.10/dist-packages/compose/cli/command.py", line 60, in project_from_options return get_project( File "/usr/local/lib/python3.10/dist-packages/compose/cli/command.py", line 152, in get_project client = get_client( File "/usr/local/lib/python3.10/dist-packages/compose/cli/docker_client.py", line 41, in get_client client = docker_client( File "/usr/local/lib/python3.10/dist-packages/compose/cli/docker_client.py", line 170, in docker_client client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs) File "/usr/lib/python3/dist-packages/docker/api/client.py", line 197, in __init__ self._version = 
self._retrieve_server_version() File "/usr/lib/python3/dist-packages/docker/api/client.py", line 221, in _retrieve_server_version raise DockerException( docker.errors.DockerException: Error while fetching server API version: HTTPConnection.request() got an unexpected keyword argument 'chunked' | My operating system is Ubuntu 22 and running sudo apt install docker-compose-v2 did not work for me. The following commands work for me. Uninstall the old docker-compose package: sudo apt remove docker-compose Install the docker-compose-plugin package: sudo apt install docker-compose-plugin After installing this you can no longer run the docker-compose <action> command. To run Compose you have to use the following command instead: docker compose <action> | 5 | 0 |
76,736,361 | 2023-7-21 | https://stackoverflow.com/questions/76736361/llama-qlora-error-target-modules-query-key-value-dense-dense-h-to-4h | I tried to load Llama-2-7b-hf LLM with QLora with the following code: model_id = "meta-llama/Llama-2-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True) # I have permissions. model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config, device_map="auto", use_auth_token=True) model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) config = LoraConfig( r=8, lora_alpha=32, target_modules=[ "query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h", ], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, config) # got the error here I got this error: File "/home/<my_username>/.local/lib/python3.10/site-packages/peft/tuners/lora.py", line 333, in _find_and_replace raise ValueError( ValueError: Target modules ['query_key_value', 'dense', 'dense_h_to_4h', 'dense_4h_to_h'] not found in the base model. Please check the target modules and try again. How can I solve this? Thank you! | The strings in target_modules differ from model to model. You can debug by loading the model with model = AutoModelForCausalLM.from_pretrained(model_id), inspecting the model to find the names of its linear layers, and passing those names as target_modules. | 2 | 2 |
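A sketch of the debugging step the answer describes, listing the leaf names of the model's Linear layers so valid target_modules can be picked. It assumes the checkpoint is downloadable with your token and that loading the full 7B model fits in memory; the exact names that come out (e.g. "q_proj", "v_proj") are not guaranteed here and should be read from the printout.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", use_auth_token=True)

# collect the last path component of every linear layer's module name
linear_names = set()
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        linear_names.add(name.split(".")[-1])

print(linear_names)  # use (a subset of) these as target_modules in LoraConfig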
76,734,333 | 2023-7-20 | https://stackoverflow.com/questions/76734333/pydantic-v2-field-validator-values-argument-equivalent | I'm migrating from v1 to v2 of Pydantic and I'm attempting to replace all uses of the deprecated @validator with @field_validator. Previously, I was using the values argument to my validator function to reference the values of other previously validated fields. As the v1 docs say: You can also add any subset of the following arguments to the signature (the names must match): values: a dict containing the name-to-value mapping of any previously-validated fields It seems this values argument is no longer passed as the @field_validator signature has changed. However, the migration docs don't mention a values equivalent in v2 and the v2 validator documentation page has not yet been updated for v2.0. Does anyone know the preferred approach for v2? V1 validator: @validator('password2') def passwords_match(cls, v, values, **kwargs): if 'password1' in values and v != values['password1']: raise ValueError('passwords do not match') return v | The current version of the Pydantic v2 documentation is actually up to date for the field validators section in terms of what the signature of your validation method must/can look like. the second argument is the field value to validate; it can be named as you please the third argument is an instance of pydantic.ValidationInfo [...] If you want to access values from another field inside a @field_validator, this may be possible using ValidationInfo.data, which is a dict of field name to field value. Validation is done in the order fields are defined, so you have to be careful when using ValidationInfo.data to not access a field that has not yet been validated/populated [...] (Side note: You can look up a bit more information about the ValidationData protocol in the Pydantic Core API reference, although it is a bit terse and could do with some cross-references.) The old way (v1) Say you had the following code in Pydantic v1: from typing import Any from pydantic import BaseModel, ValidationError, validator class UserModel(BaseModel): ... password1: str password2: str @validator("password2") def passwords_match(cls, v: str, values: dict[str, Any]) -> str: if "password1" in values and v != values["password1"]: raise ValueError("passwords do not match") return v try: UserModel(password1="abc", password2="xyz") except ValidationError as err: print(err.json(indent=4)) Output: [ { "loc": [ "password2" ], "msg": "passwords do not match", "type": "value_error" } ] The new way (v2) using @field_validator You would have to rewrite the v1 code above like this in v2: from pydantic import BaseModel, ValidationError, ValidationInfo, field_validator class UserModel(BaseModel): ... password1: str password2: str @field_validator("password2") def passwords_match(cls, v: str, info: ValidationInfo) -> str: if "password1" in info.data and v != info.data["password1"]: raise ValueError("passwords do not match") return v try: UserModel(password1="abc", password2="xyz") except ValidationError as err: print(err.json(indent=4)) v2 output: [ { "type": "value_error", "loc": [ "password2" ], "msg": "Value error, passwords do not match", "input": "xyz", "ctx": { "error": "passwords do not match" }, "url": "https://errors.pydantic.dev/2.6/v/value_error" } ] Another way (v2) using an annotated validator For the sake of completeness, Pydantic v2 offers a new way of validating fields, which is annotated validators. 
The code above could just as easily be written with an AfterValidator (for example) like this: from typing import Annotated from pydantic import AfterValidator, BaseModel, ValidationError, ValidationInfo def ensure_passwords_match(v: str, info: ValidationInfo) -> str: if "password1" in info.data and v != info.data["password1"]: raise ValueError("passwords do not match") return v class UserModel(BaseModel): ... password1: str password2: Annotated[str, AfterValidator(ensure_passwords_match)] try: UserModel(password1="abc", password2="xyz") except ValidationError as err: print(err.json(indent=4)) The output is exactly the same as in the @field_validator example. Background As you can see from the Pydantic core API docs linked above, annotated validator constructors take the same type of argument as the decorator returned by @field_validator, namely either a NoInfoValidatorFunction or a WithInfoValidatorFunction, so either a Callable[[Any], Any] or a Callable[[Any, ValidationInfo], Any]. (Strictly speaking the signature of the field_validator inner decorator is different because it technically can deal with more nuances like implicit classmethods etc., but it is designed to essentially deal with the same type of functions.) Key takeaway Field-specific validator functions should therefore always have either one parameter - the value to validate, or two parameters - the value and the ValidationInfo object. | 22 | 27 |
76,727,774 | 2023-7-20 | https://stackoverflow.com/questions/76727774/selenium-webdriver-chrome-115-stopped-working | I have Chrome 115.0.5790.99 installed on Windows, and I use Selenium 4.10.0. In my Python code, I call service = Service(ChromeDriverManager().install()) and it returns the error: ValueError: There is no such driver by url [sic] https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790. I use ChromeDriverManager().install() in order to ensure the use of last stable version of webdriver. How to solve the issue? My simple code: from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager import time # Install Webdriver service = Service(ChromeDriverManager().install()) # Create Driver Instance driver = webdriver.Chrome(service=service) # Get Web Page driver.get('https://www.crawler-test.com') time.sleep(5) driver.quit() Error output: Traceback (most recent call last): File "C:\Users\Administrator\Documents\...\test.py", line 7, in <module> service = Service(ChromeDriverManager().install()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\chrome.py", line 39, in install driver_path = self._get_driver_path(self.driver) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\manager.py", line 30, in _get_driver_path file = self._download_manager.download_file(driver.get_driver_download_url()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\drivers\chrome.py", line 40, in get_driver_download_url driver_version_to_download = self.get_driver_version_to_download() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\driver.py", line 51, in get_driver_version_to_download self._driver_to_download_version = self._version if self._version not in (None, "latest") else self.get_latest_release_version() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\drivers\chrome.py", line 62, in get_latest_release_version resp = self._http_client.get(url=latest_release_url) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\http.py", line 37, in get self.validate_response(resp) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\webdriver_manager\core\http.py", line 16, in validate_response raise ValueError(f"There is no such driver by url {resp.url}") ValueError: There is no such driver by url https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790 I tried the following but no success: to disable Chrome auto-update, but Chrome manages to update itself anyway (https://www.minitool.com/news/disable-automatic-chrome-updates.html and https://www.webnots.com/7-ways-to-disable-automatic-chrome-update-in-windows-and-mac); to install Chrome 114 and webdriver for version 114, than it works till Chrome get updated automatically; to follow instructions https://chromedriver.chromium.org/downloads/version-selection but when generating URL and running link https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790 I 
get the error No such object: chromedriver/LATEST_RELEASE_115.0.5790 How can I solve the issue until the webdriver for Chrome 115 is finally released at the download location? | Until the stable webdriver version 115 is released, the solution is to use the test webdriver and test Chrome accordingly. The steps are: uninstall the currently installed webdriver and Chrome from the system; find the Stable version of the webdriver and Chrome at Chrome for Testing availability and search for the Chrome and chromedriver binaries (the version of the webdriver and Chrome should be the same!); install Chrome (actually you just unzip it and put it in some folder, i.e.: C:\chrome-test-ver); add the folder C:\chrome-test-ver to the PATH environment variable; install chromedriver.exe (just unzip it and copy it to the Python folder, i.e.: C:\Users\Administrator\AppData\Local\Programs\Python\Python311); run your Python script with Selenium, and it should work. | 30 | 3 |
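If you prefer not to touch PATH or the Python folder, an alternative sketch is to point Selenium at the unzipped Chrome-for-Testing binaries explicitly. The two paths below are assumptions; adjust them to wherever you unzipped the files.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

options = Options()
options.binary_location = r"C:\chrome-test-ver\chrome.exe"                  # assumed Chrome-for-Testing path
service = Service(executable_path=r"C:\chrome-test-ver\chromedriver.exe")   # assumed chromedriver path

driver = webdriver.Chrome(service=service, options=options)
driver.get("https://www.crawler-test.com")
driver.quit()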
76,760,906 | 2023-7-25 | https://stackoverflow.com/questions/76760906/installing-mamba-on-a-machine-with-conda | I don't know if I have missed it, but this is not clear to me. I already have miniconda on my machine and I now want to install mamba. How should this be done? Am I supposed to download/run the correct mambaforge installer? Does this co-exist happily side by side with conda, or are you supposed to have just one of them on your system? Doing something like conda install mamba is not recommended. | Mamba is a replacement conda package manager. They do not co-exist happily, as they both rely on a base environment for their dependencies. The recommended way is to install mambaforge or micromamba separately as a replacement. Documentation for Mamba: https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html Documentation for Micromamba: https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html | 15 | 5 |
76,728,876 | 2023-7-20 | https://stackoverflow.com/questions/76728876/sqlalchemy-flask-and-cross-contamination | I have taken over a flask app, but it does not use the flask-sqlalchemy plugin. I am having a hard time wrapping my head around how it's set up. It has a database.py file. from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, scoped_session, Session _session_factory = None _scoped_session_cls = None _db_session: Session = None def _get_session_factory(): global _session_factory if _session_factory is None: _session_factory = sessionmaker( bind=create_engine(CONNECTION_URL) ) return _session_factory def new_session(): session_factory = _get_session_factory() return session_factory() def new_scoped_session(): global _scoped_session_cls if _scoped_session_cls is None: session_factory = _get_session_factory() if not session_factory: return _scoped_session_cls = scoped_session(session_factory) return _scoped_session_cls() def init_session(): global _db_session if _db_session is not None: log.warning("already init") else: _db_session = new_scoped_session() return _db_session def get_session(): return _db_session We we start up the flask app, it calls database.init_session() and then anytime we want to use the database it calls database.get_session(). Is this a correct/safe way to interact with the database? What happens if there are two requests being processed at the same time by different threads? Will this result in cross-contamination with both using the same session | To explain what is happening in your code, let's first dive into how sqlalchemy connects to your database: 1. Creating an Engine: Connection to your database is stored in an engine. At minimum, you must supply a connection URI / database URL Connection engine is quite powerful, see connection parameters for more. An engine is created with create_engine() function. It is advised to re-use the same engine, since each engine will reserve database connections (for a reserved connection pool). Connect to your database with engine.connect() -> this will open one connection to your database. Connections are not thread safe. At this point, you can connect with your database directly: engine = create_engine(...) with engine.connect() as connection: result = connection.execute(...) Do note, objects created/altered within the context of this connection are not guaranteed to share state outside of the connection context, until the transaction is completed / connection closed. To perform more complex queries (ex: mix of select and insert), you will want to use a Session. 2. Opening a session: A session is created from an engine object: with Session(engine) as session: ... More often, a session will be created by the sessionmaker() factory method: Session = sessionmaker(engine) Sessions can be used across a project, and are often how objects are mapped to a shared global state. Note: sessions cannot be shared across threads. 3. Examining the code above: The oversimplified purpose of the code above is to create a session object, via get_session(). Initially, this will return None, since this is the value of _db_session. Therefore, you must call init_session() which will create the various objects needed to return a session from get_session(). 
Here's what happens within init_session(): Calls new_scoped_session() which will return a session_factory Calls _get_session_factory() which will create the session_factory -> note how it creates an engine from the CONNECTION_URL Calls scoped_session(session_factory) which creates a scoped session and assigned to _scoped_session_cls. Note: This performs very similarly to a regular session, except is even more isolated (and safe). Lots of the complexity is related to the caching of state. Essentially, the code is doing the following: _session_factory = sessionmaker( bind=create_engine(CONNECTION_URL) ) _scoped_session_cls = scoped_session(session_factory) def new_session(): return _scoped_session_cls() I hope this was helpful. Good luck! | 3 | 5 |
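To make the per-request handling explicit, here is a minimal hedged sketch (not the project's actual code) of the usual Flask pattern: one shared engine plus a scoped_session, with Session.remove() on app-context teardown so each worker thread gets its own session and nothing leaks across requests. The connection URL is a placeholder.
from flask import Flask
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

CONNECTION_URL = "sqlite:///example.db"  # placeholder, not the project's real URL

engine = create_engine(CONNECTION_URL)                 # one engine, reused for the whole app
Session = scoped_session(sessionmaker(bind=engine))    # hands out one session per thread

app = Flask(__name__)

@app.teardown_appcontext
def remove_session(exception=None):
    # discard this thread's session when the request ends, so state never
    # bleeds into the next request handled by the same worker thread
    Session.remove()

@app.route("/")
def index():
    value = Session.execute(text("SELECT 1")).scalar()
    return str(value)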
76,726,419 | 2023-7-20 | https://stackoverflow.com/questions/76726419/langchain-modulenotfounderror-no-module-named-langchain | When I write code in VS Code, beginning with: import os from langchain.chains import RetrievalQA from langchain.llms import OpenAI from langchain.document_loaders import TextLoader I am met with the error: ModuleNotFoundError: No module named 'langchain' I have updated my Python to version 3.11.4, have updated pip, and reinstalled langchain. I have also checked sys.path and the folder C:\\Python311\\Lib\\site-packages in which the Langchain folder is, is appended. EDIT: Langchain import works when I run it in the Python console (functionality works too), but when I run the code from the VSCode run button it still provides the ModuleNotFoundError. Has anyone else run into this issue and found a solution? | I had installed packages with Python 3.9.7, but this version was causing issues, so I switched to Python 3.10. When I installed langchain it went into the Python 3.9.7 directory. If you run pip show langchain, you get this: Name: langchain Version: 0.0.220 Summary: Building applications with LLMs through composability Home-page: https://www.github.com/hwchase17/langchain Author: Author-email: License: MIT Location: /home/anaconda3/lib/python3.9/site-packages Requires: aiohttp, async-timeout, dataclasses-json, langchainplus-sdk, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity Required-by: jupyter_ai, jupyter_ai_magics If you look at the Location property, you see this: /home/anaconda3/lib/python3.9/site-packages. But since I am using Python 3.10, I had to make sure langchain is in the Python 3.10 directory, so I installed langchain with python3.10 -m pip install langchain Now when I run python3.10 -m pip show langchain I get this: Name: langchain Version: 0.0.264 Summary: Building applications with LLMs through composability Home-page: https://www.github.com/hwchase17/langchain Author: Author-email: License: MIT Location: /home/.local/lib/python3.10/site-packages Requires: aiohttp, async-timeout, dataclasses-json, langsmith, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity Required-by: Now the new Location refers to the Python 3.10 directory | 12 | 11 |
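A quick way to see which interpreter and search path the failing run actually uses (generic Python, not langchain-specific): run this from the VS Code run button and compare the output with the console that works, then install into that interpreter as the answer above does.
import sys

print(sys.executable)   # the exact interpreter VS Code launched
for p in sys.path:      # where that interpreter looks for packages
    print(p)

# then install into *that* interpreter, mirroring the answer above:
#   <path-to-that-python> -m pip install langchain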
76,748,279 | 2023-7-23 | https://stackoverflow.com/questions/76748279/changing-the-default-cache-path-for-all-huggingface-data | The default cache path of huggingface is in ~/.cache/huggingface, and in that folder, there are multiple cache files like models, and hub. The huggingface documents indicates that the default dataset cache location can be modified by setting the shell environment variable, HF_DATASETS_CACHE to a different directory as shown below: $ export HF_DATASETS_CACHE="/path/to/another/directory" However, my objective is to alter the default cache directory for all HuggingFace data and not solely the dataset. I am facing difficulties in finding the respective shell environment variable in the HuggingFace documentation to accomplish this. Any help would be appreciated. | It seems that the variable is HF_HOME as this document indicates. So probably this terminal code should be the solution: export HF_HOME="/path/.cache/huggingface" export HF_DATASETS_CACHE="/path/.cache/huggingface/datasets" export TRANSFORMERS_CACHE="/path/.cache/huggingface/models" p.s. If you want to make your change permanent, you should write these lines in your .bashrc(Can do with .bash_profile too) file using nano ~/.bashrc | 3 | 8 |
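The same variables can also be set from within Python, as long as this happens before any Hugging Face library is imported. A hedged sketch, where the cache path is an assumption:
import os

# must run before `import transformers` / `import datasets`
os.environ["HF_HOME"] = "/path/.cache/huggingface"

import transformers  # noqa: E402  (imported after the env var on purpose)
import datasets      # noqa: E402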
76,756,132 | 2023-7-24 | https://stackoverflow.com/questions/76756132/python-client-library-for-language-server-protocol-lsp | I need to interact with a language server (Eclipse JDT LS) implementing the Language Server Protocol (LSP), and I need it for a Python project that will be used to analyze different programming languages, starting from Java. I've been looking around for a client library in Python that would fit my purpose, any suggestions? If it was very easy I wouldn't mind writing one, but I feel the world doesn't need yet another library if a good one exists already. | I've discovered that, as of July 2023, there is a library named pygls (https://github.com/openlawlibrary/pygls) that is currently developing a usable client for the Language Server Protocol (LSP). While we wait for the first official release, I've included it in my setup.py file under "install_requires". This ensures it will be automatically installed during the setup of my project. Here's how I modified the file: install_requires=[ 'pygls @ git+https://github.com/openlawlibrary/pygls.git' ] With this line of code, pygls will be directly fetched from the GitHub repository and installed into your environment. | 5 | 2 |
76,759,128 | 2023-7-25 | https://stackoverflow.com/questions/76759128/typeerror-when-running-compute-that-includes-map-blocks-and-reduce | I am having difficulty diagnosing the cause of the error. My code involves running a convolution (with map_blocks) over some arrays if they belong to the same group of variables, otherwise just record the 2-dim array. I then do an argmax operation and add the result to a list, that we then concatenate. I tried running compute with scheduler='single-threaded' argument, to help debug, but I still wasn't able to see the cause of the error. import dask.array as da from functools import reduce import numpy as np size = 100000 vals = da.linspace(0, 1, size) nvars = 12 test = da.random.uniform(low=0, high=1, size=(100000, nvars, size), chunks=(100, nvars, size)) # number of total unique items corresponds to nvars var_lookup = { 'a': [0, 1], 'b': [0, 1], 'c': [0], 'd': [0, 1], 'e': [0], 'f': [0, 1, 2], 'g': [0], } # Iterates over all 0 dimension coordinates # and convolves relevant values from x and y def custom_convolve(x,y): temp_lst = [] for i in range(x.shape[0]): a = da.fft.rfft(x[i]) b = da.fft.rfft(y[i]) conv_res = da.fft.irfft(a * b, n = size) temp_lst.append(conv_res) res = da.stack(temp_lst, axis=0) return res n_groups = len(var_lookup.keys()) counter = 0 group_cols = [] for i in var_lookup.keys(): grp = var_lookup[i] # if group consists of 1 value, then just record that 2-dim array if len(grp)==1: temp = test[:,counter,:] counter += 1 else: test_list = [] for _ in var_lookup[i]: test_list.append(test[:, counter, :]) counter += 1 temp = reduce(lambda x, y: da.map_blocks(custom_convolve, x, y, dtype='float32'), test_list) res = vals[da.argmax(temp, axis=1)] group_cols.append(res) loc = da.stack(group_cols, axis=1) Error when running compute: res = loc.compute() Traceback for error from the last line is long, but the end is here File c:\Users\x\lib\site-packages\dask\array\slicing.py:990, in check_index(axis, ind, dimension) 987 elif ind is None: 988 return --> 990 elif ind >= dimension or ind < -dimension: 991 raise IndexError( 992 f"Index {ind} is out of bounds for axis {axis} with size {dimension}" 993 ) TypeError: '>=' not supported between instances of 'str' and 'int' Maybe the reduce function coupled with map_blocks is causing the problem? Debug attempt update 1: I used pdb, converted the code to a .py file, changed compute argument to scheduler='single-threaded'), added a set_trace to right after the for i line and stepped through. It only errors out when I get to the compute step with the same error, so not helpful. Debug attempt update 2: I've identified the exact line that gives the problem. I simplified the code a little to make sure that it wasn't the reduce function and got rid of the loops. size = 10000 x_vals = da.linspace(0, 1, 1000) test = da.random.uniform(low=0, high=1, size=(size,4,1000), chunks=(size / 10, 1, 1000)) def simple_convolve(x, y): temp_lst = [] for i in range(x.shape[0]): a = da.fft.rfft(x[i]) b = da.fft.rfft(y[i]) conv_res = da.fft.irfft(a * b, n = size) temp_lst.append(conv_res) res = da.stack(temp_lst, axis=0) return res res = da.map_blocks(simple_convolve, test[:,0], test[:,1], dtype='float32') temp = x_vals[da.argmax(res, axis=1)] We get an error here. 
If we drill in, then the error actually comes from running this da.argmax(res, axis=1) Since the error is saying I'm comparing a string and an integer, I checked that res has no nulls and no infinity values: # btw don't understand why just 1 compute still returns a dask array da.isnan(res).sum().compute().compute() 0 (~da.isfinite(res)).sum().compute().compute() 0 | As answered in https://dask.discourse.group/t/typeerror-on-da-argmax-when-executing-compute/2053: You need to work with Numpy arrays in simple_convolve, because this method is applied on Dask Array chunks, which are Numpy arrays. It should at least return a Numpy Array. | 3 | 0 |
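A sketch of what the answer suggests: rewrite simple_convolve with NumPy so it operates on the NumPy chunks that da.map_blocks hands it, and returns a NumPy array. The test and x_vals arrays are reused from the question's setup; taking the FFT length from the chunk itself rather than the global size is an assumption about the intent.
import numpy as np
import dask.array as da

def simple_convolve(x, y):
    # x and y arrive as NumPy chunks; stay in NumPy and return a NumPy array
    a = np.fft.rfft(x, axis=-1)
    b = np.fft.rfft(y, axis=-1)
    return np.fft.irfft(a * b, n=x.shape[-1], axis=-1).astype("float32")

res = da.map_blocks(simple_convolve, test[:, 0], test[:, 1], dtype="float32")
temp = x_vals[da.argmax(res, axis=1)]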
76,749,878 | 2023-7-23 | https://stackoverflow.com/questions/76749878/i-cant-install-chromedrivermanager-with-chromedrivermanager-install | I've been learning Python for 2 month, and this error has never occurred to me once but all of a sudden I can't download CHROMEDRIVERMANAGER and whenever I get to its website to download it manually it says This XML file does not appear to have any style information. The document tree is shown below. The error: ` Access Denied Access denied. We're sorry, but this service is not available in your location This XML file does not appear to have any style information associated with it. The document tree is shown below. Access denied Access denied. We're sorry, but this service is not available in your location ` this is the error that I am getting trying to download CHROMEDRIVER I believe its because my code requests to download CHROMEDRIVER the error pops up the same one that pops up when I try to download it My code import selenium.webdriver.chrome.webdriver from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options import pyautogui from webdriver_manager.chrome import ChromeDriverManager import random import numpy as np from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By options = Options() options.add_experimental_option("detach", True) driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get("https://www.pinterest.com/ideas/") apparently, something is wrong with the line where I said what is the driver | Change the below line from: driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) TO: driver = webdriver.Chrome(service=Service(ChromeDriverManager(version="114.0.5735.90").install()),options=options) Above should solve your issue. Having said that, with latest selenium(v4.6.0 or above), you don't really need ChromeDriverManager to download/handle browser drivers. Selenium's new tool known as SeleniumManager will do what ChromeDriverManager used to do. So the code now can be simplified as below: options = Options() options.add_experimental_option("detach", True) driver = webdriver.Chrome(options=options) driver.get("https://www.pinterest.com/ideas/") References - Introducing Selenium Manager | 5 | 11 |
76,763,668 | 2023-7-25 | https://stackoverflow.com/questions/76763668/pybind11-bound-class-method-returns-new-class-instance-rather-than-editing-in | I am unable to return the input of a class method (input: specific instances of a seperate class) to Python. The binding compiles and I can use the resulting module in Python. The class method should however return the same instances as it admits (after some processing). The Obstacle class is used as the input. The ObstacleProcess class has a method (Python: __call__ / C++: operator_py) which processes the input (instances of Obstacle). The following Python code shows that different instances of Obstacle is returned: import example obstacle_1 = example.Obstacle() obstacle_2 = example.Obstacle() obstacles = [obstacle_1, obstacle_2] print(obstacles) params = example.Params() obstacle_process = example.ObstacleProcess(params) obstacles = obstacle_process(obstacles) print(obstacles) The first print returns: [<example.Obstacle object at 0x7fb65271e1b0>, <example.Obstacle at 0x7fb652735070>], whilst the second print retuns: [<example.Obstacle at 0x7fb652734670>, <example.Obstacle object at 0x7fb652735230>]. This is not the desired output as obstacle_1 initially lives at 0x7fb65271e1b0, and after the operator()/__call__ call it lives at 0x7fb652734670. I want obstacle_1 to keep its initial address of 0x7fb65271e1b0, even after the other class (ObstacleProcess) has processed obstacle_1. The following code shows the source code is bound with pybind11: // pybind11 binding py::class_<Obstacle, std::shared_ptr<Obstacle>>(m, "Obstacle") .def(py::init<>()); py::class_<ObstacleProcess>(m, "ObstacleProcess") .def(py::init< const Params&>() ) .def("__call__", &ObstacleProcess::operator_py<Params>, py::return_value_policy::reference); The next block shows how operator_py is implemented in the source code: template <Params> std::vector<Obstacle>& operator_py( std::vector<Obstacle>& obstacles, const Params ¶meters ) { ... return obstacles } I have tried with and without std::shared_ptr<Obstacle>. The current implementation gives the same result as not using shared_ptr at all, thus there is something wrong with how I have implemented shared_ptr. I have tried to use PYBIND11_MAKE_OPAQUE(std::shared_ptr<std::vector<Obstacle>>);, but my implementation of this did not change the result. I have not tried to use the pybind11 — smart_holder branch, maybe this branch has to be used for this case? | Thanks to comments from @DanMašek and @n.m.willseey'allonReddit it was unveiled that making these changes to operator_py() fixes the issue in the question: template <class Params> std::vector<std::shared_ptr<Obstacle>>& operator_py( std::vector<std::shared_ptr<Obstacle>> &obstacles, const Params ¶ms ) { for (auto& obstacle : obstacles) { obstacle->someObstacleClassMethod() someObstacleProcessMethod(obstacles, params) } return obstacles; } With this implemented the desired output was obtained: import example obstacle_1 = example.Obstacle() obstacle_2 = example.Obstacle() obstacles = [obstacle_1, obstacle_2] print(obstacles) params = example.Params() obstacle_process = example.ObstacleProcess(params) obstacles = obstacle_process(obstacles) print(obstacles) # output: [<example.Obstacle object at 0x7f709bdc2430>, <example.Obstacle object at 0x7f709b12a830>] [<example.Obstacle object at 0x7f709bdc2430>, <example.Obstacle object at 0x7f709b12a830>] | 4 | 3 |
76,730,773 | 2023-7-20 | https://stackoverflow.com/questions/76730773/how-to-edit-and-confirm-form-slots-in-rasa-framework | I'm building a booking bot using Python and Rasa. I want to add a confirmation step when user fills in all slots in my form. The requirements are: User may confirm the values. User may change the values. Bot must be expandable to handling other inputs and intents from the user. I drew a flow-diagram which demonstrates more clearly the logic I want to implement: flow-diagram I have 2 questions: How do you usually implement the confirmation step? How would you implement the logic on the picture? I'm using Rasa v3.5.13 and Rasa SDK v3.5.1. I already tried the following: Confirmation within the form: I added an additional slot confirmation to the form, implemented extraction and validation to handle user input, but I couldn't make Rasa to call custom actions before or after book_foorm is called. Inheriting from FormAction appeared to be impossible, because it doesn't inherit Action from interfaces.py. Rules conflict with action_listen that is invoked right after book_foorm. Making a wrapper action around book_foorm and calling it instead of the original form works only in the first time, but then active loop starts to call book_foorm instead of my wrapper. Confirmation after the form: passing a custom action in active_loop after book_form is filled in seems to work, but I found out that Rasa only calls the last FollowupAction returned by my custom action, but I want to call multiple actions on each iteration. I also tried to create a custom action that only sets the categorical slot confirmation_status with values: "initialized", "confirmed", "rejected" or "fixed". The idea was that stories can handle my logic relying on confirmation_status, but they do it poorly or don't do it at all. Two stories were not enough even to just activate action_extract_confirmation_status after active_loop was set to null, let alone handling all the logic. So far, the second option seems to be OK if using rules instead of stories. But I need a lot of rules, and they may become a bottleneck in the further development. Coding my logic in python seems much more robust and easier to me, but I can't find a way to do this normally in Rasa. 
UPD: Tried out another approach: - rule: Activate confirmation loop steps: - action: book_form - slot_was_set: - requested_slot: null - active_loop: null - action: action_extract_confirmation_status - active_loop: action_extract_confirmation_status wait_for_user_input: false - rule: Ask for confirmation the first time condition: - active_loop: action_extract_confirmation_status steps: - action: action_extract_confirmation_status - slot_was_set: - confirmation_status: initialized - action: utter_introduce_slots - action: action_utter_slots - action: utter_ask_for_confirmation - rule: Ask for confirmation condition: - active_loop: action_extract_confirmation_status steps: - action: action_extract_confirmation_status - slot_was_set: - confirmation_status: fixed - action: action_utter_slots - action: utter_ask_for_confirmation - rule: Ask for correction in confirmation loop condition: - active_loop: action_extract_confirmation_status steps: - action: action_extract_confirmation_status - slot_was_set: - confirmation_status: rejected - action: utter_ask_for_correction - rule: Finish confirmation condition: - active_loop: action_extract_confirmation_status steps: - action: action_extract_confirmation_status - slot_was_set: - confirmation_status: confirmed - active_loop: null - action: utter_booking_completed And got another conflict: - the prediction of the action 'action_utter_slots' in rule 'Ask for confirmation' is contradicting with rule(s) 'handling active loops and forms - action_extract_confirmation_status - action_listen' which predicted action 'action_listen'. - the prediction of the action 'utter_ask_for_correction' in rule 'Ask for correction in confirmation loop' is contradicting with rule(s) 'handling active loops and forms - action_extract_confirmation_status - action_listen' which predicted action 'action_listen'. - the prediction of the action 'utter_introduce_slots' in rule 'Ask for confirmation the first time' is contradicting with rule(s) 'handling active loops and forms - action_extract_confirmation_status - action_listen' which predicted action 'action_listen'. Please update your stories and rules so that they don't contradict each other. This is so annoying. | The only way I could do this is to implement everything in a single action, and set this action as an active loop. 
async def run( self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: DomainDict ) -> List[EventType]: latest_confirmation_status = tracker.get_slot('confirmation_status') latest_intent = tracker.get_intent_of_latest_message() latest_entities = tracker.latest_message.get('entities') logging.debug(f'Latest confirmation status: {latest_confirmation_status}; ' f'Latest intent: {latest_intent}; ' f'Latest entities: {latest_entities}') if latest_confirmation_status == INACTIVE: dispatcher.utter_message(response='utter_introduce_slots') dispatcher.utter_message(format_slot_values(tracker)) dispatcher.utter_message(response='utter_ask_for_confirmation') return [ActiveLoop(ACTION_NAME), SlotSet("confirmation_status", RUNNING)] elif latest_confirmation_status == RUNNING: if latest_entities: dispatcher.utter_message(format_slot_values(tracker)) dispatcher.utter_message(response='utter_ask_for_confirmation') return [] elif latest_intent == 'affirm': dispatcher.utter_message(response='utter_booking_completed') return [ActiveLoop(None), SlotSet("confirmation_status", COMPLETED)] elif latest_intent == 'deny': dispatcher.utter_message(response='utter_ask_for_correction') return [] else: logging.warning('Aborted confirmation loop') return [ActiveLoop(None), SlotSet("confirmation_status", ABORTED)] else: raise ValueError(f'Unexpected status: {repr(latest_confirmation_status)}') - rule: Activate confirmation loop steps: - action: book_form - slot_was_set: - requested_slot: null - active_loop: null - action: action_confirm - active_loop: action_confirm - rule: Abort confirmation loop steps: - slot_was_set: - confirmation_status: aborted - active_loop: null - rule: Complete confirmation loop steps: - slot_was_set: - confirmation_status: completed - active_loop: null | 3 | 1 |
76,760,107 | 2023-7-25 | https://stackoverflow.com/questions/76760107/extracting-crs-codes-from-pyproj-library | I need help with a python program. I am looking to obtain crs codes/definitions from Pyrpoj library, add them to a list, allow the user to select a crs from the list and print it out, but it does not seem to work. I have made sure to update the library to the latest version. I will be grateful if you guys can help me out here. Here is the code I tried. import tkinter as tk from tkinter import ttk import pyproj def get_crs_list(): # Get a list of available CRS codes from the pyproj CRS database crs_list = list(pyproj.database.get_codes("CRS")) return sorted(crs_list) def print_selected_crs(): selected_crs = combobox.get() print("Selected CRS:", selected_crs) # Create the main application window root = tk.Tk() root.title("CRS Selector") # Create a label and a drop-down list (Combobox) for selecting the CRS label = ttk.Label(root, text="Select a CRS:") label.pack(pady=10) crs_list = get_crs_list() combobox = ttk.Combobox(root, values=crs_list) combobox.pack() # Create a button to print the selected CRS button = ttk.Button(root, text="Print CRS", command=print_selected_crs) button.pack(pady=10) # Start the main event loop root.mainloop() Error message:- line 7, in get_crs_list crs_list = pyproj.get_crs_list() AttributeError: module 'pyproj' has no attribute 'get_crs_list' Found a solution that worked for me. def get_crs_list(): crs_info_list = pyproj.database.query_crs_info(auth_name=None, pj_types=None) crs_list = ["EPSG:" + info[1] for info in crs_info_list] print(crs_list) return sorted(crs_list) Output: ['EPSG:2000', 'EPSG:2001', 'EPSG:2002', 'EPSG:2003', 'EPSG:2004',.......] | As mentioned by @acw1668 in the comments, your error message looks different from the code you tried. However, if I run the same code you posted, I get the following error - TypeError: get_codes() takes at least 2 positional arguments (1 given) Following the official documentation, the function "pyproj.database.get_codes()" requires atleast 2 arguments. These are "auth_name" and "pj_type" You can get the list of acceptable values for "auth_name" using - pyproj.get_authorities() The output that I get from above is - ['EPSG', 'ESRI', 'IAU_2015', 'IGNF', 'NKG', 'OGC', 'PROJ'] Now, for "pj_type" you have mentioned to use "CRS", therefore, a fix for your problem would be - def get_crs_list(): # Get a list of available CRS codes from the pyproj CRS database # SOLUTION crs_list = list(pyproj.database.get_codes(auth_name="all", pj_type="CRS")) return sorted(crs_list) After referring to the documentation, the modified final solution - def get_crs_list(): crs_info_list = pyproj.database.query_crs_info(auth_name=None, pj_types=None) crs_list = ["EPSG:" + info[1] for info in crs_info_list] print(crs_list) return sorted(crs_list) Output: ['EPSG:2000', 'EPSG:2001', 'EPSG:2002', 'EPSG:2003', 'EPSG:2004',.......] Thanks @Hanzo Hasashi for updating! | 3 | 1 |
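A possible refinement of the accepted workaround above, sketched here as a hedged example: pyproj.database.query_crs_info returns CRSInfo records, so (assuming the record exposes auth_name, code and name fields, which the answer's info[1] indexing suggests) the dropdown entries can carry a human-readable name next to the EPSG code.

```python
import pyproj

def get_crs_choices():
    # Query only EPSG-authority CRS entries; each item is a CRSInfo record
    infos = pyproj.database.query_crs_info(auth_name="EPSG", pj_types=None)
    # Build labels such as "EPSG:4326 - WGS 84" (field names assumed here)
    return sorted(f"EPSG:{info.code} - {info.name}" for info in infos)
```

The Combobox code from the question can consume this list unchanged; only the label format differs.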
76,764,911 | 2023-7-25 | https://stackoverflow.com/questions/76764911/doctest-of-function-with-random-output | I have a function that prints a somewhat random string to the console, for example: from random import choice def hello(): print(f"Hello {choice(('Guido', 'Raymond'))}!") Please note that my actual function is more complicated than this. The random part is a request to a database that can either succeed or fail. This means that I cannot initialize a seed to have a constant outcome. What I have tried is to use the ellipsis, but I also need to add an ugly comment for doctest to recognize it. def hello(): """ >>> hello() # doctest: +ELLIPSIS Hello ...! """ print(f"Hello {choice(('Guido', 'Raymond'))}!") Is there a better strategy in this situation? For example, instead of an ellipsis it would be great if I could test that the answer is one between Hello Guido! and Hello Raymond!. | You could use regex: Hello followed by either Guido or Raymond followed by an exclamation point and newline: Hello (Guido|Raymond)!\n However, capturing the stdout and running the regex is a lot of noise for a doctest. So instead, it'd be better to give an example in the docstring but skip testing it, and use a different test system, like pytest, which has a builtin way to capture stdout. For example: from random import choice def hello(): """ >>> hello() # doctest: +SKIP Hello Raymond! """ print(f"Hello {choice(('Guido', 'Raymond'))}!") import re def test_hello(capsys): hello() captured = capsys.readouterr() assert re.fullmatch(r'Hello (Guido|Raymond)!\n', captured.out) | 5 | 3 |
76,765,523 | 2023-7-25 | https://stackoverflow.com/questions/76765523/python-listt-not-assignable-to-listt-none | I have a type T, a namedtuple actually: from collections import namedtuple T = namedtuple('T', ('a', 'b')) I have a function that accepts a list[T | None] and a list: def func(arg: list[T | None]): ... l = [T(1, 2), T(2, 3)] func(l) # Pylance error l is of type list[T]. When I pass a l to the function I get an error from Pylance, that a list[T] is incompatible with list[T | None] because T cannot be assigned to T | None. Aside from manually specifying my list[T] is actually a list[T | None], what can I do to make this work without an error? Of course at runtime everything runs as expected. | The built-in mutable generic collection types such as list are all invariant in their type parameter. (see PEP 484) This means that given a type S that is a subtype of T, list[S] is not a subtype (nor a supertype) of list[T]. You can simplify your error even further: def f(a: list[int | None]) -> None: ... b = [1, 2] f(b) int is obviously a subtype of int | None, but list[int] is not a subtype of list[int | None]. For the code above, Mypy is so kind as to present additional info telling us exactly that: error: Argument 1 to "f" has incompatible type "List[int]"; expected "List[Optional[int]]" [arg-type] note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance note: Consider using "Sequence" instead, which is covariant Aside from the obvious solution you mentioned of explicitly annotating your list as list[T | None] beforehand, maybe you could follow the advice from Mypy and instead change the type accepted by your function to something less specific that still offers the protocols you need, but also happens to be covariant in its type parameter, e.g. collections.abc.Sequence: from collections import namedtuple from collections.abc import Sequence T = namedtuple('T', ('a', 'b')) def f(a: Sequence[T | None]) -> None: pass b = [T(1, 2), T(2, 3)] f(b) This should pass without errors. PS: You did not mention, what exactly your function is doing with that list, but the solution with an immutable type like Sequence will obviously not work, if you intend to mutate it inside the function. Mutablity really is the issue here and the reason why list is declared to be invariant in the first place. The Mypy docs offer a really nice explanation for this reasoning. So while it is technically possible to for example define your own generic protocol with a covariant type variable that emulates the methods you need in that function, I am not sure that would be a good idea. Better to reconsider what you actually need from that function and what would be safe to call it with. | 4 | 8 |
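A minimal sketch (not from the original post) of why the invariance rule exists: if a list[T] were accepted where list[T | None] is expected, the callee could legally insert None and silently break the caller's typing assumptions.

```python
def add_placeholder(items: list[int | None]) -> None:
    # Perfectly legal for a list[int | None] ...
    items.append(None)

b: list[int] = [1, 2]
# add_placeholder(b)  # rejected by the checker: if allowed, b would now contain None
# total = sum(b)      # ... and this later line could crash at runtime
```

Sequence avoids the problem precisely because it exposes no mutating methods, which is why it can safely be covariant.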
76,763,873 | 2023-7-25 | https://stackoverflow.com/questions/76763873/trio-seems-to-start-tasks-in-the-nursery-in-exactly-the-opposite-order-that-task | I don't expect trio to run in any particular order. It is async, after all. But I noticed something strange and wanted to ask if anyone else could explain what might have happened: I wanted to test the rate of data ingestion from Google's Pub Sub if I send a small message one at a time. In order to focus on the I/O of pushing to Pub Sub, I sent messages async, and I use trio because, well, I want to keep my head from exploding. I specifically wanted to look at how fast Pub Sub would be if I turned on it's ordering capability. I really just wanted to test throughput, and since I was using an async process, I didn't expect any ordering of messages, but I tagged the messages just out of curiosity. I noticed that the messages were processed in pub sub (and therefore sent to pub sub) at exactly the opposite order that is written in the imperative code. Here is the important snippet (I can provide more if it is helpful): async with open_nursery() as nursery: for num in range(num_messages): logger.info(f"===Creating data entry # {num}===") raw_data = gen_sample(DATASET, fake_generators=GENERATOR) # you can ignore this, it is just a toy data generator. It is synchronous code, but _very_ fast. raw_data["message number"] = num # <== This is the CRITICAL LINE, adding the message number so that I can observe the ordering. data = dumps(raw_data).encode("utf-8") nursery.start_soon(publish, publisher, topic_path, data, key) and here is the publish function: async def publish( publisher: PublisherClient, topic: str, data: bytes, ordering_key: str ): future = publisher.publish(topic, data=data, ordering_key=ordering_key) result = future.result() logger.info( f"Published {loads(data)} on {topic} with ordering key {ordering_key} " f"Result: {result}" ) And when I look at the logs in Pub/Sub, they are 100% consistently in reverse order, such that I see "message number" 50_000 first, then 49_999, 49_998, ..., 3, 2, 1. Pub Sub is maintaining ordering. This means somehow, the async code above is "first" starting the very last task to reach nursery.start_soon. I'm not sure why that is. I don't understand exactly how Pub Sub's Future works, because the documentation is sparse (at least what I found), so it is possible that the "problem" lies with Google's PublisherClient.publish() method, or Google's result() method that the returned future uses. But it seems to me that it is actually due to the nursery.start_soon. Any ideas why it would be exactly in the opposite order of how things are written imperatively? | Heh, nicely spotted. So what's going on here is that Trio intentionally randomizes the order that tasks get scheduled, to help you catch places where you're accidentally relying on scheduler details to coordinate between tasks (since this is fragile, hard to reason about, and constrains trio's ability to improve the scheduler in the future). Originally, the way this worked is that when tasks became runnable, they'd get appended to a list, and then each scheduler tick would use random.shuffle to randomize the list, then loop over it and run each task. However! It turns out that randomly shuffling a list is actually kind of expensive, esp for something you're doing on every scheduler tick. So as an optimization, we switched to a much simpler randomization scheme: flip a coin, and if it's heads do nothing, if it's tails then reverse the list. 
The intuition here is that this is way cheaper, but it still guarantees that any two tasks will be scheduled both as A->B and B->A on some fraction of runs, and this should be enough to catch most cases where you're accidentally relying on scheduling order. So if you do enough runs, you should find that your log lines are actually 50% in ascending order, and 50% in descending order. (But you shouldn't rely on this :-)) | 2 | 3 |
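A toy illustration of the scheme described in the answer — this is not Trio's actual code, just a sketch of the "flip a coin, maybe reverse the batch" idea:

```python
import random

def order_batch(runnable_tasks):
    # Cheap randomization: 50% of the time run the batch as-is,
    # 50% of the time run it in reverse order.
    batch = list(runnable_tasks)
    if random.random() < 0.5:
        batch.reverse()
    return batch
```

Under this scheme a batch of tasks started in order 1..N comes out as either 1..N or N..1, which matches the strictly reversed publishing order observed in the question.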
76,764,592 | 2023-7-25 | https://stackoverflow.com/questions/76764592/legend-obscures-plot-using-seaborn-objects-api | I have a recurring problem with legends on many of my plots with legends. The legend often obscures part of the data. Reproducible example: import seaborn.objects as so import numpy as np import pandas as pd res_df = pd.DataFrame( {'value': np.random.rand(20), 'cfg__status_during_transmit': ['OK']*5 + ['ERR']*5 + ['WARN']*5 + ['OTH']*5} ) ( so.Plot(res_df, y='value', x='cfg__status_during_transmit', color='cfg__status_during_transmit') .add(so.Dots(), so.Jitter(width=0.5)) .layout(size=(8, 4)) ).show() I've tried elongating the plot. This is a hack and isn't convenient, especially since it makes everything a lot smaller. I've also tried plotting using .on() onto a matplotlib figure or axis, but the same problem persists. There's a related SO issue which advises that the legend properties are still in development. I would appreciate suggestions for how to get around this problem. Thanks. | Control over the legend using seaborn.objects might still be a work in progress according to this post. However, if I used plot() instead of show() I got a figure where the legend was placed outside the graph: import seaborn.objects as so import numpy as np import pandas as pd res_df = pd.DataFrame( {'value': np.random.rand(20), 'cfg__status_during_transmit': ['OK']*5 + ['ERR']*5 + ['WARN']*5 + ['OTH']*5} ) (so.Plot(res_df, y='value', x='cfg__status_during_transmit', color='cfg__status_during_transmit') .add(so.Dots(), so.Jitter(width=0.5)) .layout(size=(8, 4)) ).plot() Output: | 5 | 4 |
76,760,682 | 2023-7-25 | https://stackoverflow.com/questions/76760682/tensorflow-2-13-1-no-matching-distribution-found-for-tensorflow-text-2-13-0 | I am trying to install the latest Tensorflow models 2.13.1 (pip install tf-models-official==2.13.1), with Python 3.11. There seems to be an issue with Cython and PyYAML not playing nice together since last week in Tensorflow models 2.13.0, so it won't install. But 2.13.1 is giving me an error that the corresponding tensorflow-text version 2.13.0 is not found. The error I am receiving is as follows: (tensorflow-env) username@DESKTOP:~/projects/tensorflow/models-master/research$ pip install tf-models-official==2.13.1 INFO: pip is looking at multiple versions of tf-models-official to determine which version is compatible with other requirements. This could take a while. ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11; 1.6.2 Requires-Python >=3.7,<3.10; 1.6.3 Requires-Python >=3.7,<3.10; 1.7.0 Requires-Python >=3.7,<3.10; 1.7.1 Requires-Python >=3.7,<3.10; 1.7.2 Requires-Python >=3.7,<3.11; 1.7.3 Requires-Python >=3.7,<3.11; 1.8.0 Requires-Python >=3.8,<3.11; 1.8.0rc1 Requires-Python >=3.8,<3.11; 1.8.0rc2 Requires-Python >=3.8,<3.11; 1.8.0rc3 Requires-Python >=3.8,<3.11; 1.8.0rc4 Requires-Python >=3.8,<3.11; 1.8.1 Requires-Python >=3.8,<3.11 ERROR: Could not find a version that satisfies the requirement tensorflow-text~=2.13.0 (from tf-models-official) (from versions: 2.12.0rc0, 2.12.0, 2.12.1, 2.13.0rc0) ERROR: No matching distribution found for tensorflow-text~=2.13.0 But the release history on pypi.org shows that the 2.13.0 version of tensorflow-text is out: https://pypi.org/project/tensorflow-text/2.13.0/#history What am I doing wrong? | I resolved the issue, by installing Python 3.10 Tensorflow 2.13.0 tensorflow-text 2.13.0 tensorflow-models-official 2.13.1 Everything works with these versions, but I did not find a way to make it work with Python 3.11 atm. The issue is best described here (https://github.com/yaml/pyyaml/issues/724) and has to do with Cython and PyYAML 5.4 dependency issues. | 8 | 3 |
76,760,704 | 2023-7-25 | https://stackoverflow.com/questions/76760704/python-docstring-inconsistent-leading-whitespace-error-without-new-line | my docstring: def some_f(): """ 1. First story >>> str(5) if you want to see the value: >>> print(str(5)) 2. Another story bla bla """ When using doctest, I'm getting: ValueError: line 6 of the docstring for File.Class.some_f has inconsistent leading whitespace: '2. Another story' I've read (here and here) that this problem may occur when one use \n or other special character in the docstring, but it's not the case here. No idea why this is happenning and how to fix that. In fact it expects me to move the second point to the right, since this is working properly: def some_f(): """ 1. First story >>> str(5) if you want to see the value: >>> print(str(5)) 2. Another story bla bla """ But it's not what I want. | The question and title should include the clarification that this error occurs because you are running doctest.testmod(); otherwise Python doesn't care about your formatting. The problem is that doctest expects each line that appears to be an interactive prompt to be followed by the expected output for the doctest, and it considers a blank line or another appearance of >>> to be the end of the expected results. Leave a blank line to show the end of expected output, and it won't care about the whitespace of the next line. def some_f(): """ 1. First story >>> str(5) '5' if you want to see the value: >>> print(str(5)) 5 2. Another story bla bla """ pass if __name__ == '__main__': import doctest doctest.testmod() Without the line break after '5' that case would fail as doctest would include if you want to see the value: as part of the expected output from >>>str(5) | 2 | 4 |
76,760,381 | 2023-7-25 | https://stackoverflow.com/questions/76760381/modulenotfounderror-no-module-named-exceptions | I need to transform 140 .docx files into txt. I am using the following Python code that I found here in StackOverflow. I tried this: import os from docx import Document # Path to the folder containing .docx files input_folder = "C:/Users/XXXXX/Desktop/word" # Path to the folder where .txt files will be saved output_folder = "C:/Users/XXXXX/Desktop/Txt" # Get a list of all .docx files in the input folder files = [f for f in os.listdir(input_folder) if f.endswith(".docx")] # Loop through each .docx file and convert it to .txt for file in files: docx_path = os.path.join(input_folder, file) txt_path = os.path.join(output_folder, os.path.splitext(file)[0] + ".txt") doc = Document(docx_path) content = [p.text for p in doc.paragraphs] with open(txt_path, "w", encoding="utf-8") as txt_file: txt_file.write("\n".join(content)) print("Conversion complete!") But whenever I run the code, I get this error: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 2 1 import os ----> 2 from docx import Document 4 # Path to the folder containing .docx files 5 input_folder = "C:/Users/XXXXX/Desktop/word" File ~\anaconda3\Lib\site-packages\docx.py:30 27 except ImportError: 28 TAGS = {} ---> 30 from exceptions import PendingDeprecationWarning 31 from warnings import warn 33 import logging ModuleNotFoundError: No module named 'exceptions' Do you know why am I getting this error and how could I solve this? Thank you! | The error can occur for multiple reasons: Not having the python-docx package installed by running pip install python-docx. Installing the package in a different Python version than the one you're using. Installing the package globally and not in your virtual environment. Your IDE running an incorrect version of Python. Naming your module exception.py which would shadow the official module. Declaring a variable named exceptions which would shadow the imported variable. See this answer: https://bobbyhadz.com/blog/python-no-module-named-exceptions#:~:text=The%20Python%20%22ModuleNotFoundError%3A%20No%20module,pip%20install%20python%2Ddocx%20command. | 3 | 2 |
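A small diagnostic sketch for narrowing down which of the causes listed above applies. The traceback in the question points at site-packages\docx.py containing from exceptions import ..., which is characteristic of the old Python-2-era "docx" distribution (or a shadowing local file) rather than python-docx, so locating what would actually be imported is usually the fastest check; find_spec is used because it locates the module without executing it, so it works even while the import itself fails.

```python
import sys
from importlib.util import find_spec

print(sys.executable)                    # which interpreter is actually running
spec = find_spec("docx")
print(spec.origin if spec else "docx not found")
# python-docx installs a docx/ package directory; a lone docx.py here
# (as in the traceback) points at the legacy "docx" package or a local
# docx.py shadowing the real one.
```

If that is the case, uninstalling the stray docx module and installing python-docx into the interpreter shown by sys.executable should resolve it.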
76,729,393 | 2023-7-20 | https://stackoverflow.com/questions/76729393/how-to-properly-capture-library-package-version-in-my-package-when-using-pypro | I have moved away from a setup.py file to build my packages / libraries to fully using pyproject.toml. I prefer it overall, but it seems that the version placed in the pyproject.toml does not propagate through the build in any way. So I cannot figure out how to inject the package version -- or any other metadata provided in the pyproject.toml -- into my package. A google search led me to this thread, which had some suggestions. They all seemed like hacks, but I tried this one because it seemed best: from pip._vendor import tomli ## I need to be backwards compatible to Python 3.8 with open("pyproject.toml", "rb") as proj_file: _METADATA = tomli.load(proj_file) DESCRIPTION = _METADATA["project"]["description"] NAME = _METADATA["project"]["name"] VERSION = _METADATA["project"]["version"] It worked fine upon testing, but I did not test robustly enough: once I tried to install this in a fresh location / machine, it failed because the pyproject.toml file is not part of the package installation. (I should have realized this.) So, what is the right / best way to provide metadata, like the package version, to my built package? I need the following requirements: I only want to provide the information once, in the pyproject.toml. (I know that if I need to repeat a value, at some point there will be a mismatch.) I want the information to be available to the end user, so that someone who installs the package can do something like mypackage.VERSION from her interactive Python session. I want to only use pyproject.toml and Poetry / PDM. (I actually use PDM, but I know that Poetry is more popular. The point is that I don't want a setup.py or setup.cfg hack. I want to purely use the new way.) | As you've seen, the pyproject.toml file from the original source tree is generally not available to read from within an installed package. This isn't new, it's also the case that the setup.py file from a legacy source tree would be missing from an actual installation. The package metadata, however, is available in a .dist-info subdirectory alongside your package installation. To demonstrate using a real example of an installed package and the corresponding metadata files, which contain the version info and other stuff coming from the setup.py or pyproject.toml: $ pip install -q six $ pip show --files six Name: six Version: 1.16.0 Summary: Python 2 and 3 compatibility utilities Home-page: https://github.com/benjaminp/six Author: Benjamin Peterson Author-email: [email protected] License: MIT Location: /tmp/eg/.venv/lib/python3.11/site-packages Requires: Required-by: Files: __pycache__/six.cpython-311.pyc six-1.16.0.dist-info/INSTALLER six-1.16.0.dist-info/LICENSE six-1.16.0.dist-info/METADATA six-1.16.0.dist-info/RECORD six-1.16.0.dist-info/REQUESTED six-1.16.0.dist-info/WHEEL six-1.16.0.dist-info/top_level.txt six.py $ grep '^Version:' .venv/lib/python3.11/site-packages/six-1.16.0.dist-info/METADATA Version: 1.16.0 This .dist-info/METADATA file is the location containing the version info and other metadata. Users of your package, and the package itself, may access package metadata if/when necessary by using stdlib importlib.metadata. The version string is guaranteed to be there for an installed package, because it's a required field in the metadata specification, and there is a dedicated function for it. 
Other keys, which are optional, can be found using the metadata headers, e.g.: >>> from importlib.metadata import metadata >>> metadata("six")["Summary"] 'Python 2 and 3 compatibility utilities' All versions of Python where accessing the metadata was non-trivial are now EOL, but if you need to maintain support for Python <= 3.7 then importlib-metadata from PyPI can be used the same way as stdlib. When you only publish one top-level directory containing an __init__.py file and it has the same name as the project name in pyproject.toml, then you may use the __package__ attribute so that you don't have to hardcode the package name in source code: # can be accessed anywhere within the package import importlib.metadata the_version_str = importlib.metadata.version(__package__) If the names are mismatching (example: the distribution package "python-dateutil" provides import package "dateutil"), or you have multiple top-level names (example: the distribution package "setuptools" provides import packages "setuptools" and "pkg_resources") then you may want to just hardcode the package name, or try and discover it from the installed packages_distributions() mapping. This satisfies point 1. and point 3. For point 2.: I want the information to be available to the end user, so that someone who installs the package can do something like mypackage.VERSION from her interactive Python session. The similar recipe works: # in your_package/__init__.py import importlib.metadata VERSION = importlib.metadata.version(__package__) However, I would recommend not to provide a version attribute at all. There are several disadvantages to doing that, for reasons I've described here and here. If you have historically provided a version attribute, and need to keep it for backwards compatibility concerns, you may consider maintaining support using a fallback module __getattr__ which will be invoked when an attribute is not found by the usual means: def __getattr__(name): if name == "VERSION": # consider adding a deprecation warning here advising caller to use # importlib.metadata.version directly return importlib.metadata.version("mypackage") raise AttributeError(f"module {__name__!r} has no attribute {name!r}") | 2 | 4 |
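One practical extension of the importlib.metadata recipe above, sketched with an assumption: when the package is run from a source checkout without being installed, version() raises PackageNotFoundError, so a fallback keeps imports working during development (the fallback string here is arbitrary).

```python
# mypackage/__init__.py
from importlib.metadata import PackageNotFoundError, version

try:
    VERSION = version(__package__)      # reads the installed .dist-info/METADATA
except PackageNotFoundError:            # e.g. running straight from the repo
    VERSION = "0.0.0.dev0"
```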
76,755,607 | 2023-7-24 | https://stackoverflow.com/questions/76755607/how-to-optimize-the-ulam-spiral-infinite-iterator | I have created an infinite iterator which maps natural numbers to all lattice points in a spiral like manner, akin to the Ulam Spiral: The code is below, I have made it as fast as possible, and I didn't use a single if condition: from itertools import islice, repeat def ulamish_spiral_gen(): xc = yc = length = 0 yield 0, 0 while True: length += 1 yield from zip(range(xc + 1, (xc := xc + length) + 1, 1), repeat(yc)) yield from zip(repeat(xc), range(yc + 1, (yc := yc + length) + 1, 1)) length += 1 yield from zip(range(xc - 1, (xc := xc - length) - 1, -1), repeat(yc)) yield from zip(repeat(xc), range(yc - 1, (yc := yc - length) - 1, -1)) def ulamish_spiral(n): return list(islice(ulamish_spiral_gen(), n)) I want to know, how to memoize the output of the infinite iterator, so that list(islice(ulamish_spiral_gen(), n)) will be called only when the value of n is greater than the last n. Something like this: COMPUTED = [] def ulamish_spiral(n): global COMPUTED if n > len(COMPUTED): COMPUTED = list(islice(ulamish_spiral_gen(), n)) return COMPUTED[:n] It is very easy, but the first len(COMPUTED) terms have already been computed, only terms in range(len(COMPUTED), n) need to be computed, but the call computes all the already computed terms. So I tried to reuse the same generator object and only ask for the next n - len(COMPUTED) items, and I have succeeded. But doing so actually makes code slower: COMPUTED = [] ULAMISH_GEN = ulamish_spiral_gen() def ulamish_spiral(n): if n > (l := len(COMPUTED)): COMPUTED.extend(islice(ULAMISH_GEN, n - l)) return COMPUTED[:n] In [225]: %timeit COMPUTED.clear(); ULAMISH_GEN = ulamish_spiral_gen(); ulamish_spiral(8192) 928 µs ± 8.96 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [226]: %timeit COMPUTED.clear(); ULAMISH_GEN = ulamish_spiral_gen(); ulamish_spiral(1024); ulamish_spiral(2048); ulamish_spiral(4096); ulamish_spiral(8192) 993 µs ± 18.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [227]: %timeit COMPUTED.clear(); ULAMISH_GEN = ulamish_spiral_gen(); ulamish_spiral(1024); ulamish_spiral(2048); ulamish_spiral(4096); ulamish_spiral(8192); ulamish_spiral(16384) 2.14 ms ± 106 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [228]: %timeit COMPUTED.clear(); ULAMISH_GEN = ulamish_spiral_gen(); ulamish_spiral(16384) 2 ms ± 88.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [229]: COMPUTED.clear(); ULAMISH_GEN = ulamish_spiral_gen(); ulamish_spiral(1024); ulamish_spiral(2048); ulamish_spiral(16384) == list(islice(ulamish_spiral_gen(), 16384)) Out[229]: True How can I skip already computed terms and make code faster? | You already covered "memoize"/"skip". Here are faster infinite iterators. Times for the first 16384 coordinates (as in your benchmark): 1.01 ± 0.00 ms gen_Kelly_4 1.01 ± 0.00 ms gen_Kelly_5 1.03 ± 0.00 ms gen_Kelly_3 1.06 ± 0.00 ms gen_Kelly_2 1.21 ± 0.00 ms gen_Kelly_1 1.57 ± 0.00 ms gen_original Python: 3.11.4 (main, Jun 24 2023, 10:18:04) [GCC 13.1.1 20230429] You combine all the zip iterators with your own generator. My gen_Kelly_1 instead uses chain.from_iterable for that. You use the range function to produce the int objects over and over again. My gen_Kelly_2 instead keeps them in a list and reuses them. 
My gen_Kelly_3 further reuses the repeat iterators, gen_Kelly_4 reverses my list instead of using reverse iterators, and gen_Kelly_5 removes the now duplicated code (i goes through 0, 1, -1, 2, -2, 3, -3, etc): def gen_Kelly_5(): def parts(): i = 0 range = [] while True: rep = repeat(i) yield zip(rep, range) range.append(i) range.reverse() yield zip(range, rep) i = (i<1) - i return chain.from_iterable(parts()) Full code (Attempt This Online!): from timeit import timeit from statistics import mean, stdev from itertools import islice, repeat, cycle, chain, count import sys def gen_original(): xc = yc = length = 0 yield 0, 0 while True: length += 1 yield from zip(range(xc + 1, (xc := xc + length) + 1, 1), repeat(yc)) yield from zip(repeat(xc), range(yc + 1, (yc := yc + length) + 1, 1)) length += 1 yield from zip(range(xc - 1, (xc := xc - length) - 1, -1), repeat(yc)) yield from zip(repeat(xc), range(yc - 1, (yc := yc - length) - 1, -1)) def gen_Kelly_1(): def parts(): xc = yc = length = 0 yield (0, 0), while True: length += 1 yield zip(range(xc + 1, (xc := xc + length) + 1, 1), repeat(yc)) yield zip(repeat(xc), range(yc + 1, (yc := yc + length) + 1, 1)) length += 1 yield zip(range(xc - 1, (xc := xc - length) - 1, -1), repeat(yc)) yield zip(repeat(xc), range(yc - 1, (yc := yc - length) - 1, -1)) return chain.from_iterable(parts()) def gen_Kelly_2(): def parts(): i = 0 range = [] while True: yield zip(repeat(-i), reversed(range)) range.insert(0, -i) yield zip(range, repeat(-i)) i += 1 yield zip(repeat(i), range) range.append(i) yield zip(reversed(range), repeat(i)) return chain.from_iterable(parts()) def gen_Kelly_3(): def parts(): i = 0 range = [] while True: rep = repeat(-i) yield zip(rep, reversed(range)) range.insert(0, -i) yield zip(range, rep) i += 1 rep = repeat(i) yield zip(rep, range) range.append(i) yield zip(reversed(range), rep) return chain.from_iterable(parts()) def gen_Kelly_4(): def parts(): i = 0 range = [] while True: rep = repeat(-i) yield zip(rep, range) range.append(-i) range.reverse() yield zip(range, rep) i += 1 rep = repeat(i) yield zip(rep, range) range.append(i) range.reverse() yield zip(range, rep) return chain.from_iterable(parts()) def gen_Kelly_5(): def parts(): i = 0 range = [] while True: rep = repeat(i) yield zip(rep, range) range.append(i) range.reverse() yield zip(range, rep) i = (i<1) - i return chain.from_iterable(parts()) funcs = gen_original, gen_Kelly_1, gen_Kelly_2, gen_Kelly_3, gen_Kelly_4, gen_Kelly_5 n = 16384 # Correctness expect = list(islice(funcs[0](), n)) for f in funcs[1:]: result = list(islice(f(), n)) assert result == expect # Speed times = {f: [] for f in funcs} def stats(f): ts = [t * 1e3 for t in sorted(times[f])[:10]] return f'{mean(ts):6.2f} ± {stdev(ts):4.2f} ms ' for _ in range(1000): for f in funcs: t = timeit(lambda: list(islice(f(), n)), number=1) times[f].append(t) for f in sorted(funcs, key=stats): print(stats(f), f.__name__) print('\nPython:', sys.version) | 2 | 4 |
76,757,194 | 2023-7-24 | https://stackoverflow.com/questions/76757194/why-do-i-get-the-error-unrecognized-request-argument-supplied-functions-when | I'm trying to use functions when calling Azure OpenAI GPT, as documented in https://platform.openai.com/docs/api-reference/chat/create#chat/create-functions I use: import openai openai.api_type = "azure" openai.api_base = "https://XXXXXXXX.openai.azure.com/" openai.api_version = "2023-06-01-preview" openai.api_key = os.getenv("OPENAI_API_KEY") response = openai.ChatCompletion.create( engine="gpt-35-turbo-XXX", model="gpt-35-turbo-0613-XXXX" messages=messages, functions=functions, function_call="auto", ) but I get the error: openai.error.InvalidRequestError: Unrecognized request argument supplied: functions Why? Data to run the example code above (messages and functions need to be defined): messages = [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}] functions = [ { "name": "fetch_pages", "description": "Fetch the content of specified pages from the document.", "parameters": { "type": "object", "properties": { "pages": { "type": "array", "items": { "type": "number" }, "description": "The list of pages to fetch." } }, "required": ["pages"] } }, { "name": "fetch_section", "description": "Fetch the content of a specified section.", "parameters": { "type": "object", "properties": { "section_title": { "type": "string", "description": "The title of the section to fetch." } }, "required": ["section_title"] } }, { "name": "search", "description": "Search the document for a string query.", "parameters": { "type": "object", "properties": { "query": { "type": "string", "description": "The search term." } }, "required": ["query"] } } ] | Function support with the Azure API was added in 2023-07-01-preview. The API version needs to be updated in your example: openai.api_version = "2023-07-01-preview" | 9 | 15 |
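For clarity, a minimal sketch of the corrected call, reusing the placeholder resource and deployment names from the question (messages and functions as defined there); the only change is the API version.

```python
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://XXXXXXXX.openai.azure.com/"
openai.api_version = "2023-07-01-preview"   # function calling needs this preview version or later
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo-XXX",              # your gpt-35-turbo-0613 deployment name
    messages=messages,
    functions=functions,
    function_call="auto",
)
```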
76,752,059 | 2023-7-24 | https://stackoverflow.com/questions/76752059/how-can-i-stop-a-worker-thread-in-tkinter-when-window-is-closed | When I close the tkinter window while the worker thread is running, I get this error after 10 seconds (probably because ROOT is destroyed): RuntimeError: main thread is not in main loop import tkinter as tk from tkinter import ttk import threading import time def worker(event_ready): event_ready.wait() progress_bar = ttk.Progressbar(ROOT, orient='horizontal', mode='indeterminate', length=280) progress_bar.grid(column=0, row=0, columnspan=2, padx=10, pady=20) progress_bar.start() time.sleep(10) progress_bar.stop() event_ready = threading.Event() worker_thread = threading.Thread(target=worker, args=(event_ready,)) worker_thread.start() global ROOT ROOT = tk.Tk() ROOT.geometry('300x120') ROOT.title('TEST') ROOT.grid() event_ready.set() ROOT.mainloop() How can I stop the worker thread as soon as I close the window? | You only need to start the Thread as a daemon, at least on Windows. worker_thread = threading.Thread(target=worker, args=(event_ready,), daemon=True) Now when you close the window, the program will terminate immediately. | 3 | 1 |
76,748,693 | 2023-7-23 | https://stackoverflow.com/questions/76748693/how-can-i-resolve-this-bubble-sort-problem | I'm trying to make a Python bubble sort with a list of 5 values. The user will input the 5 values and bubble_sort is going to arrange them in ascending order, but when I try to run it, it gives this error: lista = list(n1, n2, n3, n4, n5) ^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: list expected at most 1 argument, got 5 This is the code I wrote: print('Olá!') msg = 'Olá!' n1 = float(input('Digite um número qualquer:')) n2 = float(input('Digite outro número qualquer:')) n3 = float(input('Digite outro número qualquer:')) n4 = float(input('Digite outro número qualquer:')) n5 = float(input('Digite outro número qualquer:')) lista = list(n1, n2, n3, n4, n5) def bubble_sort(arr): n = len(arr) for i in range(n): for j in range(0, n - i - 1): if arr[j] > arr[j + 1]: arr[j], arr[j + 1] = arr[j + 1], arr[j] return arr print(bubble_sort(lista)) print('Fim') | You are trying to create the list lista with the list() function, but list() expects a single iterable as its argument, not multiple individual elements. Create the list with a list literal containing all the elements instead: lista = [n1, n2, n3, n4, n5] | 2 | 5 |
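As a small optional refinement on top of the fix above (a sketch that reuses bubble_sort and the prompt string from the question), the five inputs can be collected in a loop so no individual n1..n5 variables are needed:

```python
lista = [float(input('Digite um número qualquer:')) for _ in range(5)]
print(bubble_sort(lista))   # bubble_sort as defined in the question
```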
76,747,462 | 2023-7-23 | https://stackoverflow.com/questions/76747462/how-to-optimize-printing-pascals-triangle-in-python | I have implemented the Pascal's triangle in Python, it is pretty efficient, but it isn't efficient enough and there are a few things I don't like. The Pascal's triangle is like the following: I have read this useless tutorial and this question, and the solutions are extremely inefficient, involving factorials and don't use caching. Instead, I implemented a different algorithm I created myself. My mathematics isn't that good, but I have spotted the following simple recursive relationships: The triangle starts with a row with only 1 number in it, and that number is 1. For each subsequent row, the length of the row increment by 1, and the first and last number of the row is 1. Each number that isn't the first or last, is the sum of the number at the row above it with index equal to the number's index minus 1, and the number at row above it with the same index. And the rows of the triangle are symmetric. In other words, if we use zero-based indexing: p(r, 0) = p(r, r) = 1 p(r, c) = p(r - 1, c - 1) + p(r - 1, c) p(r, c) = p(r, r - c) Below is my code: from typing import List class Pascal_Triangle: def __init__(self, rows: int = 0, fill: bool = True): self.data = [] self.length = 0 if rows: self.fill_rows(rows) if fill: self.fill_values() def add_row(self, length: int): row = [0] * length row[0] = row[-1] = 1 self.data.append(row) def fill_rows(self, rows: int): for length in range(self.length + 1, rows + 1): self.add_row(length) self.length = rows def comb(self, a: int, b: int) -> int: if not 0 <= b <= a: raise ValueError(f'cannot choose {b} elements from a population of {a}') if self.length < (length := a + 1): self.fill_rows(length) return self.at(a, b) def at(self, row: int, col: int) -> int: if val := self.data[row][row - col]: self.data[row][col] = val return val if val := self.data[row][col]: return val self.data[row][col] = val = self.at(row - 1, col - 1) + self.at(row - 1, col) return val def fill_values(self): for row in range(2, self.length): for col in range(1, row): self.at(row, col) def get_row(self, row: int) -> List[int]: if self.length < (length := row + 1): self.fill_rows(length) self.fill_values() return self.data[row] def pretty_print(self): print('\n'.join(f"{' ' * (self.length - i)}{' '.join(map(str, row))}" for i, row in enumerate(self.data))) First, the output of tri = Pascal_Triangle(12); tri.pretty_print() is extremely ugly: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 1 5 10 10 5 1 1 6 15 20 15 6 1 1 7 21 35 35 21 7 1 1 8 28 56 70 56 28 8 1 1 9 36 84 126 126 84 36 9 1 1 10 45 120 210 252 210 120 45 10 1 1 11 55 165 330 462 462 330 165 55 11 1 How can I dynamically adjust the spacing between the elements so that the output looks more like an equilateral triangle? Second I don't like the recursive function, is there any way that I can get rid of the recursive function and calculate the values using the recursive relationship by iteration, while remembering already computed numbers? Third, is there a data structure more efficient than my nested lists for the same data? I have thought of numpy.array but arrays need each row to have the same length and arrays can't grow. Finally can my algorithm be optimized further? 
The data after calling tri.at(16, 5) is: [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1], [1, 5, 10, 10, 5, 1], [1, 6, 15, 20, 15, 6, 1], [1, 7, 21, 35, 35, 21, 0, 1], [1, 8, 28, 56, 70, 56, 0, 0, 1], [1, 9, 36, 84, 126, 126, 0, 0, 0, 1], [1, 10, 45, 120, 210, 252, 0, 0, 0, 0, 1], [1, 11, 55, 165, 330, 462, 0, 0, 0, 0, 0, 1], [1, 12, 66, 220, 495, 792, 0, 0, 0, 0, 0, 0, 1], [1, 0, 78, 286, 715, 1287, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 364, 1001, 2002, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 1365, 3003, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 4368, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]] I know I am already doing memoization, and that is not what I meant. I want to calculate the unfilled values without ever using a recursive function. Instead of using the recursive definition and going backwards, we can somehow use iteration, start from where the lowest value that was filled and needed for the query, and iterate through all needed numbers, make two copies of each number and go forwards, until the requested index was reached. The needed numbers can be computed using indexing and mathematics. In this way there is no recursive function call at all. Update I have rewrote my code to the following: class Pascal_Triangle: def __init__(self, end_row: int = 2, opt: int = 0): self.data = [[1], [1, 1]] self.length = 2 self.opt = [self.add_rows_o0, self.add_rows_o1] if end_row > 2: self.opt[opt](end_row) def add_rows_o0(self, end_row: int): last_row = self.data[-1] for _ in range(self.length, end_row): self.data.append( last_row := [1] + [a + b for a, b in zip(last_row, last_row[1:])] + [1] ) self.length = end_row def add_rows_o1(self, end_row: int): last_row = self.data[-1] for n in range(self.length, end_row): mid = n // 2 + 1 row = [0] * (n - 1) m = n - 2 for i, (a, b) in enumerate(zip(last_row, last_row[1:mid])): row[i] = row[m - i] = a + b self.data.append(last_row := [1] + row + [1]) self.length = end_row def pretty_print(self): longest = len(str(self.data[-1][self.length // 2])) line_length = (longest + 1) * self.length for row in self.data: print(" ".join(f"{n:{longest}}" for n in row).center(line_length)) I have used list comprehension to generate new rows and got rid of the expensive recursive function call. The code is much faster as a result. However, I tried to exploit the symmetric nature of the rows and only calculate half of the row and mirror it to get the other half. In this way the number of calculations would be halved. But it is actually slower: In [257]: %timeit Pascal_Triangle(64, 1) 237 µs ± 7.43 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [258]: %timeit Pascal_Triangle(64, 0) 224 µs ± 9.75 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [259]: Pascal_Triangle(64, 1).data == Pascal_Triangle(64, 0).data Out[259]: True Why is it slower? And how can I actually skip the unnecessary calculations and make it faster? | You can improve the pretty_print by getting the length (as string) of the longest number and using that as the basis for all the numbers' width; also using str.center might be easier. def pretty_print(self): longest = max(len(str(n)) for row in self.data for n in row) line_length = (longest + 1) * self.length for row in self.data: print(' '.join(f'{n:{longest}}' for n in row).center(line_length)) With this check if val := self.data[row][col]: return val, you are already doing that, and each value is calculated exactly once. 
You could make it purely iterative in fill_values directly, and drop the at method entirely, though: def fill_values(self): for row in range(2, self.length): for col in range(1, row): self.data[row][col] = self.data[row - 1][col - 1] + self.data[row - 1][col] I'd say a nested list-of-lists is a good choice here, and your algorithm (even before 2.) should already be as efficient as possible. Having said that, I noticed you have a comb function, so maybe your goal is not really to print the triangle, but to calculate individual values. In this case, there are two possible ways to make you code faster (although I did not actually time it). First, you could use a dict as data structure and then only calculate the values that are actually needed to find the value at a given row and col. In the worst case (centre of bottom row) that will be 50% of the entire triangle, and on average much less than that. class Pascal_Triangle: def __init__(self): self.data = {(0, 0): 1} def fill_rows(self, rows: int): # actually, just the last row would be enough here... for row in range(rows + 1): for col in range(row + 1): self.at(row, col) def at(self, row: int, col: int) -> int: if not 0 <= col <= row: raise ValueError(f'column position {col} is invalid for row {row}') if (row, col) not in self.data: self.data[row, col] = 1 if col in (0, row) else self.at(row - 1, col - 1) + self.at(row - 1, col) return self.data[row, col] def pretty_print(self): longest = max(len(str(n)) for n in self.data.values()) max_row = max(row for (row, col) in self.data) line_length = (longest + 1) * max_row for row in range(max_row+1): print(' '.join(str(self.data.get((row,col), "")).center(longest) for col in range(row + 1)).center(line_length)) This version still has the fill_rows and pretty_print functions (nicely showing which values were actually calculated). If you don't need those, you could also just make at a function and use functools.cache to cache the values... from functools import cache @cache def at(row: int, col: int) -> int: if not 0 <= col <= row: raise ValueError(f'column position {col} is invalid for row {row}') return 1 if col in (0, row) else at(row - 1, col - 1) + at(row - 1, col) ... or calculate the binomial coefficient directly using factorials: from math import factorial as fac def comb(n, k): return fac(n) // (fac(k)*(fac(n-k))) | 2 | 2 |
76,747,279 | 2023-7-23 | https://stackoverflow.com/questions/76747279/pyqt-how-to-use-qmessagebox-open-and-connect-callback | How can I connect a callback when using QMessageBox.open() ? In the documentation it says: QMessageBox.open(receiver, member) Opens the dialog and connects its finished() or buttonClicked() signal to the slot specified by receiver and member . If the slot in member has a pointer for its first parameter the connection is to buttonClicked() , otherwise the connection is to finished() . What should I use for receiver and member in the below example: import sys from PyQt6.QtWidgets import ( QApplication, QMainWindow, QPushButton, QMessageBox ) def show_dialog(app, window): mbox = QMessageBox(window) mbox.setIcon(QMessageBox.Icon.Information) mbox.setText("Message box pop up window") mbox.setWindowTitle("QMessageBox Example") mbox.setStandardButtons(QMessageBox.StandardButton.Ok | QMessageBox.StandardButton.Cancel) def button_clicked(): print("message box button clicked") mbox.done(0) app.quit() # How to connect button_clicked() to the standard buttons of the message box? mbox.open(receiver, member) # What to use for receiver and member? def main(): app = QApplication(sys.argv) window = QMainWindow() window.resize(330, 200) window.setWindowTitle("Testing") button = QPushButton(window) button.setText("Button1") button.setGeometry(100,100,100,30) def button_clicked(): print("button clicked") show_dialog(app, window) button.clicked.connect(button_clicked) window.show() app.exec() if __name__ == '__main__': main() | After some trial and error I found that I could simply use mbox.open(button_clicked) but I am still not sure how I can determine from within the callback which of the two buttons was clicked. Also one should not call QMessageBox.done() from within the callback since that would lead to infinite recursion. Here is how I call QMessageBox.open() which works, but I am still missing the last piece of the puzzle: How to determine which button was clicked? def show_dialog(app, window): mbox = QMessageBox(window) mbox.setIcon(QMessageBox.Icon.Information) mbox.setText("Message box pop up window") mbox.setWindowTitle("QMessageBox Example") mbox.setStandardButtons(QMessageBox.StandardButton.Ok | QMessageBox.StandardButton.Cancel) #@pyqtSlot() # It is not necessary to use this decorator? def button_clicked(): print("message box button clicked") # But which button? #mbox.done(0) # Do not call this method as it will lead to infinite recursion app.quit() mbox.open(button_clicked) print("Dialog opened") Update: How to determine which button was clicked? You can use mbox.clickedButton().text(), for example like this: def button_clicked(): if mbox.clickedButton().text() == "&OK": print("OK clicked") else: print("Cancel clicked") app.quit() | 3 | 0 |
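To answer the remaining puzzle without comparing translated button text (the '&OK' string depends on locale and mnemonics), a hedged sketch: map the clicked button back to its StandardButton value via QMessageBox.standardButton.

```python
def button_clicked():
    clicked = mbox.standardButton(mbox.clickedButton())
    if clicked == QMessageBox.StandardButton.Ok:
        print("OK clicked")
    else:                                   # with these buttons, the only other choice is Cancel
        print("Cancel clicked")
    app.quit()

mbox.open(button_clicked)
```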
76,724,939 | 2023-7-19 | https://stackoverflow.com/questions/76724939/there-is-no-such-driver-by-url-https-chromedriver-storage-googleapis-com-lates | I recently updated my Google Chrome browser to version 115.0.5790.99 and I'm using Python webdrivermanager library (version 3.8.6) for Chrome driver management. However, since this update, when I call the ChromeDriverManager().install() function, I encounter the following error: There is no such driver by URL https://chromedriver.storage.googleapis.com/LATEST_RELEASE_115.0.5790 Steps to reproduce the issue: Update Google Chrome to version 115.0.5790.99. Execute the following Python code: from webdriver_manager.chrome import ChromeDriverManager driver_path = ChromeDriverManager().install() capture: | Selenium Manager With the availability of Selenium v4.6 and above you don't need to explicitly download ChromeDriver, GeckoDriver or any browser drivers as such using webdriver_manager. You just need to ensure that the desired browser client i.e. google-chrome, firefox or microsoft-edge is installed. Selenium Manager is the new tool integrated with selenium4 that would help to get a working environment to run Selenium out of the box. Beta 1 of Selenium Manager will configure the browser drivers for Chrome, Firefox, and Edge if they are not present on the PATH. Solution As a solution you can simply do: from selenium import webdriver from selenium.webdriver.chrome.options import Options options = Options() options.add_argument("start-maximized") driver = webdriver.Chrome(options=options) driver.get("https://www.google.com/") | 21 | 4 |
76,744,865 | 2023-7-22 | https://stackoverflow.com/questions/76744865/python-encoding-all-possible-5-combinations-into-an-integer | I've got a set of 25 integers, ranging from 0 to 24, and my application involves selecting 5 of them (there are no repeated values, no value can be selected more than once), obtaining a combination like this one: [15, 7, 12, 3, 22]. It's important to note that the previous combination is considered equal to [7, 22, 12, 15, 3]: the order doesn't matter, only the values do. By applying the binomial coefficient (25 choose 5) we can find out that there are 53,130 possible combinations. I want to encode all the possible combinations into an integer so that all values from 0 to 53129 are linked to a combination. | Use more_itertools.nth_combination, which can compute the nth combination without computing all previous ones: # pip install more-itertools from more_itertools import nth_combination nth_combination(range(25), 5, 0) # (0, 1, 2, 3, 4) nth_combination(range(25), 5, 42) # (0, 1, 2, 5, 7) nth_combination(range(25), 5, 53129) # (20, 21, 22, 23, 24) You can make things more interesting by combining the above with functools.partial/cache: from functools import partial, cache encode = partial(nth_combination, range(25), 5) # or with a cache # encode = cache(partial(nth_combination, range(25), 5)) encode(0) # (0, 1, 2, 3, 4) encode(42) # (0, 1, 2, 5, 7) encode(53129) # (20, 21, 22, 23, 24) efficiency The advantage of nth_combination is that for large ranges, you don't need to compute all n-1 preceding combinations to access the nth one. You also don't need to store all combinations, which makes it both CPU and memory efficient. Combined with cache, this is a compromise between memory and CPU that avoids recomputing the same value twice if the same index is requested several times. However, if you must access all values eventually, then pre-computing all combinations as shown by @ti7 will be more straightforward and efficient, but it requires computing and storing all values from the beginning: from itertools import combinations encode = list(combinations(range(25), 5)) encode[0] # (0, 1, 2, 3, 4) encode[42] # (0, 1, 2, 5, 7) encode[53129] # (20, 21, 22, 23, 24) | 2 | 5 |
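A quick consistency check tying the two approaches from the answer together: the lazy nth_combination and the precomputed list agree on every index, and the count matches the 53,130 from the question.

```python
from itertools import combinations
from more_itertools import nth_combination

all_combos = list(combinations(range(25), 5))
assert len(all_combos) == 53130
# nth_combination follows the same lexicographic order as itertools.combinations
assert all(nth_combination(range(25), 5, i) == c for i, c in enumerate(all_combos))
```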
76,744,939 | 2023-7-22 | https://stackoverflow.com/questions/76744939/importerror-using-the-trainer-with-pytorch-requires-accelerate-0-20-1 | Please help me: when I tried to use the transformers Trainer in my Google Colab, I got this error: ImportError: Using the Trainer with PyTorch requires accelerate=0.20.1: Please run pip install transformers[torch] or pip install accelerate -U NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. transformers - GPT - pytorch | Steps: run pip install accelerate -U, then restart the notebook and run all cells again; you'll see it resolves the error. | 3 | 1 |
76,742,725 | 2023-7-22 | https://stackoverflow.com/questions/76742725/python-flask-application-in-azure-web-app-only-showing-default-page | I have been struggling for days to deploy a Python web app with Flask on Azure, but all I ever get is the default page. This is the address: https://el-priser.azurewebsites.net/ This is my GitHub repo: https://github.com/Michaelgimm/energinet-elpris I also tried the Azure guide for a test app, but it does the same. The deployment succeeds and I can see all the files when accessing through SSH via this link: https://el-priser.scm.azurewebsites.net/ Any ideas? I have tried everything I can think of. What I expected to see was my app. In the structure of my app I have a main.py, but I placed index.html in a folder called templates. | I tried to deploy your application to an Azure web app through VS Code. I could deploy the application, but I only saw the default page, and a few times it also led me to an Application Error. After many trials, I figured out the resolution for this issue. To run the deployed Flask application, we have to add a few settings in the web app. Go to your App Service => Settings => Configuration => Application Settings => click Add new application setting and add these settings: 1. SCM_DO_BUILD_DURING_DEPLOYMENT=1 2. WEBSITES_CONTAINER_START_TIME_LIMIT=1800 3. WEBSITES_PORT=<port_number_of_your_application> Go to General Settings in the Configuration of your web app and add the startup command gunicorn --bind=0.0.0.0 main:app. As your Flask app's main module is main.py and the Flask app object in that file is named app, use the startup command given above. Restart the web app after adding all these settings. Now you will be able to access your application (index and prices pages). Response: I could access both the index and prices pages. | 3 | 3 |
76,741,158 | 2023-7-21 | https://stackoverflow.com/questions/76741158/python-regex-findall-not-finding-all-matches-of-shorter-length | How can I find all matches that don't necessarily consume all characters with * and + modifiers? import regex as re matches = re.findall("^\d+", "123") print(matches) # actual output: ['123'] # desired output: ['1', '12', '123'] I need the matches to be anchored to the start of the string (hence the ^), but the + doesn't even seem to be considering shorter-length matches. I tried adding overlapped=True to the findall call, but that does not change the output. Making the regex non-greedy (^\d+?) makes the output ['1'], overlapped=True or not. Why does it not want to keep searching further? I could always make shorter substrings myself and check those with the regex, but that seems rather inefficient, and surely there must be a way for the regex to do this by itself. s = "123" matches = [] for length in range(len(s)+1): matches.extend(re.findall("^\d+", s[:length])) print(matches) # output: ['1', '12', '123'] # but clunky :( Edit: the ^\d+ regex is just an example, but I need it to work for any possible regex. I should have stated this up front, my apologies. | You could use overlapped=True with the PyPi regex module and reverse searching (?r) Then reverse the resulting list from re.findall import regex as re res = re.findall(r"(?r)^\d+", "123", overlapped=True) res.reverse() print(res) Output ['1', '12', '123'] See a Python demo. | 6 | 5 |
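A sanity-check sketch comparing the reversed, overlapped search with the brute-force prefix scan from the question on a longer, all-digit input (chosen so the naive scan produces no duplicate matches):

```python
import regex as re

s = "20230725"
fast = re.findall(r"(?r)^\d+", s, overlapped=True)
fast.reverse()
slow = [m.group() for m in (re.match(r"^\d+", s[:k]) for k in range(1, len(s) + 1)) if m]
assert fast == slow   # ['2', '20', '202', ..., '20230725']
```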
76,741,476 | 2023-7-21 | https://stackoverflow.com/questions/76741476/how-can-i-create-a-table-with-a-uuid-column-in-a-postgres-db-using-sqlalchemy | I have created some tables using SQLALchemy 2.0's MappedAsDataclass and DeclarativeBase models, creating my tables like this: class Base(MappedAsDataclass, DeclarativeBase): """subclasses will be converted to dataclasses""" class Post(Base): __tablename__ = 'allposts' post_id: Mapped[int] = mapped_column(init=False, primary_key=True) collection_id: Mapped[int] = mapped_column(ForeignKey("collections.collection_id"), default=None, nullable=True) user: Mapped[str] = mapped_column(default=None, nullable=True) title: Mapped[str] = mapped_column(default=None, nullable=True) description: Mapped[str] = mapped_column(default=None, nullable=True) date_added: Mapped[datetime.datetime] = mapped_column(default=None, nullable=True) date_modified: Mapped[Optional[datetime.datetime]] = mapped_column(default=None, nullable=True) tags: Mapped[list] = mapped_column(ARRAY(TEXT, dimensions=1), default=None, nullable=True) views: Mapped[int] = mapped_column(default=0, nullable=True) Post.__table__ Base.metadata.create_all(engine) I want to use a GUID as the PK, but I am having trouble creating a UUID type column via SQLALchemy using DeclarativeBase/MappedAsDataclass I was trying stuff like this to make a UUID column, but no luck: from sqlalchemy.dialects.postgresql import UUID class Post(Base): __tablename__ = 'allposts' post_id: Mapped[int] = mapped_column(init=False, primary_key=True) guid: Mapped[uuid.uuid1] = mapped_column(default=None, nullable=True) collection_id: Mapped[int] = mapped_column(ForeignKey("collections.collection_id"), default=None, nullable=True) user: Mapped[str] = mapped_column(default=None, nullable=True) title: Mapped[str] = mapped_column(default=None, nullable=True) description: Mapped[str] = mapped_column(default=None, nullable=True) date_added: Mapped[datetime.datetime] = mapped_column(default=None, nullable=True) date_modified: Mapped[Optional[datetime.datetime]] = mapped_column(default=None, nullable=True) tags: Mapped[list] = mapped_column(ARRAY(TEXT, dimensions=1), default=None, nullable=True) views: Mapped[int] = mapped_column(default=0, nullable=True) I tried different varients of uuid.uuid1, but if that's the issue I didn't guess the right one. There are some SQLAlchemy examples of UUID columns here, but I can't seem to figure how to translate it into the model I am using. | Here is an example of how you can do it on a PostgreSQL server: import uuid from sqlalchemy import create_engine, select, text, types from sqlalchemy.orm import (DeclarativeBase, Mapped, MappedAsDataclass, mapped_column, sessionmaker) DATABASE_URL = "postgresql+psycopg2://test_username:test_password@localhost:5432/test_db" engine = create_engine( DATABASE_URL, echo=True, ) Session = sessionmaker( bind=engine, ) class Base(MappedAsDataclass, DeclarativeBase): pass class Post(Base): __tablename__ = 'post' guid: Mapped[uuid.UUID] = mapped_column( types.Uuid, primary_key=True, init=False, server_default=text("gen_random_uuid()") # use what you have on your server ) if __name__ == "__main__": Base.metadata.create_all(engine) with Session() as session: post1 = Post() post2 = Post() session.add_all([post1, post2]) session.commit() with Session() as session: for post in session.scalars(select(Post)): print(post) Here, it's using a sever default to have the server generate the UUID for us. 
CREATE TABLE post ( guid UUID DEFAULT gen_random_uuid() NOT NULL, PRIMARY KEY (guid) ) The result will be: Post(guid=UUID('b8b01f06-8daf-4c2c-9a7b-c967c752a7bf')) Post(guid=UUID('7b94c8a3-ff0e-4d09-92e9-caf4377370e1')) | 4 | 6 |
76,736,925 | 2023-7-21 | https://stackoverflow.com/questions/76736925/df-to-sql-missing-optional-dependency-pandas | SOLVED: I'm having an issue using sqlalchemy and snowflake.connector to push my df to a new table in Snowflake but I am prompted with this error: raise errors.MissingDependencyError(self._dep_name) snowflake.connector.errors.MissingDependencyError: Missing optional dependency: pandas Edit: Accidentally posted without adding in more detail! I am importing pandas, and all necessary packages. I do not understand why the error is suggesting pandas is missing as it is used elsewhere many times in my scripts. Versions are: snowflake-connector-python 3.0.2 snowflake-sqlalchemy 1.4.7 SQLAlchemy 1.4.49 Below is a stripped out example of my code: import pandas as pd from snowflake.connector.pandas_tools import pd_writer from sqlalchemy.types import String from snowflake.sqlalchemy import URL from sqlalchemy import create_engine engine = create_engine(URL( user='service account', password='password', account='account', database='database', schema='schema', warehouse='warehouse', role="role" )) df.columns = map(lambda x: str(x).upper(), df.columns) df.to_sql(name=table_name.lower(), con=engine, schema="schema", index=False, if_exists='append', method=pd_writer, dtype={'field': String}) engine.dispose() Solution: Run the following in terminal/powershell: pip install snowflake-connector-python[pandas] | Moving the answer from the question to the answer section, so it doesn't look as unanswered: pip install snowflake-connector-python[pandas] | 2 | 8 |
76,731,084 | 2023-7-20 | https://stackoverflow.com/questions/76731084/calculate-the-rolling-mean-by-group-with-conditions-for-each-rows-in-a-pandas-da | I'm trying to create a rolling_median column as follow: For each row: Filter the DataFrame to only keep rows of the same Category, having Date_B < my_current_row.Date_A Sort this filtered DataFrame by Date_B Calculate the median of Value using the N last entries with my filters : for example, if N=2, I will use the last 2 rows that respect my filtering to estimate the mean for my current row Doing it using iterrows is doable but is not scalable. Do you have an idea how to do it in a more "Vectorized" way with less complexity ? Please find a snippet on how to generate a similar dataframe: import pandas as pd import numpy as np # Sample DataFrame (replace this with your actual DataFrame) data = { 'Category': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'], 'Date_A': ['2023-07-08', '2023-07-09', '2023-07-11', '2023-07-12', '2023-07-13', '2023-07-08', '2023-07-09', '2023-07-11', '2023-07-12', '2023-07-13'], 'Date_B': ['2023-07-08', '2023-07-10', '2023-07-12', '2023-07-12', '2023-07-13', '2023-07-08', '2023-07-10', '2023-07-12', '2023-07-12', '2023-07-13'], 'Value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55] } df = pd.DataFrame(data) # Convert 'Date_A' and 'Date_B' columns to datetime type df['Date_A'] = pd.to_datetime(df['Date_A']) df['Date_B'] = pd.to_datetime(df['Date_B']) df['rolling_mean'] = np.nan # Last 2 values N=2 for category in df.Category.unique(): df_cat = df[df.Category==category] for idx, row in df_cat.iterrows(): rm = df_cat[df_cat.Date_B < row.Date_A][:N].Value.mean() df.at[idx, 'rolling_mean'] = rm df Category Date_A Date_B Value rolling_mean 0 A 2023-07-08 2023-07-08 10 NaN 1 A 2023-07-09 2023-07-10 15 10.0 2 A 2023-07-11 2023-07-12 20 12.5 3 A 2023-07-12 2023-07-12 25 12.5 4 A 2023-07-13 2023-07-13 30 12.5 5 B 2023-07-08 2023-07-08 35 NaN 6 B 2023-07-09 2023-07-10 40 35.0 7 B 2023-07-11 2023-07-12 45 37.5 8 B 2023-07-12 2023-07-12 50 37.5 9 B 2023-07-13 2023-07-13 55 37.5 | Probably not the most efficient way, but seems a little faster than the double for-loop: import pandas as pd import numpy as np # Sample DataFrame (replace this with your actual DataFrame) data = { 'Category': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'], 'Date_A': ['2023-07-08', '2023-07-09', '2023-07-11', '2023-07-12', '2023-07-13', '2023-07-08', '2023-07-09', '2023-07-11', '2023-07-12', '2023-07-13'], 'Date_B': ['2023-07-08', '2023-07-10', '2023-07-12', '2023-07-12', '2023-07-13', '2023-07-08', '2023-07-10', '2023-07-12', '2023-07-12', '2023-07-13'], 'Value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55] } df = pd.DataFrame(data) # Convert 'Date_A' and 'Date_B' columns to datetime type df['Date_A'] = pd.to_datetime(df['Date_A']) df['Date_B'] = pd.to_datetime(df['Date_B']) df['rolling_mean'] = np.nan # Last 2 values N=2 df["rolling_mean"] = df.groupby("Category").apply( lambda grp: grp["Date_A"].map( lambda a: grp.loc[grp.Date_B < a, "Value"].head(N).mean() ), ).droplevel(level="Category") print(df) Category Date_A Date_B Value rolling_mean 0 A 2023-07-08 2023-07-08 10 NaN 1 A 2023-07-09 2023-07-10 15 10.0 2 A 2023-07-11 2023-07-12 20 12.5 3 A 2023-07-12 2023-07-12 25 12.5 4 A 2023-07-13 2023-07-13 30 12.5 5 B 2023-07-08 2023-07-08 35 NaN 6 B 2023-07-09 2023-07-10 40 35.0 7 B 2023-07-11 2023-07-12 45 37.5 8 B 2023-07-12 2023-07-12 50 37.5 9 B 2023-07-13 2023-07-13 55 37.5 Time comparison via %%timeit Original %%timeit for category in 
df.Category.unique(): df_cat = df[df.Category==category] for idx, row in df_cat.iterrows(): rm = df_cat[df_cat.Date_B < row.Date_A][:N].Value.mean() df.at[idx, 'rolling_mean'] = rm 7.42 ms ± 277 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Proposed %%timeit df["rolling_mean"] = df.groupby("Category").apply( lambda grp: grp["Date_A"].map( lambda a: grp.loc[grp.Date_B < a, "Value"].head(N).mean() ), ).droplevel(level="Category") 4.55 ms ± 214 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Some explanation: # this handles the initial for-loop over "Categories" # it returns an Iterator of tuples with the first # item being the group name and the second being the dataframe group df.groupby("Category") # `grp` is the dataframe part returned by the `.groupby` above. # it has the same columns as the original, but filtered down # to the rows in the selected group. .apply( lambda grp: grp["Date_A"]... ) # `map` iterates over the values of the declared series (i.e. "Date_A") # `a` is one of the values in "Date_A" and it is compared to the # series "Date_B". # `grp` is filtered to the rows where "Date_B" is < `a`. # we keep the first `N` rows of the returned dataframe and take the mean .map( lambda a: grp.loc[grp.Date_B < a, "Value"].head(N).mean() ) # we drop the newly generated "Category" index so we can join back # with the original `df`. .droplevel(level="Category") References pandas.core.groupby.DataFrameGroupBy.apply pandas.Series.apply pandas.Series.map | 3 | 1 |
76,734,236 | 2023-7-20 | https://stackoverflow.com/questions/76734236/pydantic-v2-field-validator-got-an-unexpected-keyword-argument-pre | I'm migrating from v1 to v2 of Pydantic and I'm attempting to replace all uses of the deprecated @validator with @field_validator. However, I was previously using the pre validator argument and after moving to @field_validator, I'm receiving the following error: TypeError: field_validator() got an unexpected keyword argument 'pre' Has the use of pre also been deprecated in V2? It seems it's still referenced in the V2 validator documentation though with the top-of-page warning: This page still needs to be updated for v2.0. Hoping somebody else has already worked through this and can suggest the best route forward. Thanks! | It seems it's still referenced in the V2 validator documentation [...] Not really. At least nowhere in the field validators section. (There is just an outdated mention of it in the "model validators" section as of today.) However, there is also a link to the detailed API reference for the field_validator decorator showing exactly what arguments you can pass to it. The closest thing to the v1 pre=True argument would now be the mode="before" argument. Say you have the following code in Pydantic v1: from pydantic import BaseModel, validator class Foo(BaseModel): x: int @validator("x", pre=True) def do_stuff(cls, v: object) -> object: if v is None: return 0 return v print(Foo(x=None)) # x=0 You would have to rewrite it like this in v2: from pydantic import BaseModel, field_validator class Foo(BaseModel): x: int @field_validator("x", mode="before") def do_stuff(cls, v: object) -> object: if v is None: return 0 return v print(Foo(x=None)) # x=0 I expect the maintainers will gradually rewrite and expand the docs to cover more details. | 7 | 10 |
76,729,345 | 2023-7-20 | https://stackoverflow.com/questions/76729345/is-it-possible-to-chain-several-discord-modals-discord-py | My goal is to chain or link several different discord modals after one-another in order to create a continuous form since Discord does not allow to add more than 5 items or "questions" per modal. The goal is to have modal_1 and modal_2 with 5 questions in each. Every time a command is ran by a user, modal_1 would be displayed. After completion, the user would submit modal_1, after which modal_2 would appear. Sadly, all my tries were unsuccessful, with Discord not closing the first modal after submitting it and instead returning the error: "Something went wrong, try again". This error is not a code error, but inside of the Discord modal. Here is a code snippet to illustrate what I am doing: class Modal_2(discord.ui.Modal,title="Modal_2"): question_1 = discord.ui.TextInput(label="That's the second question") asnyc def on_sumbit(self,interaction:discord.Interaction): await interaction.response.send_message(content="Thanks for your input") class Modal_1(discord.ui.Modal,title="Modal_1"): question_1 = discord.ui.TextInput(label="That's the first question") asnyc def on_sumbit(self,interaction:discord.Interaction): await interaction.response.send_modal(Modal_2()) class main(commands.Cog): def __init__(self,bot): self.bot = bot @app_commands.command(name="survey") async def main(self,interaction:discord.Interaction) await interaction.response.send_modal(Modal_1()) After the command is run, shows the first modal. Though when submitted, the second modal never shows up nor is the first modal dismissed. It only returns the error: "Something went wrong, try again" within the first modal after submitting it, but now actual "code" error or exception is shown in the console. | No, you can't. Looking in this section of Discord's Developers' Portal, you can read that MODAL can't be the callback of a MODAL_SUBMIT interaction. Depending on what the data is, I would either use a form webpage or try to do something with the View Widgets, especially TextInputs, since you can send as many messages with these as you want. | 3 | 1 |
76,734,284 | 2023-7-20 | https://stackoverflow.com/questions/76734284/copying-bytes-object-to-shared-memory-list-truncates-trailing-zeros | I'm trying to copy large bytes objects into a shareable list that's been initialized with bytes objects of large enough size. When the bytes object I'm copying over has trailing zeros (i.e. final values are 0x00), the new entry in the shared memory list is missing these zeros. Zeros in the middle of the data do not cause early truncation. A minimal example is below with the output. Code: from multiprocessing import shared_memory as shm shmList = shm.ShareableList([bytes(50)]) testBytes = bytes.fromhex("00112233445566778899aabbccddeeff0000") shmList[0] = testBytes print(testBytes) print(shmList[0]) shmList.shm.close() shmList.shm.unlink() Output: b'\x00\x11"3DUfw\x88\x99\xaa\xbb\xcc\xdd\xee\xff\x00\x00' b'\x00\x11"3DUfw\x88\x99\xaa\xbb\xcc\xdd\xee\xff' I would expect the bytes object would get copied over in full, including the trailing zeros. This seems like a "feature": sourcecode, but why? Are there any straightforward workarounds? | CPython seems to explicitly .rstrip(b'\x00'), but there doesn't seem to be any mention of this in the documentation. There is nothing documented at all about the storage neither in terms of semantics nor implementation (except that it should use (or be compatible with) the struct module). Personally, I'd say the way it is handled currently deserves to be considered a bug. Valid bytes objects are allowed to have trailing zero bytes, so the current implementation does not preserve specific valid values where it should do. You could file an issue to CPython about this. | 4 | 5 |
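A small workaround sketch for the trailing-zero stripping, assuming payloads of at most 50 bytes as in the example: append a non-zero sentinel byte before storing so the value never ends in \x00, and drop it when reading back. The slot is created one byte larger to leave room for the sentinel:

from multiprocessing import shared_memory as shm

PAYLOAD = 50
shm_list = shm.ShareableList([bytes(PAYLOAD + 1)])   # one extra byte for the sentinel

test_bytes = bytes.fromhex("00112233445566778899aabbccddeeff0000")
shm_list[0] = test_bytes + b"\x01"                   # sentinel keeps the trailing zeros alive
restored = shm_list[0][:-1]                          # strip the sentinel on the way out
assert restored == test_bytes

shm_list.shm.close()
shm_list.shm.unlink()

Storing the true length alongside the payload and re-padding with zeros on read would also work; the sentinel is just the smallest change to the original code.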
76,733,270 | 2023-7-20 | https://stackoverflow.com/questions/76733270/avoid-inner-loop-while-iterating-through-nested-data-performance-improvement | I am working with a large dataset (1 million + rows) and am tasked with counting up each truthy value for each ID and generating a new dict. The solution I came up with works, but does very poor in regards to performance. I have a dictionary as follow: "data": { "employees": [ { "id": 274, "report": true }, { "id": 274, "report": false }, { "id": 276, "report": true }, { "id": 276, "report": true }, { "id": 278, "report": true }, { "id": 278, "report": false } ] } I am looking to create a new dictionary with each individual employee ID with a count of each true value. Something like this: {274: {'id': 274, 'count': 1}, 276: {'id': 276, 'count': 2}, 278: {'id': 278, 'count': 1}} My current code: final_dict = {} for employee in result["data"]["employees"]: if employee["id"] not in final_dict.keys(): final_dict[employee["id"]] = {"id": employee["id"]} grouped_results = [res for res in result["data"]["employees"] if employee["id"] == res['id']] final_dict[employee["id"]]["count"] = len( [res for res in grouped_results if res["report"]] ) return final_dict This does what it needs to do, but with the amount of data that is being processed it does very poorly. I am looking for some advice on how to avoid the multiple loops, in order to improve performance. Any advice helps! | There is no need to make multiple passes, just accumulate as you go so it is linear time not quadratic result = {} for employee in input_dict["data"]["employees"]: _id = employee["id"] if _id not in result: # note id is being added redundantly maybe rethink this result[_id] = dict(id=_id, count=0) result[_id]["count"] += employee["report"] | 4 | 2 |
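For the same single-pass idea, collections.Counter can do the tallying. This sketch reuses the result structure from the question and assumes every id should appear even when it has no true reports:

from collections import Counter

employees = result["data"]["employees"]
true_counts = Counter(e["id"] for e in employees if e["report"])
final_dict = {e["id"]: {"id": e["id"], "count": true_counts.get(e["id"], 0)} for e in employees}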
76,733,588 | 2023-7-20 | https://stackoverflow.com/questions/76733588/calculate-mean-of-consecutive-repititions-of-values-in-pandas-dataframe-numpy | I have the following dataframe / numpy array: a = np.array([ 55, 55, 69, 151, 151, 151, 103, 151, 151, 103]) df = pd.DataFrame([ 55, 55, 69, 151, 151, 151, 103, 151, 151, 103], columns = ["AOI"]) The values in this array can range from 1 to 151. I need to calculate the average consecutive repetition of the values. This involves the following logic: 55 occurs 2 time then 69 occurs 1 time then 151 occurs 3 times then 103 occurs 1 time then 151 occurs 2 times then 103 occurs 1 time. So I have the following occurences for each value: 55: 2 69: 1 151: 3, 2 103: 1, 1 Now I need to calculate the mean of the occurences: 55: 2 69: 1 151: 2.5 ((3+2)/2) 103: 1 ((1+1)/2) Ideally, in the end I have a numpy array with 151 entries. Each entry is zero and at index 55 I have the value 2, at index 69 the value 1, at index 151 the value 2.5 and at index 103 the value 1: [0,0,0,0,...,2,0,0,...,1,0,0,0,...,1,0,0,0,....,2.5] How can this be achieved? Unfortunately, I am not even able to perform step 1. A simple groupby and count the rows does not work, as I need the count of consecutive occurences. | Here is one possible solution using pandas: df['group'] = (df.AOI != df.AOI.shift()).cumsum() x = df.groupby('group').agg({'AOI':('first', 'count')}).droplevel(0, axis=1) x = x.groupby('first')['count'].mean() x = x.reindex(range(0, x.index[-1] + 1), fill_value=0) # print(x) <- if you want pd.Series print(x.values) # print as numpy arrray Prints: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.5] | 2 | 2 |
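An alternative sketch without pandas, using itertools.groupby for the run-length step; it allocates 152 slots so that index 151 exists, which is an assumption about the intended output size:

from itertools import groupby
import numpy as np

a = np.array([55, 55, 69, 151, 151, 151, 103, 151, 151, 103])

run_lengths = {}
for value, run in groupby(a):                          # groups consecutive equal values
    run_lengths.setdefault(int(value), []).append(sum(1 for _ in run))

out = np.zeros(152)                                    # values range from 1 to 151
for value, lengths in run_lengths.items():
    out[value] = sum(lengths) / len(lengths)

print(out[55], out[69], out[151], out[103])            # 2.0 1.0 2.5 1.0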
76,729,145 | 2023-7-20 | https://stackoverflow.com/questions/76729145/explicit-matrix-multiplication-much-faster-than-numpy-matmul | In a Python code, I need at some point to multiply two large lists of 2x2 matrices, individually. In the code, both these lists are numpy arrays with shape (n,2,2). The expected result in another (n,2,2) array, where the matrix 1 is the result of the multiplication between matrix 1 of the first list and matrix 1 of the second list, etc. After some profiling, I found that matrix multiplication was a performance bottleneck. Out of curiosity, I tried writing the matrix multiplication "explicitly". Below is a code example with measured runtimes. import timeit import numpy as np def explicit_2x2_matrices_multiplication( mats_a: np.ndarray, mats_b: np.ndarray ) -> np.ndarray: matrices_multiplied = np.empty_like(mats_b) for i in range(2): for j in range(2): matrices_multiplied[:, i, j] = ( mats_a[:, i, 0] * mats_b[:, 0, j] + mats_a[:, i, 1] * mats_b[:, 1, j] ) return matrices_multiplied matrices_a = np.random.random((1000, 2, 2)) matrices_b = np.random.random((1000, 2, 2)) assert np.allclose( # Checking that the explicit version is correct matrices_a @ matrices_b, explicit_2x2_matrices_multiplication(matrices_a, matrices_b), ) print( # 1.1814142999992328 seconds timeit.timeit(lambda: matrices_a @ matrices_b, number=10000) ) print( # 1.1954495010013488 seconds timeit.timeit(lambda: np.matmul(matrices_a, matrices_b), number=10000) ) print( # 2.2304022700009227 seconds timeit.timeit(lambda: np.einsum('lij,ljk->lik', matrices_a, matrices_b), number=10000) ) print( # 0.19581600800120214 seconds timeit.timeit( lambda: explicit_2x2_matrices_multiplication(matrices_a, matrices_b), number=10000, ) ) As tested in the code, this function produces the same results as a regular matrix __matmul__ result. However what is not the same is the speed: on my machine, the explicit expression is up to 10 times faster. This is quite a surprising result to me. I would have expected the numpy expression to be faster or at least comparable to the longer Python version, not an order of magnitude slower as we see here. I would be curious for insights as to why the difference in performance is so drastic. I am running numpy version 1.25, and Python version 3.10.6. | TL;DR: all the methods provided in the question are very inefficient. In fact, Numpy is clearly sub-optimal and there is no way to compute this efficiently only with Numpy. Still, there are faster solutions than the ones provided in the question. Explanation and faster implementation The Numpy code make use of powerful generic iterators to apply a given computation (like a matrix multiplication) to multiple array slice. Such iterators are useful to implement broadcasting and also to generate a relatively simple implementation of einsum. The thing is they are also quite expensive when the number of iteration is huge and the array are small. This is exactly what happen in your use-case. While the overhead of Numpy's iterators can be mitigated by optimizing the Numpy code, there is no way to reduce the overhead to a negligible time in this specific use-case. Indeed, there is only 12 floating-point operations to perform per matrix. A relatively recent x86-64 processor can be able to compute each matrix in less than 10 nanoseconds using scalar FMA units. In fact, one can use SIMD instruction so to compute each matrix in only few nanosecond. 
First thing first, we can mostly remove the overhead of the internal Numpy iterator by doing the matrix multiplication ourself operating on vectors along the first axis. This is exactly what explicit_2x2_matrices_multiplication does! While explicit_2x2_matrices_multiplication should be significantly faster, it is still sub-optimal: it performes non-contiguous memory reads, creates several useless temporary arrays and each Numpy call introduce a small overhead. A faster solution is to write a Numba/Cython code so the underlying compiler can generate a very good sequence of instructions optimized for the 2x2 matrices. Good compilers can even generate SIMD instructions in this case which is something impossible for Numpy. Here is the resulting code: import numba as nb @nb.njit('(float64[:,:,::1], float64[:,:,::1])') def compute_fastest(matrices_a, matrices_b): assert matrices_a.shape == matrices_b.shape assert matrices_a.shape[1] == 2 and matrices_a.shape[2] == 2 n = matrices_a.shape[0] result_matrices = np.empty((n, 2, 2)) for k in range(n): for i in range(2): for j in range(2): result_matrices[k,i,j] = matrices_a[k,i,0] * matrices_b[k,0,j] + matrices_a[k,i,1] * matrices_b[k,1,j] return result_matrices Performance results Here are performance results on my machine with a i5-9600KF CPU for 1000x2x2 matrices: Naive einsum: 214 µs matrices_a @ matrices_b: 102 µs explicit_2x2_matrices_multiplication: 24 µs compute_fastest: 2.7 µs <----- Discussion The Numba implementation reach 4.5 GFlops. Each matrix is computed in only 12 CPU cycles (2.7 ns)! My machine is able to reach up to 300 GFlops in practice (theoretically 432 GFlops), but only 50 GFlops with 1 core and 12.5 GFlops using a scalar code (theoretically 18 GFlops). The granularity of the operation is too small for multiple thread to be useful (the overhead of spawning thread is at least few microseconds). Besides, a SIMD code can hardly saturate de FMA units because the input data layout requires SIMD shuffles so 50 GFlops is actually an optimistic upper bound. As a result, we can safely say that the Numba implementation is a pretty efficient implementation. Still, a faster code can be written thanks to SIMD instructions (I expect a speed-up of about x2 in practice). That being said, writing a native code using SIMD intrinsics of helping the compiler to generate a fast SIMD code is really far from being easy (not to mention the final code will be ugly and hard to maintain). Thus, an SIMD implementation might not worth the effort. | 4 | 6 |
76,730,133 | 2023-7-20 | https://stackoverflow.com/questions/76730133/summing-large-matrices-of-image-data-efficiently | How do I sum large (100, 1024, 1024) .npz files most efficiently? Is there a better file format to store python data of the given dimensions? I currently use this to sum the matrices: summed_matrix = np.sum([np.load(file) for file in files_list]) This exceeds my available memory. | Currently, you're loading all files at once and only then begin summing. If memory is a concern, a better approach would be to read and add one at a time: summed_matrix = np.zeros((100, 1024, 1024)) for file in files_list: summed_matrix += np.load(file) | 3 | 7 |
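One detail worth hedging here: np.load on a .npz archive returns an NpzFile rather than an array, so the array has to be pulled out by key. The sketch below reuses files_list from the question and assumes each file holds a single unnamed array saved with np.savez (default key "arr_0"), accumulating in float64:

import numpy as np

summed_matrix = np.zeros((100, 1024, 1024), dtype=np.float64)
for file in files_list:
    with np.load(file) as data:         # NpzFile; behaves like a dict of arrays
        summed_matrix += data["arr_0"]  # adjust the key if the arrays were saved with names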
76,722,536 | 2023-7-19 | https://stackoverflow.com/questions/76722536/with-python-textual-package-how-do-i-linearly-switch-between-different-screen | With textual I'd like to build a simple program which presents me with different options I can choose from using OptionList, but one by one, e.g. First "screen": what do you want to buy (Car/Bike)? +---------+ | Car | | > Bike | +---------+ bike And after I pressed/clicked on "Bike" I'd like to see the second 'screen' (with potentially different widgets): electric (yes/no)? +---------+ | Yes | | > No | +---------+ No The following code shows me the first list of options but I have no idea how to proceed: from textual.app import App, ComposeResult from textual.widgets import Footer, Header, OptionList, Static from textual import events, on class SelectType(Static): def compose(self) -> ComposeResult: yield OptionList( "Car", "Bike", ) @on(OptionList.OptionSelected) def selected(self, *args): return None # What to do here? class MainProgram(App[None]): def compose(self) -> ComposeResult: yield Header() yield Footer() yield SelectType() MainProgram().run() What to do now? I crawled the tutorial, guides, examples but it looks like they all show me how to build one set of widgets but I didn't find a way to make a transition between one input screen and another one.. | Depending on the scope and purpose of the application you're wanting to build, there's a few different approaches you could take here. Some options are: If it's a limited number of main choices with a handful of widgets making up the sub-questions, perhaps TabbedContent would be what you're looking for. If you want a wee bit more control, you could wire things up on a single screen using ContentSwitcher. You could also build a "create the DOM as you go" approach by creating your initial question(s) with compose and then using a combination of mount and remove. As suggested by Will in his answer, one very likely approach would be to use a Screen or two. With a little bit of thought you could probably turn it into quite a flexible and comprehensive application for asking questions. What follows is a very simplistic illustration of some of the approaches you could take. In it you'll find I've only put together a "bike" screen (with some placeholder questions), and only put in a placeholder screen for a car. Hopefully though it will illustrate some of the key ideas. What's important here is that it uses ModalScreen and the screen callback facility to query the user and then get the data back to the main entry point. There are, of course, a lot of details "left for the reader"; do feel free to ask more about this if anything isn't clear in the example. 
from typing import Any from textual import on from textual.app import App, ComposeResult from textual.widgets import Button, OptionList, Label, Checkbox, Pretty from textual.widgets.option_list import Option from textual.screen import ModalScreen class SubScreen(ModalScreen): DEFAULT_CSS = """ SubScreen { background: $panel; border: solid $boost; } """ class CarScreen(SubScreen): def compose(self) -> ComposeResult: yield Label("Buy a car!") yield Label("Lots of car-oriented widgets here I guess!") yield Button("Buy!", id="buy") yield Button("Cancel", id="cancel") @on(Button.Pressed, "#buy") def buy_it(self) -> None: self.dismiss({ "options": "everything -- really we'd ask" }) @on(Button.Pressed, "#cancel") def cancel_purchase(self) -> None: self.dismiss({}) class BikeScreen(SubScreen): def compose(self) -> ComposeResult: # Here we compose up the question screen for a bike. yield Label("Buy a bike!") yield Checkbox("Electric", id="electric") yield Checkbox("Mudguard", id="mudguard") yield Checkbox("Bell", id="bell") yield Checkbox("Wheels, I guess?", id="wheels") yield Button("Buy!", id="buy") yield Button("Cancel", id="cancel") @on(Button.Pressed, "#buy") def buy_it(self) -> None: # The user has pressed the buy button, so we make a structure that # has a key/value mapping of the answers for all the questions. Here # I'm just using the Checkbox; in a full application you'd want to # take more types of widgets into account. self.dismiss({ **{"type": "bike"}, **{ question.id: question.value for question in self.query(Checkbox) } }) @on(Button.Pressed, "#cancel") def cancel_purchase(self) -> None: # Cancel was pressed. So here we'll return no-data. self.dismiss({}) class VehiclePurchaseApp(App[None]): # Here you could create a structure of all of the types of vehicle, with # their names and the screen that asks the questions. VEHCILES: dict[str, tuple[str, type[ModalScreen]]] = { "car": ("Car", CarScreen), "bike": ("Bike", BikeScreen) } def compose(self) -> ComposeResult: # This builds the initial option list from the vehicles listed above. yield OptionList( *[Option(name, identifier) for identifier, (name, _) in self.VEHCILES.items()] ) # The `Pretty` is just somewhere to show the result. See # selection_made below. yield Pretty("") def selection_made(self, selection: dict[str, Any]) -> None: # This is the method that receives the selection after the user has # asked to buy the vehicle. For now I'm just dumping the selection # into a `Pretty` widget to show it. self.query_one(Pretty).update(selection) @on(OptionList.OptionSelected) def next_screen(self, event: OptionList.OptionSelected) -> None: # If the ID of the option that was selected is known to us... if event.option_id in self.VEHCILES: # ...create an instance of the screen associated with it, push # it and set up the callback. self.push_screen(self.VEHCILES[event.option_id][1](), callback=self.selection_made) if __name__ == "__main__": VehiclePurchaseApp().run() | 5 | 5 |
76,728,740 | 2023-7-20 | https://stackoverflow.com/questions/76728740/two-fields-with-same-name-causing-problem-in-pydantic-model-classes | I have two Pydantic model classes where two fields have the same name as can be seen below: class SnowflakeTable(BaseModel): database: str schema: str table: str table_schema: List[SnowflakeColumn] class DataContract(BaseModel): schema: List[OpenmetadataColumn] These model classes have been integrated with other modules to run via FastAPI and Mangum. Now, when I try hitting the APIs it gives me the below error: NameError: Field name "schema" shadows an attribute in parent "BaseModel"; you might want to use a different field name with "alias='schema'". So, to resolve this I tried using Field of Pydantic as below but it didn't work either. class SnowflakeTable(BaseModel): database: str schema: str = Field(..., alias='schema') table: str table_schema: List[SnowflakeColumn] class DataContract(BaseModel): schema: List[OpenmetadataColumn] = Field(..., alias='schema') The error remains the same. I tried multiple things w.r.t Fields. Also, tried using Route API but nothing worked. What am I missing here? TIA P.S. I can't rename the attributes. Thats against the API rules here. | You have to change the variable names because those names are already in use by pydantic. Try: class SnowflakeTable(BaseModel): database: str my_var_schema: str = Field(..., alias='schema') table: str table_schema: List[SnowflakeColumn] class DataContract(BaseModel): my_var_schema: List[OpenmetadataColumn] = Field(..., alias='schema') Don't worry about API fields, if your field has an alias, then fastapi uses the field name in the alias in requests and responses. More about it: https://docs.pydantic.dev/latest/usage/model_config/#alias-precedence | 3 | 2 |
76,726,671 | 2023-7-20 | https://stackoverflow.com/questions/76726671/im-populating-some-cards-using-for-loop-in-nicegui-each-has-a-label-and-a-butto | When I click on the button it should print the id of the label inside that card only. But it always prints the id of last label no matter which button I click on. In the code below my_grid is the name of the grid in which I'm trying to populate the cards and listcards is the list of cards labels that I'm trying to put inside the card Code with my_grid: for i in range(0,len(listcards)): with ui.card() as mycard: label_card = ui.label(text=f"{listcards[i]}") bt = ui.button("ID", on_click=lambda:print(label_card.id)) When I click on the button I want to print the label id of that card only but it always prints the id of last label. I really want to fix this issue. Any suggestions? | This is a very frequently asked question. We're actually thinking about introducing a FAQ section because of this question. But it is indeed a tricky Python behavior that keeps confusing developers. In the following code example, a button click will call the lambda function, which in turn prints the value of label_card.id. Since the for loop has terminated long before the button click, label_card is the label with text "9". Therefore, regardless of which button is clicked, the ID of this last label is printed. for i in range(10): with ui.card(): label_card = ui.label(text=i) ui.button("ID", on_click=lambda: print(label_card.id)) The trick is to add label_card=label_card to the parameter list of the lambda function. The lambda function is unique for every loop cycle. And so are its parameters. The current value of the label_card variable is written into the parameter list of the unique lambda function. This way a button click prints the ID of the corresponding label_card. for i in range(10): with ui.card(): label_card = ui.label(text=i) ui.button("ID", on_click=lambda label_card=label_card: print(label_card.id)) | 2 | 4 |
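An equivalent, untested sketch using functools.partial instead of the default-argument trick; it relies only on the ui.button and ui.label calls already used in the answer:

from functools import partial
from nicegui import ui

def show_id(label) -> None:
    print(label.id)

for i in range(10):
    with ui.card():
        label_card = ui.label(text=i)
        ui.button("ID", on_click=partial(show_id, label_card))

partial freezes the current label_card for each loop iteration, which is exactly what the keyword-default in the answer achieves.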
76,726,572 | 2023-7-20 | https://stackoverflow.com/questions/76726572/what-is-the-difference-between-excel-and-csv | I am learning about Python. I am trying to store project data to a .CSV file using Pandas library. I know csv is comma separated values, the data is separated by comma(,). I am wondering why I would use a .CSV instead of the other Excel file types? | The major differences between Excel XLSX and CSV file format are the file size and the formatting. In a *.CSV file, the file size is smaller, and the data looks like this: (there is no formatting, just raw data) And if you open using a text editor, you'd get this: idx,col1,col2 123,aaa,xxx 456,bbb,yyy 789,ccc,zzz And in a *.XLSX file, the file size is larger, and the data looks like this: (this format allows formatting such as tables, borders, background color, bold, etc) And if you open with a text editor, you'd get this: PK ! A7傁n [Content_Types].xml ?(? 琓蒼?絎?D綱墶嚜??[$?榵扻$跺(鼄'fQU??Ql蟍&?&YB@鉲.鶼O$`璻?鼿烢偆琕嵆悑5? 镲拥 L岗b.j""%5?3缌騈锽珗?C%?妾?陕YK)ub8x僐-J轜技Q23V$瘺sU.旝?盤勾?I晔?燷県:C@i?╩23???g€/#莺矢2 泌x|`隚簼惝秛_?傃悓U燨詹w筳鋸髾s箪4去瓑-蔤e霳?e|鮫,ん佅??愸y絼s?i? 藓??s??耵V7?麛幵88彍? 梬a懏:??霤rh伥??轁鄸?? PK ! 礥0#? L _rels/.rels ?(? <truncated> Generally, I use CSV format to store raw data, and use XLSX format to present the data. | 2 | 4 |
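Since the question mentions pandas, a short sketch of producing both formats from the same DataFrame; writing .xlsx assumes an Excel engine such as openpyxl is installed:

import pandas as pd

df = pd.DataFrame({"idx": [123, 456, 789], "col1": ["aaa", "bbb", "ccc"], "col2": ["xxx", "yyy", "zzz"]})
df.to_csv("data.csv", index=False)     # plain text, small, no formatting survives
df.to_excel("data.xlsx", index=False)  # zipped XML workbook, supports styling and multiple sheets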
76,719,175 | 2023-7-19 | https://stackoverflow.com/questions/76719175/np-sqrt-with-integers-and-where-condition-returns-wrong-results | I am getting weird results from numpy sqrt method when applying it on an array of integers with a where condition. See below. With integers: a = np.array([1, 4, 9]) np.sqrt(a, where=(a>5)) Out[3]: array([0. , 0.5, 3. ]) With floats: a = np.array([1., 4., 9.]) np.sqrt(a, where=(a>5)) Out[25]: array([1., 4., 3.]) Is it a bug or a misunderstanding of how this method works? | I think there might be a bug inconsistency*, when repeating the command several times I don't get consistent results. *this is actually due to the probable use of numpy.empty to initialize the output. This function reuses memory without setting the default values and thus will have non-predictable values in the cells that are not manually overwritten. Here is an example: import numpy as np print(np.__version__) a = np.array([1, 4, 9]) # integers print(np.sqrt(a, where=(a>5)).round(3)) a = np.array([1., 4., 9.]) # floats print(np.sqrt(a, where=(a>5)).round(3)) a = np.array([1, 4, 9]) # integers again print(np.sqrt(a, where=(a>5)).round(3)) Output: 1.24.2 [0. 0. 3.] [0. 0. 3.] [1. 4. 3.] # this is now different From the remark of @hpaulj, it seems that providing out is required. Indeed, this prevents the inconsistent behavior with the above example. out should be initialized with a predictable function, such as numpy.zeros. import numpy as np print(np.__version__) out = np.zeros(3) a = np.array([1, 4, 9]) # integers print(np.sqrt(a, where=(a>5), out=out)) print(out) out = np.zeros(3) a = np.array([1., 4., 9.]) # floats print(np.sqrt(a, where=(a>5), out=out)) print(out) a = np.array([1, 4, 9]) # integers again print(np.sqrt(a, where=(a>5), out=out)) print(out) Output: 1.24.2 [0. 0. 3.] [0. 0. 3.] [0. 0. 3.] [0. 0. 3.] [0. 0. 3.] [0. 0. 3.] Nevertheless, this seems to be an inconsistent behavior. The doc specifies that it out is not provided, a new array should be allocated: out ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. where array_like, optional This condition is broadcast over the input. At locations where the condition is True, the out array will be set to the ufunc result. Elsewhere, the out array will retain its original value. Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized. In your case you rather want to use numpy.where: out = np.where((a>5), np.sqrt(a), a) # or, to avoid potential errors if you have forbidden values out = np.where(a>5, np.sqrt(a, where=a>5), a) # or out = np.sqrt(a, where=a>5, out=a.astype(float)) Output: array([1., 4., 3.]) | 3 | 2 |
76,717,805 | 2023-7-19 | https://stackoverflow.com/questions/76717805/how-can-i-show-a-python-package-installed-packages | If I install some Python packages, like opencv-python, it will install the cv2 package. But before I look at OpenCV's documentation, how can I inspect the opencv-python package and find out what it installed? Something like pip info opencv-python, but that does not print the installed packages. Update: I found that the pip install location /usr/local/lib/python3.11/site-packages/opencv_python-4.7.0.72.dist-info/top_level.txt contains the top-level package names (i.e. the installed packages). I could write a script to parse this file, but does Python have a built-in utility to print this info? | The answers demonstrating pip show usage are fine if you just wanted to discover this information from the command-line interactively. If you want access to the top-level package names programmatically, however, pip is not much help because it has no API. Since Python 3.10 the information is easily available using stdlib importlib.metadata: >>> from importlib.metadata import packages_distributions >>> dists = packages_distributions() >>> [name for name, pkgs in dists.items() if "opencv-python" in pkgs] ['cv2'] If you want to know all the files owned by an installed package, not just the top-level import names, you can get that too like this: from importlib.metadata import distribution dist = distribution("opencv-python") for file in dist.files: print(file) ... | 2 | 2 |
76,668,373 | 2023-7-12 | https://stackoverflow.com/questions/76668373/declare-intermediate-variables-in-select-statements-in-polars | When writing select or with_columns statements in Polars I often wish to declare intermediate variables to make the code more readable. I'd also like to be able to query a column in a context and reuse it in another column's expression. I am currently forced to chain multiple select/with_columns calls which lacks elegance. Here is a fictive example of what I would like to do: df.with_columns( [ <some expression>.alias('step_1'), # here I want step_1 to become a column in the output table temporary_variable = <some other expression>, # here I want this variable not to be present in the output table pl.col(['step_1']).some_function(temporary_variable).alias('step_2'), # this column's expression uses both the first column: 'step_1' and temporary_variable pl.col(...).some_other_function(temporary_variable).alias('another_column') # temporary_variable might need to be used in multiple column's expression, being able to declare the temporary variable and reuse it makes the code shorter, more modular and avoids copy pasts ] ) My question is: is there any way to do this in Polars? | We can actually use walrus assignment to get very close to what you're after Say you have this: df = pl.DataFrame({'a':[1,2,3], 'b':[2,3,4], 'c':[3,4,5]}) You can do: df.with_columns( (stp_1 := (pl.col("a") * 2)).alias("step_1"), stp_1.pow(tempvar := (pl.col("b") + 1.5)).alias("step_2"), (pl.col("c") + tempvar).alias("another_column"), ) Note that I intentionally named the walrus assigned variable stp_1 to distinguish it from the alias for the column. There's no way to have the walrus also give the column its name. What is assigned to stp1 and tempvar aren't data but expressions which will be resolved by the engine and is computationally equivalent to typing out: df.with_columns( (pl.col('a') * 2).alias('step_1'), (pl.col('a') * 2).pow((pl.col('b') + 1.5)).alias('step_2'), (pl.col('c') + (pl.col('b') + 1.5)).alias('another_column') ) For performance concerns, also remember:: All polars expressions within a context are executed in parallel. So they cannot refer to a column that does not yet exist. That quote is from before CSER was implemented for lazy, which will detect that (pl.col("a") * 2)) and (pl.col('b') + 1.5) occur twice (or more) and will only compute them once, caching the result for reuse. You can see that with explain: print(df.lazy().with_columns( (pl.col('a') * 2).alias('step_1'), (pl.col('a') * 2).pow((pl.col('b') + 1.5)).alias('step_2'), (pl.col('c') + (pl.col('b') + 1.5)).alias('another_column') ).explain()) simple π 6/8 ["a", "b", "c", "step_1", ... 2 other columns] WITH_COLUMNS: [col("__POLARS_CSER_0xf9e008489610e0a5").alias("step_1"), col("__POLARS_CSER_0xf9e008489610e0a5") .pow([col("__POLARS_CSER_0x545c2bc7d61b77c5")]).alias("step_2"), [(col("c").cast(Unknown(Float))) + (col("__POLARS_CSER_0x545c2bc7d61b77c5"))].alias("another_column")] WITH_COLUMNS: [[(col("a")) * (2)].alias("__POLARS_CSER_0xf9e008489610e0a5"), [(col("b").cast(Unknown(Float))) + (dyn float: 1.5)].alias("__POLARS_CSER_0x545c2bc7d61b77c5")] DF ["a", "b", "c"]; PROJECT */3 COLUMNS As a tangent, instead of using alias you can also name the columns by using named parameters like this df.with_columns( step_1 = (stp_1 := (pl.col('a') * 2)), step_2 = stp_1.pow(tempvar := ((pl.col('b') + 1.5))), another_column = (pl.col('c') + tempvar) ) | 5 | 2 |
76,701,351 | 2023-7-17 | https://stackoverflow.com/questions/76701351/html-xml-understanding-how-scroll-bars-work | I am working with the R programming language and trying to learn about how to use Selenium to interact with webpages. For example, using Google Maps - I am trying to find the name, address and longitude/latitude of all Pizza shops around a certain area. As I understand, this would involve entering the location you are interested in, clicking the "nearby" button, entering what you are looking for (e.g. "pizza"), scrolling all the way to the bottom to make sure all pizza shops are loaded - and then copying the names, address and longitude/latitudes of all pizza locations. I have been self-teaching myself how to use Selenium in R and have been able to solve parts of this problem myself. Here is what I have done so far: Part 1: Searching for an address (e.g. Statue of Liberty, New York, USA) and returning a longitude/latitude : library(RSelenium) library(wdman) library(netstat) selenium() seleium_object <- selenium(retcommand = T, check = F) remote_driver <- rsDriver(browser = "chrome", chromever = "114.0.5735.90", verbose = F, port = free_port()) remDr<- remote_driver$client remDr$navigate("https://www.google.com/maps") search_box <- remDr$findElement(using = 'css selector', "#searchboxinput") search_box$sendKeysToElement(list("Statue of Liberty", key = "enter")) Sys.sleep(5) url <- remDr$getCurrentUrl()[[1]] long_lat <- gsub(".*@(-?[0-9.]+),(-?[0-9.]+),.*", "\\1,\\2", url) long_lat <- unlist(strsplit(long_lat, ",")) > long_lat [1] "40.7269409" "-74.0906116" Part 2: Searching for all Pizza shops around a certain location: library(RSelenium) library(wdman) library(netstat) selenium() seleium_object <- selenium(retcommand = T, check = F) remote_driver <- rsDriver(browser = "chrome", chromever = "114.0.5735.90", verbose = F, port = free_port()) remDr<- remote_driver$client remDr$navigate("https://www.google.com/maps") Sys.sleep(5) search_box <- remDr$findElement(using = 'css selector', "#searchboxinput") search_box$sendKeysToElement(list("40.7256456,-74.0909442", key = "enter")) Sys.sleep(5) search_box <- remDr$findElement(using = 'css selector', "#searchboxinput") search_box$clearElement() search_box$sendKeysToElement(list("pizza", key = "enter")) Sys.sleep(5) But from here, I do not know how to proceed. I do not know how to scroll the page all the way to the bottom to view all such results that are available - and I do not know how to start extracting the names. Doing some research (i.e. inspecting the HTML code), I made the following observations: The name of a restaurant location can be found in the following tags: <a class="hfpxzc" aria-label= The address of a restaurant location be found in the following tags: <div class="W4Efsd"> In the end, I would be looking for a result like this: name address longitude latitude 1 pizza land 123 fake st, city, state, zip code 45.212 -75.123 Can someone please show me how to proceed? Note: Seeing as more people likely use Selenium through Python - I am more than happy to learn how to solve this problem in Python and then try to convert the answer into R code.r Thanks! 
References: https://medium.com/python-point/python-crawling-restaurant-data-ab395d121247 https://www.youtube.com/watch?v=GnpJujF9dBw https://www.youtube.com/watch?v=U1BrIPmhx10 UPDATE: Some further progress with addresses remDr$navigate("https://www.google.com/maps") Sys.sleep(5) search_box <- remDr$findElement(using = 'css selector', "#searchboxinput") search_box$sendKeysToElement(list("40.7256456,-74.0909442", key = "enter")) Sys.sleep(5) search_box <- remDr$findElement(using = 'css selector', "#searchboxinput") search_box$clearElement() search_box$sendKeysToElement(list("pizza", key = "enter")) Sys.sleep(5) address_elements <- remDr$findElements(using = 'css selector', '.W4Efsd') addresses <- lapply(address_elements, function(x) x$getElementText()[[1]]) result <- data.frame(name = unlist(names), address = unlist(addresses)) | I see that you updated your question to include a Python answer, so here's how it's done in Python. you can use the same method for R. The page is lazy loaded which means, as you scroll the data is paginated and loaded. So, what you need to do, is to keep finding the last HTML tag of the data which will therefore load more content. Finding how more data is loaded You need to find out how the data is loaded. Here's what I did: First, disable internet access for your browser in the Network calls (F12 -> Network -> Offline) Then, scroll to the last loaded element, you will see a loading indicator (since there is no internet, it will just hang) Now, here comes the important part, find out under what HTML tag this loading indicator is: As you can see that element is under the div.qjESne CSS selector. Working with Selenium You can call the javascript code scrollIntoView() function which will scroll a particular element into view within the browser's viewport. Finding out when to break To find out when to stop scrolling in order to load more data, we need to find out what element appears when theres no data. If you scroll until there are no more results, you will see: which is an element under the CSS selector span.HlvSq. Code examples Scrolling the page from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC URL = "https://www.google.com/maps/search/Restaurants/@40.7256843,-74.1138399,14z/data=!4m8!2m7!3m5!1sRestaurants!2s40.7256456,-74.0909442!4m2!1d-74.0909442!2d40.7256456!6e5?entry=ttu" driver = webdriver.Chrome() driver.get(URL) # Waits 10 seconds for the elements to load before scrolling wait = WebDriverWait(driver, 10) elements = wait.until( EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.qjESne")) ) while True: new_elements = wait.until( EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.qjESne")) ) # Pick the last element in the list - this is the one we want to scroll to last_element = elements[-1] # Scroll to the last element driver.execute_script("arguments[0].scrollIntoView(true);", last_element) # Update the elements list elements = new_elements # Check if there are any new elements loaded - the "You've reached the end of the list." message if driver.find_elements(By.CSS_SELECTOR, "span.HlvSq"): print("No more elements") break Getting the data If you inspect the page, you will see that the data is under the cards under the CSS selector of div.lI9IFe. 
What you need to do, is wait until the scrolling has finished, and then you get all the data under the CSS selector of div.lI9IFe Code example from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import pandas as pd URL = "https://www.google.com/maps/search/Restaurants/@40.7256843,-74.1138399,14z/data=!4m8!2m7!3m5!1sRestaurants!2s40.7256456,-74.0909442!4m2!1d-74.0909442!2d40.7256456!6e5?entry=ttu" driver = webdriver.Chrome() driver.get(URL) # Waits 10 seconds for the elements to load before scrolling wait = WebDriverWait(driver, 10) elements = wait.until( EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.qjESne")) ) titles = [] links = [] addresses = [] while True: new_elements = wait.until( EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.qjESne")) ) # Pick the last element in the list - this is the one we want to scroll to last_element = elements[-1] # Scroll to the last element driver.execute_script("arguments[0].scrollIntoView(true);", last_element) # Update the elements list elements = new_elements # time.sleep(0.1) # Check if there are any new elements loaded - the "You've reached the end of the list." message if driver.find_elements(By.CSS_SELECTOR, "span.HlvSq"): # now we can parse the data since all the elements loaded for data in driver.find_elements(By.CSS_SELECTOR, "div.lI9IFe"): title = data.find_element( By.CSS_SELECTOR, "div.qBF1Pd.fontHeadlineSmall" ).text restaurant = data.find_element( By.CSS_SELECTOR, ".W4Efsd > span:nth-of-type(2)" ).text titles.append(title) addresses.append(restaurant) # This converts the list of titles and links into a dataframe df = pd.DataFrame(list(zip(titles, addresses)), columns=["title", "addresses"]) print(df) break Prints: title addresses 0 Domino's Pizza · 741 Communipaw Ave A 1 Tommy's Family Restaurant · 349 Central Ave 2 VIP RESTAURANT LLC BARSHAY'S · 175 Sip Ave 3 The Hutton Restaurant and Bar · 225 Hutton St 4 Barge Inn · 324 3rd St .. ... ... 116 Bettie's Restaurant · 579 West Side Ave 117 Mahboob-E-El Ahi · 580 Montgomery St 118 Samosa Paradise · 804 Newark Ave 119 TACO DRIVE · 195 Newark Ave 120 Two Boots Pizza · 133 Newark Ave [121 rows x 2 columns] | 7 | 7 |
76,674,272 | 2023-7-12 | https://stackoverflow.com/questions/76674272/pydantic-basesettings-cant-find-env-when-running-commands-from-different-places | So, Im trying to setup Alembic with FastAPI and Im having a problem with Pydantic's BaseSettings, I get a validation error (variables not found) because it doesnt find the .env file (?) It can be solved by changing env_file = ".env" to env_file = "../.env" in the BaseSettings class Config but that makes the error happen when running main.py, I tried setting it as an absolute path with env_file = os.path.abspath("../../.env") but that didnt work. What should I do? config.py: import os from functools import lru_cache from pydantic_settings import BaseSettings abs_path_env = os.path.abspath("../../.env") class Settings(BaseSettings): APP_NAME: str = "AppName" SQLALCHEMY_URL: str ENVIRONMENT: str class Config: env_file = ".env" # Works with uvicorn run command from my-app/project/ # env_file = "../.env" Works with alembic command from my-app/alembic # env_file = abs_path_env @lru_cache() def get_settings(): return Settings() Project folders: my-app ├── alembic │ ├── versions │ ├── alembic.ini │ ├── env.py │ ├── README │ └── script.py.mako ├── project │ ├── core │ │ ├── __init__.py │ │ └── config.py │ └── __init__.py ├── __init__.py ├── .env └── main.py | For me it was the fact that .env was not in the same directory as the running script. I had to manually anchor the .env file to an absolute path. Here is my settings.py file with .env residing alongside in the same directory: import os from pydantic_settings import BaseSettings, SettingsConfigDict DOTENV = os.path.join(os.path.dirname(__file__), ".env") class Settings(BaseSettings): pg_dsn: str pg_pool_min_size: int pg_pool_max_size: int pg_pool_max_queries: int pg_pool_max_intactive_conn_lifetime: int model_config = SettingsConfigDict(env_file=DOTENV) So with the os.path.join(os.path.dirname(__file__), ".env") line I basically instruct to search for .env file inside the same directory as the settings.py file. This way you can run any of your scripts, and the Settings object will always point to your .dotenv file. | 9 | 11 |
76,710,868 | 2023-7-18 | https://stackoverflow.com/questions/76710868/why-i-take-an-exception-with-my-endpoint-fastapi | I have got an db like: class NewsBase(BaseModel): title: str topic: str class NewsCreate(NewsBase): datetime: datetime class News(NewsBase): id: int datetime: datetime class Config: orm_mode = True When i try to make this request, it returns with 500: @app.get("/news/find_by_topic/{topic}", response_model=schemas.News) def find_news_by_topic(topic: str, db: Session = Depends(get_db)): db_news = crud.get_news_by_topic(db, topic=topic) if db_news is None: raise HTTPException(status_code=404, detail="This title is not found") return db_news Crud.py: def get_news_by_topic(db: Session, topic: str): return db.query(models.News).filter(models.News.topic == topic).all() It's like error: File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\errors.py", line 184, in __call__ raise exc File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__ raise exc File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__ await self.app(scope, receive, sender) File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__ raise e File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__ await self.app(scope, receive, send) File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 718, in __call__ await route.handle(scope, receive, send) File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 276, in handle await self.app(scope, receive, send) File "C:\Projects\RestAPI\venv\Lib\site-packages\starlette\routing.py", line 66, in app response = await func(request) ^^^^^^^^^^^^^^^^^^^ File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\routing.py", line 291, in app content = await serialize_response( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Projects\RestAPI\venv\Lib\site-packages\fastapi\routing.py", line 154, in serialize_response raise ResponseValidationError( fastapi.exceptions.ResponseValidationError How can I fix it? | It seems like you need to set response_model properly. Read more here. If you are using Python 3.9: response_model=list[schemas.News] If you are using older Python: from typing import List response_model=List[schemas.News] | 2 | 4 |
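Applying the accepted answer to the endpoint from the question gives something like the sketch below, reusing the imports and objects (app, schemas, crud, Session, Depends, HTTPException) already defined there and using the builtin list form for Python 3.9+. The truthiness check replaces "is None" because .all() returns an empty list rather than None, which is a small assumption about the intended 404 behaviour:

@app.get("/news/find_by_topic/{topic}", response_model=list[schemas.News])
def find_news_by_topic(topic: str, db: Session = Depends(get_db)):
    db_news = crud.get_news_by_topic(db, topic=topic)
    if not db_news:
        raise HTTPException(status_code=404, detail="This title is not found")
    return db_news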
76,707,715 | 2023-7-17 | https://stackoverflow.com/questions/76707715/stucking-at-downloading-shards-for-loading-llm-model-from-huggingface | I am just using the Hugging Face example to use their LLM model, but it gets stuck at: downloading shards: 0%| | 0/5 [00:00<?, ?it/s]
(I am using a Jupyter notebook, Python 3.11, and all requirements were installed) from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
   "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
How can I fix it? | I think it's not stuck. These are just very large models that take a while to download. tqdm only estimates after the first iteration, so it just looks like nothing is happening. I'm currently downloading the smallest version of Llama 2 (7B parameters) and it's downloading two shards. The first took over 17 minutes to complete and I have a reasonably fast internet connection. | 19 | 25 |
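A small sketch of one way to make the download progress visible and resumable, using the `huggingface_hub` package that `transformers` already depends on:

```python
# Sketch: pre-fetch the checkpoint into the local cache with huggingface_hub,
# which shows per-file progress bars and resumes interrupted downloads.
# A later from_pretrained()/pipeline() call for the same model then loads from the cache.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="tiiuae/falcon-40b-instruct")
print(local_path)  # directory inside the local Hugging Face cache
```

Note that falcon-40b-instruct is roughly 80 GB of weights, so even on a fast connection the shards can take a long time; the bars simply make the wait visible.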
76,691,401 | 2023-7-14 | https://stackoverflow.com/questions/76691401/selenium-common-exceptions-nosuchdriverexception-message-unable-to-obtain-chro | I can't figure out why my code always gets an error. This is my code: from selenium import webdriver

url = "https://google.com/"
path = "C:/Users/thefo/OneDrive/Desktop/summer 2023/chromedriver_win32"
driver = webdriver.Chrome(path)

driver.get(url)
Path of chromedriver: And this is the error that always appears: Traceback (most recent call last):
  File "C:\Users\thefo\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\common\driver_finder.py", line 42, in get_path
    path = SeleniumManager().driver_location(options) if path is None else path
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thefo\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\common\selenium_manager.py", line 74, in driver_location
    browser = options.capabilities["browserName"]
              ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'capabilities'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\Users\thefo\OneDrive\Desktop\summer 2023\Projeto Bot Discord - BUFF SELL CHECKER\teste2.py", line 6, in <module>
    driver = webdriver.Chrome(path)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thefo\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 47, in __init__
    self.service.path = DriverFinder.get_path(self.service, self.options)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thefo\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\common\driver_finder.py", line 44, in get_path
    raise NoSuchDriverException(f"Unable to obtain {service.path} using Selenium Manager; {err}")
selenium.common.exceptions.NoSuchDriverException: Message: Unable to obtain chromedriver using Selenium Manager; 'str' object has no attribute 'capabilities'; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors/driver_location | This error message...
Traceback (most recent call last):
  File "C:\Users\thefo\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\common\driver_finder.py", line 42, in get_path
    path = SeleniumManager().driver_location(options) if path is None else path
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...implies that the version of Selenium which you are using is v4.6 or above. Selenium Manager In such cases Selenium Manager can silently download the matching ChromeDriver and you don't have to explicitly mention the chromedriver path anymore. Solution Your minimal code block can be: from selenium import webdriver

url = "https://google.com/"
driver = webdriver.Chrome()
driver.get(url) | 5 | 10 |
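If someone still needs to pin a specific driver binary (for example on an offline machine), recent Selenium 4 releases expect it via a Service object rather than a positional path. A sketch with a hypothetical path:

```python
# Sketch: recent Selenium 4 releases no longer accept the driver path as the first
# positional argument of webdriver.Chrome(); wrap it in a Service object instead.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service(executable_path="C:/path/to/chromedriver.exe")  # hypothetical path
driver = webdriver.Chrome(service=service)
driver.get("https://google.com/")
```

Passing the raw string positionally is exactly what triggers the `'str' object has no attribute 'capabilities'` error above, because the first parameter is now interpreted as an Options object.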
76,682,341 | 2023-7-13 | https://stackoverflow.com/questions/76682341/not-filled-white-voids-near-boundaries-of-polar-plot | I have tried to create a polar plot using matplotlib's contourf, but it produces some unfilled areas near the circle boundary, on just some sides of the plot. This seems to be a common problem that can be seen in many examples, e.g. 1. At first, I thought it might need some interpolation and tried interpolating with some available methods, e.g. 2, but I couldn't produce a perfect plot. Interpolating using SciPy griddata with the linear method solved the main issue but produced some shadows on the plot, and the cubic method resulted in some inappropriate colors (which showed the results incorrectly). Finally, I guessed this issue might be related to the figure size that I specified and the dpi that I used. With a low dpi it was largely cured, but that gives a low-quality PNG. When I deactivate the line specifying the figure size (# plt.rcParams["figure.figsize"] = (19.2, 9.6)) with dpi=600, it is shown fairly correctly, but not at the needed figure size. The main question is how to solve the issue for saved files, with the desired figure size and dpi. It must be said that the problem appears in the saved files. Besides an answer that solves the issue, I would appreciate answers to these questions too: Why does it happen? Why does it happen on just some sides of the plot? In this example it is problematic only on the right side of the quarter, not the upper side. This issue makes me doubt that the data is shown correctly on the circle for analysis. Is it OK? Do we need to interpolate such data? If so, which algorithms would be best, ones that neither show the results incorrectly as SciPy's cubic method does nor produce the shadowing of the linear method? If choosing between algorithms depends on the case, how do we decide? It would be very helpful if this were explained with examples. 
import numpy as np from matplotlib import cm import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (19.2, 9.6) save_dpi = 600 Azimuth = np.tile(np.arange(0, 91, 10), 10) Deviation = np.repeat(np.arange(0, 91, 10), 10) color_data = np.array([2123, 2124, 2126, 2130, 2135, 2139, 2144, 2147, 2150, 2151, 2212, 2211, 2205, 2197, 2187, 2176, 2166, 2158, 2152, 2150, 2478, 2468, 2439, 2395, 2342, 2285, 2231, 2188, 2160, 2150, 2912, 2888, 2819, 2715, 2589, 2456, 2334, 2236, 2172, 2150, 3493, 3449, 3324, 3135, 2908, 2674, 2462, 2294, 2187, 2150, 4020, 4020, 3912, 3618, 3270, 2917, 2602, 2357, 2203, 2151, 4020, 4020, 4020, 4020, 3633, 3156, 2737, 2417, 2218, 2150, 4020, 4020, 4020, 4020, 3947, 3358, 2850, 2466, 2230, 2150, 4020, 4020, 4020, 4020, 4020, 3495, 2926, 2499, 2238, 2150, 4020, 4020, 4020, 4020, 4020, 3543, 2951, 2510, 2241, 2150]) ax = plt.subplot(projection='polar') plt.xticks([]) plt.yticks([]) Az = np.unique(Azimuth) Dev = np.unique(Deviation) mAz, mDev = np.meshgrid(Az, Dev) # way1: Original Xi, Yi, Zi = np.deg2rad(mAz), mDev, color_data.reshape(mAz.shape) contour_plot = ax.contourf(Xi, Yi, Zi, levels=256, cmap=cm.viridis_r, zorder=1) ax.plot() plt.savefig("way1.png", dpi=save_dpi) # way2: Interpolation # import scipy.interpolate as sci_int # Xi = np.linspace(0, 2 * np.pi, 256, endpoint=True) # Yi = np.linspace(0, 90, 256, endpoint=True) # Zi = sci_int.griddata(np.stack((np.deg2rad(mAz), mDev), axis=2).reshape(len(Azimuth), 2), color_data, # (Xi[None, :], Yi[:, None]), method='linear') # contour_plot = ax.contourf(Xi, Yi, Zi, levels=256, cmap=cm.viridis_r, zorder=1) # ax.plot() # plt.savefig("way2.png", dpi=save_dpi) Tested on windows 10 by: Python ver.: 3.8 & 3.10 Matplotlib ver.: 3.5.3 & 3.7.2 | After a related discussion in matplotlib repo, it seems there is not any simple and conventional solution for that (perhaps cartopy have prepared something helpful); The contouring algorithm doesn't know that it is acting on a polar plot, so when the contours are drawn in polar space the polygons are approximated. This isn't actually a contouring issue, it is an issue of whether a straight line in one coordinate system should be transformed to a straight or non-straight line in another coordinate system. It applies equally to any polygon (e.g. just a simple triangle) specified in, for example, polar coordinates, and rendered in cartesian (screen) coordinates. Just for a sub-solution, I tried to produce more points (Azimuth & Deviation) and interpolate their corresponding color_data (the more points you have, the shorter the lines are and the closer this looks to a circular segment) to reduce/cover this shortcoming. 
For this example, SciPy's radial basis function (RBF) interpolation gives an acceptable result with the following code (which could be adapted for similar problems): from scipy.interpolate import RBFInterpolator

coordinates_org = np.column_stack((Azimuth, Deviation))

Az = np.linspace(0, 90, 181, endpoint=True)
Dev = np.linspace(0, 90, 181, endpoint=True)
Az_2D, Dev_2D = np.meshgrid(Az, Dev)
coordinates_int = np.column_stack((Az_2D.ravel(), Dev_2D.ravel()))

rbf = RBFInterpolator(coordinates_org, color_data, kernel="linear")
color_data_int = rbf(coordinates_int).reshape(len(Az), len(Dev))

contour_plot = ax.contourf(np.deg2rad(Az), Dev, color_data_int, levels=1000, cmap=cm.viridis_r)
ax.plot()
plt.savefig("way3.png", dpi=save_dpi)
I adjusted this solution, with some modifications, for another example to compare its results with various interpolation methods, as can be seen below. In this comparison, the unfilled area above the circles for griddata might be filled by using a different number of levels than I used (1000, as I remember). Based on the comparison, this solution (RBF) produces better results than using griddata as in way 2 of the question; also, the quintic method seems to perform best: | 3 | 1 |
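As a further check on the boundary artefact, a small sketch (an addition building on the `Az`, `Dev`, and `color_data_int` arrays from the RBF step above, not part of the original answer): pcolormesh colours each cell directly in polar coordinates, so it avoids contourf's straight-edged polygon approximation near the rim.

```python
# Sketch: render the same RBF-interpolated grid with pcolormesh instead of contourf.
fig, ax2 = plt.subplots(subplot_kw={"projection": "polar"})
mesh = ax2.pcolormesh(np.deg2rad(Az), Dev, color_data_int,
                      cmap=cm.viridis_r, shading="gouraud")
fig.colorbar(mesh, ax=ax2)
fig.savefig("way4.png", dpi=600)
```

If the white voids disappear with pcolormesh but remain with contourf at the same figure size and dpi, that supports the explanation that the contour polygons, not the data, are the cause.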
76,709,695 | 2023-7-18 | https://stackoverflow.com/questions/76709695/how-to-stop-multiprocessing-pool-with-ctrlc-python-3-10 | I've spent several hours on research and no solution seems to work anymore. My function will take an estimated 30-50 minutes per process. If I spot flaws during the first few minutes, I don't want to wait for the processes to finish completely or have to close the terminal without killing the processes cleanly. For simplicity, each process is assigned a function with an infinite loop. With Ctrl+C, the complete main program should be interrupted without getting stuck. The reason for the infinite loop is that every previous solution has worked with lists that are actually looped through. The keyboard interrupt is recognized, but only afterwards, when all processes have finished. I don't know if this has anything to do with the Python version, but that is not something I would like to have to change. I would like a simple and clean solution to this problem. That would probably help many others as well. Example: from multiprocessing import Pool

def f(x):
    while True:
        pass

if __name__ == '__main__':
    with Pool(2) as p:
        p.map(f, [1, 2])
| Problem Solved In another forum this problem was discussed and solved in a simple and clean way. Here is the link for everyone else: https://www.reddit.com/r/learnpython/comments/152sfp8/how_to_stop_multiprocessingpool_with_ctrlc_python/ Solution 1 In the multiprocessing module there is an AsyncResult object. This includes various functions such as get(), wait() and ready(). While get() and wait() block, you can use ready() to wait for the result and still catch a KeyboardInterrupt. Unfortunately, only map_async() and apply_async() return an AsyncResult object. import time
from multiprocessing import Pool

def f(x):
    while True:
        pass

if __name__ == "__main__":
    with Pool(2) as p:
        result = p.map_async(f, [1, 2])
        while not result.ready():
            time.sleep(1)
Solution 2 This way you can also use functions that block: Playing around with signals, it seems that from inside child processes, exceptions other than KeyboardInterrupt or SystemExit propagate to the main process. When Ctrl+C is pressed, all processes (children and main process) run their SIGINT handler, which by default just raises KeyboardInterrupt. To get control out of the children, the child interrupt handler must raise some other exception. By the way, the exception is not seen because the main process has its hands full with its own KeyboardInterrupt. Basically, the SIGINT handler can also return None. from multiprocessing import Pool
from signal import signal, SIGINT

def initializer():
    # A signal handler receives (signum, frame); returning None makes the
    # workers ignore Ctrl+C so only the main process handles it.
    signal(SIGINT, lambda signum, frame: None)

def f(x):
    while True:
        pass

if __name__ == '__main__':
    with Pool(2, initializer) as p:
        p.map(f, [1, 2])
| 2 | 4 |
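A third, hedged sketch that is sometimes combined with the above: let the parent catch Ctrl+C and terminate the pool explicitly, which also covers workers stuck in blocking calls (the timeout value here is arbitrary, not from the original answer):

```python
# Sketch: map_async().get() with a (large) timeout stays responsive to Ctrl+C;
# on KeyboardInterrupt the pool is terminated so the workers die immediately.
from multiprocessing import Pool

def f(x):
    while True:
        pass

if __name__ == "__main__":
    pool = Pool(2)
    try:
        pool.map_async(f, [1, 2]).get(timeout=10_000)  # arbitrary large timeout
    except KeyboardInterrupt:
        pool.terminate()  # kill workers right away
    else:
        pool.close()
    finally:
        pool.join()
```

Even if the workers ignore SIGINT themselves (as in Solution 2), terminate() still removes them, so the terminal is never left with orphaned processes.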