question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
78,203,312 | 2024-3-21 | https://stackoverflow.com/questions/78203312/polars-map-batches-udf-with-multi-processing | I want to apply a numba UDF, which generates the same length vectors for each groups in df: import numba df = pl.DataFrame( { "group": ["A", "A", "A", "B", "B"], "index": [1, 3, 5, 1, 4], } ) @numba.jit(nopython=True) def UDF(array: np.ndarray, threshold: int) -> np.ndarray: result = np.zeros(array.shape[0]) accumulator = 0 for i, value in enumerate(array): accumulator += value if accumulator >= threshold: result[i] = 1 accumulator = 0 return result df.with_columns( pl.col("index") .map_batches( lambda x: UDF(x.to_numpy(), 5) ) .over("group") .cast(pl.UInt8) .alias("udf") ) Inspired by this post where a multi-processing application has being introduced. However, in the case above, I am applying the UDF using a over window function. Is there an efficient approach by parallelizing the above executions? expected output: shape: (6, 3) βββββββββ¬ββββββββ¬ββββββ β group β index β udf β β --- β --- β --- β β str β i64 β u8 β βββββββββͺββββββββͺββββββ‘ β A β 1 β 0 β β A β 3 β 0 β β A β 5 β 1 β β B β 1 β 0 β β B β 4 β 1 β βββββββββ΄ββββββββ΄ββββββ | Here is example how you can do this with numba + using numba's parallelization features: from numba import njit, prange @njit(parallel=True) def UDF_nb_parallel(array, n, threshold): result = np.zeros_like(array, dtype="uint8") for i in prange(array.size // n): accumulator = 0 for j in range(i * n, (i + 1) * n): value = array[j] accumulator += value if accumulator >= threshold: result[j] = 1 accumulator = 0 return result df = df.with_columns( pl.Series(name="new_udf", values=UDF_nb_parallel(df["index"].to_numpy(), 3, 5)) ) print(df) Prints: shape: (9, 3) βββββββββ¬ββββββββ¬ββββββββββ β group β index β new_udf β β --- β --- β --- β β str β i64 β u8 β βββββββββͺββββββββͺββββββββββ‘ β A β 1 β 0 β β A β 3 β 0 β β A β 5 β 1 β β B β 1 β 0 β β B β 4 β 1 β β B β 8 β 1 β β C β 1 β 0 β β C β 1 β 0 β β C β 4 β 1 β βββββββββ΄ββββββββ΄ββββββββββ Benchmark: from timeit import timeit import numpy as np import polars as pl from numba import njit, prange def get_df(N, n): assert N % n == 0 df = pl.DataFrame( { "group": [f"group_{i}" for i in range(N // n) for _ in range(n)], "index": np.random.randint(1, 5, size=N, dtype="uint64"), } ) return df @njit def UDF(array: np.ndarray, threshold: int) -> np.ndarray: result = np.zeros(array.shape[0]) accumulator = 0 for i, value in enumerate(array): accumulator += value if accumulator >= threshold: result[i] = 1 accumulator = 0 return result @njit(parallel=True) def UDF_nb_parallel(array, n, threshold): result = np.zeros_like(array, dtype="uint8") for i in prange(array.size // n): accumulator = 0 for j in range(i * n, (i + 1) * n): value = array[j] accumulator += value if accumulator >= threshold: result[j] = 1 accumulator = 0 return result def get_udf_polars(df): return df.with_columns( pl.col("index") .map_batches(lambda x: UDF(x.to_numpy(), 5)) .over("group") .cast(pl.UInt8) .alias("udf") ) df = get_df(3 * 33_333, 3) # 100_000 values, length of groups 3 df = get_udf_polars(df) df = df.with_columns( pl.Series(name="new_udf", values=UDF_nb_parallel(df["index"].to_numpy(), 3, 5)) ) assert np.allclose(df["udf"].to_numpy(), df["new_udf"].to_numpy()) t1 = timeit("get_udf_polars(df)", number=1, globals=globals()) t2 = timeit( 'df.with_columns(pl.Series(name="new_udf", values=UDF_nb_parallel(df["index"].to_numpy(), 3, 5)))', number=1, globals=globals(), ) print(t1) print(t2) Prints on my machine (AMD 5700x): 
2.7000599699968006 0.00025866299984045327 100_000_000 rows/groups 3 takes 0.06319052699836902 (with parallel=False this takes 0.2159650030080229) EDIT: Handling variable-length groups: @njit(parallel=True) def UDF_nb_parallel_2(array, indices, amount, threshold): result = np.zeros_like(array, dtype="uint8") for i in prange(indices.size): accumulator = 0 for j in range(indices[i], indices[i] + amount[i]): value = array[j] accumulator += value if accumulator >= threshold: result[j] = 1 accumulator = 0 return result def get_udf_polars_nb(df): n = df["group"].to_numpy() indices = np.unique(n, return_index=True)[1] amount = np.diff(np.r_[indices, [n.size]]) return df.with_columns( pl.Series( name="new_udf", values=UDF_nb_parallel_2(df["index"].to_numpy(), indices, amount, 5), ) ) df = get_udf_polars_nb(df) Benchmark: import random from timeit import timeit import numpy as np import polars as pl from numba import njit, prange def get_df(N): groups = [] cnt, group_no, running = 0, 1, True while running: for _ in range(random.randint(3, 10)): groups.append(group_no) cnt += 1 if cnt >= N: running = False break group_no += 1 df = pl.DataFrame( { "group": groups, "index": np.random.randint(1, 5, size=N, dtype="uint64"), } ) return df @njit def UDF(array: np.ndarray, threshold: int) -> np.ndarray: result = np.zeros(array.shape[0]) accumulator = 0 for i, value in enumerate(array): accumulator += value if accumulator >= threshold: result[i] = 1 accumulator = 0 return result @njit(parallel=True) def UDF_nb_parallel_2(array, indices, amount, threshold): result = np.zeros_like(array, dtype="uint8") for i in prange(indices.size): accumulator = 0 for j in range(indices[i], indices[i] + amount[i]): value = array[j] accumulator += value if accumulator >= threshold: result[j] = 1 accumulator = 0 return result def get_udf_polars(df): return df.with_columns( pl.col("index") .map_batches(lambda x: UDF(x.to_numpy(), 5)) .over("group") .cast(pl.UInt8) .alias("udf") ) def get_udf_polars_nb(df): n = df["group"].to_numpy() indices = np.unique(n, return_index=True)[1] amount = np.diff(np.r_[indices, [n.size]]) return df.with_columns( pl.Series( name="new_udf", values=UDF_nb_parallel_2(df["index"].to_numpy(), indices, amount, 5), ) ) df = get_df(100_000) # 100_000 values, length of groups length 3-9 df = get_udf_polars(df) df = get_udf_polars_nb(df) assert np.allclose(df["udf"].to_numpy(), df["new_udf"].to_numpy()) t1 = timeit("get_udf_polars(df)", number=1, globals=globals()) t2 = timeit("get_udf_polars_nb(df)", number=1, globals=globals()) print(t1) print(t2) Prints: 1.2675148629932664 0.0024339070077985525 | 4 | 2 |
78,209,144 | 2024-3-22 | https://stackoverflow.com/questions/78209144/list-comprehension-to-remove-element-from-python-list-if-it-is-only-digits-even | I have many lists like this synonyms = ["3,2'-DIHYDROXYCHALCONE", '36574-83-1', '36574831', "2',3-Dihydroxychalcone", '(E)-1-(2-hydroxyphenyl)-3-(3-hydroxyphenyl)prop-2-en-1-one', MLS002693861] from which I need to remove all elements that are comprised of only digits. I can't figure out how to remove element [1] because it's only digits but has random intervening dashes. Of course, this doesn't work since the dashes make the element not a digit: synonym_subset = [x for x in synonym_subset if not (x.isdigit())] And I can't just remove the dashes because I want the dashes in the other elements to be retained: synonym_subset = [x.replace('-','') for x in synonym_subset] I could run the above to find the index of the elements to be removed, then remove them that way, but I was hoping for a one-liner. Thanks. | Try: synonyms = [ "3,2'-DIHYDROXYCHALCONE", "36574-83-1", "36574831", "2',3-Dihydroxychalcone", "(E)-1-(2-hydroxyphenyl)-3-(3-hydroxyphenyl)prop-2-en-1-one", "MLS002693861", ] out = [s for s in synonyms if not all(ch in "0123456789-_" for ch in s)] print(out) Prints: [ "3,2'-DIHYDROXYCHALCONE", "2',3-Dihydroxychalcone", "(E)-1-(2-hydroxyphenyl)-3-(3-hydroxyphenyl)prop-2-en-1-one", "MLS002693861", ] | 2 | 3 |
78,203,545 | 2024-3-22 | https://stackoverflow.com/questions/78203545/discretized-function-becomes-complex-while-free-propagating-a-real-function-when | My code involves propagation of a real function using Fourier transform and inverse Fourier transform. Specifically, the function evolves as βΟ(z,t)/βt - v βΟ(z,t)/βz =0 I solve this problem by Fourier transforming the above equation given by βΟ(k,t)/βt + ikv Ο(k,t)=0 which has a solution given by Ο(k,t)=e^(-ikvt)Ο(k,0) Now performing inverse Fourier transform of this will give me the propagated function Ο(z-vt,0) However, when I do this in Python (see below for code), the final function (after applying FFT and IFFT) seems to have a tiny complex part, particularly when the function is sampled at even number of points. This error builds up over the course of my simulation resulting in less accurate answer. However, this issue seems to go away when I sample the function at odd number of points. Can somebody please help with what is happening here? Below is a simple code that illustrates my point using the norm of the function. Upon running this code, we can see that when the function is sampled at 10 points, the norm of the real part of the function after doing FFT and IFFT is accurate sometimes only up to 2 or 3 decimal points (while the norm of the whole function (real+complex) is still conserved). On the other hand, the norm is accurate up to 16 decimals in the case of odd sampling. It would be great if someone could explain to me what's happening and how would I get better accuracy using this function while sampling at even number of points import numpy as np def free_prop(vectr,Nd,vel=0.92,dt=1): dz=1/Nd psi_k = np.fft.fft(vectr) k_vals = 2.0*np.pi*np.fft.fftfreq(Nd, d=dz) return np.fft.ifft(np.exp(-1j*dt*k_vals*vel)*psi_k) vectr_even=np.random.rand(10) ## Even case vectr_odd=np.random.rand(11) ## Odd case print('Norm of input array (with even sampling):',np.linalg.norm(vectr_even)) print('Norm of output complex array (with even sampling):',np.linalg.norm(free_prop(vectr_even,10))) print('Norm of real part of output array (with even sampling):',np.linalg.norm(np.real(free_prop(vectr_even,10)))) print() print('Norm of input array (with odd sampling):', np.linalg.norm(vectr_odd)) print('Norm of output complex array (with odd sampling):',np.linalg.norm(free_prop(vectr_odd,11))) print('Norm of real part of complex array (with odd sampling):',np.linalg.norm(np.real(free_prop(vectr_odd,11)))) As can be seen from here, the norm in the even case (when only real part is considered) differs at the second decimal, and I would be very surprised if it a floating point error. | Look at k_vals: With Nd=6: [0., 6.28318531, 12.56637061, -18.84955592, -12.56637061, -6.28318531] With Nd=5: [0., 6.28318531, 12.56637061, -12.56637061, -6.28318531] When Nd is even, there's one negative frequency that doesn't have a positive frequency counterpart. As we know, the input to the IFFT must be conjugate symmetric for the output to be real-valued. With an odd-length array, the computed exponential will automatically be conjugate symmetric. With the even-length array, there's the one frequency that doesn't have a positive counterpart. We need to make this frequency component real-valued for the exponential to be conjugate symmetric, and the inverse FFT to be real-valued. 
For example, this modification to your function accomplishes this: def free_prop(vectr, Nd, vel=0.92, dt=1): dz=1/Nd psi_k = np.fft.fft(vectr) k_vals = 2 * np.pi * np.fft.fftfreq(Nd, d=dz) k_kernel = np.exp(-1j * dt * k_vals * vel) if np.mod(Nd, 2) == 0: k_kernel[Nd // 2] = k_kernel[Nd // 2].real return np.fft.ifft(k_kernel * psi_k) Whether this still matches the physics I don't know. One could also argue that that frequency bin should be 0, because it contains aliased data by definition (it's the Nyquist frequency, and to properly sample signal it must has 0 energy at and above the Nyquist frequency). | 3 | 1 |
78,208,703 | 2024-3-22 | https://stackoverflow.com/questions/78208703/in-a-2d-numpy-array-how-to-select-every-first-and-second-element-of-the-inner-a | For example the array: array = np.array([[1, 1, 4, 2, 1, 8], [1, 1, 8, 2, 1, 16], [1, 1, 40, 2, 1, 80], [1, 2, 40, 2, 1, 80]]) I'd like to essentially remove every third [:, ::2] element of the inner arrays. So the result should be: [[1 1 2 1] [1 1 2 1] [1 1 2 1] [1 2 2 1]] I could do two selections and concatenate each of the resulting arrays, but that seem super slow.. is there a way to use indexing or some other method that doesn't involve a loop with np.concatenate? My actual array always have inner arrays with sizes divisible by 3, and the 1 dimension is very, very large. So I'm interested in the fastest way to accomplish this. Thank you! | You can use boolean indexing: array[:, np.arange(array.shape[1]) % 3 != 2] array([[1, 1, 2, 1], [1, 1, 2, 1], [1, 1, 2, 1], [1, 2, 2, 1]]) | 3 | 3 |
78,208,576 | 2024-3-22 | https://stackoverflow.com/questions/78208576/how-to-filter-a-dataframe-by-row-id-row-number | I am looking to get a subset of rows based on the row_id/row_number for a dataframe similar to pyarrow.Table.take. For eg: given the below dataframe from datetime import datetime df = pl.DataFrame( { "integer": [1, 2, 3, 4, 5], "date": [ datetime(2022, 1, 1), datetime(2022, 1, 2), datetime(2022, 1, 3), datetime(2022, 1, 4), datetime(2022, 1, 5), ], "float": [4.0, 5.0, 6.0, 7.0, 8.0], } ) print(df) shape: (5, 3) βββββββββββ¬ββββββββββββββββββββββ¬ββββββββ β integer β date β float β β --- β --- β --- β β i64 β datetime[ΞΌs] β f64 β βββββββββββͺββββββββββββββββββββββͺββββββββ‘ β 1 β 2022-01-01 00:00:00 β 4.0 β β 2 β 2022-01-02 00:00:00 β 5.0 β β 3 β 2022-01-03 00:00:00 β 6.0 β β 4 β 2022-01-04 00:00:00 β 7.0 β β 5 β 2022-01-05 00:00:00 β 8.0 β βββββββββββ΄ββββββββββββββββββββββ΄ββββββββ I am looking for take like function df.take([0, 4]) which gives the below dataframe. shape: (2, 3) βββββββββββ¬ββββββββββββββββββββββ¬ββββββββ β integer β date β float β β --- β --- β --- β β i64 β datetime[ΞΌs] β f64 β βββββββββββͺββββββββββββββββββββββͺββββββββ‘ β 1 β 2022-01-01 00:00:00 β 4.0 β β 5 β 2022-01-05 00:00:00 β 8.0 β βββββββββββ΄ββββββββββββββββββββββ΄ββββββββ The row numbers are a result of some other process and handed over. Tried using df.select(pl.all().take([take_indices]) and noticed that it was slower than actually running the filter directly. i.e. df.filter(filter_expr). Please note that I am doing this over extremely large datasets (> 100m rows). Edit: Thanks for the answer. using df[[take_indices]] worked. However still curious as to why filter still outperforms the both select.gather as well as square bracket approach. Timings on my datasetwith 50m rows: select.gather: .5s square_bracket: .32s [inline with mozway's timings] filter: .18s | df[[0,4]] will allow to select the indices 0 and 4. Since take is deprecated, the equivalent to your proposed code would be to use gather: df.select(pl.all().gather([0, 4])) Output: shape: (2, 3) βββββββββββ¬ββββββββββββββββββββββ¬ββββββββ β integer β date β float β β --- β --- β --- β β i64 β datetime[ΞΌs] β f64 β βββββββββββͺββββββββββββββββββββββͺββββββββ‘ β 1 β 2022-01-01 00:00:00 β 4.0 β β 5 β 2022-01-05 00:00:00 β 8.0 β βββββββββββ΄ββββββββββββββββββββββ΄ββββββββ Timing on 500k rows: # df.select(pl.all().gather([0, 4])) 145 Β΅s Β± 9.43 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) # df[[0,4]] 122 Β΅s Β± 14.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) Timing on 5M rows: # df.select(pl.all().gather([0, 4])) 150 Β΅s Β± 13.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) # df[[0,4]] 117 Β΅s Β± 17.7 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) | 2 | 1 |
78,207,935 | 2024-3-22 | https://stackoverflow.com/questions/78207935/different-output-showing-from-a-source-code-machine-learning-python | I'm currently trying to work on a small image machine learning project. I found this person's Kaggle code and I tried replicating it from scratch. However, not even in the main part, I already faced an error. I'm sure there must be a localization issue on my end on how this ended up but I can't figure what. My code: #Import Libraries #Data processing modules import pandas as pd import numpy as np import matplotlib.pyplot as plt import cv2 #File directory modules import glob as gb import os #Training and testing (machine learning) modules import tensorflow as tf import keras #Importing the images into the code trainDataset = 'melanoma_cancer_dataset/train' testDataset = 'melanoma_cancer_dataset/test' predictionDataset = 'melanoma_cancer_dataset/skinTest' #creating empty lists for the images to fall into for processing training_List = [] testing_list = [] #making a classification dictionary for the two keys, benign and malignant #used for inserting into the images diction = {'benign' : 0, 'malignant' : 1} #Read through the folder's length contents for folder in os.listdir(trainDataset): data = gb.glob(pathname=str(trainDataset + folder + '/*.jpg')) print(f'{len(data)} in folder {folder}') #read the images, resize them in a uniform order, and store them in the empty lists for data in data: image = cv2.imread(data) imageList = cv2.resize(image(120,120)) training_List.append(list(imageList)) The output of the notebook showed that it had 0 images/contents stored in the folder. Now I'm kinda doubting what's happening here and would love some answers. Thanks in advance. I'm using my own VScode too. This is a screenshot of my files: | Based on your folder structure and the code you have provided, the issue is that you haven't put the trailing slash at the end of your folder paths. In the provided code, you're trying to concatenate the folder name directly with the path. However, if you miss a slash or if the folder variable does not include a trailing slash, this could result in an incorrect path. Update the paths like this: trainDataset = 'melanoma_cancer_dataset/train/' testDataset = 'melanoma_cancer_dataset/test/' predictionDataset = 'melanoma_cancer_dataset/skinTest/' What your code is doing is here: for folder in os.listdir(trainDataset): data = gb.glob(pathname=str(trainDataset + folder + '/*.jpg')) is that it is going to the path of the trainDataset, and then listing the folders there (which are named malignant and benign) with the use of os.listdir(). These paths are concatenated to generate the final image paths with: data = gb.glob(pathname=str(trainDataset + folder + '/*.jpg')) Also, slight syntax error in the line: imageList = cv2.resize(image(120,120)) It should be cv2.resize(image, (120, 120)) Also the way you are appending to training_List might be wrong. You need to convert the imageList to a list before appending it or append imageList directly if you want to keep the image array structure. 
Full updated code: # Data processing modules import pandas as pd import numpy as np import matplotlib.pyplot as plt import cv2 # File directory modules import glob as gb import os # Training and testing (machine learning) modules import tensorflow as tf import keras # Directories trainDataset = 'melanoma_cancer_dataset/train/' testDataset = 'melanoma_cancer_dataset/test/' predictionDataset = 'melanoma_cancer_dataset/skinTest/' # Empty list for the images training_List = [] testing_list = [] # Classification dictionary diction = {'benign': 0, 'malignant': 1} # Read through the folder's contents for folder in os.listdir(trainDataset): # Corrected the path pattern and added a slash data = gb.glob(pathname=str(trainDataset + folder + '/*.jpg')) print(f'{len(data)} in folder {folder}') # Read the images, resize them, and store them in the list for file_path in data: image = cv2.imread(file_path) # Corrected the resize function call imageList = cv2.resize(image, (120, 120)) # Append the image array directly training_List.append(imageList) print(f'Total images in training set: {len(training_List)}') | 2 | 2 |
78,202,730 | 2024-3-21 | https://stackoverflow.com/questions/78202730/polars-efficient-way-to-apply-function-to-filter-column-of-strings | I have a column of long strings (like sentences) on which I want to do the following: replace certain characters create a list of the remaining strings if a string is all text see whether it is in a dictionary and if so keep it if a string is all numeric keep it if a string is a mix of numeric/text, find ratio of numbers to letters and keep if above a threshold I currently do this as follows: for memo_field in self.memo_columns: data = data.with_columns( pl.col(memo_field).map_elements( lambda x: self.filter_field(text=x, word_dict=word_dict)) ) The filter_field method uses plain python, so: text_sani = re.sub(r'[^a-zA-Z0-9\s\_\-\%]', ' ', text) to replace text_sani = text_sani.split(' ') to split len(re.findall(r'[A-Za-z]', x)) to find num letters for each element in text_sani list (similar for num digits) and ratio is difference divided by overall num characters list comprehension and if to filter list of words It actually isn't too bad, 128M rows takes about 10 minutes. Unfortunately, future files will be much bigger. On a ~300M row file this approach gradually increases memory consumption until the OS (Ubuntu) kills the process. Also, all processing seems to take place on a single core. I have started to try to use the Polars string expressions and code and a toy example are provided below. At this point it looks like my only option is a function call to a do the rest. My questions are: in my original approach is it normal that memory consumption grows? Does map_elements create a copy of the original series and so consumes more memory? is my original approach correct or is there a better way eg. I have just started reading about struct in Polars? is it possible to do what I want using just Polars expressions? UPDATE The code example in answers from @Hericks and @ΩΠΞΞΞΞΞ‘Ξ₯ΞΞΞΞΞΞ£ were applied and largely addressed my third question. Implementing the Polars expressions greatly reduced run time with two observations: the complexity of the memo fields in my use-case greatly affect the run time. The key challenge is the look up of items in the dictionary; a large dictionary and many valid words in the memo field can severely affect run time; and I experienced many seg fault errors when saving in .parquet format when I used pl.DataFrame. When using pl.LazyFrame and sink_parquet there were no errors but run time was greatly extended (drives are NVME SSD at 2000MB/s) EXAMPLE CODE/DATA: Toy data: temp = pl.DataFrame({"foo": ['COOOPS.autom.SAPF124', 'OSS REEE PAAA comp. BEEE atm 6079 19000000070 04-04-2023', 'ABCD 600000000397/7667896-6/REG.REF.REE PREPREO/HMO', 'OSS REFF pagopago cost. 
Becf atm 9682 50012345726 10-04-2023'] }) Code Functions: def num_dec(x): return len(re.findall(r'[0-9_\/]', x)) def num_letter(x): return len(re.findall(r'[A-Za-z]', x)) def letter_dec_ratio(x): if len(x) == 0: return None nl = num_letter(x) nd = num_dec(x) if (nl + nd) == 0: return None ratio = (nl - nd)/(nl + nd) return ratio def filter_field(text=None, word_dict=None): if type(text) is not str or word_dict is None: return 'no memo and/or dictionary' if len(text) > 100: text = text[0:101] print("TEXT: ",text) text_sani = re.sub(r'[^a-zA-Z0-9\s\_\-\%]', ' ', text) # parse by replacing most artifacts and symbols with space words = text_sani.split(' ') # create words separated by spaces print("WORDS: ",words) kept = [] ratios = [letter_dec_ratio(w) for w in words] [kept.append(w.lower()) for i, w in enumerate(words) if ratios[i] is not None and ((ratios[i] == -1 or (-0.7 <= ratios[i] <= 0)) or (ratios[i] == 1 and w.lower() in word_dict))] print("FINAL: ",' '.join(kept)) return ' '.join(kept) Code Current Implementation: temp.with_columns( pl.col("foo").map_elements( lambda x: filter_field(text=x, word_dict=['cost','atm'])).alias('clean_foo') # baseline ) Code Partial Attempt w/Polars: This gets me the correct WORDS (see next code block) temp.with_columns( ( pl.col(col) .str.replace_all(r'[^a-zA-Z0-9\s\_\-\%]',' ') .str.split(' ') ) ) Expected Result (at each step, see print statements above): TEXT: COOOPS.autom.SAPF124 WORDS: ['COOOPS', 'autom', 'SAPF124'] FINAL: TEXT: OSS REEE PAAA comp. BEEE atm 6079 19000000070 04-04-2023 WORDS: ['OSS', 'REEE', 'PAAA', 'comp', '', 'BEEE', '', 'atm', '6079', '19000000070', '04-04-2023'] FINAL: atm 6079 19000000070 04-04-2023 TEXT: ABCD 600000000397/7667896-6/REG.REF.REE PREPREO/HMO WORDS: ['ABCD', '600000000397', '7667896-6', 'REG', 'REF', 'REE', 'PREPREO', 'HMO'] FINAL: 600000000397 7667896-6 TEXT: OSS REFF pagopago cost. Becf atm 9682 50012345726 10-04-2023 WORDS: ['OSS', 'REFF', 'pagopago', 'cost', '', 'Becf', '', 'atm', '9682', '50012345726', '10-04-2023'] FINAL: cost atm 9682 50012345726 10-04-2023 | The filtering can be implemented using polars' native expression API as follows. I've taken the regular expressions from the naive implementation in the question. word_list = ["cost", "atm"] # to avoid long expressions in ``pl.Expr.list.eval`` num_dec_expr = pl.element().str.count_matches(r'[0-9_\/]').cast(pl.Int32) num_letter_expr = pl.element().str.count_matches(r'[A-Za-z]').cast(pl.Int32) ratio_expr = (num_letter_expr - num_dec_expr) / (num_letter_expr + num_dec_expr) ( df .with_columns( pl.col("foo") # convert to lowercase .str.to_lowercase() # replace special characters with space .str.replace_all(r"[^a-z0-9\s\_\-\%]", " ") # split string at spaces into list of words .str.split(" ") # filter list of words .list.eval( pl.element().filter( # only keep non-empty string... pl.element().str.len_chars() > 0, # ...that either # - are in the list of words, # - consist only of characters related to numbers, # - have a ratio between -0.7 and 0 pl.element().is_in(word_list) | num_letter_expr.eq(0) | ratio_expr.is_between(-0.7, 0) ) ) # join list of words into string .list.join(" ") .alias("foo_clean") ) ) shape: (4, 2) βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββββββ β foo β foo_clean β β --- β --- β β str β str β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββͺβββββββββββββββββββββββββββββββββββββββ‘ β COOOPS.autom.SAPF124 β β β OSS REEE PAAA comp. 
BEEE atm 6079 19000000070 04-04-2023 β atm 6079 19000000070 04-04-2023 β β ABCD 600000000397/7667896-6/REG.REF.REE PREPREO/HMO β 600000000397 7667896-6 β β OSS REFF pagopago cost. Becf atm 9682 50012345726 10-04-2023 β cost atm 9682 50012345726 10-04-2023 β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββ | 2 | 3 |
78,207,067 | 2024-3-22 | https://stackoverflow.com/questions/78207067/how-to-get-the-sub-columns-from-a-pandas-dataframe-after-a-groupby-and-agg | I'm attempting to perform a groupby and aggregate operation on a Pandas DataFrame. Specifically, I want to compute the mean and count for each class group. However, I'm encountering issues accessing the generated columns. Here's an example of the transformation I'm aiming for: import pandas as pd df = pd.DataFrame({ 'Class': ['A', 'B', 'A', 'B', 'C'], 'Val': [25, 30, 35, 40, 15], }) grouped = df.groupby(by='Class').agg({'Val': ['mean', 'count']}) The result I obtain is as follows: Val mean count Class A 30.0 2 B 35.0 2 C 15.0 1 However, I want to get rid of the "Val" sub-column to achieve this data structure: Class mean count A 30.0 2 B 35.0 2 C 15.0 1 | You should slice before agg: grouped = df.groupby(by='Class', as_index=False)['Val'].agg(['mean', 'count']) Output: Class mean count 0 A 30.0 2 1 B 35.0 2 2 C 15.0 1 | 2 | 1 |
78,203,785 | 2024-3-22 | https://stackoverflow.com/questions/78203785/polars-rolling-by-option-not-allowed | I have a data frame of the type: df = pl.LazyFrame({"day": [1,2,4,5,2,3,5,6], 'type': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], "value": [1, 0, 3, 4, 2, 2, 0, 1]}) day type value i64 str i64 1 "a" 1 2 "a" 0 4 "a" 3 5 "a" 4 2 "b" 2 3 "b" 2 5 "b" 0 6 "b" 1 I am trying to create a rolling sum variable, summing, for each different "type", the values in a two days window. Ideally, the resulting dataset would be the following: day type value rolling_sum 1 a 1 1 2 a 0 1 4 a 3 3 5 a 4 7 2 b 2 2 3 b 2 4 5 b 0 0 6 b 1 1 I tried using the following code: df = df.with_columns(pl.col("value") .rolling(index_column="day", by="type", period="2i") .sum().alias("rolling_sum")) but I get the error: "TypeError: rolling() got an unexpected keyword argument 'by'". Could you help me fix it? | That's because in your code you're trying to use Expr.rolling() which doesn't have by parameter (strangely, it is mentioned in the documentation under check_sorted parameter - is it just not implemented yet?), instead of DataFrame.rolling(). If you'd restructure the code to use the latter then it works fine: ( df.rolling( index_column="day", by="type", period="2i" ) .agg( pl.col('value').sum().alias("rolling_sum") ) ) ββββββββ¬ββββββ¬ββββββββββββββ β type β day β rolling_sum β β --- β --- β --- β β str β i64 β i64 β ββββββββͺββββββͺββββββββββββββ‘ β a β 1 β 1 β β a β 2 β 1 β β a β 4 β 3 β β a β 5 β 7 β β b β 2 β 2 β β b β 3 β 4 β β b β 5 β 0 β β b β 6 β 1 β ββββββββ΄ββββββ΄ββββββββββββββ If you need to have value column in your result, you can use Expr.rolling_sum() combined with Expr.over() instead (assuming your DataFrame is sorted by day already): df.with_columns( pl.col("value") .rolling_sum(window_size=2,min_periods=0) .over("type") .alias('rolling_sum') ) βββββββ¬βββββββ¬ββββββββ¬ββββββββββββββ β day β type β value β rolling_sum β β --- β --- β --- β --- β β i64 β str β i64 β i64 β βββββββͺβββββββͺββββββββͺββββββββββββββ‘ β 1 β a β 1 β 1 β β 2 β a β 0 β 1 β β 4 β a β 3 β 3 β β 5 β a β 4 β 7 β β 2 β b β 2 β 2 β β 3 β b β 2 β 4 β β 5 β b β 0 β 2 β β 6 β b β 1 β 1 β βββββββ΄βββββββ΄ββββββββ΄ββββββββββββββ Ideally, I would probably expect Expr.rolling together with Expr.over to work: # something like this df.with_columns( pl.col("value") .rolling(index_column="day", period="2i") .sum() .over("type") .alias('rolling_sum') ) # or this df.set_sorted(['type','day']).with_columns( pl.col("value") .sum() .over('type') .rolling(index_column="day", period="2i") .alias('rolling_sum') ) but unfortunately, it doesn't: InvalidOperationError: rolling expression not allowed in aggregation Update Using rolling_sum() might not be something you want, if you plan your window to be based on days / weeks etc. In this case you can still use DataFrame.rolling() and combine it with Expr.last() inside of GroupBy.agg() to get the last value in the window: ( df.rolling( index_column="day", by="type", period="2i" ) .agg( pl.col('value').last(), pl.col('value').sum().alias("rolling_sum") ) ) ββββββββ¬ββββββ¬ββββββββ¬ββββββββββββββ β type β day β value β rolling_sum β β --- β --- β --- β --- β β str β i64 β i64 β i64 β ββββββββͺββββββͺββββββββͺββββββββββββββ‘ β a β 1 β 1 β 1 β β a β 2 β 0 β 1 β β a β 4 β 3 β 3 β β a β 5 β 4 β 7 β β b β 2 β 2 β 2 β β b β 3 β 2 β 4 β β b β 5 β 0 β 0 β β b β 6 β 1 β 1 β ββββββββ΄ββββββ΄ββββββββ΄ββββββββββββββ | 5 | 7 |
78,193,123 | 2024-3-20 | https://stackoverflow.com/questions/78193123/how-to-use-window-function-in-pyspark-dataframe | I have a pyspark dataframe as below: Mail sno mail_date date1 present [email protected] 790 2024-01-01 2024-02-06 yes [email protected] 790 2023-12-23 2023-01-01 [email protected] 101 2022-02-23 [email protected] 101 2021-01-20 2022-07-09 yes In the final dataframe, I need one record of sno with all the max date values and corresponding max date in mail_date column for present So the final dataframe should be like: Mail sno mail_date date1 present [email protected] 790 2024-01-01 2024-02-06 yes [email protected] 101 2022-02-23 2022-07-09 I have the following code, windowSpec=Window.partitionBy('Mail','sno') df= df.withColumn('max_mail_date', F.max('mail_date').over(windowSpec))\ .withColumn('max_date1', F.max('date1').over(windowSpec)) df1 = df.withColumn('mail_date', F.when(F.col('mail_date').isNotNull(), F.col('max_mail_date')).otherwise(F.col('mail_date')))\ .drop('max_mail_date').dropDuplicates() Here, Im not getting the expected values in the present column. Please suggest any changes | Even thought @DerekO's answer seems correct. I'd like to take another approach on this. I prefer using the groupBy().agg() approach with max and struct. This approach is more efficient in terms of data reduction specially when working with large datasets. So considering the data that you have provided : df_result = df.withColumn( "mail_date_struct", F.struct(F.col("mail_date").alias("max_mail_date"), "present") ).groupBy("Mail", "sno").agg( F.max("mail_date_struct").alias("max_mail_date_struct"), F.max("date1").alias("max_date1") ).select( "Mail", "sno", "max_mail_date_struct.max_mail_date", "max_mail_date_struct.present", "max_date1" ) df_result.show() # +-----------+---+-------------+-------+----------+ # | Mail|sno|max_mail_date|present| max_date1| # +-----------+---+-------------+-------+----------+ # |[email protected]|790| 2024-01-01| yes|2024-02-06| # |[email protected]|101| 2022-02-23| NULL|2022-07-09| # +-----------+---+-------------+-------+----------+ Considerations : The choice between the approach that I'm suggesting and the window functions approach depends on your specific use case, dataset characteristics and performance considerations. For large datasets where you aim to reduce data volume through aggregation might be more efficient. However, if maintaining the original dataset's structure while computing partition wise metrics is required, window function could be more appropriate although potential at a higher computational cost. Always consider testing both approaches on a subset of your data to understand the performance implications in your specific environment. | 2 | 1 |
78,204,054 | 2024-3-22 | https://stackoverflow.com/questions/78204054/what-am-i-understanding-wrong-about-this-simple-asynchio-example | I can't wrap my head around how to use aynchio. Can you please just help me fix this issue. Assume I have some IO task which takes 8 seconds that I put in the coroutine task(). I want to run an infinite loop and perform this take every 5 seconds without blocking my code. import asyncio async def task(): print('starting task') await asyncio.sleep(8) print('finished task') async def main(): count = 0 while True: print(count) if count % 5 == 0: print('running task') await task() count += 1 await asyncio.sleep(1) asyncio.run(main()) From my understanding of the docs, when I perform await inside the task coroutine, it should go back to executing main but this does not happen. What am I doing wrong here? My output is 0 running task starting task finished task 1 2 3 4 5 running task starting task finished task 6 7 8 9 10 | Your issue is you're awaiting the coroutine, which does what it says on the box: It stops anything from proceeding until it completes (to say it doesn't go back to main isn't quite right either. It does, just later than you wanted). Spinning it out to another unawaited task will let it "run in the background" import asyncio async def task(): print('starting task') await asyncio.sleep(8) print('finished task') async def main(): count = 0 while True: print(count) if count % 5 == 0: print('running task') asyncio.create_task(task()) count += 1 await asyncio.sleep(1) asyncio.run(main()) Gives me, which I think is what you want 0 running task starting task 1 2 3 4 5 running task starting task 6 7 finished task 8 9 10 running task starting task 11 12 finished task 13 14 | 2 | 2 |
78,203,237 | 2024-3-21 | https://stackoverflow.com/questions/78203237/how-to-implement-the-observer-pattern-using-async-iterators-in-python | I'm working on implementing the observer pattern in Python using asyncio and async iterators. The goal is to create a βchange streamβ where tasks can add changes, and other tasks can subscribe to these changes as asynchronous iterators. I'm trying to create an interface similar to broadcast streams in Dart. Here's a simplified version of what I have so far: from asyncio import Condition class ChangeStream: def __init__(self): self._condition = Condition() self._change = None async def add_change(self, change): async with self._condition: self._change = change self._condition.notify_all() async def __aiter__(self): async with self._condition: while True: await self._condition.wait() yield self._change Is this the best approach to implementing the observer pattern? Are there more efficient or easier-to-reason-about ways to implement this? Edit: The goal here is that observers can see all changes from after they subscribe, but none of the changes from before they subscribe. | From your description and comment, it appears you're really looking for a Publish-Subscribe pattern instead of an Observer pattern. They are similar, but where Observers are typically known by the subject and get updates directly, in Publish-Subscribe, the publisher publishes to a bus (like your Stream) that tracks consumers that are interested in the data. Here's an example with a Queue-based implementation, and some extra code to show how it all works - of course you could use the class differently, the TaskGroup is just one way of getting some tasks going: from asyncio import Queue, TaskGroup, sleep, run from random import random class ChangeStream: def __init__(self): self._subscribers = [] async def add_change(self, change): for queue in self._subscribers: await queue.put(change) async def __aiter__(self): queue = Queue() self._subscribers.append(queue) try: while True: yield (value := await queue.get()) if value is None: raise GeneratorExit except GeneratorExit: self._subscribers.remove(queue) async def produce_changes(stream: ChangeStream): for i in range(10): await sleep(random() * 3) print(f"Add value: {i}") await stream.add_change(i) print("Done adding values, writing None to stream.") await stream.add_change(None) async def consume_changes(stream: ChangeStream, name: str, start_delay: int): await sleep(start_delay) print(f"{name} starting...") async for value in stream: print(f"{name} received: {value}") if value is None: break async def main(): # create a stream, pass it to a producer task to publish to stream = ChangeStream() async with TaskGroup() as tg: tg.create_task(produce_changes(stream)) tg.create_task(consume_changes(stream, 'consumer 1', 0)) tg.create_task(consume_changes(stream, 'consumer 2', 5)) # start after 5 sec if __name__ == "__main__": run(main()) Output (example, due to randomness): consumer 1 starting... Add value: 0 consumer 1 received: 0 Add value: 1 consumer 1 received: 1 Add value: 2 consumer 1 received: 2 consumer 2 starting... Add value: 3 consumer 1 received: 3 consumer 2 received: 3 Add value: 4 consumer 1 received: 4 consumer 2 received: 4 Add value: 5 consumer 1 received: 5 consumer 2 received: 5 Add value: 6 consumer 1 received: 6 consumer 2 received: 6 Add value: 7 consumer 1 received: 7 consumer 2 received: 7 Add value: 8 consumer 1 received: 8 consumer 2 received: 8 Add value: 9 Done adding values, writing None to stream. 
consumer 1 received: 9 consumer 1 received: None consumer 2 received: 9 consumer 2 received: None | 2 | 4 |
78,203,142 | 2024-3-21 | https://stackoverflow.com/questions/78203142/how-to-populate-null-values-in-columns-after-outer-join-in-python-pandas | My goal is to join two dataframes from different sources in Python using Pandas and then fill null values in columns with corresponding values in the same column. The dataframes have similar columns, but some text/object columns may have different values due to variations in the data sources. For instance, the "Name" column in one dataframe might contain "Nick M." while in the other it's "Nick Maison". However, certain columns such as "Date" (formatted as YYYY-MM-DD), "Order ID" (numeric), and "Employee ID" (numeric) have consistent values across both dataframes (we join dataframes based on them). Worth mentioning, some columns may not even exist in one or another dataframe, but should also be filled. import pandas as pd # Create DataFrame df1 df1_data = { 'Date (df1)': ['2024-03-18', '2024-03-18', '2024-03-18', '2024-03-18', '2024-03-18', "2024-03-19", "2024-03-19"], 'Order Id (df1)': [1, 2, 3, 4, 5, 1, 2], 'Employee Id (df1)': [825, 825, 825, 825, 825, 825, 825], 'Name (df1)': ['Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.'], 'Region (df1)': ['SD', 'SD', 'SD', 'SD', 'SD', 'SD', 'SD'], 'Value (df1)': [25, 37, 18, 24, 56, 77, 25] } df1 = pd.DataFrame(df1_data) # Create DataFrame df2 df2_data = { 'Date (df2)': ['2024-03-18', '2024-03-18', '2024-03-18', "2024-03-19", "2024-03-19", "2024-03-19", "2024-03-19"], 'Order Id (df2)': [1, 2, 3, 1, 2, 3, 4], 'Employee Id (df2)': [825, 825, 825, 825, 825, 825, 825], 'Name (df2)': ['Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason'], 'Region (df2)': ['San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego'], 'Value (df2)': [25, 37, 19, 22, 17, 9, 76] } df2 = pd.DataFrame(df2_data) # Combine DataFrames outer_joined_df = pd.merge( df1, df2, how = 'outer', left_on = ['Date (df1)', 'Employee Id (df1)', "Order Id (df1)"], right_on = ['Date (df2)', 'Employee Id (df2)', "Order Id (df2)"] ) # Display the result outer_joined_df Here is the output of joined dataframes. Null values colored in yellow should be filled. I tried below code and it works for Date, Order Id and Employee Id columns as expected (because they are the same across two dataframes and we join based on them), but not for other, because they may have different values. Basically, the logic in this code is if Null, then fill with values from the same row in specified column. However, since values may be different, filled column becomes messy, because it has multiple variations of the same value. 
outer_joined_df['Date (df1)'] = outer_joined_df['Date (df1)'].combine_first(outer_joined_df['Date (df2)']) outer_joined_df['Date (df2)'] = outer_joined_df['Date (df2)'].combine_first(outer_joined_df['Date (df1)']) outer_joined_df['Order Id (df1)'] = outer_joined_df['Order Id (df1)'].combine_first(outer_joined_df['Order Id (df2)']) outer_joined_df['Order Id (df2)'] = outer_joined_df['Order Id (df2)'].combine_first(outer_joined_df['Order Id (df1)']) outer_joined_df['Employee Id (df1)'] = outer_joined_df['Employee Id (df1)'].combine_first(outer_joined_df['Employee Id (df2)']) outer_joined_df['Employee Id (df2)'] = outer_joined_df['Employee Id (df2)'].combine_first(outer_joined_df['Employee Id (df1)']) outer_joined_df['Name (df1)'] = outer_joined_df['Name (df1)'].combine_first(outer_joined_df['Name (df2)']) outer_joined_df['Name (df2)'] = outer_joined_df['Name (df2)'].combine_first(outer_joined_df['Name (df1)']) outer_joined_df['Region (df1)'] = outer_joined_df['Region (df1)'].combine_first(outer_joined_df['Region (df2)']) outer_joined_df['Region (df2)'] = outer_joined_df['Region (df2)'].combine_first(outer_joined_df['Region (df1)']) Here is the output: As you can see, it populated the data, but not the way I want. Output I need: | # a list with all column names, minus `(dfx)` columns = ["Date", "Order Id", "Employee Id", "Name", "Region", "Value"] # create a dict with a relation between values in df1 and df2, both ways value_relations = {} for col in columns: relations = ( outer_joined_df[[f"{col} (df1)", f"{col} (df2)"]] .drop_duplicates() .dropna() .to_dict("tight") .get("data") ) value_relations[col] = {k: v for k, v in relations} value_relations[col].update({v: k for k, v in relations}) # fill values of df1 with the related value of df2 outer_joined_df[f"{col} (df1)"] = outer_joined_df[f"{col} (df1)"].fillna( outer_joined_df[f"{col} (df2)"].map(value_relations[col]) ) # fill values of df2 with the related value of df1 outer_joined_df[f"{col} (df2)"] = outer_joined_df[f"{col} (df2)"].fillna( outer_joined_df[f"{col} (df1)"].map(value_relations[col]) ) Date (df1) Order Id (df1) Employee Id (df1) Name (df1) Region (df1) ... Order Id (df2) Employee Id (df2) Name (df2) Region (df2) Value (df2) 0 2024-03-18 1.0 825.0 Nick M. SD ... 1.0 825.0 Nick Mason San Diego 25.0 1 2024-03-18 2.0 825.0 Nick M. SD ... 2.0 825.0 Nick Mason San Diego 37.0 2 2024-03-18 3.0 825.0 Nick M. SD ... 3.0 825.0 Nick Mason San Diego 19.0 3 2024-03-18 4.0 825.0 Nick M. SD ... NaN 825.0 Nick Mason San Diego NaN 4 2024-03-18 5.0 825.0 Nick M. SD ... NaN 825.0 Nick Mason San Diego NaN 5 2024-03-19 1.0 825.0 Nick M. SD ... 1.0 825.0 Nick Mason San Diego 22.0 6 2024-03-19 2.0 825.0 Nick M. SD ... 2.0 825.0 Nick Mason San Diego 17.0 7 2024-03-19 3.0 825.0 Nick M. SD ... 3.0 825.0 Nick Mason San Diego 9.0 8 2024-03-19 NaN 825.0 Nick M. SD ... 4.0 825.0 Nick Mason San Diego 76.0 If you want to fill the remaining null values, add this at the end of each loop: # fill remaining null values of df1 outer_joined_df[f"{col} (df1)"] = outer_joined_df[f"{col} (df1)"].fillna( outer_joined_df[f"{col} (df2)"] ) # fill remaining null values of df2 outer_joined_df[f"{col} (df2)"] = outer_joined_df[f"{col} (df2)"].fillna( outer_joined_df[f"{col} (df1)"] ) Date (df1) Order Id (df1) Employee Id (df1) Name (df1) Region (df1) ... Order Id (df2) Employee Id (df2) Name (df2) Region (df2) Value (df2) 0 2024-03-18 1.0 825.0 Nick M. SD ... 1.0 825.0 Nick Mason San Diego 25.0 1 2024-03-18 2.0 825.0 Nick M. SD ... 
2.0 825.0 Nick Mason San Diego 37.0 2 2024-03-18 3.0 825.0 Nick M. SD ... 3.0 825.0 Nick Mason San Diego 19.0 3 2024-03-18 4.0 825.0 Nick M. SD ... 4.0 825.0 Nick Mason San Diego 24.0 4 2024-03-18 5.0 825.0 Nick M. SD ... 5.0 825.0 Nick Mason San Diego 56.0 5 2024-03-19 1.0 825.0 Nick M. SD ... 1.0 825.0 Nick Mason San Diego 22.0 6 2024-03-19 2.0 825.0 Nick M. SD ... 2.0 825.0 Nick Mason San Diego 17.0 7 2024-03-19 3.0 825.0 Nick M. SD ... 3.0 825.0 Nick Mason San Diego 9.0 8 2024-03-19 4.0 825.0 Nick M. SD ... 4.0 825.0 Nick Mason San Diego 76.0 | 2 | 1 |
78,203,219 | 2024-3-21 | https://stackoverflow.com/questions/78203219/dealing-with-non-optimal-solutions-from-gekko | I'm running into some situations where it seems like Gekko is getting stuck in local maximums and was wondering what approaches could be used to get around this or dig deeper into the cause (including default settings below). For example, running the scenario below yields an objective of "-5127.34945104756" m = GEKKO(remote=False) m.options.NODES = 3 m.options.IMODE = 3 m.options.MAX_ITER = 1000 m.options.SOLVER=1 #Limit max lnuc weeks m.Equation(sum(x8)<=6) m.Maximize(m.sum(simu_total_volume)) m.solve(disp = True) #Objective : -5127.34945104756 Now If I simply change "m.Equation(sum(x8)<=6)" to "m.Equation(sum(x8)==6)", it returns a better solution (-5638.55528892101): m = GEKKO(remote=False) m.options.NODES = 3 m.options.IMODE = 3 m.options.MAX_ITER = 1000 m.options.SOLVER=1 #Limit max lnuc weeks m.Equation(sum(x8)==6) m.Maximize(m.sum(simu_total_volume)) m.solve(disp = True) # Objective : -5638.55528892101 Given that "6" falls in the range of <=6, is there a reason why Gekko wouldn't try to go all the way up to 6 here? Posting the full code/values would also be difficult given size/scale of the problem, so appreciate any feedback based on this. | Gekko solvers are gradient-based Nonlinear Programming (NLP) solvers that find local minima. There are a few strategies to help Gekko find the global optimum. Here is an example that can help with this important topic of local vs. global minima. The following script produces the local (not global) solution of (7,0,0) with objective 951.0. from gekko import GEKKO m = GEKKO(remote=False) x = m.Array(m.Var,3,lb=0) x1,x2,x3 = x m.Minimize(1000-x1**2-2*x2**2-x3**2-x1*x2-x1*x3) m.Equations([8*x1+14*x2+7*x3==56, x1**2+x2**2+x3**2>=25]) m.solve(disp=False) res=[print(f'x{i+1}: {xi.value[0]}') for i,xi in enumerate(x)] print(f'Objective: {m.options.objfcnval:.2f}') There are gradient-based methods for global optimization found in solvers such as BARON, genetic algorithms, simulated annealing, etc. An easy approach is to perform a multi-start method with different initial conditions (guesses) over a grid search or intelligently with a Bayesian approach to more intelligently search if the number of initial guesses is small. Multi-Start with Parallel Threading A grid search is easy to parallelize to simultaneously start from multiple locations. Here is the same optimization problem where the global solution is found with a parallelized gekko optimizations. 
import numpy as np import threading import time, random from gekko import GEKKO class ThreadClass(threading.Thread): def __init__(self, id, xg): s = self s.id = id s.m = GEKKO(remote=False) s.xg = xg s.objective = float('NaN') # initialize variables s.m.x = s.m.Array(s.m.Var,3,lb=0) for i in range(3): s.m.x[i].value = xg[i] s.m.x1,s.m.x2,s.m.x3 = s.m.x # Equations s.m.Equation(8*s.m.x1+14*s.m.x2+7*s.m.x3==56) s.m.Equation(s.m.x1**2+s.m.x2**2+s.m.x3**2>=25) # Objective s.m.Minimize(1000-s.m.x1**2-2*s.m.x2**2-s.m.x3**2 -s.m.x1*s.m.x2-s.m.x1*s.m.x3) # Set solver option s.m.options.SOLVER = 1 threading.Thread.__init__(s) def run(self): print('Running application ' + str(self.id) + '\n') self.m.solve(disp=False,debug=0) # solve # Retrieve objective if successful if (self.m.options.APPSTATUS==1): self.objective = self.m.options.objfcnval else: self.objective = float('NaN') self.m.cleanup() # Optimize at mesh points x1_ = np.arange(0.0, 10.0, 3.0) x2_ = np.arange(0.0, 10.0, 3.0) x3_ = np.arange(0.0, 10.0, 3.0) x1,x2,x3 = np.meshgrid(x1_,x2_,x3_) threads = [] # Array of threads # Load applications id = 0 for i in range(x1.shape[0]): for j in range(x1.shape[1]): for k in range(x1.shape[2]): xg = (x1[i,j,k],x2[i,j,k],x3[i,j,k]) # Create new thread threads.append(ThreadClass(id, xg)) # Increment ID id += 1 # Run applications simultaneously as multiple threads # Max number of threads to run at once max_threads = 8 for t in threads: while (threading.activeCount()>max_threads): # check for additional threads every 0.01 sec time.sleep(0.01) # start the thread t.start() # Check for completion mt = 10.0 # max time (sec) it = 0.0 # time counter st = 1.0 # sleep time (sec) while (threading.active_count()>=3): time.sleep(st) it = it + st print('Active Threads: ' + str(threading.active_count())) # Terminate after max time if (it>=mt): break # Initialize array for objective obj = np.empty_like(x1) # Retrieve objective results id = 0 id_best = 0; obj_best = 1e10 for i in range(x1.shape[0]): for j in range(x1.shape[1]): for k in range(x1.shape[2]): obj[i,j,k] = threads[id].objective if obj[i,j,k]<obj_best: id_best = id obj_best = obj[i,j,k] id += 1 print(obj) print(f'Best objective {obj_best}') print(f'Solution {threads[id_best].m.x}') Bayesian Optimization Another approach is to intelligently search by mapping the initial conditions to the performance of the optimized solution. It searches in areas where it expects the best performance or where it hasn't been tested and the uncertainty is high. 
from gekko import GEKKO from hyperopt import fmin, tpe, hp from hyperopt import STATUS_OK, STATUS_FAIL # Define the search space for the hyperparameters space = {'x1': hp.quniform('x1', 0, 10, 3), 'x2': hp.quniform('x2', 0, 10, 3), 'x3': hp.quniform('x3', 0, 10, 3)} def objective(params): m = GEKKO(remote=False) x = m.Array(m.Var,3,lb=0) x1,x2,x3 = x x1.value = params['x1'] x2.value = params['x2'] x3.value = params['x3'] m.Minimize(1000-x1**2-2*x2**2-x3**2-x1*x2-x1*x3) m.Equations([8*x1+14*x2+7*x3==56, x1**2+x2**2+x3**2>=25]) m.options.SOLVER = 1 m.solve(disp=False,debug=False) obj = m.options.objfcnval if m.options.APPSTATUS==1: s=STATUS_OK else: s=STATUS_FAIL m.cleanup() return {'loss':obj, 'status': s, 'x':x} best = fmin(objective, space, algo=tpe.suggest, max_evals=50) sol = objective(best) print(f"Solution Status: {sol['status']}") print(f"Objective: {sol['loss']:.2f}") print(f"Solution: {sol['x']}") Both multi-start methods find the global solution: Objective: 936.00 Solution: [[0.0] [0.0] [8.0]] If you determine that equality constraints always produce the global optimum, then you could also switch from inequality constraints to enforce the constraint boundary. Additional information on these multi-start methods is in the Engineering Optimization course on the page for Global Optimization and Solver Tuning. | 2 | 1 |
78,203,276 | 2024-3-21 | https://stackoverflow.com/questions/78203276/pandas-converting-dataframe-column-to-int-following-dataframe-manipulation | Running pandas 1.5.3. Also attempted on pandas 2.2.1. I am loading in data from a CSV that looks like such: 888|0|TEST ACCOUNT 888|1|Sample Ship-to 802001|0|COMPANY 1 802001|1|COMPANY 1 INC 802001|2|COMPANY 1 BALL K802001|3|COMPANY 1 With columns CUSNO, S2, and NAME, in that order. I have a script that loads in the data, then checks the first column and makes sure it is of int64 in the resulting DataFrame. If not, the script is supposed to convert the column to numeric and drop the rows that have NaN in them. So, before: CUSNO S2 NAME 0 888 0 TEST ACCOUNT 1 888 1 Sample Ship-to 2 802001 0 COMPANY 1 3 802001 1 COMPANY 1 INC 4 802001 2 COMPANY 1 BALL 5 K802001 3 COMPANY 1 Then run: cl['CUSNO'] = pd.to_numeric(cl.CUSNO, errors='coerce') cl = cl.dropna(axis='index', how='any') After: CUSNO S2 NAME 0 888.0 0 TEST ACCOUNT 1 888.0 1 Sample Ship-to 2 802001.0 0 COMPANY 1 3 802001.0 1 COMPANY 1 INC 4 802001.0 2 COMPANY 1 BALL I want to make CUSNO a column full of int64 or similar types, but when I run company_locations['CUSNO'].dtype it keeps returning float64. (Realistically, I want to get rid of the decimal point at the end of every entry in CUSNO and thought typecasting to int or similar would work best.) I've tried a number of solutions, namely: cl['CUSNO'] = pd.to_numeric(cl.CUSNO, errors='coerce').dropna().astype(int) # replacing the earlier line 1 of the script cl['CUSNO'] = cl.astype({'CUSNO': 'int'}) cl['CUSNO'] = cl['CUSNO'].apply(pd.to_numeric, errors='coerce') I've tried inplace=True for line 2 in the script above. I've also tried solutions from pandas: to_numeric for multiple columns, Change column type in pandas, and Python - pandas column type casting with "astype" is not working. Perhaps I'm missing something here? Do I have to copy the new DataFrame to a new variable or something? | I think simple (after dropping the NaNs): df["CUSNO"] = df["CUSNO"].astype(int) print(df) Prints: CUSNO S2 NAME 0 888 0 TEST ACCOUNT 1 888 1 Sample Ship-to 2 802001 0 COMPANY 1 3 802001 1 COMPANY 1 INC 4 802001 2 COMPANY 1 BALL | 2 | 1 |
78,202,373 | 2024-3-21 | https://stackoverflow.com/questions/78202373/how-to-use-layered-conditional-constraints-in-gekko | I'm trying to implement conditional logic in Gekko using "if3" but am unsure how to successfully layer 2 conditions at different levels of granularity. "x1" is vector of binary values (0/1) that controls when an alternative rhs value should be used on element i to constrain x2 and x3. "x2" is a vector of floats where I want to make the lower and upper bounds dynamic for element i based on the binary values in "x1" above. If the value for x1 for element i is = 1, I want to use "window_lnuc_min_promo_price" (vector of same length) as the lower bound and "window_lnuc_max_promo_price" as the upper bound. If the value for x1 for element i is = 0, I want to use "min_promo_price" as the lower bound and "max_promo_price" as the upper bound. Similarly, "x3" is a vector of floats where I want to apply the same logic but just to the lower bound using values from "window_lnuc" when element i in x1 is = 1 and values from "lnuc" when it is = 0. Lastly, I want to limit how many times X1 can be = 1 (4 in the example below). This would mean the alternative values are limited to 4 occurrences in total. The problem I think I'm having is that because x1 is a variable with a 0-1 range, the optimizer is changing the default "0" values in "lnuc_weeks" (which I don't want it to do). I want the optimizer to basically keep anything that is 0 in "lnuc_weeks" as is and only select a maximum of 4 values from the elements in "lnuc_weeks" that are = 1 initially. There is probably a better way to write this, but any help/feedback is appreciated. The full solution is a bit long to display for reproducibility of output, but hopefully the above/below sufficiently describe the problem. x1 = m.Array(m.Var,(n), integer=True) #LNUC weeks i = 0 for xi in x1: xi.value = lnuc_weeks[i] xi.lower = 0 xi.upper = 1 i += 1 x2 = m.Array(m.Var,(n)) #Blended SRP i = 0 for xi in x2: xi.value = blended_srp[i] xi.lower = m.if3((x1[i]) - 1, min_promo_price[i], window_lnuc_min_promo_price[i]) xi.upper = m.if3((x1[i]) - 1, max_promo_price[i], window_lnuc_max_promo_price[i]) i += 1 x3 = m.Array(m.Var,(n)) #Blended NUC i = 0 for xi in x3: xi.value = blended_nuc[i] xi.lower = m.if3((x1[i]) - 1, lnuc[i], window_lnuc[i]) xi.upper = 10 i += 1 #Limit max lnuc weeks m.Equation(sum(x1)<=4) | The .lower and .upper bounds are defined when the model is initialized and do not change to reflect newly optimized values. To implement these, use an inequality expression. Use a switching point of 0.5 instead of 1 to avoid numerical issues with an integer value >1 or >=1. The solver tolerance is 1e-6 by default so a value of 0.999999 is considered the same as 1.000001 for convergence of equations. i = 0 for xi in x2: xi.value = blended_srp[i] m.Equation(xi >= m.if3((x1[i]) - 0.5, min_promo_price[i], window_lnuc_min_promo_price[i])) m.Equation(xi <= m.if3((x1[i]) - 0.5, max_promo_price[i], window_lnuc_max_promo_price[i])) i += 1 and i = 0 for xi in x3: xi.value = blended_nuc[i] m.Equation(xi >= m.if3((x1[i]) - 0.5, lnuc[i], window_lnuc[i])) xi.upper = 10 i += 1 The select of any four elements with m.Equation(sum(x1)<=4) is correct. | 2 | 1 |
78,200,862 | 2024-3-21 | https://stackoverflow.com/questions/78200862/how-may-i-combine-multiple-adapters-in-python-requests | I need to make multiple calls to a web endpoint that is somewhat unreliable, so I have put together a timeout/retry strategy that I issue to a requests.Session() object as an adapter. However, I also need to mount this same endpoint using a client PKCS12 certificate and a public certificate authority (verify), which I can easily perform with the requests_pkcs12 package - this also creates an adapter. So, adapter 1 looks like this, (pieced together with things I found across the internet): # timeout.py from requests.adapters import HTTPAdapter DEFAULT_TIMEOUT = 30 class TimeoutAdapter(HTTPAdapter): def __init__(self, *args, **kwargs): self.timeout = DEFAULT_TIMEOUT if "timeout" in kwargs: self.timeout = kwargs["timeout"] super().__init__(*args, **kwargs) def send(self, request, **kwargs): timeout = kwargs.get("timeout") if timeout is None: kwargs["timeout"] = self.timeout return super().send(request, **kwargs) ## set up adapter if __name__ == "__main__": from urllib3.util.retry import Retry u = 'https://some-webservice.com' s = requests.Session() retry_strategy = Retry( total=10, status_forcelist=[429, 500, 502, 503, 504], backoff_factor=1) s.mount(u, adapter=TimeoutAdapter(max_retries=retry_strategy) Then, the second adapter looks like this: from requests import Session from requests_pkcs12 import Pkcs12Adapter pki_adapter = Pkcs12Adapter( pkcs12_filename='path/to/cert.p12', pkcs12_password='cert-password') u = 'https://some-webservice.com' s = requests.Session() s.mount(u, adapter=pki_adapter) Both work lovely by themselves, but how would I go about merging them into a single adapter so that the session uses the PKI to authenticate, but also honors the timeout/retry strategy? | Assuming you want the default timeout behavior from TimeoutAdapter, then since TimeoutAdapter and Pkcs12Adapter both inherit from HTTPAdapter, you can have the former inherit from the latter (single inheritance). For example: # timeout.py from requests import Session from requests_pkcs12 import Pkcs12Adapter DEFAULT_TIMEOUT = 30 class TimeoutAdapter(Pkcs12Adapter): def __init__(self, *args, **kwargs): self.timeout = DEFAULT_TIMEOUT if "timeout" in kwargs: self.timeout = kwargs["timeout"] super().__init__(*args, **kwargs) def send(self, request, **kwargs): timeout = kwargs.get("timeout") if timeout is None: kwargs["timeout"] = self.timeout return super().send(request, **kwargs) ## set up adapter if __name__ == "__main__": from urllib3.util.retry import Retry retry_strategy = Retry( total=10, status_forcelist=[429, 500, 502, 503, 504], backoff_factor=1) timeout_adapter = TimeoutAdapter( max_retries=retry_strategy, pkcs12_filename='path/to/cert.p12', pkcs12_password='cert-password' ) u = 'https://some-webservice.com' s = requests.Session() s.mount(u, adapter=timeout_adapter) | 2 | 2 |
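If the timeout logic should stay reusable with plain HTTPAdapter sessions too, a mixin is another way to compose the two, relying on the MRO the same way the single-inheritance version above relies on Pkcs12Adapter forwarding the usual HTTPAdapter keyword arguments. A sketch (class names are made up):

    from requests.adapters import HTTPAdapter
    from requests_pkcs12 import Pkcs12Adapter

    class TimeoutMixin:
        def __init__(self, *args, timeout=30, **kwargs):
            self.timeout = timeout
            super().__init__(*args, **kwargs)

        def send(self, request, **kwargs):
            if kwargs.get("timeout") is None:
                kwargs["timeout"] = self.timeout
            return super().send(request, **kwargs)

    class TimeoutPkcs12Adapter(TimeoutMixin, Pkcs12Adapter):
        pass

    class PlainTimeoutAdapter(TimeoutMixin, HTTPAdapter):
        pass

Usage mirrors the accepted answer: TimeoutPkcs12Adapter(max_retries=retry_strategy, pkcs12_filename='path/to/cert.p12', pkcs12_password='cert-password', timeout=60) mounted on the session.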
78,191,777 | 2024-3-20 | https://stackoverflow.com/questions/78191777/how-to-find-min-cost-for-element-selection-from-a-sequence-of-adjacent-pairs | Given an array of integers (with at least two elements), I need to choose at least one element of each adjacent pair of elements in the array in a way that costs me the least. Then, return the cost and elements chosen. For example, [50, 30, 40, 60, 10, 30, 10] We choose 30 for the pair (50, 30), and for the pair (30, 40). Then we choose 40 for the pair (40, 60) and 10 for the pairs (60, 10), (10, 30). Lastly, 10 for the pair (30, 10). So we got 30 + 40 + 10 + 10 = 90. Another example, [60, 100, 70] There are two possible selections: [60, 70] or [100]. But, the optimal solution would be [100] for a total of 100, which is less than 60 + 70. So, the algorithm should choose 100. My issue is that I only succeeded in making a code that returns the lowest cost without saving the elements used. My Code in Python arr = [50, 30, 40, 60, 10, 30, 10] min_sum = [0] * len(arr) min_sum[0] = arr[0] min_sum[1] = arr[1] for i in range(2, len(arr)): choice1 = min_sum[i-1] + arr[i] choice2 = min_sum[i-2] + arr[i] min_sum[i] = min(choice1, choice2) res = min(min_sum[-1], min_sum[-2]) print(res) | Calling cost(i) the cost of the optimal solution when considering the array up to element i included only, there is a simple recurrence formula for this problem: cost(i) = min( arr[i] + cost(i-1), arr[i-1] + cost(i-2), ) For instance, the cost for array [50, 30, 40, 60, 10, 30, 10] is the minimum of 10 + cost for [50, 30, 40, 60, 10, 30] 30 + cost for [50, 30, 40, 60, 10] The base cases for this recurrence are: cost(0) = 0 cost(1) = 0 cost(2) = min(arr[0],arr[1]) This recurrence relation leads to a simple iterative code: def cost(a): if len(a) <= 1: return 0 elif len(a) == 2: return min(a) cost_iminus2 = 0 cost_iminus1 = min(a[:2]) for i in range(2,len(a)): cost_i = min( arr[i] + cost_iminus1, arr[i-1] + cost_iminus2, ) cost_iminus2, cost_iminus1 = cost_iminus1, cost_i return cost_i You can easily amend this code to also remember which elements were used in the sum: def selection(a): if len(a) < 2: return 0, [] elif len(a) == 2: return min(a), [min(a)] cost_iminus2, selection_iminus2 = 0, [] cost_iminus1, selection_iminus1 = min(a[:2]), [min(a[:2])] for i in range(2,len(a)): cost_i = min( a[i] + cost_iminus1, a[i-1] + cost_iminus2, ) if cost_i == a[i] + cost_iminus1: selection_i = [*selection_iminus1, a[i]] # need to copy elif cost_i == a[i-1] + cost_iminus2: selection_i = selection_iminus2 # no need to copy selection_i.append(a[i-1]) else: raise ValueError("unreachable branch") cost_iminus2, cost_iminus1 = cost_iminus1, cost_i selection_iminus2, selection_iminus1 = selection_iminus1, selection_i return cost_i, selection_i Output: >>> selection([50, 30, 40, 60, 10, 30, 10]) (90, [30, 40, 10, 10]) >>> selection([60,100,70]) (100, [100]) | 3 | 0 |
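For small inputs, an exponential brute-force checker is a handy way to confirm the recurrence and the two examples above; the function name is illustrative:

    from itertools import combinations

    def brute_force(a):
        best = None
        for r in range(len(a) + 1):
            for idx in combinations(range(len(a)), r):
                chosen = set(idx)
                # every adjacent pair must contribute at least one chosen element
                if all(i in chosen or i + 1 in chosen for i in range(len(a) - 1)):
                    total = sum(a[i] for i in idx)
                    if best is None or total < best:
                        best = total
        return best

    assert brute_force([50, 30, 40, 60, 10, 30, 10]) == 90
    assert brute_force([60, 100, 70]) == 100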
78,196,569 | 2024-3-20 | https://stackoverflow.com/questions/78196569/extract-words-from-one-df-column-assign-to-another-column | I have two columns in a dataframe: Pests and FieldComment. If the value of Pests is listed as 'None', then I'd like to search the FieldComment column for specific words and overwrite what's in the Pests column. If none of the words are found in the FieldComment column, the Pests column can remain as 'None'. Example: pests_list = ['Spiders', 'Rodents', 'Ants', 'Honey Bees'] Pests FieldComment Spiders Performed service. None Performed service for reported rodents. The above would ideally turn into this: Pests FieldComment Spiders Performed service. Rodents Performed service for reported rodents. This is what I've tried so far but I can't quite get it: for w in df['FieldComment'].str.split(): for p in pests_list: if w.str.lower() == p.str.lower(): df['Pests'] = p I've also tried: df.loc[df['Pests'] == 'None', "Pests"] = *[pest for pest in pest_list if pest in df['FieldComment']] And finally: df.loc[df['Pests'] == 'None', "Pests"] = df.loc[df['Pests'] == 'None', "Pests"].apply(lambda x: pest for pest in pest_list if pest in df['FieldComment'] else 'None') | Convert the pests list into a set. Create a set with the words from FieldComment. Get the intersection of both sets and fill column Pests where it is null. pests_set = set([p.lower() for p in pests_list]) df.loc[df["Pests"].isna(), "Pests"] = df["FieldComment"].apply( lambda x: ", ".join( set(x.strip(".").lower().split()).intersection(pests_set) ).capitalize() ) Pests FieldComment 0 Spiders Performed service. 1 Rodents Performed service for reported rodents. This solution will join pest names with , if there is more than one in the FieldComment column. For this dataframe: Pests FieldComment 0 Spiders Performed service. 1 None Performed service for rodents and spiders. Result would be: Pests FieldComment 0 Spiders Performed service. 1 Spiders, rodents Performed service for rodents and spiders. Note that if the dataframe has a str 'None', and not the None keyword, you would have to modify the above code slightly, replacing df["Pests"].isna() with df["Pests"] == 'None' | 2 | 3 |
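A regex-based variant that skips the per-row set splitting; the main difference is that it also copes with multi-word entries such as 'Honey Bees'. Column and list names reuse the ones above; the .title() casing is an assumption about the desired output format:

    import re
    import pandas as pd

    pests_list = ['Spiders', 'Rodents', 'Ants', 'Honey Bees']
    pattern = r'\b(' + '|'.join(re.escape(p) for p in pests_list) + r')\b'

    found = (
        df['FieldComment']
        .str.findall(pattern, flags=re.IGNORECASE)
        .apply(lambda hits: ', '.join(dict.fromkeys(h.title() for h in hits)))
    )
    df.loc[df['Pests'].isna() & found.ne(''), 'Pests'] = found   # use .eq('None') if 'None' is a string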
78,199,615 | 2024-3-21 | https://stackoverflow.com/questions/78199615/how-to-filter-groups-max-and-min-rows-using-transform | I am working on a task wherein, I need to filter in rows that contain the group's max and min values and filter out other rows. This is to understand how the values change at each decile. np.random.seed(0) df = pd.DataFrame({'id' : range(1,31), 'score' : np.random.uniform(size = 30)}) df id score 0 1 0.548814 1 2 0.715189 2 3 0.602763 3 4 0.544883 4 5 0.423655 5 6 0.645894 6 7 0.437587 7 8 0.891773 8 9 0.963663 9 10 0.383442 10 11 0.791725 11 12 0.528895 12 13 0.568045 13 14 0.925597 14 15 0.071036 15 16 0.087129 16 17 0.020218 17 18 0.832620 18 19 0.778157 19 20 0.870012 20 21 0.978618 21 22 0.799159 22 23 0.461479 23 24 0.780529 24 25 0.118274 25 26 0.639921 26 27 0.143353 27 28 0.944669 28 29 0.521848 29 30 0.414662 I then add the decile column using: df['decile'] = pd.qcut(df['score'], 10, labels=False) Now I tried both: df.transform((df['score'] == df.groupby('decile')['score'].min()) or (df['score'] == df.groupby('decile')['score'].max())) and df.transform(df['score'].eq(df.groupby('decile')['score'].min().values).any() or df['score'].eq(df.groupby('decile')['score'].max().values).any()) But both are not working, can someone please help with this. | Use DataFrame.transform with 2 masks by Series.eq chained by bitwise OR - |: g = df.groupby('decile')['score'] out = df[df['score'].eq(g.transform('min')) | df['score'].eq(g.transform('max'))] print (out) id score decile 1 2 0.715189 6 2 3 0.602763 5 3 4 0.544883 4 5 6 0.645894 5 6 7 0.437587 2 9 10 0.383442 1 10 11 0.791725 7 11 12 0.528895 3 12 13 0.568045 4 13 14 0.925597 8 15 16 0.087129 0 16 17 0.020218 0 17 18 0.832620 7 19 20 0.870012 8 20 21 0.978618 9 22 23 0.461479 3 23 24 0.780529 6 24 25 0.118274 1 27 28 0.944669 9 29 30 0.414662 2 Details: print (df.assign(min=df['score'].eq(g.transform('min')), max=df['score'].eq(g.transform('max')), both = lambda x: x['min'] | x['max'])) id score decile min max both 0 1 0.548814 4 False False False 1 2 0.715189 6 True False True 2 3 0.602763 5 True False True 3 4 0.544883 4 True False True 4 5 0.423655 2 False False False 5 6 0.645894 5 False True True 6 7 0.437587 2 False True True 7 8 0.891773 8 False False False 8 9 0.963663 9 False False False 9 10 0.383442 1 False True True 10 11 0.791725 7 True False True 11 12 0.528895 3 False True True 12 13 0.568045 4 False True True 13 14 0.925597 8 False True True 14 15 0.071036 0 False False False 15 16 0.087129 0 False True True 16 17 0.020218 0 True False True 17 18 0.832620 7 False True True 18 19 0.778157 6 False False False 19 20 0.870012 8 True False True 20 21 0.978618 9 False True True 21 22 0.799159 7 False False False 22 23 0.461479 3 True False True 23 24 0.780529 6 False True True 24 25 0.118274 1 True False True 25 26 0.639921 5 False False False 26 27 0.143353 1 False False False 27 28 0.944669 9 True False True 28 29 0.521848 3 False False False 29 30 0.414662 2 True False True Another idea with lambda function, but solution is slow: out = df[df.groupby('decile')['score'].transform(lambda x: x.eq(x.min()) | x.eq(x.max()))] np.random.seed(0) df = pd.DataFrame({'id' : range(1,30001), 'score' : np.random.uniform(size = 30000)}) df['decile'] = pd.qcut(df['score'], 5000, labels=False) In [44]: %%timeit ...: g = df.groupby('decile')['score'] ...: out = df[df['score'].eq(g.transform('min')) | df['score'].eq(g.transform('max'))] ...: 3.27 ms Β± 239 Β΅s per loop (mean Β± std. dev. 
of 7 runs, 100 loops each) In [45]: %%timeit ...: df[df.groupby('decile')['score'].transform(lambda x: x.eq(x.min()) | x.eq(x.max()))] ...: 1.45 s ± 12.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | 2 | 3 |
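For completeness, the same mask can also be built from a single groupby aggregation followed by an index-aligned join, reusing df and the decile column from above; this stays fully vectorized as well:

    stats = df.groupby('decile')['score'].agg(['min', 'max'])
    merged = df.join(stats, on='decile')
    out = df[merged['score'].eq(merged['min']) | merged['score'].eq(merged['max'])]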
78,194,029 | 2024-3-20 | https://stackoverflow.com/questions/78194029/is-it-feasible-to-create-virtual-machines-in-kubevirt-through-python-client | KubeVirt is an extension of Kubernetes, enabling it to manage virtual machines.I have deployed a KubeVirt cluster. I want to use Python Client to operate KubeVirt to manage virtual machines. Is this feasible? I use Kubernetes Python Client.But it does not seem to support the resource type of virtual machine.My cluster can create virtual machines through kubectl apply - f testvm.yaml Python code: from kubernetes import client, config, utils config.load_kube_config('config') k8s_client = client.ApiClient() file = 'yamlfile\\testvm.yaml' utils.create_from_yaml(k8s_client, file, verbose=True) error: AttributeError: module 'kubernetes.client' has no attribute 'KubevirtIoV1Api' testvm.yaml: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: testvm spec: running: false template: metadata: labels: kubevirt.io/size: small kubevirt.io/domain: testvm spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - name: default masquerade: {} resources: requests: memory: 64M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: quay.io/kubevirt/cirros-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userDataBase64: SGkuXG4= --- apiVersion: v1 kind: Service metadata: name: vm-ssh spec: type: NodePort ports: - protocol: TCP port: 24 targetPort: 22 name: test selector: kubevirt.io/domain: testvm kubevirt.io/size: small | You're trying to create resources that use the kubevirt.io/v1 api. This isn't a core Kubernetes API; it's provided by the kubevirt custom resource definition, which means it's not supported by any of the static parts of the Python kubernetes client. But don't worry, that's what the DynamicClient is for! You can find some examples of working with the dynamic client here. Your code would end up looking something like: import yaml from kubernetes import config, dynamic from kubernetes.client import api_client # Creating a dynamic client client = dynamic.DynamicClient( api_client.ApiClient(configuration=config.load_kube_config()) ) # fetching the configmap api api = client.resources.get(api_version="kubevirt.io/v1", kind="VirtualMachine") path = 'testvm.yaml' with open(path) as fd: resource = yaml.safe_load(fd) api.create(body=resource) | 2 | 1 |
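As a hypothetical follow-up (the VM name and namespace are assumptions, and KubeVirt also offers virtctl or the start subresource for this), the same dynamic client can flip spec.running on the manifest above to start the VM:

    vm_api = client.resources.get(api_version="kubevirt.io/v1", kind="VirtualMachine")
    vm_api.patch(
        name="testvm",
        namespace="default",
        body={"spec": {"running": True}},
        content_type="application/merge-patch+json",
    )
    vm = vm_api.get(name="testvm", namespace="default")
    print(vm.spec.running)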
78,192,336 | 2024-3-20 | https://stackoverflow.com/questions/78192336/how-to-find-cycles-among-an-array-of-lines | I have a minimal reproducible example. Input data: coordinates = [(50.0, 50.0, 100.0, 50.0), (70.0, 50.0, 75.0, 40.0, 80.0, 50.0)] This is what these lines form in the drawing: The task is to find a loop, i.e. a closed region, and display their boundaries in the terminal. In this case, the result should be as follows (it is represented as [(line1), (line2)]): [(70.0, 50.0, 75.0, 40.0), (75.0, 40.0, 80.0, 50.0), (80.0, 50.0, 70.0, 50.0)] I tried the cycle_basic method from the networkx library. But in this method it is necessary that the cycle vertices touch the vertices of other cycles. But here the vertices can touch any place of another cycle | IIUC, you want the induced / chordless_cycles. If so, here is a primal approach with networkx: points = MultiPoint(list(batched(chain.from_iterable(coordinates), 2))) lines = [ line for coo in coordinates for pop in pairwise(batched(coo, 2)) for gc in [split(LineString(pop), points)] for line in gc.geoms ] G = gdf_to_nx( gpd.GeoDataFrame(geometry=lines), multigraph=False, approach="primal" ) cycles = [ [ tuple(chain.from_iterable(pair)) for pair in pairwise(cyc) ] for cyc in nx.chordless_cycles(G) for cyc in [cyc + [cyc[0]]] ] Output (cycles, clockwise): [ [ # top/green cycle (70.0, 50.0, 80.0, 50.0), (80.0, 50.0, 77.5, 45.0), (77.5, 45.0, 72.5, 45.0), (72.5, 45.0, 70.0, 50.0), ], [ # bottom/red cycle (72.5, 45.0, 77.5, 45.0), (77.5, 45.0, 75.0, 40.0), (75.0, 40.0, 72.5, 45.0), ], ] Used input (coordinates) : from itertools import chain, batched, pairwise import geopandas as gpd from momepy import gdf_to_nx import networkx as nx from shapely import LineString, MultiPoint from shapely.ops import split coordinates = [ (50.0, 50.0, 100.0, 50.0), (70.0, 50.0, 75.0, 40.0, 80.0, 50.0), (72.5, 45.0, 77.5, 45.0) ] Full code Old answer | 3 | 2 |
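A lighter sketch of the same idea without geopandas/momepy: let shapely node the linework (unary_union splits every line where it touches another) and hand the resulting segments straight to networkx. Requires networkx >= 3.1 for chordless_cycles and Python >= 3.10 for itertools.pairwise:

    from itertools import pairwise
    import networkx as nx
    from shapely import LineString
    from shapely.ops import unary_union

    coordinates = [(50.0, 50.0, 100.0, 50.0), (70.0, 50.0, 75.0, 40.0, 80.0, 50.0)]
    lines = [LineString(list(zip(c[::2], c[1::2]))) for c in coordinates]

    noded = unary_union(lines)                     # lines split at shared points
    G = nx.Graph()
    for seg in getattr(noded, "geoms", [noded]):   # handles Multi- and single LineString
        for a, b in pairwise(seg.coords):
            G.add_edge(tuple(a), tuple(b))

    for cycle in nx.chordless_cycles(G):
        ring = cycle + [cycle[0]]
        print([a + b for a, b in pairwise(ring)])  # [(70.0, 50.0, 75.0, 40.0), ...]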
78,191,983 | 2024-3-20 | https://stackoverflow.com/questions/78191983/what-does-freq-infer-do-in-pandas-shift-how-does-it-differ-from-freq-d | I didn't see the difference between freq in .shift() df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45], "Col2": [13, 23, 18, 33, 48], "Col3": [17, 27, 22, 37, 52]}, index=pd.date_range("2020-01-01", "2020-01-05")) df.shift(periods=2, freq="infer") Col1 Col2 Col3 2020-01-03 10 13 17 2020-01-04 20 23 27 2020-01-05 15 18 22 2020-01-06 30 33 37 2020-01-07 45 48 52 return the same as df.shift(periods=2, freq="d") Col1 Col2 Col3 2020-01-03 10 13 17 2020-01-04 20 23 27 2020-01-05 15 18 22 2020-01-06 30 33 37 2020-01-07 45 48 52 Can someone explain what the freq='infer' parameter does? | freq='infer' means that the frequency is inferred from the index metadata. freq DateOffset, tseries.offsets, timedelta, or str, optional Offset to use from the tseries module or time rule (e.g. βEOMβ). If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you would like to extend the index when shifting and preserve the original data. If freq is specified as βinferβ then it will be inferred from the freq or inferred_freq attributes of the index. If neither of those attributes exist, a ValueError is thrown. Assuming this example with an Index that has a frequency of 2 weeks: df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45], "Col2": [13, 23, 18, 33, 48], "Col3": [17, 27, 22, 37, 52]}, index=pd.date_range("2020-01-01", periods=5, freq='2W')) df.index # DatetimeIndex(['2020-01-05', ..., '2020-03-01'], dtype='datetime64[ns]', # freq='2W-SUN') # <- the important part df.shift(periods=2, freq='infer') will shift 2 periods of 2W = 4 weeks: df.shift(periods=2, freq='infer') Col1 Col2 Col3 2020-02-02 10 13 17 # 4W after 2020-01-05 2020-02-16 20 23 27 2020-03-01 15 18 22 2020-03-15 30 33 37 2020-03-29 45 48 52 # 4W after 2020-03-01 In comparison a simple df.shift(periods=2) just shifts by 2 rows: df.shift(periods=2) Col1 Col2 Col3 2020-01-05 NaN NaN NaN 2020-01-19 NaN NaN NaN 2020-02-02 10.0 13.0 17.0 2020-02-16 20.0 23.0 27.0 2020-03-01 15.0 18.0 22.0 In your example, the default frequency of date_range is D, so this indeed gives the same output. Let's change it to freq='3D': df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45], "Col2": [13, 23, 18, 33, 48], "Col3": [17, 27, 22, 37, 52]}, index=pd.date_range("2020-01-01", periods=5, freq='3D')) # shift by 2 * D df.shift(periods=2, freq='D') Col1 Col2 Col3 2020-01-03 10 13 17 # 2*1D after 2020-01-01 2020-01-05 20 23 27 2020-01-07 15 18 22 2020-01-09 30 33 37 2020-01-11 45 48 52 # shift by 2 * 3D = 6D df.shift(periods=2, freq='infer') Col1 Col2 Col3 2020-01-07 10 13 17 # 2*3D after 2020-01-01 2020-01-10 20 23 27 2020-01-13 15 18 22 2020-01-16 30 33 37 2020-01-19 45 48 52 | 2 | 2 |
78,192,256 | 2024-3-20 | https://stackoverflow.com/questions/78192256/decorated-function-call-now-showing-warning-for-incorrect-arguments-in-pycharm | I'm having some problems when static type checking decorated functions. For instance, when I use an incorrect function argument name or type, I don't get any warning or error hints in the IDE, only at runtime. What steps will reproduce the problem? Add a decorator to a function Use a incorrect argument name or type What is the expected result? PyCharm should highlight the wrong argument name or wrong type. What happens instead? The attribute name is not highlighted. Additional Info This is with Python 3.10 and PyCharm 2023.3.4 Code sample follows: def myDecorator(func): def wrapper(*args, **kwargs): return func(*args, **kwargs) return wrapper @myDecorator def myFunc(x: int, y: int): print(x+y) myFunc(z=4, x="something") # Expecting "z=4" and x="something" to be highlighted Edit It's now working with regular functions. However, I'm encountering the same problem if these functions are defined inside of a Class, as seen below: from typing import ParamSpec from typing import TypeVar from typing import Callable P = ParamSpec("P") T = TypeVar("T") def myDecorator(func: Callable[P, T]) -> Callable[P, T]: def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: return func(*args, **kwargs) return wrapper class myClass: @myDecorator def myFunc2(self, x: int, y: int) -> None: print(x + y) @myDecorator def another_function(self): self.myFunc2(x=3) # showing type hinting: P and highlighting "x=3" as a wrong argument. | Looks similar to this PyCharm issue. If you are willing to annotate the decorator with ParamSpec you can do ... from typing import ParamSpec from typing import TypeVar from typing import Callable P = ParamSpec("P") T = TypeVar("T") def myDecorator(func: Callable[P, T]) -> Callable[P, T]: def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: return func(*args, **kwargs) return wrapper @myDecorator def myFunc(x: int, y: int) -> None: print(x + y) myFunc(z=4, x="something") # Expecting "z=4" and x="something" to be highlighted This way PyCharm will report the mistake. | 2 | 2 |
78,191,270 | 2024-3-20 | https://stackoverflow.com/questions/78191270/type-conversion-using-type-hinting | Say, I have a function that should do some internal stuff and display the provided text: def display_text(text: str): ... print(text) There's also a class with a convert() method: class String: def __init__(self, string: str): self.string = string def convert(self): return self.string Now, can you type hint text argument in display_text with String, but if the provided parameter will be str, call convert and assign the returned value to text? Like that: def display_text(text: String): ... print(text) It should be done without any additional code in a display_text function, just with type hinting. I've seen that in some libs but couldn't figure out how does it work. I tried searching through some libraries' (e.g. discord.py converters) code, searching similar questions on StackOverflow, only found out about typing.Protocol but still no idea how this conversion is done. | Can you type hint text argument in display_text with String, but if the provided parameter will be str, call convert and assign the returned value to text without any additional code in a display_text function, just with type hinting No, not just with type hinting, since type annotations are absolutely inert at runtime and do nothing (and with from __future__ import annotations, they're not even evaluated). If this is a trick question and that "in a display_text function" was the catch, then yes, you could @decorate your functions (or decorate a class holding them, or use a metaclass) to wrap functions using type annotations to cast arguments if needed. An example of such a decorator: import dataclasses import inspect from functools import wraps from typing import Any def convert_args(fn): @wraps(fn) def wrapper(*args, **kwargs): sig = inspect.signature(fn) bound_args = sig.bind(*args, **kwargs) bound_args.apply_defaults() for name, val in bound_args.arguments.items(): param = sig.parameters[name] if hasattr(param.annotation, "convert"): bound_args.arguments[name] = param.annotation.convert(val) return fn(*bound_args.args, **bound_args.kwargs) return wrapper @dataclasses.dataclass class Shouty: val: str @classmethod def convert(cls, val: Any): return Shouty(val=str(val).upper()) @dataclasses.dataclass class Shorten: val: str @classmethod def convert(cls, val: Any): return Shorten(val=str(val)[::2]) @convert_args def display_text(arg1: Shouty, arg2: Shorten): print(locals()) display_text("hello", "world, this is an example") This prints out {'arg1': Shouty(val='HELLO'), 'arg2': Shorten(val='wrd hsi neape')} | 2 | 2 |
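One caveat with reading param.annotation directly: under from __future__ import annotations the annotations are plain strings, so the hasattr check silently stops matching. typing.get_type_hints resolves them back to real classes; a variant of the decorator above with otherwise identical behaviour:

    import inspect
    import typing
    from functools import wraps

    def convert_args(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            sig = inspect.signature(fn)
            hints = typing.get_type_hints(fn)      # resolves string annotations
            bound_args = sig.bind(*args, **kwargs)
            bound_args.apply_defaults()
            for name, val in bound_args.arguments.items():
                annotation = hints.get(name)       # None for unannotated parameters
                if hasattr(annotation, "convert"):
                    bound_args.arguments[name] = annotation.convert(val)
            return fn(*bound_args.args, **bound_args.kwargs)
        return wrapper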
78,190,287 | 2024-3-20 | https://stackoverflow.com/questions/78190287/how-to-enforce-specific-elements-in-a-vector-to-be-in-an-optimization-solution-i | I'm trying to generate an optimal combination of p records in a vector of length n while simultaneously ensuring (constraining) that specific elements in the vector are included in the solution set p (based on a binary value for now) I have the equation "simu_total_volume" below that does work for ensuring the solution set never exceeds p records, but I am unable to figure out how to modify this equation to ensure that specific elements i in the vector are included in the solution set (even if non-optimal). x7 is the binary vector that selects which elements are included based on p. "Labor Day" is a vector of 0s except for one element that is equal to 1 (this corresponds to element I want to include in the solution set). I can enforce this vector to sum to 1 per the below, but am unsure how to integrate it into "simu_total_volume" so that the solution conforms to it. Apologies for not including all relevant information for reproducibility, but the full solution is very large. simu_total_volume = [m.Intermediate(( (m.max2(0,base_volume[i]*(m.exp(total_vol_fedi[i])-1)) * x3[i] + m.max2(0,base_volume[i]*(m.exp(total_vol_feao[i])-1)) * x4[i] + m.max2(0,base_volume[i]*(m.exp(total_vol_diso[i])-1)) * x5[i] + m.max2(0,base_volume[i]*(m.exp(total_vol_tpro[i])-1)) * x6[i]) + base_volume[i]) * x7[i]) for i in range(n)] labor_day = [m.Intermediate(x8[i] * el_cppg['holiday_labor_day_flag'].values[i]) for i in range(n)] #Require labor day to be in output m.Equation(sum(labor_day) == 1) #Limit max output m.Equation(sum(x7)<=p) m.Maximize(m.sum(simu_total_volume)) m.options.SOLVER=1 try: m.solve(disp = True) except: continue | There are multiple ways to constrain particular elements within an array. Here is a complete example that optimizes the elements of X to minimize the total cost. Each element X can be [0,1] and two can be selected with sum(X)==2. from gekko import GEKKO m = GEKKO(remote=False) X = m.Array(m.Var,7,lb=0,ub=1,integer=True) c = [1.2,0.95,1.3,1.0,0.8,1.25,1.4] m.Equation(sum(X)==2) m.Minimize(sum([X[i]*c[i] for i in range(7)])) m.options.SOLVER=1 m.solve() print(f'X: {X}') The solver picks the two lowest that correspond to c elements of 0.95 and 0.8: c = [1.2,0.95,1.3,1.0,0.8,1.25,1.4] X: [[0.0] [1.0] [0.0] [0.0] [1.0] [0.0] [0.0]] Here are a few ways to constrain the solution such as enforcing that the last element is always selected: Add an equation m.Equation(X[-1]==1) Set the upper and lower bounds to the specified solution X[-1].lower=1 X[-1].upper=1 Use the m.fix() function m.fix(X[-1],1) Add an objective as a soft constraint Use this method if adding a hard constraint gives an infeasible solution. This will encourage the selection of a preferred option, but won't enforce it if the equations aren't satisfied. m.Minimize(100*(X[-1]-1)**2) Results All of these methods return the correct solution that selects the last element (not optimal) and the least costly element. X: [[0.0] [0.0] [0.0] [0.0] [1.0] [0.0] [1.0]] | 2 | 1 |
78,190,336 | 2024-3-20 | https://stackoverflow.com/questions/78190336/what-is-a-fast-an-elegant-way-to-transform-a-column-with-a-bunch-a-duplicated-w | Lets assume we have a pandas data frame df with the column 'A', and the following non-vectorized transformation function: def transform_a_to_b(a): ... return b Then if we wanted to create the column 'B' using the transform on 'A', we could do the following: df['B'] = df['A'].apply(lambda x: transform_a_to_b(a)) What is a better way to do this if the transform takes a non-trivial amount of time, there are many duplicate values in the column 'A', and the transform always maps the duplicate a values to the same b value? Also assume there are more columns in the data frame so I do want to map the values back to every single row in the original data frame. I came up with the following solution, but I feel like something simpler should exist. transform_counts = 0 def transform_a_to_b(a): global transform_counts # Keep count of how many times this was called transform_counts += 1 return 2 * a # Test dataframe with several duplicates df = pd.DataFrame({ 'A': [1, 3, 2, 2, 3, 3, 2, 3, 1, 1, 1], }) # My solution: # Perform transformation only 3 times for the 3 unique A values and preserve order df = df.merge( df['A'].drop_duplicates().apply(lambda a: pd.Series( data=[a, transform_a_to_b(a)], index=['A', 'B'], )), on='A', how='left', ) After the function transform_counts is 3 and the df is as follows: A B 0 1 2 1 3 6 2 2 4 3 2 4 4 3 6 5 3 6 6 2 4 7 3 6 8 1 2 9 1 2 10 1 2 I am not apposed to caching, if that is simplest, but I cannot change the original transformation definition. | Your approach is good, I would just use map+unique in place of merge+drop_duplicates. df['B'] = df['A'].map({k: transform_a_to_b(k) for k in df['A'].unique()}) A pythonic alternative would be to. cache your function: from functools import cache transform_counts = 0 @cache def transform_a_to_b(a): global transform_counts # Keep count of how many times this was called transform_counts += 1 return 2 * a df = pd.DataFrame({ 'A': [1, 3, 2, 2, 3, 3, 2, 3, 1, 1, 1], }) df['B'] = df['A'].map(transform_a_to_b) print(df) Output: A B 0 1 2 1 3 6 2 2 4 3 2 4 4 3 6 5 3 6 6 2 4 7 3 6 8 1 2 9 1 2 10 1 2 | 3 | 3 |
78,190,327 | 2024-3-20 | https://stackoverflow.com/questions/78190327/issue-linking-css-to-html-file | I can't figure out how to link a CSS file to my HTML file. I'm am using Visual Studio Code, Python, and Flask to do this My project directory is like this: - templates - home.html - style.css - app.py home.html <!DOCTYPE html> <html> <head> <link href='https://fonts.googleapis.com/css?family=Raleway:400, 600' rel='stylesheet' type='text/css'> <link href='assets/style.css' rel='stylesheet' type='text/css'/> </head> <body> ... </body> style.css html, body { margin: 0; padding: 0; } header { background-color: #333333; position: fixed; width: 100%; z-index: 5; } h1 { font-family: 'Times New Roman', Times, serif; color:cadetblue } app.py from flask import Flask, render_template app = Flask(__name__, template_folder="templates") @app.route("/home") @app.route("/") def index(): return render_template("home.html") I tried using the templates folder, it didn't work I tried without using the templates folder, it didn't work. No error is raised, but the page just doesn't has the CSS style, for example the h1 tags keep being black with the default font, and they should be blue with the Times New Roman Font | In flask, your style.css file should be inside the 'static' folder. Then in your Html file you should link it like this: <link href="{{ url_for('static', filename='style.css') }}" rel="stylesheet" type="text/css"/> | 4 | 4 |
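For reference, the layout Flask expects by default, since Flask(__name__) serves files from a ./static folder at the /static URL; keeping the stylesheet under a different folder name is possible but has to be configured explicitly (the commented line below is optional and only needed for that case):

    project/
        app.py
        static/
            style.css
        templates/
            home.html

    # app.py
    from flask import Flask
    app = Flask(__name__)   # ./static is served at /static automatically
    # optional: keep the stylesheet in ./assets instead, matching the original <link>
    # app = Flask(__name__, static_folder="assets", static_url_path="/assets")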
78,184,296 | 2024-3-19 | https://stackoverflow.com/questions/78184296/pivot-a-dataframe-using-polars-pivot-like-pivot-longer-in-r | Coming from R I am remaking some exercises that helped me a lot. So Trying to recreate this R code: wide_data <- read_csv('https://raw.githubusercontent.com/rafalab/dslabs/master/inst/extdata/life-expectancy-and-fertility-two-countries-example.csv') new_tidy_data <- pivot_longer(wide_data, `1960`:`2015`, names_to = "year", values_to = "fertility") The data looks like this (I don't know how to paste the output) but has 113 columns: First is country, and then 1960_fertility 1960_life_expectancy 1961_fertility 1961_life_expectancy ..... 2015_fertility 2015_life_expectancy and 2 rows germany, south Korea expected result: head(new_tidy_data) #> # A tibble: 6 Γ 3 #> country year fertility #> <chr> <chr> <dbl> #> 1 Germany 1960 2.41 #> 2 Germany 1961 2.44 #> 3 Germany 1962 2.47 #> 4 Germany 1963 2.49 #> 5 Germany 1964 2.49 #> # βΉ 1 more row So far my code looks like this: import polars as pl import polars.selectors as cs df = pl.read_csv('https://raw.githubusercontent.com/rafalab/dslabs/master/inst/extdata/life-expectancy-and-fertility-two-countries-example.csv') df.pivot() Thanks!! | In polars you've got pivot which makes things wider and then unpivot to make them longer. Unpivot won't split up the single columns with a delimitator into two columns for you, you've got to do that yourself. That looks like this... ( df .unpivot(index='country', value_name='fertility') .with_columns( pl.col('variable').str.splitn('_',2).struct.rename_fields(['year','var']) ) .unnest('variable') .filter(pl.col('var')=='fertility') .drop('var') .sort('country') ) The way polars expressions work is that for every input there is just one output but there's a type called a struct which can be nested with as many columns as is needed. In that way we splitn the variable column into the year part and the var part. We can convert that struct into two regular columns with unnest which is dispatched at the df level rather than as an expression. 
On second though, since you want to filter for only fertility anyway, you can pre-filter and rename the columns before you unpivot like this: ( df .select( pl.col("^(country|.+fertility)$") .name.map(lambda x: x.replace("_fertility", ""))) .unpivot(index='country', variable_name='year', value_name='fertility') .sort('country') ) shape: (112, 3) βββββββββββββββ¬βββββββ¬ββββββββββββ β country β year β fertility β β --- β --- β --- β β str β str β f64 β βββββββββββββββͺβββββββͺββββββββββββ‘ β Germany β 1960 β 2.41 β β Germany β 1961 β 2.44 β β Germany β 1962 β 2.47 β β Germany β 1963 β 2.49 β β Germany β 1964 β 2.49 β β β¦ β β¦ β β¦ β β South Korea β 2011 β 1.29 β β South Korea β 2012 β 1.3 β β South Korea β 2013 β 1.32 β β South Korea β 2014 β 1.34 β β South Korea β 2015 β 1.36 β βββββββββββββββ΄βββββββ΄ββββββββββββ Lastly If you wanted a column for fertility and life_expectancy, you'd need to combine the first approach with a pivot at the end like this: ( df .unpivot(index='country') .with_columns( pl.col('variable').str.splitn('_',2).struct.rename_fields(['year','var']) ) .unnest('variable') .pivot(on='var', index=['country','year']) .sort('country') ) shape: (112, 4) βββββββββββββββ¬βββββββ¬ββββββββββββ¬ββββββββββββββββββ β country β year β fertility β life_expectancy β β --- β --- β --- β --- β β str β str β f64 β f64 β βββββββββββββββͺβββββββͺββββββββββββͺββββββββββββββββββ‘ β Germany β 1960 β 2.41 β 69.26 β β Germany β 1961 β 2.44 β 69.85 β β Germany β 1962 β 2.47 β 70.01 β β Germany β 1963 β 2.49 β 70.1 β β Germany β 1964 β 2.49 β 70.66 β β β¦ β β¦ β β¦ β β¦ β β South Korea β 2011 β 1.29 β 80.6 β β South Korea β 2012 β 1.3 β 80.7 β β South Korea β 2013 β 1.32 β 80.9 β β South Korea β 2014 β 1.34 β 80.9 β β South Korea β 2015 β 1.36 β 81.0 β βββββββββββββββ΄βββββββ΄ββββββββββββ΄ββββββββββββββββββ | 3 | 3 |
78,170,063 | 2024-3-15 | https://stackoverflow.com/questions/78170063/python-tibber-module-just-return-latest-value-not-a-loop | Ultimately I want to write a value into a database. I found a script to output desired data. However it does return it in a loop - but I only need one single value each time I trigger the script, not a loop. import tibber account = tibber.Account("tokenstring") home = account.homes[0] @home.event("live_measurement") async def process_data( data): print(data.power) home.start_live_feed(user_agent="Homey/10.0.0") print(home.event("live_measurement"))` This will output values in a loop - but I only need one value each time the code runs. Help! | The start_live_feed function starts an infinite loop if called with default arguments. In other words, Python will not run any code after you call this function. The function takes an argument exit_condition. This is a function that is run after every time data is received. If it returns a truthy value, the loop will end and Python will continue running whatever comes after the line home.start_live_feed(...). So, to run it a single time, you just need to pass a function that always returns True. import tibber account = tibber.Account(TOKEN_STRING) home = account.homes[0] @home.event("live_measurement") async def process_data(data): # Database storing logic here home.start_live_feed(user_agent="Homey/10.0.0", exit_condition=lambda data: True) | 2 | 1 |
78,169,916 | 2024-3-15 | https://stackoverflow.com/questions/78169916/after-i-unpivot-a-polars-dataframe-how-can-i-pivot-it-back-to-its-original-form | import polars as pl df = pl.DataFrame({ 'A': range(1,4), 'B': range(1,4), 'C': range(1,4), 'D': range(1,4) }) print(df) shape: (3, 4) βββββββ¬ββββββ¬ββββββ¬ββββββ β A β B β C β D β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β βββββββͺββββββͺββββββͺββββββ‘ β 1 β 1 β 1 β 1 β β 2 β 2 β 2 β 2 β β 3 β 3 β 3 β 3 β βββββββ΄ββββββ΄ββββββ΄ββββββ df_long = df.unpivot( variable_name="recipe", value_name="revenue") print(df_long) shape: (12, 2) ββββββββββ¬ββββββββββ β recipe β revenue β β --- β --- β β str β i64 β ββββββββββͺββββββββββ‘ β A β 1 β β A β 2 β β A β 3 β β B β 1 β β B β 2 β β β¦ β β¦ β β C β 2 β β C β 3 β β D β 1 β β D β 2 β β D β 3 β ββββββββββ΄ββββββββββ It seems I need to add an index in order to pivot df_long back into the original form of df? Is there no way to pivot a polars dataframe without adding an index? df_long = df_long.with_columns(index=pl.col("revenue").cum_count().over("recipe")) df_long.pivot( on='recipe', index='index', values='revenue', aggregate_function='first' ) shape: (3, 5) βββββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββββ β index β A β B β C β D β β --- β --- β --- β --- β --- β β u32 β i64 β i64 β i64 β i64 β βββββββββͺββββββͺββββββͺββββββͺββββββ‘ β 1 β 1 β 1 β 1 β 1 β β 2 β 2 β 2 β 2 β 2 β β 3 β 3 β 3 β 3 β 3 β βββββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββββ In R, I can perform the equivalent to unpivot and pivot without indexing, and was seeking the same functionality in Python. df_pandas = df.to_pandas() library(tidyverse) library(reticulate) df_long <- py$df_pandas |> pivot_longer( everything(), names_to = 'recipe', values_to = 'value' ) df_long |> pivot_wider( names_from='recipe', values_from='value' ) |> unnest(cols = c(A,B,C,D)) | It seems I need to add an index in order to pivot df_long back into the original form of df? How would it otherwise decide that the entries in df_long with revenue=1 are all on the same row, and not for recipe=A the revenue is 2, for recipe=B, the revenue is 3, etc? You are inferring that from how you defined df, but from just df_long as defined in your question, that is not possible. If this is a requirement, than I suggest you add the index to df before using unpivot, so that df_long carries that information: >>> df_long=df.with_row_index().unpivot(index=["index"], variable_name="recipe",value_name="revenue") >>> df_long shape: (12, 3) βββββββββ¬βββββββββ¬ββββββββββ β index β recipe β revenue β β --- β --- β --- β β u32 β str β i64 β βββββββββͺβββββββββͺββββββββββ‘ β 0 β A β 1 β β 1 β A β 2 β β 2 β A β 3 β β 0 β B β 1 β β 1 β B β 2 β β β¦ β β¦ β β¦ β β 1 β C β 2 β β 2 β C β 3 β β 0 β D β 1 β β 1 β D β 2 β β 2 β D β 3 β βββββββββ΄βββββββββ΄ββββββββββ The pivot can then use the index column to reconstruct the original. | 2 | 2 |
78,188,399 | 2024-3-19 | https://stackoverflow.com/questions/78188399/is-there-parallelism-inside-ollama | Below Python program is intended to translate large English texts into French. I use a for loop to feed a series of reports into Ollama. from functools import cached_property from ollama import Client class TestOllama: @cached_property def ollama_client(self) -> Client: return Client(host=f"http://127.0.0.1:11434") def translate(self, text_to_translate: str): ollama_response = self.ollama_client.generate( model="mistral", prompt=f"translate this French text into English: {text_to_translate}" ) return ollama_response['response'].lstrip(), ollama_response['total_duration'] def run(self): reports = ["reports_text_1", "reports_text_2"....] # avearge text size per report is between 750-1000 tokens. for each_report in reports: try: translated_report, total_duration = self.translate( text_to_translate=each_report ) print(f"Translated text:{translated_report}, Time taken:{total_duration}") except Exception as e: pass if __name__ == '__main__': job = TestOllama() job.run() docker command to run ollama: docker run -d --gpus=all --network=host --security-opt seccomp=unconfined -v report_translation_ollama:/root/.ollama --name ollama ollama/ollama My question is: When I run this script on V100 and H100, I don't see a significant difference in execution time. I've avoided parallelism, thinking that Ollama might internally use parallelism to process. However, when I check with the htop command, I see only one core being used. Am I correct in my understanding? I am a beginner in NLP, so any help or guidance on how to organize my code (e.g., using multithreading to send Ollama requests) would be appreciated. | Flags OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS were added in v0.1.33. You can set them when starting the Ollama server: OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ollama serve Available server settings OLLAMA_MAX_LOADED_MODELS - The maximum number of models that can be loaded concurrently provided they fit in available memory. The default is 3 * the number of GPUs or 3 for CPU inference. OLLAMA_NUM_PARALLEL - The maximum number of parallel requests each model will process at the same time. The default will auto-select either 4 or 1 based on available memory. OLLAMA_MAX_QUEUE - The maximum number of requests Ollama will queue when busy before rejecting additional requests. The default is 512. More information Concurrency is no longer experimental as of v0.2.0 and newer. Leave these variables unset so the default algorithm is used. If you want more (or less) parallelism per model than 4 (the default) then you can adjust OLLAMA_NUM_PARALLEL. The default max loaded models is 3x GPU count. According to performance testing run by the Ollama maintainers, setting >3 often leads to degraded performance if all the models are being accessed concurrently, so make sure to run your own tests before settling on an alternative max models setting. Source: faq.md#how-does-ollama-handle-concurrent-requests | 3 | 4 |
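The server-side settings above only pay off if the client actually issues concurrent requests; the original for loop is strictly sequential. A sketch of a client-side fan-out with the standard library (the worker count should roughly mirror OLLAMA_NUM_PARALLEL; the function name is illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def run_parallel(job: "TestOllama", reports: list[str], workers: int = 4):
        # each call still uses the blocking ollama Client; the threads overlap the waiting
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for translated, duration in pool.map(job.translate, reports):
                print(f"Translated text:{translated}, Time taken:{duration}")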
78,150,913 | 2024-3-13 | https://stackoverflow.com/questions/78150913/typeerror-locator-object-is-not-callable-using-first-in-playwright | I have a button on a webpage which looks like this: <button rpl="" aria-controls="comment-children" aria-expanded="true" aria-label="Toggle Comment Thread" class="text-neutral-content-strong bg-neutral-background overflow-visible w-md h-md button-small px-[var(--rem6)] button-plain icon items-center justify-center button inline-flex "> <!--?lit$747127195$--><!----><span class="flex items-center justify-center"> <!--?lit$747127195$--><span class="flex"><!--?lit$747127195$--><svg rpl="" fill="currentColor" height="16" icon-name="leave-outline" viewBox="0 0 20 20" width="16" xmlns="http://www.w3.org/2000/svg"> <!--?lit$747127195$--><!--?lit$747127195$--><path d="M14 10.625H6v-1.25h8v1.25ZM20 10a10 10 0 1 0-10 10 10.011 10.011 0 0 0 10-10Zm-1.25 0A8.75 8.75 0 1 1 10 1.25 8.76 8.76 0 0 1 18.75 10Z"></path><!--?--> </svg></span> <!--?lit$747127195$--> </span> <!--?lit$747127195$--><!--?--><!----><!----><!----> </button> I want to press it with Playwright in my Python code. I wrote: page.locator('button[aria-label="Toggle Comment Thread"]').first().click() Sadly, it is giving me the error TypeError: 'Locator' object is not callable. When I don't use first(), it says Error: Error: strict mode violation: locator("button[aria-label=\"Toggle Comment Thread\"]") resolved to 3 elements: which is right, there are 3 of the buttons on the page. I just want to click the first one. | Try .first rather than .first(): page.locator('button[aria-label="Toggle Comment Thread"]').first.click() Runnable example: from playwright.sync_api import sync_playwright # 1.40.0 html = '<button aria-label="Toggle Comment Thread"></button>' with sync_playwright() as p: browser = p.chromium.launch() page = browser.new_page() page.set_content(html) page.locator('button[aria-label="Toggle Comment Thread"]').first.click() browser.close() That said, get_by_role is preferred: page.get_by_role("button", name="Toggle Comment Thread").first.click() And try to avoid .first; find a stricter way to uniquely identify the element (possibly using one of its other many properties) rather than relying on element ordering on the page, which is liable to change unexpectedly. This isn't the only occurrence of this API difference. page.url is also a property, not a function. | 3 | 4 |
78,168,618 | 2024-3-15 | https://stackoverflow.com/questions/78168618/python-3-11-with-no-builtin-pip-module-whats-going-on | .venv/bin/python -m pip uninstall mysqlclient /Users/anentropic/dev/project/.venv/bin/python: No module named pip and .venv/bin/python Python 3.11.5 (main, Sep 18 2023, 15:04:25) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import pip Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'pip' I thought pip was a builtin for recent Python versions, how is it that I have one where it doesn't exist? | So... pip isn't actually a "builtin" in recent Python versions What changed is it began to be bundled with Python installs by default, so it's usually available but not necessarily. For whatever reason (perhaps because my venv was created by pdm) mine did not come with it. This can be fixed by: .venv/bin/python -m ensurepip which installs it After that .venv/bin/python -m pip install etc work ok. | 4 | 2 |
78,190,021 | 2024-3-19 | https://stackoverflow.com/questions/78190021/intersecting-geometrycollection-with-polygon | I am using Shapely to intersect geographical data and Pyshp to write the results to a shapefile. I don't completely understand how "intersection" works. First, I create some points and calculate the Voronoi polygons this way: # (1) from shapely import LineString, MultiPoint, Point, Polygon, MultiPolygon from shapely.geometry import shape from shapely.constructive import voronoi_polygons from shapely import intersection import shapefile output_shapefile = "/tmp/simple_test_01.shp" points = MultiPoint([Point(1,1), Point(2,2), Point(4,3), Point(2,4), Point(2,5), Point(6,5), Point(5,4), Point(7,1)]) myVoronoi = voronoi_polygons(points) #>>> myVoronoi #<GEOMETRYCOLLECTION (POLYGON ((-5 -5, -5 4.625, -4.5 4.5, 0 3, 4 -1, 4 -5, -...> n = 0 with shapefile.Writer(output_shapefile, shapeType=5) as w: w.field("ID", "N") for geom in myVoronoi.geoms: w.shape(geom) w.record(n) n += 1 When I draw the shapefile on ArcMap, I get this (green points highlighted by me, not in the shapefile): I create a polygon with which I will intersect the Voronoi polygons: # (2) polygon_to_intersect = Polygon([Point(0,0), Point(2,8), Point(4,3.7), Point(10,4.5), Point(5,-4), Point(0,0)]) output_shapefile = "/tmp/simple_test_02.shp" with shapefile.Writer(output_shapefile, shapeType=5) as w: w.field("ID", "N") w.shape(polygon_to_intersect) w.record(1) On ArcMap it is like this (shown with red outline): I perform the intersection: # (3) intersected = intersection(myVoronoi, polygon_to_intersect) >>> intersected <POLYGON ((0.6 2.4, 0.75 3, 1.125 4.5, 2 8, 3.553 4.66, 3.739 4.261, 4 3.7, ...> output_shapefile = "/tmp/simple_test_03.shp" n = 0 with shapefile.Writer(output_shapefile) as w: w.field("ID", "N") w.shape(intersected) w.record(n) simple_test_03.shp is a single polygon. Its vertices can be seen here: I was expecting "intersection" to give me the Voronoi polygons intersected with polygon_to_intersect. However, I am getting polygon_to_intersect with new vertices, in the places where the polygon's exterior intersects the Voronoi polygons. >>> intersected <POLYGON ((0.6 2.4, 0.75 3, 1.125 4.5, 2 8, 3.553 4.66, 3.739 4.261, 4 3.7, ...> "intersected" is a Polygon instead of a MultiPolygon, why? However, if I do this: # (4) output_shapefile = "/tmp/simple_test_04.shp" multipol = [] for geom in myVoronoi.geoms: multipol.append(intersection(geom, polygon_to_intersect)) n = 0 with shapefile.Writer(output_shapefile) as w: w.field("ID", "N") for pol in multipol: w.shape(pol) w.record(n) n += 1 I get this result, which is what I wanted in the first place: Each colored area is a Voronoi polygon intersected with "polygon_to_intersect". Why do I need to do (4) to get what I want, when I was expecting (3) to work that way? Is it the fact that myVoronoi is a GEOMETRYCOLLECTION instead of a POLYGON or MULTIPOLYGON? I tried to convert myVoronoi to a MultiPolygon prior to the intersection: # (5) output_shapefile = "/tmp/simple_test_05.shp" myVoronoi2 = MultiPolygon(myVoronoi) >>> myVoronoi2 <MULTIPOLYGON (((-5 -5, -5 4.625, -4.5 4.5, 0 3, 4 -1, 4 -5, -5 -5)), ((-5 1...> intersected2 = intersection(myVoronoi2, polygon_to_intersect) >>> intersected2 <POLYGON ((0.6 2.4, 0.75 3, 1.125 4.5, 2 8, 3.553 4.66, 3.739 4.261, 4 3.7, ...> n = 0 with shapefile.Writer(output_shapefile) as w: w.field("ID", "N") w.shape(intersected2) w.record(n) >>> intersected2 == intersected True but the result is the same as in (3). 
This is a simplified example. In my real situation, "myVoronoi" has 18000+ polygons and "polygon_to_intersect" is much bigger. (3) is not working as I want and (4) works but has performance issues. [EDIT] This is my real situation. My Voronoi polygons: Some zoom in: Some more zoom: And this is my "polygon_to_intersect" (outline in orange): With some zoom: As mentioned by @Pieter, the intersection works with "union semantics", which means my Voronoi polygons get unioned away first, and then intersected with my polygon_to_intersect. But that is not what I want. I want this (each Voronoi polygon in the result file; some of them intersected at the polygon_to_intersect boundaries): So I used this code: output_shapefile = 'intersected.shp' n = 0 w = 0 i = 0 skipped = 0 with shapefile.Writer(output_shapefile) as writer: writer.field("ID", "N") for poly in myVoronoi: n += 1 if not (n % 500): print(f'Processed {n}, within = {w}, intersected = {i}, skipped = {skipped}') # If the Voronoi polygon is fully inside polygon_to_intersect, don't waste time intersecting # Just write the polygon into the output file if within(poly, polygon_to_intersect): writer.shape(poly) writer.record(n) w += 1 else: # Perform the intersection intersected = poly.intersection(polygon_to_intersect) if intersected: writer.shape(intersected) writer.record(n + 50000) i += 1 else: skipped += 1 This works fine but it took almost 25 minutes to complete. There are 18.600+ polygons in myVoronoi and polygon_to_intersect has several hundreds of thousands of vertices. Pieter's suggestion of using a Rtree didn't help much here in terms of performance. tree = shapely.STRtree(voronoi_polys) intersecting_idx = tree.query(polygon_to_intersect, predicate="intersects") intersections = shapely.intersection(voronoi_polys.take(intersecting_idx), polygon_to_intersect) Because it performs many more intersections than in my code above. So I went and created a much simplified "polygon_to_intersect": which has only 76 vertices. And modified my code: # Read the new simplified shapefile into memory simplified_file = 'simplified.shp' with shapefile.Reader(simplified_file, shapeType=5) as w: for shapeRec in w.shapeRecords(): polygon_to_intersect_simplified = shape(shapeRec.shape) prepare(polygon_to_intersect_simplified) output_shapefile = 'intersected.shp' n = 0 w = 0 i = 0 skipped = 0 with shapefile.Writer(output_shapefile) as writer: writer.field("ID", "N") for poly in myVoronoi: n += 1 if not (n % 500): print(f'Processed {n}, within = {w}, intersected = {i}, skipped = {skipped}') # If the Voronoi polygon is fully inside the simplified shape, don't waste time intersecting # Just write the polygon into the output file if within(poly, polygon_to_intersect_simplified): writer.shape(poly) writer.record(n) w += 1 else: # Perform the intersection with the real (not simplified) polygon intersected = poly.intersection(polygon_to_intersect) if intersected: writer.shape(intersected) writer.record(n + 50000) i += 1 else: skipped += 1 "within" is computed 18.600+ times against the much simplified polygon, which is faster. Polygons within are not intersected, so time is saved there. Then, those polygons not "within", are intersected with the real polygon_to_intersect. The modified code runs in under 4 minutes. 
Without the simplified polygon: within = 16257, intersected = 2357. With the simplified polygon: within = 13669 <--- very fast against the simplified polygon; intersected = 4945 <--- more intersections than the previous code, but overall the execution time goes from 25 minutes to 4 minutes. | 3 | 1 |
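A vectorized version of the same "only clip the boundary polygons" idea, combining the STRtree suggestion with the within shortcut (contains is the tree-query counterpart of the per-polygon within check); kept and clipped can then be written to the shapefile as before:

    import numpy as np
    import shapely

    voronoi_polys = np.array(myVoronoi.geoms)      # or np.array(list_of_polygons) if already a list
    tree = shapely.STRtree(voronoi_polys)
    shapely.prepare(polygon_to_intersect)          # speeds up the repeated predicate tests

    inside_idx = tree.query(polygon_to_intersect, predicate="contains")   # fully inside: keep as-is
    crossing_idx = np.setdiff1d(
        tree.query(polygon_to_intersect, predicate="intersects"), inside_idx
    )
    kept = voronoi_polys.take(inside_idx)
    clipped = shapely.intersection(voronoi_polys.take(crossing_idx), polygon_to_intersect)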
78,162,630 | 2024-3-14 | https://stackoverflow.com/questions/78162630/storing-data-in-cells-as-a-function | I'm given a pandas dataframe (or even just a 2 dimensional array). Lets assume I have a variable a and one of the cells of the above dataframe (say the cell in place (0,0)) is a: df = pandas.DataFrame() df.at[0,0] = a How should I write the above code such that if the value of a changed then the cell at place (0,0) will change automatically? I tried what I wrote above. I also tried using lambda function. | What you ask for is not something you would want to do (as all the comments to your question point out). The reason is that in order to achieve this with "simple" types you would need pointers (the memory address where the actual value is stored) which is an extra piece of risky complexity which vanilla Python usually hides away. When you access an element by index with df.iloc[0,0] the pandas returns to you a copy of the actual value stored, which is what people manipulating data most people expect and not a memory address which is dangerous to expose and requires the extra hastle of having to dereference it to access its value. Thus there is no way to allias a variable (make to variables point to the same value). In case you really need this behaviour you could use a mutable data structure/object to store your value as a workaround (as @MichaelButscher explained). A simple way would be to use a list with a single value and simply access the value with mylist[0]. A more sophisticated way would be to define a custom class. If you work with numeric types the following is an option: class referred_number: def __init__(self, value=None): self._value = value @property def value(self): return self._value @value.setter def value(self, value): self._value = value def __add__(self, add_obj): self._value = self._value + add_obj.value return self def __sub__(self, sub_obj): self._value = self._value - sub_obj.value return self def __mult__(self, mult_obj): self._value = self._value * mult_obj.value return self def __div__(self, div_obj): self._value = self._value / div_obj.value return self def __str__(self): return str(self._value) This is underwhelming since not only you have lose all the power of pandas methods that work with numeric types and you have to a lot of dunder methods to recover just some of the basic behaviour for the default types. The following code will work as you wanted and will even be able to do some arithmetic operations. Besides the fact that we implemented the __ str__ will make any prints of the variable or the dataframe print the actual values. import pandas as pd df = pd.DataFrame() x = referred_number(1) df.at[0,0] = x df.at[0,1] = referred_number(5) print(f"df:\n{df} \n\nx:{x}\n") x.value = 5 print(f"df:\n{df} \n\nx:{x}\n") df[0] = df[0] + referred_number(4) print(f"df:\n{df} \n\nx:{x}\n") df[0] = df[0] + referred_number(3) print(f"df:\n{df} \n\nx:{x}\n") df[0] = df[0] + referred_number(2) print(f"df:\n{df} \n\nx:{x}\n") df[0] = df[0] + referred_number(2) print(f"df:\n{df} \n\nx:{x}\n") # Expected but dangerous behaviour since x is added twice!! df.at[0,2] = x df[0] = df[0] + referred_number(4) print(f"df:\n{df} \n\nx:{x}\n") You just have to be carefull of not assiging or operating with a regular number and use a number build with the constructor referred_number(...), e.g. referred_number(2) + 2 will yield an error. 
As you may have noticed, what you intended involves more work, loses functionality, and is error-prone, which is why it is considered a bad idea. | 3 | 1 |
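The answer's lightweight option (a one-element list as the mutable box) in a few lines, for reference:

    import pandas as pd

    a = [10]                        # one-element list acts as the mutable "box"
    df = pd.DataFrame({0: [a]})     # the cell stores a reference to that same list
    a[0] = 99                       # mutate through the outside name...
    print(df.at[0, 0][0])           # 99 -- the cell sees the change, it is the same object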
78,179,759 | 2024-3-18 | https://stackoverflow.com/questions/78179759/read-accdb-database-in-python-app-running-on-docker-container-alpine | I am trying and failing to read a local .accdb file in my Python 3.11 app, which is running in an python:3.11-alpine container. My Dockerfile executes without errors: FROM python:3.11-alpine EXPOSE 5001 ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 RUN apk update && apk add --no-cache gcc g++ musl-dev unixodbc-dev flex bison gawk COPY requirements.txt . RUN python -m pip install -r requirements.txt RUN apk add --no-cache git autoconf automake libtool gettext-dev make RUN git clone https://github.com/mdbtools/mdbtools.git WORKDIR /mdbtools RUN autoreconf -i -f RUN ./configure --with-unixodbc=/usr --disable-dependency-tracking RUN make RUN make install RUN echo -e "\n[MDBTools]\nDescription=MDBTools Driver\nDriver=/usr/local/lib/odbc/libmdbodbc.so" >> /etc/odbcinst.ini RUN apk add --no-cache nano WORKDIR /app COPY . /app RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app USER appuser CMD ["python", "server.py"] My Python script (accdb_test.py): import pyodbc import argparse parser = argparse.ArgumentParser(description='Connect to an Access database.') parser.add_argument('db_path', type=str, help='The path to the Access database') args = parser.parse_args() conn_str = ( r'DRIVER={MDBTools};' r'DBQ=' + args.db_path + ';' ) try: conn = pyodbc.connect(conn_str) print("Connection successful!") except pyodbc.Error as e: print("Failed to connect to the database:", e) I build the container connect to its terminal, than I run the script with this result: /app $ python accdb_test.py /app/input_examples/caesar/MODEL_13-16_R01.ACCDB ['MDBTools'] File not found File not found Unable to locate database Failed to connect to the database: ('HY000', 'The driver did not supply an error!') The path to the .accdb file is correct, I checked: /app $ ls -l /app/input_examples/caesar/MODEL_13-16_R01.ACCDB -rwxrwxrwx 1 appuser root 47116288 Mar 18 09:29 /app/input_examples/caesar/MODEL_13-16_R01.ACCDB | As far as I can tell, mdbtools expects a 2-component connection string, with only a single semicolon. Since you end your connection string with a semicolon, it is looking for a file named /app/input_examples/caesar/MODEL_13-16_R01.ACCDB; and cannot find that file. This can be seen here in the source, it's splitting the connection string into a maximum of 2 components, where any further separators and components get added to the file name. The fix is simple: remove the semicolon: conn_str = ( r'DRIVER={MDBTools};' r'DBQ=' + args.db_path ) | 3 | 3 |
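A minimal sketch of the fix described above: the connection string without the trailing semicolon. The helper name is mine and the database path is just the asker's example:

```python
import pyodbc

def connect_accdb(db_path: str) -> pyodbc.Connection:
    # No semicolon after DBQ: mdbtools splits the connection string on ';'
    # and would otherwise treat the separator as part of the file name.
    conn_str = "DRIVER={MDBTools};DBQ=" + db_path
    return pyodbc.connect(conn_str)

if __name__ == "__main__":
    conn = connect_accdb("/app/input_examples/caesar/MODEL_13-16_R01.ACCDB")
    print("Connection successful!")
    conn.close()
```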
78,162,041 | 2024-3-14 | https://stackoverflow.com/questions/78162041/dropping-elements-from-lists-in-a-nested-polars-column | How do I get this behaviour: (pl.Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_') .map_elements(lambda x: [y for y in x if y != 'remove']).list.join('_') ) Without using the slower map_elements? I have tried using .list.eval and pl.element() but I can't find anything that actually excludes elements from a list by name (i.e. by the word 'remove' in this case) | list.eval, in combination with filter, would work as follows: # list_eval (pl .Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_') .list.eval(pl.element().filter(pl.element() != 'remove')) ) That said, list.set_difference as suggested by @jqurious is the most straightforward and fastest: # list_set_difference (pl .Series(['abc_remove_def', 'remove_abc_def', 'abc_def_remove']).str.split('_') .list.set_difference(['remove']) ) Output: shape: (3,) Series: '' [list[str]] [ ["abc", "def"] ["abc", "def"] ["abc", "def"] ] Timings and differences: benchmark plots (not reproduced here) compared lists of 3 items, lists of 100 items with many duplicates, and lists of 100 items without duplicates. NB: the timings exclude the creation of the Series. Additionally, it is important to note that list.set_difference would also remove duplicated values. For instance on: s = pl.Series(['abc_remove_abc_def', 'remove_abc_def']).str.split('_') # output after set_difference shape: (2,) Series: '' [list[str]] [ ["abc", "def"] ["def", "abc"] ] # output for the other approaches shape: (2,) Series: '' [list[str]] [ ["abc", "abc", "def"] ["abc", "def"] ] | 2 | 2 |
78,180,128 | 2024-3-18 | https://stackoverflow.com/questions/78180128/build-palindrome-from-two-strings | I want to write a python function that does this efficiently: The function will take two strings, 'a' and 'b', and attempt to find the longest palindromic string that can be formed such that it is a concatenation of a non-empty substring of 'a' and a non-empty substring of 'b'. If there are multiple valid answers, it will return the lexicographically smallest one. If no such string can be formed, it will return '-1'. I have an inefficient solution that generates all the substrings of both strings, and then creates all possible concatenations whle tracking the longest which is a valid palindrome: def is_palindrome(word): """Check if a word is a palindrome.""" reversed_word = word[::-1] return word == reversed_word def all_substrings_of_word(word): """Generate all possible non-empty substrings of a given string.""" substrings = [] for sub_string_length in range(1, len(word) + 1): for i in range(len(word) - sub_string_length + 1): new_word = word[i:i + sub_string_length] substrings.append(new_word) return substrings def buildPalindrome(a, b): """Attempt to find the longest palindromic string created by concatenating a substring of `a` with a substring of `b`.""" sub_strings_a = all_substrings_of_word(a) sub_strings_b = all_substrings_of_word(b) # Generate all possible concatenations of substrings from `a` and `b` multiplexed_array = [ word_a + word_b for word_a in sub_strings_a for word_b in sub_strings_b] # Find the best palindrome (longest, then lexicographically smallest) best_palindrome = "" for word in multiplexed_array: if is_palindrome(word): if len(word) > len(best_palindrome): best_palindrome = word elif len(word) == len(best_palindrome) and word < best_palindrome: best_palindrome = word return best_palindrome if best_palindrome else "-1" print(buildPalindrome("bac", "bac")) # EXPECTED OUTPUT -- aba print(buildPalindrome("abc", "def")) # EXPECTED OUTPUT -- -1 print(buildPalindrome("jdfh", "fds")) # EXPECTED OUTPUT -- dfhfd Can I please get an explanation on how this can be improved? | You could take this approach: Build a trie for all substrings in b. A suffix tree would be even better, as it is more efficient. Consider all possible "centers" for potential palindromes in the string a. So these can be between two consecutive characters (when palindrome has an even size) or on a character (when palindrome has an odd size). For each of these centers do: Find the largest palindrome p at that center only considering string a extend p to the left as long as the added characters (in the order of being added) are a word in the trie of b. This is a potential solution. Compare it with the longest palindrome so far to retain the longest. If it is not possible to extend p in this way, then shorten p until a character is so removed that exists in b. In that case we have a potential solution. If in the latter case there are no characters in p that occur in b, then we have no suitable palindrome at the chosen center. Then turn the tables and apply the above procedure where a becomes the reversal of b, and b the reversal of a. This practically means we search for palindrome centers in the original b. 
Here is an implementation of that idea: # Of the two given strings choose the one that is longest, or if equal, comes first in lexical order def longest(x, y): return min((-len(x), x), (-len(y), y))[1] def buildTrie(s): trie = {} for i in range(len(s)): node = trie for j in range(i, len(s)): node = node.setdefault(s[j], {}) return trie def buildPalindromeTrincot(s1, s2): palin = "" # Two passes: one for where the center of a palindrome is in s1, one for the reverse case for a, b in ((s1, s2), (s2[::-1], s1[::-1])): # Build a trie for B substrings (could be suffixtree for better performance) trie = buildTrie(b) n = len(a) # Visit all possible centers in A for a potential solution of at least 2 characters for center in range(2*n-1, 0, -1): # Get the offsets of the innermost characters that must be equal # for a palindrome of at least two characters mid1 = (center - 1)//2 mid2 = (center + 2)//2 # Get largest palindrome at this center in A alone: # `left` will point to the left-neighboring character to that palindrome left = next((left for left, right in zip(range(mid1, 0, -1), range(mid2, n)) if a[left] != a[right]), max(0, mid1 + mid2 - n)) # Must extend the palindrome with a substring from B node = trie.get(a[left], None) if node is not None: # We can extend the palindrome using B for left in range(left-1, -1, -1): if a[left] not in node: left += 1 break node = node[a[left]] else: # See if we can drop characters from the palindrome in A # until we can replace one with the same character from B left = next((left for left in range(left+1, mid1+1) if a[left] in trie), None) if left is None: continue # No solution found here palin = longest(a[left:mid2] + a[left:mid1+1][::-1], palin) return palin or "-1" For input strings of around 40, this implementation runs 100 times faster than the original code you provided. For inputs of size 70, that becomes a factor of 1000 times. For inputs with strings of size 500, this implementation returns an answer in less than a second. There is still room for improvement, like: Using a suffix tree instead of a trie Exiting early when the future palindromes can never be longer than the one already found Have the center move from the center of a+b and "fan" outward, so that a larger palindrome is found sooner. For this you'll need to have both tries built first as you'll toggle between one and the other. But as the above code already brought a dramatic improvement, I didn't pursue any of these improvements. | 2 | 4 |
78,188,105 | 2024-3-19 | https://stackoverflow.com/questions/78188105/python-type-hints-what-should-i-use-for-a-variable-that-can-be-any-iterable | Consider the function: def mysum(x)->int: s = 0 for i in x: s += i return s The argument x can be list[int] or set[int], it can also be d.keys() where d is a dict, it can be range(10), as well as any other iterable, where the item is of type int. What is the correct type-hint for x? My python is 3.10+. | You can use typing.Iterable: from typing import Iterable def mysum(x: Iterable[int]) -> int: s = 0 for i in x: s += i return s Edit: typing.Iterable is an alias for collections.abc.Iterable, so you should use that instead, as suggested in the comments. | 3 | 8 |
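For completeness, the same function written against collections.abc.Iterable directly, as the edit above suggests; a small runnable sketch:

```python
from collections.abc import Iterable

def mysum(x: Iterable[int]) -> int:
    s = 0
    for i in x:
        s += i
    return s

# Any iterable of ints works: lists, sets, dict keys, ranges, generators...
print(mysum([1, 2, 3]))                # 6
print(mysum({4, 5}))                   # 9
print(mysum({1: "a", 2: "b"}))         # 3 (iterating a dict yields its keys)
print(mysum(range(10)))                # 45
print(mysum(i * i for i in range(4)))  # 14
```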
78,187,120 | 2024-3-19 | https://stackoverflow.com/questions/78187120/drop-all-the-columns-after-a-particular-column | Suppose, I am reading a csv with hundreds of columns. Now, I know that after a particular column say 'XYZ' all the columns are junk. I want to keep all the columns from the beginning till column 'XYZ' and drop all the columns after column 'XYZ'. In pandas, I may do something like: df.iloc[:, :df.columns.get_loc('XYZ') + 1] What could be an efficient way in polars? | The slicing with [:] solution works for eager evaluation (DataFrame), but as far as I know it doesn't really work for LazyFrame. If you want to be able to use lazy evaluation, you can just use DataFrame.select(): # prepare the data df = pl.LazyFrame({ 'ABC': [1,2,3], 'DEF': [4,5,6], 'XYZ': [7,8,9], 'garbage1': [10,11,12], 'garbage2': list('abc') }) df.sink_csv('test.csv') now we can scan it: df = pl.scan_csv('test.csv') df.select(df.columns[:df.columns.index('XYZ')+1]).collect() βββββββ¬ββββββ¬ββββββ β ABC β DEF β XYZ β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺββββββͺββββββ‘ β 1 β 4 β 7 β β 2 β 5 β 8 β β 3 β 6 β 9 β βββββββ΄ββββββ΄ββββββ I'm not sure, however, how efficient polars can filter out columns which don't have to be returned, cause they still have to be read from the file. | 2 | 1 |
78,187,376 | 2024-3-19 | https://stackoverflow.com/questions/78187376/how-to-make-a-regex-orderless-when-validating-a-list-of-texts | My input is this dataframe (but it could be a simple list) : import pandas as pd df = pd.DataFrame({'description': ['ij edf m-nop ij abc', 'abc ij mnop yz', 'yz yz mnop aa abc', 'i j y y abc xxx mnop y z', 'yz mnop ij kl abc uvwxyz', 'aaabc ijij uuu yz mnop']}) I also have a list of keywords (between 3 and 7 items) that I need to valid. We should only validate an exact combination of the whole keywords and ignore characters in between. The problem is that those keywords don't respect the order I put them in my list (here keywords). I searched in google and here too but couldn't find any post that talks about a similar topic. So I made the code below which is making a permuation of the keywords and put them in a regex string. import re import itertools keywords = ['abc', 'ij', 'mnop', 'yz'] regex = '' for perm in list(itertools.permutations(keywords)): perm = [fr'\b{key}\b' for key in perm] regex += f'(?:{".*".join(perm)})|' regex = regex.rstrip('|') Here is a snippet of my regex : # (?:\babc\b.*\bij\b.*\bmnop\b.*\byz\b)|(?:\babc\b.*\bij\b.*\byz\b.*\bmnop\b)|(?:\ # babc\b.*\bmnop\b.*\bij\b.*\byz\b)|(?:\babc\b.*\bmnop\b.*\byz\b.*\bij\b)|(?:\babc # \b.*\byz\b.*\bij\b.*\bmnop\b)|(?:\babc\b.*\byz\b.*\bmnop\b.*\bij\b)|(?:\bij\b.*\ # babc\b.*\bmnop\b.*\byz\b)|(?:\bij\b.*\babc\b.*\byz\b.*\bmnop\b)|(?:\bij\b.*\bmno # p\b.*\babc\b.*\byz\b)|(?:\bij\b.*\bmnop\b.*\byz\b.*\babc\b)|(?:\bij\b.*\byz\b.*\ # babc\b.*\bmnop\b)|(?:\bij\b.*\byz\b.*\bmnop\b.*\babc\b)|(?:\bmnop\b.*\babc\b.*\b # ij\b.*\byz\b)|(?:\bmnop\b.*\babc\b.*\byz\b.*\bij\b)|(?:\bmnop\b.*\bij\b.*\babc\b # .*\byz\b)|(?:\bmnop\b.*\bij\b.*\byz\b.*\babc\b)|(?:\bmnop\b.*\byz\b.*\babc\b.*\b # ij\b)|(?:\bmnop\b.*\byz\b.*\bij\b.*\babc\b)|(?:\byz\b.*\babc\b.*\bij\b.*\bmnop\b # )|(?:\byz\b.*\babc\b.*\bmnop\b.*\bij\b)|(?:\byz\b.*\bij\b.*\babc\b.*\bmnop\b)|(? # :\byz\b.*\bij\b.*\bmnop\b.*\babc\b)|(?:\byz\b.*\bmnop\b.*\babc\b.*\bij\b)|(?:\by # z\b.*\bmnop\b.*\bij\b.*\babc\b) While it works on the example I gave, it takes 5-15 minutes on my real dataset (50k rows and very long descriptions with breaklines) and I'm not sure if my approach handles correctly all the rows. And there is also a problem, sometimes I had to validate a list of 6 keywords, which gives 720 permuation ! Can you guys help me solve this ? Is regex the right way to approach my problem ? My expected ouptut is this : description valid 0 ij edf m-nop ij abc 1 abc ij mnop yz True 2 yz yz mnop aa abc 3 i j y y abc xxx mnop y z 4 yz mnop ij kl abc uvwxyz True 5 aaabc ijij uuu yz mnop | A regex can be useful, but generating all permutations is not appropriate. I would use a regex to extract words, then checking that the keywords are a subset of the extracted words with set.issubset: import re keywords = {'abc', 'ij', 'mnop', 'yz'} # this is a SET reg = re.compile(r'\b[a-z]+\b', flags=re.I) df['valid'] = [keywords.issubset(reg.findall(x)) for x in df['description']] NB. you might want to add a casefold step to ignore case. 
Output: description valid 0 ij edf m-nop ij abc False 1 abc ij mnop yz True 2 yz yz mnop aa abc False 3 i j y y abc xxx mnop y z False 4 yz mnop ij kl abc uvwxyz True 5 aaabc ijij uuu yz mnop False For fun, by tweaking the code you could even get the set of missing words instead of False: df['valid'] = [keywords.issubset(S:=set(reg.findall(x))) or keywords-S for x in df['description']] description valid 0 ij edf m-nop ij abc {mnop, yz} 1 abc ij mnop yz True 2 yz yz mnop aa abc {ij} 3 i j y y abc xxx mnop y z {yz, ij} 4 yz mnop ij kl abc uvwxyz True 5 aaabc ijij uuu yz mnop {abc, ij} # or df['missing'] = [keywords-set(reg.findall(x)) for x in df['description']] df['valid'] = df['missing'].eq(set()) description missing valid 0 ij edf m-nop ij abc {mnop, yz} False 1 abc ij mnop yz {} True 2 yz yz mnop aa abc {ij} False 3 i j y y abc xxx mnop y z {yz, ij} False 4 yz mnop ij kl abc uvwxyz {} True 5 aaabc ijij uuu yz mnop {abc, ij} False | 2 | 3 |
78,186,300 | 2024-3-19 | https://stackoverflow.com/questions/78186300/differrent-behavior-between-numpy-arrays-and-array-scalars | This is a follow-up on this question. When we use a numpy array with a specific type, it preserves its type following numeric operations. For example adding 1 to a uint32 array will wrap up the value to 0 if needed (when the array contained the max uint32 value) and keep the array of type uint32: import numpy a = numpy.array([4294967295], dtype='uint32') a += 1 # will wrap to 0 print(a) print(a.dtype) Output: uint32 [0] uint32 This behavior does not hold for an array scalar with the same type: import numpy a = numpy.uint32(4294967295) print(a.dtype) a += 1 # will NOT wrap to 0, and change the scalar type print(a) print(a.dtype) Output: uint32 4294967296 int64 But according to the array scalars documentation: The primary advantage of using array scalars is that they preserve the array type ... Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. (emphasys is mine) My question: Why do I observe the above different behavior between arrays and scalars despite the explicit documentation that states they should behave identically ? | As mentioned in the comments: yes, this documentation is imprecise at best. I think it is referring to the behavior between scalars of the same type: import numpy a = numpy.uint32(4294967295) print(a.dtype) # uint32 a += np.uint32(1) # WILL wrap to 0 with warning print(a) # 0 print(a.dtype) # uint32 The behavior of your example, however, will change due to NEP 50 in NumPy 2.0. So as frustrating as the old behavior is, there's not much to be done but wait, unless you want to file an issue about backporting a documentation change. As documented in the Migration Guide. The largest backwards compatibility change of this is that it means that the precision of scalars is now preserved consistently... np.float32(3) + 3. now returns a float32 when it previously returned a float64. I've confirmed that in your example, the type is preserved as expected. import numpy a = numpy.uint32(4294967295) print(a.dtype) # uint32 a += 1 # will wrap to 0 print(a) # 0 print(a.dtype) # uint32 numpy.__version__ # '2.1.0.dev0+git20240318.6059db1' The second NumPy 2.0 release candidate is out, in case you'd like to try it: https://mail.python.org/archives/list/[email protected]/thread/EGXPH26NYW3YSOFHKPIW2WUH5IK2DC6J/ | 4 | 4 |
78,187,026 | 2024-3-19 | https://stackoverflow.com/questions/78187026/how-to-reverse-the-value-of-column-in-a-dataframe | Given a dataframe with one column "Name" and two rows. Input: Name ABCD XYZ Output: Name DCBA ZYX I have searched online, and the solutions I found are about reversing the row/column order. Kindly suggest how I can reverse the values in a column of a DataFrame. | You have to use a loop. Assuming strings, you could use apply: df['Name'] = df['Name'].apply(lambda x: x[::-1]) or a list comprehension: df['Name'] = [x[::-1] for x in df['Name']] Output: Name 0 DCBA 1 ZYX If you don't have only strings, a safer approach would be to check for the type: df['Name'] = df['Name'].apply(lambda x: x[::-1] if isinstance(x, str) else x) # or df['Name'] = [x[::-1] if isinstance(x, str) else x for x in df['Name']] | 3 | 2 |
78,186,270 | 2024-3-19 | https://stackoverflow.com/questions/78186270/enums-in-pandas-dataframe-not-possible-to-do-groupby-on-a-enumn-column | I just learned about enums and thought that they would fit something I'm coding. But when I run this code, I get an error. Am I trying to do something I shouldn't be doing or is this a bug? When trying to groupby a column with enums, I get this error: TypeError: '<' not supported between instances of 'CarBrand' and 'CarBrand' The code: import pandas as pd from enum import Enum class CarBrand(Enum): VOLVO = 'Volvo' BMW = 'BMW' data = { 'brand': [CarBrand.VOLVO, CarBrand.VOLVO, CarBrand.BMW], 'price': [35000, 37000, 45000] } df = pd.DataFrame(data) sum_per_brand = df.groupby('brand').sum('price') print(sum_per_brand) This is the print I was expecting: brand price BMW 45000 VOLVO 72000 | pd.DataFrame.groupby sorts by default. It works if you use sort=False: sum_per_brand = df.groupby('brand', sort=False).sum('price') Alternatively, you could use a datatype which supports sorting (like CategoricalDtype). | 3 | 4 |
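Besides sort=False, the answer mentions switching to a datatype that supports sorting. One possible way to do that (my own sketch, not taken from the original answer) is to give the enum an ordering so the default sorting groupby can compare its members:

```python
from enum import Enum

import pandas as pd

class CarBrand(Enum):
    VOLVO = 'Volvo'
    BMW = 'BMW'

    def __lt__(self, other):
        # Order members by their string value so that sorting the group keys
        # (what groupby does by default) has a '<' to work with.
        if isinstance(other, CarBrand):
            return self.value < other.value
        return NotImplemented

data = {
    'brand': [CarBrand.VOLVO, CarBrand.VOLVO, CarBrand.BMW],
    'price': [35000, 37000, 45000],
}
df = pd.DataFrame(data)
print(df.groupby('brand')['price'].sum())
```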
78,182,779 | 2024-3-18 | https://stackoverflow.com/questions/78182779/fastest-way-to-count-the-number-of-occurrences-of-a-list-of-items-from-a-numpy-n | I have a histogram of an image, basically a histogram of an image is a graph that show how many times a pixel that is converted to a 0-255 value occurs in an image. With Y axis number of occurance and X axis the pixel value. And what i need is the total number of pixel value from 75-125 image= cv2.imread('grade_0.jpg') listOfNumbers = image.ravel() #generates the long list of 0-255 values from the image) type numpy.ndarray Right now my code does this by converting the numpy.ndarray to a list and counting each values one by one start = time.time() numberlist = list(list0fNumbers) sum = 0 for x in range(75,125): sum = sum + numberlist.count(x) end = time.time() print('Sum: ' + str(sum)) print('Execution time_ms: ' + str((end-start) * 10**3)) Result : Sum: 57111 Execution time_ms: 13492.571830749512 I would be doing something like this for thousands of images and with just this image alone it took 13 seconds. It is just too ineffecient. Any recommendation on how to speed it up to about less than 10ms? I wont be just getting the sum of 75-125, but other ranges as well, e.g. 0-80,75-125,120-220,210-255. Assuming those take 13seconds too to process a single 256x256 pixel image takes about 60 seconds which i would say is a bit long even for a slow computer. Here is a sample image: | You can use simple boolean operators: import cv2 image = cv2.imread('grade_0.jpg') out = ((image>=75)&(image<125)).sum() # 57032 Or, as suggested by @jared: out = np.count_nonzero((image>=75)&(image<125)) Timing of the count: # sum 170 Β΅s Β± 2.81 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) # count_non_zero 47.6 Β΅s Β± 2.94 Β΅s per loop (mean Β± std. dev. of 7 runs, 10,000 loops each) Edit: I realize you want to process several bins, this could be done using: bins = [(0,80),(75,125),(120,220),(210,255)] out = {f'{a}-{b}': np.count_nonzero((image>=a)&(image<b)) for a, b in bins} # {'0-80': 26274, '75-125': 57032, '120-220': 86283, '210-255': 40967} But this will read again the image's data for each bin. In this case, bincount, as suggested by @Andrej, could indeed preferred since it only counts the pixels once: bins = [(0,80),(75,125),(120,220),(210,255)] counts = np.bincount(image.ravel()) out = {'-'.join(map(str, t)): counts[slice(*t)].sum() for t in bins} # {'0-80': 26274, '75-125': 57032, '120-220': 86283, '210-255': 40967} The timings will depends on the size of the image and the number of bins. For small images, counting again might be more efficient, while for large ones bincount could be better (but, surprisingly, not always). 256 x 256 # count_nonzero in loop 198 Β΅s Β± 8.75 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # bincount 440 Β΅s Β± 6.82 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) 512 x 512: # count_nonzero in loop 918 Β΅s Β± 31 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # bincount 1.76 ms Β± 26.1 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) 1024 x 1024: # count_nonzero in loop 11 ms Β± 210 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) # bincount 8.15 ms Β± 437 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) 2048 x 2048: # count_nonzero in loop 47.1 ms Β± 3.01 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) # bincount 48.8 ms Β± 3.02 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) | 2 | 2 |
78,179,350 | 2024-3-18 | https://stackoverflow.com/questions/78179350/how-do-i-generate-ngroups-from-a-comparison-function | Suppose I have a function that compares rows in a dataframe: def comp(lhs: pandas.Series, rhs: pandas.Series) -> bool: if lhs.id == rhs.id: return True if abs(lhs.val1 - rhs.val1) < 1e-8: if abs(lhs.val2 - rhs.val2) < 1e-8: return True return False Now I have a dataframe containing id, val1 and val2 columns and I want to generate group ids such that any two rows for which comp evaluates to true have the group number. How do I do this with pandas? I've been trying to think of a way to get groupby to achieve this but can't think of a way. MRE: example_input = pandas.DataFrame({ 'id' : [0, 1, 2, 2, 3], 'value1' : [1.1, 1.2, 1.3, 1.4, 1.1], 'value2' : [2.1, 2.2, 2.3, 2.4, 2.1] }) example_output = example_input.copy() example_output.index = [0, 1, 2, 2, 0] example_output.index.name = 'groups' | You want to cluster rows that either belong to the same group, or that are close in distance. For that compute the distance with scipy.spatial.distance.pdist to identify the close points, and create a graph with networkx to identify the connected components: import networkx as nx import pandas as pd from scipy.spatial.distance import pdist from itertools import combinations # example input df = pandas.DataFrame({ 'id' : [0, 1, 2, 2, 3], 'value1' : [1.1, 1.2, 1.3, 1.4, 1.1], 'value2' : [2.1, 2.2, 2.3, 2.4, 2.1] }) thresh = 1e-8 cols = ['value1', 'value2'] # create graph based on already connected ids G = nx.compose_all(map(nx.path_graph, df.index.groupby(df['id']).values())) # add pairs of values with distance below threshold as edges G.add_edges_from(pd.Series(combinations(df.index, 2)) [pdist(df[cols])<thresh] ) # form groups based on the connected components groups = {n: i for i, c in enumerate(nx.connected_components(G)) for n in c} # {0: 0, 4: 0, 1: 1, 2: 2, 3: 2} # update index based on above dictionary df.index = df.index.map(groups) Output: id value1 value2 0 0 1.1 2.1 1 1 1.2 2.2 2 2 1.3 2.3 2 2 1.4 2.4 0 3 1.1 2.1 Graph (only based on IDs ; numbers are the original indices): Graph (after taking the distance into account): | 2 | 4 |
78,182,894 | 2024-3-18 | https://stackoverflow.com/questions/78182894/get-the-name-of-the-group-inside-pandas-groupby-transform | Here is what I am trying to do. I have the following DataFrame in pandas: import numpy as np import pandas as pd n_cols = 3 n_samples = 4 df = pd.DataFrame(np.arange(n_samples * n_cols).reshape(n_samples, n_cols), columns=list('ABC')) print(df) output: A B C 0 0 1 2 1 3 4 5 2 6 7 8 3 9 10 11 I have a category to which each sample (row) belongs: cat = pd.Series([1,1,2,2]) And I have a reference row related to each category: df_ref = pd.DataFrame(np.zeros((2, n_cols)), index=[1,2], columns=list('ABC')) df_ref.loc[1] = 10 print(df_ref) output: A B C 1 10.0 10.0 10.0 2 0.0 0.0 0.0 How do I do the following in a more elegant way (e.g., using groupby and transform): result = df.copy() for i in range(n_cols): result.iloc[i] = df.iloc[i] - df_ref.loc[cat[i]] print(results) output: A B C 0 -10 -9 -8 1 -7 -6 -5 2 6 7 8 3 9 10 11 I thought something like this should work: df.groupby(cat).transform(lambda x: x - df_ref.loc[x.GROUP_NAME]) where x.GROUP_NAME is accessing the name of the group on which transform is operating. In the pandas documentation about transform it is written: "Each group is endowed the attribute βnameβ in case you need to know which group you are working on." I tried to access x.name, but that gives the name of a column, not the name of the group. So I don't understand what this documentation is referring to. | No need for grouby, just reindex df_ref and convert to array: df -= df_ref.reindex(cat).values Or, for a copy: out = df.sub(df_ref.reindex(cat).values) Note that your approach would work with groupby.apply: out = df.groupby(cat, group_keys=False).apply(lambda x: x - df_ref.loc[x.name]) Output: A B C 0 -10.0 -9.0 -8.0 1 -7.0 -6.0 -5.0 2 6.0 7.0 8.0 3 9.0 10.0 11.0 | 2 | 1 |
78,166,162 | 2024-3-15 | https://stackoverflow.com/questions/78166162/how-to-keep-the-date-in-airflow-execution-date-and-convert-time-to-0000 | I'm using time_marker = {{execution_date.in_timezone('Europe/Amsterdam')}} in my dag.py program. I'm trying to keep the date part of the execution date and set the time to "T00:00:00", so that whenever during the execution date it runs, the time_marker will always be, for example, 20240115T00:00:00. How should I do this? I tried to use pendulum.parse but couldn't work out how to do this. Thanks. | Add a custom macro to the DAG definition. Function: def format_execution_date(execution_date): amsterdam_time = execution_date.in_timezone('Europe/Amsterdam') midnight_amsterdam_time = amsterdam_time.start_of('day') return midnight_amsterdam_time.format('YYYYMMDDT00:00:00') Use the function: with DAG( ... user_defined_macros={'format_execution_date': format_execution_date}, ) as dag: ... In your task, you can then use it like this: params={ 'time_marker': '{{ format_execution_date(execution_date) }}' } | 4 | 1 |
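A rough end-to-end sketch of how the macro could be wired into a DAG file; the DAG id, schedule and operator are illustrative and not taken from the original question:

```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

def format_execution_date(execution_date):
    amsterdam_time = execution_date.in_timezone('Europe/Amsterdam')
    midnight_amsterdam_time = amsterdam_time.start_of('day')
    return midnight_amsterdam_time.format('YYYYMMDDT00:00:00')

with DAG(
    dag_id='time_marker_example',   # hypothetical DAG id
    start_date=pendulum.datetime(2024, 1, 1, tz='UTC'),
    schedule='@daily',
    catchup=False,
    user_defined_macros={'format_execution_date': format_execution_date},
) as dag:
    # Rendered at runtime: the date part follows the execution date,
    # while the time part is always 00:00:00.
    print_marker = BashOperator(
        task_id='print_marker',
        bash_command='echo "time_marker={{ format_execution_date(execution_date) }}"',
    )
```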
78,180,325 | 2024-3-18 | https://stackoverflow.com/questions/78180325/can-you-use-a-functions-return-type-as-a-type-elsewhere | I have a callback that takes the result of another function as input. Is there a way to directly reference that function's return type? Currently I have the return type defined as a type alias that I can use in both places but that doesn't seem ideal. Does something like C++'s std::result_of exist in python? Current Code from typing import Tuple func_return = Tuple[int, int] def func() -> func_return: return 1, 1 def call_back(arg: func_return) -> int: a, b = arg return a + b Ideal Code from typing import Tuple def func() -> Tuple[int, int]: return 1, 1 def call_back(arg: return_type(func)) -> int: a, b = arg return a + b | No, because call_back only knows/cares about the value, not which function produces the value. call_back takes a value, and it's the caller's responsibility to provide a value of the correct type, whether or not they use a function to produce that value. Focus on the type, not the functions that produce or accept values of that type. IntPair = Tuple[int, int] def func() -> IntPair: return 1, 1 def call_back(arg: IntPair) -> int: a, b = arg return a + b (Same as your current code, but with a name that focuses on what the type is, not where it might come from: # All legal, all equivalent call_back(func()) call_back((1, 1)) call_back((1,) + (1,)) ) | 3 | 5 |
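As an aside, and only for runtime introspection (it does not give you something you can reuse inside a static annotation the way std::result_of is used), the declared return annotation of a function can be read back with typing.get_type_hints:

```python
from typing import Tuple, get_type_hints

def func() -> Tuple[int, int]:
    return 1, 1

# Reads the declared return annotation back at runtime.
return_type = get_type_hints(func)["return"]
print(return_type)  # typing.Tuple[int, int]
```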
78,178,154 | 2024-3-18 | https://stackoverflow.com/questions/78178154/add-subtotal-column-by-condition | could you please advice how to add Quater column, contains sum of months values part 2024-03-01 00:00:00 2024-04-01 00:00:00 2024-05-01 00:00:00 2024-06-01 00:00:00 2024-07-01 00:00:00 2024-08-01 00:00:00 2024-09-01 00:00:00 part1 6 8 2 3 0 5 5 part2 7 1 3 8 9 4 10 part3 10 7 4 5 6 10 0 part4 6 9 3 0 10 9 10 part5 2 1 10 8 7 3 3 part6 1 0 4 1 1 7 8 I'm tring to get output like this: part 2024-03-01 00:00:00 Q1'24 2024-04-01 00:00:00 2024-05-01 00:00:00 2024-06-01 00:00:00 Q2'24 2024-07-01 00:00:00 2024-08-01 00:00:00 2024-09-01 00:00:00 Q3'24 part1 6 6 8 2 3 13 0 5 5 10 part2 7 7 1 3 8 12 9 4 10 23 part3 10 10 7 4 5 16 6 10 0 16 part4 6 6 9 3 0 12 10 9 10 29 part5 2 2 1 10 8 19 7 3 3 13 part6 1 1 0 4 1 5 1 7 8 16 raw data | You can convert columns to quarter periods by DatetimeIndex.to_period for subtotals, join to original by concat and for correct order add DataFrame.sort_index: df.columns = pd.to_datetime(df.columns) out = (pd.concat([df, df.groupby(df.columns.to_period('Q'), axis=1).sum()], axis=1) .sort_index(axis=1, key=lambda x: pd.PeriodIndex(x, 'Q'))) out.columns = [x.strftime("Q%q'%y") if isinstance(x, pd.Period) else x for x in out.columns] print (out) 2024-03-01 00:00:00 Q1'24 2024-04-01 00:00:00 2024-05-01 00:00:00 \ part part1 6 6 8 2 part2 7 7 1 3 part3 10 10 7 4 part4 6 6 9 3 part5 2 2 1 10 part6 1 1 0 4 2024-06-01 00:00:00 Q2'24 2024-07-01 00:00:00 2024-08-01 00:00:00 \ part part1 3 13 0 5 part2 8 12 9 4 part3 5 16 6 10 part4 0 12 10 9 part5 8 19 7 3 part6 1 5 1 7 2024-09-01 00:00:00 Q3'24 part part1 5 10 part2 10 23 part3 0 16 part4 10 29 part5 3 13 part6 8 16 Good point - in last version of pandas (2.1.4+) need transpose, thank you @nick: df.columns = pd.to_datetime(df.columns) df = df.T out = (pd.concat([df, df.groupby(df.index.to_period('Q')).sum()]).T .sort_index(axis=1, key=lambda x: pd.PeriodIndex(x, 'Q')) ) out.columns = [x.strftime("Q%q'%y") if isinstance(x, pd.Period) else x for x in out.columns] print (out) 2024-03-01 00:00:00 Q1'24 2024-04-01 00:00:00 2024-05-01 00:00:00 \ part part1 6 6 8 2 part2 7 7 1 3 part3 10 10 7 4 part4 6 6 9 3 part5 2 2 1 10 part6 1 1 0 4 2024-06-01 00:00:00 Q2'24 2024-07-01 00:00:00 2024-08-01 00:00:00 \ part part1 3 13 0 5 part2 8 12 9 4 part3 5 16 6 10 part4 0 12 10 9 part5 8 19 7 3 part6 1 5 1 7 2024-09-01 00:00:00 Q3'24 part part1 5 10 part2 10 23 part3 0 16 part4 10 29 part5 3 13 part6 8 16 | 2 | 3 |
78,177,751 | 2024-3-18 | https://stackoverflow.com/questions/78177751/is-there-any-way-to-have-re-sub-report-out-on-every-replacement-it-makes | TL;DR: How to get re.sub to print out what substitutions it makes, including when using groups? Kind of like having a verbose option, is it possible to have re.sub print out a message every time it makes a replacement? This would be very helpful for testing how multiple lines of re.sub is interacting with large texts. I've managed to come up with this workaround for simple replacements utilizing the fact that the repl argument can be a function: import re def replacer(text, verbose=False): def repl(matchobj, replacement): if verbose: print(f"Replacing {matchobj.group()} with {replacement}...") return replacement text = re.sub(r"[A-Z]+", lambda m: repl(m, "CAPS"), text) text = re.sub(r"\d+", lambda m: repl(m, "NUMBER"), text) return text replacer("this is a 123 TEST 456", True) # Log: # Replacing TEST with CAPS... # Replacing 123 with NUMBER... # Replacing 456 with NUMBER... However, this doesn't work for groups--it seems re.sub automatically escapes the return value of repl: def replacer2(text, verbose=False): def repl(matchobj, replacement): if verbose: print(f"Replacing {matchobj.group()} with {replacement}...") return replacement text = re.sub(r"([A-Z]+)(\d+)", lambda m: repl(m, r"\2\1"), text) return text replacer2("ABC123", verbose=True) # returns r"\2\1" # Log: # Replacing ABC123 with \2\1... Of course, a more sophisticated repl function can be written that actually checks for groups in replacement, but at that point that solution seems too complicated for the goal of just getting re.sub to report out on substitutions. Another potential solution would be to just use re.search, report out on that, then use re.sub to make the replacement, potentially using the Pattern.sub variant in order to specify pos and endpos to save the sub function from having to search the whole string again. Surely there's a better way than either of these options? | Use matchobj.expand(replacement) which will process the replacement string and make the substitutions: import re def replacer2(text, verbose=False): def repl(matchobj, replacement): result = matchobj.expand(replacement) if verbose: print(f"Replacing {matchobj.group()} with {result}...") return result text = re.sub(r"([A-Z]+)(\d+)", lambda m: repl(m, r"\2\1"), text) return text print(replacer2("ABC123", verbose=True) Output: Replacing ABC123 with 123ABC... 123ABC A generic example that extends re.sub with a verbose option and allows group patterns to be used by replacement functions: import re def sub2(pattern, repl, string, count=0, flags=0, verbose=False): def helper(match, repl): result = match.expand(repl(match) if callable(repl) else repl) if verbose: print(f'offset {match.start()}: {match.group()!r} -> {result!r}') return result return re.sub(pattern, lambda m: helper(m, repl), string, count, flags) # replace three digits with their reverse print(sub2(r'(\d)(\d)(\d)', r'\3\2\1', 'abc123def45ghi789', verbose=True)) # replace three digits with their reverse, and two digits wrap with parentheses print(sub2(r'(\d)(\d)(\d)?', lambda m: r'(\1\2)' if m.group(3) is None else r'\3\2\1', 'abc123def45ghi789', verbose=True)) Output: offset 3: '123' -> '321' offset 14: '789' -> '987' abc321def45ghi987 offset 3: '123' -> '321' offset 9: '45' -> '(45)' offset 14: '789' -> '987' abc321def(45)ghi987 | 2 | 3 |
78,175,724 | 2024-3-17 | https://stackoverflow.com/questions/78175724/chromadb-how-to-check-if-collection-exists | I want to create a script that recreates a chromadb collection - delete previous version and creates a new from scratch. client.delete_collection(name=COLLECTION_NAME) collection = client.create_collection( name=COLLECTION_NAME, embedding_function=embedding_func, metadata={"hnsw:space": "cosine"}, ) However, if the collection does not exist, I receive error: File "*/lib/python3.11/site-packages/chromadb/api/segment.py", line 347, in delete_collection raise ValueError(f"Collection {name} does not exist.") ValueError: Collection operations does not exist. Is there any command to check if a collection exists? I haven't found any in documentation. | You can use the following function collection = client.get_or_create_collection(name="test") It either gets the collection or creates it | 3 | 3 |
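A small sketch of the "recreate from scratch" pattern from the question, using only calls that appear in the question and the answer above; the client construction and collection name are placeholders, and the embedding_function argument from the question is omitted:

```python
import chromadb

COLLECTION_NAME = "operations"   # placeholder name

client = chromadb.Client()       # placeholder client setup

# Deleting a missing collection raises ValueError (see the traceback in the
# question), so swallow that case instead of checking for existence first.
try:
    client.delete_collection(name=COLLECTION_NAME)
except ValueError:
    pass  # collection did not exist yet

collection = client.create_collection(
    name=COLLECTION_NAME,
    metadata={"hnsw:space": "cosine"},
)
```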
78,176,587 | 2024-3-17 | https://stackoverflow.com/questions/78176587/pythons-file-truncate-unexpectedly-does-not-truncate | I have this very simple Python program: def print_file(filename): with open(filename,'r') as read_file: print(read_file.read()) def create_random_file(filename,count): with open(filename,'w+', encoding='utf-8') as writefile: for row_num in range(count): writefile.write(f'{row_num}: fo bar baz\n') def truncate_file_after_first_line(file,read_a_line): file.seek(0,0) # go to start of file print(f"After seeking to 0, at position {file.tell()}"); if (read_a_line): header = file.readline() print(f"After reading a line, at position {file.tell()}"); print(f"Found header '{header.rstrip()}'\n") file.write('TRUNCATE AFTER THIS\n') print(f"After writing marker, at position {file.tell()}"); file.truncate() def mangle_file(filename,read_a_line): with open(filename,'r+') as file: truncate_file_after_first_line(file,read_a_line) # ---- filename = 'testpy.txt' read_a_line = True create_random_file(filename,5) print("Original file:") print_file(filename) mangle_file(filename,read_a_line) print("Truncated file:") print_file(filename) So, I: I create a file with 5 lines (and print it to stdout too). Then, in mangle_file(): I open the file with the r+ option, i.e. Open for reading and writing. The stream is positioned at the beginning of the file. Depending on bool read_a_line, I then either a) Seek to position 0, read a line, write the marker TRUNCATE AFTER THIS\n, then truncate the file. b) Seek to position 0, write the marker TRUNCATE AFTER THIS\n, then truncate the file. Finally I close the file And then re-read it to print it. Sounds straightforward but for a) where the first line in the file (i.e. 0: fo bar baz) is read before truncation, the resulting file is: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz TRUNCATE AFTER THIS i.e. the truncate() did nothing, the marker has been appened to the untruncated file. Whereas I would expect truncation after the first line read: 0: fo bar baz TRUNCATE AFTER THIS For b), as expected, the resulting file is TRUNCATE AFTER THIS What do I get wrong about truncate()? Update: Added some tells With read_a_line = True Original file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz After seeking to 0, at position 0 After reading a line, at position 14 Found header '0: fo bar baz' After writing marker, at position 90 Truncated file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz TRUNCATE AFTER THIS With read_a_line = False Original file: 0: fo bar baz 1: fo bar baz 2: fo bar baz 3: fo bar baz 4: fo bar baz After seeking to 0, at position 0 After writing marker, at position 20 Truncated file: TRUNCATE AFTER THIS | From the fopen documentation: Reads and writes may be intermixed on read/write streams in any order. Note that ANSI C requires that a file positioning function intervene between output and input, unless an input operation encounters end-of-file. (If this condition is not met, then a read is allowed to return the result of writes other than the most recent.) Therefore it is good practice (and indeed sometimes necessary under Linux) to put an fseek(3) or fsetpos(3) operation between write and read operations on such a stream. This operation may be an apparent no-op (as in fseek(..., 0L, SEEK_CUR) called for its synchronizing side effect). 
If you change file.write('TRUNCATE AFTER THIS\n') to file.seek(file.tell()) file.write('TRUNCATE AFTER THIS\n') the function should behave as you intended (it does on my MacOS). Otherwise, "r+" may behave as if the mode was like "ra" (even though that is not a valid mode) -- that is, only appending later in the file or at the very end (depending on how much text was already buffered). | 3 | 3 |
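Applying the suggested seek to the original helper gives roughly this sketch; the extra positioning call sits between the read and the write, as the quoted fopen documentation requires:

```python
def truncate_file_after_first_line(file, read_a_line):
    file.seek(0, 0)  # go to start of file
    if read_a_line:
        header = file.readline()
        print(f"Found header '{header.rstrip()}'")
        # Apparent no-op, but it synchronizes the stream between the
        # preceding read and the following write.
        file.seek(file.tell())
    file.write('TRUNCATE AFTER THIS\n')
    file.truncate()
```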
78,176,717 | 2024-3-17 | https://stackoverflow.com/questions/78176717/how-to-bind-functions-returning-references-with-pybind11 | When binding C++ with pybind11, I ran into an issue regarding a couple of class members that return (const or non-const) references; considering the following snippet: struct Data { double value1; double value2; }; class Element { public: Element() = default; Element(Data values) : data(values) { } Element(const Element& other) : data(other.data) { printf("copying from %p\n", &other); } Data data; }; class Container { public: Container() : data(10, Element(Data{0.1, 0.2})) {}; Element& operator[](size_t idx) { return data[idx]; } const Element& operator[](size_t idx) const { return data[idx]; } protected: std::vector< Element > data; }; which is bound to a Python module with: py::class_< Data > (module, "Data") .def(py::init< double, double >(), "Constructs a new instance", "v1"_a, "v2"_a) .def_readwrite("value1", &Data::value1) .def_readwrite("value2", &Data::value2); py::class_< Element > (module, "Element") .def(py::init< Data >(), "Constructs a new instance", "values"_a) .def_readwrite("data", &Element::data) .def("__repr__", [](const Element& e){ return std::to_string(e.data.value1); }); py::class_< Container > (module, "Container") .def(py::init< >(), "Constructs a new instance") .def("__getitem__", [](Container& c, size_t idx) { return c[idx]; }, "idx"_a) .def("__setitem__", [](Container& c, size_t idx, Element e) { c[idx] = e; }, "idx"_a, "val"_a); I am having trouble getting the [] operator to work on Container class print("-------------") foo = module.Data(0.9, 0.8) print(foo.value2) foo.value2 = 0.7 # works print(foo.value2) print("-------------") e = module.Element(motion.Data(0.3, 0.2)) print(e.data.value1) e.data.value2 = 0.6 # works print(e.data.value2) e.data = foo # works print(e.data.value2) print("-------------") c = motion.Container() print(c[0].data.value1) c[0] = e # works print(c[0].data.value1) c[0].data = foo # does not work (!) print(c[0].data.value1) c[0].data.value1 = 0.0 # does not work (!) print(c[0].data.value1) While the __getitem__ ([]) function does seem to be working as intended, it seems to fail when accessing members on the returned object; instead, a temporary copy is created from the returned reference, and any changes to that instance are not applied. I've tried 1) declaring a std::shared_ptr<Element> holder type when binding the Element class; 2) defining specific return value policy py::return_value_policy::reference and py::return_value_policy::reference_internal on __getitem__; and 3) defining specific call policies py::keep_alive<0,1>() and py::keep_alive<1,0>(); but none of these solutions worked. Any hints on what how solve this issue? | C++ deduces the return type of your lambda to be Element not Element& thus making a copy so explicitly define the return type, and make sure you also take items by reference to avoid copying. py::class_< Container > (module, "Container") .def(py::init< >(), "Constructs a new instance") .def("__getitem__", [](Container& c, size_t idx) ->Element& { return c[idx]; }, "idx"_a, py::return_value_policy::reference_internal) .def("__setitem__", [](Container& c, size_t idx, const Element& e) { c[idx] = e; }, "idx"_a, "val"_a); you can define the return type to be ->auto& if you are using C++14 or higher. | 3 | 3 |
78,176,126 | 2024-3-17 | https://stackoverflow.com/questions/78176126/why-is-fast-lookup-possible-for-dict-items | Suppose we have a dictionary d defined as follows d = {i:i for i in range(1,1000001)} And then we store d.items() in x. So, x is a collection of tuples that contain key-value pairs for each element of d. I made a list with the same tuples, defined as follows : l = [(i, i) for i in range(1,1000001)] Now, I compared the time taken to execute (1000001, 1000001) in x and (1000001, 1000001) in l. The time difference was very significant. I assumed that this was because x had a set-like behavior and implemented hash tables on tuples, as tuples are hashable as well. However, tuples that contain lists are non hashable, so I tried the same thing with this dictionary : d = {i:[i,i+1] for i in range(1,1000001)} Unlike what I expected, it still enabled fast lookup. How does this work? PS : Here's my code for time comparison import time def for_dict_items(n): #d = {i:i for i in range(1, n+1)} d = {i:[i, i+1] for i in range(1, n+1)} di = d.items() st = time.time() x = (n, [n, n+1]) in di et = time.time() return (et - st) def for_tuples_list(n): #l = [(i, i) for i in range(1, n+1)] l = [(i,[i, i+1]) for i in range(1, n+1)] st = time.time() x = (n, [n, n+1]) in l et = time.time() return (et - st) k = 1000000 t1 = for_dict_items(k) t2 = for_tuples_list(k) print(t1, t2, t2/t1, sep = "\n") | Since dict is one of the most important core objects, it's implemented in C, as well as dict_values, dict_keys and dict_items. Here's the CPython implementation of dictitems_contains: static int dictitems_contains(PyObject *self, PyObject *obj) { _PyDictViewObject *dv = (_PyDictViewObject *)self; int result; PyObject *key, *value, *found; if (dv->dv_dict == NULL) return 0; if (!PyTuple_Check(obj) || PyTuple_GET_SIZE(obj) != 2) return 0; key = PyTuple_GET_ITEM(obj, 0); value = PyTuple_GET_ITEM(obj, 1); result = PyDict_GetItemRef((PyObject *)dv->dv_dict, key, &found); if (result == 1) { result = PyObject_RichCompareBool(found, value, Py_EQ); Py_DECREF(found); } return result; } If you don't speak C, here's what it does: Check that view is correct - namely that it has a reference to the underlying dictionary (and return False if not, for some reason) Check the input: if it's not a 2-tuple, return False. Look up a key in the underlying dictionary. If found, check if value matches. If it does, return True. Otherwise, return False. The core of this is the fact that dict.items returns not a list, but a special dict_items view. The term view here means that it does not represent a copy, but only a proxy to retrieve underlying items in some specific way. The same applies to dict_keys and dict_values, you can read more in the excellent documentation page on this topic. Note that this is specific to CPython implementation and may be wrong for other implementations. | 3 | 3 |
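The prose summary of dictitems_contains can be mirrored in Python to make the fast path explicit; this is an illustrative re-expression only (the real implementation is the C code quoted above, and error handling is simplified here):

```python
def dict_items_contains(d, obj):
    # Only a 2-tuple can possibly match an items-view entry.
    if not isinstance(obj, tuple) or len(obj) != 2:
        return False
    key, value = obj
    try:
        found = d[key]          # O(1) hash-based lookup on the underlying dict
    except KeyError:
        return False
    return found == value       # compare the value only for the matching key

d = {i: [i, i + 1] for i in range(1, 1_000_001)}
print(dict_items_contains(d, (1_000_000, [1_000_000, 1_000_001])))  # True
print((1_000_000, [1_000_000, 1_000_001]) in d.items())             # True
```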
78,176,271 | 2024-3-17 | https://stackoverflow.com/questions/78176271/how-does-pandas-concat-work-when-the-input-is-a-dictionary | I am struggling to understand how pd.concat works when the input is a dictionary. Let's say we have the following pandas dataframe - # Import pandas library import pandas as pd # initialize list of lists data = [['tom', 10], ['nick', 15], ['juli', 14]] # Create the pandas DataFrame df = pd.DataFrame(data, columns=['Name', 'Age']) Then, we do the following concatenation operation - z = pd.concat({"z":df}, axis = 1) print(z) The output comes out to be - z Name Age 0 tom 10 1 nick 15 2 juli 14 It seems like the key z was stacked on top of the dataframe df. But this doesn't make sense as the axis specified was 1 and therefore, the stacking (if that's what occurred) should've been across columns. | It actually makes sense, since you concatenate as columns (axis=1) you need to differentiate the concatenated columns. Here is a more meaningful example: out = pd.concat({'left': df.add_prefix('left_'), 'middle': df.add_prefix('middle_'), 'right': df.add_prefix('right_')}, axis=1) left middle right left_Name left_Age middle_Name middle_Age right_Name right_Age 0 tom 10 tom 10 tom 10 1 nick 15 nick 15 nick 15 2 juli 14 juli 14 juli 14 This is equivalent to passing the new names to keys: out = pd.concat([df.add_prefix('left_'), df.add_prefix('middle_'), df.add_prefix('right_')], keys=['left', 'middle', 'right'], axis=1) If you were concatenating on axis=0 (rows), then concat would prefix an index level: out = pd.concat({'top': df, 'middle': df, 'bottom': df}, axis=0) Name Age top 0 tom 10 1 nick 15 2 juli 14 middle 0 tom 10 1 nick 15 2 juli 14 bottom 0 tom 10 1 nick 15 2 juli 14 | 3 | 5 |
78,174,010 | 2024-3-17 | https://stackoverflow.com/questions/78174010/what-is-the-most-efficient-method-to-get-the-last-modification-time-of-every-fil | I want to programmatically list the name and last modification time of every file in a certain revision. Running git log for every file, as suggested here is very slow. Is there a faster way to accomplish this? Running the script below on a non-trivial repo (SDL) takes 59s on my machine. #!/usr/bin/env python import datetime import subprocess import time commit = "HEAD" start = time.time() file_names = subprocess.check_output(["git", "ls-tree", "--name-only", "-r", commit], text=True).strip().split("\n") print(f"[{time.time() - start:.4f}] git ls-tree finished") file_times = list(datetime.datetime.fromisoformat(subprocess.check_output(["git", "log", "-1", "--pretty=format:%cI", commit, "--", name], text=True).strip()) for name in file_names) print(f"[{time.time() - start:.4f}] git info finished") | The basic idea is to postprocess git log --name-status with whatever per-commit info you want and look for the first occurrence of names you're interested in. The all-of-them version: git log --name-status --pretty=%ci | awk -F$'\t' ' NF==1 { stamp=$0; next } !seen[$2]++ { print stamp,$0 } ' | sort -t$'\t' -k2,2 and as always season to taste. Are you running on spinning rust? I do that on the SDL default checkout with a cheap ssd it takes 0.548s, so more than a hundred times faster. But then, it's doing 1500+ times fewer walks through history so there's that. | 2 | 6 |
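For anyone who prefers to stay in Python, the same single-pass idea could look roughly like this (a sketch: it reads git log --name-status once and keeps the first, i.e. newest, commit timestamp seen for each path; rename records are handled only loosely):

```python
import subprocess

def last_modification_times(commit: str = "HEAD") -> dict[str, str]:
    """Map each path to the timestamp of the newest commit that touched it."""
    out = subprocess.check_output(
        ["git", "log", "--name-status", "--pretty=format:%ci", commit],
        text=True,
    )
    times: dict[str, str] = {}
    stamp = None
    for line in out.splitlines():
        if not line:
            continue
        if "\t" not in line:
            stamp = line                 # timestamp line produced by --pretty
            continue
        fields = line.split("\t")
        path = fields[-1]                # for R/C records the new name is last
        times.setdefault(path, stamp)    # keep only the newest occurrence
    return times

print(len(last_modification_times()))
```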
78,172,031 | 2024-3-16 | https://stackoverflow.com/questions/78172031/how-to-obtain-an-exception-with-a-traceback-attribute-that-contains-the-st | It seems that in Python (3.10) an exception that is raised inside a try contains a traceback that does not extend to the calling location of the try. This is somewhat surprising to me, and more importantly, not what I want. Here's a short program that illustrates the problem: # short_tb.py import traceback def caller(): somefunction() def somefunction(): try: raise ValueError("This is a value error") except ValueError as e: # in my actual code, e is passed to a function, and __traceback__ is pulled from it for logging at this other # location; the code below simply demonstrates the lack of frames in the traceback object print("".join(traceback.format_tb(e.__traceback__))) print("This will produce a traceback with only one frame, the one in somefunction()") caller() def caller2(): somefunction2() def somefunction2(): raise ValueError("This is a value error") print("This will produce a traceback with all relevant frames (the default behavior of the python interpreter)") caller2() output: This will produce a traceback with only one frame, the one in somefunction() File ".../short_tb.py", line 10, in somefunction raise ValueError("This is a value error") This will produce a traceback with all relevant frames (the default behavior of the python interpreter) Traceback (most recent call last): File ".../short_tb.py", line 30, in <module> caller2() File ".../short_tb.py", line 22, in caller2 somefunction2() File ".../short_tb.py", line 26, in somefunction2 raise ValueError("This is a value error") ValueError: This is a value error what I want is for __traceback__ to contain all the information in the second example. I'm happy to overwrite the variable and I can do so at the exception-handling location... but how do I get an object to use for that purpose? in this question there are many answers about tracebacks, but none of them seem to be about traceback objects: Catch and print full Python exception traceback without halting/exiting the program | A more correct handling is already suggested by @Mark Tolonen, but if it's really necessary for __traceback__ to contain a full stack: Use tb_frame.f_back to access the previous frames. Reconstruct traceback objects using types.TracebackType. Link them with tb_next constructor argument. import traceback import types def caller(full = False): somefunction(full) def somefunction(full): try: raise ValueError("This is a value error") except ValueError as e: if full: complete_traceback(e) print("".join(traceback.format_tb(e.__traceback__))) def complete_traceback(exception): t = exception.__traceback__ fb = t.tb_frame.f_back while fb: t = types.TracebackType(tb_next=t, tb_frame=fb, tb_lasti=fb.f_lasti, tb_lineno=fb.f_lineno) fb = t.tb_frame.f_back exception.__traceback__ = t print("This will produce a traceback with only one frame, the one in somefunction()") caller() print("This will produce a traceback with all relevant frames") caller(True) Output: This will produce a traceback with only one frame, the one in somefunction() File ".../short_tb.py", line 10, in somefunction raise ValueError("This is a value error") This will produce a traceback with all relevant frames File ".../short_tb.py", line 32, in <module> caller(True) File ".../short_tb.py", line 6, in caller somefunction(full) File ".../short_tb.py", line 10, in somefunction raise ValueError("This is a value error") | 3 | 1 |
78,170,750 | 2024-3-16 | https://stackoverflow.com/questions/78170750/python-tensorflow-keras-error-when-load-a-json-model-could-not-locate-class-se | I've built and trained my model weeks ago and saved it on model.json dan model.h5 Today, when I tried to load it using model_from_json, it gives me an error TypeError: Could not locate class 'Sequential'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. Full object config: {'class_name': 'Sequential', 'config': {'name': 'sequential_7', 'layers': [{'module': 'keras.layers', 'class_name': 'InputLayer', 'config': {'batch_input_shape': [None, 244, 244, 3], 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'conv2d_15_input'}, 'registered_name': None}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_15', 'trainable': True, 'dtype': 'float32', 'batch_input_shape': [None, 244, 244, 3], 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 244, 244, 3]}}, {'module': 'keras.layers', 'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_14', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 242, 242, 32]}}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d_16', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'groups': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 121, 121, 32]}}, {'module': 'keras.layers', 'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_15', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 119, 119, 64]}}, {'module': 'keras.layers', 'class_name': 'Flatten', 'config': {'name': 'flatten_7', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last'}, 'registered_name': None, 'build_config': {'input_shape': [None, 59, 59, 64]}}, {'module': 'keras.layers', 'class_name': 'Dense', 'config': {'name': 'dense_14', 'trainable': True, 'dtype': 'float32', 'units': 64, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': 
{'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 222784]}}, {'module': 'keras.layers', 'class_name': 'Dense', 'config': {'name': 'dense_15', 'trainable': True, 'dtype': 'float32', 'units': 2, 'activation': 'softmax', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': [None, 64]}}]}, 'keras_version': '2.13.1', 'backend': 'tensorflow'} I've imports all the requirements: import numpy as np from keras.preprocessing import image from keras.models import model_from_json from tensorflow.keras import layers from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, GlobalAveragePooling2D, Dropout, Flatten from tensorflow.keras.applications import VGG16 And this is my code used to load the saves json model: json_file = open('./model/model.json', 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) loaded_model.load_weights("model.h5") Am I missing something? | After checking the version because of @kartoos comments. Yes, my kaggle's notebook use tensorflow version 2.13.1 meanwhile I'm trying to load the model to 2.16.1 Tensorflow version. I tried to install 2.13.1 but i can't find a way, so I'm build and re-train my model using 2.16.1 version tensorflow, and it works, no more error and the model can works fine when get saved and loaded again. Saved the model as keras instead of json model.save('model.keras') and load the model again loaded_model = keras.models.load_model("./model/model.keras") Thanks ! | 3 | 4 |
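To make the version mismatch visible before it bites, a small sketch: print the TensorFlow/Keras versions the loading environment actually uses, then round-trip a model through the native `.keras` format as the answer suggests. The tiny Sequential model and file path here are stand-ins, not the question's real model:

```python
import tensorflow as tf
from tensorflow import keras

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)

# Stand-in model; the real one came from the training notebook.
model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),
])

# Save and load with the native format so one Keras version handles both ends.
model.save("model.keras")
reloaded = keras.models.load_model("model.keras")
reloaded.summary()
```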
78,170,965 | 2024-3-16 | https://stackoverflow.com/questions/78170965/why-does-renamecolumns-b-b-copy-false-followed-by-inplace-method-no | Here's my example: In [1]: import pandas as pd In [2]: df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]}) In [3]: df1 = df.rename(columns={'b': 'b'}, copy=False) In [4]: df1.isetitem(1, [7,8,9]) In [5]: df Out[5]: a b 0 1 4 1 2 5 2 3 6 In [6]: df1 Out[6]: a b 0 1 7 1 2 8 2 3 9 If df1 was derived from df with copy=False, then I'd have expected an in-place modification of df1 to also affect df. But it doesn't. Why? I'm using pandas version 2.2.1, with no options (e.g. copy-on-write) enabled | copy=False means that the underlying data (numpy) is shared. If we modify one item of the underlying numpy array, it is reflected since the data is shared: df = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]}) df1 = df.rename(columns={'b': 'c'}, copy=False) df1.values[0, 1] = 999 print(df) a b 0 1 999 1 2 5 2 3 6 print(df1) a c 0 1 999 1 2 5 2 3 6 print(id(df.values), id(df1.values)) # 140335530654768 140335530654768 The DataFrame objects (containers, indices, column namesβ¦) are however not identical. print(id(df), id(df1)) # 140335696175520 140335696171536 By setting a new column with df1.isetitem(1, [7,8,9]) you just add a new Series and force a copy. | 3 | 1 |
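A quick way to see what `copy=False` actually shares, and when that sharing stops, is `np.shares_memory` on the column buffers. A sketch assuming pandas 2.x with copy-on-write disabled, as in the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df1 = df.rename(columns={'b': 'b'}, copy=False)

# The underlying block is shared between the two frames ...
print(np.shares_memory(df['a'].to_numpy(), df1['a'].to_numpy()))  # expect True

# ... but isetitem swaps in a brand-new column array, so df is untouched.
df1.isetitem(1, [7, 8, 9])
print(np.shares_memory(df['b'].to_numpy(), df1['b'].to_numpy()))  # expect False
```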
78,170,789 | 2024-3-16 | https://stackoverflow.com/questions/78170789/polars-replacing-values-greater-than-the-max-of-another-polars-dataframe-within | I have 2 DataFrames: import polars as pl df1 = pl.DataFrame( { "group": ["A", "A", "A", "B", "B", "B"], "index": [1, 3, 5, 1, 3, 8], } ) df2 = pl.DataFrame( { "group": ["A", "A", "A", "B", "B", "B"], "index": [3, 4, 7, 2, 7, 10], } ) I want to cap the index in df2 using the largest index of each group in df1. The groups in two DataFrames are the same. expected output for df2: shape: (6, 2) βββββββββ¬ββββββββ β group β index β β --- β --- β β str β i64 β βββββββββͺββββββββ‘ β A β 3 β β A β 4 β β A β 5 β β B β 2 β β B β 7 β β B β 8 β βββββββββ΄ββββββββ | You can compute the max per group over df1, then clip df2: out = df2.with_columns( pl.col('index').clip( upper_bound=df1.select(pl.col('index').max().over('group'))['index'] ) ) Output: shape: (6, 2) βββββββββ¬ββββββββ β group β index β β --- β --- β β str β i64 β βββββββββͺββββββββ‘ β A β 3 β β A β 4 β β A β 5 β β B β 2 β β B β 7 β β B β 8 β βββββββββ΄ββββββββ Alternatively, if the two groups are not necessarily the same in both dataframes, you could group_by.max then align with join: df1 = pl.DataFrame( { "group": ["A", "A", "A", "B", "B", "B"], "index": [1, 3, 5, 1, 3, 7], } ) df2 = pl.DataFrame( { "group": ["A", "A", "A", "B", "B", "B", "B"], "index": [3, 4, 7, 2, 7, 8, 9], } ) out = df2.with_columns( pl.col('index').clip( upper_bound=df2.join(df1.group_by('group').max(), on='group')['index_right'] ) ) Output: shape: (7, 2) βββββββββ¬ββββββββ β group β index β β --- β --- β β str β i64 β βββββββββͺββββββββ‘ β A β 3 β β A β 4 β β A β 5 β β B β 2 β β B β 7 β β B β 7 β β B β 7 β βββββββββ΄ββββββββ | 6 | 3 |
78,169,670 | 2024-3-15 | https://stackoverflow.com/questions/78169670/how-python-imports-an-instance-of-a-class | I'm trying to find out how exactly importing works in Python. Let's say I have a foo.py module that contains the following class: class Foo: def __init__(self, *args, **kwargs): ... foo = Foo() Now, I want to import it in other modules. Is it going to use the same instance every time I import it, or will it make a different instance on each import? And is this even a good way to instantiate a class, or would it be better to import the Foo class and create an instance in each module that imports it? (As is obvious, this class doesn't need any args or kwargs.) | Import it in the usual way: from foo import Foo If this is the first time, the foo.py code will execute and the {class, def} statements will generate bytecode for the interpreter. And we make a note of it in sys.modules. Subsequent imports will benefit from a cache hit, so we won't repeat all that evaluation work. When baz.py and qux.py ask for an import of the foo module, they will immediately obtain the cached result, which contains that same old Foo class bytecode. Bottom line: you get exactly one instance of the code in that class. | 3 | 2
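A small check that makes the caching concrete, assuming the foo.py from the question sits next to this script: the module object lives in sys.modules, and every import style hands back the same objects.

```python
# check_foo.py, placed next to foo.py
import sys

import foo                         # first import: foo.py runs exactly once
from foo import foo as foo_instance

print(sys.modules['foo'] is foo)   # True: the module object is cached
print(foo.foo is foo_instance)     # True: same Foo() instance either way

import foo as foo_again            # cache hit, no re-execution
print(foo_again is foo)            # True
```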
78,166,405 | 2024-3-15 | https://stackoverflow.com/questions/78166405/reproducing-the-phase-spectrum-while-using-np-fft-fft2-and-cv2-dft-why-are-the | Another question was asking about the correct way of getting magnitude and phase spectra while using cv2.dft. My answer was limited to the numpy approach and then I thought that using OpenCV for this would be even nicer. I am currently trying to reproduce the same results but I am seeing significant differences in the phase spectrum. Here are my imports: %matplotlib notebook import matplotlib.pyplot as plt import numpy as np import cv2 im = np.zeros((50, 50), dtype = np.float32) # create empty array im[2:10, 2:10] = 255 # draw a rectangle The numpy example and results: imFFTNumpy = np.fft.fft2(im) imFFTNumpyShifted = np.fft.fftshift(imFFTNumpy) magSpectrumNumpy = np.abs(imFFTNumpyShifted) phaseSpectrumNumpy = np.angle(imFFTNumpyShifted) fig, ax = plt.subplots(nrows = 1, ncols = 3) ax[0].imshow(im) ax[1].imshow(magSpectrumNumpy) ax[2].imshow(phaseSpectrumNumpy) plt.suptitle("Using Numpy np.fft.fft2 and np.abs/ np.angle") The OpenCV example and results: imFFTOpenCV = cv2.dft(im, flags=cv2.DFT_COMPLEX_OUTPUT) imFFTOpenCVShifted = np.fft.fftshift(imFFTOpenCV) magSpectrumOpenCV, phaseSpectrumOpenCV = cv2.cartToPolar(imFFTOpenCVShifted[:,:,0], imFFTOpenCVShifted[:,:,1]) fig, ax = plt.subplots(nrows = 1, ncols = 3) ax[0].imshow(im) ax[1].imshow(magSpectrumOpenCV) ax[2].imshow(phaseSpectrumOpenCV) plt.suptitle("Using OpenCV cv2.dft and cv2.cartToPolar") As you can see, while the magnitude spectrum looks the same (it has some expected deviations due to floating-point arithmetic), the phase spectrum looks significantly different. I dug around a bit and found out that OpenCV usually returns phase from 0 to 2Ο, whereas np.angle returns the phase from -Ο to +Ο. Subtracting Ο from the OpenCV phase does not correct difference though. What could be the reason for this? Is it possible to get almost identical phase using both approaches, just like with magnitude? | Given that imFFTOpenCV is a 3D array because OpenCV doesnβt understand complex numbers, np.fft.fftshift(imFFTOpenCV) will swap the real and complex planes. That is, the shift happens in all 3 dimensions of the array. So when computing the phase and magnitude, you need to take this swap into account: magSpectrumOpenCV, phaseSpectrumOpenCV = cv2.cartToPolar(imFFTOpenCVShifted[:,:,1], imFFTOpenCVShifted[:,:,0]) Alternatively, and this is probably more readable, you could tell NumPy which axes you want shifted: imFFTOpenCVShifted = np.fft.fftshift(imFFTOpenCV, axes=(0, 1)) | 3 | 4 |
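A sketch that ties the two corrected pipelines together and checks them against each other. The comparison is done on the unit circle because np.angle reports (-π, π] while cartToPolar reports [0, 2π), and bins with negligible magnitude are skipped since their phase is numerically meaningless:

```python
import cv2
import numpy as np

im = np.zeros((50, 50), dtype=np.float32)
im[2:10, 2:10] = 255

# NumPy reference
F = np.fft.fftshift(np.fft.fft2(im))
phase_np = np.angle(F)

# OpenCV, shifting only the two spatial axes so the real/imag planes stay put
F_cv = np.fft.fftshift(cv2.dft(im, flags=cv2.DFT_COMPLEX_OUTPUT), axes=(0, 1))
_, phase_cv = cv2.cartToPolar(F_cv[:, :, 0], F_cv[:, :, 1])

# Compare phases only where the spectrum has meaningful magnitude
mask = np.abs(F) > 1e-3 * np.abs(F).max()
diff = np.abs(np.exp(1j * phase_np) - np.exp(1j * phase_cv))[mask].max()
print(diff)  # should be small, on the order of float32 DFT precision
```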
78,157,548 | 2024-3-14 | https://stackoverflow.com/questions/78157548/calculate-exponential-complex-sum-with-fft-instead-of-summation-to-simulate-diff | Context I am trying to understand x-ray diffraction a little better by coding it up in python. For a collection of points with positions R_i, the Debye formula goes where the i in the exponential is for the complex number, all other i's are for indices and for now b_i = b_j = 1, for simplicity., Now I tried explicitly calculating this sum for a collection of points of which I have the coordinates import numpy as np # set up grid dims = 2 side = 30 points = np.power(side, dims) coords = np.zeros((dims, points)) xc, yc = np.meshgrid(np.arange(side), np.arange(side)) coords[0, :] = xc.reshape((points)) coords[1, :] = yc.reshape((points)) # calculate diffraction xdist = np.subtract.outer(coords[0], coords[0]) ydist = np.subtract.outer(coords[1], coords[1]) rdist = np.stack((xdist, ydist)) rdist = rdist.reshape(2, rdist.shape[1]*rdist.shape[2]) qs = 200 qspace = np.stack((np.linspace(-2, 8, qs), np.zeros(qs))) diffrac = np.sum(np.exp(-1j * np.tensordot(qspace.T, rdist, axes=1)), axis=1) Which gave me the following after a couple seconds This looks as expected (periodicity of 2 pi, as the dots have spacing 1). It also makes sense that this takes some time: for 900 points, 810000 distances have to be calculated. I don't use loops so I think the code is not that bad in terms of efficiency, but just the fact that I'm calculating this sum manually seems inherently slow. Thoughts Now it looks as if things would speed up greatly if I could use a discrete fast fourier transform for this - given the shape of the sum. However: for the discrete fourier transform, I would still need to 'pixelate' the image (as far as I understand), to include a lot of empty space in between the points in my signal. As if I were to transform the pixels of the first image I shared. This also seems less efficient (e.g. because of the sampling). I would like to move points around afterwards, so the fact that the first image is a grid and thus sampled regularly is not particularly helpful. It looks as if non-uniform fourier transformations could help me, but still that would require me to 'pixelate' the image and set some values to 0. Question Is there a way to use FFT (or another method) to calculate the sum faster, starting from a list of np.array coordinates (x,y)? (of dirac delta functions, if you want so...). Specifically pointers at relevant mathematical techniques/python functions/python packages would be appreciated. I'm not that familiar using using fourier transforms for actual applications, but most of the material I find online seems irrelevant. So probably I'm looking in the wrong direction, or something's lacking in my understanding. All help is appreciated! (the first image is a screenshot from https://www.ill.eu/fileadmin/user_upload/ILL/6_Careers/1_All_our_vacancies/PhD_recruitment/Student_Seminars/2017/19-2017-05-09_Fischer_Cookies.pdf, as it seems there's no math notation on SO or I did not find it)) | This answer provides a solution to make the code more efficient so it fully uses the computing power of your CPU and so to make it significantly faster. More than 90% of the time is spent in np.exp because computing the experiential of complex numbers is very expensive. One solution to speed this up is to use multiple threads (since Numpy does not use multiple threads). 
On top of that, we can also use a faster implementation of np.exp (typically leveraging SIMD units of the CPU). Both can be done easily with Numexpr. Then, we can speed up the np.tensordot operation using the matrix multiplication qspace.T @ rdist since the Numpy implementation is not efficient. import numexpr as ne # Equivalent of the last line of the code: tmp1 = qspace.T @ rdist tmp2 = ne.evaluate('exp(-1j * tmp1)') diffrac = np.sum(tmp2, axis=1) Performance evaluation Here are performance results on my i5-9600KF CPU (6 cores): Initial code: 9.3 s New proposed code: 1.1 s Thus, the new implementation is 8.5 times faster. Most of the time is still spent in computing the exponential of complex numbers (>60%). | 3 | 0 |
78,168,476 | 2024-3-15 | https://stackoverflow.com/questions/78168476/how-to-get-the-index-of-the-first-row-that-meets-the-conditions-of-a-mask | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [100, 1123, 123, 100, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], }, index=range(100, 107) ) And this is the expected output. I want to create column c: a b c 100 100 1000 NaN 101 1123 11123 NaN 102 123 1123 NaN 103 100 0 3.0 104 1 55 NaN 105 0 0 NaN 106 1 1 NaN The mask that is used is: mask = ((df.a > df.b)) I want to get the index of the first row where the mask occurs. I want to preserve the original index but get the reset_index() value. In this example the first instance of the mask is at index 3. I can get the first instance of the mask by this: df.loc[mask.cumsum().eq(1) & mask, 'c'] = 'the first row' But I don't know how to get the index. | This code can be modified to search for the second and third items as well, not only the first: cond1 = df['a'] > df['b'] cond2 = df.groupby(cond1).cumcount().eq(0) df.loc[cond1 & cond2, 'c'] = 'the first row' df: a b c 100 100 1000 NaN 101 1123 11123 NaN 102 123 1123 NaN 103 100 0 the first row 104 1 55 NaN 105 0 0 NaN 106 1 1 NaN If you are only looking for the first value, the following code may be simpler: df.loc[df['a'].gt(df['b']).cummax().cumsum().eq(1), 'c'] = 'the first row' Update: if you only want the location of the index, use the following code: cond1 = df['a'] > df['b'] idx = cond1.idxmax() loc = df.index.get_loc(idx) loc: 3 df.loc[df.index == idx, 'c'] = loc df: a b c 100 100 1000 NaN 101 1123 11123 NaN 102 123 1123 NaN 103 100 0 3 104 1 55 NaN 105 0 0 NaN 106 1 1 NaN | 4 | 3
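One caveat with idxmax: on an all-False mask it still returns the first label. A sketch that returns the positional index (the reset_index()-style value asked for) and handles the no-match case explicitly:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'a': [100, 1123, 123, 100, 1, 0, 1],
     'b': [1000, 11123, 1123, 0, 55, 0, 1]},
    index=range(100, 107),
)
mask = (df['a'] > df['b']).to_numpy()

hits = np.flatnonzero(mask)                  # positions where the mask is True
first_pos = int(hits[0]) if hits.size else None
print(first_pos)                             # 3

if first_pos is not None:
    df.loc[df.index[first_pos], 'c'] = first_pos
print(df)
```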
78,167,633 | 2024-3-15 | https://stackoverflow.com/questions/78167633/how-to-generate-the-section-id-based-on-certain-condition | I have pandas dataframe with this input: data = { 'sec_id': ['1', '', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3', '', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3', '4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4', '5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4', '6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1', '', '', '6.9.2'], 'p_type': ['Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4', 'Heading4', 'Heading3'] } df = pd.DataFrame(data) The problem is to populate the blank "document_section_id" values with accurate section ID values, using the preceding ones as references. Conditions: The number of digits is determined by the "paragraph type" column. For example, for "Heading3," there should be 4 digits and 3 dots, like so: 1.2.3.1. For each empty value, it should reference the preceding available "paragraph type" and increment by 1 accordingly. Example 1:Given the input, the section ID for the 12th row can be derived from the previous one, resulting in the computed value of 2.3.1.Example 2: For the 48th and 49th rows, the section ID needs to be derived as 6.9.1.1 and 6.9.1.2, respectively. There can be max 10 levels of subsection, so that should be taken care irrespective of number of sub sections. Output: sec_id = [ '1', '1.1', '1.2', '1.3', '1.3.1', '1.3.2', '2', '2.1', '2.2', '2.3', '2.3.1', '2.3.2', '2.3.3', '3', '4', '4.1', '4.1.1', '4.2', '4.3', '4.4', '5', '5.1', '5.2', '5.3', '5.3.1', '5.3.2', '5.3.3', '5.3.4', '5.3.5', '5.4', '5.5', '6', '6.1', '6.1.1', '6.2', '6.3', '6.4', '6.5', '6.6', '6.6.1', '6.6.2', '6.6.3', '6.7', '6.8', '6.9', '6.9.1', '6.9.1.1', '6.9.1.2', '6.9.2' ] p_type = [ 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4', 'Heading4', 'Heading3' ] This is what I tried but it's not giving accurate output: current_section_id = "" current_level = 0 for index, row in df.iterrows(): if row['sec_id'] == '': current_level += 1 section_id = current_section_id.split('.') section_id[current_level - 1] = str(int(section_id[current_level - 1]) + 1) section_id = '.'.join(section_id[:current_level]) current_section_id = section_id df.at[index, 'document_section_id'] = section_id else: current_section_id = row['sec_id'] current_level = row['paragraph_type'].count('Heading') - 1 | I don't think you need to do this in Pandas. 
You can create a class that tracks the current section and bumps it by 1, or appends a subsection, or truncates and increments, based on the paragraph type. Here is an example: class SectionCreator: def __init__(self): self.section = [0] def __call__(self, paragraph_type: str): depth = int(paragraph_type.replace('Heading', '')) section = self.section[:depth] if depth == len(section) + 1: section.append(1) elif depth > len(section) + 1: raise ValueError(f'Heading depth increased from {len(section)} to {depth}') else: section[depth - 1] += 1 self.section = section return '.'.join(map(str, self.section)) To use it, instantiate the object and pass in the paragraph types one at at time. headings = ['Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading1', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading1', 'Heading2', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading3', 'Heading3', 'Heading2', 'Heading2', 'Heading2', 'Heading3', 'Heading4', 'Heading4', 'Heading3'] sc = SectionCreator() for h in headings: print(sc(h)) The printed output is: 1 1.1 1.2 1.3 1.3.1 1.3.2 2 2.1 2.2 2.3 2.3.1 2.3.2 2.3.3 3 4 4.1 4.1.1 4.2 4.3 4.4 5 5.1 5.2 5.3 5.3.1 5.3.2 5.3.3 5.3.4 5.3.5 5.4 5.5 6 6.1 6.1.1 6.2 6.3 6.4 6.5 6.6 6.6.1 6.6.2 6.6.3 6.7 6.8 6.9 6.9.1 6.9.1.1 6.9.1.2 6.9.2 I added a exception for the case where a paragraph type jumps up by more than 1 level of depth. So going from a 2 to a 4 will raise an exception. sc = SectionCreator() sc('Heading1') # -> "1" sc('Heading2') # -> "1.1" sc('Heading4') # raises: ValueError: Heading depth increased from 2 to 4 | 3 | 2 |
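To plug this back into the question's DataFrame, the class can simply be applied row by row in document order. A self-contained sketch, repeating the SectionCreator class from the answer and using only a short slice of the question's p_type data for brevity:

```python
import pandas as pd

class SectionCreator:
    def __init__(self):
        self.section = [0]

    def __call__(self, paragraph_type: str):
        depth = int(paragraph_type.replace('Heading', ''))
        section = self.section[:depth]
        if depth == len(section) + 1:
            section.append(1)
        elif depth > len(section) + 1:
            raise ValueError(f'Heading depth increased from {len(section)} to {depth}')
        else:
            section[depth - 1] += 1
        self.section = section
        return '.'.join(map(str, self.section))

# A short slice of the question's data, just to show the dataframe integration.
df = pd.DataFrame({'p_type': ['Heading1', 'Heading2', 'Heading2',
                              'Heading3', 'Heading3', 'Heading1']})
sc = SectionCreator()
df['sec_id'] = [sc(p) for p in df['p_type']]
print(df)
# sec_id: 1, 1.1, 1.2, 1.2.1, 1.2.2, 2
```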
78,165,778 | 2024-3-15 | https://stackoverflow.com/questions/78165778/how-to-define-an-index-in-sqlalchemyalembic-on-a-column-from-a-base-table | I am a python novice. My project is using SqlAlchemy, Alembic and MyPy. I have a pair of parent-child classes defined like this (a bunch of detail elided): class RawEmergency(InputBase, RawTables): __tablename__ = "emergency" id: Mapped[UNIQUEIDENTIFIER] = mapped_column( UNIQUEIDENTIFIER(), primary_key=True, autoincrement=False ) attendance_id: Mapped[str | None] = guid_column() admitted_spell_id: Mapped[str | None] = guid_column() __table_args__ = ( PrimaryKeyConstraint("id", mssql_clustered=False), Index( "index_emergency_pii_patient_id_and_datetimes", pii_patient_id, attendance_start_date.desc(), attendance_start_time.desc(), ), ) class InputBase(DeclarativeBase): metadata = MetaData(schema="raw") refresh_date: Mapped[str] = date_str_column() refresh_time: Mapped[str] = time_str_column() class RawTables(object): id: Mapped[UNIQUEIDENTIFIER] = mapped_column( UNIQUEIDENTIFIER(), primary_key=True, autoincrement=False ) __table_args__: typing.Any = ( PrimaryKeyConstraint(name="id", mssql_clustered=False), ) I want to add a 2nd index to the Emergency table, indexing the refresh columns provided by the base table. I expect to do so by adding an additional Index() call into the __table_args__ setup. Then I want to run my standard migration creation/checking tool: poetry run alembic --config operator_app/alembic.ini revision --autogenerate -m "refresh_col_indexes" How do I reference the refresh columns in this declaration? Current attemguesses that have failed: Index( "index_emergency_refresh_date_time", refresh_date.desc(), refresh_time.desc(), ), mypy and the IDE both say they don't know what refresh_date is. error: Name "refresh_date" is not defined [name-defined] Index( "index_emergency_refresh_date_time", InputBase.refresh_date.desc(), InputBase.refresh_time.desc(), ), compiles now, but the alembic command doesn't work: sqlalchemy.exc.ArgumentError: Can't add unnamed column to column collection full error below Index( "index_emergency_refresh_date_time", super().refresh_date.desc(), super().refresh_time.desc(), ), Mypy/IDE say no: error: "super()" outside of a method is not supported Index( "index_emergency_refresh_date_time", super(InputBase, self).refresh_date.desc(), super(InputBase, self).refresh_time.desc(), ), self is not defined Index( "index_emergency_refresh_date_time", super(InputBase, None).refresh_date.desc(), super(InputBase, None).refresh_time.desc(), ), mypy says Unsupported argument 2 for "super" and alembic says AttributeError: 'super' object has no attribute 'refresh_date' | You can use sqlalchemy.orm.declared_attr for this. You can add any number of index you want under __table_args__ from sqlalchemy import create_engine, Index from sqlalchemy.orm import Mapped, mapped_column, DeclarativeBase, declared_attr class InputBase(DeclarativeBase): refresh_date: Mapped[str] refresh_time: Mapped[str] class RawEmergency(InputBase): __tablename__ = "emergency" id: Mapped[int] = mapped_column(primary_key=True, autoincrement=False) attendance_id: Mapped[str | None] admitted_spell_id: Mapped[str | None] @declared_attr def __table_args__(cls): return ( Index( "index_emergency_refresh_date_time", cls.refresh_date.desc(), cls.refresh_time.desc(), ), ) engine = create_engine("sqlite:///temp.sqlite", echo=True) InputBase.metadata.create_all(engine) These are the queries emitted. 
CREATE TABLE emergency ( id INTEGER NOT NULL, attendance_id VARCHAR, admitted_spell_id VARCHAR, refresh_date VARCHAR NOT NULL, refresh_time VARCHAR NOT NULL, PRIMARY KEY (id) ) sqlalchemy.engine.Engine CREATE INDEX index_emergency_refresh_date_time ON emergency (refresh_date DESC, refresh_time DESC) | 2 | 3 |
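Since declared_attr returns an ordinary tuple, the second index the question asks for is just another element of it. A runnable sketch with made-up index names and a trimmed-down model (SQLite in memory only so the emitted DDL is easy to see):

```python
from sqlalchemy import create_engine, Index
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, declared_attr

class InputBase(DeclarativeBase):
    refresh_date: Mapped[str]
    refresh_time: Mapped[str]

class RawEmergency(InputBase):
    __tablename__ = "emergency"
    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=False)
    attendance_id: Mapped[str | None]

    @declared_attr
    def __table_args__(cls):
        # Any number of Index/constraint objects can go in this tuple.
        return (
            Index("ix_emergency_attendance_id", cls.attendance_id),
            Index(
                "index_emergency_refresh_date_time",
                cls.refresh_date.desc(),
                cls.refresh_time.desc(),
            ),
        )

engine = create_engine("sqlite://", echo=True)
InputBase.metadata.create_all(engine)
```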
78,165,559 | 2024-3-15 | https://stackoverflow.com/questions/78165559/how-to-write-a-polars-dataframe-to-duckdb | I am trying to write a Polars DataFrame to a duckdb database. I have the following simple code which I expected to work: import polars as pl import duckdb pldf = pl.DataFrame({'mynum': [1,2,3,4]}) with duckdb.connect(database="scratch.db", read_only=False) as con: pldf.write_database(table_name='test_table', connection=con) However, I get the following error: sqlalchemy.exc.ArgumentError: Expected string or URL object, got <duckdb.duckdb.DuckDBPyConnection object I get a similar error if I use the non-default engine='adbc' instead of df.write_database()'s default engine='sqlalchemy'. So it seemed it should be easy enough to just swap in a URI for my duckdb database, but I haven't been able to get that to work either. Potentially it's complicated by my being on Windows? | In-memory database. If you just want to use DuckDB to query a polars dataframe, this can simply be achieved as long as the dataframe exists in the current scope. duckdb.sql("SELECT * FROM df").show() Persistent database If you want to use a persistent database, you could install duckdb-engine and write the database using the connection URI string. df.write_database( table_name='test_table', connection="duckdb:///scratch.db", ) Reading the data back in using DuckDB works as usual. with duckdb.connect(database="scratch.db", read_only=False) as con: con.query("SELECT * FROM test_table").show() ┌───────┐ │ mynum │ │ int64 │ ├───────┤ │ 1 │ │ 2 │ │ 3 │ │ 4 │ └───────┘ | 5 | 4
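If pulling in SQLAlchemy or duckdb-engine feels heavy, DuckDB itself can read a Polars frame that is in scope by name (its replacement scans, which may require pyarrow to be installed) and materialise it into the persistent file. A sketch:

```python
import duckdb
import polars as pl

pldf = pl.DataFrame({"mynum": [1, 2, 3, 4]})

with duckdb.connect("scratch.db") as con:
    # "pldf" below is resolved via DuckDB's replacement scan of local variables.
    con.execute("CREATE OR REPLACE TABLE test_table AS SELECT * FROM pldf")
    print(con.sql("SELECT * FROM test_table").pl())
```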
78,166,924 | 2024-3-15 | https://stackoverflow.com/questions/78166924/how-to-improve-efficiency-in-random-column-selection-and-assignment-in-pandas-da | I'm working on a project where I need to create a new DataFrame based on an existing one, with certain columns randomly selected and assigned in each row with probability proportional to the number in that column. However, my current implementation seems to be inefficient, especially when dealing with large datasets. I'm seeking advice on how to optimize this process for better performance. Here's a simplified version of what I'm currently doing: import pandas as pd import numpy as np # Sample DataFrame data = { 'dog': [1, 2, 3, 4], 'cat': [5, 6, 7, 8], 'parrot': [9, 10, 11, 12], 'owner': ['fred', 'bob', 'jim', 'jannet'] } df = pd.DataFrame(data) # List of relevant columns relevant_col_list = ['dog', 'cat', 'parrot'] # New DataFrame with the same number of rows new_df = df.copy() # Create 'iteration_1' column in new_df new_df['iteration_1'] = "" # Iterate over rows for index, row in new_df.iterrows(): # Copy columns not in relevant_col_list for column in new_df.columns: if column not in relevant_col_list and column != 'iteration_1': new_df.at[index, column] = row[column] # Randomly select a column from relevant_col_list with probability proportional to the number in the column probabilities = df[relevant_col_list ].iloc[index] / df[relevant_col_list ].iloc[index].sum() chosen_column = np.random.choice(relevant_col_list , p=probabilities) # Write the name of the chosen column in the 'iteration_1' column new_df.at[index, 'iteration_1'] = chosen_column print(new_df) How can I speed it up? | You could first rework your DataFrame to select the columns of interest, normalize the weights, then create a cumsum. tmp = (df[relevant_col_list] .pipe(lambda x: x.div(x.sum(axis=1), axis=0)) .cumsum(axis=1).to_numpy() ) # cumulated probabilities array([[0.06666667, 0.4 , 1. ], [0.11111111, 0.44444444, 1. ], [0.14285714, 0.47619048, 1. ], [0.16666667, 0.5 , 1. ]]) After that generate n random numbers between 0-1 and identify the first column greater than the random number with argmax: r = np.random.random(len(df)) df['iteration_1'] = np.array(relevant_col_list)[(tmp > r[:,None]).argmax(axis=1)] Output: col1 col2 col3 col4 iteration_1 0 1 5 9 13 col3 1 2 6 10 14 col3 2 3 7 11 15 col1 3 4 8 12 16 col2 Intermediates: # for reproducibility np.random.seed(1) tmp # array([[0.06666667, 0.4 , 1. ], # [0.11111111, 0.44444444, 1. ], # [0.14285714, 0.47619048, 1. ], # [0.16666667, 0.5 , 1. ]]) r = np.random.random(len(df)) # array([0.417022005, 0.720324493, 0.114374817, 0.302332573]) (tmp > r[:,None]) # array([[False, False, True], # [False, False, True], # [ True, True, True], # [False, True, True]]) (tmp > r[:,None]).argmax(axis=1) # array([2, 2, 0, 1]) np.array(relevant_col_list)[(tmp > r[:,None]).argmax(axis=1)] # array(['col3', 'col3', 'col1', 'col2'], dtype='<U4') | 3 | 3 |
78,165,746 | 2024-3-15 | https://stackoverflow.com/questions/78165746/python-import-directory-from-a-file-inside-that-directory | I'm sorry if the title is confusing since English is not my first language. What I'm having trouble with is running a Python script that imports from its own package directory. The folder structure is like this: MyProject |--Utils |-- util | |-- __init__.py | |-- run.py |-- __init__.py |-- test.py And the code of test.py is as follows: import Utils.util if __name__ == '__main__': # Do something When running test.py, I get this error: ModuleNotFoundError: No module named 'Utils' Is there any way to write the import so that it works with this folder structure? If possible, could someone also help me name this problem so I can search for it? I have tried creating a new script in the root folder, which runs normally. However, when I keep testing code like test.py inside the package folders, the import does not work. | The error "ModuleNotFoundError: No module named 'Utils'" occurs because Python doesn't automatically search for modules within subdirectories unless they are treated as packages; when test.py is run directly, Python puts the script's own directory (Utils/) on the search path, not the project root. Inside the package you can use a relative import instead, e.g. from . import util (which works when test.py is run as part of the package). | 2 | 1
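To spell out the accepted fix: when test.py is run directly, sys.path[0] is Utils/, not MyProject/, so the top-level package Utils is invisible. Two common ways around it, sketched below (Option 1 is usually the cleaner one):

```python
# Option 1: run it as a module from the project root, so "Utils" resolves:
#   cd MyProject
#   python -m Utils.test
#
# Option 2: keep test.py runnable as a plain script by putting the project
# root on sys.path before importing (pragmatic, not elegant):
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[1]))

import Utils.util  # noqa: E402

if __name__ == '__main__':
    # Do something
    print(Utils.util)
```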
78,164,251 | 2024-3-15 | https://stackoverflow.com/questions/78164251/dividing-each-column-in-polars-dataframe-by-column-specific-scalar-from-another | Polars noob, given an m x n Polars dataframe df and a 1 x n Polars dataframe of scalars, I want to divide each column in df by the corresponding scalar in the other frame. import numpy as np import polars as pl cols = list('abc') df = pl.DataFrame(np.linspace(1, 9, 9).reshape(3, 3), schema=cols) scalars = pl.DataFrame(np.linspace(1, 3, 3)[:, None], schema=cols) In [13]: df Out[13]: shape: (3, 3) βββββββ¬ββββββ¬ββββββ β a β b β c β β --- β --- β --- β β f64 β f64 β f64 β βββββββͺββββββͺββββββ‘ β 1.0 β 2.0 β 3.0 β β 4.0 β 5.0 β 6.0 β β 7.0 β 8.0 β 9.0 β βββββββ΄ββββββ΄ββββββ In [14]: scalars Out[14]: shape: (1, 3) βββββββ¬ββββββ¬ββββββ β a β b β c β β --- β --- β --- β β f64 β f64 β f64 β βββββββͺββββββͺββββββ‘ β 1.0 β 2.0 β 3.0 β βββββββ΄ββββββ΄ββββββ I can accomplish this easily in Pandas as shown below by delegating to NumPy broadcasting, but was wondering what the best way to do this is without going back and forth between Polars / Pandas representations. In [16]: df.to_pandas() / scalars.to_numpy() Out[16]: a b c 0 1.0 1.0 1.0 1 4.0 2.5 2.0 2 7.0 4.0 3.0 I found this similar question where the scalar constant is already a row in the original frame, but don't see how to leverage a row from another frame. Best I can come up with thus far is combining the frames and doing some... nasty looking things :D In [31]: (pl.concat([df, scalars]) ...: .with_columns(pl.all() / pl.all().tail(1)) ...: .head(-1)) Out[31]: shape: (3, 3) βββββββ¬ββββββ¬ββββββ β a β b β c β β --- β --- β --- β β f64 β f64 β f64 β βββββββͺββββββͺββββββ‘ β 1.0 β 1.0 β 1.0 β β 4.0 β 2.5 β 2.0 β β 7.0 β 4.0 β 3.0 β βββββββ΄ββββββ΄ββββββ | I think you found out a very unique/interesting and clever solution. Consider also just iterating over columns: df.select(column / scalars[column.name] for column in df.iter_columns()) or df.select(pl.col(k) / scalars[k] for k in df.columns) or df.with_columns(pl.col(k).truediv(scalars[k]) for k in df.columns) | 5 | 4 |
78,163,868 | 2024-3-14 | https://stackoverflow.com/questions/78163868/polars-expressions-failed-to-access-intermediate-column-creation-expressions | I want to encode the non-zero binary events with integer numbers. Following is a demo table: import polars as pl df = pl.DataFrame( { "event": [0, 1, 1, 0], "foo": [1, 2, 3, 4], "boo": [2, 3, 4, 5], } ) The expected output is achieved by: df = df.with_row_index() events = df.select(pl.col(["index", "event"])).filter(pl.col("event") == 1).with_row_index("event_id").drop("event") df = df.join(events, on="index", how="left") out: shape: (4, 5) βββββββββ¬ββββββββ¬ββββββ¬ββββββ¬βββββββββββ β index β event β foo β boo β event_id β β --- β --- β --- β --- β --- β β u32 β i64 β i64 β i64 β u32 β βββββββββͺββββββββͺββββββͺββββββͺβββββββββββ‘ β 0 β 0 β 1 β 2 β null β β 1 β 1 β 2 β 3 β 0 β β 2 β 1 β 3 β 4 β 1 β β 3 β 0 β 4 β 5 β null β βββββββββ΄ββββββββ΄ββββββ΄ββββββ΄βββββββββββ I want to get the expeceted output by chaining the expressions: ( df .with_row_index() .join( df .select(pl.col(["index", "event"])) .filter(pl.col("event") == 1) .with_row_index("event_id") .drop("event"), on="index", how="left", ) ) However, the expressions within the .join() expression does not seem to have added index column from the df.with_row_index() operation: ColumnNotFoundError: index Error originated just after this operation: DF ["event", "foo", "boo"]; PROJECT */3 COLUMNS; SELECTION: "None" | While the solution using the walrus operator works. It is probably more idiomatic and cleaner to use a pl.when().then() construct in conjunction with pl.int_range() to create the event_id. ( df .with_columns( pl.when(pl.col("event") == 1) .then(pl.int_range(pl.len())) .over("event") .alias("event_id") ) ) shape: (4, 4) βββββββββ¬ββββββ¬ββββββ¬βββββββββββ β event β foo β boo β event_id β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β βββββββββͺββββββͺββββββͺβββββββββββ‘ β 0 β 1 β 2 β null β β 1 β 2 β 3 β 0 β β 1 β 3 β 4 β 1 β β 0 β 4 β 5 β null β βββββββββ΄ββββββ΄ββββββ΄βββββββββββ | 3 | 2 |
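A cumulative-sum variant of the same idea, in case the window function feels heavy: count the events seen so far and subtract one, leaving nulls on the non-event rows (cum_sum is the current name; older Polars spells it cumsum):

```python
import polars as pl

df = pl.DataFrame(
    {
        "event": [0, 1, 1, 0],
        "foo": [1, 2, 3, 4],
        "boo": [2, 3, 4, 5],
    }
)

out = df.with_columns(
    pl.when(pl.col("event") == 1)
    .then(pl.col("event").cum_sum() - 1)   # running count of events, 0-based
    .alias("event_id")
)
print(out)
```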
78,163,954 | 2024-3-14 | https://stackoverflow.com/questions/78163954/how-could-i-get-a-value-out-of-a-pandas-dataframe-with-a-shape-of-1-1-without | I want to get a single value from a pandas DataFrame more efficiently. This is how I do it now: # import pandas import pandas as pd # set up the dataframe df = pd.DataFrame({'col1':['a','a','b'],'col2':[10,20,20],'col3':[100.0,200.0,300.0]}) # STEP 1 - filter down to a single row with loc row_of_interest = df.loc[(df['col1'] == 'a') & (df['col2'] > 11)] # STEP 2 - specify the column of interest column_and_row_of_interest = row_of_interest['col3'] # STEP 3 - get the value out of dataframe format using to_list and list indexing value_of_interest = column_and_row_of_interest.to_list()[0] And of course, I can do steps 1-3 in a single line of code like this: value_of_interest = df.loc[(df['col1'] == 'a') & (df['col2'] > 11)]['col3'].to_list()[0] I imagine STEP 1 and STEP 2 might be unavoidable, but STEP 3 feels clunky. Is there a better way to get a value out of a DataFrame with a shape of (1,1) than using .to_list()[0] ? | You can do: print(column_and_row_of_interest.squeeze()) OR: print(column_and_row_of_interest.iat[0]) This prints: 200.0 | 2 | 3 |
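Another option worth knowing: Series.item() returns the scalar directly and raises a clear error if the selection does not contain exactly one value, which guards against a silently wrong mask; selecting the column inside .loc also merges steps 1 and 2:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'b'],
                   'col2': [10, 20, 20],
                   'col3': [100.0, 200.0, 300.0]})

selected = df.loc[(df['col1'] == 'a') & (df['col2'] > 11), 'col3']
value_of_interest = selected.item()   # raises if not exactly one element
print(value_of_interest)              # 200.0
```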
78,162,874 | 2024-3-14 | https://stackoverflow.com/questions/78162874/losing-type-information-inside-polars-dataframe | Sorry if my question doesn't make a lot of sense. I don't have much experience in python. I have some code that looks like: import polars as pl from typing import NamedTuple class Event(NamedTuple): name: str description: str def event_table(num) -> list[Event]: events = [] for i in range(5): events.append(Event("name", "description")) return events def pretty_string(events: list[Event]) -> str: pretty = "" for event in events: pretty += f"{event.name}: {event.description}\n" return pretty # This does work print(pretty_string(event_table(5))) # But then it doesn't work if I have my `list[Event]` in a dataframe data = {"events": [0, 1, 2, 3, 4]} df = pl.DataFrame(data).select(events=pl.col("events").map_elements(event_table)) # This doesn't work pretty_df = df.select(events=pl.col("events").map_elements(pretty_string)) print(pretty_df) # Neither does this print(pretty_string(df["events"][0])) It fails with error: Traceback (most recent call last): File "path/to/script.py", line 32, in <module> pretty_df = df.select(events=pl.col("events").map_elements(pretty_string)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/.venv/lib/python3.11/site-packages/polars/dataframe/frame.py", line 8116, in select return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "path/to/.venv/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1934, in collect return wrap_df(ldf.collect()) ^^^^^^^^^^^^^ polars.exceptions.ComputeError: AttributeError: 'dict' object has no attribute 'name' Looks like my list[Event] is no longer that inside the df. I am not sure how to go about getting this to work. | You can keep the Event objects by passing return_dtype=pl.Object df.select(pl.col("events").map_elements(event_table)) shape: (5, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β list[struct[2]] β βββββββββββββββββββββββββββββββββββββ‘ β [{"name","description"}, {"name"β¦ β β [{"name","description"}, {"name"β¦ β β [{"name","description"}, {"name"β¦ β β [{"name","description"}, {"name"β¦ β β [{"name","description"}, {"name"β¦ β βββββββββββββββββββββββββββββββββββββ df.select(pl.col("events").map_elements(event_table, return_dtype=pl.Object)) shape: (5, 1) βββββββββββββββββββββββββββββββββββββ β events β β --- β β object β βββββββββββββββββββββββββββββββββββββ‘ β [Event(name='name', description=β¦ β β [Event(name='name', description=β¦ β β [Event(name='name', description=β¦ β β [Event(name='name', description=β¦ β β [Event(name='name', description=β¦ β βββββββββββββββββββββββββββββββββββββ | 2 | 2 |
78,161,984 | 2024-3-14 | https://stackoverflow.com/questions/78161984/typing-for-rare-case-fallback-none-value | Trying to avoid typing issues I often run into the same problem. E.g. I have a function x that very rarily returns value None, all other times it returns int. def x(i: int) -> Union[int, None]: if i == 0: return return i def test(i: int): a = x(i) # typing issue: *= not supported for types int | None and int a *= 25 x used very often in the codebase and most of the time i was already checked a hundred times that x(i) will indeed return int and not None. Using it as int right away creates typing warnings - e.g. you can't multiply possible None value. What's best practice for that case? Ideas I considered: There is no real sense to check it for None with if a is None: return as it's already known. a *= 25 # type: ignore will make a an Unknown type. a = x(i) # type: int will make the warning go away. But will create a new warning "int | None cannot be assigned to int" a = cast(int, x(i)), haven't tested it much yet. I usually end up changing return type of x to just int, adding ignore in return # type: ignore and mention in the docstring that it can return None, it helps avoiding contaminating the entire codebase with type warnings. Is this the best approach? def x(i: int) -> int: """might also return `None`""" if i == 0: return # type: ignore return i | This might be a case where an exception is better than a return statement you never expect to be reached. def x(i: int) -> int: if i == 0: raise ValueError("didn't expect i==0") return i def test(i: int): try: a = x(i) except ValueError: pass a *= 25 Code that is confident it has sufficiently validated the argument to x can omit the try statement. Statically speaking, this is accurate: if x returns, it is guaranteed to return an int. (Whether it will return is another question.) Ideally, you could define a refinement type like NonZeroInt, and turn i == 0 into a type error, rather than a value error. # Made-up special form RefinementType obeys # # isinstance(x, RefinementType[T, p]) == isinstance(x, T) and p(x) NonZeroInt = RefinementType[int, lambda x: x != 0] def x(i: NonZeroInt) -> int: return i x(0) # error: Argument 1 to "x" has incompatible type "int"; expected "NonZeroInt" [arg-type] i: int = 0 x(i) # same error j: NonZeroInt = 0 # error: Incompatible types in assignment (expression has type "int", variable has type "NonZeroInt") [assignment] x(j) # OK k: NonZeroInt = 3 # OK x(k) # OK | 2 | 6 |
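A third pattern that keeps the Optional return type honest without cast or type: ignore is to narrow once at the call site with an explicit check that mypy understands; the cost is one guard where you are "sure", and the payoff is that the assumption is also enforced at runtime:

```python
from typing import Optional

def x(i: int) -> Optional[int]:
    if i == 0:
        return None
    return i

def test(i: int) -> int:
    a = x(i)
    if a is None:          # mypy narrows "int | None" to "int" after this
        raise ValueError("x() unexpectedly returned None")
    a *= 25                # no typing warning here
    return a
```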
78,161,053 | 2024-3-14 | https://stackoverflow.com/questions/78161053/convert-entire-python-file-from-2-space-indent-to-4-space-indent | I regularly work with Python files that are provided to me as templates, which use an indentation of 2. However, I personally prefer working with an indentation width of 4, which is what I've set in my .vimrc. However, because of the indentation-sensitivity of Python, the typical gg=G way to fix indentation to my preferred indentation width does not work well at all to convert a file to my preferred indentation. Furthermore, just leaving the indentation at a width of 2 screws up with my tabstop settings. To fix this, what I'd like to do is have some system to convert a 2-space indented file into a 4-space indented file. I don't know what the easiest way to do this would be, but I'm thinking to vmap <Tab> to increase the tab width of the selected lines by 1, and vmap <S-Tab> to decrease the width by 1. I don't know if there is some built-in method to do this, or if this would require some find/replace rule, but I think having some way to fix the indentation to my preferred width would be useful, given that the autoindent is not smart enough to fix this properly. | In vim: " Convert 2-spaces indent to tabs :set noexpandtab tabstop=2 :retab! " Convert tabs to 4-spaces indent :set expandtab tabstop=4 :retab | 3 | 4 |
78,159,660 | 2024-3-14 | https://stackoverflow.com/questions/78159660/whats-the-difference-between-np-dividex-y-and-x-y-in-python3 | I recently found a bug in my code that I was able to fix by replacing np.divide(x, y) with x / y. I was under the impression that np.divide(x, y) was equivalent to x / y (it says as much in the numpy documentation). Is this a bug in numpy or is it expected behaviour? As I said my immediate issue is solved so I'm not too worried about finding a fix, I am more curious to understand what's going on. import numpy as np x1 = np.array([[281], [15831], [30280], [975], [313], [739], [252], [10364], [21480], [1447], [315], [772], [95], [2710], [7408], [215], [111], [158], [0], [88], [21], [661], [0], [0], [0], [5], [4], [0], [12], [0], [0], [50], [28], [0], [0], [272]]) x2 = np.array([[499], [6315], [33800], [580], [208], [464], [384], [3127], [19596], [2319], [218], [1740], [217], [411], [4250], [223], [406], [267], [2], [0], [16], [0], [0], [0], [0], [8], [3], [0], [18], [0], [1], [0], [41], [0], [0], [0]]) x3 = np.array([[507], [6180], [34005], [555], [200], [451], [390], [3024], [19492], [2425], [211], [1848], [223], [396], [4097], [224], [406], [282], [2], [0], [16], [0], [0], [0], [0], [8], [3], [0], [19], [0], [2], [0], [45], [0], [0], [0]]) x4 = np.array([[507], [6178], [34017], [554], [200], [451], [391], [3022], [19486], [2439], [210], [1865], [223], [396], [4089], [224], [406], [284], [2], [0], [16], [0], [0], [0], [0], [8], [3], [0], [19], [0], [2], [0], [46], [0], [0], [0]]) not_zero = (x1 + x2) != 0 x = np.divide(2*(x1 - x2)**2, x1 + x2, where=not_zero) r = (2*(x1[not_zero] - x2[not_zero])**2) / (x1[not_zero] + x2[not_zero]) print("n1 =",x.max(),"\tt1 =", r.max()) not_zero = (x2 + x3) != 0 x = np.divide(2*(x2 - x3)**2, x2 + x3, where=not_zero) r = (2*(x2[not_zero] - x3[not_zero])**2) / (x2[not_zero] + x3[not_zero]) print("n2 =",x.max(),"\tt2 =", r.max()) not_zero = (x3 + x4) != 0 x = np.divide(2*(x3 - x4)**2, x3 + x4, where=not_zero) r = (2*(x3[not_zero] - x4[not_zero])**2) / (x3[not_zero] + x4[not_zero]) print("n3 =",x.max(),"\tt3 =", r.max()) Output: n1 = 8177.933351395286 t1 = 8177.933351395286 n2 = 873842.0 t2 = 6.501672240802676 n3 = 1322.0 t3 = 0.15566927013196877 Python version: 3.7.6 Numpy version: 1.17.0 | The parameter where = mask without the parameter out is somewhat dangerous. Without a target for the output, the function builds an np.empty array of the appropriate shape, and then replaces some subset of the empty array with the output data. But np.empty isn't, well, empty. It's just a random memory location that hasn't been initialized (so it still has any garbage data that existed in that memory block before). So where mask = False, your output will be that leftover random garbage. If that memory block happens to have binary garbage that can be encoded into a number bigger than the rest of your data, it end up being your max value. You can mask out the garbage using not_zero as a mask again: x = np.divide(2*(x1 - x2)**2, x1 + x2, where=not_zero)[not_zero] or initialize your output array yourself: x = np.zeros_like(x1) np.divide(2*(x1 - x2)**2, x1 + x2, where = not_zero, out = x) | 2 | 6 |
78,159,962 | 2024-3-14 | https://stackoverflow.com/questions/78159962/in-pandas-how-to-reliably-set-the-index-order-of-multilevel-columns-during-or-a | After pivoting around two columns with a separate value column, I want a df with multiindex columns in a specific order, like so (please ignore that multi-2 and multi-3 labels are pointless in the simplified example): multi-1 one two multi-2 multi-2 multi-2 multi-3 SomeText SomeText mIndex bar -1.788089 -0.631030 baz -1.836282 0.762363 foo -1.104848 -0.444981 qux -0.484606 -0.507772 Starting with a multiindex series of values, labelled multi-2, I create a three column df: column 1 - the serie's indexes (multi-1); column 2 - the values (multi-2); plus another column (multi-3), which I really only want for the column label. I then want to pivot this df around multi-1 and multi-3, with values multi-2. PROBLEM: The multiindex column labels MUST always be in a specific order: multi-1, multi-2, then multi-3. import pandas as pd import numpy as np arrays = [["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"], ["one", "two", "one", "two", "one", "two", "one", "two"]] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples, names=["mIndex", "multi-1"]) s = pd.Series(np.random.randn(8), index=index) s.rename("multi-2", inplace=True) df = pd.DataFrame(s.reset_index(level=["multi-1"])) df["multi-3"] = "SomeText" df = df.pivot(columns={"multi-1", "multi-3"}, values=["multi-2"]) df = df.swaplevel(0,1, axis=1) # option 1: works only sometimes # ???? how do I name the values level ???? df = df.reorder_levels("multi-1", "multi-2", "multi-3") # option 2: set fixed order Including multi-2 in the columns during the pivot creates another level. The .swaplevel method does not always return the same order because (I guess) the original index order is not always the same following the pivot. Can this be right?!? To use the reorder_levels, I need to somehow set an index label for the multi-2 value level (which is currently "None", along side "Multi-1" and "Multi-3"). Is there a way to set the label during the pivot? or after the pivot in a way that doesn't use the index (which seems to change somehow)? Or another way to get the same outcome? | After pivot, the values don't have an index name, you have to assign it: (df.pivot(columns={'multi-1', 'multi-3'}, values=['multi-2']) .rename_axis(columns={None: 'multi-2'}) .reorder_levels(['multi-1', 'multi-2', 'multi-3'], axis=1) ) Output: multi-1 one two multi-2 multi-2 multi-2 multi-3 SomeText SomeText mIndex bar 0.938079 -1.051440 baz 0.263281 1.388145 foo -0.965295 0.611163 qux -1.120318 -0.529974 Alternatively: swaplevel doesn't work consistently because you used a set (that is unordered) in pivot, use a list instead: (df.pivot(columns=['multi-1', 'multi-3'], values=['multi-2']) .swaplevel(0, 1, axis=1) ) NB. you can also add .rename_axis(columns={None: 'multi-2'}) if desired. Output: multi-1 one two multi-2 multi-2 multi-3 SomeText SomeText mIndex bar 0.542184 -0.199041 baz 1.253028 -1.006294 foo 0.252699 -1.728199 qux 0.572631 -0.694103 # with more columns # columns=['multi-1', 'multi-3', 'multi-4', 'multi-5'] multi-1 one two multi-2 multi-2 multi-3 SomeText SomeText multi-4 SomeText SomeText multi-5 SomeText SomeText mIndex bar 0.071546 0.264463 baz 0.516355 1.594471 foo -0.194536 -1.344563 qux -0.197232 -0.845405 | 2 | 5 |
78,155,443 | 2024-3-13 | https://stackoverflow.com/questions/78155443/using-a-column-values-within-the-round-function | I have this dataframe: df = pl.from_repr(""" βββββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββββββ β exchange_rate β sig_figs_len β reverse_rate_from_euro β β --- β --- β --- β β f64 β u32 β f64 β βββββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββββββ‘ β 6.4881 β 5 β 0.154128 β β 6.5196 β 5 β 0.153384 β β 6.4527 β 5 β 0.154974 β β 6.41 β 3 β 0.156006 β β 6.425 β 4 β 0.155642 β βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββ """) I would like to round the value of in every cell of the reverse_rate_from_euro with the matching value in the corresponding cell of the sig_figs_len column. I came up with a solution which uses the map_rows function, but since the dataset is quite big and using apply/pure python isn't an optimized ideal solution to go about it, i would like to find a better solution. Here's the snippet: df = df.with_columns( (df.map_rows(lambda df_: round(df_[-1], df_[-2]))) .to_series() .alias("reverse_rate_to_euro_rounded_sig_figs") ) Is there a better solution that uses any of the built-in Polars expressions API? result set should look like so: shape: (5, 4) βββββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββ β exchange_rate β sig_figs_len β reverse_rate_from_euro β reverse_rate_to_euro_rounded_sigβ¦ β β --- β --- β --- β --- β β f64 β u32 β f64 β f64 β βββββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββββ‘ β 6.4881 β 5 β 0.154128 β 0.15413 β β 6.5196 β 5 β 0.153384 β 0.15338 β β 6.4527 β 5 β 0.154974 β 0.15497 β β 6.41 β 3 β 0.156006 β 0.156 β β 6.425 β 4 β 0.155642 β 0.1556 β βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββ Any input is highly appreciated! thanks for reading! 
| You could probably do something like this: df.with_columns( ( pl.col('reverse_rate_from_euro') * pl.lit(10).pow(pl.col('sig_figs_len')) ).round() * pl.lit(0.1).pow(pl.col('sig_figs_len')) ) βββββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββββββ β exchange_rate β sig_figs_len β reverse_rate_from_euro β β --- β --- β --- β β f64 β i64 β f64 β βββββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββββββ‘ β 6.4881 β 3 β 0.154 β β 6.5196 β 4 β 0.1534 β βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββ alternatively, if you know that number of different sig_figs_len values is not large, you can just enumerate over unique() values and create result with coalesce() and when(): df.with_columns( pl.coalesce( pl.when(pl.col("sig_figs_len") == x) .then(pl.col("reverse_rate_from_euro").round(x)) for x in df['sig_figs_len'].unique() ).alias('reverse_rate_to_euro_rounded_sig_figs') ) βββββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββ β exchange_rate β sig_figs_len β reverse_rate_from_euro β reverse_rate_to_euro_rounded_sigβ¦ β β --- β --- β --- β --- β β f64 β i64 β f64 β f64 β βββββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββββ‘ β 6.4881 β 3 β 0.154128 β 0.154 β β 6.5196 β 4 β 0.153384 β 0.1534 β βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββ Essentially, what we do here is creating a Iterable[IntoExpr] which you can see as a list of columns where each column contains only values rounded to certain sig_figs_len value: df.with_columns( pl.when(pl.col("sig_figs_len") == x) .then(pl.col("reverse_rate_from_euro").round(x)).name.suffix(f"_{x}") for x in df['sig_figs_len'].unique() )) βββββββββββββββββ¬βββββββββββββββ¬ββββββββββββββββββββββ¬ββββββββββββββββββββββ¬ββββββββββββββββββββββ¬ββββββββββββββββββββββ β exchange_rate β sig_figs_len β reverse_rate_from_e β reverse_rate_from_e β reverse_rate_from_e β reverse_rate_from_e β β --- β --- β uro β uro_3 β uro_4 β uro_5 β β f64 β i64 β --- β --- β --- β --- β β β β f64 β f64 β f64 β f64 β βββββββββββββββββͺβββββββββββββββͺββββββββββββββββββββββͺββββββββββββββββββββββͺββββββββββββββββββββββͺββββββββββββββββββββββ‘ β 6.4881 β 5 β 0.154128 β null β null β 0.15413 β β 6.5196 β 5 β 0.153384 β null β null β 0.15338 β β 6.4527 β 5 β 0.154974 β null β null β 0.15497 β β 6.41 β 3 β 0.156006 β 0.156 β null β null β β 6.425 β 4 β 0.155642 β null β 0.1556 β null β βββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ and then we use the fact that pl.coalesce() accept iterable of Expr to get first non-null column | 2 | 3 |
78,156,640 | 2024-3-13 | https://stackoverflow.com/questions/78156640/why-is-visual-studio-code-saying-my-code-in-unreachable-after-using-the-pandas-c | Koda UlaΕΔ±lamΔ±yor -> Code is unreachable Visual Studio code is graying out my code and saying it is unreachable after I used pd.concat(). The IDE seems to run smoothly but it's disturbing and I want my colorful editor back. How do I disable the editor graying out my code without changing the current language? | This is a bug currently existing in pandas-stubs. The matching overload of concat in pandas-stubs currently returns Never. According to this suggestion in Pylance github, you could work around the pandas-stubs issue by commenting out the Never overload in ...\.vscode\extensions\ms-python.vscode-pylance-2024.3.1\dist\bundled\stubs\pandas\core\reshape\concat.pyi. @overload def concat( objs: Iterable[None] | Mapping[HashableT1, None], *, axis: Axis = ..., join: Literal["inner", "outer"] = ..., ignore_index: bool = ..., keys: Iterable[HashableT2] = ..., levels: Sequence[list[HashableT3] | tuple[HashableT3, ...]] = ..., names: list[HashableT4] = ..., verify_integrity: bool = ..., sort: bool = ..., copy: bool = ..., ) -> Never: ... | 16 | 17 |
78,156,811 | 2024-3-13 | https://stackoverflow.com/questions/78156811/how-do-i-isort-using-ruff | I often work in very small projects which do not have a config file. How do I use ruff in place of isort to sort the imports? I know that the following command is roughly equivalent to black: ruff format . The format command does not sort the imports. How do I do that? | According to the documentation: Currently, the Ruff formatter does not sort imports. In order to both sort imports and format, call the Ruff linter and then the formatter: ruff check --select I --fix . ruff format . | 18 | 36
78,156,692 | 2024-3-13 | https://stackoverflow.com/questions/78156692/pandas-move-values-from-one-column-to-an-appropriate-column | My google-fu is failing me. I have a simple dataframe that looks like this: Sample Subject Person Place Thing 1-1 Janet 1-1 Boston 1-1 Hat 1-2 Chris 1-2 Austin 1-2 Scarf I want the values in the subject column to move into their appropriate column so that I end up with something like this: Sample Subject Person Place Thing 1-1 Janet Janet Boston Hat 1-2 Chris Chris Austin Scarf I've looked at pivot and transpose, but those don't seem right. Any ideas would be appreciated! :) | If the groups are sorted and the pattern is always the same (no missing values), then reshape with numpy: cols = ['Person', 'Place', 'Thing'] out = df.loc[::len(cols), ['Sample']].reset_index(drop=True) out[cols] = df['Subject'].to_numpy().reshape(-1, len(cols)) For a more generic approach, only assuming that the categories are always in the same order within a group, identify the position per group with groupby.cumcount and map the names, then pivot: order = ['Person', 'Place', 'Thing'] out = (df.assign(col=df.groupby('Sample').cumcount() .map(dict(enumerate(order)))) .pivot(index='Sample', columns='col', values='Subject') .reset_index().rename_axis(columns=None) ) Variant with rename: order = ['Person', 'Place', 'Thing'] out = (df.assign(col=df.groupby('Sample').cumcount()) .pivot(index='Sample', columns='col', values='Subject') .rename(columns=dict(enumerate(order))) .reset_index().rename_axis(columns=None) ) Output: Sample Person Place Thing 0 1-1 Janet Boston Hat 1 1-2 Chris Austin Scarf Finally, if you really want the "Subject" column, insert it: out.insert(1, 'Subject', out['Person']) print(out) Sample Subject Person Place Thing 0 1-1 Janet Janet Boston Hat 1 1-2 Chris Chris Austin Scarf timings If you can use the numpy approach, it's more strict on the input but much faster: | 2 | 1 |
78,155,230 | 2024-3-13 | https://stackoverflow.com/questions/78155230/polars-timestamp-synchronization-lazy-evaluation | I want to synchronize two numpy arrays of timestamps to each other using Polars LazyFrames. Let's assume that I have two numpy arrays of timestamps which are stored using LazyFrames: import polars as pl timestamps = pl.LazyFrame( np.array( [ np.datetime64("1970-01-01T00:00:00.500000000"), np.datetime64("1970-01-01T00:00:01.500000000"), np.datetime64("1970-01-01T00:00:02.600000000"), np.datetime64("1970-01-01T00:00:03.400000000"), np.datetime64("1970-01-01T00:00:04.500000000"), np.datetime64("1970-01-01T00:00:05.300000000"), np.datetime64("1970-01-01T00:00:06.200000000"), np.datetime64("1970-01-01T00:00:07.400000000"), np.datetime64("1970-01-01T00:00:08.500000000"), ] ), schema={"values": pl.Datetime} ) other_timestamps = pl.LazyFrame( np.array( [ np.datetime64("1970-01-01T00:00:01.500000000"), np.datetime64("1970-01-01T00:00:02.000000000"), np.datetime64("1970-01-01T00:00:02.500000000"), np.datetime64("1970-01-01T00:00:04.500000000"), np.datetime64("1970-01-01T00:00:06.000000000"), np.datetime64("1970-01-01T00:00:06.500000000"), ] ), schema={"values": pl.Datetime} ) I also have the expected functionality implemented in numpy: import numpy as np import numpy.typing as npt def _np_sync_to( timestamps: npt.ArrayLike[np.datetime64], other: npt.ArrayLike[np.datetime64], tolerance: str, ): outer_diffs = np.abs(np.subtract.outer(other, timestamps)) closest_timestamps_indices = outer_diffs.argmin(0) closest_timestamps = other[closest_timestamps_indices] diffs = np.abs(closest_timestamps - timestamps) tolerance = parse_timedelta(tolerance) within_tolerance = diffs <= tolerance ts1_synced = timestamps[within_tolerance] ts2_synced = closest_timestamps[within_tolerance] return ts1_synced, ts2_synced np_ts1_synced, np_ts2_synced = _np_sync_to( timestamps=np.squeeze(timestamps.collect().to_numpy()), other=np.squeeze(other_timestamps.collect().to_numpy()), tolerance="500ms", ) The expected results are: np_ts1_synced = np.array([ np.datetime64('1970-01-01T00:00:01.500000000'), np.datetime64('1970-01-01T00:00:02.600000000'), np.datetime64('1970-01-01T00:00:04.500000000'), np.datetime64('1970-01-01T00:00:06.200000000') ]) np_ts2_synced = np.array([ np.datetime64('1970-01-01T00:00:01.500000000'), np.datetime64('1970-01-01T00:00:02.500000000'), np.datetime64('1970-01-01T00:00:04.500000000'), np.datetime64('1970-01-01T00:00:06.000000000') ]) So the synced timestamps are basically the nearest timestamps within the specified tolerance. Now I want to implement the same functionality using Polars LazyFrames to process large data. 
I tried to implement it equivalently with Polars, but the dimensions of the outer subtraction are not correct and I guess there is a better way to do the computation in general: # inspired by https://stackoverflow.com/questions/77748729/polars-equivalent-to-np-outer def sync_to(timestamps, other, tolerance: str): def _outer( a: pl.DataFrame | pl.LazyFrame, b: pl.DataFrame | pl.LazyFrame ): # I guess the following line is incorrect nrows = pl.len().sqrt().cast(pl.Int32) return ( a.select("values") .join(b.select("values"), how="cross") .select( computed=(pl.col("values") - pl.col("values_right")).abs() ) .group_by(pl.arange(0, pl.len()) // nrows, maintain_order=True) .agg("computed") .select(pl.col("computed").list.to_struct()) .unnest("computed") ) outer_diffs = timestamps.pipe(_outer, other) Another idea I had was: ts1 = timestamps.sort("values").join_asof( other.sort("values"), on="values", strategy="nearest", tolerance=tolerance, ) But the output is not what I want. | It seems that your join_as_of example works. The only thing is that, as far as I understand, join_as_of is a left join, so you have to additionally filter out non joined values by is_not_null() (or even better, drop_nulls() as @Hericks suggested in the comment): timestamps.sort("values").join_asof( other_timestamps.with_columns( pl.col('values').alias('other_values') ).sort("values"), on="values", strategy="nearest", tolerance="500ms" ).drop_nulls() # ).filter(pl.col('right_values').is_not_null()) βββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββ β values β other_values β β --- β --- β β datetime[ns] β datetime[ns] β βββββββββββββββββββββββββββͺββββββββββββββββββββββββββ‘ β 1970-01-01 00:00:01.500 β 1970-01-01 00:00:01.500 β β 1970-01-01 00:00:02.600 β 1970-01-01 00:00:02.500 β β 1970-01-01 00:00:04.500 β 1970-01-01 00:00:04.500 β β 1970-01-01 00:00:06.200 β 1970-01-01 00:00:06 β βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββ | 3 | 1 |
78,154,266 | 2024-3-13 | https://stackoverflow.com/questions/78154266/fill-bins-with-no-coverage-with-0 | I need to generate a heatmap with the average coverage of positions within a bin from a determined number of bins, regardless of the number of bases in a transcriptome within each bin. In other words, if I want to have 10 bins, for one transcriptome, it may have 1000 bases to distribute among 10 bins, and another may have 2445 bases to distribute among 10 bins. The problem is that in my coverage file, there are gaps that don't fall into any bin. For example, if I want 5 bins over 10 positions, I'll have: (0,2], (2,4], (4,6], (6,8], (8,10]. If my positions with coverage are 1, 5, 5, 5, 7, 7, 10, the bin "(2,4]" will be hidden, thus not appearing in the heatmap. What I want is for these bins without coverage to be filled with 0s so that they appear in the heatmap. I'm using Python with the pandas, seaborn and matplotlib.pyplot libraries. Below, the first line shows the edge positions of my bins, and the dataframe shows which bins have coverage. Input example: chr17 1 1 chr17 5 1 chr17 5 2 chr17 5 2 chr17 7 1 chr17 7 5 chr17 10 1 Problem: chr data_bin avg chr17 (0,2] 1 chr17 (4,6] 1.66 chr17 (4,6] 1.66 chr17 (4,6] 1.66 chr17 (6,8] 3 chr17 (6,8] 3 chr17 (8,10] 1 Expected: chr data_bin avg chr17 (0,2] 1 **chr17 (2,4] 0** chr17 (4,6] 1.66 chr17 (4,6] 1.66 chr17 (4,6] 1.66 chr17 (6,8] 3 chr17 (6,8] 3 chr17 (8,10] 1 The function I am using is: def bins_calculator(path_txt:str, start:int,end:int): column_names =["chr", "pos", "cov"] data = pd.read_csv(path_txt, names = column_names, sep = '\t') step = int((end - start) / 10) n_bins = [start + i * step for i in range(11)] n_bins[-1] = end data["data_bin"] = pd.cut(data["pos"], bins = n_bins) data["avg"] = data.groupby("data_bin", observed = False)["cov"].transform("mean") filtered_data = data[["chr","data_bin","avg"]].drop_duplicates("data_bin") return filtered_data Any questions about this problem, please let me know in the comments :) | IIUC you can use .merge to merge the missing categories, then fill any NaNs with values you want: df["data_bin"] = pd.cut(df["pos"], range(0, 12, 2)) df = pd.merge( df, df["data_bin"].cat.categories.to_frame(), left_on="data_bin", right_on=0, how="outer", )[["chr", "data_bin", "cov"]] df["chr"] = df["chr"].ffill().bfill() df["cov"] = df["cov"].fillna(0) df["avg"] = df.groupby("data_bin")["cov"].transform("mean") print(df) Prints: chr data_bin cov avg 0 chr17 (0.0, 2.0] 1.0 1.000000 1 chr17 (2.0, 4.0] 0.0 0.000000 2 chr17 (4.0, 6.0] 1.0 1.666667 3 chr17 (4.0, 6.0] 2.0 1.666667 4 chr17 (4.0, 6.0] 2.0 1.666667 5 chr17 (6.0, 8.0] 1.0 3.000000 6 chr17 (6.0, 8.0] 5.0 3.000000 7 chr17 (8.0, 10.0] 1.0 1.000000 | 4 | 1 |
78,153,897 | 2024-3-13 | https://stackoverflow.com/questions/78153897/row-count-expression-in-polars | Is there something like row_number, or a row_count expression in Polars? Something like the Polars with_row_index as an expression, but not just as a data frame method. I want to rank a given customer's order number in a day using the over window function, but can't come up with a solution so far. pl.count().over('ID Client')) gives me the total number of interactions that day. Now I want to have them ordered... In SQL I believe this works simply with the RANK() or ROW_NUMBER() window functions. Can anyone help with it? | You might be searching for pl.int_range, potentially combined with pl.Expr.over. import polars as pl df = pl.DataFrame({ "group": ["A", "A", "A", "B", "B", "C"], }) df.with_columns(pl.int_range(pl.len()).over("group").alias("index")) shape: (6, 2) βββββββββ¬ββββββββ β group β index β β --- β --- β β str β i64 β βββββββββͺββββββββ‘ β A β 0 β β A β 1 β β A β 2 β β B β 0 β β B β 1 β β C β 0 β βββββββββ΄ββββββββ | 3 | 4 |
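To connect the accepted answer back to the original use case (numbering a client's interactions within a day), the same int_range/over pattern can be combined with a sort; this is a minimal hedged sketch with hypothetical column names ("ID Client", "timestamp") that are not taken from the original post:

import polars as pl

df = pl.DataFrame({
    "ID Client": ["c1", "c1", "c2", "c1", "c2"],
    "timestamp": [3, 1, 5, 2, 4],
})

# Sort by interaction time, then number rows per client (1-based, like SQL ROW_NUMBER()).
out = df.sort("timestamp").with_columns(
    (pl.int_range(pl.len()).over("ID Client") + 1).alias("order_number")
)
print(out)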
78,153,672 | 2024-3-13 | https://stackoverflow.com/questions/78153672/decoding-of-a-byte-sequence-into-a-unicode-string | I am attempting to decode a byte sequence into a Unicode string from various types of files, such as .exe, .dll, and .deb files, using the pefile library in Python. However, I sometimes encounter Unicode decoding errors. How can I handle these errors effectively? Here's the relevant code snippet: import pefile def get_section_addresses(file_path): section_addresses = {} pe = pefile.PE(file_path) for section in pe.sections: section_addresses[section.Name.decode().strip('\x00')] = section.VirtualAddress return section_addresses section_addresses = get_section_addresses('D:/Binary/file/rufus.exe') for name, address in section_addresses.items(): print(f"{name}:{address:08X}") I'm utilizing pefile to parse Portable Executable (PE) files, extracting section names and their corresponding virtual addresses. However, during the decoding of section names, I sometimes encounter Unicode decoding errors. | I've implemented error handling using nested try-except blocks try: pe = pefile.PE(file_path) for section in pe.sections: try: name = section.Name.decode().strip('\x00') except UnicodeDecodeError: name = "Undecodable" section_addresses[name] = section.VirtualAddress except pefile.PEFormatError: print(f"Error: {file_path} is not a valid PE file.") return section_addresses | 4 | 3 |
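Besides the nested try/except above, bytes.decode also accepts an errors argument, which avoids the inner exception path entirely; the following is a hedged sketch of that variant (the overall function shape follows the question, and errors="replace" is one possible policy, not part of the accepted answer):

import pefile

def get_section_addresses(file_path):
    section_addresses = {}
    try:
        pe = pefile.PE(file_path)
    except pefile.PEFormatError:
        print(f"Error: {file_path} is not a valid PE file.")
        return section_addresses
    for section in pe.sections:
        # Undecodable bytes become U+FFFD instead of raising UnicodeDecodeError.
        name = section.Name.decode("utf-8", errors="replace").strip("\x00")
        section_addresses[name] = section.VirtualAddress
    return section_addresses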
78,151,027 | 2024-3-13 | https://stackoverflow.com/questions/78151027/polars-customized-function-returns-multiple-columns | _func is designed to return two columns: from polars.type_aliases import IntoExpr, IntoExprColumn import polars as pl def _func(x: IntoExpr): x1 = x+1 x2 = x+2 return pl.struct([x1, x2]) df = pl.DataFrame({"test": np.arange(1, 11)}) df.with_columns( _func(pl.col("test")).alias(["test1", "test2"]) ) I have tried to wrap the return values using pl.struct but it didn't work. Expected output: shape: (10, 3) test test1 test2 i32 i32 i32 1 2 3 2 3 4 3 4 5 4 5 6 5 6 7 6 7 8 7 8 9 8 9 10 9 10 11 10 11 12 | I'm assuming that you cannot or don't want to change the function, so we need to work with sequence of expressions returned by this function. Also I want the answer to be able to accommodate for more than 2 columns, so I don't have to specifically alias each column. The problem in your case is at the end you have a sequence of Expr with the same name, so you need to rename them at some point before executing. Solution might depend on what kind of names do you plan to use for your columns. You can do something like this: def _func(x: IntoExpr): x1 = x+1 x2 = x+2 return x1, x2 df.with_columns( x.alias(f"test{i+1}") for i,x in enumerate(_func(pl.col("test"))) ) # alternatevely # df.with_columns( # x.name.suffix(f"{i+1}") for i,x in enumerate(_func(pl.col("test"))) #) ββββββββ¬ββββββββ¬ββββββββ β test β test1 β test2 β β --- β --- β --- β β i32 β i32 β i32 β ββββββββͺββββββββͺββββββββ‘ β 1 β 2 β 3 β β 2 β 3 β 4 β β 3 β 4 β 5 β β 4 β 5 β 6 β β 5 β 6 β 7 β β 6 β 7 β 8 β β 7 β 8 β 9 β β 8 β 9 β 10 β β 9 β 10 β 11 β β 10 β 11 β 12 β ββββββββ΄ββββββββ΄ββββββββ For custom names you can use zip: f.with_columns( x.alias(n) for x, n in zip(_func(pl.col('test')), ["a","b"]) ) ββββββββ¬ββββββ¬ββββββ β test β a β b β β --- β --- β --- β β i32 β i32 β i32 β ββββββββͺββββββͺββββββ‘ β 1 β 2 β 3 β β 2 β 3 β 4 β β 3 β 4 β 5 β β 4 β 5 β 6 β β 5 β 6 β 7 β β 6 β 7 β 8 β β 7 β 8 β 9 β β 8 β 9 β 10 β β 9 β 10 β 11 β β 10 β 11 β 12 β ββββββββ΄ββββββ΄ββββββ Or you can use the fact that with_columns() accepts **named_exprs as an argument,convert list of expressions into dict and unpack it: df.with_columns( **dict(zip(['a','b'], _func(pl.col('test')))) ) ββββββββ¬ββββββ¬ββββββ β test β a β b β β --- β --- β --- β β i32 β i32 β i32 β ββββββββͺββββββͺββββββ‘ β 1 β 2 β 3 β β 2 β 3 β 4 β β 3 β 4 β 5 β β 4 β 5 β 6 β β 5 β 6 β 7 β β 6 β 7 β 8 β β 7 β 8 β 9 β β 8 β 9 β 10 β β 9 β 10 β 11 β β 10 β 11 β 12 β ββββββββ΄ββββββ΄ββββββ | 3 | 2 |
78,135,862 | 2024-3-10 | https://stackoverflow.com/questions/78135862/convert-undelimited-bytes-to-pandas-dataframe | I am sorry if this is a duplicate, but I didn't find a suitable answer for this problem. If have a bytes object in python, like this: b'\n\x00\x00\x00\x01\x00\x00\x00TEST\xa2~\x08A\x83\x11\xe3@\x05\x00\x00\x00\x03\x00\x00\x00TEST\x91\x9b\xd1?\x1c\xaa,@' It contains first a certain number of integer (4bytes) then a string with 4 characters and then a certain number of floats (4bytes). This is repeated a certain number of times which each correspond to a new row of data. The format of each row is the same and known. In the example this 2 rows of 2 integers, 1 string and 2 floats. My question is, if there is a way to convert this kind of data to a pandas DataFrame directly. My current approach was to first read all values (e.g. with struct.Struct.unpack) and place them in a list of lists. This however seem rather slow, especially for a large number of rows. | This works fine for me: import numpy as np import pandas as pd data = b'\n\x00\x00\x00\x01\x00\x00\x00TEST\xa2~\x08A\x83\x11\xe3@\x05\x00\x00\x00\x03\x00\x00\x00TEST\x91\x9b\xd1?\x1c\xaa,@' dtype = np.dtype([ ('int1', np.int32), ('int2', np.int32), ('string', 'S4'), ('float1', np.float32), ('float2', np.float32), ]) structured_array = np.frombuffer(data, dtype=dtype) df = pd.DataFrame(structured_array) df['string'] = df['string'].str.decode('utf-8') print(df) And it gives me this following output: int1 int2 string float1 float2 0 10 1 TEST 8.530916 7.095888 1 5 3 TEST 1.637560 2.697883 | 2 | 2 |
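One failure mode worth guarding against with np.frombuffer is a byte string whose length is not a multiple of the record size; here is a short hedged sketch building on the accepted answer's dtype (the helper name parse_records is illustrative):

import numpy as np

dtype = np.dtype([
    ("int1", np.int32),
    ("int2", np.int32),
    ("string", "S4"),
    ("float1", np.float32),
    ("float2", np.float32),
])

def parse_records(data: bytes) -> np.ndarray:
    # Each packed record is dtype.itemsize bytes (4 + 4 + 4 + 4 + 4 = 20 here).
    if len(data) % dtype.itemsize != 0:
        raise ValueError(
            f"buffer length {len(data)} is not a multiple of record size {dtype.itemsize}"
        )
    return np.frombuffer(data, dtype=dtype)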
78,122,985 | 2024-3-7 | https://stackoverflow.com/questions/78122985/test-for-object-callability-in-match-case-construct | For context, I have a function that matches keys in a dictionary to perform certain action; to match item keys, the function accepts either a sequence of keys to match, or a function that recognizes those keys. I'm wondering if I can use the match-case pattern for it. I try something like: def process_fields(dataset,function,fields): """Apply function to selected values of a dictionary fields can be a list of keys whose values shall be processed, or a predicate that returns True for the targeted fields.""" match fields: case list() | set() | tuple(): key_matcher = lambda x:x in fields case <what I'm looking for>: key_matcher=fields walk_items(dataset,key_matcher,function) So far I've tried: case callable(function): key_matcher=function case typing.Callable(function): key_matcher=function I can't find what I need in the official documentation. I'm I missing something or it's not doable? Example is here to avoid too dry abstraction. Note that I'm NOT looking for alternatives to solve that particular problem, I can perfectly do it myself; I'm looking to find out if it exist a way to use that python structure in particular. Edit: Even though the function itself is not the focus of the post, I've added a docstring and simplified a bit to clarify the example. | If you want to test if an object is Callable then: from collections.abc import Callable def func(): pass f = func match f: case Callable(): print("Yes it's callable") | 5 | 3 |
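Folding the accepted answer back into the dispatch described in the question, a hedged sketch (the helper name make_key_matcher is illustrative and not from the original code):

from collections.abc import Callable

def make_key_matcher(fields):
    match fields:
        case list() | set() | tuple():
            # A concrete sequence of keys: match by membership.
            return lambda key: key in fields
        case Callable():
            # A predicate that recognizes the keys.
            return fields
        case _:
            raise TypeError("fields must be a sequence of keys or a predicate")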
78,111,501 | 2024-3-6 | https://stackoverflow.com/questions/78111501/generate-csv-export-downloadable-odoo | i have a problem to generate downloadable export to csv file, does anyone know the problem? this is my transient model class ReportPatientWizard(models.TransientModel): _name = "report.patient.wizard" _description = "Patient Reports" patient_id = fields.Many2one('hospital.patient', string='Patient', readonly=True) gender_filter = fields.Selection([('male', 'Male'), ('female', 'Female'), ('other', 'Other')], string='Gender Filter') def _prepare_csv_data(self, patients): field_names = ['Name', 'Age', 'Gender'] # Field names for CSV headers data = io.StringIO() writer = csv.DictWriter(data, fieldnames=field_names) writer.writeheader() # Write CSV header for patient in patients: row = { 'Name': patient.name, 'Age': patient.age, 'Gender': patient.gender, # Add other fields as needed } writer.writerow(row) return data.getvalue() @api.multi def download_patient_report(self): # Ambil data pasien berdasarkan ID yang dipilih patient = self.patient_id # Filter data berdasarkan gender jika gender_filter dipilih if self.gender_filter: patients = self.env['hospital.patient'].search([('gender', '=', self.gender_filter)]) else: patients = self.env['hospital.patient'].search([]) if not patients: raise UserError(_('No patients found matching the criteria.')) csv_data = self._prepare_csv_data(patients) print(f"Records: {patients}") # Prepare file name filename = f'patient_report_{fields.Date.today()}.csv' # Return a response to download the CSV file return { 'type': 'ir.actions.act_url', 'url': 'web/content/?model=report.patient.wizard&id={}&filename={}&field=file'.format(self.id, filename), 'target': 'current', } def print_patient_report(self): # Generate and download patient report as CSV return self.download_patient_report() and this is the view.xml <record id="report_patient_view" model="ir.ui.view"> <field name="name">Report Patient</field> <field name="model">report.patient.wizard</field> <field name="arch" type="xml"> <form string="Report Patient"> <!-- <field name="patient_id"/>--> <group col="1"> <!-- <field name="is_gender"/>--> <field name="gender_filter"/> </group> <!-- <field name="journal_ids" required="0" invisible="1"/>--> <footer> <button name="download_patient_report" string="Download" type="object" class="oe_highlight"/> <button string="Cancel" class="btn btn-default" special="cancel"/> </footer> </form> </field> </record> <record id="action_report_patient_view" model="ir.actions.act_window"> <field name="name">Report Patient</field> <field name="res_model">report.patient.wizard</field> <field name="type">ir.actions.act_window</field> <field name="view_mode">form</field> <field name="view_id" ref="report_patient_view"/> <field name="target">new</field> </record> <menuitem id="menu_report" name="Report Patients" parent="menu_hospital_operations" action="action_report_patient_view" sequence="15"/> in here i want to make a form filter that the user can select which gender they want to take the data, but after i clicked the download button it shows the problem this is the error what i was expecting is when i clicked the download button it will generate the download | You specified a field in the route and you don't have a field named file. 
You need to add a binary field and set its content just after the file name Example: # Prepare file name filename = f'patient_report_{fields.Date.today()}.csv' self.file = base64.b64encode(csv_data.encode('utf-8')) You can add self.ensure_one() at the beginning of the download_patient_report function to make sure that self holds a single record | 2 | 2 |
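Setting self.file only works if the wizard actually declares that field; the following is a hedged sketch of what the declaration and the adjusted action could look like (the field names, the download=true flag and the act_url target are illustrative assumptions, not taken verbatim from the original module):

import base64
from odoo import fields, models

class ReportPatientWizard(models.TransientModel):
    _name = "report.patient.wizard"
    _description = "Patient Reports"

    file = fields.Binary(string="Report File", readonly=True)
    file_name = fields.Char(string="File Name")

    def download_patient_report(self):
        self.ensure_one()
        # _get_patients is a placeholder for the gender-filter search in the question.
        csv_data = self._prepare_csv_data(self._get_patients())
        self.file_name = f"patient_report_{fields.Date.today()}.csv"
        self.file = base64.b64encode(csv_data.encode("utf-8"))
        return {
            "type": "ir.actions.act_url",
            "url": "/web/content/?model=report.patient.wizard&id={}&filename={}&field=file&download=true".format(
                self.id, self.file_name
            ),
            "target": "self",
        }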
78,129,981 | 2024-3-8 | https://stackoverflow.com/questions/78129981/logging-error-failed-to-initialize-logging-system-log-messages-may-be-missing | Logging Error: Failed to initialize logging system. Log messages may be missing. If this issue persists, try setting IDEPreferLogStreaming=YES in the active scheme actions environment variables. Has anyone else encountered this message? Where is IDEPreferLogStreaming located? I don't know what any of this means. It's building my app successfully but then loading it like it's a computer using floppy discs (crazy slow). Any ideas? I tried wiping my OS and reinstalling. I've reinstalled Xcode twice now. Nothing. A colleague of mine is working on the same SwiftUI project with no issues. | To find IDEPreferLogStreaming, you need to go to Product -> Scheme -> Edit Scheme and then add it as a new Environment Variable yourself. IDEPreferLogStreaming=YES For me it didn't solve the issue though --- [Edit: it works for me now as well. Probably I was too quick saying it doesn't. Thanks for your feedback.] | 83 | 122 |
78,144,551 | 2024-3-12 | https://stackoverflow.com/questions/78144551/current-timestamp-in-azure-databricks-notebook-in-est | I need the current timestamp in EST but the current_timestamp() is returning PST. Tried the following code but it's not working and showing 6 hours before EST time: # Import the current_timestamp function from pyspark.sql.functions import from_utc_timestamp # Set the timezone to EST spark.conf.set("spark.sql.session.timeZone", "EST") # Get the current timestamp in EST current_timestamp_est = spark.sql("SELECT from_utc_timestamp(current_timestamp(), 'EST') as current_timestamp_est") # Show the current timestamp in EST current_timestamp_est.show(truncate = False) Also tried the below code. But the time is coming as 2024-03-11 23:16:04.589275-04:00. Why it's coming with -4:00? Is there a way to get rid of the -4:00? from datetime import datetime from pytz import timezone est = timezone('US/Eastern') now_est = datetime.now(est) #now_est1 = now_est[:26] print(now_est) Any other way to get the current timestamp in EST in a variable? | To get rid of the -04:00 you can just use .strftime() in your second approach: from datetime import datetime from pytz import timezone est = timezone('US/Eastern') now_est = datetime.now(est).strftime('%Y-%m-%d %H:%M:%S') print(now_est) Output: 2024-04-04 09:42:58 Now, for the first approach instead of setting EST as the timezone - it's preferred to point out a location around the globe like America/New_York to get rid of the offset value -04:00 as follows: spark.conf.set("spark.sql.session.timeZone", "America/New_York") current_timestamp_est = spark.sql("SELECT current_timestamp() as current_timestamp_est") current_timestamp_est.show(truncate=False) Output: +--------------------------+ |current_timestamp_est | +--------------------------+ |2024-04-04 09:42:12.082136| +--------------------------+ | 3 | 0 |
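If the cluster's Python is 3.9 or newer (an assumption, since the question does not state the runtime), the standard-library zoneinfo module gives the same result as pytz without an extra dependency; a minimal sketch:

from datetime import datetime
from zoneinfo import ZoneInfo

now_est = datetime.now(ZoneInfo("America/New_York")).strftime("%Y-%m-%d %H:%M:%S")
print(now_est)  # e.g. 2024-04-04 09:42:58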
78,117,544 | 2024-3-6 | https://stackoverflow.com/questions/78117544/celery-worker-creates-control-folder-when-using-filesystem-as-a-broker | I have a project using Celery with Redis backend to manage tasks. For local development I am trying to set up Celery with filesystem as broker instead of Redis. However, when I run Celery worker, it creates me control folder in the root directory with the following contents: /control βββ celery.pidbox.exchange βββ Q1.exchange βββ Q2.exchange βββ ... I have trouble finding any resources on what it is and for what it is used exactly. My goal is to possibly move this folder to another location (ex. .celery/ folder), so that it does not sit in root directory. Here is my Celery configuration: class CeleryLocalConfig: include = ["app.tasks"] broker_url = "filesystem://" result_backend = "file://.celery/broker/results" broker_transport_options = { "data_folder_in": ".celery/broker/out", "data_folder_out": ".celery/broker/out", "queue_order_strategy": "sorted" } celery_app = Celery(__name__) celery_app.config_from_object(CeleryLocalConfig) So far I have tried: Running Celery worker from different directory than root, but then worker doesn't receive any tasks. Providing --workdir parameter when running Celery worker, but it seem to create different paths between Celery worker and Celery app and therefore fails. | Setting a control_folder field did the trick for me: CELERY_BROKER_TRANSPORT_OPTIONS = { "data_folder_in": os.path.join(BASE_DIR, ".celery"), "data_folder_out": os.path.join(BASE_DIR, ".celery"), "control_folder": os.path.join(BASE_DIR, ".celery"), } So for you, I suppose, it would be: broker_transport_options = { "data_folder_in": ".celery/broker/out", "data_folder_out": ".celery/broker/out", "queue_order_strategy": "sorted", "control_folder": ".celery/broker/out", } | 4 | 2 |
78,122,802 | 2024-3-7 | https://stackoverflow.com/questions/78122802/why-is-my-scipy-optimize-minimizemethod-newton-cg-function-stuck-on-a-local | I want to find the local minimum for a function that depends on 2 variables. For that my plan was to use the scipy.optimize.minimize function with the "newton-cg" method because I can calculate the jacobian and hessian analytically. However, when my starting guesses are on a local maximum, the function terminates successfully in the first iteration step on top of the local maximum, even though the hessian is negative. I've written a short test code where I was capable of reproducing the issue: import numpy as np import scipy.optimize as o def get_f_df(var): x = var[0] y = var[1] f = np.cos(x) + np.cos(y) df_dx = -np.sin(x) df_dy = -np.sin(y) return f, (df_dx, df_dy) def hess(var): x = var[0] y = var[1] f_hess = np.zeros((2,2)) f_hess[0,0] = -np.cos(x) f_hess[1,1] = -np.cos(y) return f_hess min = o.minimize(get_f_df, (0, 0), jac=True, hess=hess, method="newton-cg") print(min) The outcome then is: message: Optimization terminated successfully. success: True status: 0 fun: 2.0 x: [ 0.000e+00 0.000e+00] nit: 1 jac: [-0.000e+00 -0.000e+00] nfev: 1 njev: 1 nhev: 0 The same outcome occurs if I use hess=None, hess='cs', hess='2-point' or hess='3-point' instead of my custom hess function. Also if I use other methods like 'dogleg', 'trust-ncg', 'trust-krylov', 'trust-exact' or 'trust-constr' I basically get the same outcome except nhev = 1 but the outcome is still wrong with x = [0,0]. Either I am doing something terribly wrong here (definitely very likely) or there is a major problem with the minimize function, specifically the "newton-cg" method (very unlikely?). Regarding the latter case I also checked the source code to see if something's wrong there and stumpled upon something kinda weird(?). However, I don't completely understand the whole code, so I am a bit unsure if my worries are legitimate: Let's take a look at the source code When the minimize function is called with method="newton-cg" it jumps into the _minimize_newtoncg function (see source code here). I want to go into detail what I believe happens here: On line 2168 A = sf.hess(xk) the hessian is first calculated in dependence of xk which is at first the start guess x0. For my test case the hessian is of course A = [[fxx, fxy], [fxy, fyy]] with fij being the derivatives of f after i and j. In my case fxy = fyx is also true. Next on line 2183 Ap = A.dot(psupi) the product of the hessian A and psupi is calculated. psupi is basically equal to b which is the negative gradient of f at xk. So Ap = A.dot(psupi) results in Ap = [fxxfx + fxyfy, fxyfx + fyyfy]. Now to the (possible) problem Next, the curvature curv is calculated on line 2186 by np.dot(psupi, Ap). As explained above, psupiis the negative gradient of f so this results in curv = fxxfx2 + 2 fxyfxfy + fyyfy2. However, all of these derivatives are at xk which is equal to the starting parameters x0 at first. If the starting parameters are exactly at a local maximum, the derivatives fx and fy are equal to 0. Because of this, curv = 0. This results in a for loop break on the next line, thus, skipping to update xsupi, psupi and all the other parameters. Therefore, pk becomes [0,0] and _line_search_wolfe12 is called with basically all start parameters. This is where my understanding of the source code stops, however, I feel like things already went wrong after curv = 0 and breaking the for loop. 
Edit - Why do I need this: Since I got a bit feedback and the question arose as to why I don't just use another start guess, I want to give a short explanation what my actual goal is. Maybe it helps you help me. I want to simulate magnetic hysteresis loops using the macrospin model. What I need to do for that is to find the local minimum in the energy landscape for each external magnetic field step starting from saturation. There, the angle between the magnetic macrospin and the external magnetic field is 0Β°. In saturation they are in an energetic minimum. If I reduce the external magnetic field, I have to take the angle from the field step before that as a new starting guess. As the external magnetic field is reduced, the local minimum in the energy landscape in saturation transforms into a local maximum. At first, I ran the local minimization for the angle from the last field value and +- a small increment. My idea was that it should result in the same value, as long as it is not on a local maximum. Then I would take the value which found the lowest minimum. For some reason I do not understand yet, the increment value I chose had a huge impact on the outcome. My increment values were usually in the 0.0001-0.01 range with my angle being in pi values (-3.141 to 3.141). Therefore, I kinda scrapped that idea. Next, I guess I'll try only checking that if I am indeed on a local maximum and maybe rather consider the gradient and not the final energy value as deciding direction. If that works, I'll update this here. Update: If I sit on a local maximum or saddle point I now check the gradients at +- increment values and pick the position with the highest gradient as new start guess. This kinda works but the exact increment value again has more influence on the outcame than I'd like. I guess I'll have to try to find an ideal value which works for most conditions. (The derivative-free solvers suggested by Matt Haberland also did not really seem to help me. They kinda worked but also kinda didn't.) | The solution is stuck at a local maximum because you are using a gradient-based solver with a guess at a point where the gradient is exactly zero. The guess satisfies the condition for successful termination, so the solver stops. A simple solution is to perturb the guess: min = o.minimize(get_f_df, (1e-6, 1e-6), jac=True, hess=hess, method="newton-cg") print(min) # message: Optimization terminated successfully. # success: True # status: 0 # fun: -1.9999999999999998 # x: [ 3.142e+00 3.142e+00] # nit: 6 # jac: [-1.147e-08 -1.147e-08] # nfev: 36 # njev: 36 # nhev: 6 Of course, you could try a derivative-free solver. powell and cobyla don't use derivatives, and they both solve your problem with the same guess. But the better option is to change the guess. Either I am doing something terribly wrong here (definitely very likely) or there is a major problem with the minimize function, specifically the "newton-cg" method (very unlikely?). I believe you've already filed a bug report. Presumably you expected the method to check whether the Hessian is positive definite or not, and it may be surprising that it doesn't require that to report success. Over at the repo, it may be considered a bug or a shortcoming of the algorithm, but you will find that this sort of thing is not unusual for this sort of method, since it is rarely an impediment to solving a problem in practice. In any case, it isn't a problem with the way you've defined the problem - just the guess or choice of method. | 4 | 3 |
78,128,173 | 2024-3-8 | https://stackoverflow.com/questions/78128173/displaying-geojson-data-sent-by-flask-server-in-folium-map | I am currently working on a teaching tool for aviation and I need to display realtime data in a 1s interval in a GUI on a map. I decided to use PySide6 and folium for this. The realtime data is simple position data consisting of latitude and longitude. To my understanding it is not possible to insert changing values from the simulation right into the folium Realtime function, but to fetch it from an endpoint. The exampledata provided in the documentation works and displays data as expected in the map from this source: https://raw.githubusercontent.com/python-visualization/folium-example-data/main/subway_stations.geojson I set up a flask_server to send randomized data as a proof of concept, but the data doesn't show in the map. Down below the function, which contains the Realtime access def plot_flight_path(self): # Create a Folium map centered at the start of the path self.m = folium.Map(location=[self.start[0], self.start[1]], zoom_start=5) # Add the satellite imagery layer from Google Maps API folium.TileLayer('https://mt1.google.com/vt/lyrs=s&x={x}&y={y}&z={z}', attr='Google Satellite', name='Google Satellite', overlay=True).add_to(self.m) # Plot the path as a Polyline on the map folium.PolyLine(locations=self.route, color='blue', weight=5).add_to(self.m) rt = folium.plugins.Realtime( "http://127.0.0.1:5000/random_position.geojson", get_feature_id=JsCode("(f) => { return f.properties.objectid; }"), interval=10000, ) #rt = Realtime("http://127.0.0.1:5000/", interval=1000) rt.add_to(self.m) # Convert the Folium map to HTML plot_html = self.m.get_root().render() # Load the Folium map in the web view self.webview.setHtml(plot_html) And the flask_server I set up: from flask import Flask, jsonify import random app = Flask(__name__) @app.route('/random_position.geojson') def random_position(): # Generate random latitude and longitude latitude1 = random.uniform(-90, 90) longitude1 = random.uniform(-180, 180) latitude2 = random.uniform(-90, 90) longitude2 = random.uniform(-180, 180) # Create GeoJSON FeatureCollection with unordered features features = [ { "type": "Feature", "properties": { "name": "Astor Pl", "url": "http://web.mta.info/nyct/service/", "line": "4-6-6 Express", "objectid": "1", "notes": "4 nights, 6-all times, 6 Express-weekdays AM southbound, PM northbound" }, "geometry": { "type": "Point", "coordinates": [latitude1, longitude1] } }, { "type": "Feature", "properties": { "name": "Canal St", "url": "http://web.mta.info/nyct/service/", "line": "4-6-6 Express", "objectid": "2", "notes": "4 nights, 6-all times, 6 Express-weekdays AM southbound, PM northbound" }, "geometry": { "type": "Point", "coordinates": [latitude2, longitude2] } } ] # Create GeoJSON FeatureCollection feature_collection = { "type": "FeatureCollection", "features": features } print("Returning GeoJSON:", feature_collection) return jsonify(feature_collection), 200, {'Content-Type': 'application/json'} if __name__ == '__main__': app.run(debug=True, port=5000) I changed many parameters beginning in accessing the flask server via the JsCode function in folium and I changed different parameters in the flask_server file. Additionally i tried to send a .geojson file, not only the formatted geojson data, but this didn't work either. Thanks in advance! 
| I could resolve the issue by adding a header, which is needed by the fetch function: with open('random_position.geojson', 'w') as f: json.dump(feature_collection, f) response = make_response(send_file('random_position.geojson', mimetype='application/json')) response.headers.add('Access-Control-Allow-Origin', '*') return response | 2 | 2 |
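For completeness, the accepted fix needs json, make_response and send_file imported; below is a hedged sketch of the whole route with those imports in place (the single-feature payload and the port are illustrative, and note that GeoJSON expects coordinates in [longitude, latitude] order):

import json
import random

from flask import Flask, make_response, send_file

app = Flask(__name__)

@app.route("/random_position.geojson")
def random_position():
    feature = {
        "type": "Feature",
        "properties": {"objectid": "1"},
        "geometry": {
            "type": "Point",
            # GeoJSON coordinate order is [longitude, latitude].
            "coordinates": [random.uniform(-180, 180), random.uniform(-90, 90)],
        },
    }
    feature_collection = {"type": "FeatureCollection", "features": [feature]}
    with open("random_position.geojson", "w") as f:
        json.dump(feature_collection, f)
    response = make_response(send_file("random_position.geojson", mimetype="application/json"))
    # This header is what lets the Leaflet Realtime fetch() on another origin read the response.
    response.headers.add("Access-Control-Allow-Origin", "*")
    return response

if __name__ == "__main__":
    app.run(debug=True, port=5000)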
78,132,353 | 2024-3-9 | https://stackoverflow.com/questions/78132353/pytest-how-to-remove-created-data-after-each-test-function | I have a FastAPI + SQLAlchemy project and I'm using Pytest for writing unit tests for the APIs. In each test function, I create some data in some tables (user table, post table, comment table, etc) using SQLAlchemy. The data created in each test function remains in the tables after the test finishes and affects the other test functions. For example, in the first test function I create 3 posts and 2 users, then in the second test function these 3 posts and 2 users remain in the tables and make my test expectations wrong. Following is my fixture for pytest: @pytest.fixture def session(engine): Session = sessionmaker(bind=engine) session = Session() yield session session.rollback() # Removes data created in each test method session.close() # Close the session after each test I used session.rollback() to remove all created data during session, but it doesn't remove data. And the following are my test functions: class TestAllPosts(PostBaseTestCase): def create_logged_in_user(self, db): user = self.create_user(db) return user.generate_tokens()["access"] def test_can_api_return_all_posts_without_query_parameters(self, client, session): posts_count = 5 user_token = self.create_logged_in_user(session) for i in range(posts_count): self.create_post(session) response = client.get(url, headers={"Authorization": f"Bearer {user_token}"}) assert response.status_code == 200 json_response = response.json() assert len(json_response) == posts_count def test_can_api_detect_there_is_no_post(self, client, session): user_token = self.create_logged_in_user(session) response = client.get(url, headers={"Authorization": f"Bearer {user_token}"}) assert response.status_code == 404 In the second test function, instead of getting 404, I get 200 with 5 posts (from the first test function). How can I remove the data created in each test function after the test function finishes? | The problem is that there are multiple sessions. One is used by your tests. The other one(s) is/are used by the server. Because you are using client.get, you are sending a request to the server, which will use its own database session. To solve your problem you can just truncate all tables at the end of each test: https://stackoverflow.com/a/25220958/5521670 @pytest.fixture def session(engine): Session = sessionmaker(bind=engine) session = Session() yield session # Remove any data from database (even data not created by this session) with contextlib.closing(engine.connect()) as connection: transaction = connection.begin() connection.execute(f'TRUNCATE TABLE {",".join(table.name for table in reversed(Base.metadata.sorted_tables))} RESTART IDENTITY CASCADE;') transaction.commit() session.rollback() # Removes data created in each test method session.close() # Close the session after each test Another alternative would be to make the server use your test session (just like the FastAPI documentation suggests): https://fastapi.tiangolo.com/advanced/testing-database/ def override_get_db(): try: db = TestingSessionLocal() yield db finally: db.close() app.dependency_overrides[get_db] = override_get_db | 5 | 9 |
78,136,859 | 2024-3-10 | https://stackoverflow.com/questions/78136859/find-the-optimal-clipped-circle | Given a NxN integer lattice, I want to find the clipped circle which maximizes the sum of its interior lattice point values. Each lattice point (i,j) has a value V(i,j) and are stored in the following matrix V: [[ 1, 1, -3, 0, 0, 3, -1, 3, -3, 2], [-2, -1, 0, 1, 0, -2, 0, 0, 1, -3], [ 2, 2, -3, 2, -2, -1, 2, 2, -2, 0], [-2, 0, -3, 3, 0, 2, -1, 1, 3, 3], [-1, -2, -1, 2, 3, 3, -3, -3, 2, 0], [-3, 3, 2, 0, -3, -2, -1, -3, 0, -3], [ 3, 2, 2, -1, 0, -3, 1, 1, -2, 2], [-3, 1, 3, 3, 0, -3, -3, 2, -2, 1], [ 0, -3, 0, 3, 2, -2, 3, -2, 3, 3], [-1, 3, -3, -2, 0, -1, -2, -1, -1, 2]] The goal is to maximize the sum of values V(i,j) of the lattice points lying on the boundary and within interior of a (clipped) circle with radius R, with the assumptions and conditions: the circle has center at (0,0) the circle can have any positive radius (not necessarily an integer radius, i.e., rational). the circle may be clipped at two lattice points, resulting in a diagonal line as shown in the picture. This diagonal line has a slope of -45 degrees. Some additional details: The score for a clipped circle is the sum of all the integers that are both within the circle (or on the border) and on the side of the diagonal line including (0,0). The values on (or near) the border are -3, 1, 3, -1, -3, 3, -1, 2, 0, 3. Even though the circle can have any radius, we need only consider circles that intersect a grid point precisely so there are n^2 different relevant radiuses. Further, we need only record one position where the circle intersects with the diagonal line to fully specify the clipped circle. Note that this intersection with the diagonal does not need to be at an integer coordinate. If the optimal solution doesn't have the diagonal clipping the circle at all then we need only return the radius of the circle. What I have found so far: If we only wanted to find the optimal circle we could do that quickly in time proportional to the input size with: import numpy as np from math import sqrt np.random.seed(40) def find_max(A): n = A.shape[0] sum_dist = np.zeros(2 * n * n, dtype=np.int32) for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += A[i, j] cusum = np.cumsum(sum_dist) # returns optimal radius with its score return sqrt(np.argmax(cusum)), np.max(cusum) A = np.random.randint(-3, 4, (10, 10)) print(find_max(A)) How quickly can the optimal clipped circle be found? | Start by creating a cumulative frequency table, or a fenwick tree. You'll have a record for each radius of circle, with value corresponding to explored weights at that distance from the origin. Then, begin a BFS from the origin. For each diagonal "frontier", you'll need to update your table/tree with the radius:weight key-value pair (add weight to existing value). You'll also need to then query the table/tree for the current cumulative sum at each radius just added, noting the maximum and updating a global running maximum accordingly. Once your search terminates, you'll have the maximum sum for your clipped-circle. If you want to reconstruct the circle, just store the max radius and BFS depth along with the global max sum itself. This will give you your solution in O(N^2 log N) time, as there will be N^2 updates and queries, which are O(log N) each. 
The intuition behind this solution is that by exploring along this diagonal "frontier" outward, you implicitly clip all your circles you query since the weights above/right of it haven't been added yet. By calculating the max (at each search depth) for just the radii that were just updated, you also enforce the constraint that the circles intersect the clipping line at an integer coordinate. Update Here is python code showing this in action. It needs cleaned up, but at least it shows the process. I opted to use cumulative frequency / max arrays, instead of trees, since that'll probably lend itself to vectorization with numpy for OP. def solve(matrix): n = len(matrix) max_radius_sqr = 2 * (n - 1) ** 2 num_bins = max_radius_sqr.bit_length() + 1 frontier = [(0, 0)] csum_arr = [[0] * 2 ** i for i in range(num_bins)[::-1]] cmax_arr = [[0] * 2 ** i for i in range(num_bins)[::-1]] max_csum = -float("inf") max_csum_depth = None max_csum_radius_sqr = None depth = 0 while frontier: next_frontier = [] if depth + 1 < n: # BFS up next_frontier.append((0, depth + 1)) # explore frontier, updating csums and maximums per each for x, y in frontier: if x + 1 < n: # BFS right next_frontier.append((x + 1, y)) index = x ** 2 + y ** 2 # index is initially the radius squared for i in range(num_bins): csum_arr[i][index] += matrix[y][x] # update csums if i != 0: # skip first, since no children to take max of sum_left = csum_arr[i-1][index << 1] # left/right is tree notation of the array max_left = cmax_arr[i-1][index << 1] max_right = cmax_arr[i-1][index << 1 | 1] cmax_arr[i][index] = max(max_left, sum_left + max_right) # update csum maximums index >>= 1 # shift off last bit, update sums/maxs again, log2 times # after entire frontier is explored, query for overall max csum over all radii # update running global max and associated values if cmax_arr[-1][0] > max_csum: max_csum = cmax_arr[-1][0] max_csum_depth = depth index = 0 for i in range(num_bins-1)[::-1]: # reconstruct max radius (this could just as well be stored) sum_left = csum_arr[i][index << 1] max_left = cmax_arr[i][index << 1] max_right = cmax_arr[i][index << 1 | 1] index <<= 1 if sum_left + max_right > max_left: index |= 1 max_csum_radius_sqr = index depth += 1 frontier = next_frontier # total max sum, dx + dy of diagonal cut, radius ** 2 return max_csum, max_csum_depth, max_csum_radius_sqr Calling this with the given test case produces the expected output: matrix = [ [-1, 3, -3, -2, 0, -1, -2, -1, -1, 2], [ 0, -3, 0, 3, 2, -2, 3, -2, 3, 3], [-3, 1, 3, 3, 0, -3, -3, 2, -2, 1], [ 3, 2, 2, -1, 0, -3, 1, 1, -2, 2], [-3, 3, 2, 0, -3, -2, -1, -3, 0, -3], [-1, -2, -1, 2, 3, 3, -3, -3, 2, 0], [-2, 0, -3, 3, 0, 2, -1, 1, 3, 3], [ 2, 2, -3, 2, -2, -1, 2, 2, -2, 0], [-2, -1, 0, 1, 0, -2, 0, 0, 1, -3], [ 1, 1, -3, 0, 0, 3, -1, 3, -3, 2], ][::-1] print(solve(matrix)) # output: 13 9 54 In other words, it says the maximum total sum is 13, with a diagonal cut stagger (dx + dy) of 9, and radius squared of 54. If I have some time tonight or this weekend, I'll clean up the code a bit. | 11 | 6 |
78,149,556 | 2024-3-12 | https://stackoverflow.com/questions/78149556/unknown-document-type-error-while-using-llamaindex-with-azure-openai | I'm trying to reproduce the code from documentation: https://docs.llamaindex.ai/en/stable/examples/customization/llms/AzureOpenAI.html and receive the following error after index = VectorStoreIndex.from_documents(documents): raise ValueError(f"Unknown document type: {type(document)}") ValueError: Unknown document type: <class 'llama_index.legacy.schema.Document'> Due to the fact that all these generative ai libraries are being constantly updated, I have to switch the import of SimpleDirectoryReader and make it like from llama_index.legacy.readers.file.base import SimpleDirectoryReader All the rest is actually the same with tutorial (using llama_index==0.10.18 and python of version 3.9.16). I have spent already several hours on that and actually don't have ideas how should I proceed. So if somebody can assist with that - it would be super helpful :) Many thanks in advance. | The error occurs because of the type of document you are passing for VectorStoreIndex.from_documents(). When you import SimpleDirectoryReader from legacy modules, the type of document is llama_index.legacy.schema.Document. You are passing that to VectorStoreIndex, which is imported from core modules: from llama_index.core import VectorStoreIndex. The document you referred to is correct for core modules, and you can import SimpleDirectoryReader as from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, and everything will work fine. If you wish to use legacy modules, then use the code below. from llama_index.legacy.llms.azure_openai import AzureOpenAI from llama_index.legacy.embeddings.azure_openai import AzureOpenAIEmbedding from llama_index.legacy import SimpleDirectoryReader, VectorStoreIndex, ServiceContext import logging import sys logging.basicConfig( stream=sys.stdout, level=logging.INFO ) # logging.DEBUG for more verbose output logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) api_key = "3c9xxxyyyyzzzzzssssssdb9" azure_endpoint = "https://<resource_name>.openai.azure.com/" api_version = "2023-07-01-preview" llm = AzureOpenAI( model="gpt-4", deployment_name="gpt4", api_key=api_key, azure_endpoint=azure_endpoint, api_version=api_version, ) # You need to deploy your own embedding model as well as your own chat completion model embed_model = AzureOpenAIEmbedding( model="text-embedding-ada-002", deployment_name="embeding1", api_key=api_key, azure_endpoint=azure_endpoint, api_version=api_version, ) documents = SimpleDirectoryReader(input_files=["./data/s1.txt"]).load_data() type(documents[0]) service_context = ServiceContext.from_defaults( llm=llm, embed_model=embed_model ) index = VectorStoreIndex.from_documents(documents, service_context=service_context) Output: query = "What is the model name and who updated it last?" query_engine = index.as_query_engine() answer = query_engine.query(query) print("query was:", query) print("answer was:", answer) Here, when using legacy modules, all tools and models should be imported from the same legacy modules, and an additional service context is used for the vector store index. | 2 | 1 |
78,150,355 | 2024-3-12 | https://stackoverflow.com/questions/78150355/from-typing-vs-from-collections-abc-for-standard-primitive-type-annotation | I'm looking at the standard library documentation, and I see that from typing import Sequence is just calling collections.abc under the hood. Now originally, there was a deprecation warning/error and migration from the collections package to collections.abc for some abstract classes. See here. However, now that the abstractions have settled in a new location, is it fine to use either? I see from collections.abc import [etc] in the codebase, and I wonder if it would be more practical to just import from typing when trying to do type annotations? Cython source code: Sequence = _alias(collections.abc.Sequence, 1) | However, now that the abstractions have settled in a new location, is it fine to use either? It's better not to do so. The documentation specifically says: class typing.Sequence(Reversible[T_co], Collection[T_co]) Deprecated alias to collections.abc.Sequence. Deprecated since version 3.9: collections.abc.Sequence now supports subscripting ([]) If it's deprecated, it may be removed in the next versions of Python. So go with collections.abc generic classes. | 4 | 5 |
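Since Python 3.9 the collections.abc ABCs support subscripting directly, so annotations no longer need the typing aliases; a minimal sketch:

from collections.abc import Sequence

def total(values: Sequence[int]) -> int:
    # Accepts any sequence: list, tuple, range, ...
    return sum(values)

print(total([1, 2, 3]))   # 6
print(total(range(4)))    # 6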