question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,409,587 | 2025-2-3 | https://stackoverflow.com/questions/79409587/performance-impact-of-inheriting-from-many-classes | I am investigating the performance impact of a very broad inheritance setup. Start with 260 distinct attribute names, from a0 through z9. Create 260 classes with 1 uniquely-named attribute each. Create one class that inherits from those 260 classes. Create 130 classes with 2 uniquely-named attributes each. Create one class that inherits from those 130 classes. Repeat for 52 classes with 5 attributes each, 26 classes with 10 attributes each, and 1 class with all 260 attributes. Create one instance of each of the five classes, and then measure the time to read (and add together) all 260 attributes on each. Average performance from 2.5M reads, interleaving in different orders. From260: 2.48 From130: 1.55 From52: 1.22 From26: 1.15 AllInOne: 1.00 These values sort of fit on a linear regression...but they don't. And these relationships hold true across many different runs, of various sizes and test orders. The values come closer to fitting a second-degree polynomial, or exponential fit...but again, the data does not fit so cleanly as to be obvious. As I massively increase the number of subclasses, will the performance falloff be linear, or non-linear? Here's some updated code that samples many different subclass combinations up to 2310: from time import time TOTAL_ATTRS = 2310 # Create attribute names "a0000" through "a2309" attr_names = [f"a{i:04d}" for i in range(TOTAL_ATTRS)] # Map each attribute to a default value (1..2310) all_defaults = {name: i + 1 for i, name in enumerate(attr_names)} # The provided factors of 2310 factors = [1, 2, 3, 5, 6, 7, 10, 11, 14, 15, 21, 22, 30, 33, 35, 42, 55, 66, 70, 77, 105, 110, 154, 165, 210, 231, 330, 385, 462, 770, 1155, 2310] # Build a dictionary mapping each factor to a composite class. # For factor f, create f subclasses each with (2310 // f) attributes, # then create a composite class inheriting from all f subclasses. composite_classes = {} for f in factors: group_size = TOTAL_ATTRS // f subclasses = [] for i in range(f): group = attr_names[i * group_size:(i + 1) * group_size] group_defaults = {name: all_defaults[name] for name in group} subclass = type(f"Sub_{f}_{i}", (object,), group_defaults) subclasses.append(subclass) composite_classes[f] = type(f"From_{f}", tuple(subclasses), {}) iterations = range(0, 1_000) for n, c in composite_classes.items(): i = c() t = time() for _ in iterations: for a in attr_names: getattr(i, a) print(f"Inheriting from {n} subclasses: {time()-t:.3f}s") and the results, which seem far more linear than polynomial, but which have odd "ledges" in them: | The slowdown will be weird and hard to fully predict, depending on subtle details of memory allocation and attribute access order. Worst-case, not only will you experience a linear slowdown, you'll slow down completely unrelated attribute accesses in unrelated code. CPython has a 4096-entry type attribute cache, and instance attribute lookups do check this cache when looking for an attribute through the class. 
The cache entry used for an attribute lookup is determined using a simple hash based on the type's version tag and the address of the string object for the attribute being looked up: #define MCACHE_HASH(version, name_hash) \ (((unsigned int)(version) ^ (unsigned int)(name_hash)) \ & ((1 << MCACHE_SIZE_EXP) - 1)) #define MCACHE_HASH_METHOD(type, name) \ MCACHE_HASH(FT_ATOMIC_LOAD_UINT32_RELAXED((type)->tp_version_tag), \ ((Py_ssize_t)(name)) >> 3) If an attribute is found through the cache, the lookup is quick, and doesn't depend on the depth of the inheritance hierarchy. But if an attribute is not found through the cache, the lookup has to go through the class's MRO, one class at a time, performing a dict lookup in each class until it finds (or fails to find) the attribute. This takes an amount of time linear in the number of classes it has to look through. Note that because of the descriptor protocol, Python has to do this even if it finds an entry for the attribute directly in the instance dict. So the more attributes you use, the greater the chance you run into a hash collision, either with another one of your attributes or with something completely unrelated, like list.append. The longer your MRO, the greater the impact of a collision. If two attributes happen to produce a hash collision, then accessing one while the other is in the cache will need to perform a slow MRO search for the attribute. Then it'll evict the other attribute from the cache. If you access this attribute again before the other one, it'll be quick, but the next time you access the attribute that just got evicted, you'll go through another slow MRO search and eviction. Because the cache cares about the address of the attribute string instead of the actual attribute name, using a different string object for a lookup will also cause a cache miss. In normal Python code, attribute names get interned, so this isn't a problem, but when you're generating attribute names dynamically, the interning doesn't happen. | 2 | 3 |
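A small illustration of the answer's closing remark about interning (an added sketch, not part of the accepted answer; it reuses the question's `attr_names` naming and only adds `sys.intern`, which guarantees that any code rebuilding the same attribute name gets the identical string object, the thing the type attribute cache keys on according to the answer):

```python
import sys

TOTAL_ATTRS = 2310
# Same generated names as in the question, but explicitly interned; dynamically
# built strings are not interned automatically, which is what the answer warns about.
attr_names = [sys.intern(f"a{i:04d}") for i in range(TOTAL_ATTRS)]

# The rest of the benchmark (building the subclasses, the composite classes and the
# timing loop) would stay exactly as in the question; only the name objects change.
```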
79,407,233 | 2025-2-2 | https://stackoverflow.com/questions/79407233/how-do-i-store-decimal-values-coming-from-sql-in-csv-file-with-python | I have a simple chatbot that generates SQL queries and uses them on a database. I want to store the output in a .csv file and then download that file. This is usually possible when I take my output from SQL (a list of dicts), create a pd.Dataframe after evaluating that output, and finally download it as a csv file using st.download_button. However, when the result is a Decimal() format from SQL, pd.Dataframe fails and both eval and literal_eval would not work on it (I got invalid object error using literal_eval). I also tried to convert the data using Python's Decimal datatype. from decimal import Decimal But this recognized the output object as string and not a Decimal. So I did some research and found that writerows would work with Decimal type data, but I am still not able to download the file. out = db.run_no_throw(query, include_columns=True) #This is the output returned by the database. According to the [docs][1], this returns a string with the result. print(out, type(out)) # Prints [{A: Decimal(1,2)}] and str (literal_eval(out), literal_eval(str(out)) and literal_eval('"' + out + '"') all gave an invalid object error here) This is how I am currently trying to download the output data: with open(filename, mode='w+', newline='') as file_to_output: writer = csv.writer(file_to_output, delimiter=",") writer.writerows(out) downloaded = st.download_button( label="Download data as CSV", data=file_to_output, file_name="filename.csv" ) The above code creates a file locally with the expected data, but it prints 1 line per row in the .csv file, like so - [ { A : D e ... However, on the server, the file I download does not even consist of that data (It is a blank file). So the streamlit download button is not getting the data from the file, even though it says here that it should because it is one of str, bytes, or a file. What am I missing here? I would really appreciate any help. Thanks! EDIT - Running eval() on out variable gave the error "Could not recognize Decimal() object" | pandas .read_sql_query() method can be used to directly create a DataFrame from an SQL query. Then the DataFrame can be written to a CSV file using the .to_csv() method. import pandas as pd import sqlalchemy as sa engine = sa.create_engine("postgresql://scott:[email protected]/test") sql = """\ SELECT 'widget' AS item, CAST(2.99 AS Decimal(18, 4)) AS price UNION ALL SELECT 'gadget' AS item, CAST(9.99 AS Decimal(18, 4)) AS price """ df = pd.read_sql_query(sql, engine, coerce_float=False) print(df) """ item price 0 widget 2.9900 1 gadget 9.9900 """ print(repr(df.loc[0, "price"])) # Decimal('2.9900') df.to_csv("products.csv", header=True, index=False) with open("products.csv", "r") as csv: print(csv.read()) """ item,price widget,2.9900 gadget,9.9900 """ | 2 | 0 |
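The accepted answer above stops at writing products.csv to disk; a possible last step for the question's Streamlit download button follows (an added sketch, not from the original answer; it assumes the `df` built by the answer and uses `st.download_button`, which accepts a `str` or `bytes` payload directly, so no temporary file handle is needed):

```python
import streamlit as st

# to_csv() without a path returns the CSV as a string; Decimal values are written
# as plain decimal text such as 2.9900, so no eval()/literal_eval() step is needed.
csv_text = df.to_csv(index=False)

st.download_button(
    label="Download data as CSV",
    data=csv_text,            # a str (or bytes) can be passed directly
    file_name="products.csv",
    mime="text/csv",
)
```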
79,408,681 | 2025-2-3 | https://stackoverflow.com/questions/79408681/perform-a-rolling-operation-on-indices-without-using-with-row-index | I have a DataFrame like this: import polars as pl df = pl.DataFrame({"x": [1.2, 1.3, 3.4, 3.5]}) df # shape: (3, 1) # βββββββ # β a β # β --- β # β f64 β # βββββββ‘ # β 1.2 β # β 1.3 β # β 3.4 β # β 3.5 β # βββββββ I would like to make a rolling aggregation using .rolling() so that each row uses a window [-2:1]: shape: (4, 2) βββββββ¬ββββββββββββββββββββ β x β y β β --- β --- β β f64 β list[f64] β βββββββͺββββββββββββββββββββ‘ β 1.2 β [1.2, 1.3] β β 1.3 β [1.2, 1.3, 3.4] β β 3.4 β [1.2, 1.3, β¦ 3.5] β β 3.5 β [1.3, 3.4, 3.5] β βββββββ΄ββββββββββββββββββββ So far, I managed to do this with the following code: df.with_row_index("index").with_columns( y = pl.col("x").rolling(index_column = "index", period = "4i", offset = "-3i") ).drop("index") However this requires manually creating a column index and then removing it after the operation. Is there a way to achieve the same result in a single with_columns() call? | Pure expressions approach (apparently slow) You can use concat_list with shift ( df .with_columns( y=pl.concat_list( pl.col('x').shift(x) for x in range(2,-2,-1) ) .list.drop_nulls() ) ) shape: (4, 2) βββββββ¬ββββββββββββββββββββ β x β y β β --- β --- β β f64 β list[f64] β βββββββͺββββββββββββββββββββ‘ β 1.2 β [1.2, 1.3] β β 1.3 β [1.2, 1.3, 3.4] β β 3.4 β [1.2, 1.3, β¦ 3.5] β β 3.5 β [1.3, 3.4, 3.5] β βββββββ΄ββββββββββββββββββββ There are a couple things to note here. When the input to shift is positive, that means to go backwards which is the opposite of your notation. range can count backwards with (start, stop, increment) but stop is non-inclusive so when entering that parameter, it needs an extra -1. At the end of the concat_list you need to manually drop the nulls that it will have for items at the beginning and end of the series. As always, you can wrap this into a function, including a translation of your preferred notation to what you actually need in range for it to work. from typing import Sequence def my_roll(in_column: str | pl.Expr, window: Sequence): if isinstance(in_column, str): in_column = pl.col(in_column) pl_window = range(-window[0], -window[1] - 1, -1) return pl.concat_list(in_column.shift(x) for x in pl_window).list.drop_nulls() which then allows you to do df.with_columns(y=my_roll("x", [-2,1])) If you don't care about static typing you can even monkey patch it to pl.Expr like this pl.Expr.my_roll = my_roll and then do df.with_columns(y=pl.col("x").my_roll([-2,1])) but your pylance/pyright/mypy/etc will complain about it not existing. Another approach that's kind of cheating if you're an expression purist You can combine the built in way featuring .with_row_index and .rolling into a .map_batches that just turns your column into a df and spits back the series you care about. def my_roll(in_column: str | pl.Expr, window): if isinstance(in_column, str): in_column = pl.col(in_column) period = f"{window[1]-window[0]+1}i" offset = f"{window[0]-1}i" return in_column.map_batches( lambda s: ( s.to_frame() .with_row_index() .select( pl.col(s.name).rolling( index_column="index", period=period, offset=offset ) ) .get_column(s.name) ) ) The way this works is that map_batches will turn your column into a Series and then run a function on it where the function returns another Series. 
If we make the function turn that Series into a DF, then attach the row_index, do the rolling, and get the resultant Series then that gives you exactly what you want all contained in an expression. It should be just as performant as the verbose way, assuming you don't have any other use of the row_index. then you do df.with_columns(y=my_roll("x", [-2,1])) | 5 | 2 |
79,407,317 | 2025-2-2 | https://stackoverflow.com/questions/79407317/how-to-create-possible-sets-of-n-numbers-from-m-sized-prime-number-list | Input: a list of m prime numbers (with possible repetition), and integers n and t. Output: all sets of n numbers, where each set is formed by partitioning the input into n parts, and taking the product of the primes in each part. We reject any set containing a number greater than t or any duplicate numbers. Thanks, @Dave, for the now hopefully clear formulation! I have a list of m elements (prime numbers) and out of these by using all of them, I want to create all possible sets of n numbers which fulfill certain conditions: the maximum value of each set should be below a certain threshold value t the values in each set should be unique So, my thoughts: split the m-sized list into all possible n subsets calculate the product of each subset exclude duplicate sets, sets with duplicates and sets with maximum above the threshold Searching StackOverflow I found this: How to get all possible combinations of dividing a list of length n into m sublists Taking the partCombo from here and add some more lines, I ended up with this: Script: ### get set of numbers from prime number list from itertools import combinations def partCombo(L,N=4): # https://stackoverflow.com/a/66120649 if N==1: yield [L]; return for size in range(1,len(L)-N+2): for combo in combinations(range(len(L)),size): # index combinations part = list(L[i] for i in combo) # first part remaining = list(L) for i in reversed(combo): del remaining[i] # unused items yield from ([part]+rest for rest in partCombo(remaining,N-1)) def lst_product(lst): p = 1 for i in range(len(lst)): p *= lst[i] return p a = [2,2,2,2,3,3,5] n = 3 # number of subsets t = 50 # threshold for max. d = sorted([u for u in set(tuple(v) for v in [sorted(list(w)) for w in set(tuple(z) for z in [[lst_product(y) for y in x] for x in partCombo(a, N=n) ])]) if max(u)<t]) print(len(d)) print(d) ### end of script Result: (for a = [2,2,2,2,3,3,5], i.e. m=7; n=3, t=50) 26 [(2, 8, 45), (2, 9, 40), (2, 10, 36), (2, 12, 30), (2, 15, 24), (2, 18, 20), (3, 5, 48), (3, 6, 40), (3, 8, 30), (3, 10, 24), (3, 12, 20), (3, 15, 16), (4, 4, 45), (4, 5, 36), (4, 6, 30), (4, 9, 20), (4, 10, 18), (4, 12, 15), (5, 6, 24), (5, 8, 18), (5, 9, 16), (5, 12, 12), (6, 6, 20), (6, 8, 15), (6, 10, 12), (8, 9, 10)] This is fast and I need to also exclude the sets having duplicate numbers, e.g. (4,4,45), (5,12,12), (6,6,20) Here comes the problem: If I have larger lists, e.g. a = [2,2,2,2,2,2,2,3,5,5,7,13,19], i.e. m=len(a)=13 and splitting into more subsets n=6, the combinations can quickly get into the millions/billions, but there are millions of duplicates which I would like to exclude right from the beginning. Also the threshold t will exclude a lot of combinations. And maybe just a few hundreds sets will remain. If I start the script with the above values for (a, m=13, n=6), the script will take forever... Is there maybe a smarter way to reduce/exclude a lot of duplicate combinations and achieve this in a much faster way or some smart Python feature which I am not aware of? Clarification: Thanks for your comments, apparently, I couldn't make my problem clear enough. Let me try to clarify. For example, given the prime number set a = [2,2,2,2,3,3,5] (m=7), I want to make all possible sets of n=3 numbers out of it which satisfy some conditions. 
So, for example, 5 out of many (I guess 1806) possibilities to split a into 3 parts and the corresponding products of each subset: [2],[2],[2,2,3,3,5] --> (2,2,180): excl. because max=180 > t=50 and duplicate of 2 [2],[2,2,2],[3,3,5] --> (2,8,45): included [3],[2,2],[2,2,3,5] --> (3,4,60): excluded because max=60 >t=50 [2,2],[2,2],[3,3,5] --> (4,4,45): excluded because duplicates of 4 [2,3],[2,3],[2,2,5] --> (6,6,20): excluded because duplicates of 6 | This may not be the ultimate most efficient solution, but there are at least two aspects where gains can be made: permutations of the same primes are a wasted effort. It will help to count how many you have of each prime (which will be its maximum exponent) and pre-calculate the partitions you can have of the available exponent into t size tuples. When you have those for each prime, you can think of combining those. Verify the threshold at several stages of the algorithm, also when you have intermediate results. If we imagine involving one prime after the other, and determining the results first for a smaller collection of primes, and then involving the next prime, we can at each stage reject intermediate results that violate the threshold requirement. Here is suggested implementation: from collections import Counter from itertools import product, pairwise from math import log, prod def gen_partitions(n, num_partitions, min_size, max_size): if num_partitions <= 1: if num_partitions == 1 and min_size <= n <= max_size: yield (n, ) return for i in range(min_size, min(n, max_size) + 1): for result in gen_partitions(n - i, num_partitions - 1, min_size, max_size): yield (i, ) + result def solve(primes, tuple_size, threshold): # for each distinct prime, get its number of occurrences (which is the exponent for that prime), then # partition that integer into all possible tuple_size partitionings (in any order), and # transform each partitioning (with exponents) to the relevant prime raised to those exponents prime_factor_combis = [ [ tuple(prime ** exp for exp in exponents) for exponents in gen_partitions(count, tuple_size, 0, int(log(threshold, prime))) ] for prime, count in Counter(primes).items() ] # Now reduce the list of tuples per prime to a list of tuples having the products. # At each step of the reduction: # the next prime is taken into consideration for making the products, and # tuples are removed that have a member that exceeds the threshold # the remaining tuples are made non-decreasing. 
Also starting with only non-decreasing tuples results = [tup for tup in prime_factor_combis[0] if all(x <= y for x, y in pairwise(tup))] # Only keep the non-decreasing for prime_factors in prime_factor_combis[1:]: results = set( tup for p in product(results, prime_factors) for tup in [tuple(sorted(map(prod, zip(*p))))] # assign the accumulated tuple to variable tup if tup[-1] < threshold # remove exceeding values at each iteration of outer loop ) # Finally, remove tuples that contain 1 or that include a repetition of the same number return sorted((result for result in results if result[0] > 1 and all(x < y for x, y in pairwise(result)))) You would run it like this for the first example you gave: a = [2,2,2,2,3,3,5] n = 3 t = 50 res = solve(a, n, t) print("Number of results", len(res)) for tup in res: print(tup) This outputs: Number of results 23 (2, 8, 45) (2, 9, 40) (2, 10, 36) (2, 12, 30) (2, 15, 24) (2, 18, 20) (3, 5, 48) (3, 6, 40) (3, 8, 30) (3, 10, 24) (3, 12, 20) (3, 15, 16) (4, 5, 36) (4, 6, 30) (4, 9, 20) (4, 10, 18) (4, 12, 15) (5, 6, 24) (5, 8, 18) (5, 9, 16) (6, 8, 15) (6, 10, 12) (8, 9, 10) For the input that was said to be a challenge... a = [2,2,2,2,2,2,2,3,5,5,7,13,19] n = 6 t = 50 res = solve(a, n, t) print("Number of results", len(res)) for tup in res: print(tup) Output: Number of results 385 (2, 4, 35, 38, 39, 40) (2, 5, 26, 35, 38, 48) (2, 5, 26, 38, 40, 42) (2, 5, 28, 38, 39, 40) (2, 5, 32, 35, 38, 39) (2, 6, 26, 35, 38, 40) (2, 7, 20, 38, 39, 40) (2, 7, 25, 26, 38, 48) (2, 7, 25, 32, 38, 39) (2, 7, 26, 30, 38, 40) (2, 8, 19, 35, 39, 40) (2, 8, 20, 35, 38, 39) (2, 8, 25, 26, 38, 42) (2, 8, 25, 28, 38, 39) (2, 8, 26, 30, 35, 38) (2, 10, 13, 35, 38, 48) (2, 10, 13, 38, 40, 42) (2, 10, 14, 38, 39, 40) (2, 10, 16, 35, 38, 39) (2, 10, 19, 26, 35, 48) (2, 10, 19, 26, 40, 42) (2, 10, 19, 28, 39, 40) (2, 10, 19, 32, 35, 39) (2, 10, 20, 26, 38, 42) (2, 10, 20, 28, 38, 39) (2, 10, 21, 26, 38, 40) (2, 10, 24, 26, 35, 38) (2, 10, 26, 28, 30, 38) (2, 12, 13, 35, 38, 40) (2, 12, 19, 26, 35, 40) (2, 12, 20, 26, 35, 38) (2, 12, 25, 26, 28, 38) (2, 13, 14, 25, 38, 48) (2, 13, 14, 30, 38, 40) (2, 13, 15, 28, 38, 40) (2, 13, 15, 32, 35, 38) (2, 13, 16, 25, 38, 42) (2, 13, 16, 30, 35, 38) (2, 13, 19, 20, 35, 48) (2, 13, 19, 20, 40, 42) (2, 13, 19, 24, 35, 40) (2, 13, 19, 25, 28, 48) (2, 13, 19, 25, 32, 42) (2, 13, 19, 28, 30, 40) (2, 13, 19, 30, 32, 35) (2, 13, 20, 21, 38, 40) (2, 13, 20, 24, 35, 38) (2, 13, 20, 28, 30, 38) (2, 13, 21, 25, 32, 38) (2, 13, 24, 25, 28, 38) (2, 14, 15, 26, 38, 40) (2, 14, 16, 25, 38, 39) (2, 14, 19, 20, 39, 40) (2, 14, 19, 25, 26, 48) (2, 14, 19, 25, 32, 39) (2, 14, 19, 26, 30, 40) (2, 14, 20, 26, 30, 38) (2, 14, 24, 25, 26, 38) (2, 15, 16, 26, 35, 38) (2, 15, 19, 26, 28, 40) (2, 15, 19, 26, 32, 35) (2, 15, 20, 26, 28, 38) (2, 16, 19, 20, 35, 39) (2, 16, 19, 25, 26, 42) (2, 16, 19, 25, 28, 39) (2, 16, 19, 26, 30, 35) (2, 16, 21, 25, 26, 38) (2, 19, 20, 21, 26, 40) (2, 19, 20, 24, 26, 35) (2, 19, 20, 26, 28, 30) (2, 19, 21, 25, 26, 32) (2, 19, 24, 25, 26, 28) (3, 4, 26, 35, 38, 40) (3, 5, 26, 28, 38, 40) (3, 5, 26, 32, 35, 38) (3, 7, 20, 26, 38, 40) (3, 7, 25, 26, 32, 38) (3, 8, 13, 35, 38, 40) (3, 8, 19, 26, 35, 40) (3, 8, 20, 26, 35, 38) (3, 8, 25, 26, 28, 38) (3, 10, 13, 28, 38, 40) (3, 10, 13, 32, 35, 38) (3, 10, 14, 26, 38, 40) (3, 10, 16, 26, 35, 38) (3, 10, 19, 26, 28, 40) (3, 10, 19, 26, 32, 35) (3, 10, 20, 26, 28, 38) (3, 13, 14, 20, 38, 40) (3, 13, 14, 25, 32, 38) (3, 13, 16, 19, 35, 40) (3, 13, 16, 20, 35, 38) (3, 13, 16, 25, 28, 38) (3, 13, 19, 
20, 28, 40) (3, 13, 19, 20, 32, 35) (3, 13, 19, 25, 28, 32) (3, 14, 16, 25, 26, 38) (3, 14, 19, 20, 26, 40) (3, 14, 19, 25, 26, 32) (3, 16, 19, 20, 26, 35) (3, 16, 19, 25, 26, 28) (4, 5, 13, 35, 38, 48) (4, 5, 13, 38, 40, 42) (4, 5, 14, 38, 39, 40) (4, 5, 16, 35, 38, 39) (4, 5, 19, 26, 35, 48) (4, 5, 19, 26, 40, 42) (4, 5, 19, 28, 39, 40) (4, 5, 19, 32, 35, 39) (4, 5, 20, 26, 38, 42) (4, 5, 20, 28, 38, 39) (4, 5, 21, 26, 38, 40) (4, 5, 24, 26, 35, 38) (4, 5, 26, 28, 30, 38) (4, 6, 13, 35, 38, 40) (4, 6, 19, 26, 35, 40) (4, 6, 20, 26, 35, 38) (4, 6, 25, 26, 28, 38) (4, 7, 10, 38, 39, 40) (4, 7, 13, 25, 38, 48) (4, 7, 13, 30, 38, 40) (4, 7, 15, 26, 38, 40) (4, 7, 16, 25, 38, 39) (4, 7, 19, 20, 39, 40) (4, 7, 19, 25, 26, 48) (4, 7, 19, 25, 32, 39) (4, 7, 19, 26, 30, 40) (4, 7, 20, 26, 30, 38) (4, 7, 24, 25, 26, 38) (4, 8, 10, 35, 38, 39) (4, 8, 13, 25, 38, 42) (4, 8, 13, 30, 35, 38) (4, 8, 14, 25, 38, 39) (4, 8, 15, 26, 35, 38) (4, 8, 19, 20, 35, 39) (4, 8, 19, 25, 26, 42) (4, 8, 19, 25, 28, 39) (4, 8, 19, 26, 30, 35) (4, 8, 21, 25, 26, 38) (4, 10, 12, 26, 35, 38) (4, 10, 13, 19, 35, 48) (4, 10, 13, 19, 40, 42) (4, 10, 13, 20, 38, 42) (4, 10, 13, 21, 38, 40) (4, 10, 13, 24, 35, 38) (4, 10, 13, 28, 30, 38) (4, 10, 14, 19, 39, 40) (4, 10, 14, 20, 38, 39) (4, 10, 14, 26, 30, 38) (4, 10, 15, 26, 28, 38) (4, 10, 16, 19, 35, 39) (4, 10, 19, 20, 26, 42) (4, 10, 19, 20, 28, 39) (4, 10, 19, 21, 26, 40) (4, 10, 19, 24, 26, 35) (4, 10, 19, 26, 28, 30) (4, 10, 20, 21, 26, 38) (4, 12, 13, 19, 35, 40) (4, 12, 13, 20, 35, 38) (4, 12, 13, 25, 28, 38) (4, 12, 14, 25, 26, 38) (4, 12, 19, 20, 26, 35) (4, 12, 19, 25, 26, 28) (4, 13, 14, 15, 38, 40) (4, 13, 14, 19, 25, 48) (4, 13, 14, 19, 30, 40) (4, 13, 14, 20, 30, 38) (4, 13, 14, 24, 25, 38) (4, 13, 15, 16, 35, 38) (4, 13, 15, 19, 28, 40) (4, 13, 15, 19, 32, 35) (4, 13, 15, 20, 28, 38) (4, 13, 16, 19, 25, 42) (4, 13, 16, 19, 30, 35) (4, 13, 16, 21, 25, 38) (4, 13, 19, 20, 21, 40) (4, 13, 19, 20, 24, 35) (4, 13, 19, 20, 28, 30) (4, 13, 19, 21, 25, 32) (4, 13, 19, 24, 25, 28) (4, 14, 15, 19, 26, 40) (4, 14, 15, 20, 26, 38) (4, 14, 16, 19, 25, 39) (4, 14, 19, 20, 26, 30) (4, 14, 19, 24, 25, 26) (4, 15, 16, 19, 26, 35) (4, 15, 19, 20, 26, 28) (4, 16, 19, 21, 25, 26) (5, 6, 13, 28, 38, 40) (5, 6, 13, 32, 35, 38) (5, 6, 14, 26, 38, 40) (5, 6, 16, 26, 35, 38) (5, 6, 19, 26, 28, 40) (5, 6, 19, 26, 32, 35) (5, 6, 20, 26, 28, 38) (5, 7, 8, 38, 39, 40) (5, 7, 10, 26, 38, 48) (5, 7, 10, 32, 38, 39) (5, 7, 12, 26, 38, 40) (5, 7, 13, 19, 40, 48) (5, 7, 13, 20, 38, 48) (5, 7, 13, 24, 38, 40) (5, 7, 13, 30, 32, 38) (5, 7, 15, 26, 32, 38) (5, 7, 16, 19, 39, 40) (5, 7, 16, 20, 38, 39) (5, 7, 16, 26, 30, 38) (5, 7, 19, 20, 26, 48) (5, 7, 19, 20, 32, 39) (5, 7, 19, 24, 26, 40) (5, 7, 19, 26, 30, 32) (5, 7, 20, 24, 26, 38) (5, 8, 10, 26, 38, 42) (5, 8, 10, 28, 38, 39) (5, 8, 12, 26, 35, 38) (5, 8, 13, 19, 35, 48) (5, 8, 13, 19, 40, 42) (5, 8, 13, 20, 38, 42) (5, 8, 13, 21, 38, 40) (5, 8, 13, 24, 35, 38) (5, 8, 13, 28, 30, 38) (5, 8, 14, 19, 39, 40) (5, 8, 14, 20, 38, 39) (5, 8, 14, 26, 30, 38) (5, 8, 15, 26, 28, 38) (5, 8, 16, 19, 35, 39) (5, 8, 19, 20, 26, 42) (5, 8, 19, 20, 28, 39) (5, 8, 19, 21, 26, 40) (5, 8, 19, 24, 26, 35) (5, 8, 19, 26, 28, 30) (5, 8, 20, 21, 26, 38) (5, 10, 12, 26, 28, 38) (5, 10, 13, 14, 38, 48) (5, 10, 13, 16, 38, 42) (5, 10, 13, 19, 28, 48) (5, 10, 13, 19, 32, 42) (5, 10, 13, 21, 32, 38) (5, 10, 13, 24, 28, 38) (5, 10, 14, 16, 38, 39) (5, 10, 14, 19, 26, 48) (5, 10, 14, 19, 32, 39) (5, 10, 14, 24, 26, 38) (5, 10, 16, 19, 26, 42) (5, 10, 16, 19, 28, 39) 
(5, 10, 16, 21, 26, 38) (5, 10, 19, 21, 26, 32) (5, 10, 19, 24, 26, 28) (5, 12, 13, 14, 38, 40) (5, 12, 13, 16, 35, 38) (5, 12, 13, 19, 28, 40) (5, 12, 13, 19, 32, 35) (5, 12, 13, 20, 28, 38) (5, 12, 14, 19, 26, 40) (5, 12, 14, 20, 26, 38) (5, 12, 16, 19, 26, 35) (5, 12, 19, 20, 26, 28) (5, 13, 14, 15, 32, 38) (5, 13, 14, 16, 30, 38) (5, 13, 14, 19, 20, 48) (5, 13, 14, 19, 24, 40) (5, 13, 14, 19, 30, 32) (5, 13, 14, 20, 24, 38) (5, 13, 15, 16, 28, 38) (5, 13, 15, 19, 28, 32) (5, 13, 16, 19, 20, 42) (5, 13, 16, 19, 21, 40) (5, 13, 16, 19, 24, 35) (5, 13, 16, 19, 28, 30) (5, 13, 16, 20, 21, 38) (5, 13, 19, 20, 21, 32) (5, 13, 19, 20, 24, 28) (5, 14, 15, 16, 26, 38) (5, 14, 15, 19, 26, 32) (5, 14, 16, 19, 20, 39) (5, 14, 16, 19, 26, 30) (5, 14, 19, 20, 24, 26) (5, 15, 16, 19, 26, 28) (5, 16, 19, 20, 21, 26) (6, 7, 10, 26, 38, 40) (6, 7, 13, 20, 38, 40) (6, 7, 13, 25, 32, 38) (6, 7, 16, 25, 26, 38) (6, 7, 19, 20, 26, 40) (6, 7, 19, 25, 26, 32) (6, 8, 10, 26, 35, 38) (6, 8, 13, 19, 35, 40) (6, 8, 13, 20, 35, 38) (6, 8, 13, 25, 28, 38) (6, 8, 14, 25, 26, 38) (6, 8, 19, 20, 26, 35) (6, 8, 19, 25, 26, 28) (6, 10, 13, 14, 38, 40) (6, 10, 13, 16, 35, 38) (6, 10, 13, 19, 28, 40) (6, 10, 13, 19, 32, 35) (6, 10, 13, 20, 28, 38) (6, 10, 14, 19, 26, 40) (6, 10, 14, 20, 26, 38) (6, 10, 16, 19, 26, 35) (6, 10, 19, 20, 26, 28) (6, 13, 14, 16, 25, 38) (6, 13, 14, 19, 20, 40) (6, 13, 14, 19, 25, 32) (6, 13, 16, 19, 20, 35) (6, 13, 16, 19, 25, 28) (6, 14, 16, 19, 25, 26) (7, 8, 10, 19, 39, 40) (7, 8, 10, 20, 38, 39) (7, 8, 10, 26, 30, 38) (7, 8, 12, 25, 26, 38) (7, 8, 13, 15, 38, 40) (7, 8, 13, 19, 25, 48) (7, 8, 13, 19, 30, 40) (7, 8, 13, 20, 30, 38) (7, 8, 13, 24, 25, 38) (7, 8, 15, 19, 26, 40) (7, 8, 15, 20, 26, 38) (7, 8, 16, 19, 25, 39) (7, 8, 19, 20, 26, 30) (7, 8, 19, 24, 25, 26) (7, 10, 12, 13, 38, 40) (7, 10, 12, 19, 26, 40) (7, 10, 12, 20, 26, 38) (7, 10, 13, 15, 32, 38) (7, 10, 13, 16, 30, 38) (7, 10, 13, 19, 20, 48) (7, 10, 13, 19, 24, 40) (7, 10, 13, 19, 30, 32) (7, 10, 13, 20, 24, 38) (7, 10, 15, 16, 26, 38) (7, 10, 15, 19, 26, 32) (7, 10, 16, 19, 20, 39) (7, 10, 16, 19, 26, 30) (7, 10, 19, 20, 24, 26) (7, 12, 13, 16, 25, 38) (7, 12, 13, 19, 20, 40) (7, 12, 13, 19, 25, 32) (7, 12, 16, 19, 25, 26) (7, 13, 15, 16, 19, 40) (7, 13, 15, 16, 20, 38) (7, 13, 15, 19, 20, 32) (7, 13, 16, 19, 20, 30) (7, 13, 16, 19, 24, 25) (7, 15, 16, 19, 20, 26) (8, 10, 12, 13, 35, 38) (8, 10, 12, 19, 26, 35) (8, 10, 13, 14, 30, 38) (8, 10, 13, 15, 28, 38) (8, 10, 13, 19, 20, 42) (8, 10, 13, 19, 21, 40) (8, 10, 13, 19, 24, 35) (8, 10, 13, 19, 28, 30) (8, 10, 13, 20, 21, 38) (8, 10, 14, 15, 26, 38) (8, 10, 14, 19, 20, 39) (8, 10, 14, 19, 26, 30) (8, 10, 15, 19, 26, 28) (8, 10, 19, 20, 21, 26) (8, 12, 13, 14, 25, 38) (8, 12, 13, 19, 20, 35) (8, 12, 13, 19, 25, 28) (8, 12, 14, 19, 25, 26) (8, 13, 14, 15, 19, 40) (8, 13, 14, 15, 20, 38) (8, 13, 14, 19, 20, 30) (8, 13, 14, 19, 24, 25) (8, 13, 15, 16, 19, 35) (8, 13, 15, 19, 20, 28) (8, 13, 16, 19, 21, 25) (8, 14, 15, 19, 20, 26) (10, 12, 13, 14, 19, 40) (10, 12, 13, 14, 20, 38) (10, 12, 13, 16, 19, 35) (10, 12, 13, 19, 20, 28) (10, 12, 14, 19, 20, 26) (10, 13, 14, 15, 16, 38) (10, 13, 14, 15, 19, 32) (10, 13, 14, 16, 19, 30) (10, 13, 14, 19, 20, 24) (10, 13, 15, 16, 19, 28) (10, 13, 16, 19, 20, 21) (10, 14, 15, 16, 19, 26) (12, 13, 14, 16, 19, 25) (13, 14, 15, 16, 19, 20) This runs quite fast. | 2 | 2 |
79,406,476 | 2025-2-2 | https://stackoverflow.com/questions/79406476/why-doesnt-pagination-work-in-this-case-using-selenium | Most websites display data across multiple pages. This is done to improve user experience and reduce loading times. But when I wanted to automate the data extraction process using Selenium, I noticed that my script only retrieves information from page one and then stops. What am I doing wrong? from selenium.webdriver import Chrome from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By import pandas as pd import undetected_chromedriver as uc url = "https://www.zoopla.co.uk/house-prices/england/?new_homes=include&q=england+&orig_q=united+kingdom&view_type=list&pn=1" # Handle elements that may not be currently visible def etext(e): """Extracts text from an element, handling visibility issues.""" if e: if t := e.text.strip(): return t if (p := e.get_property("textContent")) and isinstance(p, str): return p.strip() return "" driver = uc.Chrome() # Initialize result list to store data result = [] with Chrome() as driver: driver.get(url) wait = WebDriverWait(driver, 10) while True: # Wait for the main content to load sel = By.CSS_SELECTOR, "div[data-testid=result-item]" houses = wait.until(EC.presence_of_all_elements_located(sel)) # Extract and store data from the current page for house in houses: try: item = { "address": etext(house.find_element(By.CSS_SELECTOR, "h2")), "DateLast_sold": etext(house.find_element(By.CSS_SELECTOR, "._1hzil3o9._1hzil3o8._194zg6t7")) } result.append(item) except Exception as e: print(f"Error extracting address or date: {e}") # Check for "Next" button and move to the next page try: next_button = driver.find_element(By.CSS_SELECTOR, '#main-content div._12n2exy2 nav div._14xj7k72') next_button.click() wait.until(EC.staleness_of(houses[0])) # Wait for the new page to load except Exception as e: print("No more pages to scrape or error:", e) break # Stop if no more pages # Convert results to a DataFrame and display df = pd.DataFrame(result) print(df) | Different websites often require bespoke strategies in order to scrape them with any level of success. This site is protected by Cloudflare. When Cloudflare detects too many automated invocations it will intervene and present a page that requires you to prove that you're not a robot. In this case, the number of pages that you can scrape before this happens is variable although it seems to be anywhere between 20 and 30 pages which is unfortunate because there are ~40 pages available. The code below will handle the cookie prompt (if it appears) and then will try to get as many addresses as possible. You should be able to adapt this to your specific needs. 
from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from undetected_chromedriver import Chrome from selenium.webdriver.remote.webelement import WebElement from selenium.webdriver.remote.webdriver import WebDriver from selenium.common.exceptions import TimeoutException from selenium.webdriver.common.action_chains import ActionChains from typing import cast from collections.abc import Iterator URL = "https://www.zoopla.co.uk/house-prices/england/?new_homes=include&q=england+&orig_q=united+kingdom&view_type=list&pn=1" TIMEOUT = 5 # get text from webelement that may not be visible def etext(e: WebElement) -> str: if e: if t := e.text.strip(): return t if (p := e.get_property("textContent")) and isinstance(p, str): return p.strip() return "" # click the WebElement def click(driver: WebDriver, e: WebElement) -> None: ActionChains(driver).click(e).perform() # get all WebElements that match the given css def get_all(driver: WebDriver, css: str) -> Iterator[WebElement]: wait = WebDriverWait(driver, TIMEOUT) ec = EC.presence_of_all_elements_located sel = By.CSS_SELECTOR, css try: yield from wait.until(ec(sel)) except TimeoutException: pass # look for the Next button and click it def click_next(driver: WebDriver) -> None: for a in get_all(driver, "a[aria-live=polite] > div > div:nth-child(2)"): if etext(a) == "Next": click(driver, a) break # look for the shadow root def get_shadow_root(driver: WebDriver) -> WebDriver: wait = WebDriverWait(driver, TIMEOUT) ec = EC.presence_of_element_located sel = By.ID, "usercentrics-root" sre = wait.until(ec(sel)) return cast(WebDriver, sre.shadow_root) # you may be required to accept or decline cookies # ignore any exceptions that may arise def click_through(driver: WebDriver) -> None: try: wait = WebDriverWait(get_shadow_root(driver), TIMEOUT) ec = EC.element_to_be_clickable sel = By.CSS_SELECTOR, "button[data-testid=uc-deny-all-button]" button = wait.until(ec(sel)) click(driver, button) except Exception: pass if __name__ == "__main__": with Chrome() as driver: driver.get(URL) click_through(driver) prev_url = "" npages = 0 # if and when Cloudflare intervenes, the current URL does not change while prev_url != driver.current_url: prev_url = driver.current_url for h2 in get_all(driver, "div[data-testid=result-item] h2"): print(etext(h2)) click_next(driver) npages += 1 print(f"Processed {npages=}") | 1 | 2 |
79,408,524 | 2025-2-3 | https://stackoverflow.com/questions/79408524/writing-back-to-a-panda-groupby-group | Good morning all I am trying to process a lot of data, and I need to group data, look at the group, then set a value based on the other entries in the group, but I want to set the value in a column in the full dataset. What I can't figure out is how I can use the group to write back to the main dataframe. So as an example, I created this data frame import pandas as pd data = [{ "class": "cat", "name": "Fluffy", "age": 3, "child": "Whiskers", "parents_in_group": "" }, { "class": "dog", "name": "Spot", "age": 5 }, { "class": "cat", "name": "Whiskers", "age": 7 }, { "class": "dog", "name": "Rover", "age": 2, "child": "Spot" }] df = pd.DataFrame(data) df So as an example, lets say that I want to set the parrents_in_group to a list of all the parrents in the group, easy to do for name, group in group_by_class: mask = group["child"].notna() print("This is the parrent in group") print(group[mask]) parent_name = group[mask]["name"].values[0] print(f"This is the parent name: {parent_name}") group["parents_in_group"] = parent_name print("And now we have the name set in group") print(group) That updates the group, but not the actual data frame. So how would I go about writing this information back to the main data frame Using the name and search This works, but seems a bit untidy for name, group in group_by_class: mask = group["child"].notna() parent_name = group[mask]["name"].values[0] df.loc[df['class'] == name, 'parents_in_group'] = parent_name df Using group How would I go about using group to set the values, rather than searching for the name that the group was created by. Or are there better ways to going about it. The real challenge I'm having is that I need to get the group, find some specific values in the group, then set some fields based on the data found. Any help of course welcome. | A loop-less approach would be to compute a groupby.first after dropna, then to map the output: df['parents_in_group'] = df['class'].map( df.dropna(subset='child').groupby('class')['name'].first() ) # variant df['parents_in_group'] = df['class'].map( df['name'].where(df['child'].notna()).groupby(df['class']).first() ) Or, with drop_duplicates in place of groupby (for efficiency): df['parents_in_group'] = df['class'].map( df.dropna(subset='child') .drop_duplicates(subset='class') .set_index('class')['name'] ) Output: class name age child parents_in_group 0 cat Fluffy 3 Whiskers Fluffy 1 dog Spot 5 NaN Rover 2 cat Whiskers 7 NaN Fluffy 3 dog Rover 2 Spot Rover Or, if efficiency doesn't really matter with a groupy.apply: out = (df.groupby('class', sort=False, group_keys=False) .apply(lambda x: x.assign(parents_in_group=x.loc[x['child'].notna(), 'name'] .iloc[:1].squeeze()), include_groups=False) ) | 1 | 1 |
79,405,014 | 2025-2-1 | https://stackoverflow.com/questions/79405014/how-put-an-algorithm-drafted-on-paper-into-a-working-c-code | I am trying to improve attack on PKZIP by minimizing workload and requirements, So PKZIP uses this: key0[i] = key0[i-1]>>8 ^ crctab[key0[i-1]&0xFF ^ plaintext[i]]; key1[i] = (key1[i-1] + (key0[i]&0xFF))* Const + 1; key2[i] = key2[i-1]>>8 ^ crctab[key2[i-1]&0xFF ^ (key1[i]>>24)]; to update its keystream or internal state and this: tmp = (key2[i] |3)&0xFFFF; // notice: makes the last 16 bits of key2 odd key3[i] = ((tmp*(tmp ^ 1))>>8)&0xFF;//notice: this is ((x**2)-x)//256 mod 256 or (x*(x + or - 1))//256 mod 256 ciphertext[i] = plaintext[i] ^ key3[i]; to derive the byte key3 and encrypt plaintext into ciphertext but programs like pkcrak uses key3[i] and key3[i-1] to derive 2**22 possible key2 values for position i, which is too much, so i discovered this: Given 3 consecutive plaintext values and ciphertext we can derive 3 consecutive key3 values and each key3 value has 256 possible last 16 bit values, which is too much unless the connection between a 16 bit value for key3[i] and key3[i-1] is found. Hence: import math n= int(65536) x = 0xc90b #this is key2[i-1] |3 y = 0x2f17# this is key2[i] | 3 a = ((x*x) - x) % n # we havent divided this by 256 for simplicity b = ((y*y) - y) % n # so the result of a ^ b is 16 bit or simply (e<<8) e = (a ^ b)&0xFF00 #EQUATION ''' square root of (((x**2) -x) %n xor (256*e)) == square root of (((y**2) -y) %n) meaning given key3[i] and key3[i-1] we can get e by key3[i-1] xor key3[i] And since they are only 64 odd last 16 bit values of key2 for each position the above equation will get a corresponding odd value for y ''' i = round(math.sqrt((((x*x) - x) %n ^ e))) j = round(math.sqrt(((y*y) - y) %n)) print(hex(a)) print(hex(b)) print(hex(e)) print(hex(i)) print(hex(j)) So the above code works perfectly and proves my theory but i need to put it into practical and in C so i did this: #include <time.h> #include <stdlib.h> #include <assert.h> #include <ctype.h> #include <math.h> #include <stdbool.h> #include <stdint.h> #include <stdio.h> #define CRCPOLY 0xedb88320 #define CONSTx 0x08088405U /* 134775813 */ #define MSB(x) (((x)>>24)&0xFF) #define MSBMASK 0xFF000000U #define CRC32(x,c) (((x)>>8)^crctab[((x)^(c))&0xff]) uint32_t crctab[256]; const uint8_t plain[8]= { 0x64, 0x14, 0x9A, 0xC7, 0xB2, 0x96, 0xC0, 0x15 }; uint8_t cipher[8]= { }; uint8_t key3[8]= { }; uint32_t k0 = 0x12345678,t0; uint32_t k1 = 0x23456789,t1; uint32_t k2 = 0x34567890,t2; static void mkCrcTab( ){ unsigned int i, j, c; for( i = 0; i < 256; i++ ) { c = i; for( j = 0; j < 8; j++ ) if( c&1 ) c = (c>>1) ^ CRCPOLY; else c = (c>>1); crctab[i] = c; } } int main(){ mkCrcTab( ); uint8_t pw[10] = { 0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x00 }; uint32_t ky0 = k0; // FNV offset basis for 32-bit uint32_t ky1 = k1; // FNV prime for 32-bit uint32_t ky2 = k2; uint16_t tmp; uint8_t bt; for (int i = 0; i < 10; i++) { ky0 = CRC32( ky0, pw[i] ); ky1 = (k1 + (ky0&0xff))*CONSTx + 1; ky2 = CRC32( ky2, MSB(ky1) ); printf("%08X %08X %08X\n",ky0,ky1,ky2); } t0 =ky0; t1 = ky1; t2 = ky2; printf("===================================\n"); for (int i = 0; i < 8; i++) { ky0 = CRC32( ky0, plain[i] ); ky1 = (k1 + (ky0&0xff))*CONSTx + 1; ky2 = CRC32( ky2, MSB(ky1) ); printf("%08X %08X %08X\n",ky0,ky1,ky2); tmp = ky2 | 3; bt = ((tmp*(tmp ^ 1))>>8)&0xFF; cipher[i] = plain[i] ^ bt; key3[i] = bt; } printf("===================================\n"); printf("plain : "); for(int i =0; i <8; i++){ 
printf("%02X ",plain[i]); } printf("\n"); printf("cipher : "); for(int i =0; i <8; i++){ printf("%02X ",cipher[i]); } printf("\n"); printf("key3 : "); for(int i =0; i <8; i++){ printf("%02X ",key3[i]); } printf("\n"); printf("===================================\n"); return 0; } And encrypted an 8 byte plaintext into 8 byte ciphertext using keys derived from a password 1234567890 to test my theory further, But when i used the code below to see if the key2 value i found using the above code for position 4 is inside the list of the ones generated by the algorithm, I don't get it the right value and now i am confused whether it's my coding or the algorithm itself is flawed or misinterpreted. Here is the code for key2 generation: #include <time.h> #include <stdlib.h> #include <assert.h> #include <ctype.h> #include <math.h> #include <stdbool.h> #include <stdint.h> #include <stdio.h> #define CRCPOLY 0xedb88320 #define CONSTx 0x08088405U /* 134775813 */ #define KEY2SPACE (1<<12) #define KEY3(i) (plain[(i)]^cipher[(i)]) #define MSB(x) (((x)>>24)&0xFF) #define MAXDELTA (0x00FFFFFFU+0xFFU) #define MSBMASK 0xFF000000U #define CRC32(x,c) (((x)>>8)^crctab[((x)^(c))&0xff]) uint32_t crctab[256],crcbymsb[256],crcbylsb[256]; uint32_t *key2i; uint16_t tmptaba[256],tmptabb[256],tmptabc[256],lkpc[65536],lkpa[65536]; uint8_t idxbylsb[256], idxbymsb[256]; int numKey2s = 0; const uint8_t plain[8]= { 0x64, 0x14, 0x9A, 0xC7, 0xB2, 0x96, 0xC0, 0x15 }; const uint8_t cipher[8]= { 0xEF, 0x20, 0xDC, 0x4C, 0x11, 0x9A, 0x78, 0xEB }; static void mkCrcTab( ){ unsigned int i, j, c; for( i = 0; i < 256; i++ ) { c = i; for( j = 0; j < 8; j++ ) if( c&1 ) c = (c>>1) ^ CRCPOLY; else c = (c>>1); crctab[i] = c; crcbymsb[c>>24] = c; crcbylsb[c&0xFF] = c; idxbymsb[c>>24] = i; idxbylsb[c&0xFF] = i; } } void generate( int n ){ int i,j, d; uint8_t e, ea; uint32_t cr[4],cr1[4]; uint16_t ls16a,ls16b,ls16c,eq,eq1; printf("Generating possible key2_%d values...", n ); ea = KEY3(n-1) ^ KEY3(n); e = KEY3(n-2) ^ KEY3(n-1); for( i = 3; i < 256; i+=4 ){ ls16b = tmptabb[i]; eq = (int) sqrt( (double) ((256*ea) ^ (ls16b*(ls16b ^ 1))&0xFFFF)); ls16c = lkpc[eq]; cr[0] = crcbylsb[(ls16b>>8) ^ (ls16c&0xFF)]; cr[1] = crcbylsb[(ls16b>>8) ^ ((ls16c-1)&0xFF)]; cr[2] = crcbylsb[(ls16b>>8) ^ ((ls16c-2)&0xFF)]; cr[3] = crcbylsb[(ls16b>>8) ^ ((ls16c-3)&0xFF)]; eq1 = (int) sqrt( (double) ((256*e) ^ (ls16b*(ls16b ^ 1))&0xFFFF)); ls16a = lkpa[eq1]; cr1[0] = crcbylsb[(ls16a>>8) ^ (ls16b&0xFF)]; cr1[1] = crcbylsb[(ls16a>>8) ^ ((ls16b-1)&0xFF)]; cr1[2] = crcbylsb[(ls16a>>8) ^ ((ls16b-2)&0xFF)]; cr1[3] = crcbylsb[(ls16a>>8) ^ ((ls16b-3)&0xFF)]; for(j=0; j <4; j++){ for(d=0; d < 4; d++){ key2i[numKey2s++] = (((cr1[j]>>8) ^ cr[d])&0xFFFF0000) | (ls16c-d); } } } printf("done.\nFound %d possible key2-values.\n", numKey2s ); } int main(){ uint16_t kt,bk,idx; uint8_t dt; int x=0, y=0, z=0; key2i = malloc(KEY2SPACE * sizeof(uint32_t)); if (!key2i) { perror("Memory allocation failed"); exit(EXIT_FAILURE); } //load tmptabb for(uint16_t i=0; i < 0xFFFF; i++){ kt = i | 3; dt = ((kt*(kt ^ 1))>>8)&0xFF; if(dt == KEY3(2)){ tmptabb[y++] = i; } if(dt == KEY3(3)){ tmptabc[z++] = i; } if(dt == KEY3(1)){ tmptaba[x++] = i; } } //load lookup tables for(int k =3; k < 256; k+=4){ bk = tmptaba[k]; idx = (int) sqrt( (bk*(bk ^ 1))&0xFFFF); lkpa[idx] = bk; bk = tmptabc[k]; idx = (int) sqrt( (bk*(bk ^ 1))&0xFFFF); lkpc[idx] = bk; } mkCrcTab(); generate(3); for(int w =0; w < numKey2s; w++){ printf("%08X\n", key2i[w]); } } I do not know where i went wrong and now i can not continue with my 
research of finding the mathematical relationship between key2 and key0 because of this. Thank you. | The problem isn't the code but the calculation itself as written on paper the algorithm states that you can find a unique y value from x value but that's not entirely correct because given x and the equation ((256*e) = ((x*x)-x) mod 65536) xor ((y*y)-y) mod 65536 ) and e they are 64 possible y values. That is when the code loads the lookup tables (lkpa,lkpc) for position a and c it overwrites the 63 times ending with last possible value being stored ( in the case of lkpc[] it will be 0xf707). Simply putting this: printf("resulted index : %d for 16 bit value %04X\n",idx, bk); After any of these lines: idx = (int) sqrt( (bk*(bk ^ 1))&0xFFFF); would allow you to see that 64 odd 16 bit values result in same index, hence overwriting 63 times. So putting an algorithm on actual working code is challenging because on paper you can not mostly see other possibilities until you have the code running. | 4 | 0 |
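The collision the answer describes, where many odd 16-bit values collapse onto one square-root index so `lkpa`/`lkpc` keep only the last value written, can be reproduced outside the C program. A quick Python sketch, added here for illustration only, mirrors the question's `idx = (int) sqrt((bk*(bk ^ 1)) & 0xFFFF)` expression over values of the `x | 3` form and counts how many inputs land on each index:

```python
import math
from collections import Counter

hits = Counter()
for bk in range(3, 1 << 16, 4):                       # 16-bit values with the low two bits set, like x | 3
    idx = int(math.sqrt((bk * (bk ^ 1)) & 0xFFFF))    # same index expression as the question's table loading
    hits[idx] += 1

# Each index is shared by many bk values, so a plain lkp[idx] = bk assignment
# overwrites all but the last candidate, which is the overwriting the answer points out.
print(hits.most_common(5))
```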
79,407,952 | 2025-2-3 | https://stackoverflow.com/questions/79407952/find-the-index-of-the-current-df-value-in-another-series-and-add-to-a-column | I have a dataframe and a series, as follows: import pandas as pd from itertools import permutations df = pd.DataFrame({'a': [['a', 'b', 'c'], ['a', 'c', 'b'], ['c', 'a', 'b']]}) prob = list(permutations(['a', 'b', 'c'])) prob = [list(ele) for ele in prob] ps = pd.Series(prob) >>> df a 0 [a, b, c] 1 [a, c, b] 2 [c, a, b] >>> ps 0 [a, b, c] 1 [a, c, b] 2 [b, a, c] 3 [b, c, a] 4 [c, a, b] 5 [c, b, a] dtype: object My question is how to add a column 'idx' in df, which contains the index of the value in column 'a' in series 'ps'? The desire result is: a idx [a,b,c] 0 [a,c,b] 1 [c,a,b] 4 The chatgpt gave me a answer, but it works very very slowly when my real data is huge. df['idx'] = df['a'].apply(lambda x: ps[ps.apply(lambda y: y == x)].index[0]) Is there a more efficient way? | Use DataFrame.merge with DataFrame constructor: #if possible duplicates in ps remove them ps = ps.drop_duplicates() df = df.merge(pd.DataFrame({'idx': ps.index, 'a':ps.values}), on='a') print (df) a idx 0 [a, b, c] 0 1 [a, c, b] 1 2 [c, a, b] 4 Solution for oldier pandas versions - converting lists to tuples before merge: df1 = ps.apply(tuple).reset_index().drop_duplicates(0) print (df1) index 0 0 0 (a, b, c) 1 1 (a, c, b) 2 2 (b, a, c) 3 3 (b, c, a) 4 4 (c, a, b) 5 5 (c, b, a) df = (df.merge(df1, left_on=df['a'].apply(tuple),right_on=df1[0]) .drop(['key_0',0], axis=1)) print (df) a index 0 [a, b, c] 0 1 [a, c, b] 1 2 [c, a, b] 4 | 1 | 1 |
79,407,420 | 2025-2-2 | https://stackoverflow.com/questions/79407420/why-doesnt-the-condition-of-my-while-loop-apply-to-my-print-statement | Only started learning Python a few days ago. I followed freecodecamp's while loop word guessing game tutorial and wanted to add the statement, "Wrong answer!" after every wrong guess. This was what I initially wrote: secret_word = "giraffe" guess = "" guess_count = 0 guess_limit = 3 out_of_guesses = False while guess != secret_word and not (out_of_guesses): if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 print("Wrong answer!") else: out_of_guesses = True if out_of_guesses: print("Out of Guesses, YOU LOSE!") else: print("Correct!") While it did print after every wrong answer, it ALSO printed after the correct input, which was then followed by the "Correct!" statement. Initially I tried moving print("Wrong answer!") to other lines within the while function and tried adding a break after the if function but nothing seemed to work. It was only when I found this answer where "Wrong answer!" stopped printing after the correct input: secret_word = "giraffe" guess = "" guess_count = 0 guess_limit = 3 out_of_guesses = False while guess != secret_word and not (out_of_guesses): if guess_count < guess_limit: guess = input("Enter guess: ") guess_count += 1 if guess == secret_word: print("Correct!") if guess != secret_word: print("Wrong answer!") else: out_of_guesses = True print("Out of Guesses, YOU LOSE!") So my questions are: Why do I need to define if guess != secret_word: AGAIN if I already had while guess != secret_word and not (out_of_guesses): in the while loop function? Why did it keep printing "Wrong answer!" after the correct input in the first code even after adding a break? | The while loop (it's not a function!) head is (here) independent from what you do in the loop's body. When the loop condition is true, the whole code in the body runs. If that code prints something unconditionally (i.e., without using an if to check whether it should print), then you get that output. After the body is finished, the loop's condition is evaluated again, and the loop may or may not start over. Stating it differently: the loop head determines if the body should run again, and that check happens when the body is finished. The whole body runs, unless you use break or continue to exit the loop early. | 5 | 4 |
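A stripped-down illustration of the answer's point (an added example, not from the original thread): the entire body executes, including the unconditional print, before the `while` condition is looked at again.

```python
secret_word = "giraffe"
guesses = ["cat", "dog", "giraffe"]   # scripted guesses instead of input(), so it runs as-is

attempt = 0
guess = ""
while guess != secret_word:
    guess = guesses[attempt]
    attempt += 1
    print("Wrong answer!")   # still runs on the pass where guess just became "giraffe",
                             # because the loop condition is only re-evaluated after
                             # the whole body has finished
print("Correct!")            # reached once the condition finally turns false
```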
79,407,124 | 2025-2-2 | https://stackoverflow.com/questions/79407124/python-3-13-1-breaks-indentation-when-pasting-code-in-terminal | Python 3.13.1 results in unexpected indentation errors when I copy and paste Python code into its windows terminal (as shown in the pic below). This did not happen in Python 3.12.x or earlier. In windows terminal open python 3.13.1, and 3.12.8 separately, and paste the below code for example def fun1(): if 5 > 3: return 'a' return 0 and hit enter.. You should get the below for 3.13.1 and below for 3.12.8 (and versions before)... (I pasted using a simple right click in windows terminal, and check multiple times the steps are same) has anyone faced this issue, and how did you fix this | Press F3 before pasting (new in Python 3.13). Press F3 again when done pasting. See What's New in Python 3.13: New Features - A better interactive interpreter. | 2 | 3 |
79,405,614 | 2025-2-1 | https://stackoverflow.com/questions/79405614/slicing-netcdf4-dataset-based-on-specific-time-interval-using-xarray | I have a netCDF4 dataset for the following datatime which is stored in _date_times variable:- <xarray.DataArray 'Time' (Time: 21)> Size: 168B array(['2025-01-30T00:00:00.000000000', '2025-01-30T06:00:00.000000000', '2025-01-30T12:00:00.000000000', '2025-01-30T18:00:00.000000000', '2025-01-31T00:00:00.000000000', '2025-01-31T06:00:00.000000000', '2025-01-31T12:00:00.000000000', '2025-01-31T18:00:00.000000000', '2025-02-01T00:00:00.000000000', '2025-02-01T06:00:00.000000000', '2025-02-01T12:00:00.000000000', '2025-02-01T18:00:00.000000000', '2025-02-02T00:00:00.000000000', '2025-02-02T06:00:00.000000000', '2025-02-02T12:00:00.000000000', '2025-02-02T18:00:00.000000000', '2025-02-03T00:00:00.000000000', '2025-02-03T06:00:00.000000000', '2025-02-03T12:00:00.000000000', '2025-02-03T18:00:00.000000000', '2025-02-04T00:00:00.000000000'], dtype='datetime64[ns]') The above data is of six hour interval. However, I need to convert the dataset to twelve hourly dataset. The filtered dataset should look like this:- <xarray.DataArray 'Time' (Time: 21)> Size: 168B array(['2025-01-30T00:00:00.000000000', '2025-01-30T12:00:00.000000000', '2025-01-31T00:00:00.000000000', '2025-01-31T12:00:00.000000000', '2025-02-01T00:00:00.000000000', '2025-02-01T12:00:00.000000000', '2025-02-02T00:00:00.000000000', '2025-02-02T12:00:00.000000000', '2025-02-03T00:00:00.000000000', '2025-02-03T12:00:00.000000000', '2025-02-04T00:00:00.000000000'], dtype='datetime64[ns]') What I tried was:- xr_ds.sel(Time=slice(_date_times[0], _date_times[-1]), freq='12 h') Off course, it won't work as there is no option to specify freq. How do I slice dataset containing only on specific time interval? | You don't have to use a slice() to select the times, you can also specify a list or array of times. Here, I used Pandas date_range() for simplicity: import xarray as xr import pandas as pd import numpy as np ds = xr.open_dataset('202001.nc') times = pd.date_range(ds.time[0].values, ds.time[-1].values, freq='12h') dst = ds.sel(time=times) This results in: In [10]: dst.time Out[10]: <xarray.DataArray 'time' (time: 62)> Size: 496B array(['2020-01-01T00:00:00.000000000', '2020-01-01T12:00:00.000000000', '2020-01-02T00:00:00.000000000', '2020-01-02T12:00:00.000000000', An alternative is to use ds.isel() with an array of indexes. dst = ds.isel(time=np.arange(0, ds.time.size, 12)) Or you can simply slice the time array from the original dataset, if you really want to avoid Pandas/Numpy: dst = ds.sel(time=ds.time[::12]) | 1 | 1 |
79,405,867 | 2025-2-1 | https://stackoverflow.com/questions/79405867/change-value-in-ini-to-empty-using-python-configparser | I want to read the config file if the value is empty Example: To convert a video file to audio file # config.ini [settings] videofile = video.avi codesplit = -vn outputfile = audio.mp3 Output ['ffmpeg.exe', '-i', 'video.avi', '-vn', 'audio3.mp3'] If you make the value "codesplit" empty, the code will not work. example : To convert a video avi file to video mp4 [settings] videofile = video.avi codesplit = outputfile = videoaudio.mp4 Output ['ffmpeg.exe', '-i', 'video.avi', '', 'videoaudio.mp4'] I want to remove this quote so that the code to works '', full code import subprocess import configparser config = configparser.ConfigParser(allow_no_value=True) config.read(r'config.ini') videofile = config.get('settings', 'videofile') outputfile = config.get('settings', 'outputfile') codesplit = config.get('settings', 'codesplit', fallback=None) ffmpeg_path = r"ffmpeg.exe" command = [ f"{ffmpeg_path}", "-i", (videofile), (codesplit), (outputfile),] process = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True) process.communicate() print(command) | You can try using an if-else statement to check if there is a codesplit option passed from the config file or not. import subprocess import configparser config = configparser.ConfigParser(allow_no_value=True) config.read(r"config.ini") videofile = config.get("settings", "videofile") outputfile = config.get("settings", "outputfile") codesplit = config.get("settings", "codesplit", fallback=None) ffmpeg_path = r"ffmpeg.exe" if codesplit: command = [ f"{ffmpeg_path}", "-i", (videofile), (codesplit), (outputfile), ] else: command = [ f"{ffmpeg_path}", "-i", (videofile), (outputfile), ] process = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True, ) process.communicate() print(command) This code checks if codesplit is there in the config. If so, it adds it to the command list. If not, it doesn't. The output looks like ['ffmpeg.exe', '-i', 'video.avi', 'videoaudio.mp4'] where the empty string is absent. | 1 | 1 |
79,405,672 | 2025-2-1 | https://stackoverflow.com/questions/79405672/iterqueue-object-has-no-attribute-not-full | I have a class called "IterQueue which is an iter queue: IterQueue.py from multiprocessing import Process, Queue, Pool import queue class IterQueue(queue.Queue): def __init__(self): self.current = 0 self.end = 10000 def __iter__(self): self.current = 0 self.end = 10000 while True: yield self.get() def __next__(self): if self.current >= self.end: raise StopIteration current = self.current self.current += 1 return current I put items inside it in a different processes: modulexxx.py ... self.q1= IterQueue() def function(self): x = 1 while True: x = x + 1 self.q1.put(x) Everything works fine, but python gives me an error: 'IterQueue' object has no attribute 'not_full I searched for this function so I implement it in my custom queue, but found nothing, What's the solution to this ? | You inherited from Queue and overrode __init__, but never called its __init__, so that was never run. That means that not_full was never assigned, thus the error. Unless you want to override its maxsize default argument of 0, you just need to change your __init__ to: def __init__(self): super().__init__() self.current = 0 self.end = 10000 | 1 | 3 |
79,405,385 | 2025-2-1 | https://stackoverflow.com/questions/79405385/why-is-my-code-rendering-curves-rotated-90-degrees | I'm trying to make a heating cable layout for floor heating and i got stuck with this problem. This i what i got from my code So i made a small program asking for width,lenght of the room and calculating the are in m2. And i need a graphical representation of how cable should be on the floor. This is my code for arch: # Add C-shaped curves at the end of each section for i in range(int(width // spacing)): theta = np.linspace(0, np.pi, 100) x_curve = length + (spacing / 2) * np.cos(theta) y_curve = (i + 1) * spacing + (spacing / 2) * np.sin(theta) color = 'blue' if total_length < 5 else 'red' ax.plot(x_curve, y_curve, color=color) total_length += np.pi * (spacing / 2) I'm a beginner so i tried what i knew so far. If someone can guide me or help me in anyway possible. | I suspect you want to lay your cable like this. The end arcs are essentially the same (so don't keep re-calculating them). Just shift them up at each crossing and alternate left and right sides. import numpy as np import matplotlib.pyplot as plt width = 4 spacing = 0.25 N = 17 length = 4 total_length = 0 theta = np.linspace(0, np.pi, 100) xarc = (spacing / 2) * np.sin(theta) yarc = (spacing / 2) * ( 1 - np.cos(theta) ) base = 0 for i in range( N ): color = 'blue' if total_length < 5 else 'red' plt.plot( [0,length], [base,base], color=color) # draw line if i == N - 1: break # no connector at end if i % 2 == 0: x_curve = length + xarc # arc on right else: x_curve = -xarc # arc on left y_curve = base + yarc plt.plot(x_curve, y_curve, color=color) # draw connector base += spacing # update base and length total_length += length + np.pi * (spacing / 2) plt.show() | 2 | 1 |
79,405,520 | 2025-2-1 | https://stackoverflow.com/questions/79405520/md-simulation-using-velocity-verlet-in-python | I'm trying to implement a simple MD simulation in Python (I'm new to this),I'm using LJ potential and force equations along with Verlet method: def LJ_VF(r): #r = distance in Å #Returns V in (eV) and F in (eV/Å) V = 4 * epsilon * ( (sigma/r)**(12) - (sigma/r)**6 ) F = 24 * epsilon * ( 2 * ((sigma**12)/(r**(13))) - ( (sigma**6)/(r**7) )) return V , F def velocity_verlet(x, v, f_old, f_new): #setting m=1 so that a=f x_new = x + v * dt + 0.5 * f_old * dt**2 v_new = v + 0.5 * (f_old + f_new) * dt return x_new, v_new Now to make sure it works I'm trying to use this on the simplest system, i.e 2 atoms separated by the distance 4 Å: #Constants epsilon = 0.0103 sigma = 3.4 m = 1.0 t0 = 0.0 v0 = 0.0 dt = 0.1 N = 1000 def simulate_two_atoms(p1_x0, p1_v0, p2_x0, p2_v0): p1_x, p2_x = [p1_x0], [p2_x0] p1_v, p2_v = [p1_v0], [p2_v0] p1_F, p1_V, p2_F, p2_V = [], [], [], [] r = abs(p2_x0 - p1_x0) V, F = LJ_VF(r) p1_F.append(F) p1_V.append(V) p2_F.append(-F) p2_V.append(V) for i in range(N - 1): r_new = abs(p2_x[-1] - p1_x[-1]) V_new, F_new = LJ_VF(r_new) x1_new, v1_new = velocity_verlet(p1_x[-1], p1_v[-1], p1_F[-1], F_new) x2_new, v2_new = velocity_verlet(p2_x[-1], p2_v[-1], p2_F[-1], -F_new) p1_x.append(x1_new) p1_v.append(v1_new) p2_x.append(x2_new) p2_v.append(v2_new) p1_F.append(F_new) p2_F.append(-F_new) p1_V.append(V_new) p2_V.append(V_new) return np.array(p1_x), np.array(p1_v), np.array(p2_x), np.array(p2_v) #Initial conditions p1_x0 = 0.0 p1_v0 = 0.0 p2_x0 = 4.0 p2_v0 = 0.0 #Plot p1_x, p1_v, p2_x, p2_v = simulate_two_atoms(p1_x0, p1_v0, p2_x0, p2_v0) time = np.arange(N) plt.plot(time, p1_x, label="Particle 1", color="blue") plt.plot(time, p2_x, label="Particle 2", color="green") plt.xlabel("Time (t)") plt.ylabel("x (Å)") plt.title("Particle Positions Over Time (Bouncing Test)") plt.legend() plt.grid(True) plt.show() But I'm getting incorrect values and the plot shows that the atoms are not bouncing at all, in contra they are drifting away from each other! I have been trying to find where I went wrong for a very long time now but not progressing in any level! Thought that a second eye might help! Any tips/advice? | Your initial force F is negative. You assign that to P1, making it move down and -F (which is positive) to P2, so making P2 move up. That is contrary to the physics. You have just assigned F and -F to the wrong particles (F should go to the one with positive r). Switch them round, both where you update x_new, v_new and the particle forces p1_F and p2_F. With your given potential I think the equilibrium displacement should be 2^(1/6).sigma, or about 3.816 in the units you are using here. Specifically: x1_new, v1_new = velocity_verlet(p1_x[-1], p1_v[-1], p1_F[-1], -F_new) x2_new, v2_new = velocity_verlet(p2_x[-1], p2_v[-1], p2_F[-1], F_new) p1_x.append(x1_new) p1_v.append(v1_new) p2_x.append(x2_new) p2_v.append(v2_new) p1_F.append(-F_new) p2_F.append( F_new) Gives | 1 | 2 |
79,403,285 | 2025-1-31 | https://stackoverflow.com/questions/79403285/how-to-force-sympy-to-simplify-expressions-that-contain-logarithms-inside-an-exp | Consider the following MWE: import sympy as sp a,b = sp.symbols("a b", positive=True, real=True) t = sp.symbols("t", real=True) s = sp.symbols("s") T = 1/(1+s*a)/(1+s*b) y = sp.inverse_laplace_transform(T,s,t) tmax = sp.solve(sp.diff(y,t),t)[0] ymax = y.subs(t,tmax) display(ymax.simplify()) This code gives the following output: How can I force Sympy to simplify this expression? Both the fraction and the sum can be decomposed easily, leaving a term in the form of exp(log(x)), which can be simplified to x. Edit: Here is one example how the simplification of one exp() term could look like (the first one in this case): | Notice the denominators in the arguments of the exponentials (which are different than you posted, but that is what I get): >>> from sympy import * ...your code... >>> e=ymax.atoms(exp);e [exp(-log((a/b)**(a*b))/(a**2 - a*b)), exp(-log((a/b)**(a*b))/(a*b - b**2))] If you assure SymPy that those are non-zero (and real), all can be well: >>> p,q = var('p q',nonzero=True) # nonzero=True implies real >>> reps = {a**2-a*b:p,a*b-b**2:q}; _reps = {v:k for k,v in reps.items()} >>> [i.subs(reps).simplify().subs(_reps) for i in e] [(b/a)**(a*b/(a**2 - a*b)), (b/a)**(a*b/(a*b - b**2))] Is that what you were hoping would happen to them? If so you can write a handler to be used with replace to target out the exp and make the denominator nonzero: def handler(x): assert isinstance(x, exp) e = x.args[0] n,d = e.as_numer_denom() if d.is_nonzero: return x.simplify() nz=Dummy(nonzero=True) return exp(n/nz).simplify().subs(nz,d) >>> ymax.replace(lambda x:isinstance(x,exp), handler) a*(b/a)**(a*b/(a**2 - a*b))*Heaviside(log((a/b)**(a*b))/(a - b))/(a**2 - a*b) - b*(b/a)**(a*b/(a*b - b**2))*Heaviside(log((a/b)**(a*b))/(a - b))/(a*b - b**2) | 3 | 2 |
79,405,200 | 2025-2-1 | https://stackoverflow.com/questions/79405200/using-named-columns-and-relative-row-numbers-with-pandas-3 | I switched from NumPy arrays to Pandas DataFrames (dfs) many years ago because the latter has column names, which makes programming easier; is robust in order changes when reading data from a .json or .csv file. From time to time, I need the last row ([-1]) of some column col of some df1, and combine it with the last row of the same column col of another df2. I know the name of the column, not their position/order (I could know, but it might change, and I want to have a code that is robust against changers in the order of columns). So what I have been doing for years in a number of Python scripts is something that looks like import numpy as np import pandas as pd # In reality, these are read from json files - the order # of the columns may change, their names may not: df1 = pd.DataFrame(np.random.random((2,3)), columns=['col2','col3','col1']) df2 = pd.DataFrame(np.random.random((4,3)), columns=['col1','col3','col2']) df1.col2.iloc[-1] = df2.col2.iloc[-1] but since some time my mailbox gets flooded with cron jobs going wrong, telling me that You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy. A typical example is when you are setting values in a column of a DataFrame, like: df["col"][row_indexer] = value Use df.loc[row_indexer, "col"] = values instead, to perform the assignment in a single step and ensure this keeps updating the original df. See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df1.col2.iloc[-1] = df2.col2.iloc[-1] Of course, this error message is incorrect, and replacing the last line in my example with either of df1.loc[-1, 'col2'] = df2.loc[-1, 'col2'] # KeyError: -1 df1.iloc[-1, 'col2'] = df2.iloc[-1, 'col2'] # ValueError (can't handle 'col2') does not work either, since .iloc[] cannot handle column names and .loc[] cannot handle relative numbers. How can I handle the last (or any other relative number) row and a column with given name of a Pandas DataFrame? | You can try to use the following snippet. df1.loc[df1.index[-1], 'col1'] = df2.loc[df2.index[-1], 'col1'] On my machine with pandas version 2.2.3, it gives no warnings. | 1 | 1 |
79,404,040 | 2025-1-31 | https://stackoverflow.com/questions/79404040/is-there-a-way-to-overlay-scatterplot-over-grouped-boxplots-so-they-arent-offse | I'm trying to get the scatter plots to lie ontop of their respective boxplots to act as outlier points. Since plotly's graph_object.box doesn't have a method of inputting precalculated outliers, I've been trying to do it this way. I don't want plotly to calculate the outliers for the purposes of the project. Is there any way to accomplish this by moving the scatterplot, or perhaps a feature of go.Box I overlooked that can do this? import plotly.graph_objects as go def create_multiple_boxplots(summary_stats_list, labels, types, title="Multiple Boxplots"): fig = go.Figure() color_map = {"Rainy": "blue", "Sunny": "green"} i=0 for stats, label, type_ in zip(summary_stats_list, labels, types): fig.add_trace(go.Box( name=type_, q1=[stats['Q1']], median=[stats['Median']], q3=[stats['Q3']], lowerfence=[stats['Min']], upperfence=[stats['Max']], mean=[stats['Mean']], boxpoints='all' if 'Outliers' in stats else False, jitter=0.3, pointpos=-1.8, marker=dict(color=color_map[type_]), # Assign color based on type legendgroup=type_, showlegend=True if i < 2 else False, x=[label], # y=stats.get('Outliers', []) )) # Add outlier points separately fig.add_trace(go.Scatter( x=[label] * len(stats['Outliers']), y=stats['Outliers'], mode='markers', marker=dict(color=color_map[type_], size=8, symbol='circle-open'), name=f"Outliers - {type_}", legendgroup=type_, showlegend=False )) i+=1 fig.update_layout(title=title, yaxis_title="Value", boxmode='group') fig.show() # Example summary statistics data_summaries = [ {"Min": 5, "Q1": 10, "Median": 15, "Q3": 20, "Max": 25, "Mean": 16, "Outliers": [2, 27]}, {"Min": 6, "Q1": 11, "Median": 16, "Q3": 21, "Max": 26, "Mean": 17, "Outliers": [3, 28]}, {"Min": 4, "Q1": 9, "Median": 14, "Q3": 19, "Max": 24, "Mean": 15, "Outliers": [1, 26]}, {"Min": 7, "Q1": 12, "Median": 17, "Q3": 22, "Max": 27, "Mean": 18, "Outliers": [4, 29]} ] labels = ["Happy", "Happy", "Sad", "Sad"] types = ["Rainy", "Sunny", "Rainy", "Sunny"] create_multiple_boxplots(data_summaries, labels, types) | The desired output can be obtained by exiting box mode and making each label unique. This is because the x-axis of the box-and-whisker and scatter plots will be the same. import plotly.graph_objects as go def create_multiple_boxplots(summary_stats_list, labels, types, title="Multiple Boxplots"): fig = go.Figure() color_map = {"Rainy": "blue", "Sunny": "green"} i=0 for stats, label, type_ in zip(summary_stats_list, labels, types): fig.add_trace(go.Box( name=type_, q1=[stats['Q1']], median=[stats['Median']], q3=[stats['Q3']], lowerfence=[stats['Min']], upperfence=[stats['Max']], mean=[stats['Mean']], boxpoints='all' if 'Outliers' in stats else False, jitter=0.3, pointpos=-1.8, marker=dict(color=color_map[type_]), # Assign color based on type legendgroup=type_, showlegend=True if i < 2 else False, x=[label], )) fig.add_trace(go.Scatter( x=[label] * len(stats['Outliers']), y=stats['Outliers'], mode='markers', marker=dict(color=color_map[type_], size=8, symbol='circle-open'), name=f"Outliers - {type_}", legendgroup=type_, showlegend=False )) i+=1 fig.update_layout(title=title, yaxis_title="Value")#, boxmode='group') # update fig.show() # Example summary statistics data_summaries = [ {"Min": 5, "Q1": 10, "Median": 15, "Q3": 20, "Max": 25, "Mean": 16, "Outliers": [2, 27]}, {"Min": 6, "Q1": 11, "Median": 16, "Q3": 21, "Max": 26, "Mean": 17, "Outliers": [3, 28]}, {"Min": 4, "Q1": 9, "Median": 14, "Q3": 19, "Max": 24, "Mean": 15, "Outliers": [1, 26]}, {"Min": 7, "Q1": 12, "Median": 17, "Q3": 22, "Max": 27, "Mean": 18, "Outliers": [4, 29]} ] labels = ["Happy", "Happy_", "Sad", "Sad_"] # update types = ["Rainy", "Sunny", "Rainy", "Sunny"] create_multiple_boxplots(data_summaries, labels, types) To do a scatter plot while still in box mode, add an offset group, which will draw the scatter plot in the center of the box-and-whisker plot. import plotly.graph_objects as go def create_multiple_boxplots(summary_stats_list, labels, types, title="Multiple Boxplots"): fig = go.Figure() color_map = {"Rainy": "blue", "Sunny": "green"} offsetgroup_names = ['A','B','A','B'] # update i=0 for stats, label, type_, offset in zip(summary_stats_list, labels, types, offsetgroup_names): print('stats', stats, 'label',label, 'type', type_) fig.add_trace(go.Box( name=type_, q1=[stats['Q1']], median=[stats['Median']], q3=[stats['Q3']], lowerfence=[stats['Min']], upperfence=[stats['Max']], mean=[stats['Mean']], boxpoints='all' if 'Outliers' in stats else False, jitter=0.3, pointpos=-1.8, marker=dict(color=color_map[type_]), # Assign color based on type legendgroup=type_, showlegend=True if i < 2 else False, x=[label], offsetgroup=offset, # update )) fig.add_trace(go.Scatter( x=[label] * len(stats['Outliers']), y=stats['Outliers'], xaxis='x', yaxis='y', offsetgroup=offset, # update mode='markers', marker=dict(color=color_map[type_], size=8, symbol='circle-open'), name=f"Outliers - {type_}", legendgroup=type_, showlegend=False )) i+=1 fig.update_layout(title=title, yaxis_title="Value", boxmode='group') fig.show() # Example summary statistics data_summaries = [ {"Min": 5, "Q1": 10, "Median": 15, "Q3": 20, "Max": 25, "Mean": 16, "Outliers": [2, 27]}, {"Min": 6, "Q1": 11, "Median": 16, "Q3": 21, "Max": 26, "Mean": 17, "Outliers": [3, 28]}, {"Min": 4, "Q1": 9, "Median": 14, "Q3": 19, "Max": 24, "Mean": 15, "Outliers": [1, 26]}, {"Min": 7, "Q1": 12, "Median": 17, "Q3": 22, "Max": 27, "Mean": 18, "Outliers": [4, 29]} ] labels = ["Happy", "Happy", "Sad", "Sad"] types = ["Rainy", "Sunny", "Rainy", "Sunny"] create_multiple_boxplots(data_summaries, labels, types) | 1 | 1 |
79,404,376 | 2025-2-1 | https://stackoverflow.com/questions/79404376/os-symlink-fails-when-directory-exists-and-when-directory-doesnt-exist | I have source and link paths. I'm trying to create a symlink, but must be misunderstanding how its used. Let's say source = '/var/source/things/' link = '/var/link/' When I use os.symlink(source, link) Initially I received an error FileNotFoundError: [Errno 129] EDC5129I No such file or directory.: '/var/source/things/' -> '/var/link/' Ok, so I'll put in something to create the directory if it doesn't exist. if not os.path.exists(link): os.makedirs(link) Re-run and now receive: FileExistsError: [Errno 117] EDC5117I File exists.: '/var/source/things/' -> '/var/link/' So if the directory doesn't exist it fails and if the directory does exist it also fails? We have a bash script that uses ln -sf $source/* $link Which creates symlinks for all folders within 'source' and was hoping python would be just as easy. But as I mentioned before, I'm misunderstanding something here. | Can you remove "/" at the end under link and try once source = '/var/source/things/' link = '/var/link' apart from that, I don't see any issues in your code. | 2 | 2 |
79,404,077 | 2025-1-31 | https://stackoverflow.com/questions/79404077/how-to-fix-alignment-of-projection-from-x-y-z-coordinates-onto-xy-plane-in-mat | I was trying to make a 3D visualization of the joint probability mass function with the following code: import math import numpy as np import matplotlib.pyplot as plt def f(x, y): if(1 <= x + y and x + y <= 4): return (math.comb(3, x) * math.comb(2, y) * math.comb(3, 4 - x - y)) / math.comb(8, 4) else: return 0.0 x_domain = np.array([0, 1, 2, 3]) y_domain = np.array([0, 1, 2]) X, Y = np.meshgrid(x_domain, y_domain) X = np.ravel(X) Y = np.ravel(Y) Z = np.zeros_like(X, dtype=float) for i in range(len(X)): Z[i] = f(X[i], Y[i]) fig = plt.figure(figsize=(8, 4)) ax = fig.add_subplot(1, 1, 1, projection="3d") ax.scatter(X, Y, Z) # plots the induvidual points for i in range(len(X)): # draws lines from xy plane up (x,y,z) ax.plot([X[i], X[i]], [Y[i], Y[i]], [0, Z[i]], color="r") ax.set_xticks(x_domain) ax.set_yticks(y_domain) plt.show() Which gave the following result: As you can see, the stems of lines from the xy-plane do not align with the integer coordinates. I have searched through the matplotlib documentation and cant find the cause for this (unless I have overlooked something). Does anyone know how to fix the alignment issue? EDIT It seems stack overflow could not render the mathjax equation, so I removed it. | Actually, they DO align with the integer grid, as you will see by rotating your plot a bit. It's just that the z axis doesn't naturally start from 0, which is where the end of your whatever-they-are are. Add the line ax.set_zlim( 0.0, 0.25 ) just before plt.show() and you will be good to go. | 1 | 2 |
79,403,612 | 2025-1-31 | https://stackoverflow.com/questions/79403612/calculate-the-count-of-distinct-values-appearing-in-multiple-tables | I have three pyspark dataframes in Databricks: raw_old, raw_new, and master_df. These are placeholders to work out the logic on a smaller scale (actual tables contain billions of rows of data). There is a column in all three called label. I want to calculate the number of labels that appear in: raw_old and raw_new (the answer is 3: A789, B456, D123) raw_new and master_df (the answer is 2: C456, D123) raw_old and master_df (the answer is 4: A654, B987, C987, D123) raw_old, raw_new, and master_df (the answer is 1: D123) The three tables are below. How do I calculate the above bullet points? raw_old +---+-----+ | id|label| +---+-----+ | 1| A987| | 2| A654| | 3| A789| | 4| B321| | 5| B456| | 6| B987| | 7| C321| | 8| C654| | 9| C987| | 10| D123| +---+-----+ raw_new +---+-----+ | id|label| +---+-----+ | 1| A123| | 2| A456| | 3| A789| | 4| B123| | 5| B456| | 6| B789| | 7| C123| | 8| C456| | 9| C789| | 10| D123| +---+-----+ master_df +---+-----+ | id|label| +---+-----+ | 1| A999| | 2| A654| | 3| A000| | 4| B111| | 5| B000| | 6| B987| | 7| C999| | 8| C456| | 9| C987| | 10| D123| +---+-----+ | You should use an inner join to get the elements in common between the datasets joined_data = raw_old.join( raw_new, on=raw_old["label"] == raw_new["label"], how="inner" ) and then you can collect the result back to Python, keeping all the heavy work in Spark print(joined_data.count()) When joining 3 dataframes, you can do the first 2 and join the resulted dataframe with the third one. | 4 | 1 |
79,401,185 | 2025-1-30 | https://stackoverflow.com/questions/79401185/overload-a-method-based-on-init-variables | How can I overload the get_data method below to return the correct type based on the init value of data_type instead of returning a union of both types? from typing import Literal DATA_TYPE = Literal["wood", "concrete"] class WoodData: ... class ConcreteData: ... class Foo: def __init__(self, data_type: DATA_TYPE) -> None: self.data_type = data_type def get_data(self) -> WoodData | ConcreteData: if self.data_type == "wood": return WoodData() return ConcreteData() I was thinking this could be done by specifying a generic for Foo. But I'm unsure on implementation details. I'd prefer not to pass WoodData/ConcreteData directly as a generic. This is because I have many methods returning conditional data types depending on whether the init var is wood or concrete. To illustrate that last point, I know I could add a generic that takes one of the two return types like so: from typing import Literal DATA_TYPE = Literal["wood", "concrete"] class WoodData: ... class ConcreteData: ... class Foo[MY_RETURN_TYPE: WoodData | ConcreteData]: def __init__(self, data_type: DATA_TYPE) -> None: self.data_type = data_type def get_data(self) -> MY_RETURN_TYPE: if self.data_type == "wood": return WoodData() return ConcreteData() But imagine I have tons of methods conditionally returning different types based on the value of data_type. I don't want to specify each of these as generics. I'd rather overload the methods on the class and have return types accurately inferred. Lastly, I know I could split this into two separate sub classes, but it would be nice to keep them as one class if possible. | Ok, for this solution, you annotate self with the generic type you want, both mypy and pyright give similar outputs for reveal_type (i.e., it works with the base class but not the subclass): from typing import Literal, overload, TypeVar class WoodData: ... class ConcreteData: ... class Foo[T:(Literal['wood'], Literal['concrete'])]: data_type: T def __init__(self, data_type: T) -> None: self.data_type = data_type @overload def get_data(self: "Foo[Literal['wood']]") -> WoodData: ... @overload def get_data(self: "Foo[Literal['concrete']]") -> ConcreteData: ... @overload def get_data(self) -> WoodData | ConcreteData: ... def get_data(self): if self.data_type == "wood": return WoodData() return ConcreteData() @overload def bar(self: "Foo[Literal['wood']]") -> int: ... @overload def bar(self: "Foo[Literal['concrete']]") -> str: ... @overload def bar(self) -> int | str: ... def bar(self): if self.data_type == "wood": return 42 return "42" reveal_type(Foo('wood').get_data()) # main.py:32: note: Revealed type is "__main__.WoodData" reveal_type(Foo('concrete').get_data()) # main.py:33: note: Revealed type is "__main__.ConcreteData" reveal_type(Foo('wood').bar()) # main.py:34: note: Revealed type is "builtins.int" reveal_type(Foo('concrete').bar()) # main.py:35: note: Revealed type is "builtins.str" class Bar[T:(Literal['wood'], Literal['concrete'])](Foo[T]): pass # works with inheritance too reveal_type(Bar('wood').get_data()) # main.py:41: note: Revealed type is "__main__.WoodData" reveal_type(Bar('concrete').get_data()) # main.py:41: note: Revealed type is "__main__.ConcreteData" reveal_type(Bar('wood').bar()) # main.py:41: note: Revealed type is "builtins.int" reveal_type(Bar('concrete').bar()) # main.py:41: note: Revealed type is "builtins.str" However, mypy won't type check the body of the implementation, and pyright seems to be reporting erroneous errors for the body... | 4 | 3 |
79,400,487 | 2025-1-30 | https://stackoverflow.com/questions/79400487/pyqt6-issue-in-fetching-geometry-of-the-window | I am currently learning the PyQt6 library and I want to get the geometry of the window. The issue is that the x and y positions are always 0 even after I changed the position of the window on the screen. OS: Ubuntu Python version: 3.9 import sys from PyQt6.QtWidgets import QApplication, QVBoxLayout, QPushButton, QWidget from PyQt6.QtCore import QTimer class MainWindow(QWidget): def __init__(self): super().__init__() self.resize(200, 300) self.setWindowTitle("Main Window") layout = QVBoxLayout() btn = QPushButton("Click", self) btn.clicked.connect(self.print_geometry) layout.addWidget(btn) self.setLayout(layout) self.show() def print_geometry(self): print(self.geometry()) # This will now print the correct geometry if __name__ == '__main__': app = QApplication(sys.argv) win = MainWindow() sys.exit(app.exec()) Output: PyQt6.QtCore.QRect(0, 0, 200, 300) PyQt6.QtCore.QRect(0, 0, 200, 300) PyQt6.QtCore.QRect(0, 0, 200, 300) PyQt6.QtCore.QRect(0, 0, 200, 300) I don't know why it is returning 0 for x and y positions. It should have returned the exact position. | why it doesn't work On Ubuntu, Wayland is used by default for regular user sessions. Wayland has certain limitations, including restricting access to low-level details like window position, which caused this issue. However, sudo runs the program with root privileges, which forces the system to use X11 (instead of Wayland), where such restrictions don't apply. This is why running the program with sudo sometimes works, but it isn't the best solution. Solution Methods: The issue was resolved by setting the QT_QPA_PLATFORM environment variable to xcb in the script, which forces PyQt6 to use X11 instead of Wayland. This allowed the window geometry to be fetched correctly without requiring root privileges. Additionally, installing the X11 plugins (libxcb-cursor0, libxcb1, and libxcb-xinerama0) ensures that X11 works properly Method 1: Running the app as sudo: If you run the application with sudo, it will use X11 instead of Wayland. This bypasses the restrictions imposed by Wayland and allows window geometry to be fetched correctly. However, running GUI applications as root is generally not recommended due to potential security and permission issues. Method 2: Set the environment variable to force Qt to use xcb: A cleaner solution is to explicitly set the QT_QPA_PLATFORM environment variable to xcb in the Python script, forcing Qt to use the X11 plugin. This avoids the need to run the application as root and resolves the geometry issue without requiring elevated privileges: import os os.environ['QT_QPA_PLATFORM'] = 'xcb' The error you might encounter qt.qpa.plugin: From 6.5.0, xcb-cursor0 or libxcb-cursor0 is needed to load the Qt xcb platform plugin. qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: wayland, vnc, linuxfb, xcb, eglfs, vkkhrdisplay, minimalegl, minimal, offscreen, wayland-egl. Aborted (core dumped) Solution: If you wish to use X11 through sudo, you must install the necessary plugins to support it. You can install the required X11 plugins with the following commands: sudo apt-get install libxcb-cursor0 libxcb1 libxcb-xinerama0 These libraries are needed for the X11 plugin to function properly and allow you to retrieve window geometry correctly final code import sys import os from PyQt6.QtWidgets import QApplication, QVBoxLayout, QPushButton, QWidget os.environ['QT_QPA_PLATFORM'] = 'xcb' class MainWindow(QWidget): def __init__(self): super().__init__() self.resize(200, 300) self.setWindowTitle("Main Window") layout = QVBoxLayout() btn = QPushButton("Click", self) btn.clicked.connect(self.print_geometry) layout.addWidget(btn) self.setLayout(layout) self.show() def print_geometry(self): print("Widget Geometry:", self.geometry()) if __name__ == '__main__': app = QApplication(sys.argv) win = MainWindow() sys.exit(app.exec()) Official Wayland Documentation X.Org (X11) Documentation A Comparison of X11 and Wayland Ubuntu Blog on Wayland vs X11 Qt Platform Abstraction (QPA) Overview Qt Wayland Platform Plugin Qt XCB (X11) Platform Plugin | 3 | 2 |
79,402,794 | 2025-1-31 | https://stackoverflow.com/questions/79402794/python-vectorized-minimization-of-a-multivariate-loss-function-without-jacobian | I have a loss function that needs to be minimized def loss(x: np.ndarray[float]) -> float My problem has nDim=10 dimensions. Loss function works for 1D arrays of shape (nDim,), and with 2D arrays of shape (nSample, nDim) for an arbitrary number of samples. Because of the nature of the implementation of the loss function (numpy), it is significantly faster to make a single call to the loss function with several samples packed into 2D argument than to make several 1D calls. The minimizer I am currently running is sol = scipy.optimize.basinhopping(loss, x0, minimizer_kwargs={"method": "SLSQP"}) It does the job, but is too slow. As of current, the minimizer is making single 1D calls to the loss function. Based on observing the sample points, it seems, SLSQP is performing numerical differentiation, thus sampling 11 points for each 1 sample to calculate the gradient. Theoretically, it should be possible to implement this minimizer with vectorized function calls, requesting all 11 sample points from the loss function simultaneously. I was hoping that there would be a vectorize flag for SLSQP, but it does not seem to be the case, please correct me if I am wrong. Note also that the loss function is far too complicated for analytic calculation of derivatives, so explicit Jacobian not an option. Question: Does Scipy or any other minimization library for python support a global optimization strategies (such as basinhopping) with vectorized loss function and no available Jacobian? | differential_evolution is a global optimizer that does not require gradients. It has a vectorized keyword to enable many function evaluations in a single call. Alternatively, you could write a function that takes the Jacobian with scipy.differentiate.jacobian, which calls the function at all required points at once, and pass that as the Jacobian callable. However, it is designed for accuracy, not speed, so you should probably set coarse tolerances and low order. | 1 | 2 |
79,402,532 | 2025-1-31 | https://stackoverflow.com/questions/79402532/cropping-the-image-by-removing-the-white-spaces | I am trying to identify the empty spaces in the image and if there is no image, then I would like to crop it by eliminating the spaces. Just like in the images below. --> I would be grateful for your help. Thanks in advance! I was using the following code, but was not really working. import cv2 import numpy as np def crop_empty_spaces_refined(image_path, threshold_percentage=0.01): image = cv2.imread(image_path, cv2.IMREAD_UNCHANGED) if image is None: print(f"Error: Could not read image at {image_path}") return None if image.shape[2] == 4: # RGBA image gray = image[:, :, 3] # Use alpha channel else: gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV) kernel = np.ones((3, 3), np.uint8) thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1) contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) if contours: image_area = image.shape[0] * image.shape[1] min_contour_area = image_area * threshold_percentage valid_contours = [cnt for cnt in contours if cv2.contourArea(cnt) >= min_contour_area] if valid_contours: # ***Corrected Bounding Box Calculation*** x_coords = [] y_coords = [] for cnt in valid_contours: x, y, w, h = cv2.boundingRect(cnt) x_coords.extend([x, x + w]) # Add both start and end x y_coords.extend([y, y + h]) # Add both start and end y x_min = min(x_coords) y_min = min(y_coords) x_max = max(x_coords) y_max = max(y_coords) cropped_image = image[y_min:y_max, x_min:x_max] return cropped_image else: print("No valid contours found after filtering. Returning original image.") return image else: return image image_path = '/mnt/data/Untitled.png' # file path cropped_image = crop_empty_spaces_refined(image_path, threshold_percentage=0.0001) if cropped_image is not None: cv2.imwrite('/mnt/data/cropped_output.png', cropped_image) print("Image Cropped and saved") else: print("Could not crop image") | Approach: threshold -> obtain mask use boundingRect() on the mask crop im = cv.imread("Cb768fyr.jpg") gray = cv.cvtColor(im, cv.COLOR_BGR2GRAY) th = 240 # 255 won't do, the image's background isn't perfectly white (th, mask) = cv.threshold(gray, th, 255, cv.THRESH_BINARY_INV) (x, y, w, h) = cv.boundingRect(mask) pad = 0 # increase to give it some border cropped = im[y-pad:y+h+pad, x-pad:x+w+pad] Why threshold at all? Because cv.boundingRect() would otherwise treat all non-zero pixels as "true", i.e. the background would be considered foreground. Why threshold with something other than 255? The background isn't perfectly white, due to the source image having been compressed lossily. If you did, that would be the result: If you wanted to replace cv.boundingRect(), you can do it like this: max-reduce mask along each axis in turn find first and last index of positive values xproj = np.max(mask, axis=1) # collapse X, have Y ymin = np.argmax(xproj) ymax = len(xproj) - np.argmax(xproj[::-1]) print(f"{ymin=}, {ymax=}") yproj = np.max(mask, axis=0) xmin = np.argmax(yproj) xmax = len(yproj) - np.argmax(yproj[::-1]) print(f"{xmin=}, {xmax=}") cropped = im[ymin-pad:ymax+pad, xmin-pad:xmax+pad] This could also use np.argwhere(). I won't bother comparing these two approaches since cv.boundingRect() does the job already. The findContours approach will pick any connected component, not all of them. This means it could sometimes pick the triad (bottom left) or text (top left), and entirely discard most of the image. You could fix that by slapping a convex hull on all the contours, but you'd still have to call boundingRect() anyway. So, all the contour stuff is wasted effort. | 2 | 3 |
79,402,275 | 2025-1-31 | https://stackoverflow.com/questions/79402275/how-to-fix-inconsistent-method-resolution-order-when-deriving-from-ctypes-struct | Given the following Python code: import ctypes from collections.abc import Mapping class StructureMeta(type(ctypes.Structure), type(Mapping)): pass class Structure(ctypes.Structure, Mapping, metaclass=StructureMeta): pass struct = Structure() assert isinstance(struct, ctypes.Structure) assert isinstance(struct, Mapping) The metaclass is needed to avoid a metaclass conflict when deriving from both ctypes.Structure (metaclass _ctypes.PyCStructType) and Mapping (metaclass abc.ABCMeta). This works fine when executed with Python 3.11. Alas, pylint 3.3.4 reports two errors: test.py:5:0: E0240: Inconsistent method resolution order for class 'StructureMeta' (inconsistent-mro) test.py:9:0: E1139: Invalid metaclass 'StructureMeta' used (invalid-metaclass) How do I need to change the meta class to fix the error reported by pylint? Is it even a problem? | As you can attest, there is no "error" in this code. Working with metaclasses is hard - and having to create a compatible metaclass in cases like this shows some of the side effects. The problem is most of what metaclasses do are things that take effect as the code is executed (i.e. in runtime) - while analysis tools like pylint, pyright, mypy, all try to see Python as if it was a static language, and as they evolve, they incorporate patterns to understand ever a little bit more of the language's dynamism. You simply reached a step that pylint (or whatever tool) haven't reached yet - the plain workaround is just to ignore its fake error message (by adding a # noQA or equivalent comment in the infringing line). Do it without ignoring the the linting error No, seriously, don't do this! The copying of the Mapping class in this way is subject to unforeseen side effects But, for the sake of entertainment and completeness, this is the path to avoid the linter-error (involving much more experimental code practices and subject to surprise side effects than just silencing the QA tool): A workaround there might be not to use the Mapping base class as it is, since it has a conflicting metaclass - it is possible for your resulting class to be registered as a Mapping (by using the Mapping.register call as a decorator) - so that isinstance(struct, Mapping) will still be true. That would imply, of course, in reimplementing all the goodness collections.abc.Mapping have, in wiring up all of a mapping logic so that one just have to implement a few methods. Instead of that, we might just create a "non-abstract" version of Mapping by copying all of its methods over to another, plain, class - and use that to the multiple-inheritance with ctypes.Struct from collections import abc import ctypes ConcreteMapping = type("ConcreteMapping", (), {key: getattr(abc.Mapping, key) for key in dir(abc.Mapping)}) @abc.Mapping.register class Struct(ctypes.Structure, ConcreteMapping): pass s = Struct() assert isinstance(s, abc.Mapping) assert isinstance(s, ctypes.Structure) (Needless to say, this throws away the only gain of abstractclasses in Python: the RuntimeError at the point one attempts to instantiate an incomplete "Mapping" class, without implementing one of the required methods as per the docs.) Keep it simple: No Linter Error, no metaclass and no frankenstein clonning Actually, there is a path forward, without potential side effects like either combining metaclasses (that, although working, may yield problems as Python versions change due to subtle modifications in how these metaclasses interact) - and without clonning collections.abc.Mapping as above - if you are willing to let go of the most of the benefficts of inheriting from Mapping, namely, getting the .get, __contains__, values, keys, items, __ne__, __eq__ methods for free: Just implement the methods your codebase will be actually using from Mapping (which might be just __getitem__, in this case), and register your class as a subclass of collections.abc.Mapping: the isinstance check will return True. No side effects - just that if one attempts to use one of the non-existing methods for your class, it will raise a RuntimeError: @collections.abc.Mapping.register class Struct(ctypes.Structure): _fields_ = ... def __getitem__(self, key): ... | 1 | 1 |
79,402,605 | 2025-1-31 | https://stackoverflow.com/questions/79402605/can-storing-an-api-key-in-env-file-for-a-containerized-python-app-be-considered | The title is the question, can storing an API-key in .env file for a containerized Python app be considered safe? While writing, a Nginx/Docker set-up question came to mind as well, asked at the bottom. A little background, I've made a Python app that I want to deploy with Streamlit on a Linux-based server. All is hobby-wise but aiming to learn industry practices. Therefore, I considered the web app in production environment as opposed to development. The script MyClass.py looks like this, API-key is stored in same directory as .env. (mock-up example, may be incomplete): import os from dotenv import load_dotenv load_dotenv() api_key = os.getenv('Super_Private_API_key') class MyClass: def __init__(self, foo: str): self.api_answer = _api_call(foo, api_key) def _api_call(foo, key) -> str: # Make API call with foo and api_key and convert to string return string And the app looks like: import streamlit as st import MyClass as mc string = st.text_input(placeholder="put 'foo' or 'bar' here", label="input") my_variable = mc.MyClass(string) # Return the API answer st.write(my_variable.api_answer) The set-up would be a Ubuntu server with a Docker container. The container would contain also Ubuntu or Alpine (for lightweigth), Nginx and Python 3.12. I would set-up the proper network to display to the outside world. Question(s): If the API-key, placed in a .env-file, placed in the same directory as MyClass.py, placed in a virtual (needed when in container?) Python environment to run my app, can that be considered safe/secure? Would separating MyClass.py and app.py within the container file system be safer? E.g. ~/myapp/app.py and ~/my_scripts/MyClass.py and make the class globally accessible. Is it safer to place the .env-file in the base Linux and let the container only-read it? Though that feels same difference to me. Additional question when writing this: With the aim of making more small apps, it seems wiser to install Nginx on the base Linux (Ubuntu) and make Alpine containers running only Python-apps and have them connect with Nginx outside the container. Does this approach enhance/weakens security or is it same difference, considering that an isolated approach feels securer. I am aware that there are secret managers such as Vault or Secret Manager but for me it's one step at a time. I am also trying to immerse myself in the mechanics and safeties/vulnerabilities of the tools I am using. In the end, I imagine, using a secret manager is easier to learn than understanding the extend of what to do and what not to do, with tools you are using. | At the end of the day, your program is going to need to have this value, in a variable, in plain text. So the question is how many people do you want to be able to see it before then? There is a lot of "it depends" here. If your application is handling real-world money or particularly sensitive data, you might want (or need) some more aggressive security settings, and your organization (or certification) might require some specific setups. I can also envision an environment where what you've shown in the question is just fine. ... industry practices ... production environment ... Set up some sort of credential store. Hashicorp Vault is a popular usually-free-to-use option; if you're running in Amazon Web Services, it has a secret manager; there are choices that allow you to put an encrypted file in source control and decrypt it at deploy time; and so on. From there you have three choices: Wire your application directly into the credential store. Most secure, but requires code changes, and hardest to run in development. Build automation at deploy time to extract the credential into a file, and read the file in your code (without involving an environment variable; the file is not .env). Build automation at deploy time to extract the credential directly into an environment variable. There's a tradeoff of complexity vs. security level here. I've worked with systems that extract the credentials into Kubernetes Secrets which get turned into environment variables; in that setup, given the correct Kubernetes permissions, you could read back the Secret. But given the correct permissions, you could also launch a Pod that mounted the Secret and read it that way, or with a ServiceAccount that allowed it to impersonate the real service to the credential store. The main points here are that an average developer doesn't actually have the credential, and that the credential can be different in different environments. [Is] the API-key, placed in a .env-file ... safe/secure? If the file isn't checked into source control, and nobody else can read it (either through Unix permissions or by controlling login access to the box), it's probably fine, but it depends on your specific organization's requirements. Would separating MyClass.py and app.py within the container file system be safer? Your credential isn't actually written in either of these files, so it makes no difference. These files should both be in the image, and the credential should be injected at deploy time. Is it safer to place the .env-file in the base Linux and let the container only-read it? Traditionally, environment variables are considered a little less secure: if you're logged into the box, ps can often show them, where a credential file can be set to a mostly-unreadable mode 0400. I might argue this is a little less relevant in a container world, but only because in practice access to the Docker socket gives you unrestricted root-level access anyways. In Kubernetes, even if a developer has access to the Kubernetes API, they won't usually be able to directly log into the nodes. This means there's an argument to put the credential in a file not named .env. However, if you're trying to make the file user-read-only (mode 0400) then you need to be very specific about the user ID the container is using, and some images don't necessarily tolerate this well. (If your code in the image is owned by root and world-readable but not writable, and you don't write to any directory that's not externally mounted, you're probably in good shape.) ... install Nginx on the base Linux ... Unless you have some specific configurations on that Nginx proxy that you think will improve security in some way, there's not a security-related reason to do this. There's an argument that adding an additional software layer decreases security, by adding another thing that could have exploitable bugs (Nginx has a pretty good track record though). You might want this reverse proxy anyways for other reasons, and it's totally reasonable to add it. That could let you run multiple applications on the same host without using multiple ports. Having a proxy is pretty common (I keep saying Kubernetes, and its Ingress and Gateway objects provide paths to set one up). Having two proxies is IME generally considered mildly unsightly but not a real problem. | 2 | 1 |
79,402,318 | 2025-1-31 | https://stackoverflow.com/questions/79402318/in-an-array-of-counters-that-reset-find-the-start-end-index-for-counter | Given an array that looks like this: values [0, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 0, 0, 1, 2, 3] index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 If I search for index 3, I want to get the indexes of the start and end indexes for that counter, before it is reset again, which is 2 - 6. And for index 10, I want to get 8 - 13. And for index 16, I want to get 16 - 18. How can I achieve this in numpy? | You can get the start/end coordinates of the non-null stretches with something like: idx = np.nonzero(values == 0)[0] start = idx+1 end = np.r_[idx[1:]-1, len(values)-1] m = start<end indices = np.c_[start, end][m] indices: array([[ 2, 6], [ 8, 13], [16, 18]]) Then get the position with searchsorted (assuming you only pass non-zeros indices, else you need an additional check (e.g. is values[position] != 0) and explanation of what should be the output): indices[np.searchsorted(indices[:, 1], 2)] # [ 2, 6] indices[np.searchsorted(indices[:, 1], 10)] # [ 8, 13] indices[np.searchsorted(indices[:, 1], 16)] # [16, 18] And you can get multiple targets at once: target = [2, 6, 10, 16] indices[np.searchsorted(indices[:, 1], target)] array([[ 2, 6], [ 2, 6], [ 8, 13], [16, 18]]) And if you have indices of zero-values you could mask them in the output: target = [1, 2, 6, 7, 10, 16] out = np.ma.masked_array(indices[np.searchsorted(indices[:, 1], target)], np.broadcast_to(values[target, None]==0, (len(target), 2)) ) [[-- --] [ 2 6] [ 2 6] [-- --] [ 8 13] [16 18]] Used input: values = np.array([0, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 6, 0, 0, 1, 2, 3]) | 4 | 3 |
79,402,325 | 2025-1-31 | https://stackoverflow.com/questions/79402325/implement-a-knights-tour-algorithm-using-backtracking-in-python | I try to solve Knight tour problem with python and backtracking, but the code doesn't respond likely I want... This is my code in python: import random board = [ [0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0], [0,0,0,0,0,0] ] intialCell = [random.randint(0,5),random.randint(0,5)] path = [] def besideCells(s): moves = [ (2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2) ] return [[s[0] + dx, s[1] +dy] for dx,dy in moves if 0 <= s[0] +dx < 6 and 0 <= s[1] +dy < 6 and board[s[0] +dx][s[1] +dy] == 0] def choose(board,s,path): for l in board: print(l) print(path) print('---###---') varbesideCells = besideCells(s) path.append(s) board[s[0]][s[1]] = len(path) if len(path) == 36: return True #str(path) if not varbesideCells: return False found = False for cells in varbesideCells: if choose(board,cells,path): return True path.pop() board[s[0]][s[1]] = 0 return found print(intialCell) print(besideCells(intialCell)) print(choose(board,intialCell,path)) The besideCells function return a list that have the allowed cells in the board can be chosen The choose function try to add a cell to the path and continue until don't exist a valid cell for a tour, with recursive and backtracking algorithm... The algorithm does not always produce a valid tour, leading to duplicate numbers or incomplete paths. I tried rearranging function calls, but this did not solve the issue. | The problem is that your code can return False without undoing the latest update you made to path and board. To solve this, just remove the following lines: if not varbesideCells: return False If now that condition is true (i.e. varbesideCells is an empty list), then execution will continue as follows: it will not make any iteration of the loop, and so the boolean found will be False. But now the two updates to path and board are properly reverted before that False is returned. Other remarks Your functions have some dependencies on global variables (e.g. board in besideCells) and hardcoded constants (like 5, 6 and 36). It would be better to avoid havinng board and path as global variables. Instead, pass all needed variables as arguments and support different board sizes. For debugging purposes it would seem more useful to print the board and path after they have been updated. Some names are misleading or strange: varbesideCells is an odd name (starting with var). You could call this neighbors, and the function that retrieves them get_neighbors. cells is not a collection of multiple cells; it is one cell. In the context where you use it, it could be named neighbor s doesn't really give much information what it is about. This could be named cell. If you use randrange instead of randint, you can use the size of the board as argument. There is much more that could be improved, but with the above remarks, it could already look like this: from random import randrange def get_neighbors(board, cell): n = len(board) moves = [ (2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2) ] return [[cell[0] + dx, cell[1] + dy] for dx, dy in moves if 0 <= cell[0] + dx < n and 0 <= cell[1] + dy < n and board[cell[0] + dx][cell[1] + dy] == 0] def choose(board, cell, path): path.append(cell) board[cell[0]][cell[1]] = len(path) # If you want to print the board for debugging, do it here. But # the size of the output will grow exponentially. if len(path) == len(board) ** 2: return True #str(path) neighbors = get_neighbors(board, cell) found = False for neighbor in neighbors: if choose(board, neighbor, path): return True path.pop() board[cell[0]][cell[1]] = 0 return found def find_knight_path(n): board = [[0] * n for _ in range(n)] intialCell = [randrange(n), randrange(n)] found = choose(board, intialCell, []) return board if found else None solution = find_knight_path(5) if solution: for row in solution: print(row) else: print("No solution found. Maybe try again from a different starting point.") Finally, this algorithm will not be suitable for much larger boards, as it will need to backtrack a lot. You could have a look at Warnsdorff's rule. | 1 | 1 |
79,399,929 | 2025-1-30 | https://stackoverflow.com/questions/79399929/how-to-sample-pandas-dataframe-using-a-normal-distribution-by-using-random-state | I am trying to write Pandas code that would allow me to sample DataFrame using a normal distribution. The most convinient way is to use random_state parameter of the sample method to draw random samples, but somehow employ numpy.random.Generator.normal to draw random samples using a normal (Gaussian) distribution. import pandas as pd import numpy as np import random # Generate a list of unique random numbers temp = random.sample(range(1, 101), 100) df = pd.DataFrame({'temperature': temp}) # Sample normal rng = np.random.default_rng() triangle_df.sample(n=10, random_state=rng.normal()) This obviously doesn't work. There is an issue with random_state=rng.normal(). | Passing a Generator to sample just changes the way the generator is initialized, it won't change the distribution that is used. Random sampling is uniform (choice is used internally [source]) and you can't change that directly with the random_state parameter. Also note that normal sampling doesn't really make sense for discrete values (like the rows of a DataFrame). Now let's assume that you want to sample the rows of your DataFrame in a non-uniform way (for example with weights that follow a normal distribution) you could use the weights parameter to pass custom weights for each row. Here is an example with normal weights (although I'm not sure if this makes much sense): rng = np.random.default_rng() weights = abs(rng.normal(size=len(df))) sampled = df.sample(n=10000, replace=True, weights=weights) Another example based on this Q/A. Here we'll give higher probabilities to the rows from the middle of the DataFrame: from scipy.stats import norm N = len(df) weights = norm.pdf(np.arange(N)-N//2, scale=5) df.sample(n=10, weights=weights).sort_index() Output (mostly rows around 50): temperature 43 94 44 50 47 80 48 99 50 63 51 52 52 1 53 20 54 41 63 3 Probabilities of sampling with a bias for the center (and sampled points): | 2 | 2 |
79,401,274 | 2025-1-30 | https://stackoverflow.com/questions/79401274/extracting-text-from-wikisource-using-beautifulsoup-returns-empty-result | I'm trying to extract the text of a book from a Wikisource page using BeautifulSoup, but the result is always empty. The page I'm working on is Le Père Goriot by Balzac. Here's the code I'm using: import requests from bs4 import BeautifulSoup def extract_text(url): try: # Fetch the page content response = requests.get(url) response.raise_for_status() soup = BeautifulSoup(response.text, 'html.parser') # Find the main text section text_section = soup.find("div", {"class": "mw-parser-output"}) if not text_section: raise ValueError("Text section not found.") # Extract text from paragraphs and other elements text_elements = text_section.find_all(["p", "div"]) text = "\n".join(element.get_text().strip() for element in text_elements if element.get_text().strip()) return text except Exception as e: print(f"Error extracting text from {url}: {e}") return None # Example usage url = "https://fr.wikisource.org/wiki/Le_P%C3%A8re_Goriot_(1855)" text = extract_text(url) if text: print(text) else: print("No text found.") Problem: The extract_text function always returns an empty string, even though the page clearly contains text. I suspect the issue is related to the structure of the Wikisource page, but I'm not sure how to fix it. | To find the text section you are using the class mw-parser-output. But this class is present for two different div elements. And the first one with this class doesn't contain the texts. The find function returns the first element found. That is why you can't get the texts. The div with class prp-pages-output contains all the text you want and the div is inside the second div with the class you have used. You can use this class to get the texts. You don't need to parse the p and div tags to get the text. You can get the text directly from the parent element and it would work fine. import requests from bs4 import BeautifulSoup def extract_text(url): try: # Fetch the page content response = requests.get(url) response.raise_for_status() soup = BeautifulSoup(response.text, 'html.parser') # Find the main text section text_section = soup.find("div", {"class": "prp-pages-output"}) if not text_section: raise ValueError("Text section not found.") # Extract text from paragraphs and other elements text = text_section.get_text().strip() return text except Exception as e: print(f"Error extracting text from {url}: {e}") return None # Example usage url = "https://fr.wikisource.org/wiki/Le_P%C3%A8re_Goriot_(1855)" text = extract_text(url) if text: print(text) else: print("No text found.") But the first div and first two p tag elements are not the text from the book but the data about the book and the previous/next book's title/link. So if you want just the book content and not other texts, then try the following. Here I have used the CSS selector which selects all the elements after the div tag that contains the meta info. import requests from bs4 import BeautifulSoup def extract_text(url): try: # Fetch the page content response = requests.get(url) response.raise_for_status() soup = BeautifulSoup(response.text, 'html.parser') # Extract the text text_elements = soup.select("div.prp-pages-output > div[itemid] ~ *") text = "\n".join(element.get_text().strip() for element in text_elements) return text except Exception as e: print(f"Error extracting text from {url}: {e}") return None # Example usage url = "https://fr.wikisource.org/wiki/Le_P%C3%A8re_Goriot_(1855)" text = extract_text(url) if text: print(text) else: print("No text found.") | 1 | 2 |
79,401,374 | 2025-1-30 | https://stackoverflow.com/questions/79401374/how-to-convert-multiple-video-files-in-a-specific-path-outputvideo-with-the-same | The following code to converts a one video from mp4 to avi using ffmpeg ffmpeg.exe -i "kingman.mp4" -c copy "kingman.avi" I need to convert multiple videos in a specific path. "outputvideo" with the same video name This is the my code that needs to be modified. from pathlib import Path import subprocess from glob import glob all_files = glob('./video/*.mp4') for filename in all_files: videofile = Path(filename) outputfile = Path(r"./outputvideo/") codec = "copy" ffmpeg_path = (r"ffmpeg.exe") outputfile = outputfile / f"{videofile.stem}" outputfile.mkdir(parents=True, exist_ok=True) command = [ f"{ffmpeg_path}", "-i", f"{videofile}", "-c", f"{codec}", "{outputfile}",] process = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True) process.communicate() | Use ffmpeg.exe if you are using windows, else just ffmpeg will work in case of linux from pathlib import Path import subprocess import os from glob import glob input_folder = "video" output_folder = "outputvideo" abs_path = os.getcwd() all_files = glob(f'{abs_path}/{input_folder}/*.mp4') output_dir = os.path.join(abs_path , output_folder) out_ext = "avi" os.makedirs(output_dir , exist_ok=True) input_dir = os.path.join(abs_path, input_folder) for filepath in all_files: stem = os.path.splitext(os.path.basename(filepath))[0] videofile = Path(filepath) codec = "copy" ffmpeg_path = (r"ffmpeg") outputfile = f"{output_dir}/{stem}.{out_ext}" command = [ f"{ffmpeg_path}", "-i", f"{videofile}", "-c", f"{codec}", f"{outputfile}",] process = subprocess.Popen( command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, bufsize=1, universal_newlines=True) stdout, stderr = process.communicate() # Print output for debugging print(f"Processed: {videofile} -> {outputfile}") if stderr: print(f"FFmpeg Error: {stderr}") | 1 | 1 |
79,401,430 | 2025-1-30 | https://stackoverflow.com/questions/79401430/pyspark-jdbc-read-with-partitions | I'm reading data in pyspark from postgres using jdbc connection. The table being read is large, about 240 million rows. I'm attempting to read it into 16 partitions. The read is being performed like this. query = f""" (select receiptid, itemindex, barcode, productnumberraw, itemdescription, itemdescriptionraw, itemextendedprice, itemdiscountedextendedprice, itemquantity, barcodemanufacturer, barcodebrand, barcodecategory1, barcodecategory2, barcodecategory3, isfetchpromo, ispartnerbrand, subscribeandsave, soldby, yyyymm, retailerid, MOD(ABS(CAST('x' || md5(receiptid) AS bit(32))::int), {num_partitions}) AS partition_id from {table}_bck) as subquery """ # Optimize JDBC read options df = spark.read \ .format("jdbc") \ .option("url", pg_url) \ .option("dbtable", query) \ .option("user", pg_properties["user"]) \ .option("password", pg_properties["password"]) \ .option("driver", pg_properties["driver"]) \ .option("numPartitions", num_partitions) \ .option("partitionColumn", "partition_id") \ .option("lowerBound", 0) \ .option("upperBound", num_partitions - 1) \ .load() df = df.withColumn( "productnumberint", regexp_replace(col("productnumberraw"), "[#-]", "").cast(LongType()) ).withColumn( "barcodeint", regexp_replace(col("barcode"), "[#-]", "").cast(LongType()) ) I then want to write the data back to postgres df.rdd.foreachPartition(write_partition) where write_partition just iterates the rows and does batch inserts using psycopg2. My issue is that I am seeing the partition queries doubled on my database. SELECT "receiptid","itemindex","barcode","productnumberraw","itemdescription","itemdescriptionraw","itemextendedprice","itemdiscountedextendedprice","itemquantity","barcodemanufacturer","barcodebrand","barcodecategory1","barcodecategory2","barcodecategory3","isfetchpromo","ispartnerbrand","subscribeandsave","soldby","yyyymm","retailerid","partition_id" FROM (select receiptid, itemindex, barcode, productnumberraw, itemdescription, itemdescriptionraw, itemextendedprice, itemdiscountedextendedprice, itemquantity, barcodemanufacturer, barcodebrand, barcodecategory1, barcodecategory2, barcodecategory3, isfetchpromo, ispartnerbrand, subscribeandsave, soldby, yyyymm, retailerid, MOD(ABS(CAST('x' || md5(receiptid) AS bit(32))::int), 16) AS partition_id from mytable) as subquery WHERE "partition_id" >= 10 AND "partition_id" < 11 What is causing the double read of the data? | If you're seeing them duplicated in pg_stat_activity, some of them might show a leader_pid pointing at the pids of others, which means the query is being handled by multiple worker processes. Seeing your queries distributed between multiple workers is especially likely on partitioned tables. The fact that you specifically narrowed this down to a single partition: WHERE "partition_id" >= 10 AND "partition_id" < 11 doesn't prevent additional workers from being used. It also doesn't disqualify the default partition as a scan target. You can tweak asynchronous behaviour settings to control that: select name, setting, short_desc from pg_settings where name in ( 'max_worker_processes' ,'max_parallel_workers_per_gather' ,'max_parallel_maintenance_workers' ,'max_parallel_workers' ,'parallel_leader_participation'); name setting short_desc max_parallel_maintenance_workers 2 Sets the maximum number of parallel processes per maintenance operation. 
max_parallel_workers 8 Sets the maximum number of parallel workers that can be active at one time. max_parallel_workers_per_gather 4 Sets the maximum number of parallel processes per executor node. max_worker_processes 8 Maximum number of concurrent worker processes. parallel_leader_participation on Controls whether Gather and Gather Merge also run subplans. demo at db<>fiddle create table test (partition_id int, payload text) partition by list(partition_id); create table test1 partition of test for values in (1); create table test2 partition of test for values in (2); create table test_default partition of test default; select setseed(.42); insert into test select 1, md5(random()::text) from generate_series(1,5e5); If I now tell a parallel dblink client to query that table and observe pg_stat_activity, I'm also getting the query twice, as a pid-leader_pid pair: create extension if not exists dblink; select dblink_connect('parallel_client',''); select dblink_send_query('parallel_client', $q$ select*from test where partition_id>=1 and partition_id<2; $q$); select pid,leader_pid,query from pg_stat_activity; pid leader_pid query 786 null create extension if not exists dblink;select dblink_connect('parallel_client','');select dblink_send_query('parallel_client', $q$ select*from test where partition_id>=1 and partition_id<2; $q$);select pid,leader_pid,query from pg_stat_activity; 787 null select*from test where partition_id>=1 and partition_id<2; 788 787 select*from test where partition_id>=1 and partition_id<2; Explain also shows that the query runs on 2 workers: explain(analyze,verbose,settings) select*from test where partition_id>=1 and partition_id<2; QUERY PLAN Gather (cost=1000.00..8766.49 rows=2652 width=36) (actual time=0.450..206.506 rows=500000 loops=1) Output: test.partition_id, test.payload Workers Planned: 2 Workers Launched: 2 -> Parallel Append (cost=0.00..7501.29 rows=1105 width=36) (actual time=0.013..48.702 rows=166667 loops=3) Worker 0: actual time=0.015..3.538 rows=15360 loops=1 Worker 1: actual time=0.015..3.262 rows=15360 loops=1 -> Parallel Seq Scan on public.test1 test_1 (cost=0.00..7474.56 rows=1102 width=36) (actual time=0.012..35.920 rows=166667 loops=3) Output: test_1.partition_id, test_1.payload Filter: ((test_1.partition_id >= 1) AND (test_1.partition_id < 2)) Worker 0: actual time=0.014..2.525 rows=15360 loops=1 Worker 1: actual time=0.014..2.207 rows=15360 loops=1 -> Parallel Seq Scan on public.test_default test_2 (cost=0.00..21.21 rows=4 width=36) (actual time=0.001..0.001 rows=0 loops=1) Output: test_2.partition_id, test_2.payload Filter: ((test_2.partition_id >= 1) AND (test_2.partition_id < 2)) Settings: max_parallel_workers_per_gather = '4', search_path = 'public' Planning Time: 0.232 ms Execution Time: 224.640 ms Even if you change the condition to point at the specific partition and disqualify the default, you can still get more than one worker: explain(analyze,verbose,settings) select*from test where partition_id=1; QUERY PLAN Gather (cost=1000.00..8187.90 rows=2646 width=36) (actual time=0.206..240.995 rows=500000 loops=1) Output: test.partition_id, test.payload Workers Planned: 2 Workers Launched: 2 -> Parallel Seq Scan on public.test1 test (cost=0.00..6923.30 rows=1102 width=36) (actual time=0.017..31.386 rows=166667 loops=3) Output: test.partition_id, test.payload Filter: (test.partition_id = 1) Worker 0: actual time=0.022..2.331 rows=13440 loops=1 Worker 1: actual time=0.018..2.389 rows=13440 loops=1 Settings: max_parallel_workers_per_gather = '4', 
search_path = 'public' Planning Time: 0.227 ms Execution Time: 268.901 ms This can be changed on session or even transaction level: set max_parallel_workers=0; set max_parallel_workers_per_gather=0; explain(analyze,verbose,settings) select*from test where partition_id>=1 and partition_id<2; QUERY PLAN Append (cost=0.00..12147.44 rows=2652 width=36) (actual time=0.011..94.449 rows=500000 loops=1) -> Seq Scan on public.test1 test_1 (cost=0.00..12105.13 rows=2646 width=36) (actual time=0.010..59.975 rows=500000 loops=1) Output: test_1.partition_id, test_1.payload Filter: ((test_1.partition_id >= 1) AND (test_1.partition_id < 2)) -> Seq Scan on public.test_default test_2 (cost=0.00..29.05 rows=6 width=36) (actual time=0.012..0.012 rows=0 loops=1) Output: test_2.partition_id, test_2.payload Filter: ((test_2.partition_id >= 1) AND (test_2.partition_id < 2)) Settings: max_parallel_workers = '0', max_parallel_workers_per_gather = '0', search_path = 'public' Planning Time: 0.170 ms Execution Time: 109.770 ms explain(analyze,verbose,settings) select*from test where partition_id=1; QUERY PLAN Seq Scan on public.test1 test (cost=0.00..10782.11 rows=2646 width=36) (actual time=0.025..52.193 rows=500000 loops=1) Output: test.partition_id, test.payload Filter: (test.partition_id = 1) Settings: max_parallel_workers = '0', max_parallel_workers_per_gather = '0', search_path = 'public' Planning Time: 0.073 ms Execution Time: 67.570 ms | 2 | 0 |
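If turning off Postgres parallel workers for these partition scans is the goal, it may also be possible to do it from the Spark side. A sketch using the JDBC source's `sessionInitStatement` option (a statement executed once per JDBC session before the partition query runs); treat the exact effect on your Postgres planner as something to verify rather than guaranteed:

```python
df = (
    spark.read.format("jdbc")
    .option("url", pg_url)
    .option("dbtable", query)
    .option("user", pg_properties["user"])
    .option("password", pg_properties["password"])
    .option("driver", pg_properties["driver"])
    .option("numPartitions", num_partitions)
    .option("partitionColumn", "partition_id")
    .option("lowerBound", 0)
    .option("upperBound", num_partitions - 1)
    # disable parallel workers only for the scans issued by these JDBC sessions
    .option("sessionInitStatement", "SET max_parallel_workers_per_gather = 0")
    .load()
)
```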
79,401,140 | 2025-1-30 | https://stackoverflow.com/questions/79401140/representing-tridiagonal-matrix-using-numpy | I am trying to solve a mathematical problem related to matrices using numpy as shown below: I am really finding it hard to represent this kind matrix structure using numpy. I really donot want to type these values because I want to understand how this kind of structures are represented using python. Consider the empty places as zero. Thank you for great help. | The matrix has to be decomposed into several parts. First, the middle region forms a block diagonal matrix, where each block is a 4x4 Toeplitz matrix. # makes a shifted diagonal matrix def E(n, v, k): return v * np.eye(n, k=k) def toeplitz_block(n, v_list, k_list): return sum(E(n, v, k) for v, k in zip(v_list, k_list)) Then we can do the following: n = 4 # size of block m = 3 # how many blocks # Make block diagonal matrix A A = np.zeros((m, n, m, n)) u, v = np.diag_indices(m) A[u, :, v, :] = toeplitz_block(n, [-1, 3, -1], [-1, 0, 1]) A = A.reshape(m * n, m * n) print(A.astype(int)) # Output: [[ 3 -1 0 0 0 0 0 0 0 0 0 0] [-1 3 -1 0 0 0 0 0 0 0 0 0] [ 0 -1 3 -1 0 0 0 0 0 0 0 0] [ 0 0 -1 3 0 0 0 0 0 0 0 0] [ 0 0 0 0 3 -1 0 0 0 0 0 0] [ 0 0 0 0 -1 3 -1 0 0 0 0 0] [ 0 0 0 0 0 -1 3 -1 0 0 0 0] [ 0 0 0 0 0 0 -1 3 0 0 0 0] [ 0 0 0 0 0 0 0 0 3 -1 0 0] [ 0 0 0 0 0 0 0 0 -1 3 -1 0] [ 0 0 0 0 0 0 0 0 0 -1 3 -1] [ 0 0 0 0 0 0 0 0 0 0 -1 3]] To get the final desired matrix, we can just add another Toeplitz matrix that has the diagonal of -1s. B = A + E(n * m, -1, -4) print(B.astype(int)) # Output: [[ 3 -1 0 0 0 0 0 0 0 0 0 0] [-1 3 -1 0 0 0 0 0 0 0 0 0] [ 0 -1 3 -1 0 0 0 0 0 0 0 0] [ 0 0 -1 3 0 0 0 0 0 0 0 0] [-1 0 0 0 3 -1 0 0 0 0 0 0] [ 0 -1 0 0 -1 3 -1 0 0 0 0 0] [ 0 0 -1 0 0 -1 3 -1 0 0 0 0] [ 0 0 0 -1 0 0 -1 3 0 0 0 0] [ 0 0 0 0 -1 0 0 0 3 -1 0 0] [ 0 0 0 0 0 -1 0 0 -1 3 -1 0] [ 0 0 0 0 0 0 -1 0 0 -1 3 -1] [ 0 0 0 0 0 0 0 -1 0 0 -1 3]] | 1 | 2 |
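For this particular structure, the same matrix can also be written in a couple of lines with `np.kron`, which repeats the 4x4 tridiagonal block along the diagonal; the extra `-1` band at offset `-n` is then added exactly as in the answer:

```python
import numpy as np

n, m = 4, 3  # block size, number of blocks
# 4x4 tridiagonal block: 3 on the diagonal, -1 just above and below
T = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# repeat T along the diagonal, then add the -1 band n positions below the main diagonal
B = np.kron(np.eye(m), T) - np.eye(n * m, k=-n)
```

For large versions of this matrix the same pattern is usually built sparse (e.g. with `scipy.sparse.kron` and `scipy.sparse.eye`) rather than dense.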
79,400,679 | 2025-1-30 | https://stackoverflow.com/questions/79400679/plotting-lambda-functions-in-python-and-mpmath-plot | I'm using the mpmath plot function (which simply uses pyplot, as far as I understood). Consider the following code: from math import cos, sin import mpmath as mp mp.plot([sin, cos], [0, 3]) # this is fine l = [sin, cos] mp.plot([lambda x: f(2*x) for f in l], [0, 3]) # this only plots sin(2x)! Is there anything I'm missing here, or it's a bug in the plot function? | See the relevant documentation here. Here's a quick fix that does what you want. l = [sin, cos] mp.plot([lambda x, f=f: f(2*x) for f in l], [0, 3]) So, what's going on here? The key is that each lambda x: f(2*x) is equivalent to something of the form def func(x): return f(2*x) Importantly, the f within each lambda function is NOT replaced by the corresponding function from l in the list comprehension, there is literally an f in the function definition. As such, whenever the lambda function is called, Python looks for something called f. f is not defined within the scope of the function, so it uses the value of f from the next level up, which is the scope of the list comprehension (interestingly, f is not a variable in the main scope). Because the list has already been constructed, f within the scope of the list comprehension refers to the last item of the list l, namely cos. For that reason, both functions within the list [lambda x: f(2*x) for f in l] yield cos(2x). The quick fix I provide puts the desired f into the scope of the function through an optional argument. If we write out the first lambda x, f=f: f(2*x) as a long form function definition, we have the following: def func(x, f=sin): return f(2*x) The key here is that unlike the f of the f(2*x), the f on the right hand side of the = in f=f DOES get replaced by the corresponding function from l in the list comprehension. So, there is no need for Python to go outside of the local scope of the function. Here's an alternative approach, using the "function factory" method suggested here. def make_func(f): def func(x): return f(2*x) return func l = [sin, cos] mp.plot([make_func(f) for f in l], [0,3]) Or, sticking to lambdas, def make_func(f): return lambda x: f(2*x) l = [sin, cos] mp.plot([make_func(f) for f in l], [0,3]) The key here is that the value of f (i.e. sin or cos) is kept within the scope of the make_func function call. A potential advantage of this method is that the resulting lambda functions do not have f as an optional parameter. In a sense, this amounts to "decorating lambdas", as you suggest in your comment. | 1 | 1 |
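Another common way to pin the current `f` without relying on the default-argument trick is `functools.partial`, which freezes `f` as the first positional argument of a two-argument lambda; a small sketch reusing the same `l` and `mp.plot` call as above:

```python
from functools import partial
from math import cos, sin

import mpmath as mp

l = [sin, cos]
# partial(g, f) returns a callable of x only, with this iteration's f baked in
funcs = [partial(lambda f, x: f(2 * x), f) for f in l]
mp.plot(funcs, [0, 3])
```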
79,400,482 | 2025-1-30 | https://stackoverflow.com/questions/79400482/map-each-element-of-torch-tensor-with-its-value-in-the-dict | Suppose i have a tensor t consisting only zeros and ones: t = torch.Tensor([1, 0, 0, 1]) And a dict with the weights: weights = {0: 0.1, 1: 0.9} I want to form a new tensor new_t, such that every element in tensor t is mapped to the corresponding value in the dict weights: new_t = torch.Tensor([0.9, 0.1, 0.1, 0.9]) Is there an elegant way to do this without iterating over tensor t? I've heard about torch.apply, but it only works if tensor t is on the CPU, is there any other options? | If you convert your weights dict into a tensor, you can index directly t = torch.tensor([1, 0, 0, 1]) weights = torch.tensor([0.1, 0.9]) new_t = weights[t] new_t >tensor([0.9000, 0.1000, 0.1000, 0.9000]) | 1 | 1 |
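If the dictionary keys are small non-negative integers but not just 0 and 1, the same indexing trick still works after building a dense lookup tensor from the dict; a sketch (the zero default for keys missing from the dict is an assumption):

```python
import torch

t = torch.tensor([1, 0, 0, 1])
weights = {0: 0.1, 1: 0.9}

# dense table indexed by key, created on the same device as t
lookup = torch.zeros(max(weights) + 1, device=t.device)
for k, v in weights.items():
    lookup[k] = v

new_t = lookup[t]  # tensor([0.9000, 0.1000, 0.1000, 0.9000])
```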
79,399,353 | 2025-1-30 | https://stackoverflow.com/questions/79399353/insert-or-update-when-importing-from-json | My SQLAlchemy ORM model is populated by a JSON file that occasionally changes. The JSON file does not provide an integer primary key but has a unique alphanumeric ProductCode. My model: class ProductDescriptor(Base): __tablename__ = 'product_descriptor' id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True) ProductCode: Mapped[str] = mapped_column(String(50), unique=True) DisplayName: Mapped[str] = mapped_column(String(50)) Description: Mapped[str] = mapped_column(String(1000)) ... This answer makes sense until this line: on_duplicate_stmt = insert_stmt.on_duplicate_key_update(dict(txt=insert_stmt.inserted.txt)) Because the incoming data lacks a key I have nothing to compare. Do I need to change the ProductCode definition? I am using Unique=True. My code: product_list = [] for product in products: # Create filtered list of dicts to be send to the DB product_list.append({ 'ProductName': product.get('ProductName'), 'DisplayName': product.get('DisplayName'), 'Description': product.get('Description'), ... more columns }) insert_stmt = insert(ProductDescriptor).values(product_list) # This is where it goes wrong on_duplicate_stmt = insert_stmt.on_duplicate_key_update() # Trying to deal with integrity errors and rollback requests for product in product_list: try: self.session.add(resource) self.session.commit() except IntegrityError: pass How can I efficiently create an update or create function for bulk records? Do I need to turn my unique field into a key field? Can I keep an autoincrement key field as well? In Django I would be using the update_or_create method where I can specify the key field and provide a defaults dictionary: Profile.objects.update_or_create(custid=user_profile.custid, defaults=defaults) | SQLite lets us specify the matching columns for ON CONFLICT, like so: from sqlalchemy.dialects.sqlite import insert new_values = json.loads("""\ [ {"ProductCode": "code_1", "DisplayName": "display_1", "Description": "description_1"}, {"ProductCode": "code_2", "DisplayName": "display_2", "Description": "description_2"} ] """) insert_stmt = insert(ProductDescriptor).values(new_values) do_update_stmt = insert_stmt.on_conflict_do_update( index_elements=["ProductCode"], set_=dict( DisplayName=insert_stmt.excluded.DisplayName, Description=insert_stmt.excluded.Description, ), ) engine.echo = True with engine.begin() as conn: conn.execute(do_update_stmt) """ BEGIN (implicit) INSERT INTO product_descriptor ("ProductCode", "DisplayName", "Description") VALUES (?, ?, ?), (?, ?, ?) ON CONFLICT ("ProductCode") DO UPDATE SET "DisplayName" = excluded."DisplayName", "Description" = excluded."Description" [no key 0.00093s] ('code_1', 'display_1', 'description_1', 'code_2', 'display_2', 'description_2') COMMIT """ Note that if "ProductCode" is a unique non-nullable column then it is in fact a natural primary key, so the autoincrement integer "id" column is not really necessary | 1 | 1 |
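If the target database is MySQL/MariaDB rather than SQLite (the question's `on_duplicate_key_update` call suggests that dialect), the equivalent bulk upsert keys off any unique index, so the unique `ProductCode` column is enough; a sketch assuming the same `ProductDescriptor` model, the `product_list` payload from the question, and an `engine` object:

```python
from sqlalchemy.dialects.mysql import insert

insert_stmt = insert(ProductDescriptor).values(product_list)
upsert_stmt = insert_stmt.on_duplicate_key_update(
    DisplayName=insert_stmt.inserted.DisplayName,
    Description=insert_stmt.inserted.Description,
)

with engine.begin() as conn:
    conn.execute(upsert_stmt)
```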
79,400,462 | 2025-1-30 | https://stackoverflow.com/questions/79400462/django-managers-vs-proxy-models | I'm currently getting into proxy models and I actually cannot understand when we should give them respect. For me they look very similar to managers Is there some differences or we can implement the same things using proxy models and managers? | Django managers are classes that manage the database query operations on a particular model. Django model manager Proxy models allow you to create a new model that inherits from an existing model but does not create a new database table. proxy model Let me give you an example: class Book(models.Model): title = models.CharField(max_length=100) author = models.CharField(max_length=100) published_date = models.DateField() is_downloadable = models.BooleanField(default=True) If this is your model manager: class BookMnanger(models.Manager): def by_link(self): return self.filter(is_downloadable=True) and you new manager: class Book(models.Model): title = models.CharField(max_length=100) author = models.CharField(max_length=100) published_date = models.DateField() is_downloadable = models.BooleanField(default=True) # your new manager should register to your model downloading = BookMnanger() now, your new custom manager can work as below: my_books = Book.downloading.all() print(my_books) but the proxy: class BookProxy(Book): class Meta: proxy = True def special_method(self): return f"{self.title} by {self.author}, published on {self.published_date}" and your proxy can work like this: book = BookProxy.objects.first() print(book.special_method()) proxies are the way to change behavior of your model but, managers will change your specific queries I can give you more link about them, if you need? | 2 | 2 |
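The two features also combine naturally: a proxy model can make the custom manager its default, so the behaviour change (extra methods, Meta options such as ordering) and the query shortcut live together without creating a new table. A sketch reusing the `Book` model and the `BookMnanger` manager from the answer:

```python
class DownloadableBook(Book):
    # reuse the custom manager as this proxy's default manager
    objects = BookMnanger()

    class Meta:
        proxy = True
        ordering = ["-published_date"]


# same book table underneath, but the filtered queryset is one call away:
downloadable = DownloadableBook.objects.by_link()
```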
79,400,269 | 2025-1-30 | https://stackoverflow.com/questions/79400269/truth-value-for-expr-is-ambiguous-in-with-columns-ternary-expansion-on-dates | I'm trying to account for room usage only during business hours and abridge an event duration if it runs past the end of business hours. I have a dataframe like this: import polars as pl from datetime import datetime df = pl.DataFrame({ 'name': 'foo', 'start': datetime.fromisoformat('2025-01-01 08:00:00'), 'end': datetime.fromisoformat('2025-01-01 18:00:00'), # ends after business hours 'business_end': datetime.fromisoformat('2025-01-01 17:00:00') }) I want to create a duration column that is equal to end unless it's after business_end otherwise set to business_end. For this, I tried the following: df.with_columns( duration=pl.col("end") - pl.col("start") if pl.col("end") <= pl.col("business_end") else pl.col("business_end") - pl.col("start") ) This gives an error: TypeError: the truth value of an Expr is ambiguous Thoughts about how to produce the desired row from the conditional? I can use filter() to find rows where event ends are after business ends, create a frame of those, replace the end time value, merge back in, etc. but I was hoping to keep the original data and only add a new column. | Short answer You use when/then/otherwise instead of if else df.with_columns( duration=pl.when(pl.col("end") <= pl.col("business_end")) .then(pl.col("end") - pl.col("start")) .otherwise(pl.col("business_end") - pl.col("start")) ) Background polars works with expressions inside contexts. What's that mean? Contexts are your with_columns, select, group_by, agg, etc. The inputs to contexts are expressions. Expressions usually start with pl.col() or pl.lit(). They have lots of methods which also return expressions which makes them chainable. The thing about expressions is that they don't have values, they're just instructions. One way to see that clearly is to assign an expression to a normal variable like end=pl.col("end"). You can do that without any DataFrames existing. Once you have a df, you can use that expr in its context df.select(end). When the select context gets the expression pl.col("end"), that's when it'll go fetch the column. You could also make a more complicated expression like my_sum = (pl.col("a") * 2 + pl.col("b").pow(3)) and then even chain off of it df.select(my_sum*2+5) Now getting back to the if, because pl.col("end") doesn't have any values associated with it, python can't evaluate if pl.col("end") <= pl.col("other") which is why you're getting that error. python doesn't have an overload for if so you just can't use it inside a context. Instead you can use the when then otherwise construct. | 3 | 6 |
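For this particular "cap `end` at `business_end`" case, the branch can also be written without `when/then` by taking the row-wise minimum of the two timestamps; a sketch of the equivalent expression:

```python
out = df.with_columns(
    duration=pl.min_horizontal("end", "business_end") - pl.col("start")
)
```

`when/then/otherwise` remains the more general tool once the branches stop being a simple min/max.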
79,399,683 | 2025-1-30 | https://stackoverflow.com/questions/79399683/fastest-way-to-find-the-smallest-possible-sum-of-the-absolute-differences-of-pai | By grouping all the items within an array into pairs and getting their absolute differences, what is the minimum sum of their absolute differences? Example: [4, 1, 2, 3] should return 2 because |1 - 2| + |3 - 4| = 2 [1, 3, 3, 4, 5] should return 1 because |3 - 3| + |4 - 5| = 1 Getting the result for arrays with an even length is pretty easy, because it just requires sorting the array and getting the differences between numbers right next to each other. However, arrays with an odd length are pretty hard because the addition of an extra number. Thus, my current solution for odd length arrays is to remove each number and check the sum using the even-length array approach. Current solution: def smallest_sum(arr): arr = sorted(arr) if len(arr) % 2 == 0: return even_arr_sum(arr) else: return odd_arr_sum(arr) def even_arr_sum(arr): total = 0 for i in range(0, len(arr), 2): total += arr[i+1] - arr[i] return total def odd_arr_sum(arr): total = 0 for i in range(len(arr)): s = even_arr_sum(arr[:i] + arr[i+1:]) if total == 0 or s < total: total = s return total assert smallest_sum([4, 1, 2, 3]) == 2 assert smallest_sum([1, 3, 3, 4, 5]) == 1 However, this is extremely slow and gives a time of n2. Is there a smarter way to handle arrays with an odd length? | Imagine you have sorted seven numbers A to G and you leave out A, thus calculating (C-B)+(E-D)+(G-F). That's adding or subtracting them like this: A B C D E F G - + - + - + (leaving out A) And this is how it looks in general, for leaving out each of the numbers: A B C D E F G - + - + - + (leaving out A) - + - + - + (leaving out B) - + - + - + ... - + - + - + - + - + - + - + - + - + - + - + - + From one row to the next, there's little change. So first compute the total for leaving out A. Then for the total for leaving out B, simply subtract A and add B. And so on. | 5 | 8 |
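A direct implementation of that rolling idea, sketched below: sort once, compute the total for leaving out the first element, then slide the "hole" one position at a time, adjusting the running total by the two elements whose roles change (O(n log n) for the sort, O(n) for the scan):

```python
def smallest_sum(arr):
    a = sorted(arr)
    n = len(a)
    if n % 2 == 0:
        return sum(a[i + 1] - a[i] for i in range(0, n, 2))

    # odd length: start with the total for leaving out a[0] ...
    total = sum(a[i + 1] - a[i] for i in range(1, n, 2))
    best = total
    # ... then move the left-out position from index k to k+1, one step at a time
    for k in range(n - 1):
        sign = -1 if k % 2 == 0 else 1
        total += sign * (a[k] - a[k + 1])
        best = min(best, total)
    return best


assert smallest_sum([4, 1, 2, 3]) == 2
assert smallest_sum([1, 3, 3, 4, 5]) == 1
```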
79,396,894 | 2025-1-29 | https://stackoverflow.com/questions/79396894/doing-pywavelets-calculation-on-gpu | Currently working on a classifier using PyWavelets, here is my calculation block: class WaveletLayer(nn.Module): def __init__(self): super(WaveletLayer, self).__init__() def forward(self, x): def wavelet_transform(img): coeffs = pywt.dwt2(img.cpu().numpy(), "haar") LL, (LH, HL, HH) = coeffs return ( torch.from_numpy(LL).to(img.device), torch.from_numpy(LH).to(img.device), torch.from_numpy(HL).to(img.device), torch.from_numpy(HH).to(img.device), ) # Apply wavelet transform to each channel separately LL, LH, HL, HH = zip( *[wavelet_transform(x[:, i : i + 1]) for i in range(x.shape[1])] ) # Concatenate the results LL = torch.cat(LL, dim=1) LH = torch.cat(LH, dim=1) HL = torch.cat(HL, dim=1) HH = torch.cat(HH, dim=1) return torch.cat([LL, LH, HL, HH], dim=1) The output from this module goes to a resnet block for learning, while doing this I find my CPU clogged and thus slowing down my training process I am trying to use the GPUs for these calculations. | Since you only seem to be interested in the Haar wavelet, you can pretty much implement it yourself: The high-frequency component of the Haar wavelet along each dimension can be written as a pairwise difference. The low-frequency component of the Haar wavelet along each dimension can be written as a pairwise sum. The following code achieves this in pure PyTorch: class HaarWaveletLayer(nn.Module): def l_0(self, t): # sum ("low") along cols t = torch.cat([t, t[..., -1:, :]], dim=-2) if t.shape[-2] % 2 else t return (t[..., ::2, :] + t[..., 1::2, :]) def l_1(self, t): # sum ("low") along rows t = torch.cat([t, t[..., :, -1:]], dim=-1) if t.shape[-1] % 2 else t return (t[..., :, ::2] + t[..., :, 1::2]) def h_0(self, t): # diff ("hi") along cols t = torch.cat([t, t[..., -1:, :]], dim=-2) if t.shape[-2] % 2 else t return (t[..., ::2, :] - t[..., 1::2, :]) def h_1(self, t): # diff ("hi") along rows t = torch.cat([t, t[..., :, -1:]], dim=-1) if t.shape[-1] % 2 else t return (t[..., :, ::2] - t[..., :, 1::2]) def forward(self, x): x = .5 * x l_1 = self.l_1(x) h_1 = self.h_1(x) ll = self.l_0(l_1) lh = self.h_0(l_1) hl = self.l_0(h_1) hh = self.h_0(h_1) return torch.cat([ll, lh, hl, hh], dim=1) In combination with your given code, you can convince yourself of the equivalence as follows: t = torch.rand((7, 3, 127, 128)).to("cuda:0") result_given = WaveletLayer()(t) result_proposed = HaarWaveletLayer()(t) # Same result? assert (result_given - result_proposed).abs().max() < 1e-5 # Time comparison from timeit import Timer num_timings = 100 print("time given: ", Timer(lambda: WaveletLayer()(t)).timeit(num_timings)) print("time proposed:", Timer(lambda: HaarWaveletLayer()(t)).timeit(num_timings)) The timing shows a speedup of more than a factor of 10 on my machine. Notes The t = torch.cat... parts are only necessary if you want to be able to handle odd-shaped images: In that case, we pad by replicating the last row and column, respectively, mimicking the default padding of PyWavelets. Multiplying x with .5 is done for normalization. Compare this discussion on the Signal Processing Stack Exchange for more details. | 2 | 4 |
79,398,404 | 2025-1-29 | https://stackoverflow.com/questions/79398404/python-pandas-how-to-read-in-data-from-list-data-and-columns-separate-list | I'm running into a situation I don't know what to do: The data is a list, no index. Sample data: data = [ {'fields': ['2024-10-07T21:22:01', 'USER-A', 21, 0, 0, 21]}, {'fields': ['2024-10-07T21:18:28', 'USER-B', 20, 20, 0, 0, 0, 45]} ] The column header is in another: cols = ['Created On', 'Created By', 'Transaction Count (ALL)', 'X Pending', 'X Cancelled (X)', 'X Completed (Y)'] I have tried using pandas.DataFrame as well as json_normalize, I either get a single column table with each value as a row, or I got all values as a column, and when I try with using "fields", it tells me "list indices must be integers or slices, not str" which I don't understand why I get this... what is the best way to have these info into a dataframe please? (the number of data elements and number of column headers may not be consistent just for example sake, the real data has things aligned) | You could combine two DataFrame constructors: data = [{'fields': ['2024-10-07T21:22:01', 'USER-A', 21, 0, 0, 21]}, {'fields': ['2024-10-07T21:18:28', 'USER-B', 20, 20, 0, 0, 0, 45]}, ] out = pd.DataFrame(pd.DataFrame(data)['fields'].tolist()) Output: 0 1 2 3 4 5 6 7 0 2024-10-07T21:22:01 USER-A 21 0 0 21 NaN NaN 1 2024-10-07T21:18:28 USER-B 20 20 0 0 0.0 45.0 If you also have a list of columns cols, you could truncate the columns: cols = ['Created On', 'Created By', 'Transaction Count (ALL)', 'X Pending', 'X Cancelled (X)', 'X Completed (Y)'] out = pd.DataFrame(pd.DataFrame(data)['fields'].str[:len(cols)].tolist(), columns=cols) Output: Created On Created By Transaction Count (ALL) X Pending X Cancelled (X) X Completed (Y) 0 2024-10-07T21:22:01 USER-A 21 0 0 21 1 2024-10-07T21:18:28 USER-B 20 20 0 0 Or rename to keep the extra columns: out = (pd.DataFrame(pd.DataFrame(data)['fields'].tolist()) .rename(columns=dict(enumerate(cols))) ) Output: Created On Created By Transaction Count (ALL) X Pending X Cancelled (X) X Completed (Y) 6 7 0 2024-10-07T21:22:01 USER-A 21 0 0 21 NaN NaN 1 2024-10-07T21:18:28 USER-B 20 20 0 0 0.0 45.0 But, honestly, better pre-process in pure python, this will be more efficient/explicit: # truncation out = pd.DataFrame((dict(zip(cols, d['fields'])) for d in data)) # alternative truncation out = pd.DataFrame([d['fields'][:len(cols)] for d in data], columns=cols) # renaming out = (pd.DataFrame([d['fields'] for d in data]) .rename(columns=dict(enumerate(cols))) ) | 1 | 3 |
79,398,865 | 2025-1-30 | https://stackoverflow.com/questions/79398865/numpy-scipy-how-to-find-the-least-squares-solution-with-the-constraint-that-ax | I have a linear algebra problem that I can't find the correct function for; I need to find the vector x that satisfies Ax = b. Because A is non-square (there's 5 columns and something like 37 rows), np.linalg.solve(A,b) will not work - most search results I've seen have pointed me towards np.linalg.lstsq instead, or scipy.optimize.nnls if x must be strictly non-negative (which I do want). The problem is that I need to also add on the constraint that the resulting Ax must be strictly greater-than-or-equal-to b; that is, no element of Ax can be less than the corresponding element of b. This is what I'm currently doing: import numpy as np import scipy v1 = np.array([0.00000961,0.0000011,0.0000011,0.000015,0.00000884,0.00000286,0,0.00000006,0,0.000196,0,0.0000071,0.000000023,0.000131,0.00038,0,0,0.00000161,0,0,0.0000069,0.00027,0.000005,0,0,0.00054,0.00475,0.000000002,0.00036,0.0000032,0,0,0.033,0.0015,0.02,0.00052,0.207]).T v2 = np.array([0,0.0000064,0.00000135,0.000121,0.0000177,0.00000348,0,0.00000006,0,0,0,0.0000833,0,0.000525,0.00062,0,0,0.0000114,0,0,0.0000458,0.00168,0.0000193,0,0,0.00376,0.00705,0.000000072,0.00018,0.0000327,0,0,0.085,0.492,0.258,0.0628,0.161]).T v3 = np.array([0,0.000009,0.00000193,0.0000196,0.00000899,0,0,0.00000444,0,0,0,0.0000021,0.000000056,0.000664,0.00123,0,0,0.00000841,0,0,0.0000502,0.00171,0.0000106,0,0,0.00352,0.0148,0.000000032,0.00005,0.0000365,0,0,0.155,0.0142,0.216,0.00366,0.624]).T v4 = np.array([0,0.00000763,0.00000139,0.00000961,0.0000135,0.00000119,0,0.00000056,0,0,0,0,0,0,0.00054,0,0,0.00000626,0,0,0.0000472,0.00177,0.0000492,0,0,0.00523,0.00429,0,0.00002,0.0000397,0,0,0.106,0.069,0.169,0.0122,0.663]).T v5 = np.array([0.00000241,0.00000113,0.00000347,0.0000118,0.0000037,0.00000147,0,0.00000062,0,0.000934,0,0,0,0.000005,0.00254,0,0,0.00000053,0,0,0.000016,0.00033,0.0000092,0,0,0.00055,0.00348,0,0.00053,0.0000039,0,0,0.041,0.0149,0.0292,0.00178,0.0442]).T b = np.array([0.00657,0.00876,0.00949,0.1168,0.0365,0.01241,0.000219,0.00292,0,0.657,0.000146,0.1095,0.000876,4.015,8.76,16.79,0.0002555,0.00657,0.0292,0.001095,0.0584,3.066,0.01679,0.0003285,0,5.11,24.82,0.0004015,10.95,0.0803,0,0.00219,204.4,569.4,365,146,2007.5]).T A = np.column_stack((v1,v2,v3,v4,v5)) result = scipy.optimize.nnls(A,b) x = result[0][:,None] # Force to be column array; the A @ x step doesn't seem to work if I don't do this # https://stackoverflow.com/questions/64869851/converting-numpy-array-to-a-column-vector lsq = A @ x # result is a column vector diff = lsq.T - b # b is still technically a row vector, so lsq has to be converted back to a row vector for dimensions to match print(diff) If Ax >= b, then all elements of diff (= Ax - b) should be positive (or 0). Instead, what I'm getting is that some elements of diff are negative. Is there any function that can be used to force the Ax >= b constraint? (The context for why this constraint has to be imposed is that A encodes the amount of vitamins/minerals/macronutrients (rows) in 5 different foods (columns), and b is the total amount of each nutrient that has to be consumed to avoid a nutritional deficiency.) | It doesn't sound like the least squares criterion is the important part; you just want a solution that is not too wasteful. In that case, as Nick pointed out, you could use linear programming to minimimize the total food required. 
(A common variant of this problem is to minimize the cost of the food consumed. You could also minimize other things, like the sum or maximum of the excess nutrition, but that's a little more complicated, and the objective function is not the biggest issue here...) import numpy as np from scipy import optimize # Data copied from above v1 = np.array([0.00000961,0.0000011,0.0000011,0.000015,0.00000884,0.00000286,0,0.00000006,0,0.000196,0,0.0000071,0.000000023,0.000131,0.00038,0,0,0.00000161,0,0,0.0000069,0.00027,0.000005,0,0,0.00054,0.00475,0.000000002,0.00036,0.0000032,0,0,0.033,0.0015,0.02,0.00052,0.207]).T v2 = np.array([0,0.0000064,0.00000135,0.000121,0.0000177,0.00000348,0,0.00000006,0,0,0,0.0000833,0,0.000525,0.00062,0,0,0.0000114,0,0,0.0000458,0.00168,0.0000193,0,0,0.00376,0.00705,0.000000072,0.00018,0.0000327,0,0,0.085,0.492,0.258,0.0628,0.161]).T v3 = np.array([0,0.000009,0.00000193,0.0000196,0.00000899,0,0,0.00000444,0,0,0,0.0000021,0.000000056,0.000664,0.00123,0,0,0.00000841,0,0,0.0000502,0.00171,0.0000106,0,0,0.00352,0.0148,0.000000032,0.00005,0.0000365,0,0,0.155,0.0142,0.216,0.00366,0.624]).T v4 = np.array([0,0.00000763,0.00000139,0.00000961,0.0000135,0.00000119,0,0.00000056,0,0,0,0,0,0,0.00054,0,0,0.00000626,0,0,0.0000472,0.00177,0.0000492,0,0,0.00523,0.00429,0,0.00002,0.0000397,0,0,0.106,0.069,0.169,0.0122,0.663]).T v5 = np.array([0.00000241,0.00000113,0.00000347,0.0000118,0.0000037,0.00000147,0,0.00000062,0,0.000934,0,0,0,0.000005,0.00254,0,0,0.00000053,0,0,0.000016,0.00033,0.0000092,0,0,0.00055,0.00348,0,0.00053,0.0000039,0,0,0.041,0.0149,0.0292,0.00178,0.0442]).T b = np.array([0.00657,0.00876,0.00949,0.1168,0.0365,0.01241,0.000219,0.00292,0,0.657,0.000146,0.1095,0.000876,4.015,8.76,16.79,0.0002555,0.00657,0.0292,0.001095,0.0584,3.066,0.01679,0.0003285,0,5.11,24.82,0.0004015,10.95,0.0803,0,0.00219,204.4,569.4,365,146,2007.5]).T A = np.column_stack((v1,v2,v3,v4,v5)) n = A.shape[1] # 5 # Minimimize the amount of food consumed c = np.ones(n) # The lower bound on `x` is 0 (upper bound is infinite by default) bounds = optimize.Bounds(lb=np.zeros(n)) # The lower bound on `A@x` is `b` (upper bound is infinite by default) constraints = optimize.LinearConstraint(A, lb=b) res = optimize.milp(c, bounds=bounds, constraints=constraints) res # message: The problem is infeasible. (HiGHS Status 8: model_status is Infeasible; primal_status is None) # success: False # status: 2 ... Wait, what? Infeasible? Yes, because some nutrients you needed are just not present in the foods. np.where(np.sum(A, axis=1) == 0)[0] # array([ 6, 8, 10, 15, 16, 18, 19, 23, 24, 30, 31]) Please offer more nutritious food. | 3 | 7 |
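If the goal is to see a feasible menu despite the nutrients that none of the five foods contain, one pragmatic (and nutritionally questionable) relaxation is to drop those all-zero rows from the constraint before solving; a sketch continuing from the arrays above:

```python
# keep only the nutrients that at least one food actually provides
mask = A.sum(axis=1) > 0
constraints = optimize.LinearConstraint(A[mask], lb=b[mask])

res = optimize.milp(c, bounds=bounds, constraints=constraints)
print(res.x)  # servings of each of the 5 foods for the relaxed problem
```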
79,395,723 | 2025-1-29 | https://stackoverflow.com/questions/79395723/monte-carlo-simulation-in-rocketpy-flight-has-no-attribute-apogee | I am trying to model the trajectory of a rocket in RocketPy. I have successfully completed this, and now I am trying to look at the Monte Carlo simulations. However, I am encountering an error: AttributeError: 'Flight' object has no attribute 'apogee' I have tried to make the code as simple as I can. I have the Rocket Part: # Imports from rocketpy import Environment, SolidMotor, Rocket, Flight, MonteCarlo from rocketpy.stochastic import ( StochasticEnvironment, StochasticSolidMotor, StochasticRocket, StochasticFlight, ) import numpy as np # Date: import datetime %matplotlib widget # Environment env = Environment(latitude=67.89325597913002,longitude=21.065756056273834, elevation=300) tomorrow = datetime.date.today() + datetime.timedelta(days=1) env.set_date((tomorrow.year, tomorrow.month, tomorrow.day, 12)) env.set_atmospheric_model(type="Ensemble", file="GEFS") # Motor RedMotor = SolidMotor( thrust_source=r"...\Python Modelling\Thrust curves\RedMotor.eng", burn_time=13.1, # s dry_mass=1.815, # kg dry_inertia=(1.86, 1.86, 0.13), center_of_dry_mass_position=0, grains_center_of_mass_position=0, grain_number=1, grain_separation=0, grain_density=1750, # kg/m^3 grain_outer_radius=0.2612, # m grain_initial_inner_radius=0.0726, # m grain_initial_height=2.6072, # m nozzle_radius=0.235, # m throat_radius=0.0726, # m interpolation_method="linear", nozzle_position=2.1, # from CG of motor coordinate_system_orientation="combustion_chamber_to_nozzle", # combustion_chamber_to_nozzle" ) # Rocket RedRocket = Rocket( radius=0.23, mass=184.77, inertia=(1315, 1315, 14.1155), power_off_drag=r"...\\Python Modelling\\Drag curves\\Cesaroni_6026M1670_PpowerOffDragCurve.eng", power_on_drag=r"...\\Python Modelling\\Drag curves\\Cesaroni_6026M1670_PpowerOnDragCurve.eng", center_of_mass_without_motor=4.45, coordinate_system_orientation="nose_to_tail", ) rail_buttons = RedRocket.set_rail_buttons( upper_button_position=3.8, lower_button_position=1.5, ) RedRocket.add_motor(RedMotor, position=6.6428125) nose_cone = RedRocket.add_nose(length=0.9375, kind="ogive", position=0) fin_set = RedRocket.add_trapezoidal_fins( n=4, root_chord=0.703125, tip_chord=0.703125, span=0.546875, position=8.01, ) transition = RedRocket.add_tail( top_radius=0.23, bottom_radius=0.2795, length=0.705625, position=4.57, ) Main = RedRocket.add_parachute( "Main", cd_s=2.2 * np.pi * (120 * 25.4 / 1000) * (120 * 25.4 / 1000) / 4, trigger=167.64, sampling_rate=105, lag=1, noise=(0, 8.3, 0.5), ) Drogue = RedRocket.add_parachute( "Drogue", cd_s=1.5 * np.pi * (24 * 25.4 / 1000) * (24 * 25.4 / 1000) / 4, trigger="apogee", sampling_rate=105, lag=1, noise=(0, 8.3, 0.5), ) # Flight test_flight = Flight( rocket=RedRocket, environment=env, rail_length=5.2, inclination=60, heading=0, ) test_flight.all_info() and the Stochastic section: # Stochastic Environment stochastic_env = StochasticEnvironment( environment=env, ensemble_member=list(range(env.num_ensemble_members)), ) stochastic_env.visualize_attributes() # Stochastic Motor stochastic_motor = StochasticSolidMotor( solid_motor=RedMotor, burn_start_time=(0, 0.1, "binomial"), grains_center_of_mass_position=0.001, grain_density=50, grain_separation=0.001, grain_initial_height=0.001, grain_initial_inner_radius= 0.00038, grain_outer_radius= 0.00038, total_impulse=(1.07*(RedMotor.total_impulse), (RedMotor.total_impulse/10)), throat_radius= 0.0005, nozzle_radius= 0.0005, 
nozzle_position=0.001, ) stochastic_motor.visualize_attributes() # Stochastic Rocket stochastic_rocket = StochasticRocket( rocket=RedRocket, radius=0.23, mass= 184.77, inertia_11= (1325.502, 0), inertia_22= (1325.502,0.01000), inertia_33= 0.01, center_of_mass_without_motor=4.45, ) stochastic_rocket.visualize_attributes() # Stochastic flight stochastic_flight = StochasticFlight( flight=test_flight, inclination=(60, 1), heading=(0, 2), ) stochastic_flight.visualize_attributes() Which all works, but I can't make the Monte Carlo simulations work: # Monte Carlo Simulations test_dispersion = MonteCarlo( filename="monte_carlo_analysis_outputs/monte_carlo_class_example", environment=stochastic_env, rocket=stochastic_rocket, flight=stochastic_flight, ) test_dispersion.simulate( number_of_simulations=1000, append=False) This is when I get the 'Flight' object has no attribute 'apogee' error. What am I doing wrong? Below is the thrust curve: ; Red Rocket Motor Thrust Curve RedRocket 559 3440 0 914 1176 0.05 180952.4 0.2381 180952.4 1.8929 202975.0 1.9524 202975.0 3 204762.5 3.5 205952.4 4.4762 207440.0 4.5357 208927.5 4.5952 207440.0 4.71 205952.4 4.7143 200000 4.8929 186905.0 5 180952.4 5.298 175000 7.0595 138392.5 7.2381 137202.5 10.5 130952.4 12.0476 125000 12.3571 126487.5 12.5357 114880 12.7143 92857.5 13 25000 13.1190 5952.4 ; When I call test_dispersion.prints.all() right before test_dispersion.simulate, this is the result: Monte Carlo Simulation by RocketPy Data Source: monte_carlo_analysis_outputs/monte_carlo_class_example Number of simulations: 0 Results: Parameter Mean Std. Dev. ------------------------------------------------------------ | Thanks for that Onuralp. That's a good insight. I did get some good results for all the following: stochastic_motor.visualize_attributes() print(stochastic_motor.total_impulse) stochastic_rocket.visualize_attributes() stochastic_flight.visualize_attributes() but something that stuck out to me as a potential problem was that when I tried print(stochastic_motor.total_impulse), I got back the following: (np.float64(2221943.0652824277), np.float64(207658.2304002269), <bound method RandomState.normal of RandomState(MT19937) at 0x1D833D56840>) Do you think this may have been the cause of the problem since it's not a single float? I had set it up like so in the script: total_impulse=(1.07*(RedMotor.total_impulse), (RedMotor.total_impulse/10)), type(stochastic_motor.total_impulse) returns that it is a tuple. Could this be the cause of the problem? | 2 | 1 |
79,396,950 | 2025-1-29 | https://stackoverflow.com/questions/79396950/converting-a-pandas-dataframe-in-wide-format-to-long-format | I have a Pandas dataframe in wide format that looks like this: import pandas as pd df = pd.DataFrame({'Class_ID': {0: 432, 1: 493, 2: 32}, 'f_proba_1': {0: 3, 1: 8, 2: 6}, 'f_proba_2': {0: 4, 1: 9, 2: 9}, 'f_proba_3': {0: 2, 1: 4, 2: 1}, 'p_proba_1': {0: 3, 1: 82, 2: 36}, 'p_proba_2': {0: 2, 1: 92, 2: 96}, 'p_proba_3': {0: 8, 1: 41, 2: 18}, 'Meeting_ID': {0: 27, 1: 23, 2: 21}}) df Class_ID f_proba_1 f_proba_2 f_proba_3 p_proba_1 p_proba_2 p_proba_3 Meeting_ID 0 432 3 4 2 3 2 8 27 1 493 8 9 4 82 92 41 23 2 32 6 9 1 36 96 18 21 and I would like to convert to long format: Class_ID Student_ID f_proba p_proba Meeting_ID 0 432 1 3 3 27 1 432 2 4 2 27 2 432 3 2 8 27 3 493 1 8 82 23 4 493 2 9 92 23 5 493 3 4 41 23 6 32 1 6 36 21 7 32 2 9 96 21 8 32 3 1 18 21 So I have tried .melt in Pandas and here is my code out = pd.melt(df, id_vars = ['Class_ID', 'Meeting_ID'], value_vars = ['f_proba_1','f_proba_2','f_proba_3','p_proba_1','p_proba_2','p_proba_3'], var_name = 'Student_ID', value_name = ['f_proba', 'p_proba']) out but it didn't work. | You can use pd.wide_to_long for this: out = (pd.wide_to_long(df, stubnames=['f_proba', 'p_proba'], i=['Class_ID', 'Meeting_ID'], j='Student_ID', sep='_') .reset_index() ) Output: Class_ID Meeting_ID Student_ID f_proba p_proba 0 432 27 1 3 3 1 432 27 2 4 2 2 432 27 3 2 8 3 493 23 1 8 82 4 493 23 2 9 92 5 493 23 3 4 41 6 32 21 1 6 36 7 32 21 2 9 96 8 32 21 3 1 18 | 2 | 5 |
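The same reshape can also be done with `melt` (as originally attempted) by splitting the variable names on their last underscore and pivoting back; a sketch, assuming every value column follows the `<measure>_<student>` naming pattern:

```python
long = df.melt(id_vars=["Class_ID", "Meeting_ID"], var_name="var", value_name="value")
split = long["var"].str.rsplit("_", n=1)
long["measure"] = split.str[0]
long["Student_ID"] = split.str[1].astype(int)

out = (
    long.pivot(index=["Class_ID", "Meeting_ID", "Student_ID"],
               columns="measure", values="value")
        .reset_index()
        .rename_axis(columns=None)
)
```

`pd.wide_to_long` stays the shorter option here; the melt route mainly helps when the column names need more custom parsing.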
79,397,830 | 2025-1-29 | https://stackoverflow.com/questions/79397830/curve-fitting-a-non-linear-equation | I'm trying to fit my thermal conductivity into the Debye-Callaway equation. However, one of my parameters is coming back negative. I've tried different initial guesses. So I'm attaching a code with thermal conductivity data from literature and the values that they got to understand where my model is going wrong. What concerns me is it's not very sensitive to initial parameters. I'd really appreciate some help regarding this. Thank you so much! The parameters that the paper reported were A=6.73E-43 B=5.38E-18 The parameters I got: Fitted parameters: 1.417070339751493e-44, B = 1.075485244260637e-15. The functions are defined exactly the same as in paper: EDIT: I've added a second piece of code that still returns the second factor negatively. I'm not sure if it's because the factor is getting stuck at a local minimum somewhere? import numpy as np from scipy.integrate import quad from scipy.optimize import curve_fit import matplotlib.pyplot as plt # Constants kb = 1.380649e-23 # Boltzmann constant (J/K) hbar = 1.0545718e-34 # Reduced Planck's constant (J*s) theta_D = 280 # Debye temperature (K) v = 2300 # Average phonon group velocity (m/s) - # Define the inverse relaxation times def tau_PD_inv(omega, A): return A * omega**4 def tau_U_inv(omega, B, T): return B * omega**2 * np.exp(-theta_D / (3 * T)) def tau_GB_inv(omega): D = 0.7e-6 # Grain boundary characteristic length (m) return v / D def tau_total_inv(omega, A, B, T): return tau_PD_inv(omega, A) + tau_U_inv(omega, B, T) + tau_GB_inv(omega) # Debye Callaway model for thermal conductivity def kappa_lattice(T, A, B): def integrand(x): omega = x * kb * T / hbar tau_total = 1 / tau_total_inv(omega, A, B, T) return x**4 * np.exp(x) / (np.exp(x) - 1)**2 * tau_total integral, _ = quad(integrand, 0, theta_D / T) prefactor = kb**4 * T**3 / (2 * np.pi**2 * v * hbar**3) return prefactor * integral # Vectorize the model for curve fitting def kappa_lattice_vectorized(T, A, B ): return np.array([kappa_lattice(Ti, A, B ) for Ti in T]) # Example thermal conductivity and temperature data for Ge WQ T_data = np.array([3.0748730372823156, 3.1962737114707203, 3.4441443065638015, 3.785528026732028, 4.043999986888003, 4.524033859711473, 4.902329259251676, 5.270E+00, 5.902534836759174, 6.721645553370004, 7.690683028568884, 9.031E+00, 1.103E+01, 12.811754075061538, 16.07373842258417, 19.67675447459597, 23.951202854256202, 29.710517518825778, 3.684E+01, 45.030707655028536, 56.92469139605285, 6.929E+01, 85.62796640159988, 105.41803279520694, 129.5096325800293, 159.77670578646953, 200.9314386671018, 240.34130109997898, 291.63127697234717]) # Temperature in K kappa_data = np.array([ 0.5021456742832973, 0.5907837911587943, 0.7331309897165834, 0.9570228747931655, 1.1281866396926783, 1.4904675008427906, 1.8682223847710366, 2.1856825810901013, 2.8515011626042894, 4.057791636291224, 5.328111104734368, 7.645627510566391, 11.045409412114584, 14.277055520040301, 19.54065663430087, 24.738891215975602, 28.776422188514683, 32.562485211039274, 33.37572711709385, 32.455084248250884, 28.56802589740088, 24.937473590597776, 20.887192910896978, 17.37073097716312, 14.783541083880793, 12.26010669954394, 9.991973525044475, 8.531955488387288, 7.347906481962733]) # Thermal conductivity in W/mK # Initial guess for parameters A, B, and C initial_guess = [4.75E-42, 1.054E-17] # Curve fitting popt, pcov = curve_fit(kappa_lattice_vectorized, T_data, kappa_data, p0=initial_guess, 
maxfev=8_000) #popt, pcov = curve_fit(kappa_lattice_vectorized, T_data, kappa_data, p0=initial_guess, bounds=(0, np.inf)) # Extract fitted parameters A_fit, B_fit = popt # Print the fitted parameters print(f"Fitted parameters: A = {A_fit}, B = {B_fit}") # Generate curve fit data T_fit = np.linspace(min(T_data), max(T_data), 100) kappa_fit = kappa_lattice_vectorized(T_fit, *popt) # Plot the original data, fitted data, and extrapolated data plt.figure(figsize=(8, 6)) plt.plot(T_data, kappa_data, 'o', label='Experimental data') plt.plot(T_fit, kappa_fit, '-', label='Fitted model') #plt.plot(T_extrap, kappa_extrap, '--', label='Extrapolated (400-600K)') plt.xlabel('Temperature (K)') plt.ylabel('Thermal Conductivity (W/mK)') plt.legend() plt.title('Thermal Conductivity Fitting ') plt.show() import numpy as np from scipy.integrate import quad from scipy.optimize import curve_fit import matplotlib.pyplot as plt # Constants kb = 1.380649e-23 # Boltzmann constant (J/K) hbar = 1.0545718e-34 # Reduced Planck's constant (J*s) theta_D = 483.0 # Debye temperature (K) v = 4618.0 # Average phonon group velocity (m/s) - # Define the inverse relaxation times def tau_PD_inv(omega, A): return A * omega**4 def tau_U_inv(omega, B, T): return B * omega**2 * T * np.exp(-theta_D / (3 * T)) def tau_GB_inv(omega): D = 1e-6 # Grain boundary characteristic length (m) return v / D def tau_new_inv(omega, C): return C * omega**2 def tau_total_inv(omega, A, B, C, T): return tau_PD_inv(omega, A) + tau_U_inv(omega, B, T) + tau_GB_inv(omega) + tau_new_inv(omega, C) # Debye Callaway model for thermal conductivity def kappa_lattice(T, A, B, C): def integrand(x): omega = x * kb * T / hbar tau_total = 1 / tau_total_inv(omega, A, B, C, T) return x**4 * np.exp(x) / (np.exp(x) - 1)**2 * tau_total integral, _ = quad(integrand, 0, theta_D / T) prefactor = kb**4 * T**3 / (2 * np.pi**2 * v * hbar**3) return prefactor * integral # Vectorize the model for curve fitting def kappa_lattice_vectorized(T, A, B, C): return np.array([kappa_lattice(Ti, A, B, C) for Ti in T]) # Example thermal conductivity and temperature data for Ge WQ T_data = np.array([31.21198032, 31.63863233, 33.88415583, 36.02698918, 38.39872854, 39.56437842, 43.09065509, 44.50006634, 46.62080745, 43.20705304, 47.77194375, 51.1374173, 53.33894737, 55.68342411, 57.7839973, 60.44084481, 63.31137245, 66.34735373, 69.16983511, 74.09615366, 77.63998236, 82.66616204, 87.95740211, 93.19748533, 99.59297955, 105.6903923, 113.1334233, 120.6776065, 127.0279836, 134.1693093, 141.3911575, 148.1783285, 155.5512459, 162.631427, 169.7085111, 177.378102, 183.5898785, 190.9367188, 197.6917958, 205.7338436, 212.8065032, 220.0698325, 226.7609477, 234.0571384, 241.1287763, 247.8786187, 255.2722521, 262.3425521, 269.4163916, 276.4872816, 283.5510925, 297.1613034, 304.7919793, 311.8668132, 318.9352276, 326.0927208, 333.0893085, 340.164332, 347.2375209, 354.9542919, 362.0246034, 369.0955138, 375.8247965, 383.5799572]) # Temperature in K kappa_data = np.array([ 1.085444844, 1.13592882, 1.176429149, 1.217086899, 1.258678341, 1.305290878, 1.357081371, 1.398857871, 1.26084173, 1.436622858, 1.483042548, 1.528031021, 1.561551609, 1.598710119, 1.632418606, 1.665656346, 1.703762245, 1.742520105, 1.774186181, 1.820109169, 1.855169951, 1.893680154, 1.932568555, 1.962172074, 1.999232142, 2.029595098, 2.0490363, 2.067859052, 2.092450212, 2.119053495, 2.131413546, 2.145488003, 2.158842932, 2.169084281, 2.169487956, 2.184424479, 2.196747698, 2.20620033, 2.217744549, 2.223537925, 2.236483003, 2.258522251, 
2.273748326, 2.278514432, 2.28270469, 2.288247862, 2.291668986, 2.298648488, 2.302662675, 2.299560106, 2.311410793, 2.339970141, 2.347949378, 2.349474819, 2.354437976, 2.369869851, 2.378039597, 2.384365082, 2.39183344, 2.395266095, 2.399300875, 2.40766296, 2.409888354, 2.442162261]) # Thermal conductivity in W/mK # Initial guess for parameters A, B, and C initial_guess = [4.75E-43, 3.6979E-19, 1e-16] # Curve fitting popt, pcov = curve_fit(kappa_lattice_vectorized, T_data, kappa_data, p0=initial_guess,maxfev=8_000) #popt, pcov = curve_fit(kappa_lattice_vectorized, T_data, kappa_data, p0=initial_guess, bounds=(0, np.inf)) # Extract fitted parameters A_fit, B_fit, C_fit = popt # Print the fitted parameters print(f"Fitted parameters: A = {A_fit}, B = {B_fit}, C = {C_fit}") # Generate curve fit data T_fit = np.linspace(min(T_data), max(T_data), 100) kappa_fit = kappa_lattice_vectorized(T_fit, *popt) # Plot the original data, fitted data, and extrapolated data plt.figure(figsize=(8, 6)) plt.plot(T_data, kappa_data, 'o', label='Experimental data') plt.plot(T_fit, kappa_fit, '-', label='Fitted model') #plt.plot(T_extrap, kappa_extrap, '--', label='Extrapolated (400-600K)') plt.xlabel('Temperature (K)') plt.ylabel('Thermal Conductivity (W/mK)') plt.legend() plt.title('Thermal Conductivity Fitting ') plt.show() | Your tau_U_inv function forgot to take into account T. The corrected version should be this: def tau_U_inv(omega, B, T): return B * omega**2 * T * np.exp(-theta_D / (3 * T)) When I make this change the parameters show Fitted parameters: A = 1.935907419814169e-43, B = 9.770738747258265e-18 which appears to be much closer to the original. | 4 | 6 |
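One common way to keep fitted scattering parameters strictly positive without hard bounds is to fit their base-10 logarithms instead; the reparameterization below is my addition rather than something from the paper, and it wraps the second script's model unchanged:

```python
# fit log10(A), log10(B), log10(C) so the physical parameters are positive by construction
def kappa_log_params(T, logA, logB, logC):
    return kappa_lattice_vectorized(T, 10.0**logA, 10.0**logB, 10.0**logC)


p0_log = np.log10(initial_guess)
popt_log, pcov_log = curve_fit(kappa_log_params, T_data, kappa_data,
                               p0=p0_log, maxfev=8000)
A_fit, B_fit, C_fit = 10.0**popt_log
print(A_fit, B_fit, C_fit)
```

Whether a vanishingly small fitted C simply means the extra scattering term is not needed for this data set is a separate modelling question.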
79,395,477 | 2025-1-29 | https://stackoverflow.com/questions/79395477/how-to-save-a-dataset-in-multiple-shards-using-tf-data-dataset-save | How can I save a tf.data.Dataset in multiple shards using tf.data.Dataset.save()? I am reading in my dataset from CSV using tf.data.experimental.make_csv_dataset. The TF docs here are not very helpful. There is a shard_func argument, but the examples given aren't helpfull and its not clear how to map to an int in a deterministic way. Using random ints doesn't seem to work either. The solution in a similar question here generates an error for me TypeError: unsupported operand type(s) for %: 'collections.OrderedDict' and 'int' Single Shard (works) The below successfully saves to a single shard. import pandas as pd import numpy as np import tensorflow as tf # gen data n=10000 pd.DataFrame( {'label': np.random.randint(low=0, high=2, size=n), 'f1': np.random.random(n), 'f2': np.random.random(n), 'f3': np.random.random(n), 'c1': np.random.randint(n), 'c2': np.random.randint(n)} ).to_csv('tmp.csv') # load data into a tf.data.Dataset data_ts = tf.data.experimental.make_csv_dataset( 'tmp.csv', 1, label_name='label', num_epochs=1) data_ts.save('tmp.data') # single shard, works! Multiple shards usind randint (saves single shard) Trying to save to multiple shard using a random number, still only saves to a single shard, albeit with a random int in the file name. # Try sharding, using random numbers. def random_shard_function(features, label): return np.int64(np.random.randint(10)) data_ts.save('tmp2.data', shard_func=random_shard_function) Modulo shard (error) Trying the sollution from this question. def modulo_shard_function(features, label): return x & 10 data_ts.save('tmp2.data', shard_func=modulo_shard_function) TypeError: unsupported operand type(s) for &: 'collections.OrderedDict' and 'int' Debugging - no idea how shard_fun works. If I print out the inputs, it seems that the shard func is only run once, and the tensors are SymbolicTensors def debug_shard_function(features, label): for val in features.items(): print(f'{val=}') print(f'{label=}') print(f'{type(val[1])}') return np.int64(10) data_ts.save('tmp2.data', shard_func=debug_shard_function) Output: Still saves to a single shard val=('', <tf.Tensor 'args_0:0' shape=(None,) dtype=int32>) val=('f1', <tf.Tensor 'args_3:0' shape=(None,) dtype=float32>) val=('f2', <tf.Tensor 'args_4:0' shape=(None,) dtype=float32>) val=('f3', <tf.Tensor 'args_5:0' shape=(None,) dtype=float32>) val=('c1', <tf.Tensor 'args_1:0' shape=(None,) dtype=int32>) val=('c2', <tf.Tensor 'args_2:0' shape=(None,) dtype=int32>) label=<tf.Tensor 'args_6:0' shape=(None,) dtype=int32> <class 'tensorflow.python.framework.ops.SymbolicTensor'> | shard_func must return a scalar Tensor of type tf.int64 (not Python or NumPy integer). So you cannot just return np.int64(...) or do a Pythonβlevel % on dictionary.You need to pick (or compute) tensor inside dataset element and return tf.cast(..., tf.int64). For example if your CSV has a column "c1" you could do: def shard_func(features, label): return tf.cast(features['c1'][0] % 10, tf.int64) data_ts.save("my_data",shard_func=shard_func) This will produce up to 10 different shards (files) named my_data_0, my_data_1, etc | 2 | 1 |
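The reason the random version collapses to one shard is the same tracing behaviour visible in the debug output: `shard_func` is traced once into a graph, so `np.random.randint` runs a single time and is baked in as a constant. Using a TensorFlow op keeps the randomness inside the graph, so it is re-evaluated per element; a sketch with a hypothetical output path (random sharding gives no control over which rows land together, so the column-based version above is usually preferable):

```python
def random_shard_func(features, label):
    # evaluated per element because it is a TF op inside the traced shard_func
    return tf.random.uniform([], minval=0, maxval=10, dtype=tf.int64)


data_ts.save("tmp_random.data", shard_func=random_shard_func)
```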
79,301,659 | 2024-12-22 | https://stackoverflow.com/questions/79301659/assertionerror-detection-in-self-models-using-insightface-on-linux-docker-con | Iβm developing a Python application that uses Flask, running in a Docker container on a Linux server with NGINX. The application works perfectly on my local machine, but when I deploy it on the server, I encounter the following error: ERROR:app:Exception: Traceback (most recent call last): File "/app/app.py", line 32, in analyze_face analyzer = FaceFeatureAnalyzer() # Create an instance here File "/app/face_feature_analyzer/main_face_analyzer.py", line 43, in init self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root) File "/usr/local/lib/python3.9/site-packages/insightface/app/face_analysis.py", line 43, in init assert 'detection' in self.models AssertionError Here is the code class FaceFeatureAnalyzer: def __init__(self): self.model_root = "/root/.insightface" self.model_path = os.path.join(self.model_root, "models/antelopev2") self.zip_path = os.path.join(self.model_root, "models/antelopev2.zip") self.model_url = "https://github.com/deepinsight/insightface/releases/download/v0.7/antelopev2.zip" # Initialize FaceAnalysis self.face_app = FaceAnalysis(name='antelopev2', root=self.model_root) self.face_app.prepare(ctx_id=0, det_size=(640, 640)) I have also tried to download it in same directory but that attempt also results in same error.. here is what i additionally tried class FaceFeatureAnalyzer: def __init__(self): # Initialize the InsightFace model self.face_app = FaceAnalysis(name='antelopev2') self.face_app.prepare(ctx_id=0, det_size=(640, 640)) logger.info("Initialized FaceAnalysis with model 'antelopev2'.") What Iβve Observed and Tried: Model Download and Extraction Logs: β’ During startup, the model antelopev2 is downloaded and extracted to /root/.insightface/models/antelopev2. The logs confirm this: Download completed. Extracting /root/.insightface/models/antelopev2.zip to /root/.insightface/models/antelopev2... Extraction completed. However, when checking the directory, it appears empty or the program cannot detect the models. Manually Adding the Models Previously, manually downloading the antelopev2 model and placing it in /root/.insightface/models/antelopev2 resolved the issue. I also set appropriate permissions using: chmod -R 755 /root/.insightface/models/antelopev2 After making updates to the codebase and rebuilding the Docker container, the issue reappeared. Directory Contents: The following files exist in /root/.insightface/models/antelopev2:# 1k3d68.onnx 2d106det.onnx genderage.onnx glintr100.onnx scrfd_10g_bnkps.onnx These are the expected .onnx files for antelopev2. The application works locally without any errors. The issue only arises in the Docker container on the Linux server. Even though the files are present and permissions are set correctly, the application seems unable to detect them. How can I debug or fix this issue? 
Dockerfile FROM python:3.9-slim # Set environment variables ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 # Set the working directory in the container WORKDIR /app # Install system dependencies including libgl1-mesa-glx and others RUN apt-get update && apt-get install -y --no-install-recommends \ libgl1-mesa-glx \ libglib2.0-0 \ g++ \ build-essential \ && apt-get clean && rm -rf /var/lib/apt/lists/* # Copy the requirements file into the container COPY requirements.txt /app/ # Install Python dependencies RUN pip install --upgrade pip RUN pip install --no-cache-dir -r requirements.txt # Copy the rest of the application code into the container COPY . /app EXPOSE 7002 # Run the Flask application CMD ["python", "app.py"] Docker-compose.yml version: '3.8' services: flask-app: build: context: ./backend container_name: flask-app ports: - "7000:7000" environment: - FLASK_RUN_HOST=0.0.0.0 - FLASK_RUN_PORT=7000 volumes: - ./backend:/app depends_on: - nginx nginx: image: nginx:latest container_name: nginx ports: - "80:80" - "443:443" volumes: - ./nginx:/etc/nginx/sites-enabled - ./nginx-certificates:/etc/letsencrypt | I met the same issue today with Ubuntu 22.04 + antelopev2 model. I changed the model to buffalo_l and the error is gone. # antelopev2 + Ubuntu 22.04: assert 'detection' in self.models app = FaceAnalysis(name='buffalo_l', providers=['CUDAExecutionProvider', 'CPUExecutionProvider']) app.prepare(ctx_id=0, det_size=(640, 640)) I don't know why, both antelopev2 and buffalo_l works on Mac M3 Max, but antelopev2 just won't work on Linux. | 1 | 1 |
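One thing worth checking before rebuilding anything: `FaceAnalysis` only registers a `detection` model if it finds the corresponding `.onnx` file (for antelopev2 that is `scrfd_10g_bnkps.onnx`) directly under `models/antelopev2`; a frequently reported failure mode is the zip extracting into a nested `antelopev2/antelopev2/` folder inside the container, which leaves the expected directory effectively empty. A small stdlib-only sketch to run inside the container to confirm what the library actually sees (the nested-folder cause is an assumption to verify, not something the traceback proves):

```python
import glob
import os

model_dir = "/root/.insightface/models/antelopev2"
print("direct contents:", sorted(os.listdir(model_dir)))

# if the .onnx files show up here instead, flatten them one level up
nested = glob.glob(os.path.join(model_dir, "antelopev2", "*.onnx"))
print("nested copies:", nested)
```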
79,290,968 | 2024-12-18 | https://stackoverflow.com/questions/79290968/super-object-has-no-attribute-sklearn-tags | I am encountering an AttributeError while fitting an XGBRegressor using RandomizedSearchCV from Scikit-learn. The error message states: 'super' object has no attribute '__sklearn_tags__'. This occurs when I invoke the fit method on the RandomizedSearchCV object. I suspect it could be related to compatibility issues between Scikit-learn and XGBoost or Python version. I am using Python 3.12, and both Scikit-learn and XGBoost are installed with their latest versions. I attempted to tune the hyperparameters of an XGBRegressor using RandomizedSearchCV from Scikit-learn. I expected the model to fit the training data without issues and provide the best parameters after cross-validation. I also checked for compatibility issues, ensured the libraries were up-to-date, and reinstalled Scikit-learn and XGBoost, but the error persists. | Scikit-learn version 1.6 modified the API around its "tags", and that's the cause of this error. XGBoost has made the necessary changes in PR11021, but at present that hasn't made it into a released version. You can either keep your sklearn version <1.6, or build XGBoost directly from github (or upgrade XGBoost, after a new version is released). In sklearn 1.6.1, the error was downgraded to a warning (to be returned to an error in 1.7). So you may also install sklearn >=1.6.1,<1.7 and just expect DeprecationWarnings. See also sklearn Issue#30479 and 1.6.1 release notes. | 25 | 21 |
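A quick way to confirm this is the cause in a given environment is to print both library versions; the error shows up when scikit-learn 1.6+ is paired with an XGBoost build that predates the `__sklearn_tags__` fix, and pinning scikit-learn below 1.6 (e.g. `pip install "scikit-learn<1.6"`) is the immediate workaround described above:

```python
import sklearn
import xgboost

print("scikit-learn:", sklearn.__version__)
print("xgboost:", xgboost.__version__)
```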
79,305,824 | 2024-12-24 | https://stackoverflow.com/questions/79305824/efficiently-removing-a-single-page-from-a-large-multi-page-tiff-with-jpeg-compre | I am working with a large multi-page TIFF file that is JPEG-compressed, and I need to remove a single page from it. I am using the tifffile Python package to process the TIFF, and I already know which page I want to remove based on metadata tags associated with that page. My current approach is to read all pages, modify the target page (either by skipping or replacing it), and write the rest back to a new TIFF file. Hereβs what Iβve tried so far: import tifffile with tifffile.TiffFile('file') as tif: for i, page in enumerate(tif.pages): if some condition with tags is true: # Skip the page to delete or replace with a dummy page image_data = page.asarray(memmap=True) # Memory-mapped access to the page's data # Write the page to the output file writer.write( image_data, compression='jpeg', photometric=page.photometric, metadata=page.tags, ) However, this approach has several issues: Memory Usage: Processing a large file consumes almost all available memory (I have 32GB of RAM, but it uses up to 28GB), which makes it unfeasible for large files. Compression Issues: Different compression methods like LZW, ZSTD, and JPEG create files of vastly different sizes, and some are much larger than the original. Performance: Using methods like strips or chunking leads to very slow processing, taking too long to delete a single page. Output file size: The size of the output file with using a different compression method makes it too big! (3GB Input on JPEG to 50GB+ output on LZW) Is there any way in Python to efficiently remove a single page from a large multi-page TIFF file without consuming too much memory or taking forever? Iβve seen some .NET packages that can delete a page in-placeβdoes Python have a similar solution? | I've created a Python package to handle it. While it can be made more extensible, it efficiently solves the problem without loading all the image data into memory. Core Idea: The package works by: Reconstructing the IFD (Image File Directory) chain: It removes the IFD of concern while keeping references to the original image data. Adjusting Metadata: The metadata pointing to the "label" information is updated implicitly during the reconstruction process. Memory Efficiency: By referencing actual image data, the package avoids the need to load large image files into memory. Installation: You can install the package directly from PyPI: pip install tiff-wsi-label-removal Usage: Once installed, you can use the remove-label command-line tool to remove labels from a TIFF file: remove-label <input_tiff_file> <output_tiff_file> Current Limitations and Future Work: The package is functional, but thereβs room for improvement, including making it more extensible. Suggestions from the comments here and additional planned features are being tracked in the description of the package's PyPI page. Any feedback is welcome! | 2 | 0 |
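To drive the tool from Python instead of a shell, a minimal sketch that wraps the remove-label console script installed by the package (the file names are placeholders):

import subprocess

# Assumes `pip install tiff-wsi-label-removal` put `remove-label` on PATH.
subprocess.run(
    ["remove-label", "input_slide.tiff", "output_without_label.tiff"],
    check=True,  # raise CalledProcessError if the CLI reports a failure
)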
79,305,326 | 2024-12-24 | https://stackoverflow.com/questions/79305326/how-to-make-pip-virtual-environments-truly-portable | I'm trying to install Open WebUI as a portable installation, where the base folder can be moved or renamed. However, I've encountered issues making the virtual environment truly portable. While installing, I've tried using dynamic paths, but somehow the paths get hardcoded during the installation process. When I try to activate the virtual environment in a different location, I receive an error: Fatal error in launcher: Unable to create process using '"...original\path\venv\Scripts\python.exe" "...new\path\venv\Scripts\open-webui.exe" serve': The system cannot find the file specified. How can I solve this issue and make my pip virtual environments truly portable? In this example I'm using two batch files to install Open WebUI. setup.bat @echo on REM setup.bat - Run this first to create the Open-WebUI_portable environment set BASE_DIR=%~dp0 set VENV_DIR=%BASE_DIR%venv set PYTHON_DIR=%BASE_DIR%python set PIP_DIR=%BASE_DIR%pip-cache set APPDATA=%BASE_DIR%appdata set USERPROFILE=%BASE_DIR%userprofile set HOME=%BASE_DIR%home set XDG_CACHE_HOME=%BASE_DIR%cache set PYTHONUSERBASE=%BASE_DIR%pythonuser echo Creating directories... mkdir "%PIP_DIR%" 2>nul mkdir "%APPDATA%" 2>nul mkdir "%USERPROFILE%" 2>nul mkdir "%HOME%" 2>nul mkdir "%XDG_CACHE_HOME%" 2>nul mkdir "%PYTHONUSERBASE%" 2>nul :DOWNLOAD_PYTHON echo Downloading Python... curl -L "https://www.python.org/ftp/python/3.11.8/python-3.11.8-embed-amd64.zip" -o "%BASE_DIR%python.zip" if not exist "%BASE_DIR%python.zip" ( echo ERROR: Failed to download Python echo Would you like to retry? (Y/N^) choice /c yn /n if errorlevel 2 ( echo Installation cancelled exit /b 1 ) goto DOWNLOAD_PYTHON ) echo Extracting Python... powershell -Command "Expand-Archive -Path '%BASE_DIR%python.zip' -DestinationPath '%PYTHON_DIR%' -Force" if not exist "%PYTHON_DIR%\python.exe" ( echo ERROR: Failed to extract Python pause exit /b 1 ) echo Configuring Python for pip... del "%PYTHON_DIR%\python311._pth" 2>nul ( echo python311.zip echo . echo %PYTHON_DIR%\Lib\site-packages echo . echo import site ) > "%PYTHON_DIR%\python311._pth" :DOWNLOAD_GETPIP echo Downloading get-pip.py... curl -L "https://bootstrap.pypa.io/get-pip.py" -o "%BASE_DIR%get-pip.py" if not exist "%BASE_DIR%get-pip.py" ( echo ERROR: Failed to download get-pip.py echo You can try to download it manually from https://bootstrap.pypa.io/get-pip.py echo and place it in %BASE_DIR%get-pip.py echo. echo Would you like to retry the download? (Y/N^) choice /c yn /n if errorlevel 2 ( echo Installation cancelled exit /b 1 ) goto DOWNLOAD_GETPIP ) echo Installing pip... "%PYTHON_DIR%\python.exe" "%BASE_DIR%get-pip.py" --no-warn-script-location if errorlevel 1 ( echo ERROR: Failed to install pip echo This is a critical error. Please ensure: echo 1. You have internet connection echo 2. Any antivirus is not blocking the installation echo 3. You are running this script as administrator pause exit /b 1 ) echo Installing virtualenv... "%PYTHON_DIR%\Scripts\pip.exe" install virtualenv if errorlevel 1 ( echo ERROR: Failed to install virtualenv echo This is a critical error. Please check your internet connection pause exit /b 1 ) echo Creating virtual environment... "%PYTHON_DIR%\Scripts\virtualenv.exe" "%VENV_DIR%" if not exist "%VENV_DIR%\Scripts\activate.bat" ( echo ERROR: Failed to create virtual environment pause exit /b 1 ) echo Installing Open WebUI... 
call "%VENV_DIR%\Scripts\activate.bat" "%VENV_DIR%\Scripts\pip.exe" install --cache-dir="%PIP_DIR%" open-webui if errorlevel 1 ( echo ERROR: Failed to install Open WebUI pause exit /b 1 ) echo Creating postactivate script for relocatable paths... mkdir "%VENV_DIR%\Scripts\postactivate.d" 2>nul ( echo @echo on echo set BASE_DIR=%%~dp0 echo set APPDATA=%%BASE_DIR%%appdata echo set LOCALAPPDATA=%%BASE_DIR%%localappdata echo set USERPROFILE=%%BASE_DIR%%userprofile echo set HOME=%%BASE_DIR%%home echo set XDG_CACHE_HOME=%%BASE_DIR%%cache echo set XDG_CONFIG_HOME=%%BASE_DIR%%config echo set PYTHONUSERBASE=%%BASE_DIR%%pythonuser echo set PIP_CACHE_DIR=%%BASE_DIR%%pip-cache echo set TEMP=%%BASE_DIR%%temp echo set TMP=%%BASE_DIR%%temp ) > "%VENV_DIR%\Scripts\postactivate.d\relocatable.bat" echo Cleaning up... del "%BASE_DIR%python.zip" del "%BASE_DIR%get-pip.py" echo Setup complete! Use run.bat to start Open WebUI pause and run.bat @echo on set BASE_DIR=%~dp0 set VENV_DIR=%BASE_DIR%venv REM Set all environment variables to keep files in Open-WebUI_portable directory set APPDATA=%BASE_DIR%appdata set LOCALAPPDATA=%BASE_DIR%localappdata set USERPROFILE=%BASE_DIR%userprofile set HOME=%BASE_DIR%home set XDG_CACHE_HOME=%BASE_DIR%cache set XDG_CONFIG_HOME=%BASE_DIR%config set PYTHONUSERBASE=%BASE_DIR%pythonuser set PIP_CACHE_DIR=%BASE_DIR%pip-cache set TEMP=%BASE_DIR%temp set TMP=%BASE_DIR%temp REM Create necessary directories mkdir "%APPDATA%" 2>nul mkdir "%LOCALAPPDATA%" 2>nul mkdir "%USERPROFILE%" 2>nul mkdir "%HOME%" 2>nul mkdir "%XDG_CACHE_HOME%" 2>nul mkdir "%XDG_CONFIG_HOME%" 2>nul mkdir "%PYTHONUSERBASE%" 2>nul mkdir "%TEMP%" 2>nul REM Activate virtual environment call "%VENV_DIR%\Scripts\activate.bat" REM Run postactivate script for relocatable paths call "%VENV_DIR%\Scripts\postactivate.d\relocatable.bat" REM Run Open WebUI "%VENV_DIR%\Scripts\open-webui.exe" serve REM Open browser to localhost:8080 start "" http://localhost:8080 pause It initially works fine, but when I rename or move the base folder it stops working. Error: F:\OpenWebUI-portable-NEW>call "F:\OpenWebUI-portable-NEW\venv\Scripts\activate.bat" Fatal error in launcher: Unable to create process using '"F:\OpenWebUI-portable-OLD\venv\Scripts\python.exe" "F:\OpenWebUI-portable-NEW\venv\Scripts\open-webui.exe" serve': The system cannot find the file specified. | so far the only workaround is to create offline portable project folder like this: In case of after moving or renaming when the project folder stops working, do the following: empty these folders: python, venv create backup for the following files and folders: telemetry_user_id, webui.db, vector_db, uploads setup.bat paste the following files and folders: telemetry_user_id, webui.db, vector_db, uploads run.bat import any other backup if necessary. | 1 | 0 |
79,305,588 | 2024-12-24 | https://stackoverflow.com/questions/79305588/use-yolo-with-unbounded-input-exported-to-an-mlpackage-mlmodel-file | I want to create an .mlpackage or .mlmodel file which I can import in Xcode to do image segmentation. For this, I want to use the segmentation package within YOLO to check out if it fit my needs. The problem now is that this script creates an .mlpackage file which only accepts images with a fixed size (640x640): from ultralytics import YOLO model = YOLO("yolo11n-seg.pt") model.export(format="coreml") I want the change something here, probably with coremltools, to handle unbounded ranges (I want to handle arbitrary sized images). It's described a bit here: https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html#enable-unbounded-ranges, but I don't understand how I can implement it with my script. | How to export YOLO segmentation model with flexible input sizes from ultralytics import YOLO import coremltools as ct # Export to torchscript first model = YOLO("yolov8n-seg.pt") model.export(format="torchscript") # Convert to CoreML with flexible input size input_shape = ct.Shape( shape=(1, 3, ct.RangeDim(lower_bound=32, upper_bound=1024, default=640), ct.RangeDim(lower_bound=32, upper_bound=1024, default=640)) ) mlmodel = ct.convert( "yolov8n-seg.torchscript", inputs=[ct.ImageType( name="images", shape=input_shape, color_layout=ct.colorlayout.RGB, scale=1.0/255.0 )], minimum_deployment_target=ct.target.iOS16 ) mlmodel.save("yolov8n-seg-flexible.mlpackage") This creates an .mlpackage that accepts images from 32 32 to 1024 1024 (you can modify these bounds as needed). Default is 640 640. Read about the stuff here: https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html https://docs.ultralytics.com/modes/export/ https://apple.github.io/coremltools/source/coremltools.converters.mil.input_types.html#coremltools.converters.mil.input_types.ImageType | 2 | 1 |
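To verify that the exported package really advertises a flexible input range, a hedged sketch that only reads the model description (skip_model_load avoids compiling the model just to inspect it; the path matches the answer above):

import coremltools as ct

mlmodel = ct.models.MLModel("yolov8n-seg-flexible.mlpackage", skip_model_load=True)
spec = mlmodel.get_spec()
# The image input should report a size range rather than a single fixed 640x640.
for inp in spec.description.input:
    print(inp)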
79,306,155 | 2024-12-24 | https://stackoverflow.com/questions/79306155/move-a-string-from-a-filename-to-the-end-of-the-filename-in-python | If I have documents labeled: 2023_FamilyDrama.pdf 2024_FamilyDrama.pdf 2022-beachpics.pdf 2020 Hello_world bring fame.pdf 2019-this-is-my_doc.pdf I would like them to be FamilyDrama_2023.pdf FamilyDrama_2024.pdf beachpics_2022.pdf Hello_world bring fame_2020.pdf this-is-my_doc_2019.pdf So far, I know how to remove a string from the beginning of the filename, but do not know how to save it and append it to the end of the string. import os for root, dir, fname in os.walk(PATH): for f in fname: os.chdir(root) if f.startswith('2024_'): os.rename(f, f.replace('2024_', '')) | This solution uses parse instead of regular expressions. from parse import parse fnames = [ "2023_FamilyDrama.pdf", "2024_FamilyDrama.pdf", "2022-beachpics.pdf" ] for file_name in fnames: year,sep,name,ext = parse("{:d}{}{:l}.{}", file_name) print(f"{name}_{year}.{ext}") # Output: # FamilyDrama_2023.pdf # FamilyDrama_2024.pdf # beachpics_2022.pdf The statement parse("{:d}{}{:l}.{}", file_name) searches for: {:d} A number (in your case, the year) {} Any number of characters up until the beginning of the next matching rule. In your case, either a - or _. {:l} Any number of letters until the beginning of the next matching rule. . A period {} Everything remaining until the end of the string. The output type from parse() is parse.Result, but I unpacked it like a tuple in this example. You can rename or move the file however you prefer. I like pathlib from pathlib import Path from parse import parse # The directory containing the files. in_dir = Path(r"C:/mystuff/my_pdfs") # Absolute path. Also look into `Path().resolve()` # Create a list of all PDFs in the directory # The initial code snippet had the file names hard-coded as a list of strings. # This creates a list of `Path` objects, so you get just the name of `f` by calling `f.name` files = [f for f in in_dir.iterdir() if f.name.find(".pdf") > -1] # Rename each file, one at a time. for f in files: # Extract the subtrings year,sep,name,ext = parse("{:d}{}{:l}.{}", f.name) # Rearrange them new_name = f"{name}_{year}.{ext}" # Print friendly message for this example print(f"renaming {f.name} to {new_name}") # Rename the file f.rename(new_name) # Output # renaming 2022-beachpics.pdf to beachpics_2022.pdf # renaming 2023_FamilyDrama.pdf to FamilyDrama_2023.pdf # renaming 2024_FamilyDrama.pdf to FamilyDrama_2024.pdf It doesn't matter for your example input, but I should note that {:d} will drop leading zeroes. You can use most format specifications as well. Also see the format examples. >>> from parse import parse >>> foo = "0123_abc.pdf" >>> parse("{0:04}{}{:l}.{}", foo) <Result ('0123', '_', 'abc', 'pdf') {}> Alternatively, you can wipe out the hyphens (since you plan to do that anyway). That allows you to hard-code the _ separator and then treat everything as straight format-less {}. >>> bar = "0123-abc.pdf" >>> parse("{}_{}.{}", bar.replace("-","_")) <Result ('0123', 'abc', 'pdf') {}> Edit: More complicated file names. The file names in the question all followed the template [year][separator][letters].[extension], so that's why I chose the {:l} pattern. 
If instead, you have a filename like 2022-workplace-picture-site-report.pdf, then you need a rule for the template [year][one character of anything][any number of anything].[extension] >>> from parse import parse >>> fname = "2022-workplace-picture-site-report.pdf" >>> parse("{:d}{}{}.{}", fname) <Result (2022, '-', 'workplace-picture-site-report', 'pdf') {}> | 2 | 2 |
79,303,697 | 2024-12-23 | https://stackoverflow.com/questions/79303697/snakemake-expand-a-string-saved-in-a-variable | My question is very simple, but I can't find how to do it in the Snakemake documentation. Let's say I have a very long string to expand, like : rule all: input: expand("sim_files/test_nGen{ngen}_N{N}_S{S}_NR{NR}_DG{DG}_SS{SS}/test_nGen{ngen}_N{N}_S{S}_NR{NR}_DG{DG}_SS{SS}.yaml", ngen=list_ngen, N=list_N, S=list_S, NR=list_NR, DG=list_DG, SS=list_SS) I want to save my long string in a variable, like this: prefix = "test_nGen{ngen}_N{N}_S{S}_NR{NR}_DG{DG}_SS{SS}" rule all: input: expand("sim_files/{prefix}/{prefix}.yaml",ngen=list_ngen, N=list_N, S=list_S, NR=list_NR, DG=list_DG, SS=list_SS) But when I do that, Snakemake doesn't expand inside the wildcard 'prefix': WildcardError in file /home/bunelpau/Travail/DossierSync/test_snt/Snakefile, line 13: No values given for wildcard 'prefix'. Do I have to use this super long string, or is there a way to save it in some variable like I want to? | Use an f-string to format the string before expansion: prefix = "test_nGen{ngen}_N{N}_S{S}_NR{NR}_DG{DG}_SS{SS}" rule all: input: expand(f"sim_files/{prefix}/{prefix}.yaml",ngen=list_ngen, N=list_N, S=list_S, NR=list_NR, DG=list_DG, SS=list_SS) | 2 | 1 |
79,305,237 | 2024-12-24 | https://stackoverflow.com/questions/79305237/how-can-i-find-the-column-containing-the-third-nan-value-in-each-row-of-a-datafr | I have been given a problem to solve for my assignment. Here's the description for my problem: In the cell below, you have a DataFrame df that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the column which contains the third NaN value. You should return a Series of column labels: e, c, d, h, d nan = np.nan data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan], [ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16], [ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01], [0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan], [ nan, nan, 0.41, nan, 0.05, nan, 0.61, nan, 0.48, 0.68]] columns = list('abcdefghij') df = pd.DataFrame(data, columns=columns) # write a solution to the question here This is my solution: result = df.apply(lambda row: row.isna().idxmax(), axis=1) print(result) My code returns b, a, a, a, b, a, whereas the desired output is e, c, d, h, d. My solution returns the index of the first NaN but according to the question I want the third NaN in each row. How can i do that? Or is there an alternate solution that gives the desired output. | idxmax was a good approach, you can combine this with a mask that indicates the 3rd NaN, for this use cumsum: m = df.isna() out = (m & m.cumsum(axis=1).eq(3)).idxmax(axis=1) since idxmax always returns the first value if there are several maxes, this could be simplified to: out = df.isna().cumsum(axis=1).eq(3).idxmax(axis=1) Output: 0 e 1 c 2 d 3 h 4 d dtype: object Intermediates: # m a b c d e f g h i j 0 False True True False True False False False True True 1 True True True False False True True False False False 2 True True False True False False True True False False 3 False True True False False False False True True True 4 True True False True False True False True False False # m.cumsum(axis=1) a b c d e f g h i j 0 0 1 2 2 3 3 3 3 4 5 1 1 2 3 3 3 4 5 5 5 5 2 1 2 2 3 3 3 4 5 5 5 3 0 1 2 2 2 2 2 3 4 5 4 1 2 2 3 3 4 4 5 5 5 # m.cumsum(axis=1).eq(3) a b c d e f g h i j 0 False False False False True True True True False False 1 False False True True True False False False False False 2 False False False True True True False False False False 3 False False False False False False False True False False 4 False False False True True False False False False False # m & m.cumsum(axis=1).eq(3) a b c d e f g h i j 0 False False False False True False False False False False 1 False False True False False False False False False False 2 False False False True False False False False False False 3 False False False False False False False True False False 4 False False False True False False False False False False Alternatively, if you can't assume a specific number of NaNs in each row, using melt + groupby.nth: N = 3 out = (df.melt(ignore_index=False) # reshape to long .loc[lambda x: x['value'].isna()] # only keep NaNs .groupby(level=0)['variable'].nth(N-1) # keep 3rd row per group ) Output: 1 c 2 d 4 d 0 e 3 h Name: variable, dtype: object | 1 | 2 |
79,304,669 | 2024-12-24 | https://stackoverflow.com/questions/79304669/when-yielding-anyof-events-is-it-safe-to-use-if-req-triggered-instead-of-if | There seems to be a quirky behaviour when working with requests without the with statement as a context manager, which causes resources to be locked up permanently if using the standard if req in res pattern and both conditions occur simulatenously as follows. req = resource.request() result = yield req | env.timeout(5) # req and timeout occurs simultaneously if req in result: # DO SOMETHING WITH RESOURCE resource.release(req) else: req.cancel() # Important as req is still in resource queue and can be triggered after timeout, which is another possible cause of a resource being locked permanently assert not req.triggered # Quirky behaviour here as req can be triggered at this point After doing further exploration, I found that what is stored within result is only processed events, so req could be triggered but not yet processed (although I feel that a request() being triggered leads to being processed immediately is more intuitive), so we enter the else block of code. After executing the code within the else block, the req would then be processed, but since code execution is already past the if block, the req is left dangling and unreleased. Originally, I thought that req.cancel() was sufficient but it seems like it does not cancel an event that is triggered (correct me if I'm wrong). As such, the alternative that I came up with was to do this which seemingly fixed the problem. # Rest of code else: req.cancel() resource.release(req) # Doesn't feel right (I might be mistaken, so correct me if needed) If req gets processed later on, the release or equivalently get event is scheduled later and will definitely release the resource so it is not left dangling and even if req was not granted, release(req) will not cause an exception. I did not like this approach however. By right, I should end up in the if block if I do get the resource at the given simulation time, even if the actual processing of req is "later" on in the queue which reflects the actual behaviour. As such, I tried another method which again seemingly also works ... # rest of code if req.triggered: # rest of code else: req.cancel() # rest of code So, as the title of this post suggests, is this final method equivalent to doing the standard pattern and thus safe? Would there be any dangers in doing so, since I am technically "skipping time" by treating the req as processed before it is actually processed (I do not have a case where a resource is used for 0 time and then released, but would be interesting to know if this could be a problem as well)? Perhaps finding a way to re-order the events such that req is processed first despite being put in the queue later than the timeout event is better? Thank you very much in advance! Edit: Sample code that demonstrates the behaviour. 
import random import simpy def source(env, number, counter): for i in range(number): c = customer(env, counter) env.process(c) yield env.timeout(1) def customer(env, counter): patience = 5 while True: req = counter.request() results = yield req | env.timeout(patience) print(f'req triggered = {req.triggered}, req processed = {req.processed}, req in result = {req in results}') if req in results: yield env.timeout(5) counter.release(req) break else: if req.triggered: assert 0 req.cancel() random.seed(42) env = simpy.Environment() counter = simpy.Resource(env, capacity=1) env.process(source(env, 100, counter)) env.run(until=100) | You are correct. There seems to be a edge case where the request is triggered but not processed and expression req in results is false. However when the event is triggered, the resource is seized, which needs to be released. Also if a event has been triggered, the cancel does nothing. I looked that context manager for a request and notice it always does a release So I suggest that the "safe" thing to do if you are not using a with statement is to use a try-finally block, as I did in this code example. If you cannot use a try-finally then you are right, it looks like you should do both a cancel, and a release to "cancel" a request. Added some logging to show the resource seize / release import random import simpy def source(env, number, counter): for i in range(number): c = customer(env, counter, i) env.process(c) yield env.timeout(1) def customer(env, counter, id): patience = 5 while True: req=None try: to = env.timeout(patience) req = counter.request() results = yield req | to print(env.now,id,f'req triggered = {req.triggered}, req processed = {req.processed}, req in result = {req in results}', (req in req.resource.users), (req in req.resource.queue)) if req in results: yield env.timeout(5) break else: if req.triggered: print(id, f'triggered, has resource: {(req in req.resource.users)}, in queue: {(req in req.resource.queue)}') req.cancel() finally: print(id, f'has resource: {(req in req.resource.users)}, in queue: {(req in req.resource.queue)}') counter.release(req) print(id, f'has resource: {(req in req.resource.users)}, in queue: {(req in req.resource.queue)}') random.seed(42) env = simpy.Environment() counter = simpy.Resource(env, capacity=1) env.process(source(env, 100, counter)) env.run(until=50) | 2 | 2 |
79,300,265 | 2024-12-21 | https://stackoverflow.com/questions/79300265/is-it-possible-to-type-annotate-python-function-parameter-used-as-typeddict-key | While working through code challenges I am trying to use type annotations for all function parameters/return types. I use mypy in strict mode with the goal of no errors. I've spent some time on this one and can't figure it out - example of problem: from typing import Literal, NotRequired, TypedDict class Movie(TypedDict): Title: str Runtime: str Awards: str class MovieData(TypedDict): Awards: NotRequired[str] Runtime: NotRequired[str] def get_movie_field(movies: list[Movie], field: Literal['Awards', 'Runtime']) -> dict[str, MovieData]: return {movie['Title']: {field: movie[field]} for movie in movies} # Check with mypy 1.14.0: PS> mypy --strict .\program.py program.py:13: error: Expected TypedDict key to be string literal [misc] Found 1 error in 1 file (checked 1 source file) Is there some way to make this work? I realize I can append # type: ignore to the line but I'm curious if there's another way. I've tried a bunch of different permutations such as ensuring field matches one of the literal values before the dict comprehension but nothing seems to help. I like @chepner's solution. Unfortunately, I couldn't figure out how to make it work with mypy in strict mode - taking his example: def get_movie_field(movies, field): return {movie['Title']: {field: movie[field]} for movie in movies} I get: PS> mypy --strict program.py program.py:18: error: Function is missing a type annotation [no-untyped-def] Found 1 error in 1 file (checked 1 source file) As the last get_movie_field isn't type annotated. From reviewing the mypy docs, I updated the last get_movie_field function as follows - but it still doesn't fix the problem: # @chepner's full solution updated with type annotations for # last get_movie_field: from typing import Literal, NotRequired, TypedDict, overload class Movie(TypedDict): Title: str Runtime: str Awards: str class MovieData(TypedDict): Awards: NotRequired[str] Runtime: NotRequired[str] @overload def get_movie_field(movies: list[Movie], field: Literal['Awards']) -> dict[str, MovieData]: ... @overload def get_movie_field(movies: list[Movie], field: Literal['Runtime']) -> dict[str, MovieData]: ... def get_movie_field(movies: list[Movie], field: Literal['Awards']|Literal['Runtime'] ) -> dict[str, MovieData]: return {movie['Title']: {field: movie[field]} for movie in movies} PS> mypy --strict program.py program.py:19: error: Expected TypedDict key to be string literal [misc] Found 1 error in 1 file (checked 1 source file) However, this inspired me to find another approach. 
I'm not saying it's great but it does work in the sense that mypy doesn't complain: from typing import Literal, NotRequired, TypedDict class Movie(TypedDict): Title: str Runtime: str Awards: str class MovieData(TypedDict): Awards: NotRequired[str] Runtime: NotRequired[str] def create_MovieData(key: str, value: str) -> MovieData: """Creates a new TypedDict using a given key and value.""" if key == 'Awards': return {'Awards': value} elif key == 'Runtime': return {'Runtime': value} raise ValueError(f"Invalid key: {key}") def get_movie_field(movies: list[Movie], field: Literal['Awards', 'Runtime']) -> dict[str, MovieData]: return {movie['Title']: create_MovieData(field, movie[field]) for movie in movies} PS> mypy --strict program.py Success: no issues found in 1 source file | You can use TypeVar to help mypy deduce the exact literal that is being used, and this can even be propagated to the output. That way mypy knows which keys are definitely present in the output dicts and which are not. Instead of: class MovieData(TypedDict): Awards: NotRequired[str] Runtime: NotRequired[str] You would use: K = TypeVar('K', Literal['Runtime'], Literal['Awards']) # NB. NOT TypeVar('K', Literal['Runtime', 'Awards']) -- TypeError # AND NOT TypeVar('K', bound=Literal['Runtime', 'Awards']) -- coalesces to str MovieData = dict[K, str] Full example (playground): from typing import TypedDict, TypeVar, Literal class Movie(TypedDict): Title: str Runtime: str Awards: str K = TypeVar('K', Literal['Runtime'], Literal['Awards']) MovieData = dict[K, str] def extract(movies: list[Movie], key: K) -> list[MovieData[K]]: return [{key: movie[key]} for movie in movies] res = extract([], key='Runtime') reveal_type(res) # note: Revealed type is "list[dict[Literal['Runtime'], str]]" | 1 | 1 |
79,305,200 | 2024-12-24 | https://stackoverflow.com/questions/79305200/static-typing-of-python-regular-expression-incompatible-type-str-expected | Just to be clear, this question has nothing to do with the regular expression itself and my code is perfectly running even though it is not passing mypy strict verification. Let's start from the basic, I have a class defined as follows: from __future__ import annotations import re from typing import AnyStr class MyClass: def __init__(self, regexp: AnyStr | re.Pattern[AnyStr]) -> None: if not isinstance(regexp, re.Pattern): regexp = re.compile(regexp) self._regexp: re.Pattern[str] | re.Pattern[bytes]= regexp The user can build the class either passing a compiled re pattern or AnyStr. I want the class to store in the private _regexp attribute the compiled value. So I check if the user does not provided a compiled pattern, then I compile it and assign it to the private attribute. So far so good, even though I would have expected self._regexp to be type re.Pattern[AnyStr] instead of the union of the type pattern types. Anyhow, up to here everything is ok with mypy. Now, in some (or most) cases, the user provides the regexp string via a configuration TOML file, that is read in, parsed in a dictionary. For this case I have a class method constructor defined as follow: @classmethod def from_dict(cls, d: dict[str, str]) -> MyClass: r = d.get('regexp') if r is None: raise KeyError('missing regexp') return cls(regexp=r) The type of dictionary will be dict[str, str]. I have to check that the dictionary contains the right key to prevent a NoneType in case the get function cannot find it. I get the error: error: Argument "regexp" to "MyClass" has incompatible type "str"; expected "AnyStr | Pattern[AnyStr]" [arg-type] That looks bizarre, because str should be compatible with AnyStr. Let's say that I modify the dictionary typing to dict[str, AnyStr]. Instead of fixing the problem, it multiplies it because I get two errors: error: Argument "regexp" to "MyClass" has incompatible type "str"; expected "AnyStr | Pattern[AnyStr]" [arg-type] error: Argument "regexp" to "MyClass" has incompatible type "bytes"; expected "AnyStr | Pattern[AnyStr]" [arg-type] It looks like I am in a loop: when I think I have fixed something, I just moved the problem back elsewhere. | AnyStr is a type variable, and type variables should either appear 2+ times in a function signature or 1+ time in the signature and 1 time in the enclosing class as a type variable. If you have neither of these situations, you'd be better off to use a union. See mypy Playground β comment by dROOOze | 1 | 1 |
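A minimal sketch of the union-based signature the answer points to, keeping the class and method names from the question; narrowing str and bytes in separate branches lets mypy solve re.compile's AnyStr for each case, which is intended to satisfy --strict:

from __future__ import annotations

import re


class MyClass:
    def __init__(self, regexp: str | bytes | re.Pattern[str] | re.Pattern[bytes]) -> None:
        # Narrow the union one branch at a time so each re.compile() call sees a
        # single concrete str or bytes type.
        if isinstance(regexp, str):
            compiled: re.Pattern[str] | re.Pattern[bytes] = re.compile(regexp)
        elif isinstance(regexp, bytes):
            compiled = re.compile(regexp)
        else:
            compiled = regexp
        self._regexp = compiled

    @classmethod
    def from_dict(cls, d: dict[str, str]) -> MyClass:
        r = d.get('regexp')
        if r is None:
            raise KeyError('missing regexp')
        return cls(regexp=r)  # str is a member of the union, so this type-checks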
79,306,481 | 2024-12-24 | https://stackoverflow.com/questions/79306481/how-can-i-save-a-figure-to-pdf-with-a-specific-page-size-and-padding | I have generated a matplotlib figure that I want to save to a PDF. So far, this is straightforward. import matplotlib.pyplot as plt x = [1, 2, 3, 4] y = [3, 5, 4, 7] plt.scatter(x, y) plt.savefig( "example.pdf", bbox_inches = "tight" ) plt.close() However, I would like the figure to appear in the middle of a standard page and have some white space around the figure, rather than fill up the entire page. How can I tweak the above to achieve this? | Another option is to use constrained layout and set a rectangle that you want all the plot elements to be contained in. import matplotlib.pyplot as plt x = [1, 2, 3, 4] y = [3, 5, 4, 7] fig, ax = plt.subplots(figsize=(8.3, 11.7), layout='constrained') fig.get_layout_engine().set(rect=(0.15, 0.3, 0.7, 0.4)) # left, bottom, width, height in fractions of the figure ax.scatter(x, y) fig.savefig('example.pdf') plt.close() | 5 | 2 |
79,302,863 | 2024-12-23 | https://stackoverflow.com/questions/79302863/how-to-use-python-to-create-orc-file-compressed-with-zlib-compression-level-9 | I want to create an ORC file compressed with ZLIB compression level 9. Thing is, when using pyarrow.orc, I can only choose between "Speed" and "Compression" mode, and can't control the compression level E.g. orc.write_table(table, '{0}_zlib.orc'.format(file_without_ext), compression='ZLIB', compression_strategy='COMPRESSION') Ideally I'm looking for a non existing compression_level parameter, any help would be appreciated. | The Apache ORC library (which is used internally by other libraries for ORC support) doesn't allow to set the compression level freely (neither the C++ nor the Java implementation). The C++ library supports only CompressionStrategy_SPEED and CompressionStrategy_COMPRESSION (source): enum CompressionStrategy { CompressionStrategy_SPEED = 0, CompressionStrategy_COMPRESSION }; The Java library offers an additional FASTEST option (source): enum SpeedModifier { /* speed/compression tradeoffs */ FASTEST, FAST, DEFAULT } There is an open request in the project about this: Support maximum compression ratio in setSpeed. It was created a year ago but the feature has not been implemented so far. So, unless you patch the library yourself, there is no way to set a high compression level. | 1 | 2 |
79,304,262 | 2024-12-23 | https://stackoverflow.com/questions/79304262/curve-on-top-of-heatmap-seaborn | I'm trying to reproduce this graph: Here's my code: #trials_per_sim_list = np.logspace(1, 6, 1000).astype(int) trials_per_sim_list = np.logspace(1, 5, 100).astype(int) trials_per_sim_list.sort() sharpe_ratio_theoretical = pd.Series({num_trials:get_expected_max_SR(num_trials, mean_SR = 0, std_SR = 1) for num_trials in trials_per_sim_list}) sharpe_ratio_theoretical = pd.DataFrame(sharpe_ratio_theoretical, columns = ['max{SR}']) sharpe_ratio_theoretical.index.names = ['num_trials'] sharpe_ratio_sims = get_max_SR_distribution( #num_sims = 1e3, num_sims = 100, trials_per_sim_list = trials_per_sim_list, mean_SR = 0.0, std_SR = 1.0) heatmap_df = sharpe_ratio_sims.copy() heatmap_df['count'] = 1 heatmap_df['max{SR}'] = heatmap_df['max{SR}'].round(3) heatmap_df = heatmap_df.groupby(['num_trials', 'max{SR}']).count().reset_index() heatmap_df = heatmap_df.pivot(index = 'max{SR}', columns = 'num_trials', values = 'count') heatmap_df = heatmap_df.fillna(0) heatmap_df = heatmap_df.sort_index(ascending = False) fig, ax = plt.subplots() sns.heatmap(heatmap_df, cmap = 'Blues', ax = ax) sns.lineplot(x = sharpe_ratio_theoretical.index, y = sharpe_ratio_theoretical['max{SR}'], linestyle = 'dashed', ax = ax) plt.show() I think the issue is that the heatmap is plotting on a log-scale because I've inputted a log-scale, while my lineplot isn't mapping onto the save values. My result so far is this: If you would like to see the code I'm using for the functions please go here: https://github.com/charlesrambo/ML_for_asset_managers/blob/main/Chapter_8.py Edit: No response so far. If it's the quant finance part that's confusing, here's a more straight forward example. I would like to add the graph of y = 1/sqrt{x} to my plot. Here's the code: import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt vals = np.logspace(0.5, 3.5, 100).astype(int) theoretical_values = pd.Series(1/np.sqrt(vals), index = vals) num_runs = 10000 trials_per_run = 10 exprimental_values = np.zeros(shape = (num_runs * len(vals), 2)) for i, n in enumerate(vals): for j in range(num_runs): dist = stats.norm.rvs(size = (trials_per_run, n)).mean(axis = 1) exprimental_values[num_runs * i + j, 0] = n exprimental_values[num_runs * i + j, 1] = np.std(dist, ddof = 1) exprimental_values = pd.DataFrame(exprimental_values, columns = ['num', 'std']) heatmap_df = exprimental_values.copy() heatmap_df['count'] = 1 heatmap_df['std'] = heatmap_df['std'].round(3) heatmap_df = heatmap_df.groupby(['num', 'std'])['count'].sum().reset_index() heatmap_df = heatmap_df.pivot(index = 'std', columns = 'num', values = 'count') heatmap_df = heatmap_df.fillna(0) heatmap_df = heatmap_df.div(heatmap_df.sum(axis = 0), axis = 1) heatmap_df = heatmap_df.sort_index(ascending = False) fig, ax = plt.subplots() sns.heatmap(heatmap_df, cmap = 'Blues', ax = ax) sns.lineplot(x = theoretical_values.index, y = theoretical_values, ax = ax) plt.show() I'm getting this: | I replicate the visual effect below, using matplotlib: To speed things up I put values into a list rather than into a growing dataframe. To speed things up further, you could do one or a combination of: Compute the inner loop values in a single shot by adding an extra dimension to dist (i.e. this will dispense with the need for an inner loop). 
Pre-allocate a numpy array rather than concatenating to a list Use numba for easy parallelisation of loops I do the pandas manipulations in a chained manner. Reproducible example Sample data (modified to use list concatenation rather than DataFrame concatenation): import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt # # Data for testing (using fewer vals compared to OP) # vals = np.logspace(1, 6 - 2, 100).astype(int) theoretical_values = pd.Series(1 / vals**0.5, index=vals) num_runs = 1000 trials_per_run = 10 #This list keeps track of (num, std) values experimental_values_list = [] for idx, val in enumerate(vals): #Report progress (nb. the `tqdm` package gives you nicer progress bars) print( f'val {idx + 1} of {len(vals)} ({(idx + 1) / len(vals):.1%}) [val={val:,}]', ' ' * 100, end='\r' ) for _ in range(num_runs): dist = stats.norm.rvs(size=(trials_per_run, val)).mean(axis=1) experimental_values_list.append( (val, dist.std(ddof=1)) ) #List of [(num, std), ...] to DataFrame experimental_values_df = pd.DataFrame( experimental_values_list, columns=['num', 'std'], ) #View dataframe experimental_values_df Manipulate data into heatmap_df which is formatted as std-by-val: # # Manipulate into an std-by-num table, akin to a heatmap # heatmap_df = ( experimental_values_df #Round std to 3dp .assign(std_round=lambda df_: df_['std'].round(3)) #Count occurences of (num, std_round) pairs, and # unstack (pivot) num to the columns axis .value_counts(['std_round', 'num']).unstack().fillna(0) # #sort by std_round large-small # Could equivalently remove this line and append .iloc[::-1] to line above .sort_index(ascending=False) # #Divide each row by the row's sum .pipe(lambda df_: df_.div(df_.sum(axis=1), axis='index')) ) display(heatmap_df) #Optionally display a styled df for a quick visualisation ( #Subsample for brevity heatmap_df.iloc[::30, ::7] #Style for coloured bolded text .style .format(precision=4) .text_gradient(cmap='magma') .set_table_styles([{'selector': 'td', 'props': 'font-weight: bold'}]) .set_caption('subsampled table coloured by value') ) Intermediate output is a styled dataframe for quick visualisation: Heatmap using matplotlib, with a curve overlay: # # Heatmap and curve # f, ax = plt.subplots(figsize=(8, 3), layout='tight') mappable = ax.pcolormesh( heatmap_df.columns, heatmap_df.index, heatmap_df, cmap='inferno', vmax=0.07, ) ax.set_ylim(heatmap_df.index.min(), 0.1) ax.set_xscale('log') #Title and axis labels ax.set_title('Dispersion across strategies', fontweight='bold') ax.set(xlabel='Number of trials', ylabel='standard deviation') #Add colorbar (and optionally adjust its positioning) f.colorbar(mappable, label='normalized counts', aspect=7, pad=0.02) #Overlay a curve through the mean curve = experimental_values_df.groupby('num')['std'].mean().rename('mean std') curve.plot(ax=ax, color='black', lw=2, legend=True) | 2 | 1 |
79,299,527 | 2024-12-21 | https://stackoverflow.com/questions/79299527/why-numpy-array-so-slow-when-editing-the-data-inside | I wrote an algorithm that uses a long list. Since numpy.array should perform better in dealing with long data, I wrote another version using numpy.array. list version: import math def PrimeTable(n:int) -> list[int]: nb2m1 = n//2 - 1 l = [True]*nb2m1 for i in range(1,(math.isqrt(n)+1)//2): if l[i-1]: for j in range(i*3,nb2m1,i*2+1): l[j] = False return [2] + [i for i,v in zip(range(3,n,2), l) if v] array version: import math, numpy def PrimeTable2(n:int) -> list[int]: nb2m1 = n//2 - 1 l = numpy.full(nb2m1, True) for i in range(1,(math.isqrt(n)+1)//2): if l[i-1]: for j in range(i*3,nb2m1,i*2+1): l[j] = False return [2] + [i for i,v in zip(range(3,n,2), l) if v] It turns out that numpy.array version is 1x slower than list version. The graph:where y0 is PrimeTable and y1 is PrimeTable2 Why is numpy.array so slow, and how can I improve it? | Numpy is optimized for vectorized operations, where large chunks of data are processed in bulk. Instead of updating element-by-element(l[j] = False) we can use slicing to update a range of values at once. Also reducing the number of python loops should make the code more effective. I've further optimised the code by using a single boolean array. The sieve tracks only odd numbers directly. This eliminates the need for multiplication and division steps for odd indices during iteration. After the sieve is complete, the prime numbers are directly reconstructed from the indices, avoiding intermediate computations. Try out this updated code. This should work better than the list version of code: import numpy as np def PrimeTable2_optimized_v2(n: int) -> list[int]: if n < 2: return [] if n == 2: return [2] # Only consider odd numbers; even numbers > 2 are not prime sieve = np.ones(n // 2, dtype=bool) limit = int(n**0.5) + 1 for i in range(1, limit // 2 + 1): if sieve[i]: start = 2 * i * (i + 1) sieve[start::2 * i + 1] = False # Convert sieve to primes primes = 2 * np.nonzero(sieve)[0] + 1 primes[0] = 2 return primes.tolist() | 2 | 1 |
79,304,226 | 2024-12-23 | https://stackoverflow.com/questions/79304226/should-i-manually-patch-the-pandas-dataframe-query-vulnerability-or-wait-for-a | I'm currently addressing the Pandas DataFrame.query() Code Injection vulnerability, which allows arbitrary code execution if unsafe user input is processed by the .query() method. I understand this issue arises because the query() method can execute expressions within the context of the DataFrame, potentially leading to security risks. My questions are as follows: Should I manually patch the vulnerability? For example, I can override the query() method in the Pandas source code to validate expressions using Python's ast module and block unsafe constructs. Is this a recommended approach, or does it pose potential risks (e.g., breaking functionality, maintaining the patch long-term)? How frequently does the Pandas team release updates or patches for vulnerabilities like this? Should I wait for an official update to address this issue? Are there any best practices for monitoring when a fix becomes available? What is the general practice for addressing such library-level vulnerabilities in production environments? Is it better to apply temporary mitigations (like validating input in my application code) and wait for the library maintainers, or should I fork/patch the library for immediate resolution? Any insights, especially from those with experience in maintaining secure Python applications, would be greatly appreciated! | Should I wait for an official update to address this issue? Are there any best practices for monitoring when a fix becomes available? The 'vulnerability' in question strikes me as basically unfixable, so I would not expect a fix to become available. The method DataFrame.query() is designed to allow a user to run essentially arbitrary Python code to filter a DataFrame. Passing untrusted code to DataFrame.query() is exactly as dangerous as passing untrusted code to eval(). I asked about this on the Pandas issue tracker, and this was the response from one of Pandas's contributors: Q: My question is about Pandas's security model. What security guarantees does Pandas make about DataFrame.query() with an attacker-controlled expr? My intuition about this is "none, don't do that," but I'm wondering what the Pandas project thinks. A: This is indeed my take, both query and eval should be used with string literals and not with strings provided by or derived from untrusted user input. (Source.) For example, I can override the query() method in the Pandas source code to validate expressions using Python's ast module and block unsafe constructs. That strikes me as very difficult to do in the general case. If you look at this thread, you'll see example after example of people proposing a way to sandbox Python execution, and it turns out to not be a perfect sandbox because of a feature of Python that the answer doesn't take into account. For that reason, I think that disallowing unsafe constructs is a doomed approach, because it requires anticipating every unsafe construct. Rather, I think you should come up with a list of safe constructs, and only allow those. For example, you could compare the expression to a known-good list of expressions, and only allow those. allowlist = [ 'num > 0', 'num == 0', 'num < 0', ] if expr in allowlist: result = df.query(expr) else: raise Exception('Illegal expr value') This restricts the strings that can be passed to DataFrame.query() to one of these pre-approved values. 
Is it better to apply temporary mitigations (like validating input in my application code) and wait for the library maintainers, or should I fork/patch the library for immediate resolution? That's hard to answer in general. To me, I would think about three factors: 1. How hard is it to validate the input within my application? 2. How complex or intrusive is the change to the library? 3. How difficult will it be to keep this patch up to date? In this specific case, I would suggest validating input within application code, unless #1 is really difficult. | 3 | 1 |
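If a fixed list of whole expressions is too rigid, one way to generalize the same allowlist idea is to build the expression only from pre-approved pieces. A hedged sketch, with hypothetical column names:

import pandas as pd

ALLOWED_COLUMNS = {"num", "price"}           # hypothetical schema
ALLOWED_OPS = {">", ">=", "<", "<=", "=="}

def safe_query(df: pd.DataFrame, column: str, op: str, value: float) -> pd.DataFrame:
    if column not in ALLOWED_COLUMNS or op not in ALLOWED_OPS:
        raise ValueError("Illegal filter")
    # Only validated names/operators and a coerced number ever reach .query(),
    # so no attacker-controlled code is evaluated.
    return df.query(f"`{column}` {op} {float(value)}")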
79,301,935 | 2024-12-23 | https://stackoverflow.com/questions/79301935/how-to-solve-the-derivative-of-the-expression-with-respect-to-d | The integral is Where the variables follow the following distribution: Thus, the integral becomes: Now, my code in python is to integrate the expression with \(W_{1:3}\) being \(1,2,3\) respectively, \(r_1 = 0\), \(v\), which is the variance is \(1\), and the time constant \(tau\) being \(1\), I want to estimate roots of the derivative to find \(D\) as following: import sympy as sp N = 3 w_n = [k for k in range(N)] r_n = [sp.symbols(f'r_{i+1}') for i in range(N)] D = sp.symbols("D",real=True,) v = sp.symbols("v",real=True) t = sp.symbols("t",real=True) def expo_power(i=0,t=t,D=D,v=v): f = ((((w_n[i] - r_n[i]) ** 2)/(2 * v)) + (((r_n[i] - r_n[i-1]) ** 2)/(4 * D * t))) return f def expo_power_sum(lower_bound = 2, upper_bound=N,t=t,D=D,v=v): f = 0 for i in range(lower_bound, upper_bound): f += expo_power(i,t=t,D=D,v=v) return f def P_w_n_P_r_n(i=0,v=v,D=D,t=t): return (1/sp.sqrt(8 * (sp.pi ** 2) * D * t * v) ** (N - 1) ) \ * sp.exp(-expo_power_sum(lower_bound=2,upper_bound=N,t=t,D=D,v=v)) def P_w_i_r_i(i = 0, v=v): return (1/(sp.sqrt(2 * sp.pi * v))) \ * sp.exp(-((w_n[i] - r_n[i]) ** 2/(2 * v))) def normal_dist(x =0, m = r_n[0], v =v): return (1/(sp.sqrt(2 * sp.pi * v))) \ * sp.exp(-(((x - m) ** 2)/(2 * v))) def integrand(v = v,t=t,lower = 2): f = sp.log(P_w_n_P_r_n(i=lower,v=v,D=D,t=t) * P_w_i_r_i(v=v) * normal_dist(x=r_n[0],v=v)) return f def integrate(v=v,t=t, lower_bound = -sp.oo, upper_bound= sp.oo): function = integrand(v=v,t=t) for i in range(N): inte = sp.Integral(function,(r_n[i],lower_bound,upper_bound)) inte = inte.doit() inte = inte.evalf() function = inte k = inte sp.pprint(k) d = sp.diff(k,D) sol = sp.solve(d,D) sp.pprint(sol) integrate(v=1,t=1,lower_bound=0,upper_bound=10) Now, the solution displayed is the ratio between two unevaluated integrals. note that and t in the python code is (tau) and we're finding the roots of the derivative with respect to (D) after marginalizing out (r_1,r_2,r_3), the integral is not being evaluated, particularly, for (r_2) Does the integral have a closed form? if so can the derivative of the form be solved for $D$? if not, what are the alternative to produce a solution for $P'(w_{1:3}|D,v) = 0$ with respect to $D$? EDIT: code edited after fixing bugs pointed out by @RoberDodier | I can't tell fully what's going on, but there seem to be some problems with the way the problem is stated, and I'm thinking that you need to straighten out the problem statement before seeking a solution. If I'm not mistaken, all the distributions involved are normal distributions, and you are marginalizing (integrating) over some of them. If so, the result should again be a normal distribution (if there are still some free variables) or some constant involving the mean and variance. My guess is that if you phrased the joint distribution as a multivariate normal distribution with a suitable covariance matrix, you would be able to state the result in terms of operations on the covariance. But even without that, it should be possible to get an exact result in terms of D, v, and t, if the problem is set up right. The first problem I see is that the integrand (function in your function integrate) is a function of r_2 only. It should be a function of r_1 and r_3 as well, shouldn't it? Shouldn't r_n be assigned [r_1, r_2, r_3] and not [0, r_2, r_3] ? That function is free of r_3 suggests that expo_power_sum isn't working right. 
Finally, my advice is to work with symbolic infinity (plus/minus sp.oo) when stating the integrals instead of plus/minus 1000 -- substituting any finite value is likely to make the results more complicated than they need to be. | 2 | 1 |
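A minimal illustration of that last point, using a single Gaussian factor like the ones in the question (v is declared positive so sympy can evaluate the integral):

import sympy as sp

x, m = sp.symbols('x m', real=True)
v = sp.symbols('v', positive=True)
gauss = sp.exp(-(x - m) ** 2 / (2 * v)) / sp.sqrt(2 * sp.pi * v)

# Symbolic infinity yields the exact closed form ...
print(sp.simplify(sp.integrate(gauss, (x, -sp.oo, sp.oo))))  # 1
# ... while finite bounds such as (0, 1000) leave erf() terms in the result.
print(sp.integrate(gauss, (x, 0, 1000)))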
79,305,343 | 2024-12-24 | https://stackoverflow.com/questions/79305343/how-to-move-x-axis-on-top-of-the-plot-in-plotnine | I am using plotnine with a date x-axis plot and want to put x-axis date values on top of the chart but couldn't find a way to do it. I have seen in ggplot it can be done using scale_x_discrete(position = "top") but with scale_x_datetime() I couldn't find any position parameter in plotnine. Below is the sample code: import pandas as pd from plotnine import * # Create a sample dataset data = pd.DataFrame({ 'date': pd.date_range('2022-01-01', '2022-12-31',freq="M"), 'value': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120] }) # Create the line chart (ggplot(data, aes(x='date', y='value')) + geom_line() + labs(title='Line Chart with Dates on X-Axis', x='Date', y='Value') + theme_classic() ) Would really appreciate any help !! Update (facet plot) after the answer: # sample dataset new_data = { 'date': pd.date_range('2022-01-01', periods=8, freq="ME"), 'parent_category': ['Electronics', 'Electronics', 'Fashion', 'Fashion', 'Home Goods', 'Electronics', 'Fashion','Electronics'], 'child_category': ['Smartphones', 'Laptops', 'Shirts', 'Pants', 'Kitchenware','Laptops', 'Shirts', 'Smartphones'] } # Create DataFrame new_data = pd.DataFrame(new_data) new_data plot with answer tips (ggplot(new_data, aes(x="date", y="child_category", group="child_category")) + geom_line(size=1, color="pink") + geom_point(size=3, color="grey") + facet_wrap("parent_category", ncol=1, scales="free_y") + theme_538() + theme(axis_text_x=element_text(angle=45, hjust=1), panel_grid_major=element_blank(), figure_size=(8, 6), axis_line_x=element_line(position=('axes',1))) ) In facet plot it doesn't go to top of the chart. | Try this: # Create the line chart (ggplot(data, aes(x='date', y='value')) + geom_line() + labs(title='Line Chart with Dates on X-Axis', x='Date', y='Value') + theme_classic() + theme(axis_line_x=element_line(position=('axes', 1))) ) Here's the source code explanation/implementation for position=(axes, 1) for more clarity. Update: For the sake of completeness here's more detail on the facet plot for the updated question (all credit goes to @ViSa for figuring this out in the comments): According to the source code of plotnine for facet_wrap: scales : Whether x or y scales should be allowed (free) to vary according to the data on each of the panel. And the allowed values are: "fixed", "free", "free_x", "free_y" with the default value being "fixed". So, setting scales to free_x or free should resolve the problem shown in the image shared in the question. facet_wrap("parent_category", ncol=1, scales="free_x") Here's the updated image with this change: | 2 | 1 |
79,304,899 | 2024-12-24 | https://stackoverflow.com/questions/79304899/how-to-select-rows-that-display-some-type-of-pattern-in-python | I am looking to extract rows from my dataset based on a pattern like condition. The condition I'm looking for is finding periods in a battery's charging history where it discharged from 100-0% without charging in between. For example, in this dataset below I would be interested in a function that would only return timestamp of 7 to 12 as it contains a full discharge of the battery. timestamp Charge level (%) 1 50 2 55 3 40 4 60 5 80 6 100 7 100 8 85 9 60 10 55 11 40 12 0 13 20 The approach I have tried is to use the loc function in Pandas to look for rows with a charge level of 0% and then backtrack until I reach a row with a charge level of 100%. But I am struggling with the backtracking part in this approach. | [Code updated according to comments] The idea I use is to keep only the rows with 0 and 100 and the final rows of interest will be the ones with 100 followed by 0. [after checking with .diff(1) that the values are monotonically decreasing] I also updated your example to include some more difficult cases like when it start discharging and then start charging before it was fully discharged [or fully discharged]. with pd.option_context('display.max_columns', None): display(df.T) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 time 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 level 20 0 40 60 80 100 100 85 60 40 40 0 100 20 20 55 100 10 100 0 100 20 40 0 20 df["diff1"] = df["level"].diff(1) df.set_index("time", inplace=True) res = df.copy() mask =((res["level"] == 100) | (res["level"] == 0)) res = res.loc[mask] mask1 = (res["level"] == 100) & (res["level"].shift(-1) == 0) mask2 = (res["level"] == 0) & (res["level"].shift(1) == 100) res = res.loc[mask1 | mask2, ["level"]] level time 7 100 12 0 19 100 20 0 21 100 24 0 # Get the start and end of all the segments having levels [100, ..., 0] t_start, t_end = res[res["level"] == 100].index, res[res["level"] == 0].index # Among all the segments keep only the ones with level monotonically decreasing idx = [(t_start[i], t_end[i]) for i in range(len(t_start)) if (df.loc[t_start[i]+1:t_end[i], "diff1"] <= 0).all()] print(idx) # All resulting groups with (start_time, end_time) [(7, 12), (19, 20)] # Example to get the sum of all the levels in result group nΒ°0 (idx[0] ie (7, 12)) gp0 = df.loc[idx[0][0]:idx[0][1], ["level"]] display(gp0) level time 7 100 8 85 9 60 10 40 11 40 12 0 print(f"The sum of levels is {sum(gp0['level'])}.") The sum of levels is 325. | 1 | 1 |
79,302,825 | 2024-12-23 | https://stackoverflow.com/questions/79302825/how-do-you-insert-a-map-reduce-into-a-polars-method-chain | Iβm doing a bunch of filters and other transform applications including a group_by on a polars data frame, the objective being to count the number of html tags in a single column per date per publisher. Here is the code: 120 def contains_html3(mindate, parquet_file = default_file, fieldname = "text"): 121 """ checks if html tags are in field """ 122 123 124 html_tags = [ 125 "<html>", "</html>", "<head>", "</head>", "<title>", "</title>", "<meta>", "</meta>", "<link>", "</link>", "<style>", 126 "<body>", "</body>", "<header>", "</header>", "<footer>", "</footer>", "<nav>", "</nav>", "<main>", "</main>", 127 "<section>", "</section>", "<article>", "</article>", "<aside>", "</aside>", "<h1>", "</h1>", "<h2>", "</h2>", 128 "<h3>", "</h3>", "<h4>", "</h4>", "<h5>", "</h5>", "<h6>", "</h6>", "<p>", "</p>", "<ul>", "</ul>", "<ol>", "</ol>", 129 "<li>", "</li>", "<div>", "</div>", "<span>", "</span>", "<a>", "</a>", "<img>", "</img>", "<table>", "</table>", 130 "<thead>", "</thead>", "<tbody>", "</tbody>", "<tr>", "</tr>", "<td>", "</td>", "<th>", "</th>", "<form>", "</form>", 131 "<input>", "</input>", "<textarea>", "</textarea>", "<button>", "</button>", "<select>", "</select>", "<option>", 132 "<script>", "</script>", "<noscript>", "</noscript>", "<iframe>", "</iframe>", "<canvas>", "</canvas>", "<source>"] 133 134 gg = (pl.scan_parquet(parquet_file) 135 .cast({"date": pl.Date}) 136 .select("publisher", "date", fieldname) 137 .drop_nulls() 138 .group_by("publisher", "date") 139 .agg(pl.col(fieldname).str.contains_any(html_tags).sum().alias(fieldname)) 140 .filter(pl.col(fieldname) > 0) 141 .sort(fieldname, descending = True)).collect() 142 143 return gg Here is example output for fieldname = "text": Out[8]: shape: (22_925, 3) βββββββββββββββββββββββββββββ¬βββββββββββββ¬βββββββ β publisher β date β text β β --- β --- β --- β β str β date β u64 β βββββββββββββββββββββββββββββͺβββββββββββββͺβββββββ‘ β Kronen Zeitung β 2024-11-20 β 183 β β Kronen Zeitung β 2024-10-25 β 180 β β Kronen Zeitung β 2024-11-14 β 174 β β Kronen Zeitung β 2024-11-06 β 172 β β Kronen Zeitung β 2024-10-31 β 171 β β β¦ β β¦ β β¦ β β The Faroe Islands Podcast β 2020-03-31 β 1 β β Sunday Standard β 2024-07-16 β 1 β β Stabroek News β 2024-08-17 β 1 β β CivilNet β 2024-09-01 β 1 β β The Star β 2024-06-23 β 1 β βββββββββββββββββββββββββββββ΄βββββββββββββ΄βββββββ The issue is that instead of just passing a single fieldname = "text" argument, I would like to pass a list (for example ["text", "text1", "text2", ...]). The idea would be to run the bottom three lines in the chain for each element of the list. I could wrap the whole polars method chain in a for loop and then join the resulting data frames, but is there a better way? For example to insert a map, or foreach, or other such construct after the group_by clause, and then have polars add a new column for each field name without using a loop? What's the best way of handling this? EDIT WITH REPRODUCIBLE CODE This will produce a dataframe df and a sample output tc, with all four columns text1 through text4, summed and sorted, but not using polars for the last step. 
#colorscheme orbital dark import polars as pl import datetime as dt from math import sqrt import random random.seed(8472) from functools import reduce html_tags = [ "<html>", "</html>", "<head>", "</head>", "<title>", "</title>", "<meta>", "</meta>", "<link>", "</link>", "<style>", "</style>", "<body>", "</body>", "<header>", "</header>", "<footer>", "</footer>", "<nav>", "</nav>", "<main>", "</main>", "<section>", "</section>", "<article>", "</article>", "<aside>", "</aside>", "<h1>", "</h1>", "<h2>", "</h2>", "<h3>", "</h3>", "<h4>", "</h4>", "<h5>", "</h5>", "<h6>", "</h6>", "<p>", "</p>", "<ul>", "</ul>", "<ol>", "</ol>", "<li>", "</li>", "<div>", "</div>", "<span>", "</span>", "<a>", "</a>", "<img>", "</img>", "<table>", "</table>", "<thead>", "</thead>", "<tbody>", "</tbody>", "<tr>", "</tr>", "<td>", "</td>", "<th>", "</th>", "<form>", "</form>", "<input>", "</input>", "<textarea>", "</textarea>", "<button>", "</button>", "<select>", "</select>", "<option>", "</option>", "<script>", "</script>", "<noscript>", "</noscript>", "<iframe>", "</iframe>", "<canvas>", "</canvas>", "<source>", "</source>"] def makeword(alphaLength): """Make a dummy name if none provided.""" consonants = "bcdfghjklmnpqrstvwxyz" vowels = "aeiou" word = ''.join(random.choice(consonants if i % 2 == 0 else vowels) for i in range(alphaLength)) return word def makepara(nwords): """Make a paragraph of dummy text.""" words = [makeword(random.randint(3, 10)) for _ in range(nwords)] tags = random.choices(html_tags, k=3) parawords = random.choices(tags + words, k=nwords) para = " ".join(parawords) return para def generate_df_with_tags(rows = 100, numdates = 10, num_publishers = 6): publishers = [makeword(5) for _ in range(num_publishers)] datesrange = pl.date_range(start := dt.datetime(2024, 2, 1), end = start + dt.timedelta(days = numdates - 1), eager = True) dates = sorted(random.choices(datesrange, k = rows)) df = pl.DataFrame({ "publisher": random.choices(publishers, k = rows), "date": dates, "text1": [makepara(15) for _ in range(rows)], "text2": [makepara(15) for _ in range(rows)], "text3": [makepara(15) for _ in range(rows)], "text4": [makepara(15) for _ in range(rows)] }) return df def contains_html_so(parquet_file, fieldname = "text"): """ checks if html tags are in field """ gg = (pl.scan_parquet(parquet_file) .select("publisher", "date", fieldname) .drop_nulls() .group_by("publisher", "date") .agg(pl.col(fieldname).str.contains_any(html_tags).sum().alias(fieldname)) .filter(pl.col(fieldname) > 0) .sort(fieldname, descending = True)).collect() return gg if __name__ == "__main__": df = generate_df_with_tags(100) df.write_parquet("/tmp/test.parquet") tc = [contains_html_so("/tmp/test.parquet", fieldname = x) for x in ["text1", "text2", "text3", "text4"]] tcr = (reduce(lambda x, y: x.join(y, how = "full", on = ["publisher", "date"], coalesce = True), tc) .with_columns(( pl.col("text1").fill_null(0) + pl.col("text2").fill_null(0) + pl.col("text3").fill_null(0) + pl.col("text4").fill_null(0)).alias("sum")).sort("sum", descending = True)) print(tcr) Desired output is below, but you'll see that in the bottom of the code I have run a functools.reduce on four dataframes, outside of the polars ecosystem, to join them, and it's basically this reduce that I want to put into the polars method chain somehow. 
[As an aside, my multiple (textX).fill_null(0) are also a bit clumsy but I'll leave that for a separate question] In [59]: %run so_question.py shape: (45, 7) βββββββββββββ¬βββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββ β publisher β date β text1 β text2 β text3 β text4 β sum β β --- β --- β --- β --- β --- β --- β --- β β str β date β u64 β u64 β u64 β u64 β u64 β βββββββββββββͺβββββββββββββͺββββββββͺββββββββͺββββββββͺββββββββͺββββββ‘ β desob β 2024-02-10 β 5 β 5 β 5 β 5 β 20 β β qopir β 2024-02-03 β 5 β 5 β 5 β 4 β 19 β β jerag β 2024-02-04 β 5 β 5 β 5 β 4 β 19 β β jerag β 2024-02-07 β 5 β 4 β 5 β 5 β 19 β β wopav β 2024-02-07 β 4 β 5 β 3 β 5 β 17 β β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β jerag β 2024-02-06 β 1 β null β 1 β 1 β 3 β β desob β 2024-02-05 β 1 β 1 β null β 1 β 3 β β cufeg β 2024-02-04 β 1 β 1 β 1 β null β 3 β β cufeg β 2024-02-05 β 1 β null β 1 β 1 β 3 β β wopav β 2024-02-06 β null β 1 β 1 β 1 β 3 β βββββββββββββ΄βββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββ So basically, tag counts by columns ["text1", "text2", "text3", "text4"], then summed ignoring nulls, and sorted descending on the sum. Join should be on publisher and date, outer (= "full"), and coalescing. | Is it not the same as aggregating the list of col() names at the same time? fieldnames = ["text1", "text2", "text3", "text4"] (df.group_by("publisher", "date") .agg(pl.col(fieldnames).str.contains_any(html_tags).sum()) .with_columns(sum = pl.sum_horizontal(fieldnames)) ) shape: (45, 7) βββββββββββββ¬βββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬ββββββ β publisher β date β text1 β text2 β text3 β text4 β sum β β --- β --- β --- β --- β --- β --- β --- β β str β date β u32 β u32 β u32 β u32 β u32 β βββββββββββββͺβββββββββββββͺββββββββͺββββββββͺββββββββͺββββββββͺββββββ‘ β desob β 2024-02-01 β 2 β 2 β 2 β 2 β 8 β β xikoy β 2024-02-06 β 1 β 1 β 1 β 1 β 4 β β wopav β 2024-02-03 β 2 β 2 β 2 β 2 β 8 β β jerag β 2024-02-05 β 3 β 2 β 3 β 3 β 11 β β qopir β 2024-02-03 β 5 β 5 β 5 β 4 β 19 β β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β¦ β β xikoy β 2024-02-10 β 2 β 2 β 2 β 2 β 8 β β xikoy β 2024-02-02 β 1 β 1 β 1 β 1 β 4 β β cufeg β 2024-02-10 β 1 β 1 β 1 β 1 β 4 β β jerag β 2024-02-06 β 1 β 0 β 1 β 1 β 3 β β desob β 2024-02-03 β 2 β 2 β 2 β 2 β 8 β βββββββββββββ΄βββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄ββββββ pl.sum_horizontal() replaces the multiple fill_null / addition combination. | 3 | 1 |
79,306,493 | 2024-12-24 | https://stackoverflow.com/questions/79306493/unable-to-install-avatar2-on-windows | I'm trying to install the avatar2 package, but I'm encountering this error: posix_ipc_module.c(37): fatal error C1083: Non è possibile aprire il file inclusione: 'sys/time.h': No such file or directory (i.e. "Cannot open include file: 'sys/time.h'"). I know that a Unix system is needed, but the framework I'm trying to implement works with angr, which can also work on Windows. Is there any way to install this package on Windows? | Although not explicitly stated, this package doesn't seem to support Windows, as can be seen in the installation instructions provided on their GitHub, which use POSIX-style commands. One way you can use this package on Windows is to install Windows Subsystem for Linux (WSL), which provides a compatibility layer that lets you run Linux binary executables natively on Windows. You can build your project using WSL on your Windows machine and install packages in your Linux environment (don't forget to use virtual environments). See How to install Linux on Windows with WSL. | 1 | 1 |
79,305,785 | 2024-12-24 | https://stackoverflow.com/questions/79305785/how-does-numpy-solve-nth-5-and-higher-degree-polynomials | There is a function in NumPy that solves any polynomial with given coefficient (numpy.roots()). So how does NumPy solve it if there is no formula for 5th and higher degree polynomials? I know about Newton's method but I wonder how exactly NumPy applies it. I tried finding information about it in the NumPy documentation and other sources but I did not find anything about that function. | So, the answer is in numpy documentation. But since that was the opportunity for me to play with it, I put in an answer an experiment illustrating how it can be done. Let's say we want to solve zeros of X**5 - 7*X**4 + 15*X**3 - 5*X**2 - 16*X + 12 I obtained this polynomial by doing import sympy X=sympy.symbols("X") sympy.expand((X-1)*(X-2)**2*(X-3)*(X+1)) So, spoiler alert, we expect to find -1,1,2,3 as zeros, with 2 being a double one. For that, we can build a companion matrix import numpy as np M=np.array([[0,0,0,0,-12], [1,0,0,0,+16], [0,1,0,0,+5], [0,0,1,0,-15], [0,0,0,1,+7]]) #array([[ 0, 0, 0, 0, -12], # [ 1, 0, 0, 0, 16], # [ 0, 1, 0, 0, 5], # [ 0, 0, 1, 0, -15], # [ 0, 0, 0, 1, 7]]) The eigenvalues of this matrix are the zeros of the polynomial (since the characteristic polynomial of this matrix is the one we are interested in). We can check this np.linalg.eigvals(M) #array([-1. , 1. , 1.99999992, 2.00000008, 3. ]) So, that shifted the question: how those eigenvalues are computed? Since, at school, the way we learn to compute eigenvalues is... by solving zeros of characteristic polynomial! So, obviously here that can't be the solution, or else it wouldn't help a lot. There are several method. For example, we could use gram-schmidt to compute both Q and R, such as M=QR def QR(M): Q=np.zeros_like(M) R=np.zeros_like(M) for i in range(len(M)): ei=M[:,i].copy() for j in range(i): R[j,i] = Q[:,j]@M[:,i] ei -= R[j,i]/(Q[:,j]@Q[:,j]) * Q[:,j] R[i,i] = np.sqrt(ei@ei) Q[:,i] = ei / R[i,i] return Q,R Then the iterative algorithm to compute the eigenvalues A=M.copy() for i in range(100): # That's an arbitrary number of iterations. In real life # we check for convergence. But we aren't in real life: in real life we # we don't code a QR decomposition in pure python :D Q,R=QR(A) A=R@Q np.diag(A) #array([ 3. , 2.02134245, 1.97865755, -1.01636249, 1.01636249]) Result are the eigenvalues of M, that is the zeros of polynomial. | 4 | 6 |
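As a hedged sketch of the companion-matrix idea from the answer above, the matrix can be built generically from the coefficient list (highest degree first) and cross-checked against np.roots; results may differ by floating-point error, and near-double roots can pick up tiny imaginary parts.

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a polynomial (coeffs highest-degree first) via eigenvalues of
    its companion matrix -- essentially what np.roots does internally."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                      # make the polynomial monic
    n = len(c) - 1
    M = np.zeros((n, n))
    M[1:, :-1] = np.eye(n - 1)        # ones on the sub-diagonal
    M[:, -1] = -c[:0:-1]              # last column: negated low-order coefficients
    return np.linalg.eigvals(M)

coeffs = [1, -7, 15, -5, -16, 12]     # X**5 - 7X**4 + 15X**3 - 5X**2 - 16X + 12
print(np.sort(companion_roots(coeffs)))
print(np.sort(np.roots(coeffs)))      # should agree up to floating-point error
```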
79,305,259 | 2024-12-24 | https://stackoverflow.com/questions/79305259/cant-find-keyword-using-playwright-pyppeteer | When using all kinds of Python automation tools (such as Playwright and Pyppeteer), I can't seem to grab the "Continue with Google" button on https://dropbox.com/login. When I do this through the console like this: const element = [...document.querySelectorAll('span,button,div,a')].find(el => el.textContent.includes('Doorgaan met Google')); // Check if the element is found and click it if (element) { element.click(); console.log('Button clicked successfully!'); } else { console.error('Button not found!'); } It finds and clicks the button in Chrome, but not in Firefox. In the programs I've written using Playwright and Pyppeteer, I couldn't find the button either (although sometimes literally just evaluating the same JS code). Anyone know what this is caused by? | Apparently the structure generated for FireFox is different from the structure generated for other browsers. This worked on FireFox for me: document.querySelector(".L5Fo6c-bF1uUb").click() In FireFox the button has this structure: <div id="some_id_here" class="L5Fo6c-bF1uUb" tabindex="0"></div> and no inner text. This is why your script did not find it in FireFox. | 3 | 2 |
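A hedged Playwright-for-Python sketch of the same fix: the CSS class from the answer is auto-generated and may change, so a text-based locator is tried first and the class used only as a fallback. The selectors are assumptions about the current Dropbox page, not guaranteed to stay stable.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.firefox.launch(headless=False)
    page = browser.new_page()
    page.goto("https://www.dropbox.com/login")
    # Prefer a text locator; fall back to the auto-generated class seen in Firefox.
    button = page.get_by_text("Doorgaan met Google")
    if button.count() == 0:
        button = page.locator(".L5Fo6c-bF1uUb")
    button.first.click()
    browser.close()
```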
79,303,657 | 2024-12-23 | https://stackoverflow.com/questions/79303657/why-is-byteslst-slower-than-bytearraylst | With lst = [0] * 10**6 I get times like these: 5.4 Β± 0.4 ms bytearray(lst) 5.6 Β± 0.4 ms bytes(bytearray(lst)) 13.1 Β± 0.7 ms bytes(lst) Python: 3.13.0 (main, Nov 9 2024, 10:04:25) [GCC 14.2.1 20240910] namespace(name='cpython', cache_tag='cpython-313', version=sys.version_info(major=3, minor=13, micro=0, releaselevel='final', serial=0), hexversion=51183856, _multiarch='x86_64-linux-gnu') I would've expected bytes(lst) and bytearray(lst) to be equally fast or bytearray(lst) to be the slower one, since it's the more complicated type (has more functionality, as it's mutable). But it's the other way around. And even the detour bytes(bytearray(lst)) is much faster than bytes(lst)! Why is bytes(lst) so slow? Benchmark script (Attempt This Online!): from timeit import timeit from statistics import mean, stdev import random import sys setup = 'lst = [0] * 10**6' codes = [ 'bytes(lst)', 'bytearray(lst)', 'bytes(bytearray(lst))' ] times = {c: [] for c in codes} def stats(c): ts = [t * 1e3 for t in sorted(times[c])[:5]] return f'{mean(ts):5.1f} Β± {stdev(ts):3.1f} ms ' for _ in range(25): random.shuffle(codes) for c in codes: t = timeit(c, setup, number=10) / 10 times[c].append(t) for c in sorted(codes, key=stats): print(stats(c), c) print('\nPython:') print(sys.version) print(sys.implementation) Inspired by an answer that also found bytearray to be faster. | As pointed out by @Homer512 in the comments, bytearray and bytes are implemented quite differently in CPython. While bytearray was given a fast path for creation from list and tuple with GitHub issue #91149, bytes did not receive the same optimization. The said fast path for bytearray takes advantage of the PySequence_Fast_ITEMS, which returns an array of objects, when given a list or a tuple, so that all items can be obtained by simply iterating over the array. The creation of bytes from a list, on the other hand, has to make a PyList_GET_ITEM call to obtain each item in the list with each index, hence the comparatively low performance. I've refactored the code for bytes creation from list and tuple in my PR #128214, which, with a benchmark using: python -m timeit -n 100 -s 'a = [40] * 100000' 'bytes(a)' shows a ~81% increase in performance: 100 loops, best of 5: 3.43 msec per loop # current 100 loops, best of 5: 1.89 msec per loop # PR #128214 and is indeed faster than the same benchmark constructing bytearray instead: 100 loops, best of 5: 2.14 msec per loop | 7 | 7 |
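A quick reproduction sketch of the benchmark discussed above, using timeit directly; absolute numbers will vary by machine and CPython version (the PR mentioned in the answer narrows the bytes/bytearray gap).

```python
from timeit import timeit

lst = [0] * 10**6
for stmt in ("bytearray(lst)", "bytes(bytearray(lst))", "bytes(lst)"):
    t = timeit(stmt, globals=globals(), number=10) / 10
    print(f"{stmt:25s} {t * 1e3:6.1f} ms")
```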
79,300,874 | 2024-12-22 | https://stackoverflow.com/questions/79300874/datatables-instance-not-synced-with-dom-input-checkbox-status | I have a table my_tbl initialized with datatables2.1.8.js jQuery library: /* using python django template because table template is not in the same html file */ {% include 'my_tbl_template.html' %} let my_tbl=$('#my_tbl').DataTable({...}), each cell containing <input type="checkbox" value="some_value"> whether initially checked or unchecked; The problem exactly is that my_tbl is not synced with the html dom after changing manually the checkboxes status! (check or uncheck); e.g. when I have two checkboxes (2 cells) initially both checked, when user unchecks one of them printCheckboxStatusDom() prints 1 //(expected number of checked checkboxes) but printCheckboxStatusTbl() prints 2 //the initial checked checkboxes $(document).ready(function(){ $('input[type=checkbox]').change(function() { printCheckboxStatusDom(); // prints correctly how many checkbox is checked at this time printCheckboxStatusTbl(); //always prints initialized values (no change) }); function printCheckboxStatusDom(){ let checkeds = $(document).find('input[type=checkbox]:checked'); console.log('DOM: checked boxes: ' + checkeds.length); } /* Function to print checkboxes status in the my_tbl instance */ function printCheckboxStatusTbl() { my_tbl.rows().every(function (rowIdx, tableLoop, rowLoop) { let rowNode = this.node(); let cellCheckboxes = $(rowNode).find('input[type=checkbox]:checked'); console.log('Tbl: checked boxes: ' + cellCheckboxes.length); } ); } | The problem was with DataTables initialization hierarchy in html page! my functions printCheckboxStatusDom(), printCheckboxStatusTbl() and $(document).ready(function(){...}) was stored in a javascript file my_scripts.js in a local dir, and the current page was like {% include 'my_tbl_template.html' %}; <script>let my_tbl=$('#my_tbl').DataTable({...})</script> <script type="text/javascript" src="my_scripts.js"> so I tried to to change the code to (init datatables at the end): {% include 'my_tbl_template.html' %} <script type="text/javascript" src="my_scripts.js"> <script>let my_tbl=$('#my_tbl').DataTable({...})</script> And the problem solved! | 1 | 1 |
79,304,741 | 2024-12-24 | https://stackoverflow.com/questions/79304741/how-should-i-convert-this-recursive-function-into-iteration | I have a recursive function in the following form: def f(): if cond1: ... f() elif cond2: ... I've "mechanically" converted it to an iterative function like this: def f(): while True: if cond1: ... elif cond2: ... break else: break I believe this conversion is valid, but is there a more elegant way to do it? For example, one that doesn't need multiple break statements? | Since the loop effectively continues only when cond1 is met, you can make it the condition that the while loop runs on, and run the code specific to cond2 after the loop if it's met: def f(): while cond1: ... if cond2: ... | 1 | 3 |
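Since cond1/cond2 are abstract in the question, here is a hypothetical concrete instance of the same transformation (the conditions and bodies are invented for illustration): cond1 is "a double space exists", cond2 is "the string is non-empty".

```python
def squeeze_recursive(s):
    if "  " in s:                 # cond1
        return squeeze_recursive(s.replace("  ", " "))
    elif s:                       # cond2
        return s.strip()

def squeeze_iterative(s):
    while "  " in s:              # loop continues only while cond1 holds
        s = s.replace("  ", " ")
    if s:                         # cond2 handled once, after the loop
        return s.strip()

assert squeeze_recursive("a   b  c ") == squeeze_iterative("a   b  c ") == "a b c"
```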
79,304,247 | 2024-12-23 | https://stackoverflow.com/questions/79304247/polars-transform-meta-data-of-expressions | Is it possible in python polars to transform the root_names of expression meta data? E.g. if I have an expression like expr = pl.col("A").dot(pl.col("B")).alias("AdotB") to add suffixes to the root_names, e.g. transforming the expression to pl.col("A_suffix").dot(pl.col("B_suffix")).alias("AdotB_suffix") I know that expr.meta.root_names() gives back a list of the column names, but I could not find a way to transform them. | There is an example in the tests that does query plan node rewriting in Python with callbacks: https://github.com/pola-rs/polars/blob/main/py-polars/tests/unit/lazyframe/cuda/test_node_visitor.py But I can't see any equivalent API for rewriting expressions? Out of interest, there is .serialize() which can dump to JSON. expr.meta.serialize(format="json") # '{"Alias":[{"Agg":{"Sum":{"BinaryExpr":{"left":{"Column":"A"},"op":"Multiply","right":{"Column":"B"}}}}},"AdotB"]}' # ^^^^^ ^^^^^^^^^^ ^^^^^^^^^^ ^^^^^ Technically, you could modify the Alias and Column values, and .deserialize() back into an expression. def suffix_all(expr, suffix): def _add_suffix(obj): if "Column" in obj: obj["Column"] = obj["Column"] + suffix if "Alias" in obj: obj["Alias"][-1] += suffix return obj ast = expr.meta.serialize(format="json") new_ast = json.loads(ast, object_hook=_add_suffix) return pl.Expr.deserialize(json.dumps(new_ast).encode(), format="json") df = pl.DataFrame({"A_suffix": [2, 7, 3], "B_suffix": [10, 7, 1]}) expr = pl.col("A").dot(pl.col("B")).alias("AdotB") df.with_columns(expr.pipe(suffix_all, "_suffix")) shape: (3, 3) ββββββββββββ¬βββββββββββ¬βββββββββββββββ β A_suffix β B_suffix β AdotB_suffix β β --- β --- β --- β β i64 β i64 β i64 β ββββββββββββͺβββββββββββͺβββββββββββββββ‘ β 2 β 10 β 72 β β 7 β 7 β 72 β β 3 β 1 β 72 β ββββββββββββ΄βββββββββββ΄βββββββββββββββ Which does seem to "work" in this case, but the serialize docs do contain a warning: Serialization is not stable across Polars versions And it's probably just not a recommended approach in general. | 2 | 2 |
79,303,980 | 2024-12-23 | https://stackoverflow.com/questions/79303980/can-you-override-the-default-formatter-for-f-strings | Reading through PEP 3101, it discusses how to subclass string.Formatter to define your own formats, but then you use it via myformatter.format(string). Is there a way to just make it work with f-strings? E.g. I'm looking to do something like f"height = {height:.2f}" but I want my own float formatter that handles certain special cases. | As I mentioned in the comments, f-strings are processed at compile time, not runtime. The reason you can't override the default f-string formatter is how Python implements f-strings at a fundamental level. For example, f"height = {height:.2f}" is effectively converted to "height = {}".format(format(height, '.2f')); this conversion happens during compilation, so there is no way to "hook into" or modify the process at runtime. What you can do is use the decimal module for precision control. Reference - https://realpython.com/how-to-python-f-string-format-float/ from decimal import Decimal value = Decimal('1.23456789') print(f"Value: {value:.2f}") Output: Value: 1.23 Decimal implements the __format__ method and handles the standard format specifiers like .2f. Side note: we aren't really "overriding" the default formatter here - it's implementing the existing format specification protocol. As your goal is to handle special cases for float formatting, using Decimal could solve your problem. | 1 | 1 |
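Building on the "format specification protocol" point in that answer: f-strings call the value's own __format__ hook at runtime, so if you control the type being formatted you can customise the output without touching the formatter. A minimal sketch, where the "special case" (integers shown without decimals) is an invented example:

```python
class SpecialFloat(float):
    def __format__(self, spec):
        # assumed special case: render exact integers without a decimal part
        if float(self).is_integer():
            return f"{int(self)}"
        return super().__format__(spec)

print(f"height = {SpecialFloat(2.0):.2f}")    # height = 2
print(f"height = {SpecialFloat(1.237):.2f}")  # height = 1.24
```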
79,295,206 | 2024-12-19 | https://stackoverflow.com/questions/79295206/efficiently-draw-random-samples-without-replacement-from-an-array-in-python | I need to draw random samples without replacement from a 1D NumPy array. However, performance is critical since this operation will be repeated many times. Hereβs the code Iβm currently using: import numpy as np # Example array array = np.array([10, 20, 30, 40, 50]) # Number of samples to draw num_samples = 3 # Draw samples without replacement samples = np.random.choice(array, size=num_samples, replace=False) print("Samples:", samples) While this works for one sample, it requires a loop to generate multiple samples, and I believe there could be a way to optimize or vectorize this operation to improve performance when sampling multiple times. Is there a way to vectorize or otherwise optimize this operation? Would another library (e.g., TensorFlow, PyTorch) provide better performance for this task? Are there specific techniques for bulk sampling that avoid looping in Python? | The code below generates random samples of a list without replacement in a vectorized manner. This solution is particularly useful when the number of simulations is large and the number of samples per simulation is low. import numpy as np def draw_random_samples(len_deck, n_simulations, n_cards): """ Draw random samples from the deck. Parameters ---------- len_deck : int Length of the deck. n_simulations : int How many combinations of cards are generated. (Doubles could occur.) n_cards : int How many cards to draw from the deck per simulation. (All cards are unique.) Returns ------- indices : array-like Random indices of the deck. """ indices = np.random.randint(0, len_deck, (1, n_simulations)) for i in range(1, n_cards): new_indices = np.random.randint(0, len_deck-i, n_simulations) new_indices += np.sum(new_indices >= indices - np.arange(i)[:,None], axis=0) indices = np.vstack((indices, new_indices)) indices = np.sort(indices, axis=0) return indices.T | 4 | 0 |
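An alternative vectorized approach (not part of the accepted answer): draw one random key per card per simulation and keep the indices of the k smallest keys; each row is then a uniform sample without replacement. Memory is O(n_simulations × len_deck), so it suits moderate deck sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_random_samples_argsort(len_deck, n_simulations, n_cards):
    keys = rng.random((n_simulations, len_deck))
    # indices of the n_cards smallest keys per row (order within a row is arbitrary)
    return np.argpartition(keys, n_cards, axis=1)[:, :n_cards]

idx = draw_random_samples_argsort(52, 100_000, 5)
print(idx.shape)                                           # (100000, 5)
print(all(len(set(row)) == 5 for row in idx[:100]))        # uniqueness sanity check
```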
79,301,108 | 2024-12-22 | https://stackoverflow.com/questions/79301108/googles-gemini-on-local-audio-files | Google has a page describing how to use one of their Gemini-1.5 models to transcribe audio. They include a sample script (see below). The script grabs the audio file from Google Storage via the Part.from_uri() command. I would, instead, like to use a local file. Setting the URI to "file:///..." does not work. How can I do that? Google's (working, cloud-based) code is here: import vertexai from vertexai.generative_models import GenerativeModel, GenerationConfig, Part # TODO(developer): Update and un-comment below line # PROJECT_ID = "your-project-id" vertexai.init(project=PROJECT_ID, location="us-central1") model = GenerativeModel("gemini-1.5-flash-002") prompt = """ Can you transcribe this interview, in the format of timecode, speaker, caption. Use speaker A, speaker B, etc. to identify speakers. """ audio_file_uri = "gs://cloud-samples-data/generative-ai/audio/pixel.mp3" audio_file = Part.from_uri(audio_file_uri, mime_type="audio/mpeg") contents = [audio_file, prompt] response = model.generate_content(contents, generation_config=GenerationConfig(audio_timestamp=True)) print(response.text) # Example response: # [00:00:00] Speaker A: Your devices are getting better over time... # [00:00:16] Speaker B: Welcome to the Made by Google podcast, ... # [00:01:00] Speaker A: So many features. I am a singer. ... # [00:01:33] Speaker B: Amazing. DeCarlos, same question to you, ... | Although one might expect the following base-64 encoding and decoding to cancel, the following code appears to work. (The code is a slightly modified version from this page.) ... encoded_audio = base64.b64encode(open(audio_path, "rb").read()).decode("utf-8") mime_type = "audio/mpeg" audio_content = Part.from_data( data=base64.b64decode(encoded_audio), mime_type=mime_type ) contents = [audio_content, prompt] ... | 1 | 1 |
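A hedged simplification of the snippet in that answer: since Part.from_data takes raw bytes, the base64 encode/decode round-trip should be unnecessary. This is an untested assumption based on the same vertexai API; audio_path and prompt are placeholders carried over from the answer's code.

```python
from vertexai.generative_models import Part

with open(audio_path, "rb") as f:                 # audio_path: your local file
    audio_content = Part.from_data(data=f.read(), mime_type="audio/mpeg")

contents = [audio_content, prompt]                # prompt as defined earlier
```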
79,302,101 | 2024-12-23 | https://stackoverflow.com/questions/79302101/calling-a-rust-function-that-calls-a-python-function-from-python-code | Iβm new to pyo3, and I would like to achieve the following thing. My main code is in Python, but I would like to call a Rust function from this Python code. The trick is, the Rust function should be able to call a Python function provided as an argument. Letβs say that I have a Python function, that cannot be embedded inside of the Rust module: def double(x): return 2*x I want to create a Rust function taking this function as an argument: use pyo3::prelude::*; fn my_rust_function(a: usize, f: ...) -> PyResult<usize> { ... } Finally, this Rust function would be used somewhere in the same Python code as above: import rust_module def double(x): return 2*x def main(): return rust_module.my_rust_function(3, double) Would such a behavior be possible? I read parts of the documentation about how to call Rust from Python and Python from Rust, but never such a double-way use. | In my_rust_function(), take Bound<'_, PyAny> (or any variant, such as Borrowed<'_, '_, PyAny> or Py<PyAny>, but prefer Bound unless you have reasons not to), and use the call0() method on it to call it without arguments, or call1() to call it with positional arguments only, or call() to call it with both positional and keyword arguments. Exporting the Rust function to Python should be done as usual. For example: use pyo3::prelude::*; #[pyfunction] fn my_rust_function(a: usize, f: Bound<'_, PyAny>) -> PyResult<usize> { f.call1((a,))?.extract::<usize>() } #[pymodule] fn rust_module(m: &Bound<'_, PyModule>) -> PyResult<()> { m.add_function(wrap_pyfunction!(my_rust_function, m)?)?; Ok(()) } | 2 | 2 |
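For completeness, a hypothetical Python-side usage sketch, assuming the extension has been built and installed (e.g. with `maturin develop`) under the module name used in the question:

```python
import rust_module

def double(x):
    return 2 * x

print(rust_module.my_rust_function(3, double))            # 6
print(rust_module.my_rust_function(3, lambda x: x * x))   # 9
# If the callable returns something that can't be extracted as usize,
# the PyResult error propagates back to Python as an ordinary exception.
```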
79,302,073 | 2024-12-23 | https://stackoverflow.com/questions/79302073/dealing-with-stopiteration-return-from-a-next-call-in-python | I'm using the below to skip a group of records when a certain condition is met: if (condition met): ... [next(it) for x in range(19)] Where it is an itertuples object created to speed up looping through a large dataframe (yes, the loop is necessary). it = df.itertuples() for row in it: ... What's the idiomatic way of dealing with a StopIteration return from the next call (presumably due to reaching the end of the dataframe)? | So, in general, the solution to a potential exception being raised is to use exception handling. But in the case of next, you can simply use the second argument (which will make it return a default value in case the iterator is exhausted). But you shouldn't use a list comprehension like this anyway, so the most basic solution: if condition: for _ in range(19): next(it, None) But perhaps more elegantly (using the approach from the consume recipe in the itertools docs): import itertools def advance(it, n): next(itertools.islice(it, n, n), None) ... if condition: advance(it, 19) But again, a perfectly idiomatic solution would have been: try: [next(it) for x in range(19)] except StopIteration: pass leaving aside the use of a list comprehension for side-effects | 1 | 2 |
79,294,881 | 2024-12-19 | https://stackoverflow.com/questions/79294881/how-can-i-get-a-certain-number-of-evenly-spaced-points-along-the-octagon-perimet | I want to get the coordinates of a number of points that together form an octagon. For a circle this is done easily as follows: import numpy as np n = 100 x = np.cos(np.linspace(0, 2 * np.pi, n)) y = np.sin(np.linspace(0, 2 * np.pi, n)) coordinates = list(zip(x, y)) By changing n I can increase/decrease the "angularity". Now I want to do the same for an octagon. I know that an octagon has 8 sides and the angle between each side 45 degrees. Let's assume that the perimeter of the octagon is 30.72m. Each side has therefore a length of 3.79m. perimeter = 30.72 n_sides = 8 angle = 45 How can I get n coordinates that represent this octagon? Edit: With the help of the answers of @lastchance and @mozway I am able to generate an octagon. My goal is to get evenly-spaced n coordinates from the perimeter of this octagon. If n = 8 these coordinates correspond to the corners of the octagon, but I'm interested in cases where n > 8 | Let's use some math. Each triangle in the polygon is isosceles. Assuming r the radius of the containing circle and a each side of the polygon we have: n_sides = 8 perimeter = n_sides * a a/2 = sin(pi/n_sides) / r # isosceles = 2 equal right triangles perimeter = n_sides * 2 * sin(pi/n_sides) / r r = perimeter/(2 * n_sides * sin(pi/n_sides)) Thus, the coordinates are: perimeter = 30.72 n_sides = 8 r = perimeter/(2 * n_sides * np.sin(np.pi/n_sides)) x = r * np.cos(np.linspace(0, 2 * np.pi, n_sides, endpoint=False)) y = r * np.sin(np.linspace(0, 2 * np.pi, n_sides, endpoint=False)) Note the endpoint=False in linspace. As a graph: ax = plt.subplot(aspect=1) ax.plot(x, y, marker='o') Output: If you want one extra point to "close" the polygon (the last point being the same as the first point): x = r * np.cos(np.linspace(0, 2 * np.pi, n_sides+1)) y = r * np.sin(np.linspace(0, 2 * np.pi, n_sides+1)) Now, let's use shapely to check that the calculation is correct: from shapely.geometry import Polygon Polygon(zip(x, y)).length # 30.72 Interpolating n points on the octagon/polygon Now that we have a polygon, we can interpolate n points along its perimeter. Let's generate a "closed" polygon (i.e. 
n_sides+1 point in which the last one is equal to the first), create a LineString and interpolate n values along it: from shapely.geometry import LineString perimeter = 30.72 n_sides = 8 n = 12 # compute the points of the polygon r = perimeter/(2 * n_sides * np.sin(np.pi/n_sides)) x = r * np.cos(np.linspace(0, 2 * np.pi, n_sides+1)) y = r * np.sin(np.linspace(0, 2 * np.pi, n_sides+1)) # create a line string # interpolate n points on the perimeter line = LineString(zip(x, y)) coords = np.concatenate(list(line.interpolate(x).coords for x in np.linspace(0, line.length, n, endpoint=False))) # plot ax = plt.subplot(aspect=1) ax.plot(x, y) ax.plot(*coords.T, ls='', marker='o', label='n = 8') Alternatively, performing the interpolation with numpy.interp: perimeter = 30.72 n_sides = 8 n = 12 # compute the points of the polygon r = perimeter/(2 * n_sides * np.sin(np.pi/n_sides)) x = r * np.cos(np.linspace(0, 2 * np.pi, n_sides+1)) y = r * np.sin(np.linspace(0, 2 * np.pi, n_sides+1)) # interpolate the n points along the perimeter ref = np.linspace(0, 1, n_sides+1)*perimeter # distance to origin: ref dist = np.linspace(0, 1, n+1)*perimeter # distance to origin: n points new_x = np.interp(dist, ref, x) # interpolated x new_y = np.interp(dist, ref, y) # interpolated y ax = plt.subplot(aspect=1) ax.plot(x, y) ax.plot(new_x, new_y, ls='', marker='o') Output: | 5 | 4 |
79,300,458 | 2024-12-22 | https://stackoverflow.com/questions/79300458/how-to-prevent-my-python-script-from-closing-immediately-after-running-on-window | I'm just starting out with Python on my Windows computer. I wrote a simple script to print "Hello, World!" like this: print("Hello, World!") When I double-click the .py file to run it, a command prompt window briefly appears and then closes right away. I can't see the "Hello, World!" message. How can I make the window stay open after the script finishes so I can read the output? Thanks in advance! | You can use the input() function to pause the script until Enter is pressed: print("Hello, World!") input("Press Enter to exit...") | 2 | 3 |
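A slightly more defensive variant of the same idea (still assuming the script is launched by double-clicking): wrapping the work in try/finally keeps the window open even when an error occurs, so the traceback can be read too.

```python
import traceback

try:
    print("Hello, World!")
except Exception:
    traceback.print_exc()
finally:
    input("Press Enter to exit...")
```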
79,299,908 | 2024-12-21 | https://stackoverflow.com/questions/79299908/select-rows-if-a-condition-is-met-in-any-two-columns | Please help me filter my dataframe for a condition, that should be fulfilled in any two columns. Imagine a list of students with grades in different sports. I want to filter the list so the new list passed_students shows only those who have scored a 4 or greater in at least two different sports. Students = { "Names": ["Tom", "Rick", "Sally","Sarah"], "Football": [4, 5, 2,1], "Basketball": [4, 2, 4,2], "Volleyball": [6, 1, 6,1], "Foosball": [4, 3, 4,3], } The Code should return this: passed_Students = { "Names": ["Tom", "Sally"], "Football": [4,2], "Basketball": [4,4], "Volleyball": [6,6], "Foosball": [4,4], } I can make it work if one grade above 4 is sufficient: import numpy as np import pandas as pd Students = { "Names": ["Tom", "Rick", "Sally","Sarah"], "Football": [4, 5, 2,1], "Basketball": [4, 2, 4,2], "Volleyball": [6, 1, 6,1], "Foosball": [4, 3, 4,3], } Students = pd.DataFrame(Students) passed_Students= Students[(Students["Football"]>3) |(Students["Basketball"]>3)|(Students["Volleyball"]>3)|(Students["Foosball"]>3) ] print(passed_Students) This returns: Students = { "Names": ["Tom", "Rick", "Sally"], "Football": [4, 5, 2], "Basketball": [4, 2, 4], "Volleyball": [6, 1, 6], "Foosball": [4, 3, 4], } But how can I make it such, that any two grades of 4 or above qualify for passed_students, thereby returning only this? passed_Students = { "Names": ["Tom", "Sally"], "Football": [4,2], "Basketball": [4,4], "Volleyball": [6,6], "Foosball": [4,4], } | drop the "Names", then compare to 4 with ge, sum to count the number of True per row and filter with boolean indexing: passed_Students = Students[Students.drop(columns=['Names']) .ge(4).sum(axis=1).ge(2)] Output: Names Football Basketball Volleyball Foosball 0 Tom 4 4 6 4 2 Sally 2 4 6 4 Intermediates: # Students.drop(columns=['Names']).ge(4) Football Basketball Volleyball Foosball 0 True True True True 1 True False False False 2 False True True True 3 False False False False # Students.drop(columns=['Names']).ge(4).sum(axis=1) 0 4 1 1 2 3 3 0 dtype: int64 # Students.drop(columns=['Names']).ge(4).sum(axis=1).ge(2) 0 True 1 False 2 True 3 False dtype: bool | 1 | 2 |
79,298,368 | 2024-12-20 | https://stackoverflow.com/questions/79298368/inspect-all-probabilities-of-bertopic-model | Say I build a BERTopic model using from bertopic import BERTopic topic_model = BERTopic(n_gram_range=(1, 1), nr_topics=20) topics, probs = topic_model.fit_transform(docs) Inspecting probs gives me just a single value for each item in docs. probs array([0.51914467, 0. , 0. , ..., 1. , 1. , 1. ]) I would like the entire probability vector across all topics (so in this case, where nr_topics=20, I want a vector of 20 probabilities for each item in docs). In other words, if I have N items in docs and K topics, I would like an NxK output. | To get per-topic probabilities for each document you need to add one more argument: topic_model = BERTopic(n_gram_range=(1, 1), nr_topics=20, calculate_probabilities=True) Note: calculate_probabilities=True only works when HDBSCAN is used as the clustering model (BERTopic's default); the default embedding model is all-MiniLM-L6-v2. Official documentation: https://maartengr.github.io/BERTopic/api/bertopic.html The documentation mentions the same point. | 1 | 1 |
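A minimal sketch of the full call with per-topic probabilities, assuming the default HDBSCAN clustering model is kept as noted above; `docs` stands in for the question's list of N documents.

```python
from bertopic import BERTopic

topic_model = BERTopic(n_gram_range=(1, 1), nr_topics=20, calculate_probabilities=True)
topics, probs = topic_model.fit_transform(docs)   # docs: your list of N documents

print(probs.shape)   # (N, K) -- one probability per document per (non-outlier) topic
```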
79,299,276 | 2024-12-21 | https://stackoverflow.com/questions/79299276/geopandas-is-missing-states-geojson-file | All, I got the following error when trying to import states.geojson file as described in this page https://www.twilio.com/en-us/blog/geospatial-analysis-python-geojson-geopandas-html. I think that this file is among the pre-installed files with the geopands. I am using geopandas version 0.14.4 import geopandas as gpd states = gpd.read_file('states.geojson') Here is the error Traceback (most recent call last): File fiona/ogrext.pyx:130 in fiona.ogrext.gdal_open_vector File fiona/ogrext.pyx:134 in fiona.ogrext.gdal_open_vector File fiona/_err.pyx:375 in fiona._err.StackChecker.exc_wrap_pointer CPLE_OpenFailedError: states.geojson: No such file or directory The above exception was the direct cause of the following exception: Traceback (most recent call last): Cell In[1], line 2 states = gpd.read_file('states.geojson') # built in-file File ~/anaconda3/lib/python3.11/site-packages/geopandas/io/file.py:289 in _read_file return _read_file_fiona( File ~/anaconda3/lib/python3.11/site-packages/geopandas/io/file.py:315 in _read_file_fiona with reader(path_or_bytes, **kwargs) as features: File ~/anaconda3/lib/python3.11/site-packages/fiona/env.py:457 in wrapper return f(*args, **kwds) File ~/anaconda3/lib/python3.11/site-packages/fiona/__init__.py:342 in open colxn = Collection( File ~/anaconda3/lib/python3.11/site-packages/fiona/collection.py:226 in __init__ self.session.start(self, **kwargs) File fiona/ogrext.pyx:876 in fiona.ogrext.Session.start File fiona/ogrext.pyx:136 in fiona.ogrext.gdal_open_vector DriverError: Failed to open dataset (flags=68): states.geojson Thanks | The read_file method of geopandas expect a file adress as input as can be seen here in the documentation https://geopandas.org/en/stable/docs/reference/api/geopandas.read_file.html import geopandas as gpd gpd.read_file("./directory/fileName.json") it seems that the geojson that you are seeking is a geojson file for the US states. You could find this here https://github.com/PublicaMundi/MappingAPI/blob/master/data/geojson/us-states.json?short_path=1c1ebe5 download the file and then use the function to read it and store it as a geodataframe | 1 | 2 |
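A hedged alternative to the manual download: geopandas can read straight from a URL, so the GitHub file from the answer can be loaded directly (the raw URL below is derived from the link in the answer and may move).

```python
import geopandas as gpd

url = ("https://raw.githubusercontent.com/PublicaMundi/MappingAPI/"
       "master/data/geojson/us-states.json")
states = gpd.read_file(url)
print(states.head())
```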
79,298,447 | 2024-12-20 | https://stackoverflow.com/questions/79298447/pytorch-scatter-max-for-sparse-tensors | I have the following PyTorch code value_tensor = torch.sparse_coo_tensor(indices=query_indices.t(), values=values, size=(num_lines, img_size, img_size)).to(device=device) value_tensor = value_tensor.to_dense() indices = torch.arange(0, img_size * img_size).repeat(len(lines)).to(device=device) line_tensor_flat = value_tensor.flatten() img, _ = scatter_max(line_tensor_flat, indices, dim=0) img = torch.reshape(img, (img_size, img_size)) Note the line: value_tensor = value_tensor.to_dense(), this is unsurprisingly slow. However, I cannot figure out how to obtain the same results with a sparse tensor. The function in question calls reshape which is not available on sparse tensors. I'm using Scatter Max but opened to using anything that works. | You should be able to directly use scatter_max on the sparse tensor if you keep the indices that you pass to scatter_max also sparse (i.e, only the non-zero ones). Consider this example query_indices = torch.tensor([ [0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2], [0, 1, 0, 0, 1, 0] ]) values = torch.tensor([1, 2, 3, 4, 5, 6]) num_lines = 2 img_size = 3 value_tensor = torch.sparse_coo_tensor( indices=query_indices, values=values, size=(num_lines, img_size, img_size) ) # need to coalesce because for some reason sparse_coo_tensor doesn't guarantee uniqueness of indices value_tensor = value_tensor.coalesce() Then, compute flat_indices as a sparse tensor containing just the non-zero 1-d indices (2-d indices are converted to 1-d indices similar to your arange) indices = value_tensor.indices() values = value_tensor.values() batch_indices = indices[0] # "line" (in your terminology) indices row_indices = indices[1] col_indices = indices[2] flat_indices = row_indices * img_size + col_indices You can use flat_indices to scatter_max flattened_result, _ = scatter_max( values, flat_indices, dim=0, dim_size=img_size * img_size ) per_line_max = flattened_result.reshape(img_size, img_size) indices tensor([[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2], [0, 1, 0, 0, 1, 0]]) values tensor([1, 2, 3, 4, 5, 6]) flat_indices tensor([0, 4, 6, 0, 4, 6]) per_line_max tensor([[4, 0, 0], [0, 5, 0], [6, 0, 0]]) The output I get is the same as what I get from your code. | 2 | 1 |
79,298,424 | 2024-12-20 | https://stackoverflow.com/questions/79298424/generating-binary-arrays-with-alternating-values-based-on-change-indices-in-nump | I have an array a of increasing indexes, e.g. [2 5 9 10], which indicates positions of value change. Assuming the output values are 0 and 1, I want to get array b: [0 0 1 1 1 0 0 0 0 1 0] Is there a NumPy magic to transform a into b? | One way among many others a=np.array([2,5,9,10]) x=np.zeros((a.max()+1,), dtype=np.uint8) x[a]=1 b=x.cumsum()%2 Some explanation (but I guess code, in this rare case, is its own explanation, since it is quite easy, once you see it) x (after x[a]=1) contains 1 at each given position in a. So x.cumsum() contains a value that increments for each of those values: 0 for the 2 first, then 1 for 3 next then 2, then 3, then 4... So x.cumsum()%2 alternates between 1 and 0. Note that I use np.uint8 type because I am cheap, and I can't help thinking "why shoud I pay 32 bits when 8 are enough for a size 11 array". But in reality, since 256 is even, it wouldn't really matter even if a had billions of values. Just x.cumsum() would roll back from 255 to 0 because of overflow. And then x.cumsum()%2 would have the same value. | 8 | 14 |
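Another option giving the same result (not from the accepted answer): compute the run lengths between change points with np.diff and repeat alternating 0/1 values.

```python
import numpy as np

a = np.array([2, 5, 9, 10])
lengths = np.diff(np.r_[0, a, a[-1] + 1])             # [2, 3, 4, 1, 1]
b = np.repeat(np.arange(lengths.size) % 2, lengths)
print(b)                                              # [0 0 1 1 1 0 0 0 0 1 0]
```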
79,298,285 | 2024-12-20 | https://stackoverflow.com/questions/79298285/calculate-percentage-of-flag-grouped-by-another-column | I have the following dataframe: Account ID Subscription type Cancellation flag 123 Basic 1 222 Basic 0 234 Hybrid 1 345 Hybrid 1 Now I would like to calculate the percentage of cancellations, but grouped by the subscription type. I would like to get it in a format so that I can easily create a bar chart out of the percentages grouped by the subscription type. | Use a groupby.mean: out = df.groupby('Subscription type')['Cancellation flag'].mean().mul(100) Output: Subscription type Basic 50.0 Hybrid 100.0 Name: Cancellation flag, dtype: float64 Then plot.bar: out.plot.bar() Or directly with seaborn.barplot: import seaborn as sns sns.barplot(df, x='Subscription type', y='Cancellation flag', estimator='mean', errorbar=None) Output: | 1 | 2 |
79,298,104 | 2024-12-20 | https://stackoverflow.com/questions/79298104/sessionstore-object-has-no-attribute-get-session-cookie-age | I have a Django project, everything goes well, but when I tried to handle the expiration of the session I got confused In a test view print(request.session.get_session_cookie_age()) give me this error SessionStore' object has no attribute get_session_cookie_age According to the documentation it should returns the value of the setting SESSION_COOKIE_AGE Trying debugging with this code : from django.contrib.sessions.models import Session sk = request.session.session_key s = Session.objects.get(pk=sk) pprint(s.get_decoded()) >>> {'_auth_user_backend': 'django.contrib.auth.backends.ModelBackend', '_auth_user_hash': '055c9b751ffcc6f3530337321e98f9e5e4b8a623', '_auth_user_id': '6', '_session_expiry': 3600} Here is the config : Version Django = 2.0 Python 3.6.9 INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.postgres', 'corsheaders', 'rest_framework', 'rest_framework.authtoken', 'drf_yasg', 'wkhtmltopdf', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'corsheaders.middleware.CorsMiddleware', ] Thank's in advance | That functionality is not in Django 2.0, see here. | 1 | 1 |
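A workaround sketch for Django 2.0: the missing method just returns a setting in newer releases, and the per-session expiry (the 3600 seen in the decoded session data) can be read with get_expiry_age(), which does exist in 2.0.

```python
from django.conf import settings

def my_view(request):
    print(settings.SESSION_COOKIE_AGE)        # project-wide default, in seconds
    print(request.session.get_expiry_age())   # this session's expiry (3600 here)
    ...
```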
79,298,219 | 2024-12-20 | https://stackoverflow.com/questions/79298219/plotting-heatmap-with-gridlines-in-matplotlib-misses-gridlines | I am trying to plot a heatmap with gridlines. This is my code (adapted from this post): # Plot a heatmap with gridlines import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from functional import seq arr = np.random.randn(3, 20) plt.tight_layout() ax = plt.subplot(111) ax.imshow(arr, cmap='viridis') xr = ax.get_xlim() yr = ax.get_ylim() ax.set_xticks(np.arange(max(xr))-0.5, minor=True) ax.set_yticks(np.arange(max(yr))-0.5, minor=True) ax.grid(which='minor', snap=False, color='k', linestyle='-', linewidth=1) ax.tick_params(which='major', bottom=False, left=False) ax.tick_params(which='minor', bottom=False, left=False) ax.set_xticklabels([]) ax.set_yticklabels([]) for spine in ax.spines.values(): spine.set_visible(False) plt.show() I get the following plot (cropped to content): There are no vertical grid lines after the 3rd, 8th, and third to last columns. As this script uses minor ticks to plot grid lines, I also thought to print the xticks and make the same plot without hiding the labels: print('xticks:', np.arange(max(xr))-0.5) xticks: [-0.5 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5 10.5 11.5 12.5 13.5 14.5 15.5 16.5 17.5 18.5] which show that all the necessary ticks are there. What could be the issue here? | I have no idea why this happens. Using the major grid instead fixes this issue: import numpy as np import pandas as pd import matplotlib.pyplot as plt arr = np.random.randn(3, 20) plt.tight_layout() ax = plt.subplot(111) ax.imshow(arr, cmap='viridis') xr = ax.get_xlim() yr = ax.get_ylim() ax.set_xticks(np.arange(max(xr))-0.5, minor=False) ax.set_yticks(np.arange(max(yr))-0.5, minor=False) ax.grid(which='major', snap=False, color='k', linestyle='-', linewidth=1) ax.tick_params(which='major', bottom=False, left=False) ax.tick_params(which='minor', bottom=False, left=False) plt.show() | 1 | 1 |
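Since seaborn is already imported in the question, an alternative sketch: sns.heatmap draws the cell gridlines itself via linewidths/linecolor, so no minor/major tick workaround is needed.

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

arr = np.random.randn(3, 20)
ax = sns.heatmap(arr, cmap='viridis', linewidths=1, linecolor='k',
                 cbar=False, xticklabels=False, yticklabels=False, square=True)
plt.show()
```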
79,297,339 | 2024-12-20 | https://stackoverflow.com/questions/79297339/simple-1-d-dispersion-equation-numerical-solution | I am new to coding and trying to solve a simple 1D dispersion equation. The equation and boundary conditions: adC/dx = bd^2C/dx^2 x = hj, C = C0; x = - inf, C =0 The analytical solution is C = C0 * exp (a/b*(x-hj)) Here is the code I have : import numpy as np import matplotlib.pyplot as plt C0 = 3.5 # Concentration at injection point h_j = 0.7 # Injection point L = -100 # Location of closed boundary alpha = 2.597 beta = 1.7 N=10000 # number of segments x=np.linspace(L,h_j,N+1) # grid points # determine dx dx = x[1]-x[0] # diagonal elements a1 = -(alpha*dx)/(2*beta) a2 = 2 a3 = alpha*dx/(2*beta)-1 # construct A A = np.zeros((N+2, N+2)) A[0,0] = 1 A[N+1,N+1] = 1 for i in np.arange(1,N+1): A[i,i-1:i+2] = [a1,a2,a3] # construct B B = np.zeros((N+2)) B[0] = 10^(-10) B[N+1] = C0 xs=np.linspace(0,h_j,100) Exact = C0*np.exp(alpha/beta*(xs-h_j)) #solve linear system Au=B u = np.linalg.solve(A,B) fig = plt.figure() ax = fig.add_subplot(111) res = ax.imshow(A, cmap = "jet", interpolation = 'nearest') cb = fig.colorbar(res) # drop the frist point at x_0, as it is outside the domain fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, u[1:N+2], label= "numerical solution") ax.plot(xs,Exact, label= 'Analytical solution') ax.legend() ax.set_xlim(0,h_j) ax.set_xlabel("z") ax.set_ylabel("C(z)") plt.show() But the solved numerical solution is not the same as the analytical one. I cannot find the mistake in the codes or maybe in my math. | You have a number of errors. Your matrix coefficients are wrong - try discretising those first and second derivatives again carefully. Your boundary values are fixed (Dirichlet BC). So you only have N-1 varying nodes. So your matrix should be (N-1)x(N-1) not (N+1)x(N+1). The boundary conditions are effectively transferred to be the only non-zero source terms (RHS). Your leftmost boundary condition might as well be 0; then you don't have to worry about how you incorrectly wrote 1e-10. Everything is forced from the right boundary - as it would be physically. Finally, you are using "central differencing" for the advection term (the first derivative on the LHS). This is dubious physically (information goes in the direction of flow) and may, for coarse grids, lead to unphysical numerical wiggles. In the long run, consider "upwind"/one-sided differencing. In the present case, your N is large enough, and dx small enough, not to cause a problem. However, if you, say, multiply alpha by 1000 you will get an ... interesting ... result. 
import numpy as np import matplotlib.pyplot as plt C0 = 3.5 # Concentration at injection point h_j = 0.7 # Injection point L = -100 # Location of closed boundary alpha = 2.597 beta = 1.7 N=5000 # number of segments x=np.linspace(L,h_j,N+1) # grid points # determine dx dx = x[1]-x[0] # diagonal elements a1 = -alpha*dx/(2*beta) - 1 a2 = 2 a3 = alpha*dx/(2*beta) - 1 # construct A A = np.zeros( (N-1,N-1) ) # discretised derivative matrix B = np.zeros( (N-1) ) # RHS (source term) u = np.zeros( (N+1) ) # solution vector (NOTE: includes boundary nodes) # Interior grid nodes for i in range( 1, N-2 ): A[i,i-1] = a1 A[i,i ] = a2 A[i,i+1] = a3 # Boundary nodes A[0 ,0 ] = a2; A[0 ,1 ] = a3; B[0 ] = -a1 * 0 ; u[0] = 0 # left boundary A[N-2,N-3] = a1; A[N-2,N-2] = a2; B[N-2] = -a3 * C0; u[N] = C0 # right boundary # Solve linear system Au=B (NOTE: interior nodes only; boundary values are Dirichlet BC) u[1:N] = np.linalg.solve(A,B) # Exact solution xs=np.linspace(0,h_j,100) Exact = C0*np.exp(alpha/beta*(xs-h_j)) fig = plt.figure() ax = fig.add_subplot(111) res = ax.imshow(A, cmap = "jet", interpolation = 'nearest') cb = fig.colorbar(res) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x, u , 'bo', label= 'numerical solution' ) ax.plot(xs,Exact, 'r-', label= 'Analytical solution' ) ax.legend() ax.set_xlim(0,h_j) ax.set_xlabel("z") ax.set_ylabel("C(z)") plt.show() | 1 | 0 |
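An optional refinement of the answer above (my addition, not the answer's code): the coefficient matrix is tridiagonal, so a sparse solve avoids building the dense (N-1)×(N-1) array; a1, a2, a3 are the same diagonal coefficients and the boundary handling mirrors the answer.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_dispersion_sparse(a1, a2, a3, N, C0):
    # interior nodes only; Dirichlet boundaries enter through the RHS
    A = diags([a1, a2, a3], offsets=[-1, 0, 1], shape=(N - 1, N - 1), format='csc')
    B = np.zeros(N - 1)
    B[0] = -a1 * 0.0    # left boundary value (0)
    B[-1] = -a3 * C0    # right boundary value (C0)
    return spsolve(A, B)
```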
79,297,758 | 2024-12-20 | https://stackoverflow.com/questions/79297758/how-should-i-parse-times-in-the-japanese-30-hour-format-for-data-analysis | I'm considering a data analysis project involving information on Japanese TV broadcasts. The relevant data will include broadcast times, and some of those will be for programs that aired late at night. Late-night Japanese TV schedules follow a non-standard time format called the 30-hour system (brief English explanation here). Most times are given in normal Japan Standard Time, formatted as %H:%M. Times from midnight to 6 AM, however, are treated as an extension of the previous day and numbered accordingly, under the logic that that's how people staying up late experience them. For example, Macross Frontier was broadcast in Kansai at 1:25 AM, but it was written as 25:25. I want to use this data in a Pandas or Polars DataFrame. Theoretically, it could be left as a string, but it'd be more useful to convert it to a standard format for datetimes -- either Python's built-in type, or the types used in NumPy or Polars. One simple approach could be: from datetime import date, time, datetime from zoneinfo import ZoneInfo def process_30hour(d: date, t: str): h, m = [int(n) for n in t.split(':')] # assumes format 'HH:MM' for t if h > 23: h -= 24 d += 1 return datetime.combine(d, time(h, m), ZoneInfo('Japan')) This could then be applied to a whole DataFrame with DataFrame.apply(). There may be a more performant way, however, especially considering the vectorization features of DataFrames -- both libraries recommend avoiding DataFrame.apply() if there's an alternative. | IIUC, you could use create a datetime with '00:00' as time and add the hours as timedelta: from datetime import date, time, datetime, timedelta from zoneinfo import ZoneInfo def process_30hour(d: date, t: str): h, m = map(int, t.split(':')) # assumes format 'HH:MM' for t return (datetime.combine(d, time(), ZoneInfo('Japan')) + timedelta(hours=h, minutes=m)) process_30hour(date(2024, 12, 20), '25:25') Output: datetime.datetime(2024, 12, 21, 1, 25, tzinfo=zoneinfo.ZoneInfo(key='Japan')) The same logic can be used vectorially with pandas: df = pd.DataFrame({'date': ['2024-12-20 20:25', '2024-12-20 25:25']}) # split string tmp = df['date'].str.split(expand=True) # convert to datetime/timedelta, combine df['datetime'] = pd.to_datetime(tmp[0]) + pd.to_timedelta(tmp[1]+':00') For fun, as a one-liner: df['datetime'] = (df['date'].add(':00') .str.split(expand=True) .astype({0: 'datetime64[ns]', 1: 'timedelta64[ns]'}) .sum(axis=1) ) Output: date datetime 0 2024-12-20 20:25 2024-12-20 20:25:00 1 2024-12-20 25:25 2024-12-21 01:25:00 | 4 | 5 |
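Since the question mentions Polars as an option too, a hedged sketch of the same midnight-plus-duration trick there (column names are assumptions, timezone handling omitted; exact string/duration APIs may differ slightly by Polars version):

```python
import polars as pl

df = pl.DataFrame({"date": ["2024-12-20", "2024-12-20"],
                   "time": ["20:25", "25:25"]})

hm = pl.col("time").str.split(":")
df = df.with_columns(
    (pl.col("date").str.to_datetime("%Y-%m-%d")
     + pl.duration(hours=hm.list.get(0).cast(pl.Int64),
                   minutes=hm.list.get(1).cast(pl.Int64))
     ).alias("datetime")
)
print(df)   # 25:25 on 2024-12-20 becomes 2024-12-21 01:25:00
```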
79,293,528 | 2024-12-19 | https://stackoverflow.com/questions/79293528/how-to-replace-xml-node-value-in-python-without-changing-the-whole-file | Doing my first steps in python I try to parse and update a xml file. The xml is as follows: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?> <!DOCTYPE eu:eu-backbone SYSTEM "../../util/dtd/eu-regional.dtd"[]> <test dtd-version="3.2" xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink"> <mr> <leaf checksum="88ed245997a341a4c7d1e40d614eb14f" > <title>book name</title> </leaf> </mr> </test> I would like to update the value of the checksum.I have written a class with one method: @staticmethod def replace_checksum_in_index_xml(xml_file_path, checksum): logging.debug(f"ReplaceChecksumInIndexXml xml_file_path: {xml_file_path}") try: from xml.etree import ElementTree as et tree = et.parse(xml_file_path) tree.find('.//leaf').set("checksum", checksum) tree.write(xml_file_path) except Exception as e: logging.error(f"Error updating checksum in {xml_file_path}: {e}") I call the method: xml_file_path = "index.xml" checksum = "aaabbb" Hashes.replace_checksum_in_index_xml(xml_file_path, checksum) The checksum is indeed updated. But also the whole xml structure is changed: <test dtd-version="3.2"> <mr> <leaf checksum="aaabbb"> <title>book name</title> </leaf> </mr> </test> How can I update only given node, without changing anything else in given xml file? UPDATE Solution provided by LRD_Clan is better than my original one. But it is still changing a bit the structure of xml. Also when I take more complex example I see again part of xml is removed. More complex example with additional DOCTYPE: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE eu:eu-backbone SYSTEM "../../util/dtd/eu-regional.dtd"[]> <?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?> <test dtd-version="3.2" xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink"> <mr> <leaf checksum="88ed245997a341a4c7d1e40d614eb14f" > <title>book name</title> </leaf> </mr> </test> after running updated script the result is: <?xml version='1.0' encoding='UTF-8'?> <?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?><test xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink" dtd-version="3.2"> <mr> <leaf checksum="aaabbb"> <title>book name</title> </leaf> </mr> </test> I would really like to see only this one xml element being changed and the other part of document to be left intact. I had similiar solution written in powershell which looks like [xml]$xmlContent = Get-Content -Path $xmlFilePath $element = $xmlContent.SelectSingleNode("//leaf") $element.SetAttribute("checksum, "new text") $xmlContent.Save((Resolve-Path "$xmlFilePath").Path) I was hoping I will find something at least same elegant in python. | In addition to LMCβs answer a small modification. 
You can adjust the parser to keep comments and process instructions: (Update: add Doctype info to the xml string manually) from lxml import etree def replace_checksum(infile, new_value): parser = etree.XMLParser(remove_comments=False, remove_pis=False) root = etree.parse(infile, parser) dtd = root.docinfo.doctype + '\n' for elem in root.xpath("//leaf[@checksum]"): elem.set("checksum", new_value) updated_xml = etree.tostring(root, pretty_print=True, xml_declaration=True, encoding="UTF-8").decode("utf-8") # add the doctype manually i = updated_xml.find('?>\n') if len(dtd) > 2: updated_doc = updated_xml[:i + len('?>\n')] + dtd + updated_xml[i + len('?>\n'):] return updated_doc else: return updated_xml if __name__ == "__main__": check_sum = "aaabbb" outfile = replace_checksum("index.xml", check_sum) print(outfile) Output: <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE test SYSTEM "../../util/dtd/eu-regional.dtd"> <?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?> <test xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink" dtd-version="3.2"> <mr> <leaf checksum="aaabbb"> <title>book name</title> </leaf> </mr> </test> As an ALTERNATIVE, parser doc The parser option remove_blank_text only removes empty text nodes. Comments, processing instructions and the doctype are not affected and remain in the parsed document. from lxml import etree def replace_checksum(infile, checksum_new, outfile): parser = etree.XMLParser(remove_blank_text=True) with open(infile, "rb") as f: tree = etree.parse(f, parser) for elem in tree.xpath("//leaf[@checksum]"): elem.set("checksum", checksum_new) with open(outfile, "wb") as f: tree.write(f, pretty_print=True, xml_declaration=True, encoding="UTF-8", doctype=tree.docinfo.doctype) if __name__ == "__main__": infile = "index.xml" outfile = "index_new.xml" check_sum = "aaabbb" replace_checksum(infile, check_sum, outfile) File: <?xml version='1.0' encoding='UTF-8'?> <?xml-stylesheet href="util/style/aaaa-2-0.xsl" type="text/xsl"?> <!DOCTYPE test SYSTEM "../../util/dtd/eu-regional.dtd"> <test xmlns:test="http://www.ich.org/test" xmlns:xlink="http://www.w3c.org/1999/xlink" dtd-version="3.2"> <mr> <leaf checksum="aaabbb"> <title>book name</title> </leaf> </mr> </test> | 1 | 1 |
79,295,258 | 2024-12-19 | https://stackoverflow.com/questions/79295258/ig-lightstreamer-server-always-sends-updates-every-second | I'm trying to connect to IG's lightstreamer server via python to obtain candle chart data. However, I always seem to get item updates every second even though I specify that I want it every minute or every hour. This is my code: from lightstreamer.client import * loggerProvider = ConsoleLoggerProvider(ConsoleLogLevel.WARN) LightstreamerClient.setLoggerProvider(loggerProvider) ls_client = LightstreamerClient("https://demo-apd.marketdatasystems.com", None) ls_client.connectionDetails.setUser("...") ls_client.connectionDetails.setPassword("...") ls_client.connect() subscription = Subscription( mode="MERGE", items=["CHART:CS.D.BITCOIN.CFD.IP:1MINUTE"], fields=["BID_CLOSE"]) subscription.addListener(SubListener()) ls_client.subscribe(subscription) input() ls_client.unsubscribe(subscription) ls_client.disconnect() class SubListener: def onItemUpdate(self, update): print(update.getValue("BID_CLOSE")) The 1MINUTE part of CHART:CS.D.BITCOIN.CFD.IP:1MINUTE doesn't seem to affect anything. I've tried the same item string in IG's streaming companion and I'm able to obtain updates every minute that way, but it won't work in code for some reason. Any ideas? | I believe that the label 1MINUTE in the item name indicates the duration of the time interval for the chart's candlestick aggregation; however, the updates are sent continuously. If you want to receive updates at a specific frequency, you could leverage the setRequestedMaxFrequency option of the Lightstreamer client library. To receive one update per minute, you should use something like the following: subscription.setRequestedMaxFrequency(Decimal(1)/Decimal(60)); | 1 | 1 |
79,296,597 | 2024-12-20 | https://stackoverflow.com/questions/79296597/rolling-window-selection-with-groupby-in-pandas | I have the following pandas dataframe: # Create the DataFrame df = pd.DataFrame({ 'id': [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2], 'date': [1, 2, 3, 4, 5, 6, 7, 8, 5, 6, 7, 8, 9, 10, 11, 12], 'value': [11, 12, 13, 14, 15, 16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 28] }) df id date value 0 1 1 11 1 1 2 12 2 1 3 13 3 1 4 14 4 1 5 15 5 1 6 16 6 1 7 17 7 1 8 18 8 2 5 21 9 2 6 22 10 2 7 23 11 2 8 24 12 2 9 25 13 2 10 26 14 2 11 27 15 2 12 28 I want to query the above dataframe, in a rolling window manner, for both ids. The rolling window should be of size n. So, if n==2, in the 1st iteration I would like to query this: df.query('(id==1 and (date==1 or date==2)) or (id==2 and (date==5 or date==6))') id date value 0 1 1 11 1 1 2 12 8 2 5 21 9 2 6 22 in the 2nd iteration I would like to query this: df.query('(id==1 and (date==2 or date==3)) or (id==2 and (date==6 or date==7))') id date value 1 1 2 12 2 1 3 13 9 2 6 22 10 2 7 23 in the 3rd iteration I would like to query this: df.query('(id==1 and (date==3 or date==4)) or (id==2 and (date==7 or date==8))') id date value 2 1 3 13 3 1 4 14 10 2 7 23 11 2 8 24 etc. How could I do that in pandas ? My data has around 500 ids | The exact expected logic is not fully clear, but assuming you want to loop over the groups/rolls, you could combine groupby.nth with sliding_window_view. By reusing the DataFrameGroupBy object, will only need to compute the groups once: import numpy as np from numpy.lib.stride_tricks import sliding_window_view as swv n = 2 max_size = df['id'].value_counts(sort=False).max() g = df.sort_values(by=['id', 'date']).groupby('id', sort=False) for idx in swv(np.arange(max_size), n): print(f'rows {idx}') print(g.nth(idx)) Output: rows [0 1] id date value 0 1 1 11 1 1 2 12 8 2 5 21 9 2 6 22 rows [1 2] id date value 1 1 2 12 2 1 3 13 9 2 6 22 10 2 7 23 rows [2 3] id date value 2 1 3 13 3 1 4 14 10 2 7 23 11 2 8 24 rows [3 4] id date value 3 1 4 14 4 1 5 15 11 2 8 24 12 2 9 25 rows [4 5] id date value 4 1 5 15 5 1 6 16 12 2 9 25 13 2 10 26 rows [5 6] id date value 5 1 6 16 6 1 7 17 13 2 10 26 14 2 11 27 rows [6 7] id date value 6 1 7 17 7 1 8 18 14 2 11 27 15 2 12 28 Alternatively, and assuming groups with an identical size and sorted by id/date, using shifted indexing: n = 2 ngroups = df['id'].nunique() for idx in swv(np.arange(len(df)).reshape(-1, ngroups, order='F'), n, axis=0): print(f'indices: {idx.ravel()}') print(df.iloc[idx.flat]) Output: indices: [0 1 8 9] id date value 0 1 1 11 1 1 2 12 8 2 5 21 9 2 6 22 indices: [ 1 2 9 10] id date value 1 1 2 12 2 1 3 13 9 2 6 22 10 2 7 23 indices: [ 2 3 10 11] id date value 2 1 3 13 3 1 4 14 10 2 7 23 11 2 8 24 indices: [ 3 4 11 12] id date value 3 1 4 14 4 1 5 15 11 2 8 24 12 2 9 25 indices: [ 4 5 12 13] id date value 4 1 5 15 5 1 6 16 12 2 9 25 13 2 10 26 indices: [ 5 6 13 14] id date value 5 1 6 16 6 1 7 17 13 2 10 26 14 2 11 27 indices: [ 6 7 14 15] id date value 6 1 7 17 7 1 8 18 14 2 11 27 15 2 12 28 | 2 | 2 |
79,293,060 | 2024-12-19 | https://stackoverflow.com/questions/79293060/how-to-set-multiple-elements-conditionally-in-polars-similar-to-pandas | I am trying to set multiple elements in a Polars DataFrame based on a condition, similar to how it is done in Pandas. Here's an example in Pandas: import pandas as pd df = pd.DataFrame(dict( A=[1, 2, 3, 4, 5], B=[0, 5, 9, 2, 10], )) df.loc[df['A'] < df['B'], 'A'] = [100, 210, 320] print(df) This updates column A where A < B with [100, 210, 320]. In Polars, I know that updating a DataFrame in place is not possible, and it is fine to return a new DataFrame with the updated elements. I have tried the following methods: Attempt 1: Using Series.scatter with map_batches import polars as pl df = pl.DataFrame(dict( A=[1, 2, 3, 4, 5], B=[0, 5, 9, 2, 10], )) def set_elements(cols): a, b = cols return a.scatter((a < b).arg_true(), [100, 210, 320]) df = df.with_columns( pl.map_batches(['A', 'B'], set_elements) ) Attempt 2: Creating an update DataFrame and using update() df = df.with_row_index() df_update = df.filter(pl.col('A') < pl.col('B')).select( 'index', pl.Series('A', [100, 210, 320]) ) df = df.update(df_update, on='index').drop('index') Both approaches work, but they feel cumbersome compared to the straightforward Pandas syntax. Question: Is there a simpler or more idiomatic way in Polars to set multiple elements conditionally in a column, similar to the Pandas loc syntax? | updated You can work around pl.Series.scatter() using pl.Expr.arg_true() or pl.arg_where(), although it would require accessing the column as a pl.Series: df.with_columns( df.get_column("A").scatter( # or df["A"].scatter df.select((pl.col.A < pl.col.B).arg_true()), # or df.select(pl.arg_where(pl.col.A < pl.col.B)), [100, 210, 320] ) ) shape: (5, 2) ┌─────┬─────┐ │ A ┆ B │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 0 │ │ 100 ┆ 5 │ │ 210 ┆ 9 │ │ 4 ┆ 2 │ │ 320 ┆ 10 │ └─────┴─────┘ Or, using @Hericks answer you can update it in place: df[df.select((pl.col.A < pl.col.B).arg_true()), "A"] = [100, 210, 320] # or df[df.select(pl.arg_where(pl.col.A < pl.col.B)), "A"] = [100, 210, 320] shape: (5, 2) ┌─────┬─────┐ │ A ┆ B │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 0 │ │ 100 ┆ 5 │ │ 210 ┆ 9 │ │ 4 ┆ 2 │ │ 320 ┆ 10 │ └─────┴─────┘ original I'd say the way @jqurious did it in this answer is probably as short as you can get. idxs = pl.when(pl.col.A < pl.col.B).then(1).cum_sum() - 1 new_values = pl.lit(pl.Series([100, 210, 320])) df.with_columns(A = pl.coalesce(new_values.get(idxs), 'A')) shape: (5, 2) ┌─────┬─────┐ │ A ┆ B │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 0 │ │ 100 ┆ 5 │ │ 210 ┆ 9 │ │ 4 ┆ 2 │ │ 320 ┆ 10 │ └─────┴─────┘ | 1 | 1 |
79,295,605 | 2024-12-19 | https://stackoverflow.com/questions/79295605/how-to-generate-an-integer-that-repeats-like-100100100-1-in-binary-with-time-com | I am making a function that takes length:int and distance:int as inputs, and outputs the largest integer that satisfies the following properties in its binary representation: It starts with 1, ends with 1 There are distance - 1 zeros between consecutive 1s Its length is strictly less than length It could be implemented as follows: def repeatdigit(length:int, distance:int) -> int: result = 0 pointer = 1 for _ in range(1, length, distance): result |= pointer pointer <<= distance return result For example: repeatdigit(10,3)==0b1001001, repeatdigit(15,4)==0b1000100010001. I only care about the binary representation because I need it for bit manipulation. In practice, length will be very big and distance ** 2 < length. Assume the input is valid. As its name suggests, it repeats the same pattern over and over. An explicit formula: result = (1 << (length+distance-2 - ((length-2) % distance)))//((1 << distance) - 1) But both algorithms are slow; their time complexity is O(n²), where n = length. Doing the same thing with a list only takes O(n), namely ([1]+[0]*(distance-1)) * ((length-2)//distance) + [1] (Context: I want to make such an integer so that I don't need to store 0 and 1 in a very long list as it is space consuming) How do I make such an integer in O(n)? | Doubling the number of 1-bits instead of adding them just one at a time: def repeatdigit(length:int, distance:int) -> int: def f(want): if want == 1: return 1 have = (want + 1) // 2 x = f(have) add = min(have, want - have) return x | (x << (add * distance)) ones = 1 + (length - 2) // distance return f(ones) My ones, want, have and add refer to numbers of 1-bits. The add = min(have, want - have) could be "simplified", as I either add have or have - 1 1-bits. It comes from this earlier, slightly less optimal version: def repeatdigit(length:int, distance:int) -> int: result = 1 want = 1 + (length-2) // distance have = 1 while have < want: add = min(have, want - have) result |= result << (add * distance) have += add return result | 3 | 0 |
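A quick way to sanity-check the doubling version against the original loop from the question on small inputs; repeatdigit_naive is a hypothetical name for the question's reference implementation, and the answer's doubling repeatdigit is assumed to be already defined:

```python
def repeatdigit_naive(length: int, distance: int) -> int:
    # Reference implementation copied from the question (slow for large lengths).
    result = 0
    pointer = 1
    for _ in range(1, length, distance):
        result |= pointer
        pointer <<= distance
    return result

# Compare against the doubling version for a range of small, valid inputs.
for distance in range(1, 8):
    for length in range(2, 60):
        assert repeatdigit(length, distance) == repeatdigit_naive(length, distance), (length, distance)
print("doubling version matches the naive loop on small inputs")
```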
79,294,413 | 2024-12-19 | https://stackoverflow.com/questions/79294413/second-independent-axis-in-altair | I would like to add a second (independent) x-axis to my figure, demonstrating a month for a given week. Here is my snippet: import pandas as pd import numpy as np from datetime import datetime, timedelta weeks = list(range(0, 54)) start_date = datetime(1979, 1, 1) week_dates = [start_date + timedelta(weeks=w) for w in weeks] years = list(range(1979, 2024)) data = { "week": weeks, "date": week_dates, "month": [date.strftime("%b") for date in week_dates], } # Precipitation data.update({str(year): np.random.uniform(0, 100, size=len(weeks)) for year in years}) df = pd.DataFrame(data) df["Mean_Precipitation"] = df[[str(year) for year in years]].mean(axis=1) rain_calendar = alt.Chart(df, title=f"Weekly precipitation volumes").mark_bar().encode( alt.X('week:O', title='Week in calendar year', axis=alt.Axis(labelAlign="right", labelAngle=-45)), alt.Y(f'1979:Q', title='Rain'), alt.Color(f"1979:Q", scale=alt.Scale(scheme='redyellowgreen')), alt.Order(f"1979:Q", sort="ascending")).properties(width=850, height=350) line = alt.Chart(df).mark_line(interpolate='basis', strokeWidth=2, color='#FB0000').encode( alt.X('week:O'), alt.Y('Mean_Precipitation:Q')) month_labels = alt.Chart(df).mark_text( align='center', baseline='top', dy=30 ).encode( alt.X('week:O', title=None), text='month:N') combined_chart = alt.layer(rain_calendar, line, month_labels).resolve_scale(x='shared').configure_axisX( labelAlign="right", labelAngle=-45, titlePadding=10, titleFontSize=14)\ .configure_axisY(titlePadding=10, titleFontSize=14).configure_bar(binSpacing=0) So far, this is the best outcome that I've got: It is a topic related to discussed one there: Second x axis or x labels | This can be done by using the timeUnit parameter of the x encoding along with the axis format and, optionally, labelExpr parameters. Also, Altair works best with long form data and can easily handle the aggregation for you, so I have updated the data generation with this in mind. If your real data is already in short form, you can use pandas melt function. More details are in the documentation import pandas as pd import numpy as np from datetime import datetime, timedelta import altair as alt start_date = datetime(1979, 1, 1) end_date = datetime(2024, 12, 31) df = pd.DataFrame(pd.date_range(start_date, end=end_date, freq="W"), columns=['date']) df['Precipitation'] = np.random.uniform(0, 100, size=len(df)) base = alt.Chart(df, title="Weekly precipitation volumes").encode( x=alt.X( "date:O", timeUnit="week", title="Week in calendar year", axis=alt.Axis( format='W %-W/%b', labelExpr="split(datum.label, '/')" ), ), ) rain_calendar = ( base.mark_bar() .encode( y=alt.Y("Precipitation:Q", title="Rain"), color=alt.Color("Precipitation:Q", scale=alt.Scale(scheme="redyellowgreen"), title="1979"), ) .transform_filter(alt.expr.year(alt.datum['date']) == 1979) .properties(width=850, height=350) ) line = base.mark_line(interpolate="basis", strokeWidth=2, color="#FB0000").encode( y="mean(Precipitation):Q" ) combined_chart = ( alt.layer(rain_calendar, line) .configure_axisX( titlePadding=10, titleFontSize=14 ) .configure_axisY(titlePadding=10, titleFontSize=14) .configure_bar(binSpacing=0) ) combined_chart | 1 | 1 |
79,295,148 | 2024-12-19 | https://stackoverflow.com/questions/79295148/how-do-i-type-hint-a-frame-object-in-python | I'm type hinting a large existing Python codebase, and in one part it sets a signal handler using signal.signal. The signal handler is a custom function defined in the codebase, so I need to type hint it myself. However, while I can guess based on the description of signal.signal that the first parameter is an integer, the second one is just described as "None or a frame object" in the documentation. I have no idea what a frame object is, but thankfully the aforementioned documentation links to a section called "Frame objects" and the inspect module documentation. However, after looking through both of these, I still don't really find any information about how to actually type hint a frame object, just information that might be useful to someone writing code that interacts with frame objects without typing it. So, what type do I use for frame object type hints in Python? | types.FrameType is what you are looking for. | 1 | 2 |
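For the signal-handler use case described in the question, the resulting annotation would look something like the sketch below; the handler name and body are placeholders, and the second parameter is Optional because the docs quoted in the question say it can be "None or a frame object":

```python
import signal
import types
from typing import Optional

def my_handler(signum: int, frame: Optional[types.FrameType]) -> None:
    # On Python 3.10+ this could also be written as: frame: types.FrameType | None
    print(f"received signal {signum}")

signal.signal(signal.SIGINT, my_handler)
```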
79,294,628 | 2024-12-19 | https://stackoverflow.com/questions/79294628/how-to-dynamically-adjust-colors-depending-on-the-dark-mode | The default header color for DataFrames (render.DataGrid(), render.DataTable()) is a very light grey (probably gainsboro). If I switch an app to dark mode using ui.input_dark_mode() and display a DataFrame, the header becomes unreadable. Is there any way (e.g. a dict) to tell Shiny what color to use in place of gainsboro, or for a specific object, when I switch to dark mode? Here's a (not so minimal) MRE: from shiny import App, render, ui import polars as pl app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar(ui.input_dark_mode()), ui.layout_columns( ui.card( ui.card_header("card_header1", style='background:gainsboro'), ui.output_data_frame("card1"), full_screen=True ), col_widths=12 ) ) ) def server(input, output, session): @output @render.data_frame def card1(): return render.DataGrid(pl.DataFrame({'a': ['a','b','c','d']}), filters=True) app = App(app_ui, server) | We can supply an id to the input_dark_mode(), say id="mode". And then we can define CSS dynamically depending on input.mode() (which is either "light" or "dark") for the required selectors. Below is an example for the card header. Express from shiny.express import input, render, ui with ui.layout_sidebar(): with ui.sidebar(): ui.input_dark_mode(id="mode") with ui.layout_columns(): with ui.card(): ui.card_header("card_header1", id="card_header1") @render.ui def style(): color = "gainsboro" if input.mode() == "light" else "red" css = f"#card_header1 {{ background: {color} }}" return ui.tags.style(css) Core from shiny import App, ui, reactive app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar(ui.input_dark_mode(id="mode")), ui.layout_columns( ui.card( ui.card_header("card_header1", id="card_header1"), full_screen=True ), col_widths=12 ) ) ) def server(input, output, session): @reactive.effect @reactive.event(input.mode) def _(): ui.remove_ui("#header_color") # Remove any previously added style tag color = "gainsboro" if input.mode() == "light" else "red" css = f"#card_header1 {{ background: {color} }}" # Add the CSS to the head of the document if css: style = ui.tags.style(css, id="#header_color") ui.insert_ui(style, selector="head") app = App(app_ui, server) | 1 | 1 |
79,294,574 | 2024-12-19 | https://stackoverflow.com/questions/79294574/in-pandas-how-to-reference-and-use-a-value-from-a-dictionary-based-on-column-an | I have data about how many times people are sick in certain locations (location A and B) at certain times (index of dates). I need to divide each value by the population in that location (column) AND at that time (index), which references a separate dictionary. Example dataframe: import pandas as pd data = [{'A': 1, 'B': 3}, {'A': 2, 'B': 20}, {'A': "Unk", 'B': 50}] df = pd.DataFrame(data, index=[pd.to_datetime("2019-12-31") , pd.to_datetime("2020-12-30") , pd.to_datetime("2020-12-31")]) Out: A B 2019-12-31 1 3 2020-12-30 2 20 2021-12-31 Unk 50 Population dictionary (location_year): dic = {"A_2019": 100, "B_2019": 200, "A_2020": 120, "B_2020": 150} While it's not necessary to have the output in the same df, the output I'm trying to achieve would be: A B A1 B1 2019-12-31 1 3 0.01 0.015 2020-12-30 2 20 0.017 0.133 2021-12-31 Unk 50 nan 0.333 I've tried lots of different approaches, but almost always get an unhashable type error. for col in df.columns: df[col + "1"] = df[col]/dic[col + "_" + df.index.strftime("%Y")] Out: TypeError: unhashable type: 'Index' I guess I don't understand how pandas is passing the df.index value to the dictionary(?). Can this be fixed, or is another approach necessary? | You can create a Series from your dictionary, then unstack to a DataFrame, reindex/set_axis, perform your operation and join with add_suffix: def split(k): x, y = k.split('_') return (int(y), x) # ensure using NaNs for missing values, not strings df = df.replace('Unk', pd.NA).convert_dtypes() # reshape to match the original DataFrame tmp = (pd.Series({split(k): v for k, v in dic.items()}) .unstack() .reindex(df.index.year) # match years in df.index .set_axis(df.index) # restore full dates ) # divide, add_suffix, join out = df.join(df.div(tmp).add_suffix('1')) # or # out = df.join(tmp.rdiv(df), rsuffix='1') Output: A B A1 B1 2019-12-31 1 3 0.01 0.015 2020-12-30 2 20 0.016667 0.133333 2020-12-31 <NA> 50 <NA> 0.333333 Intermediate tmp: A B 2019-12-31 100 200 2020-12-30 120 150 2020-12-31 120 150 Variant Here we instead create a tmp with only the years, and use an intermediate rename step to perform the alignment: def split(k): x, y = k.split('_') return (int(y), x) df = df.replace('Unk', pd.NA).convert_dtypes() tmp = (pd.Series({split(k): v for k, v in dic.items()}) .unstack() ) out = df.join(df.rename(lambda x: x.year).div(tmp) .add_suffix('1').set_axis(df.index) ) Intermediate tmp: A B 2019 100 200 2020 120 150 | 2 | 3 |