Dataset columns:
question_id: int64 (59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
78,074,479
2024-2-28
https://stackoverflow.com/questions/78074479/add-row-of-column-totals-in-polars
I'm trying to move away from pandas to polars, I have a use case where I need to add the column totals as a new row to the polars lazy frame. I've found this answer and it would perfectly solve my problem in pandas, but I haven't found any way or documentation on how to translate this code to polars.
To create a new row with the total of each column, you can first create a row of the column totals using an aggregation and then concatenate it to the dataframe using pl.concat. pl.concat([df, df.select(pl.all().sum())]) Previous answer. Disclaimer: my original answer was based on the answer to the linked pandas question. It actually creates a new column with the total of each row. To create a new column total that is given by the horizontal sum of all other columns, use pl.sum_horizontal as follows. df.with_columns( pl.sum_horizontal(pl.all()).alias("total") ) If you don't want to sum all columns, give the corresponding column names directly to pl.sum_horizontal, e.g. pl.sum_horizontal("A", "B", "C").
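A minimal runnable sketch of the accepted pl.concat pattern applied to a LazyFrame, since the question mentions lazy frames; the frame and column names here are hypothetical, not taken from the question:
import polars as pl
lf = pl.LazyFrame({"A": [1, 2, 3], "B": [10, 20, 30]})
# Append a row of column totals by concatenating the frame with its column sums.
with_totals = pl.concat([lf, lf.select(pl.all().sum())])
print(with_totals.collect())  # 4 rows; the appended row is A=6, B=60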
4
5
78,084,763
2024-2-29
https://stackoverflow.com/questions/78084763/collect-common-groups-on-non-index-column-across-two-dataframes
Here are two dataframes grouped how I want them: last5s = pd.Timestamp.now().replace(microsecond=0) - pd.Timedelta('5s') dates = pd.date_range(last5s, periods = 5, freq='s') N=10 data1 = np.random.randint(0,10,N) data2 = np.random.randint(0,10,N) df1 = pd.DataFrame({'timestamp': np.random.choice(dates, size=N), 'A': data1}) df2 = pd.DataFrame({'timestamp': np.random.choice(dates, size=N), 'B': data2}) print(df1) print(df2) print() g1 = df1.groupby(pd.Grouper(key='timestamp', freq='1s')) print("g1:") for time, group in g1: print('time:', time) print(group) print() print() g2 = df2.groupby(pd.Grouper(key='timestamp', freq='1s')) print('g2:') for time, group in g2: print('time:', time) print(group) print() Output (e.g.): timestamp A 0 2024-03-01 10:05:26 7 1 2024-03-01 10:05:25 8 2 2024-03-01 10:05:28 1 3 2024-03-01 10:05:24 2 4 2024-03-01 10:05:28 5 5 2024-03-01 10:05:27 4 6 2024-03-01 10:05:24 6 7 2024-03-01 10:05:26 3 8 2024-03-01 10:05:26 8 9 2024-03-01 10:05:28 8 timestamp B 0 2024-03-01 10:05:25 1 1 2024-03-01 10:05:26 6 2 2024-03-01 10:05:25 5 3 2024-03-01 10:05:28 7 4 2024-03-01 10:05:27 7 5 2024-03-01 10:05:28 1 6 2024-03-01 10:05:28 4 7 2024-03-01 10:05:25 0 8 2024-03-01 10:05:24 6 9 2024-03-01 10:05:24 5 g1: time: 2024-03-01 10:05:24 timestamp A 3 2024-03-01 10:05:24 2 6 2024-03-01 10:05:24 6 time: 2024-03-01 10:05:25 timestamp A 1 2024-03-01 10:05:25 8 time: 2024-03-01 10:05:26 timestamp A 0 2024-03-01 10:05:26 7 7 2024-03-01 10:05:26 3 8 2024-03-01 10:05:26 8 time: 2024-03-01 10:05:27 timestamp A 5 2024-03-01 10:05:27 4 time: 2024-03-01 10:05:28 timestamp A 2 2024-03-01 10:05:28 1 4 2024-03-01 10:05:28 5 9 2024-03-01 10:05:28 8 g2: time: 2024-03-01 10:05:24 timestamp B 8 2024-03-01 10:05:24 6 9 2024-03-01 10:05:24 5 time: 2024-03-01 10:05:25 timestamp B 0 2024-03-01 10:05:25 1 2 2024-03-01 10:05:25 5 7 2024-03-01 10:05:25 0 time: 2024-03-01 10:05:26 timestamp B 1 2024-03-01 10:05:26 6 time: 2024-03-01 10:05:27 timestamp B 4 2024-03-01 10:05:27 7 time: 2024-03-01 10:05:28 timestamp B 3 2024-03-01 10:05:28 7 5 2024-03-01 10:05:28 1 6 2024-03-01 10:05:28 4 How do I "join" the groups together such that I can iterate over them together? E.g. I want to be able to do: for time, group1, group2 in somehow_joined(g1,g2): <do stuff with group1 and group2 in this common time group>
You can just do: for t, d1 in g1: d2 = g2.get_group(t) if d2 is None: print("I don't want this") continue print(d1) print('-'*10) print(d2) print('='*30) Output: timestamp A 0 2024-02-29 19:10:14 0 3 2024-02-29 19:10:14 7 7 2024-02-29 19:10:14 1 ---------- timestamp B 1 2024-02-29 19:10:14 0 3 2024-02-29 19:10:14 6 5 2024-02-29 19:10:14 9 6 2024-02-29 19:10:14 4 ============================== timestamp A 2 2024-02-29 19:10:15 2 5 2024-02-29 19:10:15 8 6 2024-02-29 19:10:15 2 9 2024-02-29 19:10:15 6 ---------- timestamp B 8 2024-02-29 19:10:15 9 ============================== timestamp A 1 2024-02-29 19:10:16 3 4 2024-02-29 19:10:16 9 8 2024-02-29 19:10:16 6 ---------- timestamp B 2 2024-02-29 19:10:16 6 4 2024-02-29 19:10:16 6 ==============================
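A hedged variant of this loop that only visits the time groups present in both groupbys: in recent pandas versions get_group() raises KeyError for a missing key rather than returning None, so checking membership first (or catching KeyError) may be safer. This sketch assumes the g1 and g2 groupbys from the question:
# Intersect the group keys of both groupbys, then fetch each pair of groups.
common_times = g1.groups.keys() & g2.groups.keys()
for t in sorted(common_times):
    group1 = g1.get_group(t)
    group2 = g2.get_group(t)
    # <do stuff with group1 and group2 in this common time group>
    print(t, len(group1), len(group2))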
5
3
78,083,235
2024-2-29
https://stackoverflow.com/questions/78083235/add-a-constant-to-an-existing-column
Dataframe: rng = np.random.default_rng(42) df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": rng.random(5), "A": [True, True, False, False, False], } ) Currently, to add a constant to a column I do: df = df.with_columns(pl.col('random') + 500.0) Questions: Why does df = df.with_columns(pl.col('random') += 500.0) throw a SyntaxError? Various AIs tell me that df['random'] = df['random'] + 500 should also work, but it throws the following error instead: TypeError: DataFrame object does not support `Series` assignment by index Use `DataFrame.with_columns`. Why is polars throwing an error? I've been using df['random'] to identify the random column in other parts of my code, and it worked.
AIs tell you to do so because they are not actually intelligent. They try to suggest how it's done in pandas, because of similar keywords like dataframe and python. But it just does not work the same way with polars, by design. Augmented assignment problem: with += too, it's really just a matter of syntax. pl.col is a class (of type <class 'polars.functions.col.ColumnFactory'>), and instantiating that class creates an expression (<Expr ['col("random")'] at 0x701D91A7D850>), but you cannot assign to that, in the same way that you cannot assign like this before a exists: a += 1 Or more precisely, because you cannot do the same within any function call: >>> a = 10 >>> math.pow(a += 10, 2) File "<stdin>", line 1 math.pow(a += 10, 2) ^^ SyntaxError: invalid syntax >>> math.pow(a + 10, 2) 400.0 Interestingly, you can do this: >>> expr = pl.col("random") >>> expr += 500 >>> expr <Expr ['[(col("random")) + (500)]'] at 0x701D91AE4800> >>> df.select(expr) shape: (5, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ random β”‚ β”‚ --- β”‚ β”‚ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 500.773956 β”‚ β”‚ 500.438878 β”‚ β”‚ 500.858598 β”‚ β”‚ 500.697368 β”‚ β”‚ 500.094177 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ So this works: df = df.with_columns(pl.col('random') + 500.0) because it's basically equivalent to the previous example; you just create the expression on the same line as you do the with_columns.
3
6
78,071,328
2024-2-28
https://stackoverflow.com/questions/78071328/get-the-number-of-all-possible-combinations-that-give-a-certain-product
I'm looking for an algorithm that counts the number of all possible combinations that give a certain product. I have a list of perfect squares [1,4,9,16,..,n] and I have two values a, b where a is the number of elements that we can use in the multiplication to get the perfect square, and b is the maximum value of each element that can be used in the multiplication. For example, if a = 3 and b = 6 then for perfect square 36 we can have combinations such as [1,6,6], [2,3,6], [4,3,3] and so on (order matters: [1,6,6] and [6,6,1] are different). NOTE: we can not use the [1,9,4] combination because 9 > b. I've tried to use combinations of all divisors for each perfect square from itertools, and after that I checked each product of a combination and if x1*x2*x3 == 36 I added 1 to my count for perfect square = 36. This algorithm works but it requires a significant amount of time for long multiplications. Can we make it faster than looking at each combination for each perfect square? def get_divisors(n): result = [] for i in range(1, n//2 + 1): if n % i == 0: result.append(i) result.append(n) return result a = 2 b = 3 count = 0 all_squares = [1,4,9] for i in all_squares: divisors = get_divisors(i) for r in divisors: if r > b: divisors.remove(r) for j in (itertools.product(divisors, repeat=a)): if numpy.prod(j) == i: count += 1 print(count)
Here's a mostly straightforward solution using recursive generators. No element larger than b shows up because no element so large is ever considered. Duplicates never show up because, by construction, this yields lists in non-increasing order of elements (i.e., in lexicographically reverse order). I don't know what squares have to do with this. The function couldn't care less whether target is a square, and you didn't appear to make any use of that property in what you wrote. def solve(target, a, b): for largest in range(min(target, b), 1, -1): result = [] rem = target # Try one or more factors of `largest`, followed by # solutions for what remains using factors strictly less # than `largest`. while not rem % largest and len(result) < a: rem //= largest assert rem >= 1 result.append(largest) if rem == 1: yield result + [1] * (a - len(result)) else: for sub in solve(rem, a - len(result), largest - 1): yield result + sub and then, e.g., >>> for x in solve(36, 3, 6): ... print(x) [6, 3, 2] [6, 6, 1] [4, 3, 3] >>> for x in solve(720, 4, 10): ... print(x) [10, 9, 8, 1] [10, 9, 4, 2] [10, 8, 3, 3] [10, 6, 4, 3] [10, 6, 6, 2] [9, 8, 5, 2] [9, 5, 4, 4] [8, 6, 5, 3] [6, 6, 5, 4] If target can grow large, the easiest "optimization" to this would be to precoumpute all its non-unit factors <= b, and restrict the "for largest in ..." loop to look only at those. To just count the number, use the above like so: >>> sum(1 for x in solve(36, 3, 6)) 3 >>> sum(1 for x in solve(720, 4, 10)) 9 EDIT BTW, as things go on, mounds of futile search can be saved by adding this at the start: if b ** a < target: return However, whether that speeds or slows things overall depends on the expected characteristics of the inputs. OPTIMIZING Running the above: >>> sum(1 for x in solve(720000000000, 20, 10000)) 4602398 That took close to 30 seconds on my box. What if we changed the function to just return the count, but not the solutions? def solve(target, a, b): count = 0 for largest in range(min(target, b), 1, -1): nres = 0 rem = target while not rem % largest and nres < a: rem //= largest assert rem >= 1 nres += 1 if rem == 1: count += 1 else: sub = solve(rem, a - nres, largest - 1) count += sub return count Returns the same total, but took closer to 20 seconds. Now what if we memoized the function? Just add two lines before the def; import functools @functools.cache Doesn't change the result, but cuts the time to under half a second. >>> solve.cache_info() CacheInfo(hits=459534, misses=33755, maxsize=None, currsize=33755) So the bulk of the recursive calls were duplicates of calls already resolved, and were satisfied by the hidden cache without executing the body of the function. OTOH, we're burning memory for a hidden dict holding about 32K entries. Season to taste. As a comment noted, the usual alternative to memoizing a recursive function is to do heavier thinking to come up with a "dynamic programming" approach that builds possible results "from the bottom up", typically using explicit lists instead of hidden dicts. I won't do that here, though. It's already thousands of times faster than I have any actual use for ;-) Computing # of permutations from a canonical Here's code for that: from math import factorial, prod from collections import Counter def numperms(canonical): c = Counter(canonical) return (factorial(c.total()) // prod(map(factorial, c.values()))) >>> numperms([1, 6, 6]) 3 >>> numperms("mississippi") 34650 Counting distinct permutations This may be what you really want. 
The code is simpler because it's not being "clever" at all, treating all positions exactly the same (so, e.g., all 3 distinct ways of permuting [6, 6, 1] are considered to be different). You absolutely need to memoize this if you want it to complete a run in your lifetime ;-) import functools @functools.cache def dimple(target, a, b): if target == 1: return 1 if a <= 0: return 0 count = 0 for i in range(1, min(target, b) + 1): if not target % i: count += dimple(target // i, a - 1, b) return count >>> dimple(720000000000, 20, 10000) 1435774778817558060 >>> dimple.cache_info() CacheInfo(hits=409566, misses=8788, maxsize=None, currsize=8788) >>> count = 0 >>> for x in solve(720000000000, 20, 10000): ... count += numperms(x) >>> count 1435774778817558060 The second way uses the original function, and applies the earlier numperms() code to deduce how many distinct permutations each canonical solution corresponds to. It doesn't benefit from caching, though, so is very much slower. The first way took several seconds under CPython, but 1 under PyPy. Of course the universe will end before any approach using itertools to consider all possible arrangements of divisors could compute a result so large. Moving to DP For pedagogical purposes, I think it's worth the effort to show a dynamic programming approach too. The result here is easily the fastest of the bunch, which is typical of a DP approach. It's also typical - alas - that it takes more up-front analysis. This is much like recursive memoization, but "bottom up", starting with the simplest cases and then iteratively building them up to fancier cases. But we're not waiting for runtime recursion to figure out which simpler cases are needed to achieve a fancier result - that's all analyzed in advance. In the dimple() code, b is invariant. What remains can be viewed as entries in a large array, M[i, a], which gives the number of ways to express integer i as a product of a factors (all <= b). There is only one non-zero entry in "row" 0: M[1, 0] = 1. The empty product (1) is always achievable. For row 1, what can we obtain in one step? Well, every divisor of target can be reduced to the row 0 case by dividing it by itself. So row 1 has a non-zero entry for every divisor of target <= b, with value 1 (there's only one way to get it). Row 2? Consider, e.g., M[6, 2]. 6 can be gotten from row 1 via multiplying 1 by 6, or 6 by 1, or 2 by 3, or 3 by 2, so M[6, 2] = M[1, 1] + M[2, 1] + M[3, 1] + M[6, 1] = 4, And so on. With the right preparation, the body of the main loop is very simple. Note that each row i's values depend only on row i-1, so saving the whole array isn't needed. The code only saves the most recent row, and builds the next row from it. In fact, it's often possible to update to the next row "in place", so that memory for only one row is needed. But offhand I didn't see an efficient way to do that in this particular case. In addition, since only divisors of target are achievable, most of a dense M would consist of 0 entries. So, instead, a row is represented as a defaultdict(int), so storage is only needed for the non-zero entries. Most of the code here is in utility routines to precompute the possible divisors. EDIT: simplified the inner loop by precomputing invariants even before the outer loop starts. from itertools import chain, count, islice from collections import Counter, defaultdict from math import isqrt # Return a multiset with n's prime factorization, mapping a prime # to its multiplicity; e.g,, factor(12) is {2: 2, 3: 1}. 
def factor(n): result = Counter() s = isqrt(n) for p in chain([2], count(3, 2)): if p > s: break if not n % p: num = 1 n //= p while not n % p: num += 1 n //= p result[p] = num s = isqrt(n) if n > 1: result[n] = 1 return result # Return a list of all the positive integer divisors of an int. `ms` # is the prime factorization of the int, as returned by factor(). # Nothing is guaranteed about the order of the list. For example, # alldivs(factor(12)) is a permutation of [1, 2, 3, 4, 6, 12]. # There are prod(e + 1 for e in ms.values()) entries in the list. def alldivs(ms): result = [1] for p, e in ms.items(): # NB: advanced trickery here. `p*v` is applied to entries # appended to `result` _whlle_ the extend() is being # executed. This is well defined, but unusual. It saves # layers of otherwise-needed explicit indexing and/or loop # nesting. For example, if `result` starts as [2, 3] and # e=2, this leaves result as [2, 3, p*2, p*3, p*p*2, p*p*3]. result.extend(p * v for v in islice(result, len(result) * e)) return result def dimple_dp(target, a, b): target_ms = factor(target) if max(target_ms) > b: return 0 divs = alldivs(target_ms) smalldivs = sorted(d for d in divs if d <= b) # div2mults maps a divisor `d` to a list of all divisors # of the form `s*d` where `s` in smalldivs. divs = set(divs) div2mults = {} for div in divs: mults = div2mults[div] = [] for s in smalldivs: mult = s * div if mult in divs: mults.append(mult) elif mult > target: break del divs, div, mults, s, mult # row 0 has 1 entry: 1 way to get the empty product row = defaultdict(int) row[1] = 1 # Compute rows 1 through a-1. Since the only entry we want from # row `a` is row[target], we save a bit of time by stopping # here at row a-1 instead. for _ in range(1, a): newrow = defaultdict(int) for div, count in row.items(): for mult in div2mults[div]: newrow[mult] += count row = newrow return sum(row[target // d] for d in smalldivs) # under a second even in CPython >>> dimple_dp(720000000000, 20, 10000) 1435774778817558060 Note that the amount of memory needed is proportional to the number of divisors of target, and is independent of a. When memoizing a recursive function, the cache never forgets anything (unless you implement and manage it yourself "by hand").
4
5
78,077,316
2024-2-28
https://stackoverflow.com/questions/78077316/exception-not-found-python-cv2-py-typed-error-failed-building-wheel-for-ope
I am following the env setup in CenterPose repo. However, opencv-python doesn't get installed. (CenterPose) mona@ada:/data/CenterPose/data$ pip install opencv-python I get this error: -- Installing: /tmp/pip-install-wjrf6kvm/opencv-python_e5e6222fa4024d2d8f3e1d1c3bd5fb1f/_skbuild/linux-x86_64-3.6/cmake-install/share/opencv4/lbpcascades/lbpcascade_profileface.xml -- Installing: /tmp/pip-install-wjrf6kvm/opencv-python_e5e6222fa4024d2d8f3e1d1c3bd5fb1f/_skbuild/linux-x86_64-3.6/cmake-install/share/opencv4/lbpcascades/lbpcascade_silverware.xml Copying files from CMake output creating directory _skbuild/linux-x86_64-3.6/cmake-install/cv2 copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/python-3/cv2.abi3.so -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/cv2.abi3.so copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/__init__.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/__init__.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py2.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py2.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py3.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py3.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/config.py copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config-3.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/config-3.py Traceback (most recent call last): File "/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 349, in <module> main() File "/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 331, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/home/mona/anaconda3/envs/CenterPose/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 249, in build_wheel metadata_directory) File "/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 231, in build_wheel wheel_directory, config_settings) File "/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir self.run_setup() File "/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 268, in run_setup self).run_setup(setup_script=setup_script) File "/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 158, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 537, in <module> main() File "setup.py", line 310, in main cmake_source_dir=cmake_source_dir, File "/tmp/pip-build-env-o9p39xto/overlay/lib/python3.6/site-packages/skbuild/setuptools_wrap.py", line 683, in setup cmake_install_dir, File "setup.py", line 450, in _classify_installed_files_override raise Exception("Not found: '%s'" % relpath_re) Exception: Not found: 'python/cv2/py.typed' ---------------------------------------- ERROR: Failed building wheel for opencv-python Failed to build opencv-python ERROR: Could not build wheels for opencv-python which use PEP 517 and cannot be installed directly To reproduce: CenterPose_ROOT=/path/to/clone/CenterPose git clone https://github.com/NVlabs/CenterPose.git $CenterPose_ROOT conda create -n CenterPose python=3.6 conda activate CenterPose pip install -r requirements.txt conda install -c conda-forge eigenpy a bit sys info: (base) mona@ada:~$ uname -a 
Linux ada 6.5.0-21-generic #21~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 9 13:32:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux (base) mona@ada:~$ lsb_release -a LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch Distributor ID: Ubuntu Description: Ubuntu 22.04.3 LTS Release: 22.04 Codename: jammy
The last OpenCV version to have wheels built for Python 3.6 was v4.6. With that in mind, I was able to get everything installed by explicitly restricting to that version. Here is a standalone YAML that handles everything needed for environment creation (this doesn't require repository cloning): CenterPose_py36.yaml name: CenterPose channels: - conda-forge - nodefaults dependencies: - python=3.6 - pip - pip: - opencv-python==4.6.* - opencv-python-headless==4.6.* - -r https://github.com/NVlabs/CenterPose/raw/6c89d420b33bd01c14c13f509af08bfe3d8b2fe7/requirements.txt I verified this works on a micromamba-docker image, simply using: micromamba create -n CenterPose -f CenterPose_py36.yaml On conda, the command would be: conda env create -n CenterPose -f CenterPose_py36.yaml
4
5
78,071,815
2024-2-28
https://stackoverflow.com/questions/78071815/design-add-and-search-words-data-structure-leetcode-211
I am currently trying to solve the problem Add and Search Words Data Structure on leetcode. The question is as follows: Design a data structure that supports adding new words and finding if a string matches any previously added string. Implement the WordDictionary class: WordDictionary() Initializes the object. void addWord(word) Adds word to the data structure, it can be matched later. bool search(word) Returns true if there is any string in the data structure that matches word or false otherwise. word may contain dots . where dots can be matched with any letter. My Strategy: My strategy involves representing a trie with a hashmap instead of a traditional linked-list-based tree structure, aiming for better performance and lower complexity. By using a hashmap, we can quickly access the next node without traversing through unnecessary nodes, making operations faster especially with large datasets. For example, when inserting words like apple and application into this structure, it's organized as nested hashmaps where each character in a word points to another hashmap representing the next character. The end of a word is marked with a special key-value pair {'end': {}}. This way, we efficiently store and search for words with minimal space and time complexity. My Code: class WordDictionary(object): def __init__(self): self.map = {} def addWord(self, word): """ :type word: str :rtype: None """ current = self.map for i in word: if i in current: current = current[i] else: current[i] = {} current = current[i] current['end'] = {} return def search(self, word): """ :type word: str :rtype: bool """ current = self.map for i in word: if i in current: current = current[i] elif i == '.': current = {key: value for d in current.values() for key, value in d.items()} else: return False if 'end' in current: return True return False The solution seems to be effective for the majority of cases, but I've hit a error with test case 16, where it's not giving the right outcome. The length of test case 16 makes it particularly challenging to pinpoint where the mistake is happening. I'm in need of some guidance to track down and fix this logical error. Would you be able to lend a hand in sorting this out?
One thing searching in a trie shares with binary search is that it keeps shrinking the possibilities between a range; except instead of binary, it's character-by-character, and instead of dividing down the binary middle, it uses the position in the word. (It still performs with log n iid, see Tong2016Smoothed.) However, when it gets to a wildcard (.), one can't add to the possibilities and do them in parallel. In general, we will have holes in the output such that it is not represented in a single range. For example, when searching for c.t in {cat, cel, cut}, cat and cut are included but cel, in the middle, is not. I think I've modified this code to do this by taking multiple paths. class WordDictionary(object): def __init__(self): self.node = None self.map = {} def __repr__(self): if self.node and self.map: return "\"{0}\".{1}".format(self.node, self.map) elif self.node: return "\"{0}\"".format(self.node) elif self.map: return "βˆ….{0}".format(self.map) else: return "<error>" def addWord(self, word: str) -> None: current = self for i in word: if not i in current.map: current.map[i] = WordDictionary() current = current.map[i] current.node = word def match(self, word: str) -> str: if not word: """ in-order sub-trie traversal will enumerate the words that have `word` as a prefix (hence prefix-tree); (not used) """ if self.node: yield self.node return i = word[0] rest = word[1:] if i == '.': for wild in self.map.values(): yield from wild.match(rest) elif i in self.map: yield from self.map[i].match(rest) def search(self, word: str) -> bool: return next(self.match(word), None) != None words = WordDictionary() words.addWord("zulu") words.addWord("november") words.addWord("mike") words.addWord("yankee") words.addWord("cat") words.addWord("cate") words.addWord("cel") words.addWord("cut") words.addWord("cute") print("words: {0}".format(words)) print("first match a:", next(words.match("a"), None)) print("first match mi..:", next(words.match("mi.."), None)) print("search a:", words.search("a")) print("search c.t:", words.search("c.t")) print("search z.lu:", words.search("z.lu")) print("search mi..:", words.search("mi..")) print("search ..:", words.search("..")) print("search ....:", words.search("....")) print("search c.te", words.search("c.te")) The match function is a (recursive) generator for all the words that could be possible using the string with wildcards. The self.node string is a more expressive form of the end sentinel; end could be used, but the way I've written search requires that any words are returned from match. (Sorry for the python3 instead of python, this is not my usual language at all.)
3
1
78,082,450
2024-2-29
https://stackoverflow.com/questions/78082450/removing-duplicate-sub-dataframes-from-a-pandas-dataframe
I have a pandas dataframe, for example df_dupl = pd.DataFrame({ 'EVENT_TIME': ['00:01', '00:01', '00:01', '00:03', '00:03', '00:03', '00:06', '00:06', '00:06', '00:08', '00:08', '00:10', '00:10', '00:11', '00:11', '00:13', '00:13', '00:13'], 'UNIQUE_ID': [123, 123, 123, 125, 125, 125, 123, 123, 123, 127, 127, 123, 123, 123, 123, 123, 123, 123], 'Value1': ['A', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'A', 'B', 'C', 'B', 'A', 'B', 'A'], 'Value2': [0.3, 0.2, 0.2, 0.1, 1.3, 0.2, 0.3, 0.2, 0.2, 0.1, 1.3, 0.3, 0.2, 0.3, 0.2, 0.3, 0.2, 0.2] }) I want to remove the sequences of rows that have the same values as the previous (by EVENT_TIME) rows with the same UNIQUE_ID. For the example the result should look like this: df = pd.DataFrame({ 'EVENT_TIME': ['00:01', '00:01', '00:01', '00:03', '00:03', '00:03', '00:08', '00:08', '00:10', '00:10', '00:11', '00:11', '00:13', '00:13', '00:13'], 'UNIQUE_ID': [123, 123, 123, 125, 125, 125, 127, 127, 123, 123, 123, 123, 123, 123, 123], 'Value1': ['A', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'A', 'B', 'C', 'B', 'A', 'B', 'A'], 'Value2': [0.3, 0.2, 0.2, 0.1, 1.3, 0.2, 0.1, 1.3, 0.3, 0.2, 0.3, 0.2, 0.3, 0.2, 0.2] }). The rows with time 00:06 should be removed, because the previous sub-dataframe with UNIQUE_ID 123 (time 00:01) is identical. On the other hand the rows with time 00:13 should remain - they are also identical to the rows with Time 00:01, but there are other rows with UNIQUE_ID 123 in between. The key thinq is that I want to compare the whole sub-dataframes, not single rows. I can achieve the desired result by using the folllowing function, but it is quite slow. def del_dupl_gr(df): out = [] for x in df['UNIQUE_ID'].unique(): prev_df = pd.DataFrame() for y in df[df['UNIQUE_ID'] == x]['EVENT_TIME'].unique(): test_df = df[(df['UNIQUE_ID'] == x) & (df['EVENT_TIME'] == y)] if not test_df.iloc[:, 2:].reset_index(drop=True).equals(prev_df.iloc[:, 2:].reset_index(drop=True)): out.append(test_df) prev_df = test_df return pd.concat(out).sort_index().reset_index(drop=True) The real dataframe is quite large (over million rows) and this looping takes a lot of time. I am sure there must be proper (or at least faster) way to do this. Results Thanks for all the submitted answers. I compared their speed. In some cases i slightly edited the methods to produce exactly the same results. So in all sort_values methods I added kind='stable' to ensure that the order is preserved, and at the end I added .reset_index(drop=True). Method 1000 rows 10 000 rows 100 000 rows original 556 ms 5.41 s Not tested mozway 1.24 s 10.1 s Not tested Andrej Kesely 696 ms 4.56 s Not tested Quang Hoang 11.3 ms 34.1 ms 318 ms EDIT The accepted answer works fine if there are no NaN values in the dataframe. In my case (though not mentioned in the original question), there are NaN values which I need to be treated as equal. So the comparison part of the code needs to be modified as follows (the other parts remain the same) dup = ((df_dupl.groupby(['UNIQUE_ID',enums])[value_cols].shift().eq(df_dupl[value_cols]) | (df_dupl.groupby(['UNIQUE_ID',enums])[value_cols].shift().isna() & df_dupl[value_cols].isna())).all(axis=1).groupby(groups).transform('all') & sizes.groupby([df_dupl['UNIQUE_ID'],enums]).diff().eq(0)) This makes the method about 50% slower, but it remains by far the fastest of all the solutions.
Another approach is to shift the rows by enumeration, then compare: # the value columns value_cols = df.columns[2:] # groups are identified as `EVENT_TIME` and `UNIQUE_ID` groupby = df_dupl.groupby(['EVENT_TIME','UNIQUE_ID'])['Value1'] # these are the groups groups = groupby.ngroup() # enumeration within the groups enums = groupby.cumcount() # sizes of the groups - populated across the rows sizes = groupby.transform('size') dup = (df_dupl.groupby(['UNIQUE_ID',enums])[value_cols].shift(). # shift by enumeration within `UNIQUE_ID` .eq(df_dupl[value_cols]).all(axis=1) # equal the current rows .groupby(groups).transform('all') # identical across the groups & sizes.groupby([df_dupl['UNIQUE_ID'],enums]).diff().eq(0). # and the group size are equal too ) # output df_dupl.loc[~dup] Output: EVENT_TIME UNIQUE_ID Value1 Value2 0 00:01 123 A 0.3 1 00:01 123 B 0.2 2 00:01 123 A 0.2 3 00:03 125 A 0.1 4 00:03 125 B 1.3 5 00:03 125 A 0.2 9 00:08 127 A 0.1 10 00:08 127 B 1.3 11 00:10 123 A 0.3 12 00:10 123 B 0.2 13 00:11 123 C 0.3 14 00:11 123 B 0.2 15 00:13 123 A 0.3 16 00:13 123 B 0.2 17 00:13 123 A 0.2
5
5
78,080,665
2024-2-29
https://stackoverflow.com/questions/78080665/insert-attibute-in-position
I need to insert an element attribute at the correct position using the lxml library. Here is an example where I am trying to insert the attr2 attribute in front of the attr3 attribute: from lxml import etree xml = '<root attr0="val0" attr1="val1" attr3="val3" attr4="val4" attr5="val5"/>' root = etree.fromstring(xml) inse_pos = root.keys().index('attr3') attrib_items = root.items() attrib_items.insert(inse_pos, ('attr2', 'val2')) root.attrib = dict(attrib_items) print(etree.tostring(root)) But I'm getting an error: AttributeError: attribute 'attrib' of 'lxml.etree._Element' objects is not writable
One possible solution could be to recreate the element attributes from scratch: from lxml import etree xml = '<root attr0="val0" attr1="val1" attr3="val3" attr4="val4" attr5="val5"/>' root = etree.fromstring(xml) attribs = root.attrib.items() root.attrib.clear() for k, v in attribs: if k == "attr3": root.attrib["attr2"] = "val2" root.attrib[k] = v print(etree.tostring(root)) Prints: b'<root attr0="val0" attr1="val1" attr2="val2" attr3="val3" attr4="val4" attr5="val5"/>'
2
3
78,080,284
2024-2-29
https://stackoverflow.com/questions/78080284/test-for-missing-import-with-pytest
In the __init__.py-file of my project I have the following code, for retrieving the current version of the program from the pyproject.toml-file: from typing import Any from importlib import metadata try: import tomllib with open("pyproject.toml", "rb") as project_file: pyproject: dict[str, Any] = tomllib.load(project_file) __version__ = pyproject["tool"]["poetry"]["version"] except Exception as _: __version__: str = metadata.version(__package__ or __name__) This code works fine on Python 3.11 and newer. On older versions, however, tomllib does not exist, and therefore the exception should be raised (and the version will be determined by using metadata) When testing on Python 3.11 with pytest, is there any way of designing a test to check if that approach of determining the version works both with and without tomllib, without having to create a second venv without tomllib? I.e. how can I artificially generate the exception, such that I can test both branches without having to switch to different versions?
You can monkeypatch sys.modules, so tomllib is not there (see Python testing: Simulate ImportError) Here is a curated example: import sys def func(): try: import tomllib return 1 except ImportError: return 0 def test_with_tomllib(): assert func() == 1 def test_without_tomllib(monkeypatch): monkeypatch.setitem(sys.modules, 'tomllib', None) assert func() == 0
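A hedged sketch of applying the same monkeypatch to the module-level code from the question's __init__.py; "mypackage" is a hypothetical package name. Because the try/except runs at import time, the package has to be re-imported while tomllib is masked:
import importlib
import sys

def test_version_without_tomllib(monkeypatch):
    monkeypatch.setitem(sys.modules, "tomllib", None)  # makes `import tomllib` fail
    monkeypatch.delitem(sys.modules, "mypackage", raising=False)  # force a fresh import
    mypackage = importlib.import_module("mypackage")
    assert mypackage.__version__  # falls back to importlib.metadata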
3
4
78,075,981
2024-2-28
https://stackoverflow.com/questions/78075981/joining-polars-dataframes-while-ignoring-duplicate-values-in-the-on-column
Given the code df1 = pl.DataFrame({"A": [1, 1], "B": [3, 4]}) df2 = pl.DataFrame({"A": [1, 1], "C": [5, 6]}) result = df1.join(df2, on='A') result looks like shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ A ┆ B ┆ C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 3 ┆ 5 β”‚ β”‚ 1 ┆ 4 ┆ 5 β”‚ β”‚ 1 ┆ 3 ┆ 6 β”‚ β”‚ 1 ┆ 4 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ but I would like it to be shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ A ┆ B ┆ C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 3 ┆ 5 β”‚ β”‚ 1 ┆ 4 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Experimenting with left_on, right_on and how parameters did not resolve this issue.
If the tables are aligned (possibly, after sorting by the on column), you can concatenate them horizontally using pl.concat. pl.concat([df1.sort("A"), df2.sort("A").drop("A")], how="horizontal") shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ A ┆ B ┆ C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 3 ┆ 5 β”‚ β”‚ 1 ┆ 4 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ If one table contains more rows and you want to keep the extra rows, you could add an extra column to store the index of the row among all rows with same on column(s). Then, you can join the tables using both on column(s) and the index. ( df1 .with_columns(pl.int_range(pl.len()).over("A").alias("id")) .join( df2.with_columns(pl.int_range(pl.len()).over("A").alias("id")), on=["A", "id"], how="left", ) .select(pl.exclude("id")) )
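A small sketch exercising the second approach with a hypothetical extra row added to df1 (not part of the original question), so the effect of the left join on ["A", "id"] is visible:
import polars as pl

df1 = pl.DataFrame({"A": [1, 1, 1], "B": [3, 4, 7]})
df2 = pl.DataFrame({"A": [1, 1], "C": [5, 6]})

result = (
    df1
    .with_columns(pl.int_range(pl.len()).over("A").alias("id"))  # row index within each A group
    .join(
        df2.with_columns(pl.int_range(pl.len()).over("A").alias("id")),
        on=["A", "id"],
        how="left",
    )
    .select(pl.exclude("id"))
)
print(result)  # expected rows: (1, 3, 5), (1, 4, 6), and (1, 7, null) for the unmatched extra row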
3
3
78,077,410
2024-2-28
https://stackoverflow.com/questions/78077410/styling-negative-numbers-in-pandas
I have a dataframe that I am exporting to Excel. I would also like to style it before the export. I have this code which changes the background color and text color and works fine, but I would like to add to it: df.style.set_properties(**{'background-color': 'black', 'color': 'lawngreen', 'border-color': 'white'}).to_excel(writer, sheet_name='Sheet1', startrow=rowPos, float_format = "%0.5f") I need columns with strings and dates to have a white text color, and then positive numbers to be green and negative numbers to be red. I pulled these styles directly from pandas documentation on styling since I have never used it before, and am unsure how to achieve these results. Lets say my dataframe looks like this: StartDate ExpiryDate Commodity Quantity Price Total --------- ---------- ---------- ------- ----- ----- 02/28/2024 12/28/2024 HO 10000 -3.89 -38900 02/28/2024 12/28/2024 WPI 10000 4.20 42000 how could I achieve what I am looking for?
I'd break it down into three steps (see the comments #) : st = ( df.style # 1-applying the default styles .set_properties(**default_css) # 2-formatting the numeric columns .apply( lambda df_: df_.select_dtypes("number") .lt(0).replace({True: tc(neg), False: tc(pos)}), axis=None, ) .format(precision=2) # this one is optional # 3-formatting the string-like dates and strings .map(lambda v: tc(obj) if isinstance(v, str) else "") ) # st.to_excel("output.xlsx", index=False) # uncomment to make an Excel Output : Used CSS : default_css = { "background-color": "black", "border": "1px solid white", } tc = "color: {}".format # css text color obj, pos, neg = "white", "lawngreen", "red"
3
1
78,076,178
2024-2-28
https://stackoverflow.com/questions/78076178/how-do-i-type-hint-a-return-type-of-one-function-based-init-if-init-is-overloade
I have a class that accepts either ints or floats in its __init__, but all arguments must be int or all float, so I am using typing.overload to achieve that, and I want to be able to type hint the return of a function based on the given values. class Vector3: @overload def __init__(self, x: int, y: int, z: int) -> None: ... @overload def __init__(self, x: float, y: float, z: float) -> None: ... def __init__(self, x, y, z) -> None: self._x = x self._y = y self._z = z # This function def __key(self) -> tuple[int | float, int | float, int | float]: return (self._x, self._y, self._z) Also, how would I type hint the values of x, y, and z? I plan to use @property to obfuscate the _x, _y, _z values and don't know how I'd type hint them either. @property def x(self) -> int | float: return self._x
Don't overload __init__ at all. Make Vector3 generic instead, using a constrained type variable. from typing import Generic, TypeVar T = TypeVar('T', int, float) class Vector3(Generic[T]): def __init__(self, x: T, y: T, z: T) -> None: self._x: T = x self._y: T = y self._z: T = z def __key(self) -> tuple[T, T, T]: return (self._x, self._y, self._z) @property def x(self) -> T: return self._x # etc Then reveal_type(Vector3(1, 2, 3)) # Vector3[int] reveal_type(Vector3(1., 2., 3.)) # Vector3[float] reveal_type(Vector3(1, 2, 3.)) # Vector3[float], via promotion of int values to floats reveal_type(Vector3('1', '2', '3')) # type error, T cannot be bound to str Note that int is considered a subtype of float in this context.
4
5
78,075,720
2024-2-28
https://stackoverflow.com/questions/78075720/how-to-write-to-a-sqlite-database-using-polars-in-python
import polars as pl import sqlite3 conn = sqlite3.connect("test.db") df = pl.DataFrame({"col1": [1, 2, 3]}) According to the documentation of pl.write_database, I need to pass a connection URI string e.g. "sqlite:////path/to/database.db" for SQLite database: df.write_database("test_table", f"sqlite:////test.db", if_table_exists="replace") However, I got the following error: OperationalError: (sqlite3.OperationalError) unable to open database file EDIT: Based on the answer, install SQLAlchemy with the pip install polars[sqlalchemy] command.
When working with SQLite, the number of slashes depends on how you are accessing the sqlite file. If you use two slashes as suggested in the comments, you'll see this: sqlalchemy.exc.ArgumentError: Invalid SQLite URL: sqlite://test.db Valid SQLite URL forms are: sqlite:///:memory: (or, sqlite://) sqlite:///relative/path/to/file.db sqlite:////absolute/path/to/file.db It appears that in your case, you want to use 3 slashes and not 4. Please try this instead: df.write_database("test_table", f"sqlite:///test.db", if_table_exists="replace") This would be a working example, based on the sample code snippets you provided: import polars as pl import sqlite3 conn = sqlite3.connect("test.db") df = pl.DataFrame({"col1": [1, 2, 3]}) df.write_database("test_table", f"sqlite:///test.db", if_table_exists="replace")
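A small, hedged check that the table was actually written, reusing the stdlib sqlite3 connection style from the question rather than another URI-based engine:
import sqlite3

conn = sqlite3.connect("test.db")
rows = conn.execute("SELECT col1 FROM test_table ORDER BY col1").fetchall()
print(rows)  # expected: [(1,), (2,), (3,)]
conn.close()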
7
3
78,073,286
2024-2-28
https://stackoverflow.com/questions/78073286/how-can-i-have-the-value-of-a-gradio-block-be-passed-to-a-functions-named-param
Example: import gradio as gr def dummy(a, b='Hello', c='!'): return '{0} {1} {2}'.format(b, a, c) with gr.Blocks() as demo: txt = gr.Textbox(value="test", label="Query", lines=1) txt2 = gr.Textbox(value="test2", label="Query2", lines=1) answer = gr.Textbox(value="", label="Answer") btn = gr.Button(value="Submit") btn.click(dummy, inputs=[txt], outputs=[answer]) gr.ClearButton([answer]) demo.launch() How can I have the value of txt2 be given to the dummy()'s named parameter c?
As far as I know, Gradio doesn't directly allow it. I.e., it doesn't allow something like this: btn.click(fn=dummy, inputs={"a": txt, "c": txt2}, outputs=answer) In this specific case, if you can't modify the dummy function then I would rearrange the parameters with the following function: def rearrange_args(func): def wrapper(*args): a, c = args return func(a, c=c) return wrapper Now, you just need to modify your example as follows: btn.click(rearrange_args(dummy), inputs=[txt, txt2], outputs=[answer])
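As an alternative to the wrapper, a plain lambda that forwards txt2's value to the named parameter should also work, since Gradio accepts any callable as the event handler; this is a hedged sketch, not an officially documented pattern:
# Positional inputs are mapped to the lambda's parameters, which then call dummy with c as a keyword.
btn.click(lambda a, c: dummy(a, c=c), inputs=[txt, txt2], outputs=[answer])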
2
3
78,072,152
2024-2-28
https://stackoverflow.com/questions/78072152/getting-all-value-that-is-max-occurrence-of-each-row-of-numpy
How can I gather all the values that are repeated the most in each row of a numpy array, similar to the result of np.unique? However, I want to avoid loops as the data to be handled will be much larger (many more rows). See the example below. Input: a is a 2D array with shape (x, k), where x would be very large. a = np.asarray([[2, 7, 7, 2, 1], [1, 2, 3, 5, 5], [6, 6, 6, 6, 6]]) Ideal output: [[2,7], [5], [6]]; in the first row, [2,7] both appear 2 times. Using loops can nearly do the job, but np.unique does not seem to work the same way for multi-dimensional arrays. [np.array(np.unique(i, return_counts=1)) for i in a] # decent output > [array([[1, 2, 7],[1, 2, 2]], dtype=int64), array([[1, 2, 3, 5], [1, 1, 1, 2]], dtype=int64), array([[6], [5]], dtype=int64)] # Multi-dimension Input np.unique(a, return_counts=1, axis=1) # Useless Output > (array([[1, 2, 2, 7, 7], [5, 1, 5, 2, 3], [6, 6, 6, 6, 6]]), array([1, 1, 1, 1, 1], dtype=int64))
Use statistics.multimode: from statistics import multimode out = list(map(multimode, a)) Output: [[2, 7], [5], [6]]
2
3
78,071,920
2024-2-28
https://stackoverflow.com/questions/78071920/add-multiple-columns-from-one-function-call-in-python-polars
I would like to add multiple columns at once to a Polars dataframe, where each column derives from the same object (for a row), by creating the object only once and then returning a method of that object for each column. Here is a simplified example using a range object: import polars as pl df = pl.DataFrame({ 'x': [11, 22], }) def uses_object(x): r = list(range(0, x)) c10 = r.count(10) c12 = r.count(12) return c10, c12 df = df.with_columns( count_of_10 = pl.col('x').map_elements(lambda x: uses_object(x)[0]), count_of_12 = pl.col('x').map_elements(lambda x: uses_object(x)[1]), ) print(df) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ count_of_10 ┆ count_of_12 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════β•ͺ═════════════║ β”‚ 11 ┆ 1 ┆ 0 β”‚ β”‚ 22 ┆ 1 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I tried multiple assignment df = df.with_columns( count_of_10, count_of_12 = uses_object(pl.col('x')), ) but got error NameError name 'count_of_10' is not defined. Can I change the code to call uses_object only once?
If you return a dictionary from your function: return dict(count_of_10=c10, count_of_12=c12) You will get a struct column: df.with_columns( count = pl.col('x').map_elements(uses_object) ) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ count β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ 11 ┆ {1,0} β”‚ β”‚ 22 ┆ {1,1} β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which you can .unnest() into individual columns. df.with_columns( count = pl.col('x').map_elements(uses_object) ).unnest('count') shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ count_of_10 ┆ count_of_12 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════β•ͺ═════════════║ β”‚ 11 ┆ 1 ┆ 0 β”‚ β”‚ 22 ┆ 1 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ As for your current approach, you would call it once and then use Polars list methods to extract the values in a separate .with_columns / .select e.g. df.with_columns( count = pl.col('x').map_elements(uses_object) ).with_columns( count_of_10 = pl.col('count').list.first(), count_of_12 = pl.col('count').list.last(), ).drop('count')
3
2
78,059,324
2024-2-26
https://stackoverflow.com/questions/78059324/attributeerror-styler-object-has-no-attribute-style
This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [2, 2, 2, -4, 4, 4, 4, -3, 2, -2, -6], 'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6] } ) I use a function to highlight cells in a when I use to_excel: def highlight_cells(s): if s.name=='a': conds = [s > 0, s < 0, s == 0] labels = ['background-color: lime', 'background-color: pink', 'background-color: gold'] array = np.select(conds, labels, default='') return array else: return ['']*s.shape[0] Now I want to add one more feature by adding a plus sign if a value in a is positive. For example 1 becomes +1. I want this feature only for column a. This is my attempt but it does not work. It gives me the error that is the title of the post. df.style.apply(highlight_cells).style.format({'a': '{:+g}'}).to_excel('df.xlsx', sheet_name='xx', index=False)
style.apply already returns a Styler object, so you only need to chain further operations on it, i.e. df.style.apply(highlight_cells).format(...) # ^ # | # No need for .style again
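A hedged end-to-end version of the corrected chain, assuming the df and highlight_cells from the question (and numpy imported as np); note that whether the '{:+g}' display format actually carries through into the Excel file depends on the pandas version's Styler-to-Excel support, so this mainly shows how to avoid the AttributeError:
styled = df.style.apply(highlight_cells).format({'a': '{:+g}'})  # one Styler, no second .style
styled.to_excel('df.xlsx', sheet_name='xx', index=False)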
2
4
78,047,434
2024-2-23
https://stackoverflow.com/questions/78047434/deserialize-aliased-json-using-pydantic
I am wanting to use the Pydantic (Version: 2.6.1) aliases so that I can use a Python keyword ('from') when creating JSON. I thought this would work: from pydantic import BaseModel, Field class TestModel(BaseModel): from_str: str = Field(serialization_alias="from") test_obj = TestModel(from_str="John") print("TestModel: ", test_obj) test_json = test_obj.model_dump(by_alias=True) print("Test JSON:", test_json) deserialized_obj = TestModel.model_validate(test_json) print("Deserialized: ", deserialized_obj) This correctly loads into a dict with the right alias ('from'). But it gives me this error in the call to model_validate: pydantic_core._pydantic_core.ValidationError: 1 validation error for TestModel from_str Field required [type=missing, input_value={'from': 'John'}, input_type=dict] I thought it would also use the serialization alias when converting back to a TestModel, but it doesn't seem so. I do need it to be able to round-trip. Any help is appreciated. I have tried fiddling with validation_alias as well, but then I get issues when calling model_dump.
I was able to 'fix' this by setting up the class as follows: class TestModel(BaseModel): model_config = ConfigDict(populate_by_name=True) from_str: str = Field(alias="from") Here populate_by_name allows you to access the field via its name, while serialization and validation use the alias.
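A hedged round-trip check with the reconfigured model, mirroring the test code from the question (Pydantic v2 API):
from pydantic import BaseModel, ConfigDict, Field

class TestModel(BaseModel):
    model_config = ConfigDict(populate_by_name=True)
    from_str: str = Field(alias="from")

obj = TestModel(from_str="John")          # construct by field name
data = obj.model_dump(by_alias=True)      # {'from': 'John'}
assert TestModel.model_validate(data) == obj  # round-trips cleanly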
2
2
78,040,248
2024-2-22
https://stackoverflow.com/questions/78040248/visualize-nodes-and-their-connections-in-clusters-via-networkx
I have a list of connections between two nodes describing similarities of entries in a dataset. I'm thinking of visualizing the entries and their connections to show that there are clusters of very similar entries. Each tuple stands for a pair of very similar nodes. I've chosen weight as 1 for all of them since it's required, but I want all edges equally thick. I've started with networkx; the problem is I don't really know how to cluster the similar nodes together in a useful manner. I have a list of the connections in a dataframe: smallSample = [[0, 1492, 1], [12, 937, 1], [16, 989, 1], [18, 371, 1], [18, 1140, 1], [26, 398, 1], [26, 1061, 1], [30, 1823, 1], [33, 1637, 1], [54, 1047, 1], [63, 565, 1]] I create a graph the following way: import networkx as nx import matplotlib.pyplot as plt G = nx.Graph() for index, row in CC.iterrows(): G.add_edge(CC['source'].loc[index],CC['target'].loc[index], weight =1) pos = nx.spring_layout(G, seed=7) nx.draw_networkx_nodes(G, pos, node_size=5) nx.draw_networkx_edges(G, pos, edgelist=G.edges(), width=0.5) pos = nx.spring_layout(G, k=1, iterations=200) plt.figure(3, figsize=(2000,2000), dpi =2) With the small sample provided above the result looks like this: The result from my real df which consists of thousands of points: How can I group the linked nodes together so that it is easier to see how many of them are in each cluster? I don't want them to overlap so much; it's really not easy to grasp how many of them there are, especially in the big sample.
From an InfoVis perspective there are a few things you can do transparency & node size Transparency can be used to visualize overlapping. You have to choose between these two tradeoffs: A lower transparency level allows you to visualize more layers, for that many nodes need to overlap and you should increase the node size. However, a larger node size makes individual nodes stick out less and the visualization of node edges adds clutter (disable or use less tick edges). TL;DR: Choose/Play between smaller node size and high alpha values vs. larger node sizes and lower alpha values. play with the k parameter for nx.spring_layout, the larger it is the further away are the nodes. The default is 1/sqrt(len(G)) a slight increase [1.2-1.7]/sqrt(len(G)) can give you some more clarity. Last but not least I would suggest jitter for you that shuffles the position of nodes a bit and lessens overlap (there are many papers on jitter and some better versions than just uniform that I choose here, however it is the most simplest to implement.) Some recreation of the dataset This code creates a similar looking dataset import random import numpy as np import pandas as pd from copy import deepcopy import networkx as nx import matplotlib.pyplot as plt from math import sqrt random.seed(7) np.random.seed(7) # Create a bigger dataset smallSample = [ [0, 1492, 1], [12, 937, 1], [16, 989, 1], [18, 371, 1], [18, 1140, 1], [26, 398, 1], [26, 1061, 1], [30, 1823, 1], [33, 1637, 1], [54, 1047, 1], [63, 565, 1]] sample = deepcopy(smallSample) AMOUT = 4000 present_nodes = list(set(x for edge in sample for x in edge)) i = 2 while i < AMOUT: source = target = None while source == target: if random.random() < 0.9: # Create at least one new node source = i if random.random() < 0.7: # High value for many small clusters # Create a second new node target = i = i+1 present_nodes.append(target) else: target = random.choice(present_nodes) present_nodes.append(source) else: # Link existing ones source = random.choice(present_nodes) target = random.choice(present_nodes) i += 1 sample.append([source, target, 1]) CC = pd.DataFrame(sample, columns=["source", "target", "weight"], dtype=int) # Create the Graph G = nx.Graph() for index, row in CC.iterrows(): G.add_edge(CC['source'].loc[index],CC['target'].loc[index], weight =1) Calcualte Positions # Defaul k = 1/sqrt(len(G)) pos = nx.spring_layout(G, k=1/sqrt(len(G)), seed=7, iterations=100) # cast the pos dict to an np.array apos = np.fromiter(pos.values(), dtype=np.dtype((float, 2))) Default Look Transparency nx.draw_networkx_nodes(G, pos, node_size=10, alpha=0.45, linewidths=0.2) nx.draw_networkx_edges(G, pos, edgelist=G.edges(), width=0.5, alpha=0.2) plt.title("Transparency") plt.figure(3, figsize=(2000,2000), dpi =2) Use a larger k value This increases the distances between the nodes and makes it less clumpy pos15 = nx.spring_layout(G, k=1.5/sqrt(len(G)), seed=7, iterations=100) # Larger k to make it less clumpy # cast the pos dict to an np.array apos15 = np.fromiter(pos15.values(), dtype=np.dtype((float, 2))) nx.draw_networkx_nodes(G, pos15, node_size=10, alpha=0.55, linewidths=0.2) nx.draw_networkx_edges(G, pos15, edgelist=G.edges(), width=0.5, alpha=0.2) plt.title("Larger k") plt.figure(3, figsize=(2000,2000), dpi =2) Adding Jitter JITTER = 0.025 jitter = np.random.uniform(low=-JITTER, high=JITTER, size=apos.shape) jpos = {k:p for k,p in zip(pos.keys(), apos + jitter)} jpos15 = {k:p for k,p in zip(pos15.keys(), apos15 + jitter)} nx.draw_networkx_nodes(G, jpos, node_size=10, 
alpha=0.45, linewidths=0.2) nx.draw_networkx_edges(G, jpos, edgelist=G.edges(), width=0.5, alpha=0.2) plt.title("default + jitter") plt.figure(3, figsize=(2000,2000), dpi =2) plt.show() nx.draw_networkx_nodes(G, jpos15, node_size=10, alpha=0.55, linewidths=0.2) # As nodes overlapp less I would increase the alpha level a bit nx.draw_networkx_edges(G, jpos15, edgelist=G.edges(), width=0.5, alpha=0.2) plt.title("larger k + jitter") plt.figure(3, figsize=(2000,2000), dpi =2) In the end it is some playing around with the parameter to choose something you like.
3
2
78,070,256
2024-2-27
https://stackoverflow.com/questions/78070256/htmx-hx-target-error-fails-when-hx-target-defined-django-python
I am using the htmx extension "response-targets" via "hx-ext" in a Python Django project. The problem: "hx-target-error" will not work when "hx-target" is also defined, but works when it is the only target. Environment: htmx 1.9.10, response-targets 1.9.10, python 3.12, django 5.0.1 Example html: <div hx-ext="response-targets"> <button hx-get={% url "appname:endpoint" %} hx-target-error="#id-fail" hx-target="#id-success" type="submit" > Press Me! </button> </div> hx-target: I have tried with various other types of targets such as "next td", "next span", etc. All of them produce a valid target value "request.htmx.target" in the endpoint defined in views.py. hx-target-error: I have also tried various alternatives supported in response-targets such as "hx-target-*", "hx-target-404", etc., with no change in results. I verified that "response-targets" is installed correctly and being used correctly because when "hx-target" is removed from the "button", then "hx-target-error" works. Error generation in views.py: @login_required def endpoint(request) return HttpResponseNotFound("Not found 404") Logs Bad Request: /testbed/endpoint/ 127.0.0.1 - [27/Feb/2024] "GET /testbed/endpoint/ HTTP/1.1" 404 response-targets extension https://htmx.org/extensions/response-targets/
Resolved: The solution is to move the "hx-ext" attribute <div hx-ext="response-targets"> to a higher encompassing level than any DOM elements referenced by either "hx-target" or "hx-target-error". In my case, "hx-target-error" pointed to a div 'id' outside of the div containing "hx-ext". Thank you @guigui42 for suggesting this solution Example incorrect usage: <tr> <td> <div hx-ext="response-targets"> <button hx-get={% url "testbed:rebound" %} hx-headers='{"custom": "{{ test.id }}"}' hx-target-error="#GET-{{ test.id }}-fail" hx-target="#GET-{{ test.id }}-ok" class="btn btn-sm btn-primary testbed-btn" > GET {{ test.msg }} </button> </div> </td> <td> <span id="GET-{{ test.id }}-ok" /> <span class="text-danger" id="GET-{{ test.id }}-fail" /> </td> </tr> Example correct usage: <tr hx-ext="response-targets"> <td> <button hx-get={% url "testbed:rebound" %} hx-headers='{"custom": "{{ test.id }}"}' hx-target-error="#GET-{{ test.id }}-fail" hx-target="#GET-{{ test.id }}-ok" class="btn btn-sm btn-primary testbed-btn" > GET {{ test.msg }} </button> </td> <td> <span id="GET-{{ test.id }}-ok" /> <span class="text-danger" id="GET-{{ test.id }}-fail" /> </td> </tr>
2
7
78,057,740
2024-2-25
https://stackoverflow.com/questions/78057740/chrome-122-how-to-allow-insecure-content-insecure-download-blocked
I'm unable to test file download with Selenium (python), after Chrome update to the version '122.0.6261.70'. Previously running Chrome with the '--allow-running-insecure-content' arg did a trick. The same is suggested over the net. On some sites one additional arg is suggested: '--disable-web-security'. But both change nothing for me (the warning keeps appearing). Does anybody know if something has been changed between the 121 and 122 versions? Is there some arg or pref that I'm missing? Warning image for the reference: Driver creation (simplified): from selenium import webdriver from selenium.webdriver.chrome.options import Options options = Options() for arg in ["--allow-running-insecure-content", "--disable-web-security"]: options.add_argument(arg) driver = webdriver.Chrome(options=options)
Okay, so found two solutions: --unsafely-treat-insecure-origin-as-secure=* This is an experimental flag that allows you to list which domains to treat as secure so the download is no longer blocked. --disable-features=InsecureDownloadWarnings This is a more stable flag that disables the insecure download blocking feature for all domains. -- This is what worked for me: from selenium import webdriver from selenium.webdriver.chrome.options import Options chrome_options = Options() chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument("--allow-running-insecure-content") # Allow insecure content chrome_options.add_argument("--unsafely-treat-insecure-origin-as-secure=http://example.com") # Replace example.com with your site's domain chrome_options.add_experimental_option("prefs", { "download.default_directory": download_path, "download.prompt_for_download": False, "download.directory_upgrade": True, "safebrowsing.enabled": True }) driver = webdriver.Chrome(options=chrome_options)
10
20
78,065,058
2024-2-27
https://stackoverflow.com/questions/78065058/knapsack-problem-find-top-k-lower-profit-solutions
In the classic 0-1 knapsack problem, I am using the following (dynamic programming) algorithm to construct a "dp table": def knapsack(weights, values, capacity): n = len(weights) weights = np.concatenate(([0], weights)) # Prepend a zero values = np.concatenate(([0], values)) # Prepend a zero table = np.zeros((n+1, capacity+1), dtype=np.int64) # The first row/column are zeros for i in range(n+1): for w in range(capacity+1): if i == 0 or w == 0: table[i, w] = 0 elif weights[i] <= w: table[i, w] = max( table[i-1, w-weights[i]] + values[i], table[i-1, w] ) else: table[i, w] = table[i-1, w] return table Then, by traversing the table using the following code, I am then able identify the items that make up the optimal solution: def get_items(weights, capacity, table): items = [] i = len(weights) j = capacity weights = np.concatenate(([0], weights)) # Prepend a zero table_copy = table.copy() while i > 0 and j > 0: if table_copy[i, j] == table_copy[i-1, j]: pass # Item is excluded else: items.append(i-1) # Item is included, fix shifted index due to prepending zero j = j - weights[i] i = i-1 return items This is great for finding the items that make up the single optimal solution (i.e., highest total summed value). However, I can't seem to figure out how to retrieve, say, the top-3 or top-5 solutions from this table that has a total weight that is less than or equal to the maximum capacity. For example, the top-3 solutions for the following input would be: weights = [1, 2, 3, 2, 2] values = [6, 10, 12, 6, 5] capacity = 5 # with corresponding "dp table" array([[ 0, 0, 0, 0, 0, 0], [ 0, 6, 6, 6, 6, 6], [ 0, 6, 10, 16, 16, 16], [ 0, 6, 10, 16, 18, 22], [ 0, 6, 10, 16, 18, 22]]) Total Summed Value Items (Zero-based Index) 22 1, 2 22 0, 1, 3 21 0, 1, 4 Note that there is a tie for both first place and so we'd truncate the solutions after the first 3 rows (though, getting all ties is preferred if possible). Is there an efficient way to obtain the top-k solutions from the "dp table"?
I'd build a richer DP table. For each cell, you store just the total value of just one item set. I'd instead store information of the top k item sets for that cell: both the total value and the indices. Output of the below program: 22 [0, 1, 3] 22 [1, 2] 21 [0, 1, 4] And that's the information I store in the final table cell table[-1][-1] instead of just your 22. More precisely, this is the Python data I store there: [(22, [1, 2]), (22, [0, 1, 3]), (21, [0, 1, 4])] If you need it more efficient, I think you could store only the last index of each item set instead of all its indices. But then you have to reconstruct the item sets later, and I'm not in the mood to do that :-) Full code: import numpy as np from operator import itemgetter def knapsack(weights, values, capacity, k): n = len(weights) weights = np.concatenate(([0], weights)) # Prepend a zero values = np.concatenate(([0], values)) # Prepend a zero get_total = itemgetter(0) table = np.zeros((n+1, capacity+1), dtype=object) # The first row/column are zeros for i in range(n+1): for w in range(capacity+1): if i == 0 or w == 0: table[i, w] = [(0, [])] elif weights[i] <= w: table[i, w] = sorted( [ (total + values[i], indices + [i-1]) for total, indices in table[i-1, w-weights[i]] ] + table[i-1, w], key=get_total, reverse=True )[:k] else: table[i, w] = table[i-1, w] return table[-1][-1] weights = [1, 2, 3, 2, 2] values = [6, 10, 12, 6, 5] capacity = 5 k = 3 top_k = knapsack(weights, values, capacity, k) for total, indices in top_k: print(total, indices) Attempt This Online!
4
1
78,048,223
2024-2-23
https://stackoverflow.com/questions/78048223/adding-folder-with-data-with-pyproject-toml
I would like to package some legacy code to a hello Python package containing only one module (hello.py) file in the top-level directory alongside with some data in a folder called my_data without changing the folder structure: hello/ |-hello.py |-pyproject.toml |-my_data/ |-my_data.csv Packaging the Python source code with the following pyproject.toml file is surprisingly simple (without any prior knowledge on packaging), but running pip install . -vvv fails to copy the data: [project] name = "hello" version = "0.1" [tool.setuptools] py-modules = ['hello'] [tool.setuptools.package-data] hello = ['my_data/*'] The content of hello.py could be minimal: def hello: print('Hello, world!') I tried multiple variants of this pyproject.toml file according to the documentation on http://setuptools.pypa.io/en/stable/userguide/datafiles.html as well as a related question on Specifying package data in pyproject.toml, but none of them would result in copying of the my_data/ folder directly into the site-packages folder (which is intended, but probably bad practice). I also found documentation suggesting to use a MANIFEST.in> graft my_data but also this doesn't result in the data to be installed alongside the code.
The package-data configuration of setuptools can be used when you have a package. But instead you seem to have a single-file module. In other words, package-data is incompatible with the directory structure that you have. I suggest rearranging the files like the following: hello/ |-pyproject.toml |-hello/ |-__init__.py # <----- previously named `hello.py` |-my_data.csv # pyproject.toml diff ... [tool.setuptools] - py-modules = ['hello'] + packages = ['hello'] [tool.setuptools.package-data] - hello = ['my_data/*'] + "" = ['*.csv'] # <--- "" (empty string) means "all packages".
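As a follow-up usage sketch (not part of the original answer): once the CSV ships as package data in the rearranged layout above, it can be located at runtime with importlib.resources. The helper name load_my_data is illustrative, not something the question defines.

from importlib.resources import files

def load_my_data() -> str:
    # Resolve my_data.csv relative to the installed "hello" package
    return files("hello").joinpath("my_data.csv").read_text(encoding="utf-8")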
2
2
78,056,565
2024-2-25
https://stackoverflow.com/questions/78056565/how-do-i-get-doctest-to-run-with-examples-in-markdown-codeblocks-for-mkdocs
I'm using mkdocs & mkdocstring to build my documentation and including code examples in the docstrings. I'm also using doctest (via pytest --doctest-modules) to test all those examples. Option 1 - format for documentation If I format my docstring like this: """ Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right. Examples: -------- ``` >>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])] [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` """ Then it renders nicely in the documentation but doctest fails with the error: Expected: [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Got: [1, 2, 3, 4, 5, 6, 7, 8, 9] That makes sense as doctest treats everything until a blank line as expected output and aims to match is exactly Option 2 - format for doctest If I format the docstring for doctest without code blocks: """ Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right. Examples: -------- >>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])] [1, 2, 3, 4, 5, 6, 7, 8, 9] """ then doctest passes but the documentation renders [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])][1, 2, 3, 4, 5, 6, 7, 8, 9] Workaround? - add a blank line for doctest If I format it with an extra blank line before the end of the codeblock: """ Recursively flattens a nested iterable (including strings!) and returns all elements in order left to right. Examples: -------- ``` >>> [x for x in flatten([1,2,[3,4,[5],6],7,[8,9]])] [1, 2, 3, 4, 5, 6, 7, 8, 9] ``` """ Then doctest passes but there is a blank line at the bottom of the example in the documentation (ugly) I need to remember to add a blank line at the end of each example (error prone and annoying) Does anyone know of a better solution?
Patching the regex that doctest uses to identify codeblocks solved this problem. Documenting it here for those who stumble across this in the future ... As this is not something I want to do regularly in projects(!), I created pytest-doctest-mkdocstrings as a pytest plugin to do this for me and included some additional sanity-checking, configuration options etc. pip install pytest-doctest-mkdocstrings pytest --doctest-mdcodeblocks --doctest-modules --doctest-glob="*.md" For those who are looking here for the answer in code to use yourself, the required change is: _MD_EXAMPLE_RE = re.compile( r""" # Source consists of a PS1 line followed by zero or more PS2 lines. (?P<source> (?:^(?P<indent> [ ]*) >>> .*) # PS1 line (?:\n [ ]* \.\.\. .*)*) # PS2 lines \n? # Want consists of any non-blank lines that do not start with PS1. (?P<want> (?:(?![ ]*$) # Not a blank line (?![ ]*```) # Not end of a code block (?![ ]*>>>) # Not a line starting with PS1 .+$\n? # But any other line )*) """, re.MULTILINE | re.VERBOSE, ) doctest.DocTestParser._EXAMPLE_RE = _MD_EXAMPLE_RE Specifically I have included (?![ ]*```) # Not end of a code block in the identification of the "want"
3
0
78,048,681
2024-2-23
https://stackoverflow.com/questions/78048681/add-a-new-column-into-an-existing-polars-dataframe
I want to add a column new_column to an existing dataframe df. I know this looks like a duplicate of Add new column to polars DataFrame but the answer to that questions, as well as the answers to many similar questions, don't really add a column to an existing dataframe. They create a new column with another dataframe. I think this can be fixed like this: df = df.with_columns( new_column = pl.lit('some_text') ) However, rewriting the whole dataframe just to add a few columns, seems a bit of a waste to me. Is this the right approach?
Your question suggests that you think that when you do df = df.with_columns( new_column = pl.lit('some_text') ) that you're copying everything over to some new df which would be really inefficient. You're right that that would be really inefficient but that isn't what happens. A DataFrame is just a way to organize pointers to the actual data. The hierarchy is that you have, at the top, DataFrames. Within a DataFrame are Serieses which are how columns are represented. Even at the Series level, it's still just pointers, not data. It is made up of one or more chunked arrays which fit the apache arrow memory model. When you "make a new df" all you're doing is organizing pointers, not data. The data doesn't move or copy. Conversely consider pandas's inplace parameter. It certainly makes it seem like you're modifying things in place and not making copies. inplace does not generally do anything inplace but makes a copy and reassigns the pointer https://github.com/pandas-dev/pandas/issues/16529#issuecomment-323890422 The crux of the issue is that in pandas everything you do makes a copy (or several). In polars, that isn't the case so even when you assign a new df that new df is just an outer layer that points to data. The data doesn't move, nor is it copied unless you specifically execute an operation that does. That said, there are methods which will insert columns without requiring you to use the df=df... syntax but they don't do anything different under the hood as using the preferred assignment syntax.
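To make that last paragraph concrete, here is a minimal sketch of both styles. The method names for the "in place" variants are version-dependent assumptions: insert_column is the current name (older Polars releases call it insert_at_idx), and both do the same pointer bookkeeping under the hood as the assignment form.

import polars as pl

df = pl.DataFrame({"a": [1, 2, 3]})

# Preferred: reassign - only the outer frame (pointers) is rebuilt, the data is not copied
df = df.with_columns(new_column=pl.lit("some_text"))

# "In place" style alternatives (same work under the hood)
df.insert_column(1, pl.Series("another_col", [10, 20, 30]))          # insert between existing columns
df.hstack([pl.Series("third_col", [1.0, 2.0, 3.0])], in_place=True)  # append on the right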
4
6
78,065,276
2024-2-27
https://stackoverflow.com/questions/78065276/why-python-multiprocessing-process-passes-a-queue-parameter-and-read-faster-th
In python I have implemented 2 types of queue reads The difference: queue is created and executed in the main process queue is created in the main process and executed by other processes. But there is a performance difference, I tried to debug it, but I can't see why! code: queue1.py import multiprocessing import time import cProfile, pstats, io def put_queue(queue): for i in range(500000): queue.put(i) def get_queue(queue): pr = cProfile.Profile() pr.enable() print(queue.qsize()) while queue.qsize() > 0: try: queue.get(block=False) except: pass pr.dump_stats("queue1.prof") pr.disable() s = io.StringIO() sortby = "cumtime" ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) q1 = multiprocessing.Queue() t1 = time.time() put_queue(q1) t2 = time.time() print(t2-t1) t1 = time.time() p1 = multiprocessing.Process(target=get_queue, args=(q1,)) p1.start() p1.join() t2 = time.time() print(t2-t1) queue2.py import multiprocessing import time import cProfile, pstats, io def put_queue(queue): for i in range(500000): queue.put(i) def get_queue(queue): pr = cProfile.Profile() pr.enable() print(queue.qsize()) while queue.qsize() > 0: try: queue.get(block=False) except: pass pr.dump_stats("queue2.prof") pr.disable() s = io.StringIO() sortby = "cumtime" ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) q2 = multiprocessing.Queue() t1 = time.time() put_queue(q2) t2 = time.time() print(t2 - t1) t1 = time.time() get_queue(q2) t2 = time.time() print(t2 - t1) python queue2.py takes longer than queue1.py I also print profile. enter image description here queue2.py cost muth time in built-in method posix.read. I want to know exactly why.
I don't know if I cam tell you "exactly why" you observe what you see but I have what I believe is a fairly good explanation -- even if it doesn't explain all of what you see. First, if you read the documentation on multiprocessing.Queue, you will see that the call to method qsize is not reliable and should not be used (if you follow my explanation below of what is occurring you can readily see why this is so). Let's use a simpler benchmark that abides by the documentation: from multiprocessing import Queue, Process import time N = 500_000 def putter(queue): for i in range(N): queue.put(i) def getter(queue): for i in range(N): queue.get() def benchmark1(queue): t = time.time() putter(queue) getter(queue) elapsed = time.time() - t print('benchmark 1 time:', elapsed) def benchmark2(queue): t = time.time() putter(queue) p = Process(target=getter, args=(queue,)) p.start() p.join() elapsed = time.time() - t print('benchmark 2 time:', elapsed) if __name__ == '__main__': queue = Queue() benchmark1(queue) benchmark2(queue) Prints: benchmark 1 time: 19.12191128730774 benchmark 2 time: 8.261705160140991 Indeed we see that the version of code where getter is running in another process is approximately twice as fast. Explanation There is clearly a difference in the two cases and I believe the following explains it: A multiprocessing queue is built on top of a multiprocessing.Pipe instance, which has a very limited capacity. If you attempt to send data through the pipe without having another thread/process reading on the other end, you will soon block after having sent very little data. Yet we know that if we define a queue with unlimited capacity I can do, as in the above benchmarks, 500_000 put requests without blocking. What magic makes this possible? When you create the queue, a collections.deque instance is created as part of the queue's implmentation. When a put is done you are simply appending to a deque with unlimited capacity. In addition, a thread is started that waits for the deque to have data and it is actually this thread that is responsible for sending the data through the pipe. Thus it is this special thread that is blocking while the main thread can continue adding data to the queue (actually, the underlying dequeue) without blocking. By time we call the putter function in benchmark1 essentially all the "put" data is still on the dequeue. As the getter function running in the main thread starts to get items from the queue, the other thread can send more data through the pipe. So we are going rapidly back and forth between these two threads with one thread essentially idle. Here is a bit of the sequence. For the sake of simplicity I am assuming that Pipe's capacity is just one item: The putter thread is blocked on the pipe (not running) waiting for something to get the data from the pipe. The getter thread gets an item from the queue (actually the underlying pipe) but now the queue is temporarily empty until the putter can take the next item from its dequeue and send it through the pipe. Until that occurs the getter is in a wait state. The putter "wakes up" being no longer blocked on the pipe and can now send another item through the pipe. But now it blocks again until the getter is dispatched and can get the item from the pipe. In reality, the pipe's capacity might be several of these small integer items and so the getter might run and get several of these items from the queue allowing the putter to concurrently put a few more items without blocking. 
However, much of the queueing logic is Python bytecode that needs to acquire the GIL before it can execute. In the multiprocessing case, benchmark2, the two threads are running in different processes and so there is no GIL contention. This means items can be moved from the deque through the pipe in parallel with the getter retrieving them and I believe this accounts for the timing difference.
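A small sanity check consistent with this explanation (timings will vary by machine): because items are first appended to the internal deque and only later pushed through the pipe by the feeder thread, the put side itself never blocks.

import multiprocessing
import time

if __name__ == "__main__":
    q = multiprocessing.Queue()
    t = time.time()
    for i in range(500_000):
        q.put(i)  # appended to an internal deque; the feeder thread sends it through the pipe later
    print("500k puts took", round(time.time() - t, 2), "s")  # fast, never blocks on the pipe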
2
3
78,070,791
2024-2-27
https://stackoverflow.com/questions/78070791/my-floating-point-problem-trial-in-c-python
In what follows, IEEE-754 Double-precision floating-point format is taken for granted to be used. Python: "...almost all machines use IEEE 754 binary floating-point arithmetic, and almost all platforms map Python floats to IEEE 754 binary64 'double precision' values." C++: associates double with IEEE-754 double-precision. Machine epsilon is s.t. fl(1+epsilon)>1. Using double precision, formally epsilon = 2^-52, but mostly implemented (because of rounding-to-nearest) as epsilon = 2^-53. In Python (Spyder), when I do: i = 1.0 + 2**-53 print(i) >> 1.0 C++ version: #include <iostream> #include <cmath> #include <iomanip> int main() { double i = std::pow(2.0, -53); double j = 1.0 + i; std::cout << j; return 0; } >> 1.0 Furthermore, when I do in Python: i = 1.0 + 2**-52 + 2**-53 print(i) >> 1.0000000000000004 C++ version: #include <iostream> #include <cmath> #include <iomanip> int main() { double i = std::pow(2.0, -52); double j = 1.0 + i; double k = j + std::pow(2.0, -53); std::cout << std::setprecision(17) << k; return 0; } >> 1.0000000000000004 where the order of addition is from left to right so that the magic of the non-associativity does not come into play, i.e. i = 1.0 + 2**-52 + 2**-53 <=> i = (1.0 + 2**-52) + 2**-53. The weird (to me) thing happens here. Firstly, 1 + 2**-52 is stored (in registers) as exactly the Double Precision format equivalent of 1 + 2*epsilon, i.e. 0 + 01111111111 + 000...01 Moreover, 2**-53 is stored as: 0 + 01111001010 + 000...00 If we write these in the form (mantissa)*(2**exponent), 1 + 2**-52 is (1.0000...01)*(2**0), and 2**-53 is (1.0000...00)*(2**-53), where the mantissa (1.xxxx...xx) length is 53 (including implicit 1 bit). A bitwise addition requires the same exponent, thus I shift the mantissa of the smaller one (i.e. 2**-53) so that it has exponent 0: (1.0000...00)*(2**-53) -> (0.|52 zeros here|10000...00)*(2**0) So that the addition is like: 1.0000...010 (51 zero digits after the radix, then a '1' and '0' making a total of 53 digits) 0.0000...001 (52 zero digits after the radix + '1' digit making a total of 53 digits) + ------------ 1.0000...011 (53 digits after the radix) Now, mantissa should be 53 in total, but it is 54 above, thus it should be rounded. My question begins here: Is the reason why both of these programming languages give 1.0000000000000004 output when I do 1.0 + 2**-52 + 2**-53 because tie-to-even rule is implemented so that 1.0000...011 is rounded to 1.0000...10, which is esentially 1.0000000000000004 up to 16 digits precision? or is it (completely) something else & I made a mistake in the calculations etc.? Sorry if it seems as a bit of overkill or overthinking on a simple subject like this, however, it bothers me for days and I could not figure out the reason why or could not verify my thoughts. Any answer and comment is appreciated.
Yes, it's entirely about round-to-nearest/even. Python's float.hex() can show you the bits directly: >>> 1.0 + 2**-52 + 2**-53 1.0000000000000004 >>> _.hex() '0x1.0000000000002p+0' Although Python and C have little to do with this: it's almost certainly a result of how your CPU/FPU implement float addition (they almost certainly implement 754-style nearest/even rounding in hardware by default). Note too that associativity doesn't matter in this specific example. 1.0 + 2**-52 and 2**-52 + 2**-53 are both exactly representable on their own.
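For completeness, a short interactive check of both tie cases; the hex strings follow from the IEEE-754 binary64 rules discussed above.

>>> (1.0 + 2**-52).hex()
'0x1.0000000000001p+0'
>>> (2**-53).hex()
'0x1.0p-53'
>>> ((1.0 + 2**-52) + 2**-53).hex()   # tie rounds up to the even significand ...2
'0x1.0000000000002p+0'
>>> (1.0 + 2**-53).hex()              # this tie rounds down to the even significand 1.0
'0x1.0p+0'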
4
1
78,069,220
2024-2-27
https://stackoverflow.com/questions/78069220/what-sorting-algorithm-is-this-merge-iterator-with-itself
As long as the list isn't sorted, keep replacing it with a merge of an iterator with itself. Is that (equivalent to) one of the commonly known sorting algorithms, just implemented weirdly, or is it something new? from random import shuffle from heapq import merge from itertools import pairwise # Create test data a = list(range(100)) shuffle(a) # Sort while any(x > y for x, y in pairwise(a)): it = iter(a) a = list(merge(it, it)) print(a) Attempt This Online! heapq.merge does merge two inputs in the "standard" way, but it's an implementation detail and its inputs are supposed to be sorted, which they aren't here (and they're also not independent, as they use the same source). To eliminate it being an implementation detail, let merge actually be this (the standard way, always comparing the two "current" values, yielding the smaller one and fetching a replacement for it): def merge(xs, ys): none = object() x = next(xs, none) y = next(ys, none) while (x is not none) and (y is not none): if x <= y: yield x x = next(xs, none) else: yield y y = next(ys, none) if x is not none: yield x if y is not none: yield y yield from xs yield from ys
It's indeed bubble sort implemented weirdly. Bubble sort repeatedly does this until the list is sorted: Slide a two-item window over the list. At each position, sort the two items in the window, then slide the window one position to the right. This leaves the smaller item behind, the larger item remains in the window, and the next item enters the window. The merge with the same iterator twice does effectively the same thing. Its "two-item window" is the two "current values", the x and y in the question's merge implementation. Like in bubble sort, the smaller one is "left behind" (gets yielded, becomes the next item in the output). The larger one remains in the "window" (remains x or y). And the next item from the input list enters the window (becomes the other one of x and y). Thinking of Knuth's "Beware ... I have only proved it ... not tried it", let's also try it, checking experimentally whether the behavior is indeed identical to ordinary bubble sort. Sorting a shuffled list with 1000 elements both with my question's sort and with bubble sort, and recording the state of the list after each "round". They're identical for both sorts. from random import shuffle from heapq import merge from itertools import pairwise # Create test data a = list(range(1000)) shuffle(a) def mysort_steps(a): steps = [] while any(x > y for x, y in pairwise(a)): it = iter(a) a = list(merge(it, it)) steps.append(a[:]) return steps def bubblesort_steps(a): steps = [] n = len(a) for i in range(n - 1): swapped = False for j in range(n - 1 - i): if a[j] > a[j+1]: a[j], a[j+1] = a[j+1], a[j] swapped = True if not swapped: break steps.append(a[:]) return steps print(mysort_steps(a[:]) == bubblesort_steps(a[:])) Attempt This Online!
4
2
78,063,441
2024-2-26
https://stackoverflow.com/questions/78063441/rolling-mean-with-conditions-interview-problem
I encountered this question during an interview and can't think of a solution. This is the problem, suppose you had a dataset as follows (it goes beyond time 2 but this is just a sample to work with): import pandas as pd data = pd.DataFrame({ 'time': [1, 1, 1, 2, 2, 2], 'names': ["Andy", "Bob", "Karen", "Andy", "Matt", "Sim"], 'val': [1, 2, 3, 5, 6, 8] }) Write a function to calculate the mean of values up till each time point but don't count duplicate names. That is, for time 1 the mean is (1+2+3)/3, for time 2 I don't include Andy's first value of '1' I only include the most recent value so the mean for time 2 is (2+3+5+6+8)/5. I have tried creating two dictionaries, one that stores the 'time' count and the other keeping track of 'names' and 'values' but I don't know how to proceed from there or how to come up with an efficient solution so I am not recalculating means at each step (this was another requirement for the interview). It doesn't have to be a pandas solution, the data form can be anything you prefer. I just presented it as a pandas df.
IIUC, you want to compute the mean of the values up to the current time, while considering only the last seen duplicates (if any). If so, here is one potential option that uses boolean indexing inside a for-loop to build the expanding windows : # uncomment if necessary # data.sort_values("time", inplace=True) to_keep = "last" # duplicate means = {} for t in data["time"].unique(): window = data.loc[data["time"].le(t)] m = ~window["names"].duplicated(to_keep) means[t] = window.loc[m, "val"].mean() Output (means) : { # time|mean 1: 2.0, # (1+2+3)/3 2: 4.8, # (2+3+5+6+8)/5 }
3
1
78,069,577
2024-2-27
https://stackoverflow.com/questions/78069577/datetime-now-in-windows-vs-wsl-vs-linux
I see a difference in the datetime.now() function in Windows Python 3.11 vs WSL Python 3.10 and Linux Python 3.10. On Windows, I get duplicate entries. On WSL and Linux, I don't. I am converting a test system from primarily being used with Python in WSL to using Python in Windows natively and hit a kind of weird issue. At one place the previous author was recording the datetime.now() at the outset of the script, then checking its delta later to see if it was still the same (I'm legit not sure why, but this is what was going on). This suddenly broke with Python in Windows. After some digging, it seems that it's because there is literally no delta in the two datetime.now() calls most of the time. I tried it with WSL and it still worked. Here's an example of what I'm talking about in germ: from datetime import datetime as dt for i in range(0,10): print(f"date and time is: {dt.now()}") With Windows, this is what it prints out: date and time is: 2024-02-26 17:09:39.393765 date and time is: 2024-02-26 17:09:39.393765 date and time is: 2024-02-26 17:09:39.408956 date and time is: 2024-02-26 17:09:39.408956 date and time is: 2024-02-26 17:09:39.408956 date and time is: 2024-02-26 17:09:39.409962 date and time is: 2024-02-26 17:09:39.409962 date and time is: 2024-02-26 17:09:39.409962 date and time is: 2024-02-26 17:09:39.409962 date and time is: 2024-02-26 17:09:39.410971 As you can see, multiple entries are identical. Same system and code with WSL: date and time is: 2024-02-26 17:10:58.658753 date and time is: 2024-02-26 17:10:58.658802 date and time is: 2024-02-26 17:10:58.658828 date and time is: 2024-02-26 17:10:58.658837 date and time is: 2024-02-26 17:10:58.658857 date and time is: 2024-02-26 17:10:58.658880 date and time is: 2024-02-26 17:10:58.658888 date and time is: 2024-02-26 17:10:58.658892 date and time is: 2024-02-26 17:10:58.658911 date and time is: 2024-02-26 17:10:58.658935 No duplicates. My original assumption was that this was because of the overhead of WSL being a VM, but I'm not really sure. I just found it odd and couldn't really find any help with it online as it seems like usually the reason people run into datetime.now() "not updating" is because they're assigning the output to a variable and not rerunning it, but I am running it again each time. I was curious, so I just ran it again on a Linux box I have set up (Ubuntu 22.04), and it never saw any duplicates either. 
date and time is: 2024-02-27 09:38:16.767813 date and time is: 2024-02-27 09:38:16.767853 date and time is: 2024-02-27 09:38:16.767860 date and time is: 2024-02-27 09:38:16.767864 date and time is: 2024-02-27 09:38:16.767869 date and time is: 2024-02-27 09:38:16.767873 date and time is: 2024-02-27 09:38:16.767878 date and time is: 2024-02-27 09:38:16.767882 date and time is: 2024-02-27 09:38:16.767889 date and time is: 2024-02-27 09:38:16.767894 I checked the Python versions in each and do see a difference: Windows: Python 3.11.8 WSL: Python 3.10.12 Ubuntu: Python 3.10.12 The CPU's are different, too, if that matters in this case: Windows: i7-8700 CPU @ 3.20GHz, 3192 Mhz, 6 Core(s), 12 Logical Processor(s) Linux: lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz CPU family: 6 Model: 94 Thread(s) per core: 2 Core(s) per socket: 4 Socket(s): 1 Stepping: 3 CPU max MHz: 3400.0000 CPU min MHz: 800.0000 It's easy to get the code I was working with to work with Windows' Python by adding a time.sleep(0.000001), and I'm not sure if the offending code is even needed, but I'm just curious why this is happening and wondered if anyone could shed any light on it. Thanks!
As Python uses underlying operating system features, some of its behaviors will differ depending on what OS fuels your runtime environment. And while your test is a bit flawed because you use different Python versions, unifying these would most likely yield similar results. Windows has historically had issues with time granularity, where the clock update frequency is not as high as on some Linux configurations. The precision of time measurements and the resolution of system time updates can vary significantly, leading to differences in how frequently datetime.now() returns a new value. The addition of time.sleep(0.000001) in your script likely forces the Python interpreter to yield execution long enough for the system clock to update, thus avoiding duplicates. This workaround is generally acceptable but should be used judiciously, as it introduces a small overhead to each loop iteration that may not be needed if such fine time granularity is not actually required.
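If you want to inspect this yourself, a small sketch (output differs per OS, Windows build and Python version) is to ask Python what resolution it reports for the clock behind time.time()/datetime.now() and to count distinct timestamps in a tight burst:

import time
from datetime import datetime

# Resolution Python reports for the system clock
print(time.get_clock_info("time"))

# How many distinct values a burst of calls actually produces
samples = [datetime.now() for _ in range(10_000)]
print(len(set(samples)), "distinct timestamps out of", len(samples))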
3
4
78,068,806
2024-2-27
https://stackoverflow.com/questions/78068806/polars-aggregate-without-a-groupby
Is there a way to call .agg() without grouping first? I want to perform standard aggregations but only want one row in the response, rather than separate rows for separate groups. I could do something like df.with_columns(dummy_col=pl.lit("dummy_col")).group_by('dummy_col').agg(<aggregateion>) but I'm wondering if there's a way without the dummy stuff
When all expressions within a select context are aggregations, the resulting dataframe only has a single row. import polars as pl df = pl.DataFrame({ "a": [1, 2, 3, 4, 5], "b": [0, 0, 1, 1, 1], }) df.select(pl.col("a").mean(), pl.col("b").first()) shape: (1, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 3.0 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
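The same pattern works unchanged in a lazy query, for example (column names taken from the snippet above, aliases are illustrative):

out = (
    df.lazy()
      .select(pl.col("a").mean().alias("a_mean"), pl.col("b").first().alias("b_first"))
      .collect()
)
print(out)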
4
5
78,068,616
2024-2-27
https://stackoverflow.com/questions/78068616/what-is-the-difference-between-driver-and-webdriver-python-selenium
About Selenium with Python... What is the difference between from seleniumbase import Driver (seleniumbase) and from selenium import webdriver (selenium, seleniumwire)? What is the difference in "use cases"? I see only constructor difference: seleniumbase Driver is harder to set options, then manipulaton with object are same. from seleniumbase import Driver driver = Driver(uc=False, headless=True, proxy=proxy, incognito=None, user_data_dir=None, extension_dir=None, binary_location=None) from selenium import webdriver as Driver driver = Driver.Chrome(options=options)
It sounds like you're trying to compare this: from seleniumbase import Driver driver = Driver() with: from selenium import webdriver driver = webdriver.Chrome() The seleniumbase driver has more methods than the regular selenium one. The seleniumbase driver methods also have auto-selector detection, smart waiting, special assertion methods, allow truncated URLs, and support the TAG:contains("TEXT") selector. That means you can do this: from seleniumbase import Driver driver = Driver() driver.open("seleniumbase.io/simple/login") driver.type("#username", "demo_user") driver.type("#password", "secret_pass") driver.click('a:contains("Sign in")') driver.assert_exact_text("Welcome!", "h1") driver.assert_element("img#image1") driver.highlight("#image1") driver.click_link("Sign out") driver.assert_text("signed out", "#top_message") driver.quit() There are some other differences, such as the way options are passed. SeleniumBase options are passed as args into the Driver() definition for the Driver() Manager format (there are many other formats, such as SB(), BaseCase, etc.) SeleniumBase also has a UC Mode option, which has special methods for letting your bots bypass CAPTCHAs that block regular Selenium bots: from seleniumbase import Driver driver = Driver(uc=True) driver.uc_open_with_reconnect("https://top.gg/", 6) driver.quit() Here's a CAPTCHA-bypass example where clicking is required: from seleniumbase import Driver driver = Driver(uc=True) driver.uc_open_with_reconnect("https://seleniumbase.io/apps/turnstile", 3) driver.uc_switch_to_frame("iframe") driver.uc_click("span.mark") driver.sleep(3) driver.quit()
2
2
78,057,716
2024-2-25
https://stackoverflow.com/questions/78057716/trying-to-run-gemma-on-kaggle-and-got-issue-keras-nlp-models-has-no-attribute
I'm trying to run Gemma on Keras with this model https://www.kaggle.com/models/keras/gemma/frameworks/keras/variations/gemma_instruct_7b_en And I'm reproducing the Example available on "Model Card" on above page. When I run this code: gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en") gemma_lm.generate("Keras is a", max_length=30) # Generate with batched prompts. gemma_lm.generate(["Keras is a", "I want to say"], max_length=30) I get this error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[2], line 1 ----> 1 gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en") 2 gemma_lm.generate("Keras is a", max_length=30) 4 # Generate with batched prompts. AttributeError: module 'keras_nlp.models' has no attribute 'GemmaCausalLM' How can I fix this?
Try restarting the kernel. Note that the same behavior occurs with the 2B Gemma model, i.e. gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en"). I am not sure why restarting the kernel works, but it's worth noting that this doesn't happen in Google Colab.
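If a kernel restart alone does not help, one additional thing to check (my assumption, not confirmed by the original answer) is that an older keras-nlp without the Gemma classes was imported; verifying the installed version and the attribute looks like this:

# in a notebook cell: %pip install -q -U keras-nlp   (then restart the kernel)
import keras_nlp
print(keras_nlp.__version__)
print(hasattr(keras_nlp.models, "GemmaCausalLM"))  # should be True on recent releases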
3
3
78,067,486
2024-2-27
https://stackoverflow.com/questions/78067486/given-a-value-from-a-pandas-column-dataframe-select-n-rows-above-and-below-to-t
I have two pandas DataFrames: import pandas as pd data1 = { 'score': [1, 2], 'seconds': [1140, 2100], } data2 = { 'prize': [5.5, 14.5, 14.6, 21, 23, 24, 26, 38, 39, 40, 50], 'seconds': [840, 1080, 1380, 1620, 1650, 1680, 1700, 1740, 2040, 2100, 2160], } df1 = pd.DataFrame.from_dict(data1) df2 = pd.DataFrame.from_dict(data2) Output: df1 score seconds 0 1 1140 1 2 2100 Output: df2 prize seconds 0 5.5 840 1 14.5 1080 2 14.6 1380 3 21.0 1620 4 23.0 1650 5 24.0 1680 6 26.0 1700 7 38.0 1740 8 39.0 2040 9 40.0 2100 10 50.0 2160 For each value in seconds column from df1, I would like to get the match (or the closest to) row from df2 and also the closest 2 rows above and below the match. The seconds columns contains only sorted unique values. As result, I expect this: Output: result prize seconds 0 5.5 840 1 14.5 1080 # closest match to 1140 2 14.6 1380 3 21.0 1620 7 38.0 1740 8 39.0 2040 9 40.0 2100 # match 2100 10 50.0 2160
You can use a merge_asof to identify the closest value to each value in df1, then a rolling.max to extend the selection to the neighboring N rows: N = 2 # number of surronding rows to keep s1 = df1['seconds'].sort_values() s2 = df2['seconds'].sort_values().rename('_') keep = pd.merge_asof(s1, s2, left_on='seconds', right_on='_', direction='nearest')['_'] out = df2[s2.isin(keep) .rolling(2*N+1, center=True, min_periods=1) .max().astype(bool)] NB. if the seconds are already sorted, you can skip the .sort_values(). Output: prize seconds 0 5.5 840 1 14.5 1080 2 14.6 1380 3 21.0 1620 7 38.0 1740 8 39.0 2040 9 40.0 2100 10 50.0 2160 Intermediates: prize seconds closest isin(keep) rolling.max 0 5.5 840 NaN False True 1 14.5 1080 1140.0 True True 2 14.6 1380 NaN False True 3 21.0 1620 NaN False True 4 23.0 1650 NaN False False 5 24.0 1680 NaN False False 6 26.0 1700 NaN False False 7 38.0 1740 NaN False True 8 39.0 2040 NaN False True 9 40.0 2100 2100.0 True True 10 50.0 2160 NaN False True
6
8
78,065,399
2024-2-27
https://stackoverflow.com/questions/78065399/select-consecutive-elements-that-satisfy-a-certain-condition-as-separate-arrays
Given an array of values, I want to select multiple sequences of consecutive elements that satisfy a condition. The result should be one array for each sequence of elements. For example I have an array containing both negative and positive numbers. I need to select sequences of negative numbers, with each sequence in a separate array. Here is an example : import numpy as np # Example data values = np.array([1, 2, 3, -1, -2, -3, 4, 5, 6, -7, -8, 10]) mask = values < 0 Here is how the output should look like : Array 1: [-1 -2 -3] Array 2: [-7 -8] I tried to do it using numpy.split, but it became more like spaghetti code. I was wondering is there a Pythonic way to do this task?
clustering groups of negative numbers If you only want to group the chunks of negative values, irrespective of their relative values, then simply compute a second mask to identify the starts of each negative chunk: mask = values < 0 mask2 = np.r_[True, np.diff(mask)] out = np.array_split(values[mask], np.nonzero(mask2[mask])[0][1:]) Output: [array([-1, -7, -3]), array([-7, -8])] clustering groups of negative numbers if they are successive in value If you want to cluster the negative values that also a successively decreasing (e.g. -1, -2, -3, -5, -6 would form 2 clusters: -1, -2, -3 and -5, -6. Then I would use pandas: convert to Series identify the negative values create a grouper for consecutive negative values ((~mask).cumsum()) add the index (or a range) to group the successive groupby import pandas as pd s = pd.Series(values) # mask to keep negative values mask = s<0 # group consecutive negatives group1 = (~mask).cumsum() # group successive decrementing values s2 = s+s.index group2 = s2.ne(s2.shift()).cumsum() out = [g.to_numpy() for k, g in s[mask].groupby([group1, group2])] Output: [array([-1, -2, -3]), array([-7, -8]), array([-7])] Intermediates: s mask s2 group1 group2 0 1 False 1 1 1 1 2 False 3 2 2 2 3 False 5 3 3 3 -1 True 2 3 4 # out 1 4 -2 True 2 3 4 # 5 -3 True 2 3 4 # 6 4 False 10 4 5 7 5 False 12 5 6 8 6 False 14 6 7 9 -7 True 2 6 8 # out 2 10 -8 True 2 6 8 # 11 -7 True 4 6 9 # out 3 12 10 False 22 7 10
4
4
78,065,410
2024-2-27
https://stackoverflow.com/questions/78065410/how-to-get-real-part-of-a-sympy-expression
In Sympy I want to get the real part of an expression, but unable to separate the real part. The steps I have done: from sympy import * a, b = symbols('a b') z1 = a + b*I z2 = simplify(expand(z1**3)) The z2 output is: a**3 + 3*I*a**2*b - 3*a*b**2 - I*b**3 When I try to get the real part, I get this: >>> re(z2) re(a)**3 - 3*re(a)*im(a)**2 + 3*re(b)**2*im(b) - 3*re(a*b**2) - im(b)**3 - 3*im(a**2*b) I expected a**3 - 3*a*b**2.
Sympy doesn't realize that you want a and b to be real variables, so its output assumes they are complex. If a and b are real, then terms containing im(a) and/or im(b) are 0 since there are no imaginary parts to a or b. To fix this, you should define your variables with your assumptions, i.e. tell sympy that a and b are real. Then the result will be as expected. from sympy import * a, b = symbols("a b", real=True) z1 = a + b*I z2 = simplify(expand(z1**3)) print(re(z2)) Output: a**3 - 3*a*b**2
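As a related convenience (not required for the fix above), SymPy can also hand back both parts at once via as_real_imag():

from sympy import symbols, I, expand

a, b = symbols("a b", real=True)
z2 = expand((a + b*I)**3)
re_part, im_part = z2.as_real_imag()
print(re_part)  # a**3 - 3*a*b**2
print(im_part)  # 3*a**2*b - b**3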
2
3
78,063,360
2024-2-26
https://stackoverflow.com/questions/78063360/pandas-modified-rolling-average
Below is my outlier detection code in pandas. I am doing rolling over window of 15, what I want is to do over window of 5 where this window is based on the day of week of the centered date i.e. if the centre is Monday take 2 backwards monday and 2 forward monday. Rolling doesn't have any support for this. How to do? import pandas as pd import numpy as np np.random.seed(0) dates = pd.date_range(start='2022-01-01', end='2023-12-31', freq='D') prices1 = np.random.randint(10, 100, size=len(dates)) prices2 = np.random.randint(20, 120, size=len(dates)).astype(float) data = {'Date': dates, 'Price1': prices1, 'Price2': prices2} df = pd.DataFrame(data) r = df.Price1.rolling(window=15, center=True) price_up, price_low = r.mean() + 2 * r.std(), r.mean() - 2 * r.std() mask_upper = df['Price1'] > price_up mask_lower = df['Price1'] < price_low df.loc[mask_upper, 'Price1'] = r.mean() df.loc[mask_lower, 'Price1'] = r.mean()
One option using a groupby.rolling and the dayofweek as grouper to ensure only using the identical days in the rolling: r = (df.set_index('Date') .groupby(df['Date'].dt.dayofweek.values) # avoid index alignment .rolling(f'{5*7}D', center=True) ['Price1'] ) avg = r.mean().set_axis(df.index) # restore correct index std = r.std().set_axis(df.index) price_up, price_low = avg + 2 * std, avg - 2 * std mask_upper = df['Price1'] > price_up mask_lower = df['Price1'] < price_low df.loc[mask_upper, 'Price1'] = avg df.loc[mask_lower, 'Price1'] = avg Example output: Date Price1 Price2 0 2022-01-01 54.0 86.0 1 2022-01-02 57.0 117.0 2 2022-01-03 74.0 32.0 3 2022-01-04 77.0 35.0 4 2022-01-05 77.0 53.0 .. ... ... ... 725 2023-12-27 44.0 37.0 726 2023-12-28 60.0 65.0 727 2023-12-29 30.0 116.0 728 2023-12-30 53.0 82.0 729 2023-12-31 10.0 42.0 [730 rows x 3 columns]
2
5
78,050,400
2024-2-23
https://stackoverflow.com/questions/78050400/list-of-tuples-combine-two-dataframe-columns-if-first-tuple-elements-match
I have two separate lists each of n elements, with one being ID numbers and the second being pandas dataframes. I will define them as id and dfs. The dataframes in dfs have the same format with columns A, B, and C but with different numerical values. I have zipped the two lists together as such: df_groups = list(zip(id, dfs)) With this list I am trying to locate any instances where id is the same and then add columns A and B together for those dataframes and merge into one dataframe. For an example, I will use the following: id = ['a','b','c','a','d'] The corresponding dataframes I have may look as such: dfs[0] A B C 0 0 1 0 0 1 dfs[1] A B C 0 1 1 0 1 1 dfs[2] A B C 1 1 1 1 2 1 dfs[3] A B C 5 6 1 11 8 1 dfs[4] A B C 3 5 2 3 18 2 Then, as can be seen above, id[0] is the same as id[3]. As such, I want to create a new list of tuples such that dfs[0]['A'] and dfs[3]['A'] are added together (similarly also for column B and the duplicate id value is dropped. Thus, it should look like this: id = ['a','b','c','d'] dfs[0] A B C 5 6 1 11 8 1 dfs[1] A B C 0 1 1 0 1 1 dfs[2] A B C 1 1 1 1 2 1 dfs[3] A B C 3 5 2 3 18 2 The following worked for removing the duplicate values of id but I am not quite sure how to go about the column operations on dfs. I will of course need to add the columns A and B first before running the below: from itertools import groupby df_groups_b = ([next(b) for a, b in groupby(df_groups, lambda x: x[0])]) Any assistance would be much appreciated, thank you! Edit: to clarify, column C from the original dataframe would be retained as is. In the case where the first tuple elements match, column C from the corresponding dataframes will be identical.
You can write a custom summarising function to go through all dataframes in a group and return a sum. I don't entirely love this solution, because in converts C to float, but you can play with it further if needed: from itertools import groupby import pandas as pd ids = ['a','b','c','a','d'] dfs = [ pd.DataFrame({"A": [0], "B": [0], "C": [0]}), pd.DataFrame({"A": [1], "B": [1], "C": [1]}), pd.DataFrame({"A": [2], "B": [2], "C": [2]}), pd.DataFrame({"A": [3], "B": [3], "C": [3]}), pd.DataFrame({"A": [4], "B": [4], "C": [4]}) ] df_groups = list(zip(ids, dfs)) df_groups = sorted(df_groups, key=lambda x: x[0]) def summarise_cols(group, cols_to_summarise=["A", "B"]): _, df = next(group) for _, next_df in group: df = df.add(next_df[cols_to_summarise], fill_value=0) return df df_groups_b = ([summarise_cols(group) for _, group in groupby(df_groups, lambda x: x[0])]) for d in df_groups_b: print(d) Output: A B C 0 3 3 0.0 A B C 0 1 1 1 A B C 0 2 2 2 A B C 0 4 4 4 UPD: just noticed your update Edit: to clarify, column C from the original dataframe would be retained as is. In the case where the first tuple elements match, column C from the corresponding dataframes will be identical. You can then do this instead and the column types will be unchanged def summarise_cols(group, cols_to_keep=["C"]): _, df = next(group) for _, next_df in group: df += next_df df[cols_to_keep] = next_df[cols_to_keep] return df UPD2: Returning group_id and unified dataframe together as a tuple. def summarise_cols(group, cols_to_keep=["C"]): group_id, df = next(group) for _, next_df in group: df += next_df df[cols_to_keep] = next_df[cols_to_keep] return group_id, df
3
4
78,057,189
2024-2-25
https://stackoverflow.com/questions/78057189/how-do-i-change-the-mv-status-of-a-gekko-variable-after-solving-for-steady-state
I am trying to find the value for an inlet flow rate that keeps a gravity drained tank at a required level. After the solution is found using IMODE 3, I want to simulate the system at this state for a period of time using IMODE 4. Below is the code I am using: from gekko import GEKKO import numpy as np m = GEKKO(remote = False) F_in = m.MV(value = 1, name= 'F_in') level = m.CV(value = 1, name = 'level') F_out = m.Var(value = 1, name = 'F_out') level.STATUS = 1 level.SPHI = 6 level.SPLO = 4 m.Equation(level.dt() == (F_in - F_out)/5) m.Equation(F_out == 2*(level)**0.5) # Find the steady state. F_in.STATUS = 1 m.options.IMODE = 3 m.options.SOLVER = 3 m.solve(disp = False) print(F_in.value) # Run at steady state for 5 time units. F_in.STATUS = 0 # Turn STATUS off for 0 DOF. m.time = np.linspace(0,5,10) m.options.IMODE = 4 m.solve(disp = False) print(level.value) However, the code produces the error stating that IMODE 4 requires 0 DOF. Does changing the F_in status to 0 not result in 0 DOF? Below is the full error message: --------------------------------------------------------------------------- Exception Traceback (most recent call last) Cell In[4], line 27 25 m.time = np.linspace(0,5,10) 26 m.options.IMODE = 4 ---> 27 m.solve(disp = False) 28 print(level.value) File ~\VRMEng\VRMEng\Lib\site-packages\gekko\gekko.py:2140, in GEKKO.solve(self, disp, debug, GUI, **kwargs) 2138 print("Error:", errs) 2139 if (debug >= 1) and record_error: -> 2140 raise Exception(apm_error) 2142 else: #solve on APM server 2143 def send_if_exists(extension): Exception: @error: Degrees of Freedom * Error: DOF must be zero for this mode STOPPING... Any help will be appreciated greatly!
Set m.options.SPECS = 0 to ignore the calculated / fixed specifications when changing modes (see documentation). Here is a complete script that solves successfully. from gekko import GEKKO import numpy as np m = GEKKO(remote = False) F_in = m.MV(value = 1, name= 'F_in') level = m.CV(value = 1, name = 'level') F_out = m.Var(value = 1, name = 'F_out') level.STATUS = 1 level.SPHI = 6 level.SPLO = 4 m.Equation(level.dt() == (F_in - F_out)/5) m.Equation(F_out == 2*(level)**0.5) # Find the steady state. F_in.STATUS = 1 m.options.IMODE = 3 m.options.SOLVER = 3 m.solve(disp = False) print(level.value) print(F_in.value) print(F_out.value) # Run at steady state for 5 time units. m.options.SPECS = 0 F_in.value = F_in.value.value F_in.STATUS = 0 # Turn STATUS off for 0 DOF. m.time = np.linspace(0,5,10) m.options.IMODE = 4 m.solve(disp = True) print(level.value) print(F_in.value) print(F_out.value) A few more details: The t0 file in the run directory m._path stores whether a variable should be calculated or fixed. The rto.t0 file is generated with IMODE=3 and stores the steady-state values and the fixed/calculated information. Setting SPECS=0 ignores the specifications from the t0 file and uses the default specifications. For IMODE=4 the STATUS for each of the FVs and MVs are over-written to turn off. Set F_in.value = F_in.value.value to retain the F_in value from the steady state simulation.
3
1
78,058,636
2024-2-26
https://stackoverflow.com/questions/78058636/condaverificationerror-when-installing-pytorch
I am trying to install pytorch using either of below command and I got a lot of error. I am using windows CPU only. conda install pytorch::pytorch or conda install pytorch torchvision torchaudio cpuonly -c pytorch some of the error are CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Lib/site-packages/torchgen/static_runtime/gen_static_runtime_ops.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Lib/site-packages/torchgen/static_runtime/generator.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Lib/site-packages/torchgen/utils.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Lib/site-packages/torchgen/yaml_utils.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Scripts/convert-caffe2-to-onnx-script.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Scripts/convert-onnx-to-caffe2-script.py' specified in the package manifest cannot be found. CondaVerificationError: The package for pytorch located at C:\Users\test\miniconda3\pkgs\pytorch-2.2.1-py3.10_cpu_0 appears to be corrupted. The path 'Scripts/torchrun-script.py' specified in the package manifest cannot be found.
One of the cached packages seems to be corrupt. You can try cleaning the package cache and then reinstalling pytorch: conda clean -p
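A possible full sequence, assuming the same CPU-only install command from the question (adjust channels and packages to your setup):

conda clean -p        # remove unused cached packages (the corrupted pytorch dir, if it is not linked into an env)
conda clean --all     # optional: also clear tarballs and the index cache
conda install pytorch torchvision torchaudio cpuonly -c pytorch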
3
4
78,059,445
2024-2-26
https://stackoverflow.com/questions/78059445/how-to-add-plus-sign-for-positive-number-when-using-to-excel
This is my DataFrame: import pandas as pd import numpy as np df = pd.DataFrame( { 'a': [2, 2, 2, -4, np.nan, np.nan, 4, -3, 2, -2, -6], 'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6] } ) I want to add a plus sign for positive numbers only for column a when exporting to Excel. For example 1 becomes +1. Note that I have NaN values as well. I want them to be empty cells in Excel similar to the default behavior of Pandas when dealing with NaN values in to_excel. I have tried many solutions. This is one of them. But it didn't work in Excel. df.style.format({'a': '{:+g}'}).to_excel(r'df.xlsx', sheet_name='xx', index=False)
What about using a callable: (df.style.format({'a': lambda x: f'{x:+g}' if np.isfinite(x) else ''}) .to_excel(r'/tmp/df.xlsx', sheet_name='xx', index=False) ) Alternative callable: lambda x: '' if np.isnan(x) else f'{x:+g}' Output (in jupyter):
3
4
78,056,934
2024-2-25
https://stackoverflow.com/questions/78056934/pandas-or-polars-find-index-of-previous-element-larger-than-current-one
Suppose my data looks like this: data = { 'value': [1,9,6,7,3, 2,4,5,1,9] } For each row, I would like to find the row number of the latest previous element larger than the current one. So, my expected output is: [None, 0, 1, 2, 1, 1, 3, 4, 1, 0] the first element 1 has no previous element, so I want None in the result the next element 9 is at least as large than all its previous elements, so I want 0 in the result the next element 6, has its previous element 9 which is larger than it. The distance between them is 1. So, I want 1 in the result here. I'm aware that I can do this in a loop in Python (or in C / Rust if I write an extension). My question: is it possible to solve this using entirely dataframe operations? pandas or Polars, either is fine. But only dataframe operations. So, none of the following please: apply map_elements map_rows iter_rows Python for loops which loop over the rows and extract elements one-by-one from the dataframes
This iterates only on the range of rows that this should look. It doesn't loop over the rows themselves in python. If your initial bound_range covers all the cases then it won't ever actually do a loop. lb=0 bound_range=3 df=df.with_columns(z=pl.lit(None, dtype=pl.UInt64)) while True: df=df.with_columns( z=pl.when(pl.col('value')>=pl.col('value').shift(1).cum_max()) .then(pl.lit(0, dtype=pl.UInt64)) .when(pl.col('z').is_null()) .then( pl.coalesce( pl.when(pl.col('value')<pl.col('value').shift(x)) .then(pl.lit(x, dtype=pl.UInt64)) for x in range(lb, lb+bound_range) ) ) .otherwise(pl.col('z')) ) if df[1:]['z'].drop_nulls().shape[0]==df.shape[0]-1: break lb+=bound_range For this example I set bound_range to 3 to make sure it loops at least once. I ran this with 1M random integers between 0 and 9(inclusive) and I set the bound_range to 50 and it took under 2 sec. You could make this smarter in between loops by checking things more explicitly but the best approach there would be data dependent.
18
4
78,057,705
2024-2-25
https://stackoverflow.com/questions/78057705/is-there-a-simple-way-to-access-a-value-in-a-polars-struct
A basic use of value_counts in a DataFrame is to obtain the count of a specific value. If I have a df such as: DataFrame({"color": ["red", "blue", "red", "green", "blue", "blue"]}) then if I want the count for color = 'red' in Pandas I could simply use: df['color'].value_counts()['red'] which is clear and obvious. In Polars, value_counts() produces a DF with a column of struct values: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ color β”‚ β”‚ --- β”‚ β”‚ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ {"green",1} β”‚ β”‚ {"blue",3} β”‚ β”‚ {"red",2} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ which could be split into a DF with separate columns using counts = df.select(pl.col("color").value_counts()).unnest('color') and then the required value can be obtained using counts.select(pl.col('count').filter(pl.col('color') == 'red')).item() similarly group_by('color').len() could be used instead of value_counts This all seems rather complicated for such a frequent requirement. Is there a simpler way of extracting a single count value using Polars and more generally to access struct values.
If you don't want to use unnest, you can do df.select(pl.col("color").value_counts()).filter( pl.col("color").struct["color"] == "red" ).item()["count"] which gives 2 There are tradeoffs - not having an Index opens up more doors for scalability, but admittedly some operations become more verbose
3
2
78,054,752
2024-2-25
https://stackoverflow.com/questions/78054752/fastapi-sqlmodel-pydantic-not-serializing-datetime
My understanding is that SQLModel uses pydantic BaseModel (checked the __mro__). Why is it that the below fails the type comparison? (error provided) class SomeModel(SQLModel,table=True): timestamp: datetime id: UUID = Field(default_factory=lambda: str(uuid4()), primary_key=True) def test_some_model(): m=SomeModel(**{'timestamp':datetime.utcnow().isoformat()}) assert type(m.timestamp)==datetime E AssertionError: assert <class 'str'> == datetime FastAPI/SQLModel experts explain yourselves :D . Note - I tried using Field with default factories etc... as well.
Adding validate_assignment fixes the issue. Picked this solution up from https://github.com/tiangolo/sqlmodel/issues/52 after more than a couple days of breaking my head. Hopefully this helps other poor souls. class SomeModel(SQLModel,table=True): class Config: validate_assignment = True timestamp: datetime age:int id: UUID = Field(default_factory=lambda: str(uuid4()), primary_key=True)
2
3
78,052,641
2024-2-24
https://stackoverflow.com/questions/78052641/tkinter-output-of-function-as-table-and-graph
I have two functions that produce two types of output: one is a data frame table and the other is a plot of that dataframe. All functions take one file as input, which we load from the previous tkinter function. I would like to dynamically select a function from the radio box, next, once we select a particular function it should show a blank box that will ask for input from a user and based on input the function will get executed. from tkinter import ttk import tkinter as tk root = tk.Tk() root.geometry("800x600") root.config(bg='light blue') root.title('Dashboard') frame = tk.Frame(root,bg='light blue') frame.pack(padx=10,pady=10) file_label = tk.Label(frame,text='Input_File') file_label.grid(row=0,column=0) def function_1(Input_File,Var1,Var2): # Table and Graph will be based on the above input. def function_2(Input_File,Var3,Var4,Var5,Var6): # Table and Graph will be based on the above input. root.mainloop() Once we select function_1 from the radio box, then immediately we should get two boxes next to the radio box which will ask for "Var1" and "Var2". If we select function_2 then we should get four boxes next to the radio box which will ask for "Var3", "Var4", "Var5", and "Var6". Once all the input is received we should process the respective function and below we should get two outputs: first the "dataframe" table produced from the function and then the plot, again produced from the function. Please note "Input_File" in both functions is the same as the Input_File from file_label.
You have asked for the complete algorithm, in fact the complete program except for how to manipulate your data. I don't know whether this kind of question is allowed on Stack Overflow. I've given explanations as and where required. I couldn't check the code as you haven't provided the data and the functions you want to execute, so there could be errors; correct them accordingly at run time. Note: Add widgets for the column headers (i.e. the pandas index) separately as I forgot to add them. Use the Label and grid methods. Customize the widgets according to your requirements. Edit: I had made the mistake of executing the function run_functions by default, assuming that it would continue to run once all the inputs are given. Later I realized that it won't, as run_functions will just run once and stop. Now I've edited the code: the inputs var1, var2, var3, var4, var5 and var6 are bound to '<Leave>' and run_functions, so after leaving each entry, run_functions will run and check the status. Some may wonder why all the entries need to be bound rather than only the last one; this is because the user may sometimes fill the last Entry first and then the previous ones.

from tkinter import *
from tkinter import ttk
import tkinter as tk
# import the file dialog module to select a file
from tkinter import filedialog as fd

# define functions before the design part
def getFile():
    # declare the variable got_file global to use it in other functions
    global got_file
    # assuming the file type is an image file; change the file type accordingly
    fileType = [('image', '*.png'), ('image', '*.jpg'), ('image', '*.jpeg')]
    # open a popup box to select the file
    got_file = fd.askopenfilename(filetypes=fileType)
    # show the selected file in the widget selected_file
    selected_file.configure(text=got_file)

# function to disable the input entries until a function is chosen
def disable_vars():
    for x in [var1, var2, var3, var4, var5, var6]:
        x['state'] = DISABLED

def enable_vars(vars):
    for x in vars:
        x['state'] = NORMAL

def run_functions(event=None):
    if radio_var.get() == 'function 1':
        if all(x.get() != '' for x in [var1, var2]) and selected_file.cget('text') != '':
            function_1()
    elif radio_var.get() == 'function 2':
        if all(x.get() != '' for x in [var3, var4, var5, var6]) and selected_file.cget('text') != '':
            function_2()

def function_1():
    # declare got_file global as we use it here
    global got_file
    '''
    Your code that builds the dataframe df goes here
    '''
    # convert the dataframe to a matrix and show it as a grid of Entry widgets
    matrix = df.to_numpy()
    for i, x in enumerate(matrix):
        for j, y in enumerate(x):
            cell = Entry(output_frame, width=20, fg='blue', borderwidth=1, relief="solid")
            cell.grid(row=i, column=j)
            cell.insert(END, y)

def function_2():
    # use the same method followed in function_1
    pass

root = Tk()
root.geometry("800x900")
root.config(bg='light blue')
root.title('Dashboard')
# run disable_vars once the app becomes idle after opening
root.after_idle(disable_vars)

frame = tk.Frame(root, bg='light blue')
frame.pack(padx=10, pady=10)

# assuming all the input widgets live in the frame
var1 = Entry(frame)
var1.pack()
var2 = Entry(frame)
var2.pack()

radio_var = tk.StringVar()
fun1 = Radiobutton(frame, text='function 1', variable=radio_var, value='function 1',
                   command=lambda: enable_vars([var1, var2]))
fun1.pack(side='top')

var3 = Entry(frame)
var3.pack()
var4 = Entry(frame)
var4.pack()
var5 = Entry(frame)
var5.pack()
var6 = Entry(frame)
var6.pack()

fun2 = Radiobutton(frame, text='function 2', variable=radio_var, value='function 2',
                   command=lambda: enable_vars([var3, var4, var5, var6]))
fun2.pack(side='top')

file_label = Label(frame, text='Input_File')
file_label.pack(side='top')
selected_file = Label(frame, text='')
selected_file.pack(side='top')
file_btn = Button(frame, width=25, text='select the File', command=getFile)
file_btn.pack(side='top', padx=10, pady=0)

output_frame = Frame(root)
output_frame.pack()

for x in [var1, var2, var3, var4, var5, var6]:
    x.bind('<Leave>', run_functions)

root.mainloop()
2
2
78,056,851
2024-2-25
https://stackoverflow.com/questions/78056851/how-to-leave-out-all-nans-a-pandas-dataframe
I'm upgrading some code from python2 to python3 and the modern pandas version (I now have pandas 2.0.3 and numpy version 1.26.4). My dataframe is: N NE E SE S SW W NW H12 NaN NaN NaN NaN NaN NaN NaN NaN H13 0.7 NaN NaN NaN NaN NaN 1.0 1.4 H14 0.3 NaN NaN NaN NaN NaN 0.8 1.1 H15 NaN NaN NaN NaN NaN NaN NaN NaN H16 NaN NaN NaN NaN NaN NaN NaN NaN And I want to leave out all the NaNs such that I get a new df: N W NW H13 0.7 1.0 1.4 H14 0.3 0.8 1.1 My old code used df.any(1) or something very similar and it worked, but now I get an error message NDFrame._add_numeric_operations.<locals>.any() takes 1 positional argument but 2 were given Maybe there is a better way to do it, I'm not fussed about using any().
any is now keyword-only, you have to use axis=1. Although you have not shown your original code, the following should work: out = df.loc[df.any(axis=1), df.any()] Output: N W NW H13 0.7 1.0 1.4 H14 0.3 0.8 1.1
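An alternative that produces the same result for this data, using dropna instead of any (a sketch; it assumes "leave out all NaNs" means dropping all-NaN rows and then all-NaN columns):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"N": [np.nan, 0.7, 0.3, np.nan, np.nan],
     "NE": np.nan, "E": np.nan, "SE": np.nan, "S": np.nan, "SW": np.nan,
     "W": [np.nan, 1.0, 0.8, np.nan, np.nan],
     "NW": [np.nan, 1.4, 1.1, np.nan, np.nan]},
    index=["H12", "H13", "H14", "H15", "H16"],
)

# drop rows that are entirely NaN, then columns that are entirely NaN
out = df.dropna(how="all").dropna(axis=1, how="all")
print(out)
#        N    W   NW
# H13  0.7  1.0  1.4
# H14  0.3  0.8  1.1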
2
2
78,057,088
2024-2-25
https://stackoverflow.com/questions/78057088/how-to-keep-a-numeric-sequence-in-a-pandas-dataframe-column-that-only-increase
I have a DataFrame like this: import pandas as pd df = pd.DataFrame({ 'minutes':[1,2,3,4,5,6,7,8,9,10], 'score1': [0,0,1,1,0,0,0,1,1,2], 'score2': [0,1,1,1,1,2,1,1,2,2], 'sum_score': [0,1,2,2,1,2,1,2,3,4] }) Output: minutes score1 score2 sum_score 0 1 0 0 0 1 2 0 1 1 2 3 1 1 2 3 4 1 1 2 4 5 0 1 1 5 6 0 2 2 6 7 0 1 1 7 8 1 1 2 8 9 1 2 3 9 10 2 2 4 The columns score1 and score2 are not sequentially increasing. There are rows in which its values decrease. When loop through each row, I would like to remove all previous rows where values are higher than the current one considering columns score1 and score2. I tried to apply masks: m1 = (df['score1'] > df['score1'].shift(-1)) m2 = (df['score2'] > df['score2'].shift(-1)) df = df.loc[~(m1 & m2)] But it does not work as expected. I expect to get this: Output: minutes score1 score2 sum_score 0 1 0 0 0 1 2 0 1 1 4 5 0 1 1 6 7 0 1 1 7 8 1 1 2 8 9 1 2 3 9 10 2 2 4
Try: x = df[["score1", "score2"]][::-1].cummin()[::-1] mask = (df.score1 == x.score1) & (df.score2 == x.score2) print(df[mask]) Prints: minutes score1 score2 sum_score 0 1 0 0 0 1 2 0 1 1 4 5 0 1 1 6 7 0 1 1 7 8 1 1 2 8 9 1 2 3 9 10 2 2 4
2
3
78,054,482
2024-2-25
https://stackoverflow.com/questions/78054482/optimal-way-of-counting-the-number-of-non-overlapping-pairs-given-a-list-of-inte
I'm trying to count the number of non-overlapping pairs given a list of intervals. For example: [(1, 8), (7, 9), (3, 10), (7, 12), (11, 13), (13, 14), (9, 15)] There are 8 pairs: ((1, 8), (11, 13)) ((1, 8), (13, 14)) ((1, 8), (9, 15)) ((7, 9), (11, 13)) ((7, 9), (13, 14)) ((3, 10), (11, 13)) ((3, 10), (13, 14)) ((7, 12), (13, 14)) I can't seem to figure out a better solution other than to just brute force it by comparing everything with virtually everything else, resulting in a O(n^2) solution. def count_non_overlapping_pairs(intervals): intervals = list(set(intervals)) # deduplicate any intervals intervals.sort(key=lambda x: x[1]) pairs = 0 for i in range(len(intervals)): for j in range(i+1, len(intervals)): if intervals[i][1] < intervals[j][0]: pairs += 1 return pairs Is there a more optimal solution than this?
Sort the intervals, once by start point, once by end point. Now, given an interval, perform a binary search using the start point in the intervals sorted by end point. The index you get tells you how many non-overlapping intervals come before: All intervals that end before your interval starts are non-overlapping. Do the same for the end point: Do a binary search in the array of intervals sorted by start point. All intervals that start after your interval ends are non-overlapping. All other intervals either start before your interval ends, but after it has started, or end after your interval starts, but start before it. Do this for every interval, sum the results. Make sure to halve, to not count intervals twice. This looks as follows: Overall you get O(n log n): Two sorts, O(n) times two O(log n) binary searches. Now observe that half of this is not even needed - if A and B are two non-overlapping intervals, it suffices if B counts the intervals before it, including A; A doesn't need to count the intervals after it. This lets us simplify the solution further; you just need to sort the end points to be able to count the intervals before an interval, and of course we now don't need to halve the resulting sum anymore: # Counting the intervals before suffices def count_non_overlapping_pairs(intervals): ends = sorted(interval[1] for interval in intervals) def count_before(interval): return bisect_left(ends, interval[0]) return sum(map(count_before, intervals)) (Symmetrically, you could also just count the intervals after an interval.)
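A sketch of the two-sided version described above — for each interval, count the intervals ending before it and the intervals starting after it, then halve — using the question's strict non-overlap test (end < start):

from bisect import bisect_left, bisect_right

def count_non_overlapping_pairs(intervals):
    intervals = list(set(intervals))                 # deduplicate, as in the question
    starts = sorted(i[0] for i in intervals)
    ends = sorted(i[1] for i in intervals)
    n = len(intervals)
    total = 0
    for lo, hi in intervals:
        before = bisect_left(ends, lo)               # intervals ending strictly before this one starts
        after = n - bisect_right(starts, hi)         # intervals starting strictly after this one ends
        total += before + after
    return total // 2                                # every pair was counted from both sides

print(count_non_overlapping_pairs(
    [(1, 8), (7, 9), (3, 10), (7, 12), (11, 13), (13, 14), (9, 15)]))  # 8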
3
1
78,054,656
2024-2-25
https://stackoverflow.com/questions/78054656/what-is-the-difference-between-the-various-spline-interpolators-from-scipy
My goal is to calculate a smooth trajectory passing through a set of points as shown below. I have had a look at the available methods of scipy.interpolate, and also the scipy user guide. However, the choice of the right method is not quite clear to me. What is the difference between BSpline, splprep, splrep, UnivariateSpline, interp1d, make_interp_spline and CubicSpline? According to the documentation, all functions compute some polynomial spline function through a sequence of input points. Which function should I select? A) What is the difference between a cubic spline and a B-spline of order 3? B) What is the difference between splprep() and splrep()? C) Why is interp1d() going to be deprecated? I know I'm asking different questions here, but I don't see the point in splitting the questions up as I assume the answers will be related. All in all, I find that the scipy.interpolate module is organized a little confusingly. I thought maybe I'm not the only one who has this impression, which is why I'm reaching out to SO. Here's how far I've come. Below is some code that runs the different spline functions for some test data. It creates the figure below. I've read somewhere: "All cubic splines can be represented as B-splines of order 3", and that "it's a matter of perspective which representation is more convenient". But why do I end up in different results if I use CublicSpline and any of the B-spline methods? I found that it is possible to construct a BSpline object from the output of splprep and splrep, such that the results of splev() and BSpline() are equivalent. That way, we can convert the output of splrep and splprep into the object-oriented interface of scipy.interpolate(). splrep, UnivariateSpline and make_interp_spline lead to the same result. In my 2D data, I need to apply the interpolation independently per data dimension that it works. The convenience function interp1d yields the same result, too. Related SO-question: Link splprep and splrep seem unrelated. Even if I compute splprep twice for every data axis independently (see p0_new), the result looks differently. I see in the docs that splprep computes the B-spline representation of an n-D curve. But should splrep and splprep not be related? splprep, splrep and UnivariateSpline have a smoothing parameter, while other interpolators have no such parameter. splrep pairs with UnivariateSpline. However, I couldn't find a matching object-oriented counterpart for splprep. Is there one? 
import numpy as np from scipy.interpolate import * import matplotlib.pyplot as plt points = [[0, 0], [4, 4], [-1, 9], [-4, -1], [-1, -9], [4, -4], [0, 0]] points = np.asarray(points) n = 50 ts = np.linspace(0, 1, len(points)) ts_new = np.linspace(0, 1, n) (t0_0,c0_0,k0_0), u = splprep(points[:,[0]].T, s=0, k=3) (t0_1,c0_1,k0_1), u = splprep(points[:,[1]].T, s=0, k=3) p0_new = np.r_[np.asarray(splev(ts_new, (t0_0,c0_0,k0_0))), np.asarray(splev(ts_new, (t0_1,c0_1,k0_1))), ].T # splprep/splev (t1,c1,k1), u = splprep(points.T, s=0, k=3) p1_new = splev(ts_new, (t1,c1,k1)) # BSpline from splprep p2_new = BSpline(t1, np.asarray(c1).T, k=k1)(ts_new) # splrep/splev (per dimension) (t3_0,c3_0,k3_0) = splrep(ts, points[:,0].T, s=0, k=3) (t3_1,c3_1,k3_1) = splrep(ts, points[:,1].T, s=0, k=3) p3_new = np.c_[splev(ts_new, (t3_0,c3_0,k3_0)), splev(ts_new, (t3_1,c3_1,k3_1)), ] # Bspline from splrep p4_new = np.c_[BSpline(t3_0, np.asarray(c3_0), k=k3_0)(ts_new), BSpline(t3_1, np.asarray(c3_1), k=k3_1)(ts_new), ] # UnivariateSpline p5_new = np.c_[UnivariateSpline(ts, points[:,0], s=0, k=3)(ts_new), UnivariateSpline(ts, points[:,1], s=0, k=3)(ts_new),] # make_interp_spline p6_new = make_interp_spline(ts, points, k=3)(ts_new) # CubicSpline p7_new = CubicSpline(ts, points, bc_type="clamped")(ts_new) # interp1d p8_new = interp1d(ts, points.T, kind="cubic")(ts_new).T fig, ax = plt.subplots() ax.plot(*points.T, "o-", label="Original points") ax.plot(*p1_new, "o-", label="1: splprep/splev") ax.plot(*p2_new.T, "x-", label="1: BSpline from splprep") ax.plot(*p3_new.T, "o-", label="2: splrep/splev") ax.plot(*p4_new.T, "x-", label="2: BSpline from splrep") ax.plot(*p5_new.T, "*-", label="2: UnivariateSpline") ax.plot(*p6_new.T, "+-", label="2: make_interp_spline") ax.plot(*p7_new.T, "x-", label="3: CubicSpline") #ax.plot(*p8_new.T, "k+-", label="3: interp1d") #ax.plot(*p0_new.T, "k+-", label="3: CubicSpline") ax.set_aspect("equal") ax.grid("on") ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.show()
What is the difference between BSpline, splprep, splrep, UnivariateSpline, interp1d, make_interp_spline and CubicSpline? a BSpline object represents a spline function in terms of knots t, coefficients c and degree k. It does not know anything about the data x, y. It is a low-level implementation object, on par with PPoly --- think of PPoly vs BSpline as a change of basis. make_interp_spline constructs a spline (BSpline) which passes through the data x and y -- it's an interpolating spline, so that spl(x) == y. It can handle batches of data: if y is two-dimensional and the second dimension has length n, make_interp_spline(x, y) represents a stack of functions y_1(x), y_2(x), ..., y_n(x). CubicSpline is similar but more limited: it only allows cubic splines, k=3. If you ever need to switch from cubics to e.g. linear interpolation, with make_interp_spline you just change k=3 to k=1 at the call site. OTOH, CubicSpline has some features BSpline does not: e.g. root-finding. It is a PPoly instance, not BSpline. interp1d is not deprecated, it is legacy. A useful part of interp1d is nearest/previous/next modes. The rest just delegates to make_interp_spline, so better use that directly. splrep constructs a smoothing spline function given data. The amount of smoothing is controlled by the s parameter, with s=0 being interpolation. It returns not a BSpline but a tck tuple (knots, coefficients and degree). splprep constructs a smoothing spline curve in a parametric form. That is (x(u), y(u)) not y(x). Also returns tck-tuples. UnivariateSpline is equivalent to splrep. Note that scipy.interpolate has grown organically. With at least four generations of developers over almost quarter century. And the FITPACK library which splrep, splprep and UnivariateSpline delegate to is from 1980s. CubicSpline, BSpline and make_interp_spline are not using FITPACK. So in new code I personally would recommend make_interp_spline + BSpline; if you only need cubics, CubicSpline works, too. But why do I end up in different results if I use CublicSpline and any of the B-spline methods? Boundary conditions. Both make_interp_spline and CubicSpline default to not-a-knot, and you can change it. splrep et al only use not-a-knot.
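A small check of the boundary-condition point, using the question's data (a sketch): with matching boundary conditions the constructors agree, and it is the clamped CubicSpline in the question that differs.

import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

points = np.asarray([[0, 0], [4, 4], [-1, 9], [-4, -1], [-1, -9], [4, -4], [0, 0]], dtype=float)
ts = np.linspace(0, 1, len(points))
ts_new = np.linspace(0, 1, 50)

bspl = make_interp_spline(ts, points, k=3)          # not-a-knot by default
cs_notaknot = CubicSpline(ts, points)               # not-a-knot by default
cs_clamped = CubicSpline(ts, points, bc_type="clamped")

print(np.allclose(bspl(ts_new), cs_notaknot(ts_new)))   # True: same interpolant
print(np.allclose(bspl(ts_new), cs_clamped(ts_new)))    # False: different boundary conditions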
3
3
78,054,851
2024-2-25
https://stackoverflow.com/questions/78054851/when-using-random-uniforma-b-is-b-inclusive-or-exclusive
I was exploring the random module and can't find a correct answer to whether b is inclusive or exclusive in random.uniform(a, b). In a code like random.uniform(0, 1), some answers say 1 is included while others say 1 is never produced. What's the correct answer?
The documentation for random.uniform(a, b) suggests that you may encounter cases where the upper limit b is included in the range due to how floating-point numbers are represented, but it's not guaranteed: Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a. The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random(). So sometimes the upper limit b may be included, and sometimes it may not be.
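A small experiment illustrating the rounding effect (my own sketch, not part of the documentation): for ranges like (0, 1) the upper bound is practically never produced because random() < 1, but when b - a is tiny relative to a, the expression a + (b - a) * random() can round up to exactly b.

import random

a, b = 1.0, 1.0 + 2**-52          # b is the next representable float after 1.0
hits = sum(random.uniform(a, b) == b for _ in range(10_000))
print(hits)                        # roughly half the draws round up to exactly b

hits_unit = sum(random.uniform(0.0, 1.0) == 1.0 for _ in range(10_000))
print(hits_unit)                   # almost certainly 0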
3
5
78,054,093
2024-2-24
https://stackoverflow.com/questions/78054093/why-does-it-take-longer-to-sum-integers-when-theyre-object-attributes
I am learning python after having some java in school back in the day. If there are 2 lists, one with single digit integers and one with objects with a bunch of attributes, they appear to be a different speed to step through them? I was under the understanding that the objects are a memory pointer. Example: int_list = [1] * 1000 obj_list = [CustObject(0,1,2,3)] * 1000 total = 0 for i in int_list: total += i print(total) total = 0 for o in obj_list: total += o.int_variable print(total) I ran this with some tile objects that have about 20 attributes, one of them being an image. Obviously the tile list creation takes a lot more time, which is expected. But stepping through them is slower as seen from the output: print("starting test") ints = [[1 for i in range(100)] for j in range(80)] print("created ints") tiles = [[Tile(i, j, 30, 1) for i in range(100)] for j in range(80)] print("created tiles") # step through the ints start_time = time.perf_counter() total = 0 for i in range(0, len(ints)): for j in range(0, len(ints[i])): total += ints[i][j] end_time = time.perf_counter() elapsed_time = end_time - start_time print("Ints Elapsed time: ", elapsed_time, total) # step through the Tiles start_time = time.perf_counter() total = 0 for i in range(0, len(tiles)): for j in range(0, len(tiles[i])): total += tiles[i][j].tile_type end_time = time.perf_counter() elapsed_time = end_time - start_time print("Tiles Elapsed time: ", elapsed_time, total) Output starting test created ints created tiles Ints Elapsed time: 0.0011542000574991107 8000 Tiles Elapsed time: 0.002249700017273426 8000
Accessing ints[i][j] needs: lookup for ints variable lookup for i & j variables array boundary check Accessing tiles[i][j].tile_type needs the above plus name lookup for tile_type, so it's slower. Anyway, instead of trying to figure out why all this is slower, you can optimize this code a lot by forgetting about indices. This code: for i in range(0, len(tiles)): for j in range(0, len(tiles[i])): total += tiles[i][j].tile_type becomes (after directly iterating on the objects, not the indices) for ta in tiles: for e in ta: total += e.tile_type or even faster probably with sum and double flat comprehension: total = sum(e.tile_type for ta in tiles for e in ta)
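A rough timing sketch of the difference (the Tile class below is a hypothetical stand-in for the question's class, which isn't shown):

import time

class Tile:
    # hypothetical stand-in: a few attributes, one of them numeric
    def __init__(self, i, j, size, tile_type):
        self.i, self.j, self.size, self.tile_type = i, j, size, tile_type

ints = [[1 for i in range(100)] for j in range(80)]
tiles = [[Tile(i, j, 30, 1) for i in range(100)] for j in range(80)]

def total_by_index(grid):
    # index-based loops, as in the question
    total = 0
    for i in range(len(grid)):
        for j in range(len(grid[i])):
            total += grid[i][j]
    return total

def total_by_attribute(grid):
    # direct iteration plus attribute access, flattened into one sum()
    return sum(e.tile_type for row in grid for e in row)

for label, fn, data in [("ints, indexed loops", total_by_index, ints),
                        ("tiles, flat sum()", total_by_attribute, tiles)]:
    t0 = time.perf_counter()
    result = fn(data)
    print(label, result, round(time.perf_counter() - t0, 6))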
2
4
78,053,222
2024-2-24
https://stackoverflow.com/questions/78053222/optimising-array-addition-y-x-rgba
I have two arrays A, B that are both of the shape (42, 28, 4) where: 42 : y_dim 28 : x_dim 4 : RGBA ## I'm on MacBook Air M1 2020 16Gb btw I want to combine these through a similar process to this: def add(A, B): X = A.shape[1] Y = A.shape[0] alpha = A[..., 3] / 255 B[..., :3] = blend(B[..., :3], A[..., :3], alpha.reshape(Y, X, 1)) return B def blend(c1, c2, alpha): return np.asarray((c1 + np.multiply(c2, alpha))/(np.ones(alpha.shape) + alpha), dtype='uint8') But currently this is a bit too slow (~20 ms with 250 images overlaid on top of a base array [1]) for my liking and if you have any ways to improve this (preferably with 8bit alpha support) I'd be happy to know. [1]: start = time.time() for obj in l: # len(l) == 250 _slice = np.index_exp[obj.y * 42:(obj.y+1) * 42, obj.x * 28 : (obj.x+1) * 28, :] self.pixels[_slice] = add(obj.array, self.pixels[_slice]) stop = time.time() >>> stop - start # ~20ms I've semi-tried the following: # cv2.addWeighted() in add() ## doesn't work because it has one alpha for the whole image, ## but I want to have individual alpha control for each pixel B = cv.addWeighted(A, 0.5, B, 0.5, 0) # np.vectorize blend() and use in add() ## way too slow because as the docs mention it's basically just a for-loop B[..., :3] = np.vectorize(blend)(A[..., :3], B[..., :3], A[..., 3] / 255) # changed blend() to the following def blend(a, b, alpha): if alpha == 0: return b elif alpha == 1: return a return (b + a * alpha) / (1 + alpha) # moved the blend()-stuff to add() ## doesn't combine properly; too dark with alpha np.multiply(A, alpha.reshape(Y, X, 1)) + np.multiply(B, 1 - alpha.reshape(Y, X, 1)) I've also tried some bitwise stuff but my monkey brain can't comprehend it properly. I am on an M1 Mac so if you have any experience with metalcompute and Python please include any thoughts about that! Any input is welcomed, thanks in advance! Answer: Christoph Rackwitz posted a very elaborate and well constructed answer so if you are curious about similar things check out the accepted answer below. To add to this I ran Christoph's code on my M1 computer to show the results. 2500 calls (numpy) = 0.0807 2500 calls (other) = 0.0833 2500 calls (Christoph's) = 0.0037
First, your blending equation looks wrong. Even with alpha being 255, you'd only get a 50:50 mix. You'd want something like B = B * (1-alpha) + A * alpha or rearranged B += (A-B) * alpha but that expression has teeth (integer subtraction will have overflow/underflow). You appear to be drawing "sprites" in a grid on the display of a game. Just use 2D graphics libraries for this, or even 3D (OpenGL?). GPUs are very good at drawing textured quads with transparency. Even without involvement of a GPU, the right library will contain optimized primitives and you don't have to write any of that yourself. The cost of uploading textures (to GPU memory) is a one-time cost, assuming the sprites don't change appearance. If they change for every frame, that may be noticeable. Since I originally proposed to use numba and previous answers have only gotten a factor of 2 and then 10 out of it, I'll show a few more points to be aware. A previous answer proposes a function with this in its inner loop: B[i, j, :3] = (B[i, j, :3] + A[i, j, :3] * alpha[i, j]) / (1 + alpha[i, j]) Seems sensible because it treats the entire pixel at once but this still uses numpy, which is comparatively slow because it's written to be generic (various dtypes and shapes). Numba has no such requirement. It'll happily generate the specific machine code for this specific situation (uint8, fixed number of dimensions, fixed iterations of inner loop). If you unroll this one more time, removing the numpy calls from the inner loop, you'll get the true speed of numba: for k in range(3): B[i, j, k] = (B[i, j, k] + A[i, j, k] * alpha[i, j]) / (1 + alpha[i, j]) Timing results (relative speed important, my computer is old): 2500 calls (numpy) = 0.3845 2500 calls (other) = 0.5039 2500 calls (mine) = 0.0901 You can keep going, pulling constants out of each loop, in case LLVM (what numba uses) doesn't notice the optimization. Cache locality plays a huge role as well. Instead of calculating an entire alpha array once, just calculate the per-pixel alpha in the second to inner loop: You should look at the dtype of alpha = .../255, which is float64. Use float32 instead of float64 because that's usually faster. alpha = A[i, j, 3] * np.float32(1/255) for k in range(3): B[i, j, k] = (B[i, j, k] + A[i, j, k] * alpha) / (1 + alpha) And now let's do integer arithmetic, which is even faster than floating point, and with correct blending: alphai = np.int32(A[i, j, 3]) # uint8 needs to be widened to avoid overflow/underflow for k in range(3): a = A[i, j, k] b = B[i, j, k] c = (a * alphai + b * (255 - alphai) + 128) >> 8 # fixed point arithmetic, may be off by 1 B[i, j, k] = c Finally: 2500 calls (numpy) = 0.3904 2500 calls (other) = 0.5211 2500 calls (mine) = 0.0118 So that's a speedup of 33. And that's on my old computer that doesn't have any of the latest vector instruction sets. And instead of calling such a function 250 times, you could call it one time with all the data. That may open up the possibility of parallelism. Numba lets you do it, but it's not trivial... Since your game display is a grid, you can collect all the sprites for each grid cell (ordered of course). Then you can render each cell in parallel.
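For reference, a self-contained packaging of the answer's integer inner loop as one complete function (a sketch; it assumes both images are uint8 RGBA and keeps the same /256 fixed-point approximation, so results may be off by one):

import numpy as np
from numba import njit

@njit(cache=True)
def blend_over_inplace(A, B):
    # alpha-composite A over B in place; A and B are (H, W, 4) uint8 arrays
    H, W = A.shape[0], A.shape[1]
    for i in range(H):
        for j in range(W):
            alphai = np.int32(A[i, j, 3])      # widen uint8 to avoid overflow/underflow
            inv = 255 - alphai
            for k in range(3):
                a = np.int32(A[i, j, k])
                b = np.int32(B[i, j, k])
                # fixed-point blend; >> 8 approximates / 255
                B[i, j, k] = (a * alphai + b * inv + 128) >> 8
    return B

A = np.random.randint(0, 256, (42, 28, 4), dtype=np.uint8)
B = np.random.randint(0, 256, (42, 28, 4), dtype=np.uint8)
blend_over_inplace(A, B)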
4
3
78,052,071
2024-2-24
https://stackoverflow.com/questions/78052071/pyspark-count-over-a-window-with-reset
I have a PySpark DataFrame which looks like this: df = spark.createDataFrame( data=[ (1, "GERMANY", "20230606", True), (2, "GERMANY", "20230620", False), (3, "GERMANY", "20230627", True), (4, "GERMANY", "20230705", True), (5, "GERMANY", "20230714", False), (6, "GERMANY", "20230715", True), ], schema=["ID", "COUNTRY", "DATE", "FLAG"] ) df.show() +---+-------+--------+-----+ | ID|COUNTRY| DATE| FLAG| +---+-------+--------+-----+ | 1|GERMANY|20230606| true| | 2|GERMANY|20230620|false| | 3|GERMANY|20230627| true| | 4|GERMANY|20230705| true| | 5|GERMANY|20230714|false| | 6|GERMANY|20230715| true| +---+-------+--------+-----+ The DataFrame has more countries. I want to create a new column COUNT_WITH_RESET following the logic: If FLAG=False, then COUNT_WITH_RESET=0. If FLAG=True, then COUNT_WITH_RESET should count the number of rows starting from the previous date where FLAG=False for that specific country. This should be the output for the example above. +---+-------+--------+-----+----------------+ | ID|COUNTRY| DATE| FLAG|COUNT_WITH_RESET| +---+-------+--------+-----+----------------+ | 1|GERMANY|20230606| true| 1| | 2|GERMANY|20230620|false| 0| | 3|GERMANY|20230627| true| 1| | 4|GERMANY|20230705| true| 2| | 5|GERMANY|20230714|false| 0| | 6|GERMANY|20230715| true| 1| +---+-------+--------+-----+----------------+ I have tried with row_number() over a window but I can't manage to reset the count. I have also tried with .rowsBetween(Window.unboundedPreceding, Window.currentRow). Here's my approach: from pyspark.sql.window import Window import pyspark.sql.functions as F window_reset = Window.partitionBy("COUNTRY").orderBy("DATE") df_with_reset = ( df .withColumn("COUNT_WITH_RESET", F.when(~F.col("FLAG"), 0) .otherwise(F.row_number().over(window_reset))) ) df_with_reset.show() +---+-------+--------+-----+----------------+ | ID|COUNTRY| DATE| FLAG|COUNT_WITH_RESET| +---+-------+--------+-----+----------------+ | 1|GERMANY|20230606| true| 1| | 2|GERMANY|20230620|false| 0| | 3|GERMANY|20230627| true| 3| | 4|GERMANY|20230705| true| 4| | 5|GERMANY|20230714|false| 0| | 6|GERMANY|20230715| true| 6| +---+-------+--------+-----+----------------+ This is obviously wrong as my window is partitioning only by country, but am I on the right track? Is there a specific built-in function in PySpark to achieve this? Do I need a UDF? Any help would be appreciated.
Partition the dataframe by COUNTRY then calculate the cumulative sum over the inverted FLAG column to assign group numbers in order to distinguish between different blocks of rows which start with false W1 = Window.partitionBy('COUNTRY').orderBy('DATE') df1 = df.withColumn('blocks', F.sum((~F.col('FLAG')).cast('long')).over(W1)) df1.show() # +---+-------+--------+-----+------+ # | ID|COUNTRY| DATE| FLAG|blocks| # +---+-------+--------+-----+------+ # | 1|GERMANY|20230606| true| 0| # | 2|GERMANY|20230620|false| 1| # | 3|GERMANY|20230627| true| 1| # | 4|GERMANY|20230705| true| 1| # | 5|GERMANY|20230714|false| 2| # | 6|GERMANY|20230715| true| 2| # +---+-------+--------+-----+------+ Partition the dataframe by COUNTRY along with blocks then calculate row number over the ordered partition to create sequential counter W2 = Window.partitionBy('COUNTRY', 'blocks').orderBy('DATE') df1 = df1.withColumn('COUNT_WITH_RESET', F.row_number().over(W2) - 1) df1.show() # +---+-------+--------+-----+------+----------------+ # | ID|COUNTRY| DATE| FLAG|blocks|COUNT_WITH_RESET| # +---+-------+--------+-----+------+----------------+ # | 1|GERMANY|20230606| true| 0| 0| # | 2|GERMANY|20230620|false| 1| 0| # | 3|GERMANY|20230627| true| 1| 1| # | 4|GERMANY|20230705| true| 1| 2| # | 5|GERMANY|20230714|false| 2| 0| # | 6|GERMANY|20230715| true| 2| 1| # +---+-------+--------+-----+------+----------------+
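The same two steps can be chained and the helper column dropped at the end; a compact, self-contained version of the approach above:

from pyspark.sql import SparkSession
from pyspark.sql.window import Window
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, "GERMANY", "20230606", True), (2, "GERMANY", "20230620", False),
     (3, "GERMANY", "20230627", True), (4, "GERMANY", "20230705", True),
     (5, "GERMANY", "20230714", False), (6, "GERMANY", "20230715", True)],
    ["ID", "COUNTRY", "DATE", "FLAG"],
)

W1 = Window.partitionBy("COUNTRY").orderBy("DATE")
W2 = Window.partitionBy("COUNTRY", "blocks").orderBy("DATE")

result = (
    df
    .withColumn("blocks", F.sum((~F.col("FLAG")).cast("long")).over(W1))
    .withColumn("COUNT_WITH_RESET", F.row_number().over(W2) - 1)
    .drop("blocks")
)
result.show()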
2
1
78,051,606
2024-2-24
https://stackoverflow.com/questions/78051606/cannot-reproduce-a-bifurcation-diagram-from-article
I have been reading this article A Simple Guide for Plotting a Proper Bifurcation Diagram and I want to reproduce the following figure (Fig. 10, p. 2150011-7): I have created this procedure that works fine with other classical bifurcation diagrams: def model(x, r): return 8.821 * np.tanh(1.487 * x) - r * np.tanh(0.2223 * x) def diagram(r, x=0.1, n=1200, m=200): xs = [] for i in range(n): x = model(x, r) if i >= n - m: xs.append(x) return np.array(xs).T rlin = np.arange(5, 30, 0.01) xlin = np.linspace(-0.1, 0.1, 2) clin = np.linspace(0., 1., xlin.size) colors = plt.get_cmap("cool")(clin) fig, axe = plt.subplots(figsize=(8, 6)) for x0, color in zip(xlin, colors): x = diagram(rlin, x=x0, n=600, m=100) _ = axe.plot(rlin, x, ',', color=color) axe.set_title("Bifurcation diagram") axe.set_xlabel("Parameter, $r$") axe.set_ylabel("Serie term, $x_n(r)$") axe.grid() But for this system, it renders: Which looks similar to some extent but is not at the same scale and when r > 17.5 has a totally different behaviour than presented in the article. I am wondering why this difference happens. What have I missed?
I think that the journal reviewers for that article should have been a little more careful. The model equation is based on an earlier article (Baghdadi et al., 2015 - I had to go through my workplace's institutional access to get at it) the original value of B is 5.821, not 8.821 (see Fig 2 in the 2015 article). def model(x, r): return 5.821 * np.tanh(1.487 * x) - r * np.tanh(0.2223 * x) This renders as
2
2
78,050,439
2024-2-23
https://stackoverflow.com/questions/78050439/selenium-undetected-chromedriver-with-different-chrome-versions
I have the following code which works fine on a computer where chrome 122 is installed currently - import undetected_chromedriver as uc driver = uc.Chrome() driver.get('https://ballzy.eu/en/men/sport/shoes') But when I run this code on a computer where a different chrome-version is installed like 120, I get the following error - (selenium) C:\DEV\Fiverr\ORDER\stefamn_jan669_jankore_janxx2\Ballzy>python test3.py Traceback (most recent call last): File "C:\DEV\Fiverr\ORDER\stefamn_jan669_jankore_janxx2\Ballzy\test3.py", line 2, in <module> driver = uc.Chrome(version_main=122) File "C:\DEV\.venv\selenium\lib\site-packages\undetected_chromedriver\__init__.py", line 466, in __init__ super(Chrome, self).__init__( File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__ super().__init__( File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 61, in __init__ super().__init__(command_executor=executor, options=options) File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 208, in __init__ self.start_session(capabilities) File "C:\DEV\.venv\selenium\lib\site-packages\undetected_chromedriver\__init__.py", line 724, in start_session super(selenium.webdriver.chrome.webdriver.WebDriver, self).start_session( File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 292, in start_session response = self.execute(Command.NEW_SESSION, caps)["value"] File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 347, in execute self.error_handler.check_response(response) File "C:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:50596 from session not created: This version of ChromeDriver only supports Chrome version 122 Current browser version is 120.0.6099.200 Is it somehow possible that the correct chromedriver is automatically downloaded? (when I use the normal selenium driver - I just use the following driver-definition, and it works fine with that on several computers) srv=Service() driver = webdriver.Chrome (service=srv, options=options) How can i do this also with the undetected chromedriver so it is working on differnt chrome version installations on different computers?
With undetected-chromedriver, you have to update the version_main arg of uc.Chrome() if you don't want to use the latest available driver version to match Chrome. Eg: import undetected_chromedriver as uc driver = uc.Chrome(version_main=120) Alternatively, you can use https://github.com/seleniumbase/SeleniumBase with UC Mode, which is basically undetected-chromedriver with some improvements, such as automatically getting a version of chromedriver that is compatible with your version of Chrome. Set uc=True to activate UC Mode in a SeleniumBase script. Here's an example: from seleniumbase import Driver driver = Driver(uc=True) There's some documentation about it here: https://github.com/seleniumbase/SeleniumBase/issues/2213 Here's a larger script: from seleniumbase import Driver driver = Driver(uc=True) driver.get("https://nowsecure.nl/#relax") # DO MORE STUFF driver.quit() For customizing the reconnect time when loading a URL that has bot detection services, you could swap the get() line with something like this: driver.uc_open_with_reconnect("https://nowsecure.nl/#relax", reconnect_time=5) (The reconnect_time is the wait time before chromedriver reconnects with Chrome. Before the time is up, a website cannot detect Selenium, but it also means that Selenium can't yet issue commands to Chrome.)
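If you would rather not hard-code version_main, one workaround is to retry with the major version parsed from the mismatch error (a sketch based on the error text shown in the question's traceback — treat the exact message shape as an assumption):

import re
import undetected_chromedriver as uc
from selenium.common.exceptions import WebDriverException

def make_driver():
    try:
        # try the newest available driver first
        return uc.Chrome()
    except WebDriverException as e:
        # the message contains e.g. "Current browser version is 120.0.6099.200"
        m = re.search(r"Current browser version is (\d+)\.", str(e))
        if not m:
            raise
        return uc.Chrome(version_main=int(m.group(1)))

driver = make_driver()
driver.get("https://ballzy.eu/en/men/sport/shoes")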
2
2
78,050,000
2024-2-23
https://stackoverflow.com/questions/78050000/how-to-assign-each-item-in-a-list-an-equal-amount-of-items-from-another-list-in
Let's say I have a list of people ['foo','bar','baz'] and a list of items ['hat','bag','ball','bat','shoe','stick','pie','phone'] and I want to randomly assign each person an equal amount of items, like so { 'foo':['hat','bat','stick'], 'bar':['bag','shoe','phone'], 'baz':['ball','pie'] } I think itertools is the job for this, but I couldn't seem to find the right function as most itertools functions seem to just work on one object. EDIT: Order does not matter. I just want to randomly assign each person an equal amount of items.
Another solution, using itertools.cycle: import random from itertools import cycle persons = ["foo", "bar", "baz"] items = ["hat", "bag", "ball", "bat", "shoe", "stick", "pie", "phone"] random.shuffle(items) out = {} for p, i in zip(cycle(persons), items): out.setdefault(p, []).append(i) print(out) Prints (for example): { "foo": ["phone", "pie", "bat"], "bar": ["bag", "stick", "hat"], "baz": ["shoe", "ball"], } If there could be fewer items than persons and each person should have key in output dictionary you can use: import random from itertools import cycle persons = ["foo", "bar", "baz"] items = ["hat", "bag", "ball", "bat", "shoe", "stick", "pie", "phone"] random.shuffle(items) random.shuffle(persons) # to randomize who gets fewest items out = {p: [] for p in persons} for lst, i in zip(cycle(out.values()), items): lst.append(i) print(out)
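A shorter variant of the same idea, assuming the order of items within each person's list doesn't matter: shuffle once and hand out every k-th item.

import random

persons = ["foo", "bar", "baz"]
items = ["hat", "bag", "ball", "bat", "shoe", "stick", "pie", "phone"]

random.shuffle(items)
out = {p: items[i::len(persons)] for i, p in enumerate(persons)}
print(out)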
2
4
78,050,189
2024-2-23
https://stackoverflow.com/questions/78050189/df-query-in-polars
What is the equivalent of pandas.DataFrame.query in polars? import pandas as pd import numpy as np data= { 'A':["Polars","Python","Pandas"], 'B' :[23000,24000,26000], 'C':['30days', '40days',np.nan], } df = pd.DataFrame(data) A B C 0 Polars 23000 30days 1 Python 24000 40days 2 Pandas 26000 NaN Now, defining a variable item item=24000 df.query("B>= @item") A B C 1 Python 24000 40days 2 Pandas 26000 NaN Now, using polars: import polars as pl df = pl.DataFrame(data) item=24000 df.query("B>= @item") I get: AttributeError: 'DataFrame' object has no attribute 'query' My wild guess is df.filter(), but the syntax does not look the same, and filter looks equivalent to df.loc[] as well?
indeed, filter is all you need df.filter(pl.col("B") >= item) clean, simple, predictable, without hacks
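For reference, filter also covers the multi-condition cases that query strings are typically used for; a small runnable sketch with the question's data:

import polars as pl

df = pl.DataFrame({
    "A": ["Polars", "Python", "Pandas"],
    "B": [23000, 24000, 26000],
    "C": ["30days", "40days", None],
})
item = 24000

print(df.filter(pl.col("B") >= item))                                # single predicate
print(df.filter(pl.col("B") >= item, pl.col("C").is_not_null()))     # multiple predicates are AND-ed
print(df.filter((pl.col("B") >= item) | (pl.col("A") == "Polars")))  # use | and parentheses for OR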
5
10
78,049,793
2024-2-23
https://stackoverflow.com/questions/78049793/inverse-value-with-polars-dataframe
Say, I have a dataset in the following DataFrame: df=pl.DataFrame({ 'x':['a','a','b','b'], 'y':['b','c','c','a'], 'value':[3,5,1,4] }) df shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ y ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════║ β”‚ a ┆ b ┆ 3 β”‚ β”‚ a ┆ c ┆ 5 β”‚ β”‚ b ┆ c ┆ 1 β”‚ β”‚ b ┆ a ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Now, I'd like to add a column to this dataframe that would contain the inverse value. I define an inverse value as inverse(x, y) == value(y, x). E.g., from the example above, inverse (a, b) == value(b, a) == 4. If value(y, x) didn't exist then inverse(x, y) would be given the default value of 0. In other words, I'd like to add an inverse column such as I'd end up with something like this: shape: (4, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ y ┆ value ┆ inverse β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════β•ͺ═════════║ β”‚ a ┆ b ┆ 3 ┆ 4 β”‚ β”‚ a ┆ c ┆ 5 ┆ 0 β”‚ β”‚ b ┆ c ┆ 1 ┆ 0 β”‚ β”‚ b ┆ a ┆ 4 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Is this doable in an easy and optimal way? Preferably with expressions? Thanks a lot in advance.
You could join it to itself with aliases and then do fill_null(0). df.join( df.select( y="x", x="y", inverse="value" ), on=["x","y"], how="left" ).fill_null(0) shape: (4, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ x ┆ y ┆ value ┆ inverse β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═══════β•ͺ═════════║ β”‚ a ┆ b ┆ 3 ┆ 4 β”‚ β”‚ a ┆ c ┆ 5 ┆ 0 β”‚ β”‚ b ┆ c ┆ 1 ┆ 0 β”‚ β”‚ b ┆ a ┆ 4 ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
4
78,048,750
2024-2-23
https://stackoverflow.com/questions/78048750/in-polars-select-all-column-ending-with-pattern-and-add-new-columns-without-patt
I have the following dataframe: import polars as pl import numpy as np df = pl.DataFrame({ "nrs": [1, 2, 3, None, 5], "names_A0": ["foo", "ham", "spam", "egg", None], "random_A0": np.random.rand(5), "A_A2": [True, True, False, False, False], }) digit = 0 For each column X whose name ends with the string suf =f'_A{digit}', I want to add an identical column to df, whose name is the same as X, but without suf. In the example, I need to add columns names and random to the original dataframe df, whose content is identical to that of columns names_A0 and random_A0 respectively.
You can you Polars Selectors along with some basic strings operations to accomplish this. Depending on what you how you expect this problem to evolve, you can jump straight to regular expressions, or use polars.selectors.ends_with/string.removesuffix String Suffix Operations This approach uses - polars.selectors.ends_with # find columns ending with string - string.removesuffix # remove suffix from end of string translating to import polars as pl from polars import selectors as cs import numpy as np import re from functools import partial df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names_A0": ["foo", "ham", "spam", "egg", None], "random_A0": np.random.rand(5), "A_A2": [True, True, False, False, False], } ) digit = 0 suffix = f'_A{digit}' print( # keep original A0 columns df.with_columns( cs.ends_with(suffix).name.map(lambda s: s.removesuffix(suffix)) ), # shape: (5, 6) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ nrs ┆ names_A0 ┆ random_A0 ┆ A_A2 ┆ names ┆ random β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ str ┆ f64 ┆ bool ┆ str ┆ f64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════════β•ͺ═══════════β•ͺ═══════β•ͺ═══════β•ͺ══════════║ # β”‚ 1 ┆ foo ┆ 0.713324 ┆ true ┆ foo ┆ 0.713324 β”‚ # β”‚ 2 ┆ ham ┆ 0.980031 ┆ true ┆ ham ┆ 0.980031 β”‚ # β”‚ 3 ┆ spam ┆ 0.242768 ┆ false ┆ spam ┆ 0.242768 β”‚ # β”‚ null ┆ egg ┆ 0.528783 ┆ false ┆ egg ┆ 0.528783 β”‚ # β”‚ 5 ┆ null ┆ 0.583206 ┆ false ┆ null ┆ 0.583206 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ # drop original A0 columns df.select( ~cs.ends_with(suffix), cs.ends_with(suffix).name.map(lambda s: s.removesuffix(suffix)) ), # shape: (5, 4) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ nrs ┆ A_A2 ┆ names ┆ random β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ bool ┆ str ┆ f64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ══════════║ # β”‚ 1 ┆ true ┆ foo ┆ 0.713324 β”‚ # β”‚ 2 ┆ true ┆ ham ┆ 0.980031 β”‚ # β”‚ 3 ┆ false ┆ spam ┆ 0.242768 β”‚ # β”‚ null ┆ false ┆ egg ┆ 0.528783 β”‚ # β”‚ 5 ┆ false ┆ null ┆ 0.583206 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sep='\n\n' ) Regular Expressions Alternatively you can use regular expressions to detect a range of suffix patterns - polars.selectors.matches # find columns matching a pattern - re.sub # substitute in string based on pattern We will need to ensure our pattern ends with a '$' to anchor the pattern to the end of the string. 
import polars as pl from polars import selectors as cs import numpy as np import re from functools import partial df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names_A0": ["foo", "ham", "spam", "egg", None], "random_A0": np.random.rand(5), "A_A2": [True, True, False, False, False], } ) digit=0 suffix = fr'_A{digit}$' print( # keep original A0 columns df.with_columns( cs.matches(suffix).name.map(lambda s: re.sub(suffix, '', s)) ), # shape: (5, 6) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ nrs ┆ names_A0 ┆ random_A0 ┆ A_A2 ┆ names ┆ random β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ str ┆ f64 ┆ bool ┆ str ┆ f64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ══════════β•ͺ═══════════β•ͺ═══════β•ͺ═══════β•ͺ══════════║ # β”‚ 1 ┆ foo ┆ 0.713324 ┆ true ┆ foo ┆ 0.713324 β”‚ # β”‚ 2 ┆ ham ┆ 0.980031 ┆ true ┆ ham ┆ 0.980031 β”‚ # β”‚ 3 ┆ spam ┆ 0.242768 ┆ false ┆ spam ┆ 0.242768 β”‚ # β”‚ null ┆ egg ┆ 0.528783 ┆ false ┆ egg ┆ 0.528783 β”‚ # β”‚ 5 ┆ null ┆ 0.583206 ┆ false ┆ null ┆ 0.583206 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ # drop original A0 columns df.select( ~cs.matches(suffix), cs.matches(suffix).name.map(lambda s: re.sub(suffix, '', s)) ), # shape: (5, 4) # β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ nrs ┆ A_A2 ┆ names ┆ random β”‚ # β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ # β”‚ i64 ┆ bool ┆ str ┆ f64 β”‚ # β•žβ•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ══════════║ # β”‚ 1 ┆ true ┆ foo ┆ 0.713324 β”‚ # β”‚ 2 ┆ true ┆ ham ┆ 0.980031 β”‚ # β”‚ 3 ┆ false ┆ spam ┆ 0.242768 β”‚ # β”‚ null ┆ false ┆ egg ┆ 0.528783 β”‚ # β”‚ 5 ┆ false ┆ null ┆ 0.583206 β”‚ # β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sep='\n\n' )
4
3
78,048,307
2024-2-23
https://stackoverflow.com/questions/78048307/gradient-color-on-broken-barh-plot-in-matplotlib
I am trying to reproduce this: using this code: import matplotlib.pyplot as plt import numpy as np import pandas as pd from matplotlib.lines import Line2D colors = ["#CC5A43","#5375D4"]*3 data = { "year": [2004, 2022, 2004, 2022, 2004, 2022], "countries" : ["Sweden", "Sweden", "Denmark", "Denmark", "Norway", "Norway"], "sites": [13,15,4,10,5,8] } df= pd.DataFrame(data) df['pct_change'] = df.groupby('countries', sort=True)['sites'].apply( lambda x: x.pct_change()).to_numpy()*-1 df['ctry_code'] = df.countries.astype(str).str[:2].astype(str).str.upper() df = df.sort_values(['countries','year'], ascending=True ).reset_index(drop=True) df['diff'] = df.groupby(['countries'])['sites'].diff() df['diff'].fillna(df.sites, inplace=True) countries = df.countries.unique() code = df.ctry_code.unique() pct_change = df.pct_change x_coord = df.groupby('countries')['diff'].apply(lambda x: x.values) #convert the columns into numpy 2D array fig, ax = plt.subplots(figsize=(6,5), facecolor = "#FFFFFF") import matplotlib.cm # use a colormap cmap = plt.cm.RdBu for i, (color, x_c, country) in enumerate(zip(colors,x_coord, countries)): ax.broken_barh([x_c], (i-0.2,0.4),facecolors=cmap(0.7),alpha= 0.2) ax.scatter( df.sites, df.countries, marker="D", s=300, color = colors) ax.set(xlim=[0, 16], ylim=[-1, 3]) ax.xaxis.set_ticks(np.arange(0,20,5),labels = [0,5,10,15]) ax.tick_params(axis="x", which="major",length=0,labelsize=14,colors= '#C8C9C9') # Major ticks every 20, minor ticks every 5 major_ticks = np.arange(0, 16, 1) ax.set_xticks(major_ticks) ax.grid(which='major', axis='x', linestyle='-', alpha=0.4, color = "#C8C9C9") ax.set_axisbelow(True) plt.yticks([]) plt.box(False) #add legend labels = ['2004','2022'] colors = ["#5375D4","#CC5A43",] lines = [Line2D([0], [0], color=c, marker='D',linestyle='', markersize=12,) for c in colors] leg = ax.get_legend() plt.figlegend( lines,labels, labelcolor="#C8C9C9", bbox_to_anchor=(0.3, -0.1), loc="lower center", ncols = 2,frameon=False, fontsize= 12) Which produces this: My question is, how do i do the gradient on the broken_barh plots? I have tried to do a cmap on facecolors, but no luck. I have also tried using ax.barh and ax.plot but still stuck :(
Refer to the answer of How to fill matplotlib bars with a gradient? I have made some changes of the function gradientbars in which I add some comments. Also I have changed the size of the scatter plot for making it consistent with broken_barh and also for the legend which is disappeared. Here are the full codes: import matplotlib.pyplot as plt import numpy as np import pandas as pd from matplotlib.lines import Line2D from matplotlib.colors import LinearSegmentedColormap def gradientbars(bars, ax): colors = [(1, 0, 0), (0, 0, 1), ] # first color is red, last is blue cm = LinearSegmentedColormap.from_list( "Custom", colors, N=256) # Conver to color map mat = np.indices((10,10))[1] # define a matrix for imshow lim = ax.get_xlim()+ax.get_ylim() for bar in bars: bar.set_zorder(1) bar.set_facecolor("none") # get the coordinates of the rectangle x_all = bar.get_paths()[0].vertices[:, 0] y_all = bar.get_paths()[0].vertices[:, 1] # Get the first coordinate (lower left corner) x,y = x_all[0], y_all[0] # Get the height and width of the rectangle h, w = max(y_all) - min(y_all), max(x_all) - min(x_all) # Show the colormap ax.imshow(mat, extent=[x,x+w,y,y+h], aspect="auto", zorder=0, cmap=cm, alpha=0.2) ax.axis(lim) colors = ["#CC5A43","#5375D4"]*3 data = { "year": [2004, 2022, 2004, 2022, 2004, 2022], "countries" : ["Sweden", "Sweden", "Denmark", "Denmark", "Norway", "Norway"], "sites": [13,15,4,10,5,8] } df= pd.DataFrame(data) df['pct_change'] = df.groupby('countries', sort=True)['sites'].apply( lambda x: x.pct_change()).to_numpy()*-1 df['ctry_code'] = df.countries.astype(str).str[:2].astype(str).str.upper() df = df.sort_values(['countries','year'], ascending=True ).reset_index(drop=True) df['diff'] = df.groupby(['countries'])['sites'].diff() df['diff'].fillna(df.sites, inplace=True) countries = df.countries.unique() code = df.ctry_code.unique() pct_change = df.pct_change x_coord = df.groupby('countries')['diff'].apply(lambda x: x.values) #convert the columns into numpy 2D array fig, ax = plt.subplots(figsize=(6,5), facecolor = "#FFFFFF") # use a colormap cmap = plt.cm.RdBu bars = [] for i, (color, x_c, country) in enumerate(zip(colors,x_coord, countries)): bar = ax.broken_barh([x_c], (i-0.2,0.4),facecolors=cmap(0.7),alpha= 0.2) bars.append(bar) gradientbars(bars, ax) ax.scatter( df.sites, df.countries, marker="D", s=3000, color = colors) ax.set(xlim=[0, 16], ylim=[-1, 3]) ax.xaxis.set_ticks(np.arange(0,20,5),labels = [0,5,10,15]) ax.tick_params(axis="x", which="major",length=0,labelsize=14,colors= '#C8C9C9') # Major ticks every 20, minor ticks every 5 major_ticks = np.arange(0, 16, 1) ax.set_xticks(major_ticks) ax.grid(which='major', axis='x', linestyle='-', alpha=0.4, color = "#C8C9C9") ax.set_axisbelow(True) plt.yticks([]) plt.box(False) labels = ['2004','2022'] colors = ["#5375D4","#CC5A43",] lines = [Line2D([0], [0], color=c, marker='D',linestyle='', markersize=25,) for c in colors] leg = ax.get_legend() plt.figlegend( lines,labels, labelcolor="#C8C9C9", bbox_to_anchor=(0.3, 0.00), loc="lower center", ncols = 2,frameon=False, fontsize= 25) Which gives you the result:
2
1
78,043,310
2024-2-22
https://stackoverflow.com/questions/78043310/abstract-property-instantiating-a-partially-implemented-class
I read this very nice documentation on abstract class abc.ABC. It has this example (shortened by me for the purpose of this question): import abc class Base(abc.ABC): @property @abc.abstractmethod def value(self): return 'Should never reach here' @value.setter @abc.abstractmethod def value(self, new_value): return class PartialImplementation(Base): # setter not defined/overridden @property def value(self): return 'Read-only' To my biggest surprise, PartialImplementation can be instantiated though it only overrides the getter: >>> PartialImplementation() <__main__.PartialImplementation at 0x7fadf4901f60> Naively, I would have thought that since the interface has two abstract methods both would have to be overridden in any concrete class, which is what is written in the documentation: "Although a concrete class must provide implementations of all abstract methods,...". The resolution must be in that we actually have only one abstract name, value, that needs to be implemented and that does happen in PartialImplementation. Can someone please explain this to me properly? Also, why would you want to lift the setter to the interface if you are not required to implement it; current implementation does nothing if at all callable on a PartialImplementation instance.
TL;DR PartialImplementation is not partially implemented; you defined a new property with a concrete getter and a (logically) concrete setter, instead of supplying only a concrete getter. Abstract properties are tricky. Your base class has one property, which defines itself as abstract by the presence of the abstract getter and setter, rather than a property that you explicitly defined as abstract. ABCMeta determines if a class is abstract by looking for class attributes with a __isabstractmethod__ attribute set to True. Base.value is such an attribute. PartialImplementation.value is not, because its value attribute is a brand new property independent of Base.value, and it consists solely of a concrete getter. Instead, you need to create a new property that's based on the inherited property, using the appropriate method supplied by the inherited property. This class is still abstract, because @Base.value.getter creates a property with a concrete getter but retaining the original abstract setter. class PartialImplementation(Base): @Base.value.getter def value(self): return 'Read-only' This class is concrete and also supplies a read-only property. class PartialImplementation(Base): @Base.value.getter def value(self): return 'Read-only' @value.setter def value(self, v): # Following the example of the __set__ method # described in https://docs.python.org/3/howto/descriptor.html#properties raise AttributeError("property 'value' has no setter") Note that if we used Base.value.setter instead of value.setter, we would have create a new property that only added a concrete setter to the inherited property (which still only has an abstract getter). We want to add the setter to our new property with a concrete getter. (Also note that while value.setter can take None as an argument to "remove" an existing setter, it is not sufficient to override an abstract setter.) I highly recommend looking at the pure-Python implemeantion of property in the Descriptor Guide to see how properties work under the hood, and to understand why such care must be taken when trying to manipulate them. One thing it lacks, though, is the code that manipulates the __isabstractmethod__ attribute to make @property stackable with @abstractmethod. Essentially, it would set __isabstractmethod__ on itself if any of its initial components were abstract, and each of getter, setter, and deleter set the attribute to false if the last abstract component were replaced with a concrete one.
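You can see this mechanism directly by inspecting the flags ABCMeta relies on, using the two classes from the question (this assumes Base and PartialImplementation are defined as shown there):

print(Base.value.__isabstractmethod__)                   # True: getter and setter are both abstract
print(Base.__abstractmethods__)                          # frozenset({'value'})

print(PartialImplementation.value.__isabstractmethod__)  # False: brand-new property with a concrete getter
print(PartialImplementation.__abstractmethods__)         # frozenset(), so the class instantiates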
2
2
78,047,164
2024-2-23
https://stackoverflow.com/questions/78047164/optimize-assigning-an-index-to-groups-of-split-data-in-polars
SOLVED: Fastest Function: 995x faster than original function def add_range_index_stack2(data, range_str): range_str = _range_format(range_str) df_range_index = ( data.group_by_dynamic(index_column="date", every=range_str, by="symbol") .agg() .with_columns( pl.int_range(0, pl.len()).over("symbol").alias("range_index") ) ) data = data.join_asof(df_range_index, on="date", by="symbol") return data OG Question: Data Logic: I have a time series that needs to be split into chunks. Let's say it needs to be split into 3 chunks for this post. The data I am using is stock quote data and daily prices. If the length of the time series data is 3 months, and the 'split' range is 1 month, then there should be 3 chunks of data, each month labeled with increasing integers. So, there should be 3 sections in the time series, all in one data frame. There should be a column named range_index that starts at 0 and iterates until 2. For example, if the data was January-March data, each price quote should be labeled 0, 1, or 2. 0 for January, 1 for February, and 2 for March data. I would like for this to be done for each symbol in the data frame. The start_date of each symbol may not be the same, so it should be robust in that way, and correctly assign range_index values based on the symbol stock data. What I've Done: I have built a function using polar logic that adds a column onto this data frame, but I think that there are possibly faster ways to do this. When I add a few symbols with a few years of data, it slows down to about ~3s to execute. I would love any advice on how to speed up the function, or even a novel approach. I'm aware that row-based operations are slower in polar than columnar. If there are any polars nerds out there that see a glaring issue....please help! def add_range_index( data: pl.LazyFrame | pl.DataFrame, range_str: str ) -> pl.LazyFrame | pl.DataFrame: """ Context: Toolbox || Category: Helpers || Sub-Category: Mandelbrot Channel Helpers || **Command: add_n_range**. This function is used to add a column to the dataframe that contains the range grouping for the entire time series. 
This function is used in `log_mean()` """ # noqa: W505 range_str = _range_format(range_str) if "date" in data.columns: group_by_args = { "every": range_str, "closed": "left", "include_boundaries": True, } if "symbol" in data.columns: group_by_args["by"] = "symbol" symbols = (data.select("symbol").unique().count())["symbol"][0] grouped_data = ( data.lazy() .set_sorted("date") .group_by_dynamic("date", **group_by_args) .agg( pl.col("adj_close").count().alias("n_obs") ) # using 'adj_close' as the column to sum ) range_row = grouped_data.with_columns( pl.arange(0, pl.count()).over("symbol").alias("range_index") ) ## WIP: # Extract the number of ranges the time series has # Initialize a new column to store the range index data = data.with_columns(pl.lit(None).alias("range_index")) # Loop through each range and add the range index to the original dataframe for row in range_row.collect().to_dicts(): symbol = row["symbol"] start_date = row["_lower_boundary"] end_date = row["_upper_boundary"] range_index = row["range_index"] # Apply the conditional logic to each group defined by the 'symbol' column data = data.with_columns( pl.when( (pl.col("date") >= start_date) & (pl.col("date") < end_date) & (pl.col("symbol") == symbol) ) .then(range_index) .otherwise(pl.col("range_index")) .over("symbol") # Apply the logic over each 'symbol' group .alias("range_index") ) return data def _range_format(range_str: str) -> str: """ Context: Toolbox || Category: Technical || Sub-Category: Mandelbrot Channel Helpers || **Command: _range_format**. This function formats a range string into a standard format. The return value is to be passed to `_range_days()`. Parameters ---------- range_str : str The range string to format. It should contain a number followed by a range part. The range part can be 'day', 'week', 'month', 'quarter', or 'year'. The range part can be in singular or plural form and can be abbreviated. For example, '2 weeks', '2week', '2wks', '2wk', '2w' are all valid. Returns ------- str The formatted range string. The number is followed by an abbreviation of the range part ('d' for day, 'w' for week, 'mo' for month, 'q' for quarter, 'y' for year). For example, '2 weeks' is formatted as '2w'. Raises ------ RangeFormatError If an invalid range part is provided. Notes ----- This function is used in `log_mean()` """ # noqa: W505 # Separate the number and range part num = "".join(filter(str.isdigit, range_str)) # Find the first character after the number in the range string range_part = next((char for char in range_str if char.isalpha()), None) # Check if the range part is a valid abbreviation if range_part not in {"d", "w", "m", "y", "q"}: msg = f"`{range_str}` could not be formatted; needs to include d, w, m, y, q" raise HumblDataError(msg) # If the range part is "m", replace it with "mo" to represent "month" if range_part == "m": range_part = "mo" # Return the formatted range string return num + range_part Expected Data Form: The same is done for PCT stock symbol.
Another solution is to create a separate DataFrame that, for each symbol and range index, stores the corresponding start date.

df_range_index = (
    df
    .group_by_dynamic(index_column="date", every="1w", by="symbol").agg()
    .with_columns(pl.int_range(0, pl.len()).over("symbol").alias("range_index"))
)

shape: (106, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ symbol ┆ date       ┆ range_index β”‚
β”‚ ---    ┆ ---        ┆ ---         β”‚
β”‚ str    ┆ date       ┆ i64         β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ═════════════║
β”‚ AAPL   ┆ 2022-12-26 ┆ 0           β”‚
β”‚ AAPL   ┆ 2023-01-02 ┆ 1           β”‚
β”‚ …      ┆ …          ┆ …           β”‚
β”‚ PCT    ┆ 2023-12-11 ┆ 50          β”‚
β”‚ PCT    ┆ 2023-12-18 ┆ 51          β”‚
β”‚ PCT    ┆ 2023-12-25 ┆ 52          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

We can then use pl.DataFrame.join_asof to merge the range index to the original dataframe.

df.join_asof(df_range_index, on="date", by="symbol")

Edit. As suggested by @jqurious, it might be possible to represent the range index as a simple truncation (+ some offset) of the date. Then, we can use .dt.truncate and map the date groups to ids using .rle_id.

(
    df
    .with_columns(
        range_index=pl.col("date").dt.truncate(every="1w").rle_id().over("symbol")
    )
)
2
2
78,046,217
2024-2-23
https://stackoverflow.com/questions/78046217/my-custom-object-is-not-deepcopied-when-it-is-used-as-the-default-parameter-in-p
I know that Pydantic creates a deepcopy of mutable objects for "every new instances" that we create, if they are placed in the default values. It holds true for my lst field, but not for my custom object item. (The code for __deepcopy__ is taken from here) from copy import deepcopy from typing import Self from pydantic import BaseModel, ConfigDict class Spam: def __init__(self) -> None: self.names = ["hi"] def __deepcopy__(self, memo: dict) -> Self: print("deepcopy called") cls = self.__class__ result = cls.__new__(cls) memo[id(self)] = result for k, v in self.__dict__.items(): setattr(result, k, deepcopy(v, memo)) return result class Person(BaseModel): model_config = ConfigDict(arbitrary_types_allowed=True) item: Spam = Spam() lst: list = [] print("-----------------------------------") obj1 = Person() obj2 = Person() obj1.lst.append(10) obj1.item.names.append("bye") print(obj1.lst) print(obj1.item.names) print(obj2.lst) print(obj2.item.names) print(id(obj1.item) == id(obj2.item)) output: deepcopy called ----------------------------------- [10] ['hi', 'bye'] [] ['hi', 'bye'] True I printed the dashed-line after creating the class and before any instantiation just to show that the deepcopy of my object is indeed happened in class creation which is in contrast to the documentation: Pydantic will deepcopy the default value when creating each instance of the model Do I miss something here?
The per-instance deepcopy only happens for non-hashable default values, but according to the Python documentation, objects which are instances of user-defined classes are hashable by default (https://docs.python.org/3/glossary.html#term-hashable). I was also surprised. So, you can use default_factory to achieve your goal:

from copy import deepcopy
from pydantic import BaseModel, ConfigDict, Field

class Spam:
    def __init__(self) -> None:
        self.names = ["hi"]

    # def __deepcopy__(self, memo: dict):
    #     print("deepcopy called")
    #     cls = self.__class__
    #     result = cls.__new__(cls)
    #     memo[id(self)] = result
    #     for k, v in self.__dict__.items():
    #         setattr(result, k, deepcopy(v, memo))
    #     return result

class Person(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    item: Spam = Field(default_factory=Spam)
    lst: list = []

print("-----------------------------------")

obj1 = Person()
obj2 = Person()

obj1.lst.append(10)
obj1.item.names.append("bye")

print(obj1.lst)
print(obj1.item.names)
print(obj2.lst)
print(obj2.item.names)
print(id(obj1.item) == id(obj2.item))
2
2
78,046,145
2024-2-23
https://stackoverflow.com/questions/78046145/why-cant-an-attribute-have-the-same-name-as-a-class-if-its-type-is-a-union-and
There seems to be some ambiguity in how the use of | in attribute type annotations together with attribute default values is parsed. Leading to the following error: >>> class A: pass ... >>> class B: ... A: A | None = None ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in B TypeError: unsupported operand type(s) for |: 'NoneType' and 'NoneType' If we rename the class A to something else it works. Why does this not parse correctly?
A class definition creates a namespace, and its body is executed top to bottom inside that namespace. For the statement A: A | None = None, the value None is assigned to the name A in class B's namespace before the annotation is evaluated. When the annotation A | None is then evaluated, the name A resolves to that class-local binding (the None object) instead of the class A defined in the module namespace. So A | None gets evaluated as None | None, and since the type of None, NoneType, doesn't support the | operation, you get the said TypeError.
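As a minimal hedged sketch of a runtime workaround (not from the question): quoting the annotation, or using from __future__ import annotations at the top of the module, defers its evaluation, so it is never computed as None | None inside the class body.

class A:
    pass

class C:
    # The quoted annotation is stored as a plain string and never evaluated
    # at class-definition time, so the A | None union is never computed here.
    A: "A | None" = None

print(C.__annotations__)  # {'A': 'A | None'}
print(C.A)                # None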
2
2
78,045,627
2024-2-23
https://stackoverflow.com/questions/78045627/using-dynamic-cut-breaks-for-each-row-of-a-dataframe
I am trying to bin values to prepare data to be later fed into a plotting library. For this I am trying to use polars Expr.cut. The dataframe I operate on contains different groups of values, each of these groups should be binned using different breaks. Ideally I would like to use np.linspace(BinMin, BinMax, 50) for the breaks argument of Expr.cut. I managed to make the BinMin and BinMax columns in the dataframe. But I can't manage to use np.linspace to define the breaks dynamically for each row of the dataframe. This is a minimal example of what I tried: import numpy as np import polars as pl df = pl.DataFrame({"Value": [12], "BinMin": [0], "BinMax": [100]}) At this point the dataframe looks like: β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Value ┆ BinMin ┆ BinMax β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ════════β•ͺ════════║ β”‚ 12 ┆ 0 ┆ 100 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ And trying to use Expr.cut with dynamic breaks: df.with_columns(pl.col("Value").cut(breaks=np.linspace(pl.col("BinMin"), pl.col("BinMax"), 50)).alias("Bin")) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[10], line 1 ----> 1 df.with_columns(pl.col("Value").cut(breaks=range(pl.col("BinMin"), pl.col("BinMax"))).alias("Bin")) TypeError: 'Expr' object cannot be interpreted as an integer I understand the error, that np.linspace is expecting to be called with actual scalar integers, not polars Expr. But I cannot figure out how to call it with dynamic breaks derived from the BinMin and BinMax columns.
Unfortunately, pl.Expr.cut doesn't support expressions for the breaks argument (yet), but requires a fixed sequence. (This would be a good feature request though). A naive solution that will work for DataFrames, but doesn't use polars' native expression API, would be to use pl.Expr.map_elements together with the corresponding functionality in numpy. def my_cut(x, num=50): seq = np.linspace(x["BinMin"], x["BinMax"], num=num) idx = np.digitize(x["Value"], seq) return seq[idx-1:idx+1].tolist() ( df .with_columns( pl.struct("Value", "BinMin", "BinMax").map_elements(my_cut).alias("Bin") ) ) shape: (1, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Value ┆ BinMin ┆ BinMax ┆ Bin β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ════════════════════════║ β”‚ 12 ┆ 0 ┆ 100 ┆ [10.204082, 12.244898] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
2
78,044,552
2024-2-22
https://stackoverflow.com/questions/78044552/how-to-add-data-from-one-data-frame-to-another-by-row-only-when-the-value-in-th
I have 2 dataframes. For each row in dataframe2, I want to see if dataframe1 already has a row with the same value in column 'name'. If it does, I want to add the data from the row in dataframe2 to corresponding row in dataframe 1. The value in column 'name' should not be added across. Steve should not be copied across and bob should have no new data added. df1 = pd.DataFrame([{'name': 'Ben', 'goals': 1, 'minutes': 90}, {'name': 'Bob', 'goals': 1, 'minutes': 64}, {'name': 'Kevin', 'goals': 1, 'minutes': 90}]) df2 = pd.DataFrame([{'name': 'Ben', 'goals': 1, 'minutes': 88}, {'name': 'Kevin', 'goals': 1, 'minutes': 3}, {'name': 'Steve', 'goals': 1, 'minutes': 13}]) The final output should be: name goals minutes Ben 2 178 Bob 1 64 Kevin 2 93 This is what I have tried for index, row in df1.iterrows(): if df2.isin([row['name']]).any().any(): position = int(df2[df2['name'] == str(row['name'])].index.values) df1.iloc[index, 1:] = df1.iloc[index, 1:] + df2.iloc[position, 1:]
Concatenate df1 + df2 minus the rows with names that aren't in df1; Group by name; Sum the values in each group; Reset the index. df = pd.concat([df1, df2.loc[df2["name"].isin(df1["name"])]]).groupby("name").sum().reset_index() Result: name goals minutes 0 Ben 2 178 1 Bob 1 64 2 Kevin 2 93
2
2
78,043,285
2024-2-22
https://stackoverflow.com/questions/78043285/df-drop-duplicates-in-polars
import polars as pl df = pl.DataFrame( { "X": [4, 2, 3, 4], "Y": ["p", "p", "p", "p"], "Z": ["b", "b", "b", "b"], } ) We know the equivalent of pandas's df.drop_duplicates() is df.unique() in python-polars But, each time I execute my query I get a different result? print(df.unique()) X Y Z i64 str str 3 "p" "b" 2 "p" "b" 4 "p" "b" X Y Z i64 str str 4 "p" "b" 2 "p" "b" 3 "p" "b" X Y Z i64 str str 2 "p" "b" 3 "p" "b" 4 "p" "b" Is this intentional and what is the reason behind it?
Yes, this is an intentional behavior. If you need a consistent behavior then do: df.unique(maintain_order=True) polars.DataFrame.unique maintain_order Keep the same order as the original DataFrame. This is more expensive to compute. Settings this to True blocks the possibility to run on the streaming engine. Maintaining order is not streaming-friendly as it requires bringing together all the chunks in memory to compare the order of the rows. With this change of default the developers want to ensure that Polars is ready to work with datasets of all sizes while allowing users to choose different behaviour if desired. A related point is the choice of which row within each duplicated group is kept by unique. In Pandas this defaults to the first row of each duplicated groups. In Polars the default is any as this again allows more optimizations. Other functions that have this behavior include: 1.group_by (maintain_order: bool = False) 2.partition_by (maintain_order: bool = True) 3.pivot (maintain_order: bool = True) 4.upsample (maintain_order: bool = False) A detailed article here by @LiamBrannigan: https://www.rhosignal.com/posts/polars-ordering/
2
5
78,043,002
2024-2-22
https://stackoverflow.com/questions/78043002/selecting-a-range-of-columns-using-polars
I have a many-column DF from which I need to process various ranges of columns. In Pandas I could use an expression along the lines of : df.loc[:, 'first_name': 'last_name'] to obtain the required columns between the two end-points. Is there an equivalent in Polars which does not involve listing all the numerous column names in each required range?
Since df.columns is just a list you can use the .index method to find their locations. Accomplishing this via the eager [] API or the .select interface can be done as follows: import polars as pl def inclusive(target, a, b): start, stop = target.columns.index(a), target.columns.index(b) return pl.col(target.columns[start:stop+1]) df = pl.DataFrame({ 'date': ['2000-01-01', '2000-01-02'], 'first_name': ['Alice', 'Bob' ], 'middle_name': [None, 'Edward' ], 'last_name': ['Smith', 'Jones' ], 'standing': ['good', 'bad'], }) print( df[:, 'first_name':'last_name'], # eager [] selection # shape: (2, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ first_name ┆ middle_name ┆ last_name β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ str ┆ str ┆ str β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ═══════════║ # β”‚ Alice ┆ null ┆ Smith β”‚ # β”‚ Bob ┆ Edward ┆ Jones β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ df.select(inclusive(df, 'first_name', 'last_name')), # shape: (2, 3) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ first_name ┆ middle_name ┆ last_name β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ str ┆ str ┆ str β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════β•ͺ═══════════║ # β”‚ Alice ┆ null ┆ Smith β”‚ # β”‚ Bob ┆ Edward ┆ Jones β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ sep='\n\n', )
3
2
78,041,555
2024-2-22
https://stackoverflow.com/questions/78041555/python-development-dependencies
I'm familiar with installing dependencies using a requirements.txt or environment.yml, but I've only ever seen syntax in those files like some_package>=1.2.3. What does it mean when dependencies are listed with curly braces, as in: pytest = "^6.2.5" coverage = {extras = ["toml"], version = "^5.5"} safety = "^1.10.3" mypy = "^0.910" typeguard = "^2.12.1" xdoctest = {extras = ["colors"], version = "^0.15.5"} Sphinx = "^4.1.2" sphinx-autobuild = "^2021.3.14" and how do you install those dependences? Trying to install those by treating the file as a requirements.txt or environment.yml throws an ERROR: Invalid requirement or CondaValueError: invalid package specification, respectively.
python-poetry uses that format for defining dependencies in a pyproject.toml file usually under [tool.poetry.dependencies] As explained by PEP 508 packages can have extra dependencies which enables optional features of a given package (package dependent). You could install poetry and use that to manage dependencies or transform that list into a valid format that conda or pip understands. For example the dependency list for pip would look like this: pytest>=6.2.5 coverage[toml]>=5.5 safety>=1.10.3 mypy>=0.910 typeguard>=2.12.1 xdoctest[colors]>=0.15.5 Sphinx>=4.1.2 sphinx-autobuild>=2021.3.14
2
3
78,039,918
2024-2-22
https://stackoverflow.com/questions/78039918/how-to-use-isin-in-polars-dataframe
I have a polars DataFrame: import polars as pl import numpy as np df = pl.DataFrame({'A': ['red', 'blue', 'green', np.nan, 'orange']}) my_list = ['red','orange'] I want to know which colors are present in my_list. In pandas, I would do something like: df.A.isin(my_list) But, I am getting this error: AttributeError: 'DataFrame' object has no attribute 'A' How to do this in polars?
The following can be used to create a dataset where column A is replaced by a boolean column indicating whether each item was in my_list. df.with_columns(pl.col("A").is_in(my_list)) shape: (5, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”‚ A β”‚ β”‚ --- β”‚ β”‚ bool β”‚ β•žβ•β•β•β•β•β•β•β•‘ β”‚ true β”‚ β”‚ false β”‚ β”‚ false β”‚ β”‚ null β”‚ β”‚ true β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”˜ If instead you'd like to create a new boolean column, use pl.Expr.alias as follows df.with_columns(pl.col("A").is_in(my_list).alias("B") have the return value be a pl.Series (like in @KrisKruse's answer), use pl.DataFrame.get_column as follows df.get_column("A").is_in(my_list) filter for rows for which the value of A is in my_list, use pl.DataFrame.filter as follows df.filter(pl.col("A").is_in(my_list))
3
2
78,039,427
2024-2-22
https://stackoverflow.com/questions/78039427/how-to-apply-custom-function-with-many-parameters-in-a-pandas-column
I have the following function import math def f(k,a,x): return (1/(1+math.exp(-k*x))^a) And the following pandas dataframe import pandas as pd df = pd.DataFrame({'x': list(range(1,100))}) How can I apply the above function on the x column of the df for lets say k=1 and a=2 ?
Using your function: def f(k,a,x): return (1/(1+math.exp(-k*x))**a) df['out'] = df['x'].apply(lambda x: f(k=1, a=2, x=x)) But better vectorize with numpy: import numpy as np def f_vectorized(k, a, x): return (1/(1+np.exp(-k*x))**a) df['out'] = f_vectorized(k=1, a=2, x=df['x']) Output: x out 0 1 0.534447 1 2 0.775803 2 3 0.907397 3 4 0.964351 4 5 0.986659 .. .. ... 94 95 1.000000 95 96 1.000000 96 97 1.000000 97 98 1.000000 98 99 1.000000 [99 rows x 2 columns]
2
2
78,036,340
2024-2-21
https://stackoverflow.com/questions/78036340/ansible-get-the-last-character-of-a-fact
I would like to have the last character of a fact. My playbook looks like this: - name: sf set_fact: root_dev: "{{ ansible_mounts|json_query('[?mount == `/`].device') }}" - name: e debug: msg: "{{ root_dev[:-1] }}" The problem is the output in this case always: "msg": [] or if I use the playbook without semicolon: debug: msg: "{{ root_dev[-1] }}" then the whole partition will be the output: "msg": "/dev/sda1" I also can't quote the whole root_dev because it is a fact and I would like to get the last character of it's value. The split filter also not working in this case, because the device can be /dev/sda or /dev/mapper/root_part_0 and so. What would be the best option in this case?
Explanation of the problem: You have to understand how the slicing works. Given the lists l1 and l2 for testing l1: [a] l2: [a, b, c] The index [-1] gives you the last item (which is also the first item) l1[-1]: a The slice [:-1] gives you all items but the last one. The result is an empty list because there is only one item l1[:-1]: [] Test it by slicing l2 l2[:-1]: [a, b] Solution: The value of root_dev is a list root_dev: "{{ ansible_mounts | json_query('[?mount == `/`].device') }}" In your case root_dev: ['/dev/sda1'] To get the number of the partition, index the first item in the list and the last character in the item root_dev[0][-1]: 1 Example of a complete playbook for testing - hosts: localhost vars: root_dev: "{{ ansible_mounts | json_query('[?mount == `/`].device') }}" tasks: - setup: gather_subset: mounts # - debug: # var: ansible_mounts - debug: var: root_dev - debug: var: root_dev[0][-1] The above solution works for a single-digit partitions only. Use regex_replace to cover also multiple-digit options part_no: "{{ root_dev.0 | regex_replace('^(.+?)(\\d*)$', '\\2') }}"
2
3
78,036,231
2024-2-21
https://stackoverflow.com/questions/78036231/how-to-resample-ohlc-dataframe-in-python-without-peeking-into-the-future
I have a Pandas DataFrame with one second frequency datetime index and columns 'Open', 'High', 'Low', 'Close' which represent prices for a financial instrument. I want to resample this DataFrame to 15 min (or any frequency) but without peeking into the future and still keeping the original DataFrame with one second frequency but adding four new columns for each candle. The goal is to represent how candles form in real time. For example, for a 15 min candle, I would have four new columns in the original DataFrame named 'Open_15m', 'High_15m', 'Low_15m', 'Close_15m' which would update the values each second as a rolling OHLC. Keep in mind that a 15 min candle can only start at hh:00:00 or hh:15:00 hh:30:00 or hh:45:00. This means if for example our DataFrame starts at time 09:00:00, we have rolling OHLC from 09:00:00 until 09:15:00 then we reset and start over as a new 15 min candle starts forming at 09:15:00. I came up with the code to do this and I think its correct, but it is too slow for DataFrames with millions and millions of rows. If the code is correct, it would need to be sped up somehow by using Numpy & Numba for example. # Function to find the nearest 15-minute floor def nearest_quarter_hour(timestamp): return timestamp.floor('15T') # Find the nearest 15-minute floor for each timestamp df['15_min_floor'] = df.index.map(nearest_quarter_hour) # Group by the nearest 15-minute floor and calculate rolling OHLC rolling_df = df.groupby('15_min_floor').rolling(window='15T').agg({ 'Open': lambda x: x.iloc[0], # First value in the window 'High': 'max', 'Low': 'min', 'Close': lambda x: x.iloc[-1] # Last value in the window }).reset_index(level=0, drop=True) # add _15 to each column rolling df rolling_df.columns = [f'{col}_15' for col in rolling_df.columns] # Merge with original DataFrame result_df = pd.concat([df, rolling_df], axis=1)
Here is numba version that computes OHLC your way which is significantly faster: from numba import njit @njit def compute_ohlc(floor_15_min, O, H, L, C, O_out, H_out, L_out, C_out): first, curr_max, curr_min, last = O[0], H[0], L[0], C[0] last_v = floor_15_min[0] for i, v in enumerate(floor_15_min): if v != last_v: first, curr_max, curr_min, last = O[i], H[i], L[i], C[i] last_v = v else: curr_max = max(curr_max, H[i]) curr_min = min(curr_min, L[i]) last = C[i] O_out[i] = first H_out[i] = curr_max L_out[i] = curr_min C_out[i] = last def compute_numba(df): df["15_min_floor_2"] = df.index.floor("15 min") df[["Open_15_2", "High_15_2", "Low_15_2", "Close_15_2"]] = np.nan compute_ohlc( df["15_min_floor_2"].values, df["Open"].values, df["High"].values, df["Low"].values, df["Close"].values, df["Open_15_2"].values, df["High_15_2"].values, df["Low_15_2"].values, df["Close_15_2"].values, ) compute_numba(df) Benchmark with random df with 432001 rows: from timeit import timeit import pandas as pd from numba import njit # generate some random data: np.random.seed(42) idx = pd.date_range("1-1-2023", "1-6-2023", freq="1000ms") df = pd.DataFrame( { "Open": 50 + np.random.random(len(idx)) * 100, "High": 50 + np.random.random(len(idx)) * 100, "Low": 50 + np.random.random(len(idx)) * 100, "Close": 50 + np.random.random(len(idx)) * 100, }, index=idx, ) def get_result_df(df): def nearest_quarter_hour(timestamp): return timestamp.floor("15min") # Find the nearest 15-minute floor for each timestamp df["15_min_floor"] = df.index.map(nearest_quarter_hour) # Group by the nearest 15-minute floor and calculate rolling OHLC rolling_df = ( df.groupby("15_min_floor") .rolling(window="15min") .agg( { "Open": lambda x: x.iloc[0], # First value in the window "High": "max", "Low": "min", "Close": lambda x: x.iloc[-1], # Last value in the window } ) .reset_index(level=0, drop=True) ) # add _15 to each column rolling df rolling_df.columns = [f"{col}_15" for col in rolling_df.columns] # Merge with original DataFrame result_df = pd.concat([df, rolling_df], axis=1) return result_df @njit def compute_ohlc(floor_15_min, O, H, L, C, O_out, H_out, L_out, C_out): first, curr_max, curr_min, last = O[0], H[0], L[0], C[0] last_v = floor_15_min[0] for i, v in enumerate(floor_15_min): if v != last_v: first, curr_max, curr_min, last = O[i], H[i], L[i], C[i] last_v = v else: curr_max = max(curr_max, H[i]) curr_min = min(curr_min, L[i]) last = C[i] O_out[i] = first H_out[i] = curr_max L_out[i] = curr_min C_out[i] = last def compute_numba(df): df["15_min_floor_2"] = df.index.floor("15 min") df[["Open_15_2", "High_15_2", "Low_15_2", "Close_15_2"]] = np.nan compute_ohlc( df["15_min_floor_2"].values, df["Open"].values, df["High"].values, df["Low"].values, df["Close"].values, df["Open_15_2"].values, df["High_15_2"].values, df["Low_15_2"].values, df["Close_15_2"].values, ) t1 = timeit("get_result_df(df)", number=1, globals=globals()) t2 = timeit("compute_numba(df)", number=1, globals=globals()) print(f"Time normal = {t1}") print(f"Time numba = {t2}") Prints on my computer AMD 5700x (432001 rows): Time normal = 29.57983471499756 Time numba = 0.2751060768496245 With dataframe pd.date_range("1-1-2004", "1-1-2024", freq="1000ms") (~631 millions of rows) the result is: Time numba = 11.551695882808417
3
1
78,034,110
2024-2-21
https://stackoverflow.com/questions/78034110/are-python-c-extensions-faster-than-numba-jit
I am testing the performance of the Numba JIT vs Python C extensions. It seems the C extension is about 3-4 times faster than the Numba equivalent for a for-loop-based function to calculate the sum of all the elements in a 2d array. Update: Based on valuable comments, I realized a mistake that I should have compiled (called) the Numba JIT once. I provide the results of the tests after the fix along with extra cases. But the question remains on when and how which method should be considered. Here's the result (time_s, value): # 200 tests mean (including JIT compile inside the loop) Pure Python: (0.09232537984848023, 29693825) Numba: (0.003188209533691406, 29693825) C Extension: (0.000905141830444336, 29693825.0) # JIT once called before the test loop (to avoid compile time) Normal: (0.0948486328125, 29685065) Numba: (0.00031280517578125, 29685065) C Extension: (0.0025129318237304688, 29685065.0) # JIT no warm-up also no test loop (only calling once) Normal: (0.10458517074584961, 29715115) Numba: (0.314251184463501, 29715115) C Extension: (0.0025091171264648438, 29715115.0) Is my implementation correct? Is there a reason for why C extensions are faster? Should I probably always use C extensions if I want the best performance? (non-vectorized functions) main.py import numpy as np import pandas as pd import numba import time import loop_test # ext def test(fn, *args): res = [] val = None for _ in range(100): start = time.time() val = fn(*args) res.append(time.time() - start) return np.mean(res), val sh = (30_000, 20) col_names = [f"col_{i}" for i in range(sh[1])] df = pd.DataFrame(np.random.randint(0, 100, size=sh), columns=col_names) arr = df.to_numpy() def sum_columns(arr): _sum = 0 for i in range(arr.shape[0]): for j in range(arr.shape[1]): _sum += arr[i, j] return _sum @numba.njit def sum_columns_numba(arr): _sum = 0 for i in range(arr.shape[0]): for j in range(arr.shape[1]): _sum += arr[i, j] return _sum print("Pure Python:", test(sum_columns, arr)) print("Numba:", test(sum_columns_numba, arr)) print("C Extension:", test(loop_test.loop_fn, arr)) ext.c #define PY_SSIZE_CLEAN #include <Python.h> #include <numpy/arrayobject.h> static PyObject *loop_fn(PyObject *module, PyObject *args) { PyObject *arr; if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &arr)) return NULL; npy_intp *dims = PyArray_DIMS(arr); npy_intp rows = dims[0]; npy_intp cols = dims[1]; double sum = 0; PyArrayObject *arr_new = (PyArrayObject *)PyArray_FROM_OTF(arr, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); double *data = (double *)PyArray_DATA(arr_new); npy_intp i, j; for (i = 0; i < rows; i++) for (j = 0; j < cols; j++) sum += data[i * cols + j]; Py_DECREF(arr_new); return Py_BuildValue("d", sum); }; static PyMethodDef Methods[] = { { .ml_name = "loop_fn", .ml_meth = loop_fn, .ml_flags = METH_VARARGS, .ml_doc = "Returns the sum using for loop, but in C.", }, {NULL, NULL, 0, NULL}, }; static struct PyModuleDef Module = { PyModuleDef_HEAD_INIT, "loop_test", "A benchmark module test", -1, Methods}; PyMODINIT_FUNC PyInit_loop_test(void) { import_array(); return PyModule_Create(&Module); } setup.py from distutils.core import setup, Extension import numpy as np module = Extension( "loop_test", sources=["ext.c"], include_dirs=[ np.get_include(), ], ) setup( name="loop_test", version="1.0", description="This is a test package", ext_modules=[module], ) python3 setup.py install
I would like to complete the good answer of John Bollinger:

First of all, C extensions tend to be compiled with GCC on Linux (possibly MSVC on Windows and Clang on MacOS AFAIK), while Numba uses the LLVM compilation toolchain internally. If you want to compare both, then you should use Clang, which is based on the LLVM toolchain. In fact, you should also use the same version of LLVM as Numba for the comparison to be fair. Clang, GCC and MSVC do not optimize code the same way, so the resulting programs can have quite different performance.

Moreover, Numba is a JIT so it does not care about the compatibility (of instruction set extensions) between different platforms. This means it can use the AVX-2 SIMD instruction set if available on your machine, while mainstream compilers will not do that by default for the sake of compatibility. In fact, Numba actually does that. You can tell Clang and GCC to optimize the code for the target machine, without caring about compatibility between machines, with the compilation flag -march=native. As a result, the resulting package will certainly be faster but can also crash on old machines (or possibly be significantly slower). You can also enable specific instruction sets (with flags like -mavx2).

Additionally, Numba uses an aggressive optimization level by default while AFAIK C extensions use the -O2 flag, which does not auto-vectorize the code by default on both GCC and Clang (i.e. no use of packed SIMD instructions). You should certainly specify the -O3 flag manually if this is not already done. On MSVC, the equivalent flag is /O2 (AFAIK there is no /O3 yet).

Please note that Numba functions can be compiled eagerly (as opposed to lazily by default) by providing a specific signature (possibly multiple ones). This means you should know the types of the input parameters, and the start-up time of your application can significantly increase. Numba functions can also be cached so as not to recompile the function over and over on the same platform. This can be done with the flag cache=True. It may not always work regarding your specific use-case though.

Last but not least, the two codes are not equivalent. This is certainly the most important point. The Numba code deals with an int32-typed arr and accumulates the value in a 64-bit integer _sum, while the C extension accumulates the value in a double-precision floating-point type. Floating-point addition is not associative (unless you tell the compiler to assume it is, with the flag -ffast-math, which is not enabled by default since it is unsafe), so accumulating floating-point numbers is far more expensive than integers due to the high latency of the FMA unit on most platforms. Besides, I actually wonder if PyArray_FROM_OTF performs the correct conversion, but if it does, then I expect the conversion to be pretty expensive. You should use the same types in the two codes for the comparison to be fair (possibly 64-bit integers in both).

For more information, please read the related posts: Fastest way to do loop over 2D arrays in Cython Why is cython so much slower than numba for this simple loop?
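To illustrate those last points about eager compilation and caching, here is a minimal hedged sketch (not the question's benchmark); the string signature and dtypes are assumptions chosen to match a 2-D int32 input with a 64-bit integer result:

import numba
import numpy as np

# Compiled eagerly at import time for the given signature and cached on disk,
# so later runs on the same machine can skip recompilation.
@numba.njit("int64(int32[:,:])", cache=True)
def sum_columns_numba(arr):
    total = 0
    for i in range(arr.shape[0]):
        for j in range(arr.shape[1]):
            total += arr[i, j]
    return total

arr = np.random.randint(0, 100, size=(30_000, 20), dtype=np.int32)
print(sum_columns_numba(arr))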
4
8
78,035,329
2024-2-21
https://stackoverflow.com/questions/78035329/is-it-possible-in-a-pandas-dataframe-to-have-some-multiindexed-columns-and-some
In pandas I would like to have a dataframe whose some columns have a multi index, some don't. Visually I would like something like this: | c | | |--------| d | | a | b | | ================| | 1 | 4 | 0 | | 2 | 5 | 1 | | 3 | 6 | 2 | In pandas I can do something like this: df = pd.DataFrame({'a':[1,2,3],'b':[4,5,6], 'd':[0,1,2]}) columns=[('c','a'),('c','b'), 'd'] df.columns=pd.MultiIndex.from_tuples(columns) and the output would be: c d a b NaN 0 1 4 0 1 2 5 1 2 3 6 2 However, when accessing the d column by df['d'], I get as output a Pandas Dataframe, not Pandas series. The problem is clearly that pandas applied multicolumn indexing to every column. So my question is: is there a way to apply column multindexing only to certain columns and leave the others as they are? In other words, I would like that the result of df['d'] would be a Series as in a normal dataframe, the result of df['c'] a pd.DataFrame as in column multindex and the result of df['c']['a'] a Pandas Series. Is this possible?
You can use the empty string "" as a placeholder : columns = [ ("c", "a"), ("c", "b"), ("d", ""), # << here ] Output : type(df["d"]) # pandas.core.series.Series type(df["c"]) # pandas.core.frame.DataFrame type(df["c"]["a"]) # pandas.core.series.Series
3
2
78,033,371
2024-2-21
https://stackoverflow.com/questions/78033371/how-to-change-the-value-of-a-button-in-python-flask
I want to change the value of a button written in html to "records". I want to adress and change the button in python. This is in order to prevent to tap the record button twice as the request form changes and to get a visual feedback that it works. this is what i have: @app.route('/camera', methods=['GET', 'POST']) def camera(): if request.method == 'POST': if request.form['record'] == 'record here' : # <-------- here I ask what the value of the button print ("is recording") # is and it works fine p1 = threading.Thread(target=recording) p1.start() return render_template('camera.html') this is what i want: @app.route('/camera', methods=['GET', 'POST']) def camera(): if request.method == 'POST': if request.form['record'] == 'record here' : print ("is recording") p1 = threading.Thread(target=recording) p1.start() request.form['record'] = 'records' : # <----------------- Something like this return render_template('camera.html') I am a newbie in python html flask and web development in general.
What you are looking to do should be done through JavaScript, because the most natural way to change the label of the button is through client-side programming. However, if you insist on doing this from server-side code, you can start by changing the template camera.html. Let's say that you have the following button in your template: <input type="submit" name="record" value="record here"> This should be changed to include a variable: <input type="submit" name="record" value="{{ record }}"> Now you can send the value of this variable using render_template: @app.route('/camera', methods=['GET', 'POST']) def camera(): if request.method == 'POST': if request.form['record'] == 'record here' : print ("is recording") p1 = threading.Thread(target=recording) p1.start() return render_template('camera.html', record='records') return render_template('camera.html', record='record here')
3
2
78,033,462
2024-2-21
https://stackoverflow.com/questions/78033462/compute-the-ratio-between-the-number-of-rows-where-a-true-to-the-number-of-rows
I have a Polars dataframe: df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": np.random.rand(5), "A": [True, True, False, False, False], } ) How can I compute the ratio between the number of rows where A==True, to the number of rows where A==False? Note that A is always True or False. I found a solution, but it seems a bit clunky: ntrue = df.filter(pl.col('A')==1).shape[0] ratio = ntrue/(df.shape[0]-ntrue)
You can leverage polars' expression API as follow. df.select(pl.col("A").sum() / pl.col("A").not_().sum()).item() The summing works as A is a boolean column. If this is not the case, you can exchange pl.col("A") for another corresponding boolean expression.
4
4
78,031,550
2024-2-21
https://stackoverflow.com/questions/78031550/how-to-make-a-more-efficient-mapping-function-based-on-a-dataframe-with-the-inde
Say I have a dataframe with column ['Subtype'] and ['Building Condition'], Dataframe 1: Subtype Building Condition A Good B Bad C Bad I want to map dataframe 1 using another dataframe based on the values of those two columns Dataframe 2: Good Bad A Repair Retrofit B Retrofit Reconstruct C Reconstruct Reconstruct Iterate over the first dataframe and use the pd.df.at function for each row and append them to an empty list. #assignment of intervention based on subtype and building condition intervention_list = [] for index, row in bldg_df.iterrows(): # print(bldg_df.at[index, 'Subtype']) intervention = matrix_df.at[bldg_df['Subtype'][index], bldg_df['Building Condition'][index]] intervention_list.append(intervention) bldg_df['Intervention'] = intervention_list So the resulting dataframe would be Subtype Building Condition Intervention A Good Repair B Bad Reconstruct C Bad Reconstruct This worked but I'm just thinking if there's a faster and more efficient way of going about this. Maybe using the map or merge as_of function?
The documented approach in such a case is to use an indexing lookup on the underlying numpy array:

# factorize the lookup column
idx, cols = pd.factorize(df1['Building Condition'])

# reorder and slice
df1['Intervention'] = (df2.reindex(index=df1['Subtype'], columns=cols)
                          .to_numpy()[np.arange(len(df1)), idx]
                      )

Output:

  Subtype Building Condition Intervention
0       A               Good       Repair
1       B                Bad  Reconstruct
2       C                Bad  Reconstruct

This is also the most efficient approach.
2
1
78,017,822
2024-2-18
https://stackoverflow.com/questions/78017822/no-overloads-for-update-match-the-provided-arguments
I'm currently reading FastAPI's tutorial user guide and pylance is throwing the following warning: No overloads for "update" match the provided argumentsPylancereportCallIssue typing.pyi(690, 9): Overload 2 is the closest match Here is the code that is throwing the warning: from fastapi import FastAPI app = FastAPI() @app.get("/items/") async def read_items(q: str | None = None): results = {"items": [{"item_id": "Foo"}, {"item_id": "Bar"}]} if q: results.update({"q": q}) # warning here return results I tried changing Python β€Ί Analysis: Type Checking Mode to basic and using the Pre-Release version of Pylance but warning persists.
You're bitten by intelligent type inference. To make it clear, I'll get rid of FastAPI dependency and reproduce on a simpler case: foo = {"foo": ["bar"]} foo["bar"] = "baz" Whoops, mypy (or pylance) error. I don't have Pylance available anywhere nearby, so let's stick with mypy - the issue is the same, wording may differ. E: Incompatible types in assignment (expression has type "str", target has type "list[str]") [assignment] Now it should be clear: foo is inferred as dict[str, list[str]] on first assignment. We can add reveal_type to confirm, here's the playground. Now type checker rightfully complains: you're trying to set a str as a value in dict with value type of list[str]. Bad idea. So, what now? You need to tell typechecker that you don't want inference to be this precise. You have few options (any of the following will work): from typing import Any, TypedDict, NotRequired class Possible(TypedDict): foo: list[str] bar: NotRequired[str] foo: dict[str, Any] = {"foo": ["bar"]} foo: dict[str, object] = {"foo": ["bar"]} foo: dict[str, list[str] | str] = {"foo": ["bar"]} foo: Possible = {"foo": ["bar"]} What's the difference? object is a "just something" type - such type that you know almost nothing about it. You can't add or multiply objects or pass them to functions requiring some specific types, but can e.g. print it. I'd argue that this solution is best for your use case, because you don't do anything with the dict afterwards. Possible TypedDict is a runner-up here: if this dict defines some important structure that you heavily use in processing later, you'd better say explicitly what each key means. Any means "leave me alone, I know what I'm doing". It'd probably be a good solution if you use this dict somehow later - object will require a lot of type narrowing to work with, and Any will just work. Union solution is the worst, unless your keys are really dynamic (no semantic meaning). Type checker will shout at you whenever you try to use a value as either of union types - it can always be of another type. (this is not a general advice: dict[str, int | str] may be a great type for something where keys do not represent a structure like JS object) So, in your specific case @app.get("/items/") async def read_items(q: str | None = None): results: dict[str, object] = {"items": [{"item_id": "Foo"}, {"item_id": "Bar"}]} if q: results.update({"q": q}) # warning here return results should pass. Since you and other answerers got misled by overload, here's how MutableMapping.update (dict subclasses MutableMapping in stubs) looks in typeshed: class MutableMapping(Mapping[_KT, _VT]): ... # More methods @overload def update(self, __m: SupportsKeysAndGetItem[_KT, _VT], **kwargs: _VT) -> None: ... @overload def update(self, __m: Iterable[tuple[_KT, _VT]], **kwargs: _VT) -> None: ... @overload def update(self, **kwargs: _VT) -> None: ... I have no idea why "Overload 2 is the closest match", according to Pylance, but you should hit the first one, of course.
3
4
77,992,977
2024-2-14
https://stackoverflow.com/questions/77992977/how-does-tensor-permutation-work-in-pytorch
I'm struggling with understanding the way torch.permute() works. In general, how exactly is an n-D tensor permuted? An example with an explanation for a 4-D or higher-dimensional tensor is highly appreciated. I've searched across the web but did not find any clear explanation.
All tensors are contiguous 1D data lists in memory. What differs is the interface PyTorch provides us with to access them. This all revolves around the notion of stride, which is the way this data is navigated through. Indeed, on a higher level, we prefer to reason about our data in higher dimensions by using tensor shapes. The following example and description are still valid for higher-dimensional tensors. The permutation operator offers a way to change how you access the tensor data by seemingly changing the order of dimensions. Permutations return a view and do not require a copy of the original tensor (as long as you do not make the data contiguous); in other words, the permuted tensor shares the same underlying data. At the user interface, permutation reorders the dimensions, which means the way this tensor is indexed changes depending on the order of dimensions supplied to the torch.Tensor.permute method. Take a simple 3D tensor example: x shaped (I=3,J=2,K=2). Given i<I, j<J, and k<K, x could naturally be accessed via x[i,j,k]. Concerning the underlying data being accessed, since the stride of x is (JK=4, K=2, 1), then x[i,j,k] corresponds to _x[i*JK + j*K + k] where _x is the underlying data of x. By "corresponds to", it means the data array associated with tensor x is being accessed with the index i*JK + j*K + k. If you now were to permute your dimensions, say y = x.permute(2,0,1), then the underlying data would remain the same (in fact data_ptr would yield the same pointer) but the interface to y is different! We have y with a shape of (K,I,J), and accessing y[k,i,j] translates to x[i,j,k], i.e. dim=2 moved to the front and dims=0,1 moved to the back... After permutation the stride is no longer the same: y has a stride of (1, JK, K), which is simply x's strides reordered, so y[k,i,j] corresponds to _x[k + i*JK + j*K], i.e. the very same element as x[i,j,k]. To read more about views and strides, refer to this.
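A short runnable sketch of the above, using the same shape as the example (the printed values are what the shapes, strides and pointers work out to for this particular case):

import torch

x = torch.arange(12).reshape(3, 2, 2)   # shape (I=3, J=2, K=2)
y = x.permute(2, 0, 1)                  # shape (K, I, J) = (2, 3, 2)

print(x.stride())                       # (4, 2, 1)  -> (J*K, K, 1)
print(y.stride())                       # (1, 4, 2)  -> x's strides reordered, no data moved
print(x.data_ptr() == y.data_ptr())     # True: same underlying storage
print(y.is_contiguous())                # False: only the indexing changed
print(bool(y[1, 2, 0] == x[2, 0, 1]))   # True: y[k, i, j] == x[i, j, k]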
2
3
77,999,734
2024-2-15
https://stackoverflow.com/questions/77999734/how-can-i-verify-an-emails-dkim-signature-in-python
Given a raw email, how can I validate the DKIM signature with Python? Ideally I’d like more than just a pass / fail result, I’d like to know details of any issues. I’ve found the dkimpy package, but the API isn’t obvious to me.
For a simple pass/fail validation: import dkim # dkimpy # Returns True/False res = dkim.verify(mail_data.encode()) For something more nuanced you can do this: d = dkim.DKIM(mail_data.encode(), logger=None, minkey=1024, timeout=5, tlsrpt=False) try: d.verify() # If it fails, a `dkim.ValidationError` exception will be thrown with details except dkim.ValidationError as e: print(e) # dkim.ValidationError: body hash mismatch (got b'PXUrNdoTzGcLtd4doJs+CufsiNvxoM5Q3SUPGi00C+I=', expected b'ax9SInd7Z3AQjRzcZSnY6UK392QEvjnKrjhAnsqfDnM=')
2
3
77,992,104
2024-2-14
https://stackoverflow.com/questions/77992104/why-does-google-vertex-api-blocking-responses-even-with-safety-off
I am attempting to use evaluate how Gemini performs on the Learned Hands dataset, which prompts to see if a given post has a specific legal issue included. My code is the following: from vertexai.preview import generative_models from vertexai.preview.generative_models import GenerativeModel prompt = """ Does the post discuss dealing with domestic violence and abuse, including getting protective orders, enforcing them, understanding abuse, reporting abuse, and getting resources and status if there is abuse? Post: Like the title states when I was 19 I was an escort for about a year. I was groomed into it by a man I met online. Tony. When I met him I was under the impression he wanted to be my sugar daddy. Instead I was greeted by him and a couple of his girls at a nice hotel. One girl in particular was the star of the show. We were all taken care of by her. Her name was Mo. They promised up to 1,000$ a day for my services. I didn’t have a choice I had nothing. We were forced to see up to 8 guys in a single day. Then cut our profit with Tony. However eventually I came to my senses, took my cash and cut ties. It was incredibly corrupt. They remained bitter at me, sending the occasional threatening message. Very petty nothing worth worrying about. He had my real information such as my email &amp; full name. Cut to about a year later and a former client sends me a news article. Seems Tony got greedy and started his own service. Except he had really fucked up. He was a third striker caught with an underage girl. Mo lured her in so she was also caught and they went to jail. I was completely removed from their life when they made these choices. Cut to three days ago. I get an email from a detective. He knows my full name. He wants to speak to me immediately. Wants to know everything I know. What the heck do I do!!! I’m terrified. Label: """ model = GenerativeModel("gemini-pro") safety_config = { generative_models.HarmCategory.HARM_CATEGORY_UNSPECIFIED: generative_models.HarmBlockThreshold.BLOCK_NONE, generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE, generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE, generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE, generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE, } chat = model.start_chat() response = chat.send_message( prompt, safety_settings=safety_config, ) print(response.candidates[0].text) Unfortunately, this produces the following output: Traceback (most recent call last): File "/Users/langston/Documents/Eval/vertex_sample.py", line 25, in <module> response = chat.send_message( ^^^^^^^^^^^^^^^^^^ File "/Users/langston/miniconda3/envs/prl/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 709, in send_message return self._send_message( ^^^^^^^^^^^^^^^^^^^ File "/Users/langston/miniconda3/envs/prl/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 806, in _send_message if response.candidates[0].finish_reason not in _SUCCESSFUL_FINISH_REASONS: ~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range After digging into the raw API response, I assume this error is due to content moderation. Two questions: Why is the moderation not disabled by my safety_settings? Why is the error message not better?
Please update your SDK version. The new versions have much better output when the result is blocked or has no candidates. It's not possible to completely disable safety blocks. This is not your case, but chat.send_message also raises an exception when the response is truncated due to exceeding the maximum output tokens. On the other hand, model.generate_content does not do any validation and can be used for debugging.
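As a hedged illustration of that last point (only attributes that already appear in the question's code and traceback are used; treat the exact debugging flow as an assumption):

response = model.generate_content(prompt, safety_settings=safety_config)

# Inspect what actually came back before indexing into it.
if not response.candidates:      # empty when the response was blocked
    print(response)              # the raw response object shows the block feedback
else:
    candidate = response.candidates[0]
    print(candidate.finish_reason)   # check this before relying on candidate.text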
3
0
78,005,435
2024-2-16
https://stackoverflow.com/questions/78005435/how-to-randomly-sample-very-large-pyarrow-dataset
I have a very large arrow dataset (181GB, 30m rows) from the huggingface framework I've been using. I want to randomly sample with replacement 100 rows (20 times), but after looking around, I cannot find a clear way to do this. I've tried converting to a pd.Dataframe so that I can use df.sample(), but python crashes everytime (assuming due to large dataset). So, I'm looking for something built-in within pyarrow. df = Dataset.from_file("embeddings_job/combined_embeddings_small/data-00000-of-00001.arrow") df=df.to_table().to_pandas() #crashes at this line random_sample = df.sample(n=100) Some ideas: not sure if this is w/replacement import numpy as np random_indices = np.random.randint(0, len(df), size=100) # Take the samples from the dataset sampled_table = df.select(random_indices) Using huggingface shuffle sample_size = 100 # Shuffle the dataset shuffled_dataset = df.shuffle() # Select the first 100 rows sampled_dataset = df.select(range(sample_size)) Is the only other way through terminal commands? Would this be correct: for i in {1..30}; do shuf -n 1000 -r file > sampled_$i.txt; done After getting each chunk, the plan is to run each chunk through a random forest algoritm. What is the best way to go about this? Also, I would like to note that whatever solution should make sure the indices do not get reset when I get the output subset.
A bit late, but I just had to write a function to randomly sample a pyarrow Table. It produces the sample directly from a pyarrow Table without converting to a pandas dataframe.

import random

import pyarrow as pa


def sample_table(table: pa.Table, n_sample_rows: int | None = None) -> pa.Table:
    # Return the table unchanged if no (or too large a) sample size is requested
    if n_sample_rows is None or n_sample_rows >= table.num_rows:
        return table
    # Draw row indices without replacement and gather them in one pass
    indices = random.sample(range(table.num_rows), k=n_sample_rows)
    return table.take(indices)
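For example, a small usage sketch on a toy table (the column names here are made up). Note that random.sample draws without replacement; for the with-replacement sampling asked about in the question, random.choices(range(table.num_rows), k=n_sample_rows) could be used instead:

import pyarrow as pa

table = pa.table({"a": list(range(1_000)), "b": [i * 0.5 for i in range(1_000)]})
sample = sample_table(table, n_sample_rows=100)
print(sample.num_rows)  # 100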
2
2
78,013,301
2024-2-17
https://stackoverflow.com/questions/78013301/python-dataclasses-and-sqlite3-adapters
What is the cleanest way to commit data stored in instances of a dataclass contained in a list to SQLite with sqlite3's executemany? For example: @dataclass class Person: name: str age: int a = Person("Alice", 21) b = Person("Bob", 22) c = Person("Charlie", 23) people = [a, b, c] # ... cursor.executemany(f"INSERT INTO {table} ({headers}) ({placeholders})", people) # <-- fails, obviously Is there a built-in mechanism that does the work for me (goes over each attribute of the dataclass), or an idiomatic way, or, if not, how do I implement an adapter for this without explicitly listing the attributes one by one? I could convert it to a list of tuples, that works out of the box, yet feels redundant.
Use the astuple() function to convert the dataclass instances to tuples.

from dataclasses import dataclass, fields, astuple

headers = ','.join(f.name for f in fields(Person))
people = map(astuple, [a, b, c])
placeholders = ','.join('?' for _ in fields(Person))  # one placeholder per field, not per character
table = 'persons'
cursor.executemany(f"INSERT INTO {table} ({headers}) VALUES ({placeholders})", people)
4
2
78,019,134
2024-2-19
https://stackoverflow.com/questions/78019134/how-to-properly-save-the-finetuned-transformer-model-in-safetensors-without-losi
I have been trying to finetune a casual LM model by retraining its lm_head layer. I've been training with Deepspeed Zero stage 3 (this part works fine). But I have problem saving my finetuned model and loading it back. I think the problem is that the unwrapped_model.save_pretrained() function automatically ignores the frozen parameters during saving. Here is my code and error messages: # finetuning: accelerator = Accelerator(log_with="tensorboard", project_dir=project_dir) model: torch.nn.Module = AutoModelForCausalLM.from_pretrained("the path to LM model", trust_remote_code=True) model.half() model.train() # freeze parameters for param in model.parameters(): param.requires_grad = False for param in model.lm_head.parameters(): param.requires_grad = True ... # save finetuned model if step == 5000 and accelerator.is_main_process: unwrapped_model: PreTrainedModel = accelerator.unwrap_model(model) save_fn = accelerator.save unwrapped_model.save_pretrained( "mycogagent", is_main_process=accelerator.is_main_process, save_function=save_fn, ) The codes above will print a warning: Removed shared tensor {a long list of parameter names in the original LM model except the parameter name of lm_head} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading And the saved model only takes 7MB disk space, however, I was expecting the saved model to be over 30GB. Looks like only the unfrozen part is saved to disk. To verify my speculation, I tried to load it back with the following codes: model = AutoModelForCausalLM.from_pretrained("mycogagent", trust_remote_code=True) But it will result in an error of size mismatch. RuntimeError: Error(s) in loading state_dict for CogAgentForCausalLM: size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([32000, 4096]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. I alse tried following the instruction in the error message, but it's not working, either. The program printed a list of warnings and got stuck. Some weights of CogAgentForCausalLM were not initialized from the model checkpoint at mycogagent and are newly initialized:[a list of parameter names] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Some weights of CogAgentForCausalLM were not initialized from the model checkpoint at mycogagent and are newly initialized because the shapes did not match: - model.embed_tokens.weight: found shape torch.Size([0]) in the checkpoint and torch.Size([32000, 4096]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. The warning messages clearly suggests loading the finetuned model is unsuccessful, and the reason why the program stuck looks like another issue. But all in all, my problem is how to save the full model instead of only the finetuned parameters? What's the proper convention to save/load finetuned huggingface models ? Expected Behaviors: The save_pretrained function should save all the tensors in the huggingface transformer model, even if their requires_grad attribute is False. --- UPDATE --- I just located the cause of my problem. The state_dicts works fine during the entire saving process, but id_tensor_storage(tensor) function (in site-packages/transformers/pytorch_utils.py) will not get the correct pointer to the tensor. 
The output of this function will always be (device(type='cuda', index=0), 0, 0). In practice, the unique_id should be equal to the memory address of the tensor instead of 0. Thus, the source of this issue must lie in the accelerator.unwrap_model function.
I think I found the solution. The problem is, in ZeRO3 we have to call accelerator.get_state_dict(model) function before saving. Directly saving the model itself won't work because its parameters are stored across different GPUs. Calling accelerator.get_state_dict(model) can force Deepspeed to collect the values of ALL parameters. There is an example [here][1] [1]: https://huggingface.co/docs/accelerate/usage_guides/deepspeed#:~:text=unwrapped_model%20%3D%20accelerator.unwrap_model,accelerator.get_state_dict(model)%2C%0A)
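For reference, a minimal sketch of the fixed saving block (based on the linked accelerate docs; "mycogagent" is the output path from the question, and the surrounding training setup is assumed to be unchanged):
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(
    "mycogagent",
    is_main_process=accelerator.is_main_process,
    save_function=accelerator.save,
    # Under ZeRO-3 this gathers the full set of parameters (frozen and trainable) from all ranks
    state_dict=accelerator.get_state_dict(model),
)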
3
0
78,019,438
2024-2-19
https://stackoverflow.com/questions/78019438/axioserror-request-failed-with-status-code-403-in-streamlit
I am getting this error "AxiosError: Request failed with status code 403" while uploading an image in a Streamlit app. This is my code (screenshot of the error): import streamlit as st from PIL import Image # Title st.title('Image Viewer') #Upload an image uploaded_image = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png", "webp"]) # Check the type of the uploaded image if uploaded_image is not None: # Open the image using PIL image = Image.open(uploaded_image) # Display the image st.image(image, use_column_width=True) I checked that all the necessary packages are installed and also tried running the same project in a newly created environment, but it didn't solve anything.
If you are running it locally try: streamlit run app.py --server.enableXsrfProtection false
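If you prefer not to pass the flag on every run, the same setting can live in the project's Streamlit config file (a sketch; note this disables XSRF protection, so treat it as a local-development workaround rather than a production fix):
# .streamlit/config.toml
[server]
enableXsrfProtection = false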
3
9
78,027,325
2024-2-20
https://stackoverflow.com/questions/78027325/stable-baselines-3-throws-valueerror-when-episode-is-truncated
So I'm trying to train an agent on my custom gymnasium environment trough stablebaselines3 and it kept crashing seemingly random and throwing the following ValueError: Traceback (most recent call last): File "C:\Users\bo112\PycharmProjects\ecocharge\code\Simulation Env\prototype_visu.py", line 684, in <module> model.learn(total_timesteps=time_steps, tb_log_name=log_name) File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\ppo\ppo.py", line 315, in learn return super().learn( File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\on_policy_algorithm.py", line 277, in learn continue_training = self.collect_rollouts(self.env, callback, self.rollout_buffer, n_rollout_steps=self.n_steps) File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\on_policy_algorithm.py", line 218, in collect_rollouts terminal_obs = self.policy.obs_to_tensor(infos[idx]["terminal_observation"])[0] File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\policies.py", line 256, in obs_to_tensor vectorized_env = vectorized_env or is_vectorized_observation(obs_, obs_space) File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\utils.py", line 399, in is_vectorized_observation return is_vec_obs_func(observation, observation_space) # type: ignore[operator] File "C:\Users\bo112\PycharmProjects\ecocharge\venv\lib\site-packages\stable_baselines3\common\utils.py", line 266, in is_vectorized_box_observation raise ValueError( ValueError: Error: Unexpected observation shape () for Box environment, please use (1,) or (n_env, 1) for the observation shape. I don't know why the observation shape/content would change though, since it doesn't change how the state gets its values at all. I figured out that it crashes, whenever the agent 'survives' a whole episode for the first time and truncation gets used instead of termination. Is there some kind of weird quirk for returning truncated and terminated that I don't know about? Because I can't find the error in my step function. def step(self, action): ... # handling the action etc. reward = 0 truncated = False terminated = False # Check if time is over/score too low - else reward function if self.n_step >= self.max_steps: truncated = True print('truncated') elif self.score < -1000: terminated = True # print('terminated') else: reward = self.reward_fnc_distance() self.score += reward self.d_score.append(self.score) self.n_step += 1 # state: [current power, peak power, fridge 1 temp, fridge 2 temp, [...] 
, fridge n temp] self.state['current_power'] = self.d_power_sum[-1] self.state['peak_power'] = self.peak_power for i in range(self.n_fridges): self.state[f'fridge{i}_temp'] = self.d_fridges_temp[i][-1] self.state[f'fridge{i}_on'] = self.fridges[i].on if self.logging: print(f'score: {self.score}') if (truncated or terminated) and self.logging: self.save_run() return self.state, reward, terminated, truncated, {} This is the general setup for training my models: hidden_layer = [64, 64, 32] time_steps = 1000_000 learning_rate = 0.003 log_name = f'PPO_{int(time_steps/1000)}k_lr{str(learning_rate).replace(".", "_")}' vec_env = make_vec_env(env_id=ChargeEnv, n_envs=4) model = PPO('MultiInputPolicy', vec_env, verbose=1, tensorboard_log='tensorboard_logs/', policy_kwargs={'net_arch': hidden_layer, 'activation_fn': th.nn.ReLU}, learning_rate=learning_rate, device=th.device("cuda" if th.cuda.is_available() else "cpu"), batch_size=128) model.learn(total_timesteps=time_steps, tb_log_name=log_name) model.save(f'models/{log_name}') vec_env.close() As mentioned above, episodes only get truncated when it also throws the ValueError and vice versa, so I'm pretty sure it has to be that. EDIT: From the answer below, I found the problem was to simply put all my float/Box values of self.state into numpy arrays before returning them like following: self.state['current_power'] = np.array([self.d_power_sum[-1]], dtype='float32') self.state['peak_power'] = np.array([self.peak_power], dtype='float32') for i in range(self.n_fridges): self.state[f'fridge{i}_temp'] = np.array([self.d_fridges_temp[i][-1]], dtype='float32') self.state[f'fridge{i}_on'] = self.fridges[i].on (Note: the dtype specification is not necessary in itself, it's just important for using the SubprocVecEnv from stable_baselines3)
The problem is most likely in your custom environment definition (ChargeEnv). The error says that it has a wrong observation shape (it is empty). You should check your ChargeEnv.observation_space. If you want to create a custom environment, make sure to read the documentation to set it up correctly (https://gymnasium.farama.org/tutorials/gymnasium_basics/environment_creation/#declaration-and-initialization, https://stable-baselines3.readthedocs.io/en/master/guide/custom_env.html). This is an example implementation of your ChargeEnv, where the observation space is defined correctly: import gymnasium as gym from gymnasium import spaces class ChargeEnv(gym.Env): def __init__(self, n_fridges=2): super().__init__() # Define observation space observation_space_dict = { 'current_power': spaces.Box(low=0, high=100, shape=(1,), dtype=np.float32), 'peak_power': spaces.Box(low=0, high=100, shape=(1,), dtype=np.float32) } for i in range(n_fridges): observation_space_dict[f'fridge{i}_temp'] = spaces.Box(low=-10, high=50, shape=(1,), dtype=np.float32) observation_space_dict[f'fridge{i}_on'] = spaces.Discrete(2) # 0 or 1 (off or on) self.observation_space = spaces.Dict(observation_space_dict) # Other environment-specific variables self.n_fridges = n_fridges # Initialize other variables as needed def reset(self): # Reset environment to initial state # Initialize state variables, e.g., current_power, peak_power, fridge temperatures, etc. # Return initial observation initial_observation = { 'current_power': np.array([50.0]), 'peak_power': np.array([100.0]) } for i in range(self.n_fridges): initial_observation[f'fridge{i}_temp'] = np.array([25.0]) # Example initial temperature initial_observation[f'fridge{i}_on'] = 0 # Example: Fridge initially off return initial_observation def step(self, action): # Implement step logic (similar to your existing step function) # Update state variables, compute rewards, check termination conditions, etc. # Return observation, reward, done flag, and additional info
2
2
78,002,829
2024-2-15
https://stackoverflow.com/questions/78002829/lru-cache-vs-dynamic-programming-stackoverflow-with-one-but-not-with-the-other
I'm doing this basic dp (Dynamic Programming) problem on trees (https://cses.fi/problemset/task/1674/). Given the structure of a company (hierarchy is a tree), the task is to calculate for each employee the number of their subordinates. This: import sys from functools import lru_cache # noqa sys.setrecursionlimit(2 * 10 ** 9) if __name__ == "__main__": n: int = 200000 boss: list[int] = list(range(1, 200001)) # so in my example it will be a tree with every parent having one child graph: list[list[int]] = [[] for _ in range(n)] for i in range(n-1): graph[boss[i] - 1].append(i+1) # directed so neighbours of a node are only its children @lru_cache(None) def dfs(v: int) -> int: if len(graph[v]) == 0: return 0 else: s: int = 0 for u in graph[v]: s += dfs(u) + 1 return s dfs(0) print(*(dfs(i) for i in range(n))) crashes (I googled the error message and it means stack overflow) Process finished with exit code -1073741571 (0xC00000FD) HOWEVER import sys sys.setrecursionlimit(2 * 10 ** 9) if __name__ == "__main__": n: int = 200000 boss: list[int] = list(range(1, 200001)) # so in my example it will be a tree with every parent having one child graph: list[list[int]] = [[] for _ in range(n)] for i in range(n-1): graph[boss[i] - 1].append(i+1) # directed so neighbours of a node are only its children dp: list[int] = [0 for _ in range(n)] def dfs(v: int) -> None: if len(graph[v]) == 0: dp[v] = 0 else: for u in graph[v]: dfs(u) dp[v] += dp[u] + 1 dfs(0) print(*dp) doesn't and it's exactly the same complexity right? The dfs goes exactly as deep in both situations too? I tried to make the two pieces of code as similar as I could. I tried 20000000 instead of 200000 (i.e. graph 100 times deeper) and it still doesn't stackoverflow for the second option. Obviously I could do an iterative version of it but I'm trying to understand the underlying reason why there are such a big difference between those two recursive options so that I can learn more about Python and its underlying functionning. I'm using Python 3.11.1.
lru_cache is implemented in C, its calls are interleaved with your function's calls, and your C code recursion is too deep and crashes. Your second program only has deep Python code recursion, not deep C code recursion, avoiding the issue. In Python 3.11 I get a similar bad crash: [Execution complete with exit code -11] In Python 3.12 I just get an error: Traceback (most recent call last): File "/ATO/code", line 34, in <module> dfs(0) File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ [Previous line repeated 496 more times] RecursionError: maximum recursion depth exceeded That's despite your sys.setrecursionlimit(2 * 10 ** 9). What’s New In Python 3.12 explains: sys.setrecursionlimit() and sys.getrecursionlimit(). The recursion limit now applies only to Python code. Builtin functions do not use the recursion limit, but are protected by a different mechanism that prevents recursion from causing a virtual machine crash So in 3.11, your huge limit is applied to C recursion as well, Python obediently attempts your deep recursion, its C stack overflows, and the program crashes. Whereas in 3.12 the limit doesn't apply to C recursion, Python protects itself with that different mechanism at a relatively shallow recursion depth, producing that error instead. Let's avoid that C recursion. If I use a (simplified) Python version of lru_cache, your first program works fine in both 3.11 and 3.12 without any other change: def lru_cache(_): def deco(f): memo = {} def wrap(x): if x not in memo: memo[x] = f(x) return memo[x] return wrap return deco See CPython's GitHub Issue 3.12 setrecursionlimit is ignored in connection with @functools.cache for more details and the current progress on this. There are efforts to increase the C recursion limit, but it looks like it'll remain only in the thousands. Not enough for your super deep recursion. Miscellaneous further information I might expand on later: Python 3.11 introduced Inlined Python function calls, which "avoids calling the C interpreting function altogether" when a Python function calls a Python function, so that "Most Python function calls now consume no C stack space". This allows your second program to succeed. Python's devguide has more details about The call stack of its interpreter. The open issue sys.setrecursionlimit docs are incorrect in 3.12 and 3.13 points out issues with the documentation vs what's actually happening. C/C++ maximum stack size of program on mainstream OSes appears to by default be only up to ~8 MB (answer from 2020. In my experience, this allows C recursion depth in the thousands or tens of thousands. In Linux, you might be able to increase/decrease the C stack limit with ulimit -s sizeInKB. If you want the Python version of the regular lru_cache, you could try the hack of importing and deleting the C version of its wrapper before you import the Python version (Technique from this answer, which did that with heapq): import _functools del _functools._lru_cache_wrapper from functools import lru_cache Or you could remove the import that replaces the Python implementation with the C implementation. But that's an even worse hack and I wouldn't do that. Maybe I would copy&rename the functools module and modify the copy. But I mostly mention these hacks to illustrate what's happening, or as a last resort.
4
7
78,024,123
2024-2-19
https://stackoverflow.com/questions/78024123/find-max-value-in-a-range-of-a-given-column-in-polars
I have the following dataframe: df = pl.DataFrame({ "Column A": [2, 3, 1, 4, 1, 3, 3, 2, 1, 0], "Column B": [ "Life", None, None, None, "Death", None, "Life", None, None, "Death" ] }) shape: (10, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column A ┆ Column B β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════║ β”‚ 2 ┆ Life β”‚ β”‚ 3 ┆ null β”‚ β”‚ 1 ┆ null β”‚ β”‚ 4 ┆ null β”‚ β”‚ 1 ┆ Death β”‚ β”‚ 3 ┆ null β”‚ β”‚ 3 ┆ Life β”‚ β”‚ 2 ┆ null β”‚ β”‚ 1 ┆ null β”‚ β”‚ 0 ┆ Death β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I want to create a new column, let's call it Column C. For each row where Column B is 'Life', Column C should have the maximum value in the range of values in Column A from that row until the row where Column B is 'Death'. In cases where Column B is not 'Life', Column C should be set to 'None' The end result should look like this: shape: (10, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column A ┆ Column B ┆ Column C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════║ β”‚ 2 ┆ Life ┆ 4.0 β”‚ β”‚ 3 ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null β”‚ β”‚ 4 ┆ null ┆ null β”‚ β”‚ 1 ┆ Death ┆ null β”‚ β”‚ 3 ┆ null ┆ null β”‚ β”‚ 3 ┆ Life ┆ 3.0 β”‚ β”‚ 2 ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null β”‚ β”‚ 0 ┆ Death ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ How can I achieve this using Polars in Python? Any help or suggestions would be appreciated!
I think the general idea is to assign "group ids" to each "range". A common approach for this is to use cumulative sum along with forward filling. ( df.with_columns( start = (pl.col("Column B") == "Life").cum_sum().forward_fill(), end = (pl.col("Column B") == "Death").cum_sum().forward_fill() ) .with_columns( group_id_1 = pl.col("start") + pl.col("end") ) .with_columns( group_id_2 = pl.when(pl.col("Column B") == "Death") .then(pl.col("group_id_1").shift()) .otherwise(pl.col("group_id_1")) ) ) shape: (10, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column A ┆ Column B ┆ start ┆ end ┆ group_id_1 ┆ group_id_2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ u32 ┆ u32 ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═══════β•ͺ═════β•ͺ════════════β•ͺ════════════║ β”‚ 2 ┆ Life ┆ 1 ┆ 0 ┆ 1 ┆ 1 β”‚ β”‚ 3 ┆ null ┆ 1 ┆ 0 ┆ 1 ┆ 1 β”‚ β”‚ 1 ┆ null ┆ 1 ┆ 0 ┆ 1 ┆ 1 β”‚ β”‚ 4 ┆ null ┆ 1 ┆ 0 ┆ 1 ┆ 1 β”‚ β”‚ 1 ┆ Death ┆ 1 ┆ 1 ┆ 2 ┆ 1 β”‚ # 2 -> 1 β”‚ 3 ┆ null ┆ 1 ┆ 1 ┆ 2 ┆ 2 β”‚ β”‚ 3 ┆ Life ┆ 2 ┆ 1 ┆ 3 ┆ 3 β”‚ β”‚ 2 ┆ null ┆ 2 ┆ 1 ┆ 3 ┆ 3 β”‚ β”‚ 1 ┆ null ┆ 2 ┆ 1 ┆ 3 ┆ 3 β”‚ β”‚ 0 ┆ Death ┆ 2 ┆ 2 ┆ 4 ┆ 3 β”‚ # 4 -> 3 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ group_id_1 gets most of the way there apart from the Death rows which need to be shifted to produce group_id_2 As it is sufficiently complex you may want to use variables and/or a function to build the final expression: start = pl.col("Column B") == "Life" end = pl.col("Column B") == "Death" group_id = (start.cum_sum() + end.cum_sum()).forward_fill() # id_1 group_id = ( # id_2 pl.when(end) .then(group_id.shift()) .otherwise(group_id) ) # Insert the max over each group into each Life row df.with_columns( pl.when(start) .then(pl.col("Column A").max().over(group_id)) .alias("Column C") ) shape: (10, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column A ┆ Column B ┆ Column C β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════║ β”‚ 2 ┆ Life ┆ 4 β”‚ β”‚ 3 ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null β”‚ β”‚ 4 ┆ null ┆ null β”‚ β”‚ 1 ┆ Death ┆ null β”‚ β”‚ 3 ┆ null ┆ null β”‚ β”‚ 3 ┆ Life ┆ 3 β”‚ β”‚ 2 ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null β”‚ β”‚ 0 ┆ Death ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
6
78,005,051
2024-2-16
https://stackoverflow.com/questions/78005051/modulenotfounderror-no-module-named-llama-index-graph-stores
I am trying to use the NebulaGraphStore class from llama_index via from llama_index.graph_stores.nebula import NebulaGraphStore as suggested by the llama_index documentation, but the following error occurred: ModuleNotFoundError Traceback (most recent call last) Cell In[2], line 1 ----> 1 from llama_index.graph_stores.nebula import NebulaGraphStore ModuleNotFoundError: No module named 'llama_index.graph_stores' I tried updating llama_index (version 0.10.5) with pip install -U llama-index but it doesn't work. How can I resolve this?
According to the latest llama-index docs, the graph-store modules are not included in the llama-index core package and need to be installed separately via pip: %pip install llama-index-llms-openai %pip install llama-index-embeddings-openai %pip install llama-index-graph-stores-nebula %pip install llama-index-llms-azure-openai https://docs.llamaindex.ai/en/stable/examples/index_structs/knowledge_graph/NebulaGraphKGIndexDemo.html
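As a quick sanity check after installing, the import from the question should resolve again; a minimal sketch of constructing the store (the credentials, address, space name and schema values below are placeholders taken from the linked NebulaGraph demo, and a running NebulaGraph instance is assumed):
import os
os.environ["NEBULA_USER"] = "root"
os.environ["NEBULA_PASSWORD"] = "nebula"
os.environ["NEBULA_ADDRESS"] = "127.0.0.1:9669"
from llama_index.graph_stores.nebula import NebulaGraphStore
graph_store = NebulaGraphStore(
    space_name="llamaindex",
    edge_types=["relationship"],
    rel_prop_names=["relationship"],
    tags=["entity"],
)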
2
2
78,030,255
2024-2-20
https://stackoverflow.com/questions/78030255/why-register-next-step-handler-doesnt-call-a-function-immediately-and-waits-for
I'm creating a simple bot using Telebot. The start func offers two buttons to the user (Currency/Weather). I expect the bot to move to the func "where_to_go" when a button is pressed. It happens only when I press the button twice or send any other second message. Then "where_to_go" func reads the first message. So it works, but why do I have to send one more message to push it? How can I fix it? Here is the code: @bot.message_handler(commands = ['start','main']) def start(message): markup = types.ReplyKeyboardMarkup() btn1 = types.KeyboardButton('Currency Calc') markup.row(btn1) btn2 = types.KeyboardButton('Weather Today') markup.row(btn2) file = open('./Start Photo.png', 'rb') bot.send_photo(message.chat.id, file, reply_markup=markup) bot.send_message(message.chat.id, f"Good night, {message.from_user.first_name} {message.from_user.last_name} πŸ’‹. \n") bot.send_message(message.chat.id, f"I'm PogodiPogoda Bot. Choose a button or just talk to me.", reply_markup=markup) bot.register_next_step_handler(message, where_to_go) def where_to_go(message): if message.text == 'Currency Calc': bot.register_next_step_handler(message, converter) elif message.text == 'Weather Today': bot.register_next_step_handler(message, city) The bot works well when he make the following step (from "where_to_go" to the funcs "converter" or "city").
You can try this : @bot.message_handler(commands = ['start','main']) def start(message): markup = types.ReplyKeyboardMarkup() btn1 = types.KeyboardButton('Currency Calc') markup.row(btn1) btn2 = types.KeyboardButton('Weather Today') markup.row(btn2) file = open('./Start Photo.png', 'rb') bot.send_photo(message.chat.id, file, reply_markup=markup) bot.send_message(message.chat.id, f"Good night, {message.from_user.first_name} {message.from_user.last_name} πŸ’‹. \n") bot.send_message(message.chat.id, f"I'm PogodiPogoda Bot. Choose a button or just talk to me.", reply_markup=markup) # Register next step handler directly within the start function bot.register_next_step_handler(message, where_to_go) def where_to_go(message): if message.text == 'Currency Calc': converter(message) # Call the function directly instead of registering another next step handler elif message.text == 'Weather Today': city(message) # Call the function directly instead of registering another next step handler @bot will move to the where_to_go function immediately after the user selects an option, without the need for an additional message
2
2
78,030,813
2024-2-20
https://stackoverflow.com/questions/78030813/fill-between-areas-with-gradient-color-in-matplotlib
I try to plot the following plot with gradient color ramp using matplotlib fill_between. I can manage to get the color below the horizontal line correctly, but I had difficult time to do the same for above horizontal line. The idea is to keep color above and below the horizontal line up to y values boundary, and the rest removed. Any help would be very great. Here is the same code and data. import numpy as np import matplotlib.pyplot as plt text = "54.38,53.99,65.39,66.22,57.65,49.17,42.72,42.07,44.88,46.56,55.27,57.28,60.54,65.87,37.61,44.21,50.62,56.15,52.65,52.17,57.71,61.21,60.77,50.74,62.9,56.51,62.1,52.79,53.96,50.33,48.44,43.72,39.03,36.18,41.6,50.44,50.67,53.68,50.05,47.92,40.43,29.87,24.78,15.53,17.37,20.56,20.3,30.22,38.89,45.04,45.87,47.99,50.97,54.59,53.69,48.09,48.56,47.62,52.78,57.26,48.77,51.6,56.6,58.37,48.39,43.63,42.15,40.05,33.62,48.73,45.29,51.47,50.73,51.52,54.86,49.18,51.03,50.26,45.73,46.7,52.0,42.17,49.93,53.08,51.34,52.44,54.06,50.56,51.04,55.47,52.71,52.36,53.59,65.08,62.74,59.23,56.07,49.21,46.67,40.62,44.59,44.89,35.57,37.67,49.74,46.52,38.47,42.08,49.73,53.82,60.76,56.2,56.41,53.83,59.9,51.06,46.9,49.11,36.01,46.72,56.53,59.04,58.52,60.78,60.02,51.78,54.78,52.88,51.73,59.76,67.84,66.63,60.97,53.69,53.17,52.44,46.54,49.08,43.62,40.81,41.64,43.3,46.36,58.07,54.95,50.82,54.17,51.37,55.26,53.55,47.57,36.05,38.66,35.3,51.02,54.96,59.34,53.47,48.34,50.25,54.06,49.99,47.44,41.59,37.58,42.39,41.41,41.84,47.77,52.75,54.84,49.63,51.5,56.26,52.47,49.35,47.13,35.94,28.42,33.14,44.38,56.38,59.63,60.86,64.35,54.59,63.34,74.68,66.29,58.33,57.64,64.64,61.49,59.11,53.72,60.37,56.66,56.92,61.58,57.21,58.12,61.93,45.75,54.77,52.95,50.06,54.54,52.64,48.31,49.9,56.49,54.99,51.83,61.78,50.93,52.92,58.76,58.82,51.26,48.29,41.18,50.69,54.0,45.13,48.72,45.32,42.29,30.57,41.28,53.54,55.47,57.54,53.48,50.01,52.42,55.38,53.12,53.31,56.26,57.56,53.87,53.48,53.82,56.8,58.31,60.45,63.22,68.44,76.04,72.14,75.31,64.74,56.5,60.42,54.05,55.62,52" y = np.array([float(i) for i in text.split(",")]) x = np.arange(len(y)) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10,6)) cmap ="Spectral" # Define above and below threshold below = y.copy() below[below<=50]=0 above = y.copy() above[above>50]=0 ymin = 10 ymax = 80 xi, yi = np.meshgrid(x, np.linspace(ymin, ymax, 100)) ax.contourf( xi, yi, yi, cmap=cmap, levels=np.linspace(ymin, ymax, 100) ) ax.axhline(50, linestyle="--", linewidth=1.0, color="k") ax.fill_between(x=x, y1=y, color="w") ax.plot(x,y, color="k") ax.set_ylim(ymin, ymax)
You can use fill_between with np.minimum to color below the threshold line. And with np.maximum to color above that line. import numpy as np import matplotlib.pyplot as plt y = [54.38,53.99,65.39,66.22,57.65,49.17,42.72,42.07,44.88,46.56,55.27,57.28,60.54,65.87,37.61,44.21,50.62,56.15,52.65,52.17,57.71,61.21,60.77,50.74,62.9,56.51,62.1,52.79,53.96,50.33,48.44,43.72,39.03,36.18,41.6,50.44,50.67,53.68,50.05,47.92,40.43,29.87,24.78,15.53,17.37,20.56,20.3,30.22,38.89,45.04,45.87,47.99,50.97,54.59,53.69,48.09,48.56,47.62,52.78,57.26,48.77,51.6,56.6,58.37,48.39,43.63,42.15,40.05,33.62,48.73,45.29,51.47,50.73,51.52,54.86,49.18,51.03,50.26,45.73,46.7,52.,42.17,49.93,53.08,51.34,52.44,54.06,50.56,51.04,55.47,52.71,52.36,53.59,65.08,62.74,59.23,56.07,49.21,46.67,40.62,44.59,44.89,35.57,37.67,49.74,46.52,38.47,42.08,49.73,53.82,60.76,56.2,56.41,53.83,59.9,51.06,46.9,49.11,36.01,46.72,56.53,59.04,58.52,60.78,60.02,51.78,54.78,52.88,51.73,59.76,67.84,66.63,60.97,53.69,53.17,52.44,46.54,49.08,43.62,40.81,41.64,43.3,46.36,58.07,54.95,50.82,54.17,51.37,55.26,53.55,47.57,36.05,38.66,35.3,51.02,54.96,59.34,53.47,48.34,50.25,54.06,49.99,47.44,41.59,37.58,42.39,41.41,41.84,47.77,52.75,54.84,49.63,51.5,56.26,52.47,49.35,47.13,35.94,28.42,33.14,44.38,56.38,59.63,60.86,64.35,54.59,63.34,74.68,66.29,58.33,57.64,64.64,61.49,59.11,53.72,60.37,56.66,56.92,61.58,57.21,58.12,61.93,45.75,54.77,52.95,50.06,54.54,52.64,48.31,49.9,56.49,54.99,51.83,61.78,50.93,52.92,58.76,58.82,51.26,48.29,41.18,50.69,54.,45.13,48.72,45.32,42.29,30.57,41.28,53.54,55.47,57.54,53.48,50.01,52.42,55.38,53.12,53.31,56.26,57.56,53.87,53.48,53.82,56.8,58.31,60.45,63.22,68.44,76.04,72.14,75.31,64.74,56.5,60.42,54.05,55.62,52.] x = np.arange(len(y)) ymin = 10 ymax = 80 ythresh = 50 fig, ax = plt.subplots(figsize=(10, 6)) ax.axhline(ythresh, linestyle="--", linewidth=1.0, color="k") ax.imshow(np.linspace(0, 1, 256).reshape(-1, 1), origin='lower', aspect='auto', cmap='Spectral', extent=[x[0], x[-1], ymin, ymax]) ax.fill_between(x, ymin, np.minimum(y, ythresh), color="w") ax.fill_between(x, ymax, np.maximum(y, ythresh), color="w") ax.plot(x, y, color="k") plt.show() Here is a version combining the two fill_betweens into one and use it to clip the gradient. That way, the original background stays visible. As an example, grid lines are drawn behind the gradient. Also, a TwoSlopeNorm puts the threshold at the center of the colormap. The 'turbo' colormap is similar to 'Spectral' and a bit brighter. 
import matplotlib.pyplot as plt from matplotlib.colors import TwoSlopeNorm from matplotlib.patches import PathPatch import numpy as np y = [54.38,53.99,65.39,66.22,57.65,49.17,42.72,42.07,44.88,46.56,55.27,57.28,60.54,65.87,37.61,44.21,50.62,56.15,52.65,52.17,57.71,61.21,60.77,50.74,62.9,56.51,62.1,52.79,53.96,50.33,48.44,43.72,39.03,36.18,41.6,50.44,50.67,53.68,50.05,47.92,40.43,29.87,24.78,15.53,17.37,20.56,20.3,30.22,38.89,45.04,45.87,47.99,50.97,54.59,53.69,48.09,48.56,47.62,52.78,57.26,48.77,51.6,56.6,58.37,48.39,43.63,42.15,40.05,33.62,48.73,45.29,51.47,50.73,51.52,54.86,49.18,51.03,50.26,45.73,46.7,52.,42.17,49.93,53.08,51.34,52.44,54.06,50.56,51.04,55.47,52.71,52.36,53.59,65.08,62.74,59.23,56.07,49.21,46.67,40.62,44.59,44.89,35.57,37.67,49.74,46.52,38.47,42.08,49.73,53.82,60.76,56.2,56.41,53.83,59.9,51.06,46.9,49.11,36.01,46.72,56.53,59.04,58.52,60.78,60.02,51.78,54.78,52.88,51.73,59.76,67.84,66.63,60.97,53.69,53.17,52.44,46.54,49.08,43.62,40.81,41.64,43.3,46.36,58.07,54.95,50.82,54.17,51.37,55.26,53.55,47.57,36.05,38.66,35.3,51.02,54.96,59.34,53.47,48.34,50.25,54.06,49.99,47.44,41.59,37.58,42.39,41.41,41.84,47.77,52.75,54.84,49.63,51.5,56.26,52.47,49.35,47.13,35.94,28.42,33.14,44.38,56.38,59.63,60.86,64.35,54.59,63.34,74.68,66.29,58.33,57.64,64.64,61.49,59.11,53.72,60.37,56.66,56.92,61.58,57.21,58.12,61.93,45.75,54.77,52.95,50.06,54.54,52.64,48.31,49.9,56.49,54.99,51.83,61.78,50.93,52.92,58.76,58.82,51.26,48.29,41.18,50.69,54.,45.13,48.72,45.32,42.29,30.57,41.28,53.54,55.47,57.54,53.48,50.01,52.42,55.38,53.12,53.31,56.26,57.56,53.87,53.48,53.82,56.8,58.31,60.45,63.22,68.44,76.04,72.14,75.31,64.74,56.5,60.42,54.05,55.62,52.] x = np.arange(len(y)) ymin = 10 ymax = 80 ythresh = 50 fig, ax = plt.subplots(figsize=(10, 6)) ax.grid(True, zorder=-1) # set grid behind the plot ax.axhline(ythresh, linestyle="--", linewidth=1.0, color="k") gradient_rect = ax.imshow(np.linspace(ymin, ymax, 256).reshape(-1, 1), origin='lower', aspect='auto', cmap='turbo_r', norm=TwoSlopeNorm(vcenter=ythresh), extent=[x[0], x[-1], ymin, ymax], zorder=1) fill_poly = ax.fill_between(x, np.minimum(y, 50), np.maximum(y, 50), color="none") clip_poly = PathPatch(fill_poly.get_paths()[0], transform=ax.transData) gradient_rect.set_clip_path(clip_poly) fill_poly.remove() # the polygon isn't needed anymore ax.plot(x, y, color="k") plt.show()
3
3
78,029,515
2024-2-20
https://stackoverflow.com/questions/78029515/managed-dictionary-does-not-behave-as-expected-in-multiprocessing
From what I've managed to figure out so far, this may only be a problem in macOS. Here's my MRE: from multiprocessing import Pool, Manager from functools import partial def foo(d, n): d.setdefault("X", []).append(n) def main(): with Manager() as manager: d = manager.dict() with Pool() as pool: pool.map(partial(foo, d), range(5)) print(d) if __name__ == "__main__": main() Output: {'X': []} Expected output: {'X': [0, 1, 2, 3, 4]} Platform: macOS 14.3.1 Python 3.12.2 Maybe I'm doing something fundamentally wrong but I understood that the whole point of the Manager was to handle precisely this kind of scenario. EDIT There is another hack which IMHO should be unnecessary but even this doesn't work (produces identical output): from multiprocessing import Pool, Manager def ipp(d, lock): global gDict, gLock gDict = d gLock = lock def foo(n): global gDict, gLock with gLock: gDict.setdefault("X", []).append(n) def main(): with Manager() as manager: d = manager.dict() lock = manager.Lock() with Pool(initializer=ipp, initargs=(d, lock)) as pool: pool.map(foo, range(5)) print(d) if __name__ == "__main__": main()
This is a crappy "answer" and I apologize for that and will delete it once you post a comment that you have seen it. I get the distinct impression that the answer is related to the "sub-list" not being managed per that post I mentioned. For example: This seems to work: def foo(d, n): d.setdefault("X", []).append(n) # the thing being updated is managed def main(): with Manager() as manager: d = manager.dict() d["X"] = manager.list() with Pool() as pool: pool.map(partial(foo, d), range(5)) print(list(d["X"])) if __name__ == "__main__": main() as does: def foo(d, n): target = d.setdefault("X", []) target.append(n) d["X"] = target # the thing being updated is managed def main(): with Manager() as manager: d = manager.dict() with Pool() as pool: pool.map(partial(foo, d), range(5)) print(d) if __name__ == "__main__": main() So updating a "managed" object seems to work while updating an unmanaged list even inside a managed container object does not.
2
2
78,027,751
2024-2-20
https://stackoverflow.com/questions/78027751/keyerror-when-creating-new-column-from-groupby-operation-in-pandas
I am trying to create a new column by performing some operations on the existing columns but it is throwing a key error in my code. I tried to debug it by using df.columns and copy pasted the exact names but still I got the same error. My code is as follows: def calculate_elasticity(group): sales_change = group['Primary Sales Quantity'].pct_change() price_change = group['MRP'].pct_change() elasticity = sales_change / price_change return elasticity df['Variant-based Elasticity'] = df.groupby('Variant').transform(calculate_elasticity) The error shown is --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 3801 try: -> 3802 return self._engine.get_loc(casted_key) 3803 except KeyError as err: 16 frames pandas/_libs/index_class_helper.pxi in pandas._libs.index.Int64Engine._check_type() KeyError: 'Primary Sales Quantity' The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 3802 return self._engine.get_loc(casted_key) 3803 except KeyError as err: -> 3804 raise KeyError(key) from err 3805 except TypeError: 3806 # If we have a listlike key, _check_indexing_error will raise KeyError: 'Primary Sales Quantity' I tried to debug and following is the result of df.columns Index(['Cal. year / month', 'Material', 'Product Name', 'MRP', 'Distribution Channel (Master)', 'Unnamed: 5', 'L1 Prod Category', 'L2 Prod Brand', 'L3 Prod Sub-Category', 'State', 'Primary Actual GSV Value', 'Primary Sales Qty (CS)', 'Secondary GSV', 'Secondary sales Qty(CS)', 'Primary Volume(MT/KL)', 'Secondary Volume(MT/KL)', 'Variant', 'Weight', 'Offers', 'Primary Sales Quantity'], dtype='object') and result to print(df['Primary Sales Quantity']) is 0 155 1 16953 2 455 3 138 4 2653 ... 14147 6 14148 1 14149 8428 14150 237 14151 24 Name: Primary Sales Quantity, Length: 14152, dtype: int64 I tried to debug using the column names. I can even access the column by that name just the error is thrown in this function.
If use GroupBy.transform is not possible processing 2 columns together, need GroupBy.apply: def calculate_elasticity(group): sales_change = group['Primary Sales Quantity'].pct_change() price_change = group['MRP'].pct_change() group['Variant-based Elasticity'] = sales_change / price_change return group df = df.groupby('Variant', group_keys=False).apply(calculate_elasticity) print (df) Variant Primary Sales Quantity MRP Variant-based Elasticity 0 a 10 8 NaN 1 a 7 10 -1.200000 2 b 87 3 NaN 3 b 8 2 2.724138 Or change solution without helper function: g = df.groupby('Variant') df['Variant-based Elasticity'] = (g['Primary Sales Quantity'].pct_change() / g['MRP'].pct_change()) print (df) Variant Primary Sales Quantity MRP Variant-based Elasticity 0 a 10 8 NaN 1 a 7 10 -1.200000 2 b 87 3 NaN 3 b 8 2 2.724138 Alternative solution with helper df1 DataFrame: df1 = df.groupby('Variant')[['Primary Sales Quantity', 'MRP']].pct_change() df['Variant-based Elasticity'] = df1['Primary Sales Quantity'] / df1['MRP'] print (df) Variant Primary Sales Quantity MRP Variant-based Elasticity 0 a 10 8 NaN 1 a 7 10 -1.200000 2 b 87 3 NaN 3 b 8 2 2.724138 Sample data: df = pd.DataFrame({'Variant': ['a', 'a', 'b', 'b'], 'Primary Sales Quantity': [10, 7, 87, 8], 'MRP': [8, 10, 3, 2]})
2
2
78,025,976
2024-2-20
https://stackoverflow.com/questions/78025976/create-a-column-for-cumulative-sum-for-each-string-value-in-a-column-pandas
I have the following pandas dataframe import pandas as pd import random random.seed(42) pd.DataFrame({'index': list(range(0,10)), 'cluster': [random.choice(['S', 'C', ]) for l in range(0,10)]}) index cluster 0 0 S 1 1 S 2 2 C 3 3 S 4 4 S 5 5 S 6 6 S 7 7 S 8 8 C 9 9 S I would like to create 5 new columns, one for each unique value of the cluster column, which will be the cumulative sum of appearances of each value. The output pandas dataframe should look like this : pd.DataFrame({'index': list(range(0,10)), 'cluster': [random.choice(['S', 'C', ]) for l in range(0,10)], 'cumulative_S': [1,2,2,3,4,5,6,7,7,8], 'cumulative_C': [0,0,1,1,1,1,1,1,2,2]}) index cluster cumulative_S cumulative_C 0 0 S 1 0 1 1 S 2 0 2 2 C 2 1 3 3 S 3 1 4 4 S 4 1 5 5 S 5 1 6 6 S 6 1 7 7 S 7 1 8 8 C 7 2 9 9 S 8 2 How can I achieve that ?
Code df is your input dataframe tmp = (pd.get_dummies(df['cluster']) .cumsum()[df['cluster'].unique()] .add_prefix('cumulative_') ) out = pd.concat([df, tmp],axis=1) out index cluster cumulative_S cumulative_C 0 0 S 1 0 1 1 S 2 0 2 2 C 2 1 3 3 S 3 1 4 4 S 4 1 5 5 S 5 1 6 6 S 6 1 7 7 S 7 1 8 8 C 7 2 9 9 S 8 2
2
2
78,025,261
2024-2-20
https://stackoverflow.com/questions/78025261/re-findall-giving-different-results-to-re-compile-regex
Why does re.compile.findall not find "um" if "um" is at the beginning of the string (it works fine if "um" isn't at the beginning of the string, as per the last 2 lines below)? >>> s = "um" >>> re.findall(r"\bum\b", s, re.IGNORECASE) ['um'] >>> re.compile(r"\bum\b").findall(s, re.IGNORECASE) [] >>> re.compile(r"\bum\b").findall(s + " foobar", re.IGNORECASE) [] >>> re.compile(r"\bum\b").findall("foobar " + s, re.IGNORECASE) ['um'] I would have expected the two options to be identical. What am I missing?
You intended to pass re.IGNORECASE to the compile() function, but in the failing cases you're actually passing it to the findall() method. There it's interpreted as an integer giving the starting position for the search to begin. Its value as an integer isn't defined, but happens to be 2 today: >>> int(re.IGNORECASE) 2 Rewrite the code to work as intended, and it's fine; for example: >>> re.compile(r"\bum\b", re.IGNORECASE).findall(s + " foobar") # pass to compile() ['um'] As originally written, it can't work unless "um" starts at or after position 2: >>> re.compile(r"\bum\b").findall(" " + s, re.IGNORECASE) [] >>> re.compile(r"\bum\b").findall(" " + s, re.IGNORECASE) # starts at 2 ['um']
2
3
78,024,292
2024-2-20
https://stackoverflow.com/questions/78024292/true-python-scatter-function
I am writing this to have a reference post when it comes to plotting circles with a scatter plot but the user wants to provide a real radius for the scattered circles instead of an abstract size. I have been looking around and there are other posts that explain the theory of it, but have not come accross a ready-to-use function. I have tried myself and hope that I am close to finding the solution. This is what I have so far: import matplotlib.pyplot as plt import numpy as np def true_scatter(x, y, r, ax, **kwargs): # Should work for an equal aspect axis ax.set_aspect('equal') # Access the DPI and figure size dpi = ax.figure.dpi fig_width_inch, _ = ax.figure.get_size_inches() # Calculate plot size in data units xlim = ax.get_xlim() plot_width_data_units = xlim[1] - xlim[0] # Calculate the scale factor: pixels per data unit plot_width_pixels = fig_width_inch * dpi scale = plot_width_pixels / plot_width_data_units # Convert radius to pixels, then area to points squared radius_pixels = r * scale area_pixels_squared = np.pi * (radius_pixels ** 2) area_points_squared = area_pixels_squared * (72 / dpi) ** 2 # Scatter plot with converted area scatter = ax.scatter(x, y, s=area_points_squared, **kwargs) return scatter # Example with single scatter fig, ax = plt.subplots() ax.set_xlim(-2, 2) ax.set_ylim(-2, 2) scatter = true_scatter(0, 0, 1, ax) ax.set_xlim(-2, 2) ax.set_ylim(-2, 2) plt.grid() plt.show() Wrong single scatter plot Unfortunately, it is not quite the answer. I get a circle of radius ~1.55 instead of 1. Would anyone be able to spot what is wrong with my approach? Thank you!
It is not very clear whether you want r to be the radius or the diameter of the circle. The code below supposes it is the radius (just leave out the * 2 if you want the diameter. For a discussion about how the dot size is measured, see this post. The code below converts the radius in data units to pixels, and then to "points". Three different sizes are tested. import matplotlib.pyplot as plt def true_scatter(x, y, r, ax, **kwargs): # the radius is given in data coordinates in the x direction # Should work for an equal aspect axis ax.set_aspect('equal') # measure the data coordinates in pixels radius_in_pixels, _ = ax.transData.transform((r, 0)) - ax.transData.transform((0, 0)) # one "point" is 1/72 of an inch radius_in_points = radius_in_pixels * 72.0 / ax.figure.dpi # the scatter plot size is set in square point units area_points_squared = (radius_in_points * 2) ** 2 # Scatter plot with converted area scatter = ax.scatter(x, y, s=area_points_squared, **kwargs) return scatter # Example with three scatter dots with different radii fig, ax = plt.subplots(figsize=(5, 5)) ax.set_xlim(-2, 2) ax.set_ylim(-2, 2) scatter = true_scatter(0, 0, 1.5, ax) scatter = true_scatter(0, 0, 1, ax) scatter = true_scatter(0, 0, 0.5, ax) ax.grid() plt.show()
2
2
78,022,472
2024-2-19
https://stackoverflow.com/questions/78022472/what-is-windows-equivalent-of-pwd-getpwuid
What is the Windows equivalent of pwd.getpwuid? I believe pwd is a UNIX-only module. Here is the code and the error when using Windows: import pwd file_owner_name = pwd.getpwuid(file_owner_uid).pw_name import pwd ModuleNotFoundError: No module named 'pwd'
Unfortunately not supported by Python out of the box. Methods like pathlib.Path.owner() always raise NotImplementedError on Windows. You'll need to call Windows functions directly. The easiest way is to install PyWin32. Then: import win32security path = './' security_descriptor = win32security.GetFileSecurity(path, win32security.OWNER_SECURITY_INFORMATION) owner_sid = security_descriptor.GetSecurityDescriptorOwner() name, domain, type = win32security.LookupAccountSid(None, owner_sid) print(f'{domain}\\{name}') # BOPPREH-DESKTOP\boppreh Alternatively, you can use ctypes to avoid an extra dependency, but the code is a lot more complicated, and most mistakes will result in segfaults or unreliable behavior instead of nice exceptions.
2
2
78,001,435
2024-2-15
https://stackoverflow.com/questions/78001435/remove-central-noise-from-image
I'm working with biologists, who are imaging DNA strands under a microscope, giving greyscale PNGs. Depending on the experiment, there can be a lot of background noise. On the ones where there is quite little, a simple threshold for pixel intensity is generally enough to remove it. However for some, it's not so simple. (not sure why it's uploading as jpg - this should let you download it as PNG) When I do apply a threshold for pixel intensity, I get this result: If I raise the threshold any more, I'll start to lose the pixels at the edges of the image that aren't very bright. The noise seems to follow a gaussian distribution based on its location within the image, probably due to the light source of the microscope. What's the best way to compare pixels to the local background noise? Can I integrate the noise distribution? For the moment I've been using imageio.v3 in python3 for basic pixel manipulation, but I'm open to suggestions. EDIT: I tried to use histogram equalisation, but it doesn't quite seem to be what I'm looking for...
I am not sure what result you want so I am working somewhat in the dark. I suspect you want a "Local Area Threshold" - see Wikipedia. Also referred to as "Adaptive Threshold". It is available, or can be achieved in many image processing packages, but for brevity and until I (or others) better understand what you want as a result, I will just demonstrate with ImageMagick which is free and can be installed on macOS, Linux and Windows. The command is -lat which means "Local Area Threshold". So, the syntax to highlight in white any pixel that is more than 2% brighter than the surrounding local area of 80x80 is as follows: magick image.png -alpha off -lat 80x80+2% result.png If you want any pixel more than 2% brighter than its surrounding 10x10 neighbourhood, use: magick image.png -alpha off -lat 10x10+2% result.png Note that I used -alpha off because your image has a superfluous alpha channel I wanted to discard. Using OpenCV you'll get something vaguely similar with: #!/usr/bin/env python3 import cv2 as cv # Load image as greyscale im = cv.imread('image.png', cv.IMREAD_GRAYSCALE) # Local area is 49x49, and pixel must be 15 above threshold to show in output thMean = cv.adaptiveThreshold(im,255,cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY,49,-15) cv.imwrite('result-mean.png', thMean) # Local area is 29x29, and pixel must be 10 above threshold to show in output thGauss = cv.adaptiveThreshold(im,255,cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY,29,-10) cv.imwrite('result-Gauss.png', thGauss)
2
2
78,021,584
2024-2-19
https://stackoverflow.com/questions/78021584/runtimeerror-await-wasnt-used-with-future-async-dict-comprehension-in-python
I've finished this tutorial about async list comprehension. Now I want to try an async dict comprehension. This is the code to replicate: import asyncio async def div7_tuple(x): return x, x/7 async def main(): lost = [4, 8, 15, 16, 23, 42] awaitables = asyncio.gather(*[div7_tuple(x) for x in lost]) print({k: v for k, v in awaitables}) if __name__ == '__main__': asyncio.run(main()) However, this results in an exception: > RuntimeError: await wasn't used with future > sys:1: RuntimeWarning: coroutine 'div7_tuple' was never awaited How can I do this with asyncio.gather()? It's weird that this doesn't work for constructing an unordered object (a dict) all at once from gather(), because it works if I await each value one at a time: async def div7(x): return x/7 async def main2(): lost = [4, 8, 15, 16, 23, 42] print({k: await v for k, v in [(x, div7(x)) for x in lost]})
gather() gives you a Future object back (this is the Future object that the error message refers to - "await wasn't used with future"). If you need the result of that object (in order to iterate over it) you need to await it first: async def main(): lost = [4, 8, 15, 16, 23, 42] awaitables = asyncio.gather(*[div7_tuple(x) for x in lost]) print({k: v for k, v in await awaitables}) or: async def main(): lost = [4, 8, 15, 16, 23, 42] awaitables = await asyncio.gather(*[div7_tuple(x) for x in lost]) print(dict(awaitables)) The relevant code in the source code is here: def __await__(self): if not self.done(): self._asyncio_future_blocking = True yield self if not self.done(): raise RuntimeError("await wasn't used with future") return self.result() # May raise too. __iter__ = __await__ # make compatible with 'yield from'. That __iter__ = __await__ is what makes it possible to iterate over a Future (rather than getting "TypeError: 'Future' object is not iterable"), but since you didn't use await to make that Future done (i.e. have its result set), it shows you that error message.
2
4