question_id (int64) | creation_date (string) | link (string) | question (string) | accepted_answer (string) | question_vote (int64) | answer_vote (int64) |
---|---|---|---|---|---|---|
78,901,362 | 2024-8-22 | https://stackoverflow.com/questions/78901362/why-are-dict-keys-dict-values-and-dict-items-not-subscriptable | Referring to an item of a dict_keys, dict_values, or dict_items object by index raises a type error. For example: > my_dict = {"foo": 0, "bar": 1, "baz": 2} > my_dict.items()[0] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ----> 1 my_dict.items()[0] TypeError: 'dict_items' object is not subscriptable My question is not how to address this. I am aware that one can just call list() or tuple() on a dict_keys, dict_values, or dict_items object and then subscript that. My question is why does this behaviour still persist in python given that the order of items in a dictionary has been guaranteed since python 3.7.0. Why is it not possible (or not desirable) to refactor the dict_keys, dict_values, and dict_items types so that they are subscriptable? The documentation for the stdlib describes these types as "view objects". I imagine that is where the answer to my question lies but I can't find a more detailed description of what a "view object" actually is. Can anyone enlighten me on this matter as well? | "Ordered" doesn't mean "efficiently indexable". There is no more efficient way to retrieve the Nth item of a dict view than to iterate N steps. Allowing indexing would give the impression that views can support this operation efficiently, which would in turn encourage extremely inefficient code. Trying to change the implementation so indexing is efficient would make other dict operations much less efficient, which isn't worthwhile for the use cases dicts are actually designed for. | 6 | 5 |
78,902,666 | 2024-8-22 | https://stackoverflow.com/questions/78902666/rolling-rank-groupby | I am trying to get a rolling rank based on group by. This is my demo data and "Rolling 2Y Rank" is the expected column. The way it works is I intend to group by each "ID" and calculate the rank based on its own historical "Value". df = pd.DataFrame({ 'Year': [2000,2000,2000,2001,2001,2001,2002,2002,2002,2003,2003,2003], 'ID': ['A','B','C','A','B','C','A','B','C','A','B','C'], 'Value': [5,1,2,3,4,3,2,7,1,1,13,23], 'Rolling 2Y Rank': [np.nan,np.nan,np.nan, 2,1,1,2,1,2,2,1,1]}) Year ID Value Rolling 2Y Rank 2000 A 5 nan 2000 B 1 nan 2000 C 2 nan 2001 A 3 2 2001 B 4 1 2001 C 3 1 2002 A 2 2 2002 B 7 1 2002 C 1 2 2003 A 1 2 2003 B 13 1 2003 C 23 1 For example, for "ID"="A": For the first instance ("Year"=="2000"), there is only one data point, so it has nothing to compare against, so it is NA. For the second instance, we compare 3 with 5 (rolling 2-year window), so we get rank 2 (because 5 is larger than 3). For the third instance, we compare 2 against 3, so we get the rank of 2, and so on and so forth. df[df['ID']=='A'] I have tried this and it did not work out: df.groupby('ID')['Value'].rolling(2).rank() | You want the reversed (descending) rank, so pass that to the rank function. Then you can do a merge to assign back to the original dataframe: df = df.merge( df.groupby('ID').rolling(2, on='Year')['Value'].rank(ascending=False), on=['ID','Year'], ) Output: Year ID Value_x Value_y 0 2000 A 5 NaN 1 2000 B 1 NaN 2 2000 C 2 NaN 3 2001 A 3 2.0 4 2001 B 4 1.0 5 2001 C 3 1.0 6 2002 A 2 2.0 7 2002 B 7 1.0 8 2002 C 1 2.0 9 2003 A 1 2.0 10 2003 B 13 1.0 11 2003 C 23 1.0 | 2 | 3 |
78,902,630 | 2024-8-22 | https://stackoverflow.com/questions/78902630/is-there-an-efficient-way-to-include-every-remaining-unselected-column-in-a-pyth | I'm trying to reorder the columns in a Polars dataframe and put 5 columns out of 100 first (the document must unfortunately be somewhat readable in excel). I can't seem to find an easy way to do this. Ideally, I'd like something simple like df.select( 'col2', 'col1', r'^.*$', # the rest of the columns, but this throws a duplicate column name error ) Negative lookahead is not supported so it's not possible to make a regex that excludes my selected columns. I could make two overlapping selections, drop the columns from one selection, and then join them, but this does not seem like it would be the intended way to do this. Every other solution I've found involves explicitly naming every single column, which I'm trying to avoid as the columns get added or change names somewhat frequently. | You can combine pl.exclude with the walrus operator. Suppose you have something like df=pl.DataFrame( [ pl.Series('c', [1, 2, 3], dtype=pl.Int64), pl.Series('b', [2, 3, 4], dtype=pl.Int64), pl.Series('fcvem', [4, 5, 6], dtype=pl.Int64), pl.Series('msoy', [4, 5, 6], dtype=pl.Int64), pl.Series('smrn', [4, 5, 6], dtype=pl.Int64), pl.Series('z', [4, 5, 6], dtype=pl.Int64), pl.Series('wxous', [4, 5, 6], dtype=pl.Int64), pl.Series('uusn', [4, 5, 6], dtype=pl.Int64), pl.Series('ydj', [4, 5, 6], dtype=pl.Int64), pl.Series('squr', [4, 5, 6], dtype=pl.Int64), pl.Series('yyx', [4, 5, 6], dtype=pl.Int64), pl.Series('nl', [4, 5, 6], dtype=pl.Int64), pl.Series('a', [0, 1, 2], dtype=pl.Int64), ] ) and you want the first 3 columns to be 'a', 'b', 'c'. You can do: df.select(*(start_cols:=['a','b','c']), pl.exclude(start_cols)) This creates a list called start_cols which contains 'a','b','c'. The asterisk unwraps the list and then pl.exclude uses the contents of start_cols to tell polars to return everything except start_cols. If you prefer, you could do this syntax instead: df.select((start_cols:=['a','b','c'])+ [pl.exclude(start_cols)]) | 5 | 2 |
78,901,940 | 2024-8-22 | https://stackoverflow.com/questions/78901940/can-i-subset-an-iterator-and-keep-it-an-iterator | I have a use case where I need the permutations of boolean. However I do not need them when they are reversed. So I can do something like this: import itertools [ p for p in itertools.product([True,False],repeat=4) if p!=p[::-1] ] This issue is that this is a list, stored in memory, and with increasing repeats I will run into memory errors. Is it possible to "subset" an iterator while keeping it as an iterator? If possible I am looking for a general answer, that applies to any sub setting strategy on any iterator | You use a generator expression, syntactically identical to a listcomp, but using parentheses rather than square brackets as the delimiter: ( p for p in itertools.product([True,False],repeat=4) if p!=p[::-1] ) If the generator expression is the sole argument to a function, you don't even need the extra parentheses (the function call parentheses serve double duty delimiting the function argument and the genexpr). | 2 | 4 |
78,897,975 | 2024-8-21 | https://stackoverflow.com/questions/78897975/python-idiom-if-name-main-in-uwsgi | What is Python idiom in uwsgi for if __name__ == '__main__': main() I found here a long string in uwsgi instead of __main__ What __name__ string does uwsgi use?. But it looks like a workaround. Is there better way to call function once, when uwsgi starts my python script? | It depends what you are trying to guard against. There are three use cases I can think of: Do something only if running a real server, and not unit tests. Do something in a module if it is the main WSGI file. Do something after uwsgi has forked a worker process. Do something only if running a real server, and not unit tests The uwsgi binary provides a module called uwsgi that is otherwise not importable. This is best used for detecting if you are running unit tests or not. See Unit testing Flask app running under uwsgi try: import uwsgi except ImportError: UWSGI_SERVER = False else: UWSGI_SERVER = True if UWSGI_SERVER: do_something() else: import test_utils test_utils.mock_do_somethng() Do something in a module if it is the main WSGI file This implies the above, that if your main module has a uwsgi_file_* module name, then you are running a uwsgi server. It also lets you know which file was run as the main module. That is, the one that contains your application. Maybe you are running a handful of micro-services that all use the same core, but need to be configured slightly differently depending on which service is running. File: myapp.py import sys if __name__ == 'uwsgi_file_myapp': assert 'uwsgi_file_myapp' in sys.modules, ( 'other modules can also see what the main module is' ) connect_to_redis() See What __name__ string does uwsgi use? for more details. Do something after uwsgi has forked a worker process Useful for doing one-time initialisations per worker process. Such as connecting to a database. try: import uwsgi except ImportError: def postfork(func): """Dummy decorator for unit tests""" return func else: from uwsgidecorators import postfork DB = None @postfork def init_db(): global DB from datetime import datetime DB = f'some-database connection, set @ {datetime.now().isoformat()}' def application(env, start_response): start_response('200 OK', [('Content-Type','text/html')]) return [f'{DB=!r}'.encode()] This program will need to be run with the --master option. eg. uwsgi --http :8080 --master --wsgi-file myapp.py | 2 | 2 |
78,900,257 | 2024-8-22 | https://stackoverflow.com/questions/78900257/plotting-differently-sized-subplots-in-pyplot | I want to plot a figure in pyplot that has 3 subplots, arranged vertically. The aspect ratios of the first one is 1:1, while for the other two it is 2:1. And the heights of each of these plots should be the same. This would mean that the left and right boundaries of the 1st plot are quite far away from the boundaries of the canvas. Could someone please help me with how to go about doing this? I have tried meddling with the hspace command, but that ends up changing the spacing of all the subplots, not just the first one. I have also tried the following by creating subplots within a subplot, but the wspace doesnt seem to make any difference in the plots: fig = plt.figure(figsize=(3.385, 2*3)) outer = gridspec.GridSpec(2, 1, wspace=2.0, hspace=0.1, height_ratios=[1,3]) inner1 = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=outer[0], wspace=0.1, hspace=0.1) inner2= gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer[1], wspace=0.1, hspace=0.1) ax1 = plt.Subplot(fig, inner1[0]) (ax2,ax3)= plt.Subplot(fig, inner2[0]), plt.Subplot(fig, inner2[1]) fig.add_subplot(ax1); fig.add_subplot(ax2); fig.add_subplot(ax3) ax1.plot(x1, y1, "-", color="k" ) ax2.plot(x2, y2, "-") ax3.plot(x3,y3, "--) plt.show() Could someone please help me with how this could be done? | Create 3 vertically arranged subplots, their heights will by equal be default. Then set the aspect ratios as desired. import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=3) axes[0].set_aspect(1) axes[1].set_aspect(.5) axes[2].set_aspect(.5) | 3 | 2 |
78,898,497 | 2024-8-21 | https://stackoverflow.com/questions/78898497/create-categorical-series-from-physical-values | I want to create a categorical column, where each category has a descriptive name for self-documentation. I have a list of integers equivalent to the physical values in the categorical column, and I want to make the categorical column without creating an intermediate list of strings to pass to pl.Series. import polars as pl dt = pl.Enum(["0", "1", "2"]) s1 = pl.Series(["0", "0", "2", "1"], dtype=dt) physical = list(s1.to_physical()) print(f"{physical=}") s2 = pl.Series([str(p) for p in physical], dtype=dt) assert s1.equals(s2) # turning physical to strings just to create the series which is stored as ints is a waste of compute power # how to construct a series from the physical values? s2 = pl.Series.from_physical(physical, dtype=dt) assert s1.equals(s3) This prints physical=[0, 0, 2, 1] Then it errors because Series.to_physical doesn't exist. Is there a function like from_physical that would make this snippet run to completion without erroring on the final assertion? | You can simply cast to the enum datatype. assert s1.equals(s1.to_physical().cast(dt)) # True | 3 | 2 |
78,898,081 | 2024-8-21 | https://stackoverflow.com/questions/78898081/groupby-index-and-keep-the-max-column-value-given-a-single-column | Scenario: With a dataframe with duplicated indices, I want to groupby while keeping the max value. I found the solution to this in Drop duplicates by index, keeping max for each column across duplicates however, this gets the max value of each column. This mixed the data of different rows, keeping the max values. Question: If instead of mixing the values of different rows, I want to keep a single row, where the value of a column "C" is the highest among the rows with the same index (in this case I will select the row with the highest value in "C" and keep all values for that row, not mixing with high values of other columns from other rows), how should the groupby be performed? What I tried: From the question linked, I got df.groupby(df.index).max() and tried to modify it to: df.groupby(df.index)['C'].max() but this deletes the other columns of the dataframe. | You don't provide a sample of your data so I'm just going for a general approach. That said, you can sort the dataframe by C, then groupby with head: # this assumes that index has only one level df.sort_values('C', ascending=False).groupby(level=0).head(1) Or: df.sort_values('C').groupby(level=0).tail(1) Also take a look at this related question (not by the index, but a column): | 2 | 2 |
78,897,960 | 2024-8-21 | https://stackoverflow.com/questions/78897960/annotate-a-scatterplot-with-text-and-position-taken-from-a-pandas-dataframe-dire | What I want to achieve is a more elegant and direct method of annotating points with x and y position from a pandas dataframe with a corresponding label from the same row. This working example works and results in what I want, but I feel there must be a more elegant solution out there without having to store individual columns in separate lists first and having to iterate over them. My concern is that having these separate lists could results in misalignment of labels with data in cases of larger and complicated datasets with missing values, nans, etc. In this example, x = Temperature, y = Sales and the label is the Date. import pandas as pd import matplotlib.pyplot as plt d = {'Date': ['15-08-24', '16-08-24', '17-08-24'], 'Temperature': [24, 26, 20], 'Sales': [100, 150, 90]} df = pd.DataFrame(data=d) Which gives: Date Temperature Sales 0 15-08-24 24 100 1 16-08-24 26 150 2 17-08-24 20 90 Then: temperature_list = df['Temperature'].tolist() sales_list = df['Sales'].tolist() labels_list = df['Date'].tolist() fig, axs = plt.subplots() axs.scatter(data=df, x='Temperature', y='Sales') for i, label in enumerate(labels_list): axs.annotate(label, (temperature_list[i], sales_list[i])) plt.show() What I aim for - but does not work - is something along the lines of: fig, axs = plt.subplots() axs.scatter(data=df, x='Temperature', y='Sales') axs.annotate(data=df, x='Temperature', y='Sales', text='Date') # this is invalid plt.show() Suggestions welcome. If there is no way around the iterative process, perhaps there is at least a fail-safe method to warrant correct attribution of labels to corresponding data points. | You probably can't avoid the iteration, but you can remove the need to create lists by using df.iterrows(). This has the added benefit that you are not decoupling any data from your DataFrame. import pandas as pd import matplotlib.pyplot as plt d = {'Date': ['15-08-24', '16-08-24', '17-08-24'], 'Temperature': [24, 26, 20], 'Sales': [100, 150, 90]} df = pd.DataFrame(data=d) fig, axs = plt.subplots() axs.scatter(data=df, x='Temperature', y='Sales') for i, row in df.iterrows(): axs.annotate(row["Date"], (row["Temperature"], row["Sales"])) plt.show() | 2 | 1 |
78,894,891 | 2024-8-21 | https://stackoverflow.com/questions/78894891/polars-combining-sales-and-purchases-fifo-method | I have two dataframes: One with buys df_buy = pl.DataFrame( { "BuyId": [1, 2], "Item": ["A", "A"], "BuyDate": [date.fromisoformat("2023-01-01"), date.fromisoformat("2024-03-07")], "Quantity": [40, 50], } ) BuyId Item BuyDate Quantity 1 A 2023-01-01 40 2 A 2024-03-07 50 And other with sells: df_sell = pl.DataFrame( { "SellId": [3, 4], "Item": ["A", "A"], "SellDate": [date.fromisoformat("2024-04-01"), date.fromisoformat("2024-05-01")], "Quantity": [10, 45], } ) SellId Item SellDate Quantity 3 A 2024-04-01 10 4 A 2024-05-01 45 I want to determine which sales came from which purchases using the FIFO method. The result should be something like this. BuyId Item BuyDate RemainingQuantity SellId SellDate SellQuantity QuantityAfterSell 1 A 2023-01-01 40 3 2024-04-01 10 30 1 A 2023-01-01 30 4 2024-05-01 30 0 2 A 2024-03-07 50 4 2024-05-01 15 35 I know that I can do it using a for loop but I wanted to know if there is a more vectorized way to do it. Edit: Added new example for testing: df_buy = pl.DataFrame( { "BuyId": [5, 1, 2], "Item": ["B", "A", "A"], "BuyDate": [date.fromisoformat("2023-01-01"), date.fromisoformat("2023-01-01"), date.fromisoformat("2024-03-07")], "Quantity": [10, 40, 50], } ) df_sell = pl.DataFrame( { "SellId": [6, 3, 4], "Item": ["B", "A", "A"], "SellDate": [ date.fromisoformat("2024-04-01"), date.fromisoformat("2024-04-01"), date.fromisoformat("2024-05-01"), ], "Quantity": [5, 10, 45], } ) | Calculate total quantity bought so far and total quantity sold so far. cum_sum() to calculate running total. df_buy_total = ( df_buy .with_columns(QuantityTotal = pl.col.Quantity.cum_sum().over("Item")) ) df_sell_total = ( df_sell .with_columns(QuantityTotal = pl.col.Quantity.cum_sum().over("Item")) ) ┌───────┬──────┬────────────┬──────────┬───────────────┐ │ BuyId │ Item │ BuyDate │ Quantity │ QuantityTotal │ │ --- │ --- │ --- │ --- │ --- │ │ i64 │ str │ date │ i64 │ i64 │ ╞═══════╪══════╪════════════╪══════════╪═══════════════╡ │ 1 │ A │ 2023-01-01 │ 40 │ 40 │ │ 2 │ A │ 2024-03-07 │ 50 │ 90 │ └───────┴──────┴────────────┴──────────┴───────────────┘ ┌────────┬──────┬────────────┬──────────┬───────────────┐ │ SellId │ Item │ SellDate │ Quantity │ QuantityTotal │ │ --- │ --- │ --- │ --- │ --- │ │ i64 │ str │ date │ i64 │ i64 │ ╞════════╪══════╪════════════╪══════════╪═══════════════╡ │ 3 │ A │ 2024-04-01 │ 10 │ 10 │ │ 4 │ A │ 2024-05-01 │ 45 │ 55 │ └────────┴──────┴────────────┴──────────┴───────────────┘ Find which BuyId and SellId belong together. For this we can use join_asof() df_sell2buy = ( df_sell_total .join_asof( df_buy_total, on="QuantityTotal", strategy="forward", by="Item", coalesce=False, suffix="Buy" ) .select( "BuyId", "SellId", "Item", "BuyDate", "SellDate", "QuantityTotalBuy", pl.col.QuantityTotal.alias("QuantityTotalSell") ) .filter(pl.col.BuyId.is_not_null(), pl.col.QuantityTotalBuy > pl.col.QuantityTotalSell) ) df_buy2sell = ( df_buy_total .join_asof( df_sell_total, on="QuantityTotal", strategy="forward", by="Item", coalesce=False, suffix="Sell" ) .select( "BuyId", "SellId", "Item", "BuyDate", "SellDate", pl.col.QuantityTotal.alias("QuantityTotalBuy"), pl.col.QuantityTotalSell.forward_fill().over("Item").fill_null(0) ) ) ┌───────┬────────┬──────┬────────────┬────────────┬──────────────────┬───────────────────┐ │ BuyId │ SellId │ Item │ BuyDate │ SellDate │ QuantityTotalBuy │ QuantityTotalSell │ │ --- │ --- │ --- │ --- │ --- │ --- │ --- │ │ i64 │ i64 │ str │ date │ date │ i64 │ i64 │ ╞═══════╪════════╪══════╪════════════╪════════════╪══════════════════╪═══════════════════╡ │ 1 │ 3 │ A │ 2023-01-01 │ 2024-04-01 │ 40 │ 10 │ │ 2 │ 4 │ A │ 2024-03-07 │ 2024-05-01 │ 90 │ 55 │ └───────┴────────┴──────┴────────────┴────────────┴──────────────────┴───────────────────┘ ┌───────┬────────┬──────┬────────────┬────────────┬──────────────────┬───────────────────┐ │ BuyId │ SellId │ Item │ BuyDate │ SellDate │ QuantityTotalBuy │ QuantityTotalSell │ │ --- │ --- │ --- │ --- │ --- │ --- │ --- │ │ i64 │ i64 │ str │ date │ date │ i64 │ i64 │ ╞═══════╪════════╪══════╪════════════╪════════════╪══════════════════╪═══════════════════╡ │ 1 │ 4 │ A │ 2023-01-01 │ 2024-05-01 │ 40 │ 55 │ │ 2 │ null │ A │ 2024-03-07 │ null │ 90 │ 55 │ └───────┴────────┴──────┴────────────┴────────────┴──────────────────┴───────────────────┘ Combine both linked DataFrames df_result = ( pl.concat([df_sell2buy, df_buy2sell]) .filter(~( pl.col.SellId.is_null() & pl.col.SellId.is_not_null().any().over("BuyId") )) .sort("Item", "BuyId", "SellId", nulls_last=True) ) ┌───────┬────────┬──────┬────────────┬────────────┬──────────────────┬───────────────────┐ │ BuyId │ SellId │ Item │ BuyDate │ SellDate │ QuantityTotalBuy │ QuantityTotalSell │ │ --- │ --- │ --- │ --- │ --- │ --- │ --- │ │ i64 │ i64 │ str │ date │ date │ i64 │ i64 │ ╞═══════╪════════╪══════╪════════════╪════════════╪══════════════════╪═══════════════════╡ │ 1 │ 3 │ A │ 2023-01-01 │ 2024-04-01 │ 40 │ 10 │ │ 1 │ 4 │ A │ 2023-01-01 │ 2024-05-01 │ 40 │ 55 │ │ 2 │ 4 │ A │ 2024-03-07 │ 2024-05-01 │ 90 │ 55 │ └───────┴────────┴──────┴────────────┴────────────┴──────────────────┴───────────────────┘ Calculate final results ( df_result .with_columns( QuantityRunning = pl.min_horizontal("QuantityTotalBuy", "QuantityTotalSell"), ) .with_columns( QuantityRunningPrev = pl.col.QuantityRunning.shift(1).over("Item").fill_null(0) ) .select( "BuyId", "Item", "BuyDate", (pl.col.QuantityTotalBuy - pl.col.QuantityRunningPrev).alias("RemainingQuantity"), "SellId", "SellDate", (pl.col.QuantityRunning - pl.col.QuantityRunningPrev).alias("SellQuantity"), (pl.col.QuantityTotalBuy - pl.col.QuantityRunning).alias("QuantityAfterSell") ) ) ┌───────┬──────┬────────────┬───────────────────┬────────┬────────────┬──────────────┬───────────────────┐ │ BuyId │ Item │ BuyDate │ RemainingQuantity │ SellId │ SellDate │ SellQuantity │ QuantityAfterSell │ │ --- │ --- │ --- │ --- │ --- │ --- │ --- │ --- │ │ i64 │ str │ date │ i64 │ i64 │ date │ i64 │ i64 │ ╞═══════╪══════╪════════════╪═══════════════════╪════════╪════════════╪══════════════╪═══════════════════╡ │ 1 │ A │ 2023-01-01 │ 40 │ 3 │ 2024-04-01 │ 10 │ 30 │ │ 1 │ A │ 2023-01-01 │ 30 │ 4 │ 2024-05-01 │ 30 │ 0 │ │ 2 │ A │ 2024-03-07 │ 50 │ 4 │ 2024-05-01 │ 15 │ 35 │ └───────┴──────┴────────────┴───────────────────┴────────┴────────────┴──────────────┴───────────────────┘ | 3 | 1 |
78,897,279 | 2024-8-21 | https://stackoverflow.com/questions/78897279/confirming-what-happens-when-importing-a-package | I was surprised to find out that both the two call in main.py works: import package_top package_top.module.hello() # I thought this won't work ... package_top.hello() # I thought this is the only way package_top package_top/ βββ __init__.py βββ module.py __init__.py from .module import hello module.py def hello(): print("module_hello") From my understanding, when I call import package_top in main.py, it will run __init__.py which adds hello to namespace package_top, and then package_top is added to namespace main, so I can access hello with package_top.hello(). What I don't understand is why the second call also works? I expect the prgram to throw out an error at the second line. | The Python import machinery modifies the package_top module object and adds the module attribute as part of importing the child module. See the Python import docs on submodules: When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a binding is placed in the parent moduleβs namespace to the submodule object. Other import statements such as from other_package import x, y only add the x and y names to the importing module object. In your case, from .module import hello, is both importing the function hello, and also importing the submodule module of package_top, so your import statement creates both package_top.hello and package_top.module, which in turn gives you access to package_top.module.hello. You can confirm this by printing the module scope after importing: from .module import hello print(dir()) prints: ['__builtins__', (more default __stuff__ ...), 'hello', 'module'] Therefor package_top.module.hello() works without problems. Remove the from .module import hello statement and you will see the submodule is not imported, and the package_top.module.hello() statement will fail. | 5 | 4 |
78,896,943 | 2024-8-21 | https://stackoverflow.com/questions/78896943/accessing-the-values-used-to-impute-and-normalize-new-data-based-upon-scikit-lea | Using scikit-learn I'm building machine learning models on a training set, and then evaluating them on a test set. On the train set I perform data imputation and scaling with the ColumnTransformer, then build a logistic regression model using Kfold CV, and the final model is used to predict the values on the test set. The final model is also using its results from ColumnTransformer to impute the missing values on the test set. For example min-max scalar would be taking the min and max values from the train set and would use those values when scaling the test set. How can I see these scaling values that are derived from the the train set and then used to predict on the test set? I can't find anything on the scikit-learn documentation about it. Here is the code I'm using: from sklearn.linear_model import SGDClassifier from sklearn.model_selection import RepeatedStratifiedKFold from sklearn.model_selection import GridSearchCV from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.preprocessing import MinMaxScaler, OneHotEncoder def preprocessClassifierLR(categorical_vars, numeric_vars):###categorical_vars and numeric_vars are lists defining the column names of the categorical and numeric variables present in X categorical_pipeline = Pipeline(steps=[('mode', SimpleImputer(missing_values=np.nan, strategy="most_frequent")), ("one_hot_encode", OneHotEncoder(handle_unknown='ignore'))]) numeric_pipeline = Pipeline(steps=[('numeric', SimpleImputer(strategy="median")), ("scaling", MinMaxScaler())]) col_transform = ColumnTransformer(transformers=[("cats", categorical_pipeline, categorical_vars), ("nums", numeric_pipeline, numeric_vars)]) lr = SGDClassifier(loss='log_loss', penalty='elasticnet') model_pipeline = Pipeline(steps=[('preprocess', col_transform), ('classifier', lr)]) random_grid_lr = {'classifier__alpha': [1e-1, 0.2, 0.5], 'classifier__l1_ratio': [1e-3, 0.5]} kfold = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=47) param_search = GridSearchCV(model_pipeline, random_grid_lr, scoring='roc_auc', cv=kfold, refit=True) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30) param_search = preprocessClassifierLR(categorical_vars, numeric_vars) train_mod = param_search.fit(X_train, y_train) print("Mod AUC:", train_mod.best_score_) test_preds = train_mod.predict_proba(X_)[:,1] I can't provide the real data, but X is a dataframe with the independent variables and y is the binary outcome variable. train_mod is a pipeline which contains the columntransformer and SGDclassifier steps. I can easily get similar parameter information from the classifier such as the optimal lambda and alpha values by running: train_mod.best_params_, but I cannot figure out the stats used for the column transformer such as 1) the modes used for the simple imputer for the categorical features, 2) the median values used for the simple imputer for the numeric features, and 3) the min and max values used for the scaling of the numeric features. How to access this information? I assumed that train_mod.best_estimator_['preprocess'].transformers_ would contain this information, in a similar way to how train_mod.best_params_ gives me the alpha and lambda values derived from the model training that are then applied to the test set. | Pipelines, ColumnTransformers, GridSearch, and others all have attributes (and sometimes a custom __getitem__ to access these like dictionaries) exposing their component parts, and similarly each of the transformers has fitted statistics as attributes, so it's just a matter of chaining these all together, e.g.: ( train_mod # is a grid search, has the next attribute .best_estimator_ # is a pipeline, has steps accessible by getitem ['preprocess'] # is a columntransformer .named_transformers_ # fitted transformers, accessed by getitem ['cats'] # pipeline ['mode'] # simpleimputer .statistics_ # the computed modes, per column seen by this simpleimputer. ) | 2 | 1 |
78,896,486 | 2024-8-21 | https://stackoverflow.com/questions/78896486/spurious-zero-printed-in-seaborn-barplot-while-plotting-pandas-dataframe | The following is the minimal code. A spurious zero is printed between second and third bars that I am unable to get rid of in the plot. Please help me fix the code. A minimal working example is below: import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = { 'Temp': [5, 10, 25, 50, 100, 5, 10, 25, 50, 100, 5, 10, 25, 50, 100, 5, 10, 25, 50, 100], 'Measurement': ['mean', 'mean', 'mean', 'mean', 'mean', 'std', 'std', 'std', 'std', 'std', 'min', 'min', 'min', 'min', 'min', 'max', 'max', 'max', 'max', 'max'], 'Value': [-0.03, -0.05, -0.07, -0.09, -0.09, 0.24, 0.44, 0.69, 1.11, 1.45, -2.36, -4.56, -5.86, -11.68, -14.68, 1.30, 3.50, 5.26, 9.28, 11.14] } df_melted = pd.DataFrame(data) print(df_melted) barplot = sns.barplot(x='Temp', y='Value', hue='Measurement', data=df_melted, palette='viridis') for p in barplot.patches: height = p.get_height() x = p.get_x() + p.get_width() / 2. if height >= 0: barplot.text(x, height + 0.1, f'{height:.1f}', ha='center', va='bottom', fontsize=10, color='black', rotation=90) else: barplot.text(x, height - 0.1, f'{height:.1f}', ha='center', va='top', fontsize=10, color='black', rotation=90) plt.ylim([-25,25]) plt.show() Verions: Python 3.12.4, matplotlib 3.9.1 and Seaborn 0.13.2 PS: Edited to include maplotlib version. | The additional 0.0 is from the legend patches (in fact four 0.0s on top of each other). You can iterate over the bar containers instead: for c in barplot.containers: for p in c: height = p.get_height() x = p.get_x() + p.get_width() / 2. if height >= 0: barplot.text(x, height + 0.1, f'{height:.1f}', ha='center', va='bottom', fontsize=10, color='black', rotation=90) else: barplot.text(x, height - 0.1, f'{height:.1f}', ha='center', va='top', fontsize=10, color='black', rotation=90) An easier way to label the bars is, howver, using bar_label instead of re-inventing the wheel. for c in barplot.containers: barplot.bar_label(c, fmt='%.1f', rotation=90, padding=2) | 2 | 2 |
78,894,954 | 2024-8-21 | https://stackoverflow.com/questions/78894954/polars-dataframe-via-django-query | I am exploring a change from pandas to polars. I like what I see. Currently, it is simple to get the data into Pandas. cf = Cashflow.objects.filter(acct=acct).values() df = pd.DataFrame(cf) So I figured it would be a simple change - but this will not work for me. df = pl.DataFrame(cf) What is the difference between using a Django query and putting the data inside Polars? Thank you. | You just need to check the input parameters of Polars and the output data type of the Django queryset. In Polars the pl.DataFrame() constructor expects a list of dictionaries or a list of other data structures. Also when you run Cashflow.objects.filter(acct=acct).values() in Django, the result is a queryset of dictionaries, where each dictionary represents a row of data. an example is provided below: <QuerySet [{'id': 1, 'name': 'Beatles Blog', 'tagline': 'All the latest Beatles news.'}]> So the only thing you need to do is to convert the resulting queryset into a list before passing it to Polars: cf_list = list(Cashflow.objects.filter(account=acct).values()) df = pl.DataFrame(cf_list) | 2 | 1 |
78,895,383 | 2024-8-21 | https://stackoverflow.com/questions/78895383/how-to-use-apply-on-dataframe-using-a-custom-function | I have the following Pandas DataFrame: import pandas as pd from collections import Counter print(sentences) the output is (yes, the column name is 0): 0 0 A 1 B 2 C 3 D 4 EEE ... ... 462064467 FGH 462064468 QRS 462064469 EEE 462064470 VWXYZ 462064471 !!! [462064472 rows x 1 columns] I have a custom function to check whether the content in the column 0 has a length > 1 or not (just an example): def is_more_than_one_character(t): if len(t) > 1: return True else: return False And I apply the function like this: counter = Counter(sentences.apply(is_more_than_one_character)) I wish to count the occurrence of each string having a length > 1. Here is the example output of print(counter): [(EEE, 2), (FGH, 1), (QRS, 1), (!!!, 1)...] but currently, the output is: [(False, 460686058), (True, 1378414)] What did I miss? I think I am close. Thanks in advance. | You could filter with str.len and boolean indexing then pass to value_counts: out = sentences.loc[sentences[0].str.len()>1, 0].value_counts() Or count everything, then filter the keys: out = sentences[0].value_counts() out = out[out.index.str.len()>1] Output: 0 EEE 2 FGH 1 QRS 1 VWXYZ 1 !!! 1 Name: count, dtype: int64 If you really need to use your function and original approach: out = Counter(sentences.loc[sentences[0].apply(is_more_than_one_character), 0]) Or vectorizing the function: from numpy import vectorize @vectorize def is_more_than_one_character(t): if len(t) > 1: return True else: return False s = sentences[0] out = Counter(s[is_more_than_one_character(s)]) Or, actually, since you'll have to loop anyway, better use pure python: out = Counter(filter(is_more_than_one_character, sentences[0])) Output: Counter({'EEE': 2, 'FGH': 1, 'QRS': 1, 'VWXYZ': 1, '!!!': 1}) | 3 | 3 |
78,894,984 | 2024-8-21 | https://stackoverflow.com/questions/78894984/why-can-hexadecimal-python-integers-access-properties-but-not-regular-ints | Decimal (i.e. non-prefixed) integers in Python seem to have fewer features than prefixed integers. If I do 1.real I get a SyntaxError: invalid decimal literal. However, if I do 0x1.real, then I get no error and 1 is the result. (Same for 0b1.real and 0o1.real, though in Python2 01.real gives a syntax error as). | It's because the 1. lead-in is being treated as a floating-point literal and r is not a valid decimal digit. Hex literals, of the form you show, are integers, so are not ambiguous in being treated as a possible floating point (there is no 0x1.1). If you use (1).real to specify that the literal is just the 1, it works fine. The following (annotated) transcript may help: >>> 0x1.real # Can only be integer, so works as expected. 1 >>> 1.real # Treated as floating point, "r" is invalid, File "<stdin>", line 1 1.real ^ SyntaxError: invalid syntax >>> (1).real # Explicitly remove ambiguity, works. 1 >>> 0x1.1 # No float hex literal in this form. File "<stdin>", line 1 0x1.1 ^ SyntaxError: invalid syntax For completeness, the lexical tokens for numerics (integral and float) can be found in the Python docs: integer ::= decinteger | bininteger | octinteger | hexinteger decinteger ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")* bininteger ::= "0" ("b" | "B") (["_"] bindigit)+ octinteger ::= "0" ("o" | "O") (["_"] octdigit)+ hexinteger ::= "0" ("x" | "X") (["_"] hexdigit)+ nonzerodigit ::= "1"..."9" digit ::= "0"..."9" bindigit ::= "0" | "1" octdigit ::= "0"..."7" hexdigit ::= digit | "a"..."f" | "A"..."F" floatnumber ::= pointfloat | exponentfloat pointfloat ::= [digitpart] fraction | digitpart "." exponentfloat ::= (digitpart | pointfloat) exponent digitpart ::= digit (["_"] digit)* fraction ::= "." digitpart exponent ::= ("e" | "E") ["+" | "-"] digitpart You can see there that the floats do not allow for hexadecimal prefixes, which is why it can safely assume the . in 0x1.real is not part of the literal value. That's not the case for 1.real, it assumes that the . is preceding a fractional part of a float. A clever-enough lexer could, of course, detect an invalid decimal digit immediately after the . and therefore assume it's a method to call on an integer. But that introduces complexity and may have other problematic edge cases. | 4 | 7 |
78,894,794 | 2024-8-21 | https://stackoverflow.com/questions/78894794/regex-to-match-starting-numbering-or-alphabet-bullets-like-a | I am trying to find whether string(sentence) starts with numbering or alphabet bullets followed by dot(.) or space. I have regex like: r'^(\(\d|\[a-z]\))\s +' and r"^(?:\(\d+\)|\\[a-z]\.)\s*" I tried it on example strings: (a). this is bullet Not a bullet, (b) its bullet again I am so relaxed that its not bullet. (1) Bullet again. But when I try matches = re.findall(pattern, text, re.M) I get an empty list. How can I fix it? | Executable code in Python Solution import re content = """ (a). this is bullet Not a bullet, (b) its bullet again I am so relaxed that its not bullet. (1) Bullet again. """ patter_str = r'\((?:\w+)\)\.?\s.*' pattern = re.compile(patter_str) matches = pattern.findall(content) for item in matches: print(item) Output (a). this is bullet (b) its bullet again (1) Bullet again. EDIT If you want to rescue the text after the bullet you can use this pattern: patter_str = r'\((?:\w+)\)\.?\s(.*)' | 2 | 1 |
78,886,125 | 2024-8-19 | https://stackoverflow.com/questions/78886125/vscode-python-extension-loading-forever-saying-reactivating-terminals | After updating VS code to v1.92, the Python extension consistently fails to launch, indefinitely showing a spinner next to "Reactivating terminals…" on the status bar. Selecting OUTPUT > Python reveals the error Failed to resolve env "/mnt/data-linux/miniconda3". Here's the error trace: 2024-08-07 18:35:35.873 [error] sendStartupTelemetry() failed. s [Error]: Failed to resolve env "/mnt/data-linux/miniconda3" at ae (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1968174) at oe (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1966134) at Immediate.<anonymous> (/home/user/.vscode-insiders/extensions/ms-python.python-2024.12.2-linux-x64/out/client/extension.js:2:1962428) at processImmediate (node:internal/timers:478:21) { code: -4, data: undefined } How do I fix this? Restarting worked, but that's not sustainable. | This appears to be a bug related to the new "native" Python locator. You can go back to the old working version by adding the following line to the user settings JSON (until the bug in the native locator is fixed): "python.locator": "js", Note that this workaround pins you to the legacy version which is not something you'll want to have around forever so you might want to report your issue on Github at https://github.com/microsoft/vscode-python/issues. There've been many issues already filed and many solved but it's a work in progress. Example issues: https://github.com/microsoft/vscode-python/issues/23922 https://github.com/microsoft/vscode-python/issues/23963 https://github.com/microsoft/vscode-python/issues/23956 | 51 | 50 |
78,874,958 | 2024-8-15 | https://stackoverflow.com/questions/78874958/invalid-filter-length-is-error-in-django-template-how-to-fix | I'm encountering a TemplateSyntaxError in my Django project when rendering a template. The error message I'm seeing is: TemplateSyntaxError at /admin/dashboard/program/add/ Invalid filter: 'length_is' Django Version: 5.1 Python Version: 3.12.4 Error Location: This error appears in a Django template at line 22 of the fieldset.html file. {% for line in fieldset %} <div class="form-group{% if line.fields|length_is:'1' and line.errors %} errors{% endif %}{% if not line.has_visible_field %} hidden{% endif %}{% for field in line %}{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% endfor %}"> <div class="row"> {% for field in line %} <label class="{% if not line.fields|length_is:'1' and forloop.counter != 1 %}col-auto {% else %}col-sm-3 {% endif %}text-left" for="id_{{ field.field.name }}"> {{ field.field.label|capfirst }} {% if field.field.field.required %} <span class="text-red">* </span> {% endif %} </label> <div class="{% if not line.fields|length_is:'1' %} col-auto fieldBox {% else %} col-sm-7 {% endif %} {% if field.field.name %} field-{{ field.field.name }}{% endif %}"> What I Tried: Checked for Custom Filters: I reviewed my project and its installed packages to verify if there is a custom filter named length_is. I found that no such custom filter is defined in my project. Verified Django Installation: I ensured that Django is correctly installed and up-to-date with version 5.1. Reviewed Template Code: I carefully examined the template code that causes the error. I found that line.fields|length_is:'1' is used, but the length_is filter is not a standard Django filter. Searched for Package Bugs: I searched through documentation and bug reports related to django-jazzmin to see if there is any mention of the length_is filter issue, but I could not find relevant information. What I Expected: I expected to find either: Documentation or a reference indicating that length_is is a standard Django filter or a filter provided by an external package. Guidance on how to define or implement the length_is filter if it is a custom filter not included by default. A resolution indicating that the issue might be related to a version mismatch or configuration issue that could be easily resolved. Since I couldn't find any useful information or documentation about this filter, I'm unsure how to proceed. Any help on defining or correctly using the length_is filter in Django templates would be greatly appreciated. | Don't copy the old length_is template tag from Django 5.0.x. If this is in an upstream repository, propose the appropriate change to the project (or use the approach provided here: https://github.com/farridav/django-jazzmin/issues/593#issuecomment-2288096357 Update: The specific fix has been merged in: https://github.com/farridav/django-jazzmin/pull/519) This template filter was removed in Django 5.1. The reason for its removal can be found in the Django 5.1 release notes and is explained in the source code: warnings.warn( "The length_is template filter is deprecated in favor of the length template " "filter and the == operator within an {% if %} tag.", RemovedInDjango51Warning, ) See the source code for more details. Recommended Update Instead of using length_is:'n', update your template to use the length filter with the == operator: Old syntax: {% if value|length_is:'n' %}...{% endif %} New syntax: {% if value|length == n %}...{% endif %} You can also handle alternative outputs like this: {% if value|length == n %}True{% else %}False{% endif %} This approach aligns with Django's recommended practices and ensures your templates are forward-compatible. | 1 | 4 |
78,875,297 | 2024-8-15 | https://stackoverflow.com/questions/78875297/horizontal-sum-in-duckdb | In Polars I can do: import polars as pl df = pl.DataFrame({'a': [1,2,3], 'b': [4, 5, 6]}) df.select(pl.sum_horizontal('a', 'b')) shape: (3, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 5 │ │ 7 │ │ 9 │ └─────┘ Is there a way to do this with DuckDB? | COLUMNS() can now be unpacked as of DuckDB 1.1.0 This does allow you to use the list_* functions e.g. duckdb.sql(""" from df select list_sum(list_value(*columns(*))) """) ┌──────────────────────────────────┐ │ list_sum(list_value(df.a, df.b)) │ │ int128 │ ├──────────────────────────────────┤ │ 5 │ │ 7 │ │ 9 │ └──────────────────────────────────┘ | 3 | 1 |
78,875,431 | 2024-8-15 | https://stackoverflow.com/questions/78875431/how-to-disable-functools-lru-cache-when-developing-locally | How do you disable caching when working with pythons functools? We have a boolean setting where you can disable the cache. This is for working locally. I thought something like this: @lru_cache(disable=settings.development_mode) But there is no setting like that. Am I missing something? How are people doing this? | If you want to disable caching conditionally while using functools.lru_cache, you need to manage this manually, You need a decorator that can conditionally apply lru_cache based on your settings I am using the docs as the guide docsOnFunctools Quotes from docs: If maxsize is set to None, the LRU feature is disabled and the cache can grow without bound.If typed is set to true, function arguments of different types will be cached separately. If typed is false, the implementation will usually regard them as equivalent calls and only cache a single result. (Some types such as str and int may be cached separately even when typed is false.) from functools import lru_cache, wraps def remove_lru_cache(maxsize=128, typed=False, enable_cache=True): def decorator(func): if enable_cache: return lru_cache(maxsize=maxsize, typed=typed)(func) return func return decorator Then you can apply it to your code # settings your_settings = { 'dev_mode': True #set False to enable caching } @remove_lru_cache(maxsize=None, typed=False, enable_cache=not settings['dev_mode']) def add(nums: List[int]) -> int: return sum(nums) | 2 | 2 |
78,889,486 | 2024-8-19 | https://stackoverflow.com/questions/78889486/preserving-dataframe-subclass-type-during-pandas-groupby-aggregate | I'm subclassing pandas DataFrame in a project of mine. Most pandas operations preserve the subclass type, but df.groupby().agg() does not. Is this a bug? Is there a known workaround? import pandas as pd class MySeries(pd.Series): pass class MyDataFrame(pd.DataFrame): @property def _constructor(self): return MyDataFrame _constructor_sliced = MySeries MySeries._constructor_expanddim = MyDataFrame df = MyDataFrame({"a": reversed(range(10)), "b": list('aaaabbbccc')}) print(type(df.groupby("b").sum())) # <class '__main__.MyDataFrame'> print(type(df.groupby("b").agg({"a": "sum"}))) # <class 'pandas.core.frame.DataFrame'> It looks like there was an issue (described here) that fixed subclassing for df.groupby, but as far as I can tell df.groupby().agg() was missed. I'm using pandas version 2.0.3. | It turns out that groupby().agg() combines Series to build a DataFrame, so the subclassed Series constructor needs to be properly defined. See this documentation. The following code runs with no errors: import pandas as pd class MySeries(pd.Series): @property def _constructor(self): return MySeries @property def _constructor_expanddim(self): return MyDataFrame class MyDataFrame(pd.DataFrame): @property def _constructor(self): return MyDataFrame @property def _constructor_sliced(self): return MySeries df = MyDataFrame({"a": reversed(range(10)), "b": list('aaaabbbccc')}) assert isinstance(df.groupby("b").agg({"a": "sum"}), MyDataFrame) | 2 | 0 |
78,889,002 | 2024-8-19 | https://stackoverflow.com/questions/78889002/make-image-from-uint8-rgb-pixel-data | I am trying to make a library related to RGB pixel data, but cannot seem to save the image data correctly. That is my output image. This is my code: pixelmanager.py from PIL import Image import numpy as np class MakeImgFromPixelRGB: def createIMG(PixelArray: list, ImgName: str, SaveImgAsFile: bool): # Convert the pixels into an array using numpy array = np.array(PixelArray, dtype=np.uint8) if SaveImgAsFile == True: new_image = Image.fromarray(array) new_image.save(ImgName) class getPixelsFromIMG: def __init__(self, ImagePath): im = Image.open(ImagePath, 'r') width, height = im.size pixel_values = list(im.getdata()) self.output = pixel_values test.py import pixelmanager a = pixelmanager.getPixelsFromIMG(ImagePath="download.jpg") pixelmanager.MakeImgFromPixelRGB.createIMG(a.output, "output.png", True) with open("output.txt", "w") as f: for s in a.output: f.write(str(s) + "," + "\n") I have tried to de-scale the image in paint.net and also tried to mess with the uint size. | Mark Setchell was ALMOST correct. His code did help me get an image, but it was repeated 4 times in one. Mark's line of code had a switchup (with h as height and w as width): array = np.array(PixelArray, dtype=np.uint8).reshape(h,w,3) This is my line of code: array = np.array(PixelArray, dtype=np.uint8).reshape(w, h, 3) | 2 | 1 |
78,891,643 | 2024-8-20 | https://stackoverflow.com/questions/78891643/how-to-make-gtk-interface-work-with-asyncio | I'm trying to write a Python program with a GTK interface that gets output from functions using async/await that take a few seconds to execute, what I'm asking for is the best solution for running this while not freezing the GUI.. I was using threads before and sort of got it working, but both the GUI and the backend need to communicate both ways and sometimes at a high rate (every time the user types a new character into a search box, for instance) and then I found out the 'backend' I'm using also supports async/await so I thought if I could figure out how to get GTK events to play nicely with asyncio, that would probably be the best option.. I've written an example of what I'm trying to do import asyncio import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk async def simulate_get_results(): await asyncio.sleep(5) return ['Imagine these are results from an API'] class Window(Gtk.Window): def __init__(self): super().__init__() self.search_button = Gtk.Button(label='Press me') async def button_pressed(button: Gtk.Button): results = await simulate_get_results() button.set_label(results[0]) def non_async_button_callback_wrapper(button: Gtk.Button): asyncio.run(button_pressed(button)) self.search_button.connect('clicked', non_async_button_callback_wrapper) self.add(self.search_button) win = Window() win.connect('destroy', Gtk.main_quit) win.show_all() Gtk.main() if you can get this to work in a way that it doesn't freeze the GUI after clicking the button, but still displays the results 5 seconds later, you'll most likely have solved my problem | One option to implement this is to merge event loops as shown in the other answer. The downside of that approach is that it tends to lead to CPU churn, as each event loop busy-loops to avoid blocking the other. An alternative approach is to run both event loops normally, each in its own thread. The GTK event loop insists to run in the main thread, so we'd spawn the asyncio event loop in a background thread, like this: _loop = asyncio.new_event_loop() def run_loop(): # run asyncio loop and wait forever threading.Thread(target=_loop.run_forever, daemon=True).start() run_loop() With that in place, we can write a function that submits a coroutine to the running event loop, using asyncio.run_coroutine_threadsafe. Furthermore, we set up a callback that notifies us when the coroutine is done, and invokes some code in the GTK thread, using GLib.idle_add to do so: def submit(coro, when_done): fut = asyncio.run_coroutine_threadsafe(coro, _loop) def call_when_done(): when_done(fut.result()) fut.add_done_callback(lambda _: GLib.idle_add(call_when_done)) With the submit function in place, you can run any async code from GTK without blocking the GUI, and be notified when it's done by executing a callback invoked in the GTK thread, which can update the GUI accordingly. While the callback-when-done approach is not quite as convenient as the code in the question (where the async function runs asyncio and gtk code interchangeably), it is still practical. It would be used like this: class Window(Gtk.Window): def __init__(self): super().__init__() self.search_button = Gtk.Button(label='Press me') def non_async_button_callback_wrapper(button: Gtk.Button): submit(simulate_get_results(), lambda results: button.set_label(results[0])) self.search_button.connect('clicked', non_async_button_callback_wrapper) self.add(self.search_button) The rest of the code is unchanged. (Full code.) | 4 | 2 |
78,890,441 | 2024-8-20 | https://stackoverflow.com/questions/78890441/wordcloud-with-2-background-colors | I generated this on wordcloud.com using one of the "themes". I'd like to be able to do this with the python wordcloud library, but so far all I can achieve is a single background color (so all black, not grey and black). Can anyone give me a hint on how to add the additional background color, using matlab or imshow? Here is my code: import numpy as np import matplotlib.pyplot as plt from pathlib import Path from PIL import Image from wordcloud import WordCloud tmp = "some text about Dynasty TV show" alexis_mask = np.array(Image.open('resources/alexis-poster-3.png')) print(repr(alexis_mask)) alexis_mask[alexis_mask == 0] = 255 def color_func(word, font_size, position, orientation, random_state=None, **kwargs): return "hsl(0, 100%, 27%)" wc = WordCloud(background_color="black", mask=alexis_mask, max_words=500,contour_width=2, contour_color="black") wc.generate(tmp) plt.figure(figsize=(28, 20)) plt.imshow(wc.recolor(color_func=color_func, random_state=3),interpolation="bilinear") plt.imshow(wc) I've tried starting with a black/white image, and with a black/white/grey image. So far neither works. I don't think it's offered in the Wordcloud library but is it something I could do using imshow(), after I apply wordcloud? Thanks. | You can draw the wordcloud with the desired background color (e.g."grey") and then overlay this plot with a uniformly colored image (e.g. "black") masked using the wordcloud mask. import numpy as np import matplotlib.pyplot as plt from wordcloud import WordCloud from PIL import Image text = "some text about Dynasty TV show" x, y = np.ogrid[:300, :300] mask = (((x - 150)**2 + (y - 150)**2 > 130**2) * 255).astype(int) wc = WordCloud(background_color="grey", repeat=True, mask=mask) wc.generate(text) plt.axis("off") plt.imshow(wc) plt.imshow(np.dstack([Image.new("RGB", (wc.width,wc.height), "black"), wc.mask])) plt.show() The minimal expample works for the case where the mask is 0 and 255 only. In the general case of a 1D mask it would be (wc.mask == 255) * 255, for a RGBA mask it would be ((wc.mask[:,:,0:3] == [255, 255, 255]) * 255)[:,:,0]. | 3 | 3 |
78,878,107 | 2024-8-16 | https://stackoverflow.com/questions/78878107/how-to-add-file-name-pattern-in-aws-glue-etl-job-python-script | I wanted to add file name pattern in AWS Glue ETL job python script where it should generate the files in s3 bucket with pattern dostrp*.csv.gz but could not find way how to provide this file pattern in python script : import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job args = getResolvedOptions(sys.argv, ['target_BucketName', 'JOB_NAME']) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args['JOB_NAME'], args) outputbucketname = args['target_BucketName'] # Script generated for node AWS Glue Data Catalog AWSGlueDataCatalog_node188777777 = glueContext.create_dynamic_frame.from_catalog(database="xxxx", table_name="xxxx", transformation_ctx="AWSGlueDataCatalog_node887777777") # Script generated for node Amazon S3 AmazonS3_node55566666 = glueContext.write_dynamic_frame.from_options(frame=AWSGlueDataCatalog_node8877777777, connection_type="s3", format="csv", format_options={"separator": "|"}, connection_options={"path": outputbucketname, "compression": "gzip", "partitionKeys": []}, transformation_ctx="AmazonS3_node5566677777") job.commit() | Use pandas : import pandas as pd # Convert DynamicFrame to Pandas DataFrame df = dynamic_frame.toDF().toPandas() # Define S3 bucket and prefix s3_bucket = 'your-s3-bucket' s3_prefix = 'your/s3/prefix/' # Define the S3 path for the output file s3_output_path = f"s3://{s3_bucket}/{s3_prefix}output_file.csv.gz" # Create an in-memory buffer buffer = io.BytesIO() # Save the DataFrame as a CSV file in the buffer with gzip compression df.to_csv(buffer, index=False, compression='gzip') # Upload the buffer to S3 s3_client = boto3.client('s3') s3_client.put_object(Bucket=s3_bucket, Key=f"{s3_prefix}output_file.csv.gz", Body=buffer.getvalue()) | 2 | 1 |
78,893,363 | 2024-8-20 | https://stackoverflow.com/questions/78893363/extract-continuous-cell-values-from-multiple-excel-files-using-python | The aim of my task is firstly to extract values from continuous cells of a single excel file. Then the same extraction method will be performed on the remaining excel files of the same folder until the loop ends For example, I want to extract values from row 'A283:A9000' at excel file 1. After the extraction at excel file 1 is finished, the value from row 'A283:A9000' at excel file 2 will be extracted, then the extraction at the same rows 'A283:A9000' will be continued on excel file 3, excel file 4, excel files 5 and so on. I learn how to extract values from multiple excel files from https://www.youtube.com/watch?v=M7YkQpcB4fg The code works well when the values from non-continuous cells are extracted. However, when I try to use the code to extract value from continuous cells ('A283:A9000') of the same sheet, the code fails. I know the problem occurs when I try to use the code to extract values from continuous cells of the same sheet, but I am not sure how to fix the code to custom with my case. I think the problem is located at the line (cells = ['C11', 'C15', 'D15', 'C16', 'A283:A9000']). Could anyone give me help? Cheers Here is the code that I have tried. import os import openpyxl folder = r'C:\PhD study\GIS\Wind_Downscale\test_one' output_file = 'C:\PhD study\GIS\Wind_Downscale\Wind_data_forecast_time.xlsx' output_wb = openpyxl.Workbook() output_sheet = output_wb.active output_sheet.title = 'Wind Data for Forecast Time' cells = ['C11', 'C15', 'D15', 'C16', 'A283:A9000'] for filename in os.listdir(folder): if filename.endswith('.xlsx'): file = os.path.join(folder, filename) workbook = openpyxl.load_workbook(file) values = [workbook.active[cell].value for cell in cells] output_sheet.append(values) output_wb.save(output_file) Here is the error messsage: Traceback (most recent call last): File C:\\Conda5\\lib\\site-packages\\spyder_kernels\\py3compat.py:356 in compat_exec exec(code, globals, locals) File c:\\users\\kxz237.spyder-py3\\temp.py:29 values = \[workbook.active\[cell\].value for cell in cells\] File c:\\users\\kxz237.spyder-py3\\temp.py:29 in \<listcomp\> values = \[workbook.active\[cell\].value for cell in cells\] AttributeError: 'tuple' object has no attribute 'value' | Yes you are trying to use 'A283:A9000' as a single cells' co-ordinate hence the attribute error. An alternative is you can treat every element of your 'cells' list as a range cells = ['C11', 'C15', 'D15', 'C16', 'A283:A9000'] so for each element the code extracts all the cells that the range covers; for 'C11' that would be just 'C11' for 'A283:A9000' that would be 'A283', 'A284', 'A285', 'A286', ... Use the Openpyxl util openpyxl.utils.cell.cols_from_range(<cells element>) on each element in the cells list. 
import os import openpyxl folder = r'C:\PhD study\GIS\Wind_Downscale\test_one' output_file = 'C:\PhD study\GIS\Wind_Downscale\Wind_data_forecast_time.xlsx' output_wb = openpyxl.Workbook() output_sheet = output_wb.active output_sheet.title = 'Wind Data for Forecast Time' cells = ['C11', 'C15', 'D15', 'C16', 'A283:A9000'] for filename in os.listdir(folder): if filename.endswith('.xlsx'): file = os.path.join(folder, filename) workbook = openpyxl.load_workbook(file) #values = [workbook.active[cell].value for cell in cells] for rng in cells: # Each element in 'cells' list ### Get all cells in the elements range for allcells in openpyxl.utils.cell.cols_from_range(rng): ### allcells is a tuple of all individual cells in the range rng for cell in allcells: # Extract each cell values = workbook.active[cell].value output_sheet.append([values]) output_wb.save(output_file) Additional details FYI There are two Openpyxl utilities that will return the individual cells of a range openpyxl.utils.cell.cols_from_range(range_string) and openpyxl.utils.cell.rows_from_range(range_string) Either could be used in this scenario given the range provided is just one column. However if your range covered two or more columns then the way each would return the individual cells is; cols_from_range, cell from each row down the first column, then same down next column etc rows_from_range, cells across all columns in first row, then all cells in all columns in the second row etc. i.e. a range 'C3:D4' would return cols_from_range, 'C3', 'C4', 'D3', 'D4' rows_from_range, 'C3', 'D3', 'C4', 'D4' | 2 | 1 |
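A shorter variant of the accepted approach, offered as a hedged sketch: instead of expanding the range with cols_from_range, an openpyxl worksheet can be sliced with the range string directly, which yields the cells row by row. Paths are the ones from the question.

import os
import openpyxl

folder = r"C:\PhD study\GIS\Wind_Downscale\test_one"
output_file = r"C:\PhD study\GIS\Wind_Downscale\Wind_data_forecast_time.xlsx"

output_wb = openpyxl.Workbook()
output_sheet = output_wb.active
output_sheet.title = "Wind Data for Forecast Time"

for filename in os.listdir(folder):
    if not filename.endswith(".xlsx"):
        continue
    ws = openpyxl.load_workbook(os.path.join(folder, filename)).active
    # Slicing with a range string returns the cells row by row, so the
    # continuous block A283:A9000 needs no manual coordinate expansion.
    for row in ws["A283":"A9000"]:
        for cell in row:
            output_sheet.append([cell.value])

output_wb.save(output_file)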
78,894,080 | 2024-8-20 | https://stackoverflow.com/questions/78894080/syncing-matplotlib-imshow-coordinates | I'm trying to create an image using networkx, save that image to use later, and then overlay a plot over top of it later. However, when I try to load the image in and make new points, the scale seems off. I've tried everything I can find to make them sync, and I'm not sure what else to try at this point. Here's a simple example: import networkx as nx import matplotlib.pyplot as plt import numpy as np fig = plt.figure() G = nx.dodecahedral_graph() pos = nx.spring_layout(G) plt.box(False) nx.draw_networkx_edges(G, pos=pos) fig.canvas.draw() data = np.array(plt.gcf().canvas.get_renderer().buffer_rgba(), dtype=np.uint8) extent = list(plt.xlim() + plt.ylim()) So now I have a graph and have saved that image to data, and have saved the range of that graph to extent. I then want to replot that graph from data and overlay the nodes of the graph, in the positions stored in pos. plt.imshow(data, extent=extent) plt.box(False) nx.draw_networkx_nodes(G, pos=pos, node_color='green') For some reason, the scale of the original image is shrunk, so the nodes end up being at a larger scale and not matching the edges. Is it something in the way I'm saving the image? | It seems that matplotlib adds padding to the sides of the image when saving the data from the plot. You can remove this padding by adding fig.tight_layout(pad=0) to the code like so: import networkx as nx import matplotlib.pyplot as plt import numpy as np fig = plt.figure() G = nx.dodecahedral_graph() pos = nx.spring_layout(G) plt.box(False) nx.draw_networkx_edges(G, pos=pos) fig.tight_layout(pad=0) fig.canvas.draw() data = np.array(plt.gcf().canvas.get_renderer().buffer_rgba(), dtype=np.uint8) extent = list(plt.xlim() + plt.ylim()) | 2 | 1 |
78,893,923 | 2024-8-20 | https://stackoverflow.com/questions/78893923/polars-set-missing-value-from-another-row | The following data frame represents basic flatten tree structure, as shown below, where pairs (id, sub-id) and (sub-id, key) are always unique and key always represents the same thing under the same id id1 βββ¬β sub-id β β ββββ key1 β β ββββ value β ββ sub-id2 β ββββ key1 β ββββ None id2 ββββ sub-id3 ββββ key2 ββββ value with graphical representation out of the way, below is the definition as polars.DataFrame df = pl.DataFrame( { "id": [1, 1, 2, 2, 2, 3], "sub-id": [1, 1, 2, 3, 3, 4], "key": ["key_1_1", "key_1_2", "key_2_1", "key_2_1", "key_2_2", "key_3"], "value": ["value1 1", "value 1 2", None, "value 2 1", "value 2 2", "value 3"], } ) The same data frame in table representation: shape: (6, 4) ββββββ¬βββββββββ¬ββββββββββ¬ββββββββββββ β ID β sub-id β key β value β ββββββͺβββββββββͺββββββββββͺββββββββββββ‘ β 1 β 1 β key_1_1 β value 1 β β 1 β 1 β key_1_2 β value 2 β β 2 β 2 β key_2_1 β value 2 1 β β 2 β 3 β key_2_1 β None β β 2 β 3 β key_2_2 β value 2 2 β β 3 β 4 β key_3 β value 3 β ββββββ΄βββββββββ΄ββββββββββ΄ββββββββββββ How would I to fill gaps like shown below using polars. total size of data is about 100k rows. shape: (6, 4) ββββββ¬βββββββββ¬ββββββββββ¬ββββββββββββ β ID β sub-id β key β value β ββββββͺβββββββββͺββββββββββͺββββββββββββ‘ β 1 β 1 β key_1_1 β value 1 β β 1 β 1 β key_1_2 β value 2 β β 2 β 2 β key_2_1 β value 2 1 β β 2 β 3 β key_2_1 β value 2 1 β β 2 β 3 β key_2_2 β value 2 2 β β 3 β 4 β key_3 β value 3 β ββββββ΄βββββββββ΄ββββββββββ΄ββββββββββββ | There is pl.Expr.fill_null to fill missing values. As fill value, we use the first non-null value with the same id and key. As we assume that all values for the same id and key are the same, taking the first value is reasonable. It can be constructed as follows: pl.Expr.filter and pl.Expr.is_not_null to filter for non-null values, pl.Expr.first to select the first such value, the window function pl.Expr.over to evaluate the expression separately for each id and key pair. df.with_columns( pl.col("value").fill_null( pl.col("value").filter(pl.col("value").is_not_null()).first().over("id", "key") ) ) shape: (6, 4) βββββββ¬βββββββββ¬ββββββββββ¬ββββββββββββ β id β sub-id β key β value β β --- β --- β --- β --- β β i64 β i64 β str β str β βββββββͺβββββββββͺββββββββββͺββββββββββββ‘ β 1 β 1 β key_1_1 β value 1 β β 1 β 1 β key_1_2 β value 2 β β 2 β 2 β key_2_1 β value 2 1 β β 2 β 3 β key_2_1 β value 2 1 β β 2 β 3 β key_2_2 β value 2 2 β β 3 β 4 β key_3 β value 3 β βββββββ΄βββββββββ΄ββββββββββ΄ββββββββββββ Note. I needed to adapt your code slightly to match the example you described in the text. df = pl.DataFrame({ "id": [1, 1, 2, 2, 2, 3], "sub-id": [1, 1, 2, 3, 3, 4], "key": ["key_1_1", "key_1_2", "key_2_1", "key_2_1", "key_2_2", "key_3"], "value": ["value 1", "value 2", None, "value 2 1", "value 2 2", "value 3"], }) | 2 | 3 |
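An equivalent, slightly shorter expression for the record above (a sketch, not part of the accepted answer): drop_nulls() inside the window removes the missing entries before first() is taken, so the explicit filter/is_not_null combination is not needed.

import polars as pl

df = pl.DataFrame({
    "id": [1, 1, 2, 2, 2, 3],
    "sub-id": [1, 1, 2, 3, 3, 4],
    "key": ["key_1_1", "key_1_2", "key_2_1", "key_2_1", "key_2_2", "key_3"],
    "value": ["value 1", "value 2", None, "value 2 1", "value 2 2", "value 3"],
})

out = df.with_columns(
    # First non-null value with the same id and key, used as the fill value
    pl.col("value").fill_null(
        pl.col("value").drop_nulls().first().over("id", "key")
    )
)
print(out)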
78,893,691 | 2024-8-20 | https://stackoverflow.com/questions/78893691/capture-integer-in-string-and-use-it-as-part-of-regular-expression | I've got a string: s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg" and a regular expression I'm using import re delre = re.compile('-[0-9]+[ACGTNacgtn]+') #this is almost correct print (delre.findall(s)) This returns: ['-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-1gtt', '-1gt', '-3ggg'] But -1gtt and -1gt are not desired matches. The integer in this case defines how many subsequent characters to match, so the desired output for those two matches would be -1g and -1g, respectively. Is there a way to grab the integer after the dash and dynamically define the regex so that it matches that many and only that many subsequent characters? | You can't do this with the regex pattern directly, but you can use capture groups to separate the integer and character portions of the match, and then trim the character portion to the appropriate length. import re # surround [0-9]+ and [ACGTNacgtn]+ in parentheses to create two capture groups delre = re.compile('-([0-9]+)([ACGTNacgtn]+)') s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg" # each match should be a tuple of (number, letter(s)), e.g. ('1', 'gtt') or ('2', 'gg') for number, bases in delre.findall(s): # print the number, then use slicing to truncate the string portion print(f'-{number}{bases[:int(number)]}') This prints -2gg -2gg -2gg -2gg -2gg -2gg -1g -1g -3ggg You'll more than likely want to do something other than print, but you can format the matched strings however you need! NOTE: this does fail in cases where the integer is followed by fewer matching characters than it specifies, e.g. -10agcta is still a match even though it only contains 5 characters. | 5 | 4 |
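If the goal is to rewrite the string itself rather than just collect matches, the same group-based trimming from the accepted answer can be handed to re.sub as a callable. A sketch:

import re

s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg"
delre = re.compile(r"-([0-9]+)([ACGTNacgtn]+)")

def trim(match: re.Match) -> str:
    # Keep only as many bases as the leading integer demands
    n, bases = match.group(1), match.group(2)
    return f"-{n}{bases[:int(n)]}"

print([trim(m) for m in delre.finditer(s)])   # ['-2gg', ..., '-1g', '-1g', '-3ggg']
print(delre.sub(trim, s))                     # normalizes the matches in place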
78,888,948 | 2024-8-19 | https://stackoverflow.com/questions/78888948/how-to-get-mypy-to-raise-errors-warnings-about-using-the-typing-package-inst | Currently I have a bunch of code that does this: from typing import Dict foo: Dict[str, str] = [] In Python 3.9+, it is preferable to use the built-in types (source): foo: dict[str, str] = [] Is there a way to configure mypy to raise an error/warning when my code uses Dict instead of dict? | According to the mypy maintainers, this fuctionality will not be implemented because it has already been implemented by formatters like ruff: This is already implemented by linters such as Ruff (https://docs.astral.sh/ruff/rules/non-pep585-annotation/). That doesn't mean mypy can't also implement support for checks like this, but it does mean the return on investment for mypy to add this feature is lower. Source | 2 | 1 |
78,892,568 | 2024-8-20 | https://stackoverflow.com/questions/78892568/polars-split-column-and-get-n-th-or-last-element | I have the following code and output. Code. import polars as pl df = pl.DataFrame({ 'type': ['A', 'O', 'B', 'O'], 'id': ['CASH', 'ORB.A123', 'CHECK', 'OTC.BV32'] }) df.with_columns(sub_id=pl.when(pl.col('type') == 'O').then(pl.col('id').str.split('.')).otherwise(None)) Output. shape: (4, 3) ββββββββ¬βββββββββββ¬ββββββββββββββββββ β type β id β sub_id β β --- β --- β --- β β str β str β list[str] β ββββββββͺβββββββββββͺββββββββββββββββββ‘ β A β CASH β null β β O β ORB.A123 β ["ORB", "A123"] β β B β CHECK β null β β O β OTC.BV32 β ["OTC", "BV32"] β ββββββββ΄βββββββββββ΄ββββββββββββββββββ Now, how would I extract the n-th element (or in this case, the last element) of each list? Especially, the expected output is the following. shape: (4, 3) ββββββββ¬βββββββββββ¬βββββββββββββ β type β id β sub_id β β --- β --- β --- β β str β str β str β ββββββββͺβββββββββββͺβββββββββββββ‘ β A β CASH β null β β O β ORB.A123 β "A123" β β B β CHECK β null β β O β OTC.BV32 β "BV32" β ββββββββ΄βββββββββββ΄βββββββββββββ | You can simply append .list.last() to select the last element of each list. Alternatively, there exists .list.get() to get list elements by index. import polars as pl df = pl.DataFrame({ 'type': ['A', 'O', 'B', 'O'], 'id': ['CASH', 'ORB.A123', 'CHECK', 'OTC.BV32'] }) df.with_columns( sub_id=pl.when( pl.col('type') == 'O' ).then( pl.col('id').str.split('.').list.last() ) ) shape: (4, 3) ββββββββ¬βββββββββββ¬βββββββββ β type β id β sub_id β β --- β --- β --- β β str β str β str β ββββββββͺβββββββββββͺβββββββββ‘ β A β CASH β null β β O β ORB.A123 β A123 β β B β CHECK β null β β O β OTC.BV32 β BV32 β ββββββββ΄βββββββββββ΄βββββββββ Note that I've dropped .otherwise(None) as this is the default behaviour of an if-then-otherwise expression. | 2 | 3 |
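A regex-based alternative for the record above (a sketch, not in the accepted answer): str.extract can pull the text after the final dot in one step, skipping the split and list handling entirely.

import polars as pl

df = pl.DataFrame({
    "type": ["A", "O", "B", "O"],
    "id": ["CASH", "ORB.A123", "CHECK", "OTC.BV32"],
})

out = df.with_columns(
    sub_id=pl.when(pl.col("type") == "O").then(
        # Capture everything after the last '.' in the id
        pl.col("id").str.extract(r"\.([^.]+)$", group_index=1)
    )
)
print(out)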
78,891,481 | 2024-8-20 | https://stackoverflow.com/questions/78891481/pandas-string-replace-with-regex-argument-for-non-regex-replacements | Suppose I have a dataframe in which I want to replace a non-regex substring consisting only of characters (i.e. a-z, A-Z) and/or digits (i.e. 0-9) via pd.Series.str.replace. The docs state that this function is equivalent to str.replace or re.sub(), depending on the regex argument (default False). Apart from most likely being overkill, are there any downsides to consider if the function was called with regex=True for non-regex replacements (e.g. performance)? If so, which ones? Of course, I am not suggesting using the function in this way. Example: Replace 'Elephant' in the below dataframe. import pandas as pd data = {'Animal_Name': ['Elephant African', 'Elephant Asian', 'Elephant Indian', 'Elephant Borneo', 'Elephant Sumatran']} df = pd.DataFrame(data) df = df['Animal_Name'].str.replace('Elephant', 'Tiger', regex=True) | Special characters! Using regular expressions with plain words is generally fine (aside from efficiency concerns), there will however be an issue when you have special characters. This is an often overlooked issue and I've seen many people not understanding why their str.replace failed. Pandas even changed the default regex=True to regex=False, and the original reason for that (#GH24804) was that str.replace('.', '') would remove all characters, which is expected if you know regex, but not at all if you don't. For example, let's try to replace 1.5 with 2.3 and the $ currency by Β£: df = pd.DataFrame({'input': ['This item costs $1.5.', 'We need 125 units.']}) df['out1'] = df['input'].str.replace('1.5', '2.3', regex=False) df['out1_regex'] = df['input'].str.replace('1.5', '2.3', regex=True) df['out2'] = df['input'].str.replace('$', 'Β£', regex=False) df['out2_regex'] = df['input'].str.replace('$', 'Β£', regex=True) Output: input out1 out1_regex \ 0 This item costs $1.5. This item costs $2.3. This item costs $2.3. 1 We need 125 units. We need 125 units. We need 2.3 units. out2 out2_regex 0 This item costs Β£1.5. This item costs $1.5.Β£ 1 We need 125 units. We need 125 units.Β£ Since . and $ have a special meaning in a regex, those cannot be used as is and should have been escaped (1\.5 / \$), which can be done programmatically with re.escape. How does str.replace decide to use a regex or a plain string operation? a pure python replacement will be used if: regex=False pat is a string (passing a compiled regex with regex=False will trigger a ValueError) case is not False no flags are set repl is not a callable In all other cases, re.sub will be used. The code that does this is is core/strings/object_array.py: def _str_replace( self, pat: str | re.Pattern, repl: str | Callable, n: int = -1, case: bool = True, flags: int = 0, regex: bool = True, ): if case is False: # add case flag, if provided flags |= re.IGNORECASE if regex or flags or callable(repl): if not isinstance(pat, re.Pattern): if regex is False: pat = re.escape(pat) pat = re.compile(pat, flags=flags) n = n if n >= 0 else 0 f = lambda x: pat.sub(repl=repl, string=x, count=n) else: f = lambda x: x.replace(pat, repl, n) Efficiency Considering a pattern without special characters, regex=True is about 6 times slower than regex=False in the linear regime: | 3 | 3 |
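A minimal sketch of the re.escape workaround the answer mentions, reusing the same example frame: escaping the literal makes regex=True behave like a plain substring replacement even when the pattern contains '.' or '$'.

import re
import pandas as pd

df = pd.DataFrame({"input": ["This item costs $1.5.", "We need 125 units."]})

# re.escape neutralizes the metacharacters, so both calls now give the same result
df["escaped_regex"] = df["input"].str.replace(re.escape("1.5"), "2.3", regex=True)
df["plain"] = df["input"].str.replace("1.5", "2.3", regex=False)
print(df)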
78,889,556 | 2024-8-19 | https://stackoverflow.com/questions/78889556/create-date-range-with-predefined-number-of-periods-in-polars | When I create a date range in pandas, I often use the periods argument. Something like this: pd.date_range(start='1/1/2018', periods=8) DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') What would be the equivalent way in polars? I am missing the periods input parameter in pl.date_range. Having said that, there's probably an easy and clever solution ;-) | Adding a periods argument has been an open feature request for a while now. Until the request has been implemented, you can make start an expression and create end by offsetting start by the desired number of periods (using pl.Expr.dt.offset_by). start = pl.lit("1/1/2018").str.to_date() pl.date_range(start=start, end=start.dt.offset_by("7d"), eager=True) shape: (8,) Series: 'literal' [date] [ 2018-01-01 2018-01-02 2018-01-03 2018-01-04 2018-01-05 2018-01-06 2018-01-07 2018-01-08 ] | 3 | 4 |
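Until the linked feature request lands, the offset trick from the answer can be wrapped in a small helper. The function name and the restriction to day-based intervals are assumptions made for this sketch.

import datetime
import polars as pl

def date_range_periods(start: datetime.date, periods: int, interval_days: int = 1) -> pl.Series:
    # Hypothetical helper: emulate pandas' `periods` by deriving the end bound
    start_expr = pl.lit(start)
    end_expr = start_expr.dt.offset_by(f"{interval_days * (periods - 1)}d")
    return pl.date_range(start=start_expr, end=end_expr, interval=f"{interval_days}d", eager=True)

print(date_range_periods(datetime.date(2018, 1, 1), periods=8))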
78,879,247 | 2024-8-16 | https://stackoverflow.com/questions/78879247/ignoring-undefined-symbol-errors-in-ctypes-library-load | I'm loading a not-owned-by-me library with Python's ctypes module as ctypes.CDLL("libthirdparty.so") which produces an error undefined symbol: g_main_context_push_thread_default because libthirdparty.so was overlinking a lot of unneeded / unused stuff like glib. In this particular case, I can work around this problem successfully by LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0 python ...-ing glib, but I'm curious in two generic related questions touching on how ctypes works: Is it possible to ask ctypes / dlopen to ignore undefined symbols, given that I know that these undefined symbols are not used on the code path I'm interested in (or otherwise I would be okay if it crashed in a hard way)? Is it possible to ask ctypes to load several libraries at once or emulate LD_PRELOAD-ing? I tried to do ctypes.CDLL("/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0") in attempt to force-load glib in the Python's process before loading ctypes.CDLL("libthirdparty.so"), but it did not help. Thanks! | Listing [Python.Docs]: ctypes - A foreign function library for Python. What you're after, is [Man7]: DLOPEN (3) (emphasis is mine): RTLD_LAZY Perform lazy binding. Resolve symbols only as the code that references them is executed. If the symbol is never referenced, then it is never resolved. (Lazy binding is performed only for function references; references to variables are always immediately bound when the shared object is loaded). Since glibc 2.1.1, this flag is overridden by the effect of the LD_BIND_NOW environment variable. Here's an example: common.h: #pragma once #include <stdio.h> #define C_TAG "From C" #define PRINT_MSG_0() printf("%s - [%s] (%d) - [%s]\n", C_TAG, __FILE__, __LINE__, __FUNCTION__) dll01.h: #pragma once #if defined(_WIN32) # define DLL01_EXPORT_API __declspec(dllexport) #else # define DLL01_EXPORT_API #endif #if defined(__cplusplus) extern "C" { #endif DLL01_EXPORT_API int inner(); #if defined(__cplusplus) } #endif dll01.c: #include "dll01.h" #include "common.h" # if !defined(NO_EXPORTS) int inner() { PRINT_MSG_0(); return 0; } #endif dll00.c: #include <stdio.h> #include "common.h" #include "dll01.h" #if defined(_WIN32) # define DLL00_EXPORT_API __declspec(dllexport) #else # define DLL00_EXPORT_API #endif #if defined(__cplusplus) extern "C" { #endif DLL00_EXPORT_API int simple(); DLL00_EXPORT_API int innerWrapper(); #if defined(__cplusplus) } #endif int simple() { PRINT_MSG_0(); return 0; } int innerWrapper() { PRINT_MSG_0(); return inner(); } code00.py: #!/usr/bin/env python import ctypes as cts import os import sys DLL_NAME = "./libdll00.{:s}".format("dll" if sys.platform[:3].lower() == "win" else "so") def main(*argv): mode = cts.DEFAULT_MODE for arg in argv: flag = getattr(os, f"RTLD_{arg.upper()}", None) if flag is not None: mode |= flag print(f"Loading .dll (mode: {mode})...") dll = cts.CDLL(DLL_NAME, mode=mode) for func_name in ("simple", "innerWrapper"): print(f"Loading {func_name} function...") func = getattr(dll, func_name) func.argtypes = () func.restype = cts.c_int res = func() print(f"{func_name} returned: {res}") if __name__ == "__main__": print( "Python {:s} {:03d}bit on {:s}\n".format( " ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform, ) ) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) Output: (qaic-env) 
[cfati@cfati-5510-0:/mnt/e/Work/Dev/StackExchange/StackOverflow/q078879247]> ~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit prompt]> ls code00.py common.h dll00.c dll01.c dll01.h [064bit prompt]> [064bit prompt]> gcc -fPIC -shared dll01.c -o libdll01.so [064bit prompt]> LIBRARY_PATH="${LIBRARY_PATH}:." gcc -fPIC -shared dll00.c -o libdll00.so -ldll01 [064bit prompt]> ls code00.py common.h dll00.c dll01.c dll01.h libdll00.so libdll01.so [064bit prompt]> export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:." [064bit prompt]> ldd libdll00.so linux-vdso.so.1 (0x00007ffef55a7000) libdll01.so => ./libdll01.so (0x00007f032db60000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f032d94c000) /lib64/ld-linux-x86-64.so.2 (0x00007f032db6c000) [064bit prompt]> [064bit prompt]> # ----- Normal libdll01.so [064bit prompt]> # --- Symbol resolution: dlopen [064bit prompt]> python ./code00.py Python 3.8.10 (default, Jul 29 2024, 17:02:10) [GCC 9.4.0] 064bit on linux Loading .dll (mode: 0)... Loading simple function... From C - [dll00.c] (26) - [simple] simple returned: 0 Loading innerWrapper function... From C - [dll00.c] (33) - [innerWrapper] From C - [dll01.c] (8) - [inner] innerWrapper returned: 0 Done. [064bit prompt]> # --- Symbol resolution: lazy [064bit prompt]> python ./code00.py lazy Python 3.8.10 (default, Jul 29 2024, 17:02:10) [GCC 9.4.0] 064bit on linux Loading .dll (mode: 1)... Loading simple function... From C - [dll00.c] (26) - [simple] simple returned: 0 Loading innerWrapper function... From C - [dll00.c] (33) - [innerWrapper] From C - [dll01.c] (8) - [inner] innerWrapper returned: 0 Done. [064bit prompt]> [064bit prompt]> gcc -fPIC -shared dll01.c -o libdll01.so -DNO_EXPORTS [064bit prompt]> # ----- Messed up (undefined "inner" function) libdll01.so [064bit prompt]> # --- Symbol resolution: dlopen [064bit prompt]> python ./code00.py Python 3.8.10 (default, Jul 29 2024, 17:02:10) [GCC 9.4.0] 064bit on linux Loading .dll (mode: 0)... Traceback (most recent call last): File "./code00.py", line 41, in <module> rc = main(*sys.argv[1:]) File "./code00.py", line 18, in main dll = cts.CDLL(DLL_NAME, mode=mode) File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__ self._handle = _dlopen(self._name, mode) OSError: ./libdll00.so: undefined symbol: inner [064bit prompt]> [064bit prompt]> # --- Symbol resolution: lazy [064bit prompt]> python ./code00.py lazy Python 3.8.10 (default, Jul 29 2024, 17:02:10) [GCC 9.4.0] 064bit on linux Loading .dll (mode: 1)... Loading simple function... From C - [dll00.c] (26) - [simple] simple returned: 0 Loading innerWrapper function... From C - [dll00.c] (33) - [innerWrapper] python: symbol lookup error: ./libdll00.so: undefined symbol: inner Might also worth reading: [SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer) [SO]: Resolving circular shared-object dependencies with ctypes/cffi (@CristiFati's answer) Regarding your 2nd question, loading the library beforehand has no effect cause it can be found by the loader anyway, so no matter if loaded manually before or automatically by libthirdparty.so it's still the (same) wrong one. | 2 | 2 |
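On the second part of the question above (emulating LD_PRELOAD from Python): loading the dependency beforehand with default flags does not help because ctypes opens libraries with RTLD_LOCAL by default, so their symbols stay private. A hedged sketch of both options, using the library names from the question:

import ctypes
import os

# Option 1 - emulate LD_PRELOAD: RTLD_GLOBAL makes glib's symbols visible to any
# library dlopen()ed afterwards, which is why a plain preceding CDLL call did not work.
ctypes.CDLL("libglib-2.0.so.0", mode=ctypes.RTLD_GLOBAL)
lib = ctypes.CDLL("libthirdparty.so")

# Option 2 - defer resolution of unused function symbols instead, as shown above:
# lib = ctypes.CDLL("libthirdparty.so", mode=os.RTLD_LAZY)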
78,890,348 | 2024-8-20 | https://stackoverflow.com/questions/78890348/merging-the-two-replace-methods | Is it possible to merge the two "replace" methods in the example below into one? The intention is for scalability. import pandas as pd custom_groups = {"odd": [1, 3, 5], "even": [2, 4]} s = pd.Series([1, 2, 3, 4, 5]) s.replace(custom_groups["odd"], "odd").replace(custom_groups["even"], "even") | Most pythonic/efficient approach: reverse the dictionary with a dictionary comprehension and replace once: d = {v:k for k, l in custom_groups.items() for v in l} s.replace(d) Output: 0 odd 1 even 2 odd 3 even 4 odd dtype: object Intermediate d: {1: 'odd', 3: 'odd', 5: 'odd', 2: 'even', 4: 'even'} timings This is ~2x faster than using a Series+explode on small datasets, and tends to the same speed for large datasets. ## OP's example # dictionary comprehension 745 Β΅s Β± 31.9 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) # Series explode 1.58 ms Β± 213 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) ## 1000 keys dictionary / 10 items per list / 10_000 items Series # dictionary comprehension 163 ms Β± 6.61 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) # Series explode 203 ms Β± 64.4 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) generalization If you have to do this often with different dictionaries, you can even use a function to invert the dictionary def invert(d): return {v:k for k, l in custom_groups.items() for v in (l if isinstance(l, list) else [l])} s.replace(invert(custom_groups)) | 2 | 2 |
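A closely related alternative to the record above (a sketch, not in the answer): Series.map with the inverted dictionary, plus fillna to keep unmapped values. The extra value 7 is added here only to show the unmapped case.

import pandas as pd

custom_groups = {"odd": [1, 3, 5], "even": [2, 4]}
s = pd.Series([1, 2, 3, 4, 5, 7])        # 7 has no mapping and should be kept as-is

inverted = {v: k for k, values in custom_groups.items() for v in values}

# map() is typically faster than replace() for exact lookups;
# fillna(s) restores any value that has no entry in the mapping
out = s.map(inverted).fillna(s)
print(out)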
78,889,767 | 2024-8-19 | https://stackoverflow.com/questions/78889767/polars-chain-multiple-operations-on-select-with-values-counts | I'm working with a Polars dataframe and I want to perform a series of operations using the .select() method. However, I'm facing problems when I try to apply value_counts() followed by unnest() to get separate columns instead of a struct column. If I just use the method alone, then I don't have any issues: ( df .select( pl.col("CustomerID"), pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "State"]).first().over("CustomerID")).unnest("Country") .unique(maintain_order=True) ) But, since I'm doing a series of operations like this: ( df .select( pl.col("CustomerID"), pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "Count"]).first().over("CustomerID").unnest("Country"), Days_Since_Last_Purchase = pl.col("InvoiceDate").max() - pl.col("InvoiceDate").max().over("CustomerID"), ) .unique(maintain_order=True) ) I'm facing the following error: AttributeError: 'Expr' object has no attribute 'unnest' Example Data : import datetime import polars as pl data = {'InvoiceNo': ['541431', 'C541433', '537626', '537626', '537626', '537626', '537626', '537626', '537626', '537626'], 'StockCode': ['23166', '23166', '84997D', '22729', '22492', '22727', '22774', '22195', '22805', '22771'], 'Description': ['MEDIUM CERAMIC TOP STORAGE JAR', 'MEDIUM CERAMIC TOP STORAGE JAR', 'PINK 3 PIECE POLKADOT CUTLERY SET', 'ALARM CLOCK BAKELIKE ORANGE', 'MINI PAINT SET VINTAGE ', 'ALARM CLOCK BAKELIKE RED ', 'RED DRAWER KNOB ACRYLIC EDWARDIAN', 'LARGE HEART MEASURING SPOONS', 'BLUE DRAWER KNOB ACRYLIC EDWARDIAN', 'CLEAR DRAWER KNOB ACRYLIC EDWARDIAN'], 'Quantity': [74215, -74215, 6, 4, 36, 4, 12, 12, 12, 12], 'InvoiceDate': [datetime.datetime(2011, 1, 18, 10, 1), datetime.datetime(2011, 1, 18, 10, 17), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57), datetime.datetime(2010, 12, 7, 14, 57)], 'UnitPrice': [1.0399999618530273, 1.0399999618530273, 3.75, 3.75, 0.6499999761581421, 3.75, 1.25, 1.649999976158142, 1.25, 1.25], 'CustomerID': ['12346', '12346', '12347', '12347', '12347', '12347', '12347', '12347', '12347', '12347'], 'Country': ['United Kingdom', 'United Kingdom', 'Iceland', 'Iceland', 'Iceland', 'Iceland', 'Iceland', 'Iceland', 'Iceland', 'Iceland'], 'Transaction_Status': ['Completed', 'Cancelled', 'Completed', 'Completed', 'Completed', 'Completed', 'Completed', 'Completed', 'Completed', 'Completed']} df = pl.DataFrame(data) df | Note that in your first example, you didn't call .unnest() directly on the value_counts() expression, but on the select context. This can also be done if the select context contains multiple expressions. 
( df .select( pl.col("CustomerID"), pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "State"]).first().over("CustomerID"), Days_Since_Last_Purchase = pl.col("InvoiceDate").max() - pl.col("InvoiceDate").max().over("CustomerID"), ) .unnest("Country") .unique(maintain_order=True) ) shape: (2, 4) ββββββββββββββ¬βββββββββββββββββ¬ββββββββ¬βββββββββββββββββββββββββββ β CustomerID β Country β State β Days_Since_Last_Purchase β β --- β --- β --- β --- β β str β str β u32 β duration[ΞΌs] β ββββββββββββββͺβββββββββββββββββͺββββββββͺβββββββββββββββββββββββββββ‘ β 12346 β United Kingdom β 2 β 0Β΅s β β 12347 β Iceland β 8 β 41d 19h 20m β ββββββββββββββ΄βββββββββββββββββ΄ββββββββ΄βββββββββββββββββββββββββββ | 2 | 1 |
78,889,055 | 2024-8-19 | https://stackoverflow.com/questions/78889055/in-python-why-does-preallocation-of-a-numpy-array-fail-to-limit-its-printed-pre | Here is a minimal example: import numpy as np np.set_printoptions(linewidth=1000, precision=3) # First attempt fails to limit the printed precision of x x = np.array([None]) x[0] = 1/3 print(x) # Second attempt succeeds x = [None] x[0] = 1/3 x = np.array(x) print(x) Running this script yields [0.3333333333333333] [0.333] Why does the "First attempt" above fail to limit the printed precision of x while the second attempt succeeds? | When running: x = np.array([None]) x[0] = 1/3 print(x) x is an object array (that contains python floats), not an array with a float dtype like your second attempt: array([0.3333333333333333], dtype=object) This ignores the print options. You can reproduce this simply with: print(np.array([1/3], dtype=object), np.array([1/3])) Output: [0.3333333333333333] [0.333] As a workaround, convert your array to float: print(x.astype(np.float64)) # [0.333] | 2 | 5 |
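The workaround at the end of that answer converts after the fact; the preallocation itself can also be given a float dtype so the array never becomes object-typed. A minimal sketch:

import numpy as np

np.set_printoptions(linewidth=1000, precision=3)

# np.empty with an explicit float dtype preallocates without creating an object array,
# so the print options stay in effect
x = np.empty(1, dtype=np.float64)
x[0] = 1 / 3
print(x)  # [0.333]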
78,888,863 | 2024-8-19 | https://stackoverflow.com/questions/78888863/compute-the-number-of-unique-combinations-while-excluding-those-containing-missi | I'd like to count the number of unique values when combining several columns at once. My idea so far was to use pl.struct(...).n_unique(), which works fine when I consider missing values as a unique value: import polars as pl df = pl.DataFrame({ "x": ["a", "a", "b", "b"], "y": [1, 1, 2, None], }) df.with_columns(foo=pl.struct("x", "y").n_unique()) shape: (4, 3) βββββββ¬βββββββ¬ββββββ β x β y β foo β β --- β --- β --- β β str β i64 β u32 β βββββββͺβββββββͺββββββ‘ β a β 1 β 3 β β a β 1 β 3 β β b β 2 β 3 β β b β null β 3 β βββββββ΄βββββββ΄ββββββ However, sometimes I want to exclude a combination from the count if it contains any number of missing values. In the example above, I'd like foo to be 2. However, using .drop_nulls() before counting doesn't work and produces the same output as above. df.with_columns(foo=pl.struct("x", "y").drop_nulls().n_unique()) Is there a way to do this using only Polars expressions? | pl.Expr.drop_nulls does not drop the row as the entirety of the struct is indeed not null. To still achieve the desired result, you can filter out all rows which contain a null values in any of the columns of interest using pl.Expr.filter. ( df .with_columns( foo=pl.struct("x", "y").filter( ~pl.any_horizontal(pl.col("x", "y").is_null()) ).n_unique() ) ) shape: (4, 3) βββββββ¬βββββββ¬ββββββ β x β y β foo β β --- β --- β --- β β str β i64 β u32 β βββββββͺβββββββͺββββββ‘ β a β 1 β 2 β β a β 1 β 2 β β b β 2 β 2 β β b β null β 2 β βββββββ΄βββββββ΄ββββββ | 2 | 2 |
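A DataFrame-level variant of the accepted approach (a sketch): dropping the incomplete rows before counting gives the same result, at the cost of computing the count outside the expression and broadcasting it back in.

import polars as pl

df = pl.DataFrame({
    "x": ["a", "a", "b", "b"],
    "y": [1, 1, 2, None],
})

# Rows with a null in x or y are excluded before the struct is counted
n = df.drop_nulls(subset=["x", "y"]).select(pl.struct("x", "y").n_unique()).item()

out = df.with_columns(foo=pl.lit(n, dtype=pl.UInt32))   # UInt32 matches n_unique's dtype
print(out)   # foo == 2 on every row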
78,888,400 | 2024-8-19 | https://stackoverflow.com/questions/78888400/how-do-i-override-django-db-backends-logging-to-work-when-debug-false | In django's LOGGING configuration for the builtin django.db.backends it states that: "For performance reasons, SQL logging is only enabled when settings.DEBUG is set to True, regardless of the logging level or handlers that are installed." As a result the following LOGGING configuration, which is correctly set up to issue debug level logs showing DB queries, will NOT output the messages I need: DEBUG = False LOGGING = { "version": 1, "disable_existing_loggers": True, "root": {"handlers": [ "gcp_structured_logging"]}, "handlers": { "gcp_structured_logging": { "level": "DEBUG", "class": "django_gcp.logging.GoogleStructuredLogsHandler", } }, "loggers": { 'django.db.backends': { 'handlers': ["gcp_structured_logging"], 'level': 'DEBUG', 'propagate': True, }, }, } This is preventing me from activating this logging in production, where of course I'm not going to durn on DEBUG=True in my settings but where I need to log exactly this information. Ironically, I need this in order to debug a performance issue (I plan to run this for a short time in production and cat my logs so I can set up a realistic scenario for a load test and some benchmarking on the database). How can I override django's override so that sql queries get logged as I intend? NOTE willeM_ Van Onsem's answer as accepted is correct (because it totally is) but it's worth noting that in the end, I came across a library called django-silk. Whilst not an answer to this question directly, silk actually covers the capability I was trying to build for myself when I found this peculiarity of how the db logging works. Perhaps someone else trying to achieve the same thing will make good use of it. | Fortunately we can override this. Indeed, by setting the .force_debug_cursor of the connection to True, for example in one of the AppConfigs (any app config) exists: # my_app/apps.py from django.apps import AppConfig from django.db import connection class MyAppConfig(AppConfig): name = 'my_app' def ready(self): connection.force_debug_cursor = True This works because Django decides whether to log with [GitHub]: @property def queries_logged(self): return self.force_debug_cursor or settings.DEBUG Another option is to work with the CaptureQueriesContext [GitHub], this is a context processor that sets .force_debug_cursor to true in the scope of the context. So if you know where to look for the slow query, you can use: from django.db import connection from django.test.utils import with CaptureQueriesContext(connection): list(MyModel.objects.all()) # sample query | 2 | 2 |
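For reference, a runnable form of the scoped CaptureQueriesContext approach from that answer, with the full import path. MyModel is a placeholder for any model in the project, and a configured Django settings module is assumed.

from django.db import connection
from django.test.utils import CaptureQueriesContext

# Query capture is enabled only inside the block, and the captured SQL is
# available afterwards regardless of settings.DEBUG
with CaptureQueriesContext(connection) as ctx:
    list(MyModel.objects.all())        # MyModel: placeholder for one of your models

for query in ctx.captured_queries:     # each entry is a dict with 'sql' and 'time'
    print(query["time"], query["sql"])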
78,888,819 | 2024-8-19 | https://stackoverflow.com/questions/78888819/comparing-multiple-values-across-columns-of-pandas-dataframe-based-on-column-nam | I have a pandas dataframe with a number of thresholds and values associated with epochs. I want to compare the all of the thresholds with their associated values simultaneously to remove rows as needed. I will be doing this many times and the letter designations can change each time I create this dataframe, but there will always be a 1:1 association for threshold and value. The number of thresholds and values can change each time the dataframe is created though. The end goal is to get all of the epochs where all of the values are below their respective thresholds. The sample below does not have any that fit this requirement, but I've included it so people have a visual for what I'm working with. For now, how would I do this comparison in an efficient way? epoch thresholdA thresholdB thresholdC thresholdD thresholdE thresholdF thresholdG valueA valueB valueC valueD valueE valueF valueG 0 1723489899000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 0.292969 0.00000 19.775391 1 1723489900000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 0.292969 0.00000 19.775391 2 1723489901000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 0.292969 0.00000 19.775391 3 1723489902000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.780273 2.929688 0.00000 31.054688 4 1723489903000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.780273 2.929688 0.00000 31.054688 5 1723489904000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.780273 2.929688 0.00000 31.054688 6 1723489905000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 31.347656 7 1723489906000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 31.347656 8 1723489907000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 31.347656 9 1723489908000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 31.347656 10 1723489909000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 31.347656 11 1723489910000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 31.347656 12 1723489911000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 37.500000 13 1723489912000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 37.500000 14 1723489913000 30 120 30 30 120 30 2 299.311523 -3.828125 -2.841797 199.541016 2.929688 0.00000 37.500000 15 1723489914000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 40.869141 16 1723489915000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 40.869141 17 1723489916000 30 120 30 30 120 30 2 299.311523 -4.785156 -2.841797 199.541016 2.929688 0.00000 40.869141 18 1723489950000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 3.222656 19 1723489951000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 3.222656 20 1723489952000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 3.222656 21 1723489953000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 0.439453 22 1723489954000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 0.439453 23 1723489955000 30 120 30 30 
120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 0.439453 24 1723489956000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 0.585938 25 1723489957000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 0.585938 26 1723489958000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 0.585938 27 1723489959000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 0.292969 81.37207 0.439453 28 1723489960000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 0.292969 81.37207 0.439453 29 1723489961000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 0.292969 81.37207 0.439453 30 1723489962000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 1.171875 31 1723489963000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 1.171875 32 1723489964000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 1.171875 33 1723489965000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 5.566406 34 1723489966000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 5.566406 35 1723489967000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 2.929688 81.37207 5.566406 36 1723489968000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 6.005859 37 1723489969000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 6.005859 38 1723489970000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 6.005859 39 1723489971000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 2.929688 81.37207 4.687500 40 1723489972000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 2.929688 81.37207 4.687500 41 1723489973000 30 120 30 30 120 30 2 199.301758 -3.828125 -2.841797 129.677750 2.929688 81.37207 4.687500 42 1723489974000 30 120 30 30 120 30 2 199.301758 -4.785156 -2.841797 129.677750 0.292969 81.37207 4.833984 | Here's one approach: Use df.filter for both 'value*' and 'threshold*' columns. For 'threshold*', chain df.values to allow element-wise comparison on shape, rather than on column labels. Afterwards check df.all row-wise (axis=1) and use for boolean indexing. Minimal reproducible example import pandas as pd data = {'epoch': {0: 1723489899000, 1: 1723489900000}, 'thresholdA': {0: 30, 1: 30}, 'thresholdB': {0: 120, 1: 120}, 'valueA': {0: 299.311523, 1: 10}, 'valueB': {0: -4.785156, 1: -4.785156}} df = pd.DataFrame(data) epoch thresholdA thresholdB valueA valueB 0 1723489899000 30 120 299.311523 -4.785156 1 1723489900000 30 120 10.000000 -4.785156 # aim: delete row 0, since valueA >= thresholdA Code m = (df.filter(regex=r'^value') < df.filter(regex=r'^threshold').values).all(axis=1) df[m] epoch thresholdA thresholdB valueA valueB 1 1723489900000 30 120 10.0 -4.785156 N.B. The above assumes not just a "1:1 association for threshold and value", but also that the columns occur in the same order. If not, you should sort them first. | 2 | 2 |
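Following the answer's note about column order: a sketch that sorts both blocks by name before comparing, so the positional pairing of valueX with thresholdX no longer depends on the layout of the frame. The two-row frame below is a trimmed-down stand-in for the real data.

import pandas as pd

data = {
    "epoch": [1723489899000, 1723489900000],
    "thresholdB": [120, 120], "thresholdA": [30, 30],   # deliberately out of order
    "valueA": [299.311523, 10.0], "valueB": [-4.785156, -4.785156],
}
df = pd.DataFrame(data)

values = df.filter(regex=r"^value").sort_index(axis=1)
thresholds = df.filter(regex=r"^threshold").sort_index(axis=1)

# Sorting both blocks by column name pairs A with A, B with B, etc.,
# so the element-wise comparison is safe even if the columns are shuffled
mask = (values < thresholds.to_numpy()).all(axis=1)
print(df[mask])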
78,886,008 | 2024-8-19 | https://stackoverflow.com/questions/78886008/handling-multiple-operations-on-dataframe-columns-with-polars | I'm trying to select all columns of a DataFrame and perform multiple operations on each column using Polars. For example, I discovered that I can use the following code to count non-null values in each column: df.select(pl.col("*").is_not_null().sum() However, when I attempt to concatenate multiple operations like this: ( df .select( pl.col("*").is_not_null().sum().alias("foo"), pl.col("*").is_null().sum().alias("bar") ) ) I encounter a Duplicated Error. This seems to happen because Polars tries to perform the operations but ends up using the same column names, which causes the duplication issue. To work around this, I'm currently using the following approach: a = ( df .select( pl.col("*").is_null().sum(), ) .transpose(include_header=True) .rename( {"column_0" : "null_count"} ) ) b = ( df .select( pl.col("*").is_not_null().sum(), ) .transpose(include_header=True) .rename( {"column_0" : "not_null_count"} ) ) a.join(b, how="left", on="column") My goal is to generate an output that looks like this: shape: (8, 3) βββββββββββββββ¬βββββββββββββ¬βββββββββββββββββ β column β null_count β not_null_count β β --- β --- β --- β β str β u32 β u32 β βββββββββββββββͺβββββββββββββͺβββββββββββββββββ‘ β InvoiceNo β 0 β 541909 β β StockCode β 0 β 541909 β β Description β 1454 β 540455 β β Quantity β 0 β 541909 β β InvoiceDate β 0 β 541909 β β UnitPrice β 0 β 541909 β β CustomerID β 135080 β 406829 β β Country β 0 β 541909 β βββββββββββββββ΄βββββββββββββ΄βββββββββββββββββ | Let's use a simple test input df = pl.DataFrame({ "InvoiceNo": [1,2,3], "StockCode": [1,None,3], "Description": [None,None,6] }) To get your desired output, you can use unpivot() to transpose the DataFrame. Also, you don't really need to calculate both values, as soon as you calculated count of null values, you can use len(df) - null_count to calculate not_null_count. 
( df .select(pl.all().is_null().sum()) .unpivot(variable_name="column", value_name="null_count") .with_columns(not_null_count = len(df) - pl.col.null_count) ) ββββββββββββββββ¬βββββββββββββ¬βββββββββββββββββ β column β null_count β not_null_count β β --- β --- β --- β β str β u32 β u32 β ββββββββββββββββͺβββββββββββββͺβββββββββββββββββ‘ β InvoiceNo β 0 β 3 β β StockCode β 1 β 2 β β Description β 2 β 1 β ββββββββββββββββ΄βββββββββββββ΄βββββββββββββββββ And if you want your result in non-unpivoted shape (as in, one row an multiple columns), you can use .name.suffix(): ( df .with_columns( pl.all().is_null().sum().name.suffix("_is_null"), pl.all().is_not_null().sum().name.suffix("_is_not_null") ) ) βββββββββββββ¬ββββββββββββ¬βββββββββββββββ¬ββββββββββββββββ¬ββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ β InvoiceNo β StockCode β Description β InvoiceNo_is_ β β¦ β Description β InvoiceNo_is β StockCode_is β Description β β --- β --- β --- β null β β _is_null β _not_null β _not_null β _is_not_null β β i64 β i64 β i64 β --- β β --- β --- β --- β --- β β β β β u32 β β u32 β u32 β u32 β u32 β βββββββββββββͺββββββββββββͺβββββββββββββββͺββββββββββββββββͺββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββ‘ β 1 β 1 β null β 0 β β¦ β 2 β 3 β 2 β 1 β β 2 β null β null β 0 β β¦ β 2 β 3 β 2 β 1 β β 3 β 3 β 6 β 0 β β¦ β 2 β 3 β 2 β 1 β βββββββββββββ΄ββββββββββββ΄βββββββββββββββ΄ββββββββββββββββ΄ββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ And if you want several different aggregates, you can do unpivot() first: ( df .unpivot(variable_name="column") .group_by("column") .agg( not_null_count = pl.col.value.is_not_null().sum(), mean = pl.col.value.mean() ) ) βββββββββββββββ¬βββββββββββββββββ¬βββββββ β column β not_null_count β mean β β --- β --- β --- β β str β u32 β f64 β βββββββββββββββͺβββββββββββββββββͺβββββββ‘ β StockCode β 2 β 2.0 β β InvoiceNo β 3 β 2.0 β β Description β 1 β 6.0 β βββββββββββββββ΄βββββββββββββββββ΄βββββββ | 3 | 1 |
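A shortcut worth noting for the record above (a sketch, not from the answer): DataFrame.null_count() already produces the one-row null summary, so only the unpivot and the subtraction from the frame height remain.

import polars as pl

df = pl.DataFrame({
    "InvoiceNo": [1, 2, 3],
    "StockCode": [1, None, 3],
    "Description": [None, None, 6],
})

# null_count() gives one row with the per-column null counts;
# the non-null count is the frame height minus that value
summary = (
    df.null_count()
    .unpivot(variable_name="column", value_name="null_count")
    .with_columns(not_null_count=df.height - pl.col("null_count"))
)
print(summary)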
78,888,584 | 2024-8-19 | https://stackoverflow.com/questions/78888584/compute-difference-between-dates-and-convert-into-weeks-months-years-in-polars-d | I have a pl.DataFrame with a start_date and end_date column. I need to compute the difference between those two columns and add new columns representing the result in days, weeks, months and years. I would be fine to get an approximate result, meaning dividing the days by 7 / 30 / 365. My problem is to convert the duration[ns] type into an integer type. import datetime import polars as pl df = pl.DataFrame( {"start_date": datetime.date(2024, 1, 1), "end_date": datetime.date(2024, 7, 31)} ) df = df.with_columns((pl.col("end_date") - pl.col("start_date")).alias("days")) print(df) shape: (1, 3) ββββββββββββββ¬βββββββββββββ¬βββββββββββββββ β start_date β end_date β days β β --- β --- β --- β β date β date β duration[ms] β ββββββββββββββͺβββββββββββββͺβββββββββββββββ‘ β 2024-01-01 β 2024-07-31 β 212d β ββββββββββββββ΄βββββββββββββ΄βββββββββββββββ | you can use dt.total_days() to extract hours from Duration datatype: df.with_columns( days = (pl.col.end_date - pl.col.start_date).total_days() ) ββββββββββββββ¬βββββββββββββ¬βββββββ β start_date β end_date β days β β --- β --- β --- β β date β date β i64 β ββββββββββββββͺβββββββββββββͺβββββββ‘ β 2024-01-01 β 2024-07-31 β 212 β ββββββββββββββ΄βββββββββββββ΄βββββββ | 3 | 2 |
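Extending that answer to the approximate week/month/year columns the question mentions — a sketch using integer division of the day count by 7 / 30 / 365:

import datetime
import polars as pl

df = pl.DataFrame({
    "start_date": [datetime.date(2024, 1, 1)],
    "end_date": [datetime.date(2024, 7, 31)],
})

days = (pl.col("end_date") - pl.col("start_date")).dt.total_days()

# Approximate conversions, as allowed by the question
out = df.with_columns(
    days=days,
    weeks=days // 7,
    months=days // 30,
    years=days // 365,
)
print(out)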
78,888,041 | 2024-8-19 | https://stackoverflow.com/questions/78888041/cast-multiple-columns-with-unix-epoch-to-datetime | I have a dataframe with multiple columns containing unix epochs. In my example I only use 2 of 13 columns I have. I'd like to cast all those columns to a datetime with UTC timezone in a single call to with_columns(). df = pl.from_repr(""" βββββββ¬βββββββββββββ¬βββββββββββββ¬βββββββββββββ β id β start_date β end_date β cancelable β β --- β --- β --- β --- β β i64 β i64 β i64 β bool β βββββββͺβββββββββββββͺβββββββββββββͺβββββββββββββ‘ β 1 β 1566637530 β 1566628686 β true β β 2 β 1561372720 β 1561358079 β true β β 3 β 1561374780 β 1561358135 β false β β 4 β 1558714718 β 1556188225 β false β β 5 β 1558715044 β 1558427697 β true β βββββββ΄βββββββββββββ΄βββββββββββββ΄βββββββββββββ """) Polars provides the user with pl.from_epoch. However, I didn't find a way to apply it to multiple columns as once. Expected result: shape: (5, 4) βββββββ¬ββββββββββββββββββββββ¬ββββββββββββββββββββββ¬βββββββββββββ β id β start_date β end_date β cancelable β β --- β --- β --- β --- β β i64 β datetime[ΞΌs] β datetime[ΞΌs] β bool β βββββββͺββββββββββββββββββββββͺββββββββββββββββββββββͺβββββββββββββ‘ β 1 β 2019-08-24 09:05:30 β 2019-08-24 06:38:06 β true β β 2 β 2019-06-24 10:38:40 β 2019-06-24 06:34:39 β true β β 3 β 2019-06-24 11:13:00 β 2019-06-24 06:35:35 β false β β 4 β 2019-05-24 16:18:38 β 2019-04-25 10:30:25 β false β β 5 β 2019-05-24 16:24:04 β 2019-05-21 08:34:57 β true β βββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββ So far, my code looks as follows. columns_epoch_to_timestamp: list[str] = [ "start_date", "end_date", ] df = df.with_columns(pl.col(*columns_epoch_to_timestamp)) | pl.from_epoch() accepts a single column name, but also Expr objects. column: str | Expr | Series | Sequence[int] pl.col() can select multiple columns, which we can pass directly: df.with_columns(pl.from_epoch(pl.col("start_date", "end_date"))) shape: (5, 4) βββββββ¬ββββββββββββββββββββββ¬ββββββββββββββββββββββ¬βββββββββββββ β id β start_date β end_date β cancelable β β --- β --- β --- β --- β β i64 β datetime[ΞΌs] β datetime[ΞΌs] β bool β βββββββͺββββββββββββββββββββββͺββββββββββββββββββββββͺβββββββββββββ‘ β 1 β 2019-08-24 09:05:30 β 2019-08-24 06:38:06 β true β β 2 β 2019-06-24 10:38:40 β 2019-06-24 06:34:39 β true β β 3 β 2019-06-24 11:13:00 β 2019-06-24 06:35:35 β false β β 4 β 2019-05-24 16:18:38 β 2019-04-25 10:30:25 β false β β 5 β 2019-05-24 16:24:04 β 2019-05-21 08:34:57 β true β βββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββ | 2 | 2 |
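The question above also asks for a UTC timezone on the result; a sketch that chains dt.replace_time_zone onto the same multi-column from_epoch call. Epoch values are UTC by definition, so attaching the zone without shifting the wall time is the right operation. The two-row frame is a reduced stand-in for the original data.

import polars as pl

df = pl.DataFrame({
    "id": [1, 2],
    "start_date": [1566637530, 1561372720],
    "end_date": [1566628686, 1561358079],
})

out = df.with_columns(
    pl.from_epoch(pl.col("start_date", "end_date"), time_unit="s")
      .dt.replace_time_zone("UTC")   # mark the parsed datetimes as UTC
)
print(out)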
78,887,466 | 2024-8-19 | https://stackoverflow.com/questions/78887466/largest-product-in-a-grid-not-in-the-same-direction | ERROR: type should be string, got "https://projecteuler.net/problem=11 Here is the question: int grid[20][20] = { {8, 2, 22, 97, 38, 15, 0, 40, 0, 75, 4, 5, 7, 78, 52, 12, 50, 77, 91, 8}, {49, 49, 99, 40, 17, 81, 18, 57, 60, 87, 17, 40, 98, 43, 69, 48, 4, 56, 62, 0}, {81, 49, 31, 73, 55, 79, 14, 29, 93, 71, 40, 67, 53, 88, 30, 3, 49, 13, 36, 65}, {52, 70, 95, 23, 4, 60, 11, 42, 69, 24, 68, 56, 1, 32, 56, 71, 37, 2, 36, 91}, {22, 31, 16, 71, 51, 67, 63, 89, 41, 92, 36, 54, 22, 40, 40, 28, 66, 33, 13, 80}, {24, 47, 32, 60, 99, 3, 45, 2, 44, 75, 33, 53, 78, 36, 84, 20, 35, 17, 12, 50}, {32, 98, 81, 28, 64, 23, 67, 10, 26, 38, 40, 67, 59, 54, 70, 66, 18, 38, 64, 70}, {67, 26, 20, 68, 2, 62, 12, 20, 95, 63, 94, 39, 63, 8, 40, 91, 66, 49, 94, 21}, {24, 55, 58, 5, 66, 73, 99, 26, 97, 17, 78, 78, 96, 83, 14, 88, 34, 89, 63, 72}, {21, 36, 23, 9, 75, 0, 76, 44, 20, 45, 35, 14, 0, 61, 33, 97, 34, 31, 33, 95}, {78, 17, 53, 28, 22, 75, 31, 67, 15, 94, 3, 80, 4, 62, 16, 14, 9, 53, 56, 92}, {16, 39, 5, 42, 96, 35, 31, 47, 55, 58, 88, 24, 0, 17, 54, 24, 36, 29, 85, 57}, {86, 56, 0, 48, 35, 71, 89, 7, 5, 44, 44, 37, 44, 60, 21, 58, 51, 54, 17, 58}, {19, 80, 81, 68, 5, 94, 47, 69, 28, 73, 92, 13, 86, 52, 17, 77, 4, 89, 55, 40}, {4, 52, 8, 83, 97, 35, 99, 16, 7, 97, 57, 32, 16, 26, 26, 79, 33, 27, 98, 66}, {88, 36, 68, 87, 57, 62, 20, 72, 3, 46, 33, 67, 46, 55, 12, 32, 63, 93, 53, 69}, {4, 42, 16, 73, 38, 25, 39, 11, 24, 94, 72, 18, 8, 46, 29, 32, 40, 62, 76, 36}, {20, 69, 36, 41, 72, 30, 23, 88, 34, 62, 99, 69, 82, 67, 59, 85, 74, 4, 36, 16}, {20, 73, 35, 29, 78, 31, 90, 1, 74, 31, 49, 71, 48, 86, 81, 16, 23, 57, 5, 54}, {1, 70, 54, 71, 83, 51, 54, 69, 16, 92, 33, 48, 61, 43, 52, 1, 89, 19, 67, 48} }; What is the greatest product of four adjacent numbers in the \"same direction\" (up, down, left, right, or diagonally) in the 20 by 20 grid? At first I read the question wrong and did not see the \"same direction\" part. I thought the rule is every number has to be adjacent to at least one other number in the grid, not at most. I was able to solve it fairly quickly (i also used ChatGpt to fix the syntax errors) because I am kind of new to this. 
Solution: #include <stdio.h> // Function to find the maximum product of four adjacent numbers in a grid int findMaxProduct(int grid[20][20], int* pos) { int maxProduct = 0; // Check horizontally for (int i = 0; i < 20; i++) { for (int j = 0; j < 20 - 3; j++) { int product = grid[i][j] * grid[i][j + 1] * grid[i][j + 2] * grid[i][j + 3]; printf(\"Checking horizontally at (%d, %d) -> product: %d\\n\", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i; pos[3] = j + 1; pos[4] = i; pos[5] = j + 2; pos[6] = i; pos[7] = j + 3; printf(\"New max horizontal product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\\n\", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check vertically for (int i = 0; i < 20 - 3; i++) { for (int j = 0; j < 20; j++) { int product = grid[i][j] * grid[i + 1][j] * grid[i + 2][j] * grid[i + 3][j]; printf(\"Checking vertically at (%d, %d) -> product: %d\\n\", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j; pos[4] = i + 2; pos[5] = j; pos[6] = i + 3; pos[7] = j; printf(\"New max vertical product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\\n\", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check diagonally (down-right) for (int i = 0; i < 20 - 3; i++) { for (int j = 0; j < 20 - 3; j++) { int product = grid[i][j] * grid[i + 1][j + 1] * grid[i + 2][j + 2] * grid[i + 3][j + 3]; printf(\"Checking diagonally down-right at (%d, %d) -> product: %d\\n\", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j + 1; pos[4] = i + 2; pos[5] = j + 2; pos[6] = i + 3; pos[7] = j + 3; printf(\"New max diagonal down-right product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\\n\", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check diagonally (down-left) for (int i = 0; i < 20 - 3; i++) { for (int j = 3; j < 20; j++) { int product = grid[i][j] * grid[i + 1][j - 1] * grid[i + 2][j - 2] * grid[i + 3][j - 3]; printf(\"Checking diagonally down-left at (%d, %d) -> product: %d\\n\", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j - 1; pos[4] = i + 2; pos[5] = j - 2; pos[6] = i + 3; pos[7] = j - 3; printf(\"New max diagonal down-left product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\\n\", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } return maxProduct; } int main(void) { int grid[20][20] = { {8, 2, 22, 97, 38, 15, 0, 40, 0, 75, 4, 5, 7, 78, 52, 12, 50, 77, 91, 8}, {49, 49, 99, 40, 17, 81, 18, 57, 60, 87, 17, 40, 98, 43, 69, 48, 4, 56, 62, 0}, {81, 49, 31, 73, 55, 79, 14, 29, 93, 71, 40, 67, 53, 88, 30, 3, 49, 13, 36, 65}, {52, 70, 95, 23, 4, 60, 11, 42, 69, 24, 68, 56, 1, 32, 56, 71, 37, 2, 36, 91}, {22, 31, 16, 71, 51, 67, 63, 89, 41, 92, 36, 54, 22, 40, 40, 28, 66, 33, 13, 80}, {24, 47, 32, 60, 99, 3, 45, 2, 44, 75, 33, 53, 78, 36, 84, 20, 35, 17, 12, 50}, {32, 98, 81, 28, 64, 23, 67, 10, 26, 38, 40, 67, 59, 54, 70, 66, 18, 38, 64, 70}, {67, 26, 20, 68, 2, 62, 12, 20, 95, 63, 94, 39, 63, 8, 40, 91, 66, 49, 94, 21}, {24, 55, 58, 5, 66, 73, 99, 26, 97, 17, 78, 78, 96, 83, 14, 88, 34, 89, 63, 72}, {21, 36, 23, 9, 75, 0, 76, 44, 20, 45, 35, 14, 0, 61, 33, 97, 34, 31, 33, 95}, {78, 17, 53, 28, 22, 75, 31, 67, 15, 94, 3, 80, 4, 62, 16, 14, 9, 53, 56, 92}, {16, 39, 5, 42, 96, 35, 
31, 47, 55, 58, 88, 24, 0, 17, 54, 24, 36, 29, 85, 57}, {86, 56, 0, 48, 35, 71, 89, 7, 5, 44, 44, 37, 44, 60, 21, 58, 51, 54, 17, 58}, {19, 80, 81, 68, 5, 94, 47, 69, 28, 73, 92, 13, 86, 52, 17, 77, 4, 89, 55, 40}, {4, 52, 8, 83, 97, 35, 99, 16, 7, 97, 57, 32, 16, 26, 26, 79, 33, 27, 98, 66}, {88, 36, 68, 87, 57, 62, 20, 72, 3, 46, 33, 67, 46, 55, 12, 32, 63, 93, 53, 69}, {4, 42, 16, 73, 38, 25, 39, 11, 24, 94, 72, 18, 8, 46, 29, 32, 40, 62, 76, 36}, {20, 69, 36, 41, 72, 30, 23, 88, 34, 62, 99, 69, 82, 67, 59, 85, 74, 4, 36, 16}, {20, 73, 35, 29, 78, 31, 90, 1, 74, 31, 49, 71, 48, 86, 81, 16, 23, 57, 5, 54}, {1, 70, 54, 71, 83, 51, 54, 69, 16, 92, 33, 48, 61, 43, 52, 1, 89, 19, 67, 48} }; int pos[8]; // To store the positions of the 4 adjacent numbers with the highest product int maxProduct = findMaxProduct(grid, pos); printf(\"The maximum product of four adjacent numbers is %d.\\n\", maxProduct); printf(\"The positions of these numbers are:\\n\"); for (int i = 0; i < 8; i += 2) { printf(\"(%d, %d)\\n\", pos[i], pos[i + 1]); } return 0; } What if we consider numbers that arenβt in the same direction?. The only rule is every number has to be adjacent to one other number at least instead only being adjacent to one other number. Like the answer itself could be 4 numbers that are all adjacent to each other like a square 2 by 2 matrix? Now we have to search for a lot more things. Manually finding shapes in python def is_in_bounds(x, y, rows, cols): return 0 <= x < rows and 0 <= y < cols def generate_combinations(x, y): return [ [(x, y), (x+1, y), (x+2, y), (x+3, y)], # Horizontal line [(x, y), (x, y+1), (x, y+2), (x, y+3)], # Vertical line [(x, y), (x+1, y+1), (x+2, y+2), (x+3, y+3)], # Diagonal down-right [(x, y), (x-1, y+1), (x-2, y+2), (x-3, y+3)], # Diagonal down-left [(x, y), (x+1, y), (x, y+1), (x+1, y+1)], # 2x2 square [(x, y), (x+1, y), (x+2, y), (x+2, y+1)], # L-shape 1 [(x, y), (x, y+1), (x, y+2), (x+1, y+2)], # L-shape 2 [(x, y), (x+1, y), (x+2, y), (x, y+1)], # L-shape 3 [(x, y), (x, y+1), (x, y+2), (x+1, y)], # L-shape 4 [(x, y), (x+1, y), (x+2, y), (x+1, y+1)], # T-shape 1 [(x, y), (x, y+1), (x, y+2), (x+1, y+1)], # T-shape 2 [(x, y), (x+1, y), (x+1, y+1), (x+1, y+2)], # T-shape 3 [(x, y), (x, y+1), (x-1, y+1), (x+1, y+1)] # T-shape 4 ] def calculate_product(grid, combination): product = 1 for x, y in combination: product *= grid[x][y] return product def find_max_product(grid): max_product = 0 max_combination = [] rows = len(grid) cols = len(grid[0]) for i in range(rows): for j in range(cols): combinations = generate_combinations(i, j) valid_combinations = [comb for comb in combinations if all(is_in_bounds(x, y, rows, cols) for x, y in comb)] for comb in valid_combinations: product = calculate_product(grid, comb) print(f\"Combination: {comb} => Values: {[grid[x][y] for x, y in comb]} => Product: {product}\") if product > max_product: max_product = product max_combination = comb print(f\"Max product combination: {max_combination} => Product: {max_product}\") return max_product # Example usage grid = [[int(x) for x in line.split()] for line in \"\"\" 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 67 26 20 68 02 62 12 
20 95 63 94 39 63 08 40 91 66 49 94 21 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 \"\"\".strip().split('\\n')] print(find_max_product(grid)) But if the problem asks for n=5; I will need to manually find more shapes on pen and paper. Is there another way to find all possible \"n\" number combinations(where at leas one number is adjacent to another one) . Do we have a formula for this? where I plug in n and it will tell me how many possible shapes are there?" | Given any connected shape with n tiles, you can: Find the tiles with the the smallest y coordinate, and choose the one of those with the smallest x coordinate. This is the "starting tile" Traverse the remaining tiles using BFS. As you consider the neighboring cells of each tile, keep track of which ones you examine, mark them seen, and only examine cells you haven't seen before. When you examine a new (unseen) cell: If there's a tile in the cell, then output 1 and add the tile to your BFS queue; Otherwise write a 0. This procedure will produce a unique binary code for each tile. If you want to find all possible tiles, you can use a recursive backtracking procedure that finds all shapes corresponding to possible outputs of this procedure: # generate all shapes with n tiles def genTiles(n): if (n<1): return [] alldirs=[] # generate a list of all directions for x in range(-1,2): for y in range(-1,2): if x!=0 or y!=0: alldirs.append((x,y)) # we'll collect all the outputs here ret=[] # BFS q q = [(0,0)] # cells we've seen seen = dict() seen[(0,0)] = True def expand(qpos, nleft, index): if nleft < 1: ret.append(q[:]) return # find the next unseen cell index -= 1 while True: index += 1 if index >= len(alldirs): qpos += 1 index = 0 if qpos >= len(q): return test = ( q[qpos][0] + alldirs[index][0], q[qpos][1] + alldirs[index][1] ) if test[1] < 0 or (test[1] == 0 and test[0] <= 0): continue if seen.get(test) != True: break seen[test] = True # choose q.append(test) expand(qpos, nleft-1, index+1) # unchoose q.pop() expand(qpos, nleft, index+1) seen[test] = False expand(0,n-1,0) return ret Add this test to print out all 110 shapes with 4 tiles. # test N = 4 shapes = genTiles(N) print("there are {0} shapes with {1} tiles:".format(len(shapes), N)) for tile in shapes: print(tile) As you can see, allowing diagonal connections adds a lot of possibilities. 
Even for 3 tiles there are 20 shapes: there are 20 shapes with 3 tiles: [(0, 0), (-1, 1), (0, 1)] [(0, 0), (-1, 1), (1, 0)] [(0, 0), (-1, 1), (1, 1)] [(0, 0), (-1, 1), (-2, 1)] [(0, 0), (-1, 1), (-2, 2)] [(0, 0), (-1, 1), (-1, 2)] [(0, 0), (-1, 1), (0, 2)] [(0, 0), (0, 1), (1, 0)] [(0, 0), (0, 1), (1, 1)] [(0, 0), (0, 1), (-1, 2)] [(0, 0), (0, 1), (0, 2)] [(0, 0), (0, 1), (1, 2)] [(0, 0), (1, 0), (1, 1)] [(0, 0), (1, 0), (2, 0)] [(0, 0), (1, 0), (2, 1)] [(0, 0), (1, 1), (0, 2)] [(0, 0), (1, 1), (1, 2)] [(0, 0), (1, 1), (2, 0)] [(0, 0), (1, 1), (2, 1)] [(0, 0), (1, 1), (2, 2)] | 2 | 0 |
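The answer above describes a BFS-based binary encoding that gives each connected shape a unique code, but genTiles does not expose it as a separate routine. Here is a minimal sketch of that encoding, assuming the 8-direction adjacency and the (x, y) tile tuples used by genTiles:
# encode a connected set of (x, y) tiles following the procedure described above:
# start at the tile with the smallest y (then smallest x), BFS over the 8 neighbours,
# and emit a 1/0 bit for every previously unseen cell that gets examined
def canonical_code(tiles):
    tileset = set(tiles)
    start = min(tileset, key=lambda t: (t[1], t[0]))
    dirs = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    seen = {start}
    queue = [start]
    bits = []
    qi = 0
    while qi < len(queue):
        x, y = queue[qi]
        qi += 1
        for dx, dy in dirs:
            cell = (x + dx, y + dy)
            if cell in seen:
                continue
            seen.add(cell)
            if cell in tileset:
                bits.append('1')
                queue.append(cell)
            else:
                bits.append('0')
    return ''.join(bits)

# e.g. canonical_code([(0, 0), (1, 1), (2, 2)]) returns one code for that diagonal shape
Because the code only depends on relative positions, it can be used to deduplicate shapes found at different places in the grid.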
78,887,911 | 2024-8-19 | https://stackoverflow.com/questions/78887911/is-there-an-efficient-way-to-sum-over-numpy-splits | I would like to split array into chunks, sum the values in each chunk, and return the result as another array. The chunks can have different sizes. This can be naively done by using numpy split function like this def split_sum(a: np.ndarray, breakpoints: np.ndarray) -> np.ndarray: return np.array([np.sum(subarr) for subarr in np.split(a, breakpoints)]) However, this still uses a python for-loop and is thus inefficient for large arrays. Is there a faster way? | You wouldn't really split an array in numpy unless this is the last step. Numpy can handle your operation natively with numpy.add.reduceat (a minor difference with your function is how the breakpoints are defined, you will need to prepend 0 with reduceat): arr = np.arange(20) breakpoints = np.array([2, 5, 10, 12]) def split_sum(a: np.ndarray, breakpoints: np.ndarray) -> np.ndarray: return np.array([np.sum(subarr) for subarr in np.split(a, breakpoints)]) split_sum(arr, breakpoints) # array([ 1, 9, 35, 21, 124]) np.add.reduceat(arr, np.r_[0, breakpoints]) # array([ 1, 9, 35, 21, 124]) | 3 | 5 |
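A quick sanity check of the reduceat approach against the original split_sum, plus a rough timing comparison; this is an illustrative sketch with arbitrary sizes, not part of the original answer:
import timeit
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(1_000_000)
breakpoints = np.sort(rng.choice(np.arange(1, a.size), size=1000, replace=False))

def split_sum(a, breakpoints):
    # loop-based version from the question
    return np.array([np.sum(s) for s in np.split(a, breakpoints)])

fast = np.add.reduceat(a, np.r_[0, breakpoints])   # note the prepended 0
assert np.allclose(split_sum(a, breakpoints), fast)

print(timeit.timeit(lambda: split_sum(a, breakpoints), number=10))
print(timeit.timeit(lambda: np.add.reduceat(a, np.r_[0, breakpoints]), number=10))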
78,886,152 | 2024-8-19 | https://stackoverflow.com/questions/78886152/how-can-pandas-loc-take-three-arguments | I am looking at someone's code and this is what they wrote: from financetoolkit import Toolkit API_KEY = "FINANCIAL_MODELING_PREP_API_KEY" companies = Toolkit(["AAPL", "MSFT", "GOOGL", "AMZN"], api_key=API_KEY, start_date="2005-01-01") income_statement_growth = companies.get_income_statement(growth=True) display(income_statement_growth.loc[:, "Revenue", :]) Essentially what this code does is return the revenue value for a couple of companies starting from 2005 until the present day. What I am confused about is income_statement_growth.loc[:, "Revenue", :]: why are there three arguments? I don't understand what the third colon is doing in this code. All the documentation I read about .loc states that it takes two arguments, one for the row and one for the column, so I am a bit confused about how it is able to take three and what the function of the third colon is. | The return value of companies.get_income_statement(growth=True) is a pandas DataFrame with a multi-index. The columns are indexed by period ('2019', '2020', etc.) and the rows are indexed by a combination of company ticker and data item (e.g. ('AAPL', 'Revenue')). You could access a single element like this: print(income_statement_growth['2020'][('AAPL', 'Revenue')]) And to select the 'Revenue' for all tickers and all periods, you use .loc: revenues = income_statement_growth.loc[:, 'Revenue', :] For a simple dataframe, you would normally see two arguments for .loc[] but since this one has a multi-index, it needs three arguments. | 2 | 2
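A small self-contained sketch (made-up numbers, not financetoolkit output) that mimics the structure the answer describes, with a (ticker, item) row MultiIndex and period columns, and shows the explicit selections that the three-argument shorthand is reported to resolve to:
import pandas as pd

idx = pd.MultiIndex.from_product(
    [["AAPL", "MSFT"], ["Revenue", "Gross Profit"]], names=["ticker", "item"]
)
df = pd.DataFrame(
    [[0.05, 0.08], [0.02, 0.03], [0.10, 0.12], [0.04, 0.06]],
    index=idx, columns=["2019", "2020"]
)

# explicit form: all tickers, only the 'Revenue' rows, all period columns
print(df.loc[(slice(None), "Revenue"), :])

# equivalent cross-section on the 'item' level of the row index
print(df.xs("Revenue", level="item"))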
78,885,636 | 2024-8-18 | https://stackoverflow.com/questions/78885636/optimize-identification-of-quantiles-along-array-columns | I have an array A (of size m x n) and a fraction p in [0,1]. I need to produce an m x n boolean array B, with True in the (i,j) entry if A[i,j] is at or above the p-th quantile of the column A[:,j]. Here is the code I have used so far. import numpy as np m = 200 n = 300 A = np.random.rand(m, n) p = 0.3 quant_levels = np.zeros(n) for i in range(n): quant_levels[i] = np.quantile(A[:,i],p) B = np.array(A >= quant_levels) | I'm not sure it's much faster but you should at least be aware that numpy.quantile has an axis keyword argument so you can compute all the quantiles with one command: quant_levels = np.quantile(A, p, axis=0) B = (A >= quant_levels) | 2 | 3
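A tiny check, not from the original answer, confirming the axis-based call matches the loop from the question and that the comparison broadcasts as intended:
import numpy as np

m, n, p = 200, 300, 0.3
A = np.random.rand(m, n)

# loop version from the question
quant_loop = np.array([np.quantile(A[:, i], p) for i in range(n)])

# vectorised version from the answer: one quantile per column
quant_vec = np.quantile(A, p, axis=0)

assert np.allclose(quant_loop, quant_vec)
B = A >= quant_vec          # (m, n) >= (n,) broadcasts row-wise
assert B.shape == (m, n)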
78,873,119 | 2024-8-14 | https://stackoverflow.com/questions/78873119/how-to-prevent-ruff-formatter-from-adding-a-newline-after-module-level-docstring | I'm using ruff as a replacement to black formatter but I wanna keep the diff at minimum. I'm noticing that it automatically inserts a newline between the module-level docstring and the first import statement. For example, given this code: """base api extension.""" import abc from typing import List, Optional, Type After running: ruff format file.py --diff It gives me this: @@ -1,4 +1,5 @@ """base api extension.""" + import abc from typing import List, Optional, Type If I format the file, the output is like this: """base api extension.""" import abc from typing import List, Optional, Type I wanna keep the original formatting without adding that newline. I couldn't find any settings I could use to ignore this. Is there a way to configure ruff to prevent this behaviour? Thank you! My pyproject.toml before adding ruff: [tool.black] line_length = 120 include = '\.py$' [tool.isort] multi_line_output = 3 include_trailing_comma = true force_grid_wrap = 0 line_length = 120 profile = "black" After adding ruff: [tool.ruff] line-length = 120 Context: Python: 3.11.9 Ruff Version: 0.5.7 | If you update your black version you'll find the same issue. This formatting change is internally called module_docstring_newlines: Black version 24.1.0 (26 Jan 2024) introduced the new "2024 stable style" which includes this change (which was previously only in "preview style") - see Black's changelogs (look for #3932). Ruff version 0.3.0 (29 Feb 2024) thus chose to also introduce these changes in their 2024.2 stable style - see Ruff's changelogs (look for #8283). This comment on the Ruff pull request for this change confirms that there is unfortunately no config option to prevent this behaviour. If you read through the linked issues and pull request discussions for Black and Ruff you'll see it mentioned in both cases that the formatting change often affects lots of files, but (ignoring this large caveat) it was otherwise considered uncontroversial. | 2 | 2 |
78,884,251 | 2024-8-18 | https://stackoverflow.com/questions/78884251/unable-to-install-wordnet-with-nltk-3-9-0-as-importing-nltk-requires-installed-w | It is not possible to import nltk, and the solution given by the output required me to import nltk: >>>import nltk Traceback (most recent call last): File "D:\project\Lib\site-packages\nltk\corpus\util.py", line 84, in __load root = nltk.data.find(f"{self.subdir}/{zip_name}") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\project\Lib\site-packages\nltk\data.py", line 579, in find raise LookupError(resource_not_found LookupError: ********************************************************************** Resource wordnet not found. Please use the NLTK Downloader to obtain the resource: >>> import nltk >>> nltk.download('wordnet') For more information see: https://www.nltk.org/data.html Attempted to load corpora/wordnet.zip/wordnet/ Searched in: - 'C:\\Users\\me/nltk_data' - 'D:\\project\\nltk_data' - 'D:\\project\\share\\nltk_data' - 'D:\\project\\lib\\nltk_data' - 'C:\\Users\\me\\AppData\\Roaming\\nltk_data' - 'C:\\nltk_data' - 'D:\\nltk_data' - 'E:\\nltk_data' ********************************************************************** Basically - I cannot import nltk because wordnet is missing, but in order to download wordnet, I have to import nltk which I cannot, because wordnet is missing. Noteworthy is, that it throws this exception twice, but with a different traceback - During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\project\Lib\site-packages\nltk\__init__.py", line 156, in <module> from nltk.stem import * File "D:\project\Lib\site-packages\nltk\stem\__init__.py", line 34, in <module> from nltk.stem.wordnet import WordNetLemmatizer File "D:\project\Lib\site-packages\nltk\stem\wordnet.py", line 13, in <module> class WordNetLemmatizer: File "D:project\Lib\site-packages\nltk\stem\wordnet.py", line 48, in WordNetLemmatizer morphy = wn.morphy ^^^^^^^^^ File "D:\project\Lib\site-packages\nltk\corpus\util.py", line 120, in __getattr__ self.__load() File "D:\project\Lib\site-packages\nltk\corpus\util.py", line 86, in __load raise e File "D:\project\Lib\site-packages\nltk\corpus\util.py", line 81, in __load root = nltk.data.find(f"{self.subdir}/{self.__name}") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\project\Lib\site-packages\nltk\data.py", line 579, in find raise LookupError(resource_not_found) LookupError: ********************************************************************** Resource wordnet not found. Please use the NLTK Downloader to obtain the resource: >>> import nltk >>> nltk.download('wordnet') For more information see: https://www.nltk.org/data.html Attempted to load corpora/wordnet Searched in: - 'C:\\Users\\me/nltk_data' - 'D:\\project\\nltk_data' - 'D:\\project\\share\\nltk_data' - 'D:\\project\\lib\\nltk_data' - 'C:\\Users\\me\\AppData\\Roaming\\nltk_data' - 'C:\\nltk_data' - 'D:\\nltk_data' - 'E:\\nltk_data' ********************************************************************** What is the suggested solution in this case? | This bug was introduced in nltk 3.9.0 (released on 18 August 2024) and is a known issue. It was fixed in 3.9.1: python3 -m pip install nltk~=3.9.1 The most recent full release prior to 3.9.x was nltk 3.8.1. However, be aware that this version is vulnerable to remote code execution. python3 -m pip install nltk==3.8.1 | 5 | 9 |
78,880,609 | 2024-8-16 | https://stackoverflow.com/questions/78880609/polars-gives-exception-for-empty-json-but-pandas-works | While reading JSON from files, polars raises an exception when the JSON contains an empty object, but pandas does not. I have some files with only a key and no value. Is polars still less mature than pandas here? Which package should I use? Polars from io import StringIO import polars as pl print(pl.read_json(StringIO('{"key":{}}'))) gives exception pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("a StructArray must contain at least one field")) Pandas from io import StringIO import pandas as pd print(pd.read_json(StringIO('{"key":{}}'))) works without exception, gives output as Empty DataFrame Columns: [key] Index: [] | Pandas is mature and Polars is a comparatively new library. There is an existing issue that is the same as yours; refer to this bug report: Polars Issue Link. You can either use exception handling with polars for this scenario or go with pandas. Update: The issue is closed now and it is working in Polars | 3 | 0
78,883,445 | 2024-8-17 | https://stackoverflow.com/questions/78883445/find-absolute-difference-value-between-elements-using-just-numpy-array-operation | a = np.array([101,105,90,102,90,10,50]) b = np.array([99,110,85,110,85,90,60]) expected result = np.array([2,5,5,8,5,20,10]) How can I find the element-wise minimum absolute difference using just numpy operations, wrapping around modulo 100 when the two values fall on opposite sides of a multiple of 100? | The answer by Derek Roberts with numpy.minimum is almost correct. However, since the input values can be greater than 100, they should first be reduced to the 0-100 range with %100 (mod). I'm adding an extra pair of values to demonstrate this: a = np.array([101,105,90,102,90,10,50,1001]) b = np.array([99,110,85,110,85,90,60,2]) x = abs(a%100-b%100) np.minimum(x, 100-x) Generic computation: M = 100 x = abs(a%M-b%M) np.minimum(x, M-x) Output: array([ 2, 5, 5, 8, 5, 20, 10, 1]) | 3 | 2
78,877,560 | 2024-8-16 | https://stackoverflow.com/questions/78877560/liboqs-python-throws-attributeerror-module-oqs-has-no-attribute-get-enabled | I'm trying to get this Open Quantum Safe example working: https://github.com/open-quantum-safe/liboqs-python/blob/main/examples/kem.py I'm getting this error: (myenv) user@mx:~ $ python3 '/home/user/Documents/Dev/quantum_algo_tests/# Key encapsulation Python example.py' Enabled KEM mechanisms: Traceback (most recent call last): File "/home/user/Documents/Dev/quantum_algo_tests/# Key encapsulation Python example.py", line 9, in <module> kems = oqs.get_enabled_kem_mechanisms() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'oqs' has no attribute 'get_enabled_kem_mechanisms' (myenv) user@mx:~ $ Running inspect to list functions I get: ExpressionInput FunctionNode OQSInterpreter oqs_engine I tried a basic example from following the documentation: import oqs from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes import os import base64 # Message to be encrypted message = "Hello world!" # Step 1: Key Encapsulation using Kyber kemalg = "Kyber512" with oqs.KeyEncapsulation(kemalg) as client: # Client generates keypair public_key = client.generate_keypair() # Server (could be another party) encapsulates secret with client's public key with oqs.KeyEncapsulation(kemalg) as server: ciphertext, shared_secret_server = server.encap_secret(public_key) # Client decapsulates to get the same shared secret shared_secret_client = client.decap_secret(ciphertext) # The shared secret is now the same for both client and server and will be used as the encryption key assert shared_secret_client == shared_secret_server, "Shared secrets do not match!" # Step 2: Encrypt the message using AES-GCM with the shared secret iv = os.urandom(12) cipher = Cipher(algorithms.AES(shared_secret_client), modes.GCM(iv), backend=None) encryptor = cipher.encryptor() ciphertext = encryptor.update(message.encode()) + encryptor.finalize() # The tag ensures the integrity of the message tag = encryptor.tag # Combine the IV, ciphertext, and tag into a single encrypted package encrypted_message = base64.b64encode(iv + ciphertext + tag) print(f"Encrypted message: {encrypted_message.decode()}") # Step 3: Decryption process (using the shared secret derived from Kyber) # Decrypt the message decryptor = Cipher(algorithms.AES(shared_secret_server), modes.GCM(iv, tag), backend=None).decryptor() decrypted_message = decryptor.update(ciphertext) + decryptor.finalize() print(f"Decrypted message: {decrypted_message.decode()}") I get the same error: $ python3 /home/user/Documents/Dev/quantum_algo_tests/hello_world_example.py Traceback (most recent call last): File "/home/user/Documents/Dev/quantum_algo_tests/hello_world_example.py", line 11, in <module> with oqs.KeyEncapsulation(kemalg) as client: ^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'oqs' has no attribute 'KeyEncapsulation' Why is such a simple program throwing this error? | Chances are, liboqs just wasn't installed correctly. The GitHub page gives several installation steps, some of which are contradictory and could break your install. (When I tried following the instructions from top to bottom, I got a version conflict due to incompatible installs of oqs and liboqs-python.) The safest option is to exclusively follow the "Let liboqs-python install liboqs automatically" section and ignore everything preceding and following it. 
Just activate your desired venv, then paste the following lines into your terminal: git clone --depth=1 https://github.com/open-quantum-safe/liboqs-python cd liboqs-python pip install . | 3 | 2 |
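After reinstalling this way, a short sanity check can confirm the module exposes the attributes the examples in the question rely on. This is my own sketch reusing only the calls from the official kem.py example, and it assumes Kyber512 is enabled in your build:
import oqs

# should print a list of KEM names instead of raising AttributeError
print(oqs.get_enabled_kem_mechanisms()[:5])

# encapsulate and decapsulate with a single object (it holds the secret key,
# and encap_secret only needs the public key), as a quick round-trip test
with oqs.KeyEncapsulation("Kyber512") as kem:
    public_key = kem.generate_keypair()
    ciphertext, secret = kem.encap_secret(public_key)
    assert secret == kem.decap_secret(ciphertext)
print("liboqs-python install looks healthy")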
78,882,910 | 2024-8-17 | https://stackoverflow.com/questions/78882910/complicated-nested-integrals-in-terms-of-python-scipy-integrate | I want to write the following complicated function using python scipy.integrate: where \phi and \Phi are the pdf and cdf of the standard normal respectively and the sum i != 1,2 is over the thetas in *theta, and the product s != 1,i,2 is over the thetas in *theta that is not theta_i To make things simpler, I break down the problem into a few easier pieces by defining a few more functions: And here is what I have tried: import scipy.integrate as integrate from scipy.integrate import nquad from scipy.stats import norm import math import numpy as np def f(v, u, t1, ti, t2, *theta): prod = 1 for t in theta: prod = prod * (1 - norm.cdf(u + t2 - t)) return norm.pdf(v) * norm.cdf(v + ti - t1) * prod def g(v, u, t1, t2, *theta): S = 0 for ti in theta: S = S + integrate.quad(f, -np.inf, u + t2 - ti, args=(u, t1, ti, t2, *theta))[0] return S * norm.pdf(u) def P(t1, t2, *theta): return integrate.quad(g, -np.inf, np.inf, args=(t1, t2, *theta))[0] But it doens't work because I think in the definition of P the integral is integrating w.r.t. the first variable, namely v but we should be integrating u but I don't know how we can do that. Is there any quick way to do it? For checking, here is what the correct P would produce: P(0.2, 0.1, 0.3, 0.4) = 0.08856347190679764 P(0.2, 0.1, 0.4, 0.3, 0.5) = 0.06094233268837703 | The first thing I did was move prod outside of f since it isn't a function of v. I realized that your prod includes the ti term, so I added a check for ts (renamed from t) equal to ti. You also have way too many variables for your functions. g is a function of u and the thetas and f is a function of v and the thetas, so they don't both need u and v. This isn't the most efficient code, but it works. from scipy.integrate import quad from scipy.stats import norm import numpy as np def f(v, t1, ti): return norm.pdf(v) * norm.cdf(v + ti - t1) def g(u, t1, t2, *theta): S = 0 for ti in theta: prod = 1 for ts in theta: if ts != ti: prod *= (1 - norm.cdf(u + t2 - ts)) S += prod*quad(f, -np.inf, u + t2 - ti, args=(t1, ti))[0] return S * norm.pdf(u) def P(t1, t2, *theta): return quad(g, -np.inf, np.inf, args=(t1, t2, *theta))[0] print(P(0.2, 0.1, 0.3, 0.4)) # 0.08856347187084088 print(P(0.2, 0.1, 0.4, 0.3, 0.5)) # 0.060942332693157915 | 2 | 1 |
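The answer notes its code "isn't the most efficient". One possible speed-up, shown here as a sketch with the same math, is to replace the scipy.stats.norm calls (which carry per-call overhead) with scipy.special.ndtr and an explicit Gaussian pdf:
import numpy as np
from scipy.integrate import quad
from scipy.special import ndtr   # standard normal CDF, cheaper than norm.cdf

INV_SQRT_2PI = 1.0 / np.sqrt(2.0 * np.pi)

def phi(v):
    # standard normal pdf
    return INV_SQRT_2PI * np.exp(-0.5 * v * v)

def f(v, t1, ti):
    return phi(v) * ndtr(v + ti - t1)

def g(u, t1, t2, *theta):
    S = 0.0
    for ti in theta:
        prod = 1.0
        for ts in theta:
            if ts != ti:
                prod *= 1.0 - ndtr(u + t2 - ts)
        S += prod * quad(f, -np.inf, u + t2 - ti, args=(t1, ti))[0]
    return S * phi(u)

def P(t1, t2, *theta):
    return quad(g, -np.inf, np.inf, args=(t1, t2, *theta))[0]

print(P(0.2, 0.1, 0.3, 0.4))        # should be about 0.0886, as in the question
print(P(0.2, 0.1, 0.4, 0.3, 0.5))   # should be about 0.0609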
78,881,821 | 2024-8-17 | https://stackoverflow.com/questions/78881821/how-to-implement-enum-with-mutable-members-comparable-and-hashable-by-their-ind | I am writing a class to represent sequential stages of an industrial process, composed of ADMISSION, PROCESSING, QC and DELIVERY stages. Each stage has a unique, progressive sequence number, a mnemonic name and a field keeping track of the number of instances going through it: @dataclass class Stage: seq_id: int name: str n_instances: int Since stages are well-known and not supposed to change during execution, I decided to gather them in an Enum. I need said enum to have the following requirements: enum members need to: be subclasses of Stage , in order to avoid accessing their value and making them easier to use (akin to IntEnum or StrEnum). In particular: be comparable by their seq_id (e.g. Stages.DELIVERY > Stages.PROCESSING is true) be usable as dictionary keys use name as their __str__ representation. have immutable, sequential seq_ids from 0 to n based on their declaration order name is specified at member declaration. Using auto() results in the lower-cased member name I managed to address point 2 and 3 in my implementation (see below). How can I implement point 1 (and its subpoints)? Final implementation (fixed thanks to @EthanFurman's answer) The idea is to use Stage instances as the enum members, and use their seq_ids as member values. seq_id is no longer in the Stage class as it is stored directly in the enum member's _value_; I made this choice because I deem seq_ids not to have meaning outside the enum. seq_id cannot be specified at member declaration. Instead, it is automatically generated based on declaration order (thanks to auto-numbering enum pattern). This prevents invalid seq_ids from being specified dunder methods __lt__ , __eq__, __str__ and __hash__ have been implemented inside the enum rather than the Stage class because their behavior is tied to the use of the enum members Hashing is based on the member's _value_, which is supposedly immutable (hashing Stage would've been tricky due to its mutability) @dataclass class Stage: label: str n_instances: int = 0 @total_ordering class Stages(Stage, Enum): #req no. 1 (members are subclasses of stage) #req no. 1.1 def __lt__(self, other): if self.__class__ is other.__class__: return self.value < other.value return NotImplemented def __eq__(self, other): if self.__class__ is other.__class__: return self.value == other.value return NotImplemented #req no. 1.2 def __hash__(self): return hash(self.value) #req no. 1.3 def __str__(self): return self.label #req no. 2 def __new__(cls, label): #auto numbering enum pattern for seq_ids value = len(cls.__members__) + 1 obj = Stage.__new__(cls) #enum value set to seq_id obj._value_ = value return obj #req no. 3 @override def _generate_next_value_(name, start, count, last_values): return name.lower() ADMISSION = auto() PROCESSING = auto() QC = auto() DELIVERY = auto() #ordering test assert(Stages.PROCESSING.__lt__(Stages.DELIVERY)) #dictionary key test stage_to_color = { Stages.ADMISSION : "#B10156", Stages.PROCESSING : "#F4B704", Stages.QC : "#FD0002", Stages.DELIVERY : "#7FB857" } assert(stage_to_color[Stages.QC] == "#FD0002") | Both dataclass and Enum do a lot of work to make things simple for the user -- when you start extending and/or combining them you need to be careful. 
Working code: from dataclasses import dataclass from enum import Enum, auto, unique from functools import total_ordering from typing import override @dataclass class Stage: seq_id: int label: str # name and value are reserved by Enum n_instances: int = 0 # req 1.2 def __hash__(self): return hash(self.seq_id) @total_ordering #req no. 1 (members are subclasses of stage) class Stages(Stage, Enum): #req no. 1.1 (ordering) def __lt__(self, other): if self.__class__ is other.__class__: return self.seq_id < other.seq_id return NotImplemented def __eq__(self, other): if self.__class__ is other.__class__: return self.seq_id == other.seq_id return NotImplemented #req no. 1.3 def __str__(self): return self.label #req nos. 2 & 3 @override def _generate_next_value_(name, start, count, last_values): return count, name.lower() ADMISSION = auto() PROCESSING = auto() QC = auto() DELIVERY = auto() | 2 | 1 |
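A short usage check, mirroring the tests in the question, assuming the Stages class is defined exactly as in the working code above:
# ordering follows declaration order via seq_id and total_ordering
assert Stages.DELIVERY > Stages.PROCESSING
assert Stages.ADMISSION < Stages.QC

# str() gives the lower-cased label generated by _generate_next_value_
assert str(Stages.QC) == "qc"

# members are still Stage instances, so the mutable counter field works
Stages.QC.n_instances += 1
assert Stages.QC.n_instances == 1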
78,882,187 | 2024-8-17 | https://stackoverflow.com/questions/78882187/dataframe-set-all-in-group-with-value-that-occurs-first-in-corresponding-multi-c | I have multiple A* columns with corresponding B* columns (A and B have the corresponding numbers at the end of the column names). When the REFNO value = A# value and the 'MNGR' is not BOB, I need to put the value from corresponding B# column into the βAGEβ column and the corresponding # number in the 'FLAG' column. The next step is to set the same AGE and FLAG values that occurred first in the group to all employees in that group (group by MNGR, YEAR). The A* and B* columns can be more than 10 columns. But I got this error: Expected a 1D array, got an array with shape (8, 2). df = pd.DataFrame({ 'EMPLID': [12, 13, 14, 15, 16, 17, 18, 19], 'MNGR': ['BOB', 'JIM', 'RHONDA', 'RHONDA', 'JIM', 'RHONDA', 'RHONDA', 'BOB'], 'YEAR': [2012, 2013, 2012, 2012, 2012, 2013, 2012, 2012], 'REFNO': [2, 3, 4, 4, 5, 6, 4, 2], 'A1': [1,3,2,4,5,4,3,1], 'A2': [2,4,4,5,7,5,4,2], 'A3': [3,5,8,6,8,6,5,3], 'B1': [21,31,41,44,51,61,71,81], 'B2': [22,32,42,45,52,62,72,82], 'B3': [23,33,43,46,53,63,73,83] }) for i in(1,3): df['AGE', 'FLAG'] = df.loc[(df['REFNO'] == df[f'A{i}']) & (df['MNGR'] != 'BOB'), [f'B{i}','i']] df2 = df.groupby(['MNGR', 'YEAR']).first() The next step is to fill every record in the group with the same AGE and FLAG that occurs first (this can be done by regular merging of df and df2). Expected output: EMPLID MNGR YEAR REFNO A1 A2 A3 B1 B2 B3 AGE FLAG 0 12 BOB 2012 2 1 2 3 21 22 23 0 0 1 13 JIM 2013 3 3 4 5 31 32 33 31 1 2 14 RHONDA 2012 4 2 4 8 41 42 43 42 2 3 15 RHONDA 2012 4 4 5 6 44 45 46 42 2 4 16 JIM 2012 5 5 7 8 51 52 53 51 1 5 17 RHONDA 2013 6 4 5 6 61 62 63 63 3 6 18 RHONDA 2012 4 3 4 5 71 72 73 42 2 7 19 BOB 2012 2 1 2 3 81 82 83 0 0 | Without the need for a loop: m = df.filter(regex=r'^A\d').eq(df['REFNO'], axis=0).values df['AGE'] = df.filter(regex=r'^B\d').where(m).max(axis=1).astype(int) df['FLAG'] = m.argmax(axis=1) + 1 df[['AGE', 'FLAG']] = (df.groupby(['MNGR', 'YEAR'])[['AGE', 'FLAG']] .transform('first') .where(df['MNGR'].ne('BOB'), 0) ) Output: EMPLID MNGR YEAR REFNO A1 A2 A3 B1 B2 B3 AGE FLAG 0 12 BOB 2012 2 1 2 3 21 22 23 0 0 1 13 JIM 2013 3 3 4 5 31 32 33 31 1 2 14 RHONDA 2012 4 2 4 8 41 42 43 42 2 3 15 RHONDA 2012 4 4 5 6 44 45 46 42 2 4 16 JIM 2012 5 5 7 8 51 52 53 51 1 5 17 RHONDA 2013 6 4 5 6 61 62 63 63 3 6 18 RHONDA 2012 4 3 4 5 71 72 73 42 2 7 19 BOB 2012 2 1 2 3 81 82 83 0 0 Explanation / Intermediates Create an np.ndarray boolean mask (m), comparing all 'A' columns with 'REFNO' using df.filter + df.eq on axis=0 + df.values. # m array([[False, True, False], [ True, False, False], [False, True, False], [ True, False, False], [ True, False, False], [False, False, True], [False, True, False], [False, True, False]]) For 'AGE', take all 'B' columns and keep only the values where m is True (df.where), extracting them per row with df.max on axis=1. (Add Series.astype to restore integers.) # df.filter(regex=r'^B\d').where(m) B1 B2 B3 0 NaN 22.0 NaN # < extract `22.0` with max for `'AGE'` 1 31.0 NaN NaN # < extract `31.0` with max for `'AGE'` 2 NaN 42.0 NaN 3 44.0 NaN NaN 4 51.0 NaN NaN 5 NaN NaN 63.0 6 NaN 72.0 NaN 7 NaN 82.0 NaN For 'FLAG', apply ndarray.argmax to m on axis=1 to get indices for True, adding 1 since indices will start at 0. 
Finally, use df.groupby on ['MNGR', 'YEAR'] + groupby.transform + groupby.first to propagate first matches for all duplicates, then applying df.where to reset rows for df['MNGR'] == "BOB" to 0. N.B. The above method assumes that both your 'A' and 'B' columns are ordered (i.e. ['A1', 'A2', ...] and ['B1', 'B2', ...]). | 2 | 2 |
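The N.B. above assumes the A* and B* columns are ordered. A small hedged variation, using the df from the question, that sorts the column pairs by their numeric suffix first so the mask and the argmax-based FLAG stay aligned even if the DataFrame columns are shuffled:
# sort A-columns by their numeric suffix, then build the matching B-column list
a_cols = sorted(df.filter(regex=r'^A\d').columns, key=lambda c: int(c[1:]))
b_cols = [f'B{c[1:]}' for c in a_cols]

m = df[a_cols].eq(df['REFNO'], axis=0).to_numpy()
df['AGE'] = df[b_cols].where(m).max(axis=1).astype(int)
df['FLAG'] = m.argmax(axis=1) + 1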
78,882,352 | 2024-8-17 | https://stackoverflow.com/questions/78882352/plotting-a-timeseries-as-bar-plot-with-pandas-results-in-an-incorrect-year | I have the following dataframe (except my actual data is over 25 years): import pandas as pd df = pd.DataFrame( dict( date=pd.date_range(start="2020-01-01", end="2020-12-31", freq="MS"), data=[1,2,3,4,5,6,7,8,9,10,11,12] ), ) df Output: date data 0 2020-01-01 1 1 2020-02-01 2 2 2020-03-01 3 3 2020-04-01 4 4 2020-05-01 5 5 2020-06-01 6 6 2020-07-01 7 7 2020-08-01 8 8 2020-09-01 9 9 2020-10-01 10 10 2020-11-01 11 11 2020-12-01 12 And I get different results with matplotlib and pandas default plotting: import matplotlib as mpl import matplotlib.dates as mdates import matplotlib.pyplot as plt fig = mpl.figure.Figure(constrained_layout=True) axs = fig.subplot_mosaic("ac;bd") ax = axs["a"] ax.bar(x="date", height="data", data=df, width=15) ax = axs["b"] ax.bar(x="date", height="data", data=df, width=15) locator = mdates.AutoDateLocator(minticks=12, maxticks=24) formatter = mdates.ConciseDateFormatter(locator) ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(formatter) ax = axs["c"] df.plot.bar(x="date", y="data", ax=ax, legend=False) ax = axs["d"] df.plot.bar(x="date", y="data", ax=ax, legend=False, ) # incorrect year -> 1970 instead of 2020 locator = mdates.AutoDateLocator(minticks=12, maxticks=24) formatter = mdates.ConciseDateFormatter(locator) ax.xaxis.set_major_locator(locator) ax.xaxis.set_major_formatter(formatter) for k, ax in axs.items(): for label in ax.get_xticklabels(): label.set_rotation(40) label.set_horizontalalignment('right') fig Output: I would like to be able to use pandas for plotting but then format the ticks appropriately for a publication ready plot. However, it appears that I lose the date time information or get the incorrect year when using pandas. Is there a way to format the axis ticklabels using mdates features without using the data directly? i.e. if I resample the data, or slice in a different year, I'd like the axis to reflect that automatically. Here's a more simple illustration of the issue I'm having: import matplotlib as mpl import matplotlib.dates as mdates import matplotlib.pyplot as plt fig = mpl.figure.Figure(constrained_layout=True) axs = fig.subplot_mosaic("a") ax = axs["a"] df.plot.bar(x="date", y="data", ax=ax, legend=False) # incorrect year -> 1970 instead of 2020 formatter = mdates.DateFormatter("%Y - %b") ax.xaxis.set_major_formatter(formatter) fig The dates are all wrong when using DateFormatter. | When you are using a bar plot, the x-coordinates become 0, 1, 2, 3, etc. That's why mdates.DateFormatter returns 1970, as it treats these coordinates as seconds since epoch time. You can set the tick labels manually: ax.set_xticklabels(df["date"].dt.strftime("%Y - %b")) | 3 | 1 |
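If you want the mdates locators and formatters to keep working, an alternative to relabelling is to draw the bars with matplotlib directly, as the question's working panels already do, so the x-axis stays in real date units; matplotlib stores dates as days since its epoch, which is why integer positions 0, 1, 2 render as early 1970. A sketch assuming the df from the question:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

fig, ax = plt.subplots()
ax.bar(df["date"], df["data"], width=15)        # real datetime x-values, width in days
locator = mdates.AutoDateLocator(minticks=12, maxticks=24)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(mdates.ConciseDateFormatter(locator))
fig.autofmt_xdate()
plt.show()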
78,880,721 | 2024-8-16 | https://stackoverflow.com/questions/78880721/iterate-over-groups-created-using-group-by-on-date-column | Update: This was fixed by pull/18251 I am new to Polars. I want to iterate over the groups created by grouping over the column where each cell of that column contains a list of two dates. I used the following (sample) piece of code to achieve and it used to work fine with polars==0.20.18 version: import polars as pl import datetime dt_str = [{'ts': datetime.date(2023, 7, 1), 'files': 'AGG_202307.xlsx', 'period_bins': [datetime.date(2023, 7, 1), datetime.date(2024, 1, 1)]}, {'ts': datetime.date(2023, 8, 1), 'files': 'AGG_202308.xlsx', 'period_bins': [datetime.date(2023, 7, 1), datetime.date(2024, 1, 1)]}, {'ts': datetime.date(2023, 11, 1), 'files': 'KFC_202311.xlsx', 'period_bins': [datetime.date(2023, 7, 1), datetime.date(2024, 1, 1)]}, {'ts': datetime.date(2024, 2, 1), 'files': 'KFC_202402.xlsx', 'period_bins': [datetime.date(2024, 1, 1), datetime.date(2024, 7, 1)]}] dt = pl.from_dicts(dt_str) df_groups = dt.group_by("period_bins") print(df_groups.all().to_dicts()) The above code does not work with polars==1.x and gives the following error: thread 'polars-0' panicked at crates/polars-row/src/encode.rs:289:15: not implemented: Date32 thread 'polars-1' panicked at crates/polars-row/src/encode.rs:289:15: not implemented: Date32 Traceback (most recent call last): File "testpad.py", line 18, in <module> print(df_groups.all().to_dicts()) File "python3.10/site-packages/polars/dataframe/group_by.py", line 430, in all return self.agg(F.all()) File "python3.10/site-packages/polars/dataframe/group_by.py", line 228, in agg self.df.lazy() File "python3.10/site-packages/polars/lazyframe/frame.py", line 2027, in collect return wrap_df(ldf.collect(callback)) pyo3_runtime.PanicException: not implemented: Date32 How do I fix this error? | You could group by the .hash() (or cast) as a workaround. (df.group_by(pl.col("period_bins").hash().alias("key")) .all() ) shape: (2, 4) βββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββ β key β ts β files β period_bins β β --- β --- β --- β --- β β u64 β list[date] β list[str] β list[list[date]] β βββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββ‘ β 6836989170623494942 β [2023-07-01, 2023-08-01, 2023-β¦ β ["AGG_202307.xlsx", "AGG_20230β¦ β [[2023-07-01, 2024-01-01], [20β¦ β β 2692156858231355433 β [2024-02-01] β ["KFC_202402.xlsx"] β [[2024-01-01, 2024-07-01]] β βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββ | 5 | 1 |
78,880,945 | 2024-8-16 | https://stackoverflow.com/questions/78880945/weighted-sum-on-multiple-dataframes | I have several dataframes with an ID, a time of day, and a number. I would like to weight each dataframe number and then sum them for each id/time of day. As an example: weighted 0.2 ID TOD M 0 10 morning 1 1 13 afternoon 3 2 32 evening 2 3 10 evening 2 weighted 0.4 ID TOD W 0 10 morning 1 1 13 morning 3 2 32 afternoon 2 3 10 evening 3 weighted sum: ID TOD weighed_sum_mw 0 10 morning (0.2*1 + 0.4*1) 1 10 evening (0.2*2 + 0.4*3) 2 13 morning (0.4*3) 3 13 afternoon (0.4*2) 3 32 evening (0.2*2) 4 32 afternoon (0.4*2) The following strategy works but is very memory consuming and I'm not sure if there's a way to do this without merging them. I also eventually only need the row with the time of day that has the largest sum for each ID so if that simplifies the process that works as well! (Tie breakers for equal max weighted sums will keep Afternoon, then Evening, then Morning first). I currently do this with 4 dataframes but may add more and they are approximately 10M rows each merged_oc= pd.merge(dfs[0], dfs[3], on=['ID', 'TIME_OF_DAY'], suffixes=('_O', '_C'), how='outer') merged_s = pd.merge(dfs[1], dfs[2], on=['ID', 'TIME_OF_DAY'], suffixes=('_W', 'M'), how='outer') # merge and weighted sum of O and C merged_oc['COUNTS_O_weighted_02']= merged_oc['COUNTS_O'].fillna(0).multiply(0.2) merged_oc['COUNTS_C_weighted_04'] = merged_oc['COUNTS_C'].fillna(0).multiply(0.4) merged_oc['COUNTS'] = merged_oc['COUNTS_O_weighted_02'] + merged_oc['COUNTS_C_weighted_04'] result_oc = merged_oc[['ID', 'TIME_OF_DAY', 'COUNTS', 'COUNTS_O_weighted_02', 'COUNTS_C_weighted_04']] merged_s['COUNTS_W_weighted_04'] = merged_s['COUNTS_W'].fillna(0).multiply(0.4) merged_s['COUNTS_M_weighted_04'] = merged_s['COUNTS_M'].fillna(0).multiply(0.4) merged_s['COUNTS'] = merged_s['COUNTS_W_weighted_04'] + merged_s['COUNTS_M_weighted_04'] result_s = merged_s[['ID', 'TIME_OF_DAY', 'COUNTS', 'COUNTS_W_weighted_04', 'COUNTS_M_weighted_04']] merged_final = pd.merge(result_oc, result_s, on=['ID', 'TIME_OF_DAY'], suffixes=('_OC', '_S'), how='outer') merged_final['COUNTS_OC']= merged_final['COUNTS_OC'].fillna(0) merged_final['COUNTS_S'] = merged_final['COUNTS_S'].fillna(0) merged_final['WEIGHTED_SUM'] = merged_final['COUNTS_OC'] + merged_final['COUNTS_SESSION'] merged_final = merged_final[['ID', 'TIME_OF_DAY', 'WEIGHTED_SUM', 'COUNTS_O_weighted_02', 'COUNTS_C_weighted_04', 'COUNTS_W_weighted_04', 'COUNTS_M_weighted_04']].fillna(0) | IIUC, you can try pd.concat the dataframes after you set index and multiply by your weights for each dataframe, then use groupby and sum: df_out = pd.concat([df1_2.set_index(['ID', 'TOD']).mul(.2), df2_4.set_index(['ID', 'TOD']).mul(.4)])\ .sum(axis=1)\ .groupby(level=[0,1])\ .sum()\ .reset_index() df_out Output: ID TOD 0 0 10 evening 1.6 1 10 morning 0.6 2 13 afternoon 0.6 3 13 morning 1.2 4 32 afternoon 0.8 5 32 evening 0.4 | 2 | 5 |
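The question also asks to keep, per ID, only the time of day with the largest weighted sum, breaking ties in the order Afternoon, Evening, Morning. A follow-up sketch built on df_out from the answer; note its sum column is literally named 0 there, renamed here for clarity:
w = df_out.rename(columns={0: "wsum"})
tie_order = {"afternoon": 0, "evening": 1, "morning": 2}   # preferred order on ties
best = (
    w.assign(prio=w["TOD"].map(tie_order))
     .sort_values(["ID", "wsum", "prio"], ascending=[True, False, True])
     .groupby("ID")
     .head(1)                 # first row per ID after sorting = largest sum, tie-broken
     .drop(columns="prio")
)
print(best)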
78,877,618 | 2024-8-16 | https://stackoverflow.com/questions/78877618/polars-struct-fieldliststr-returns-a-single-column-when-dealing-with-listst | Some of my columns in my Polars Dataframe have the dtype pl.List(pl.Struct). I'm trying to replace these columns so that I get multiple columns that are lists of scalar values. Here's an example of a column I'm trying to change: import polars as pl df = pl.DataFrame({ "column_0": [ [{"field_1": "a", "field_2": 1}, {"field_1": "b", "field_2":2}], [{"field_1": "c", "field_2":3}] ] }) col_name = "column_0" df.select( pl.col(col_name).list.eval( pl.element().struct.unnest() ) ) My expectation was that I'll get something like this: shape: (2, 2) ββββββββββββββ¬ββββββββββββ β field_1 β field_2 β β --- β --- β β list[str] β list[i64] β ββββββββββββββͺββββββββββββ‘ β ["a", "b"] β [1, 2] β β ["c"] β [3] β ββββββββββββββ΄ββββββββββββ Instead, I only get the last field (in this case, 'field_2'): shape: (2, 1) βββββββββββββ β column_0 β β --- β β list[i64] β βββββββββββββ‘ β [1, 2] β β [3] β βββββββββββββ | You could unpack the lists/structs with .explode() + .unnest() and group the rows back together. (df.with_row_index() .explode("column_0") .unnest("column_0") .group_by("index", maintain_order=True) .all() ) shape: (2, 3) βββββββββ¬βββββββββββββ¬ββββββββββββ β index β field_1 β field_2 β β --- β --- β --- β β u32 β list[str] β list[i64] β βββββββββͺβββββββββββββͺββββββββββββ‘ β 0 β ["a", "b"] β [1, 2] β β 1 β ["c"] β [3] β βββββββββ΄βββββββββββββ΄ββββββββββββ | 3 | 2 |
78,878,048 | 2024-8-16 | https://stackoverflow.com/questions/78878048/shaky-zoom-with-opencv-python | I want to apply zoom in and out effect on a video using opencv, but as opencv doesn't come with built in zoom, I try cropping the frame to the interpolated value's width, height, x and y and than resize back the frame to the original video size, i.e. 1920 x 1080. But when I rendered the final video, there is shakiness in the final video. I am not sure why its happening, I want a perfect smooth zoom in and out from the specific time I built a easing function that would give interpolated value for each frame for zoom in and out :- import cv2 video_path = 'inputTest.mp4' cap = cv2.VideoCapture(video_path) fps = int(cap.get(cv2.CAP_PROP_FPS)) fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter('output_video.mp4', fourcc, fps, (1920, 1080)) initialZoomValue={ 'initialZoomWidth': 1920, 'initialZoomHeight': 1080, 'initialZoomX': 0, 'initialZoomY': 0 } desiredValues = { 'zoomWidth': 1672, 'zoomHeight': 941, 'zoomX': 200, 'zoomY': 0 } def ease_out_quart(t): return 1 - (1 - t) ** 4 async def zoomInInterpolation(initialZoomValue, desiredZoom, start, end, index): t = (index - start) / (end - start) eased_t = ease_out_quart(t) interpolatedWidth = round(initialZoomValue['initialZoomWidth'] + eased_t * (desiredZoom['zoomWidth']['width'] - initialZoomValue['initialZoomWidth']), 2) interpolatedHeight = round(initialZoomValue['initialZoomHeight'] + eased_t * (desiredZoom['zoomHeight'] - initialZoomValue['initialZoomHeight']), 2) interpolatedX = round(initialZoomValue['initialZoomX'] + eased_t * (desiredZoom['zoomX'] - initialZoomValue['initialZoomX']), 2) interpolatedY = round(initialZoomValue['initialZoomY'] + eased_t * (desiredZoom['zoomY'] - initialZoomValue['initialZoomY']), 2) return {'interpolatedWidth': int(interpolatedWidth), 'interpolatedHeight': int(interpolatedHeight), 'interpolatedX': int(interpolatedX), 'interpolatedY': int(interpolatedY)} def generate_frame(): while cap.isOpened(): code, frame = cap.read() if code: yield frame else: print("bailsdfing") break for i, frame in enumerate(generate_frame()): if i >= 1 and i <= 60: interpolatedValues = zoomInInterpolation(initialZoomValue, desiredValues, 1, 60, i) crop = frame[interpolatedValues['interpolatedY']:(interpolatedValues['interpolatedHeight'] + interpolatedValues['interpolatedY']), interpolatedValues['interpolatedX']:(interpolatedValues['interpolatedWidth'] + interpolatedValues['interpolatedX'])] zoomedFrame = cv2.resize(crop,(1920, 1080), interpolation = cv2.INTER_CUBIC) out.write(zoomedFrame) # Release the video capture and close windows cap.release() cv2.destroyAllWindows() But the final video I get is shaking :- Final Video I want the video to be perfectly zoom in and out, don't want to any shakiness Here is graph of the interpolated values :- This is a graph if I don't round the number too early and return the integer value only :- As OpenCV would only accept whole numbers for cropping, its not possible to return the values from the interpolation function in decimals | First, let's look at why your approach jitters. Then I'll show you an alternative that doesn't jitter. In your approach, you zoom by first cropping the image, and then resizing it. That cropping only happens by whole pixel rows/columns, not in finer steps. You saw that especially well near the end of the ease, where the image is zoomed very finely. 
The cropped image would change width/height by less than a pixel per frame, so it only changes every couple of frames. The jerkiness would worsen the closer you zoom because then a pixel becomes larger. Instead of cropping like this, calculate and apply a transform matrix for every frame. This involves warpAffine() or warpPerspective(). Considering the source image to be a texture, these functions run over every destination pixel, use the transform matrix to calculate the point in the source image, then sample that in the source image, with some interpolation mode. The compound transform is calculated from three primitive transforms: move a particular point (the "anchor") of the image to the origin scale around origin (the anchor) move to where it should be in the video frame In the code, I build this transform in this sequence. You could also write that in a single expression as T = translate2(*+zoom_center) @ scale2(s=z) @ translate2(*-anchor). Yes, the operations expressed by these matrices are applied from right to left. import numpy as np import cv2 as cv from tqdm import tqdm # remove that if you don't like it # Those two functions generate simple translation and scaling matrices: def translate2(tx=0, ty=0): T = np.eye(3) T[0:2, 2] = [tx, ty] return T def scale2(s=1, sx=1, sy=1): T = np.diag([s*sx, s*sy, 1]) return T # you know this one already def ease_out_quart(alpha): return 1 - (1 - alpha) ** 4 # some constants to describe the zoom im = cv.imread(cv.samples.findFile("starry_night.jpg")) (imheight, imwidth) = im.shape[:2] (output_width, output_height) = (1280, 720) fps = 60 duration = 5.0 # secs # "anchor": somewhere in the image anchor = np.array([ (imwidth-1) * 0.75, (imheight-1) * 0.75 ]) # position: somewhere in the frame zoom_center = np.array([ (output_width-1) * 0.75, (output_height-1) * 0.75 ]) zoom_t_start, zoom_t_end = 1.0, 4.0 zoom_z_start, zoom_z_end = 1.0, 10.0 # calculates the matrix: def calculate_transform(timestamp): alpha = (timestamp - zoom_t_start) / (zoom_t_end - zoom_t_start) alpha = np.clip(alpha, 0, 1) alpha = ease_out_quart(alpha) z = zoom_z_start + alpha * (zoom_z_end - zoom_z_start) T = translate2(*-anchor) T = scale2(s=z) @ T T = translate2(*+zoom_center) @ T return T # applies the matrix: def animation_callback(timestamp, canvas): T = calculate_transform(timestamp) cv.warpPerspective( src=im, M=T, dsize=(output_width, output_height), dst=canvas, # drawing over the same buffer repeatedly flags=cv.INTER_LANCZOS4, # or INTER_LINEAR, INTER_NEAREST, ... ) # generate the video writer = cv.VideoWriter( filename="output.avi", # AVI container: OpenCV built-in fourcc=cv.VideoWriter_fourcc(*"MJPG"), # MJPEG codec: OpenCV built-in fps=fps, frameSize=(output_width, output_height), isColor=True ) assert writer.isOpened() canvas = np.zeros((output_height, output_width, 3), dtype=np.uint8) timestamps = np.arange(0, duration * fps) / fps try: for timestamp in tqdm(timestamps): animation_callback(timestamp, canvas) writer.write(canvas) cv.imshow("frame", canvas) key = cv.waitKey(1) if key in (13, 27): break finally: cv.destroyWindow("frame") writer.release() print("done") Here are several result videos. https://imgur.com/a/mDfrpre One of them is a very close zoom with nearest neighbor interpolation, so you can see the pixels clearly. As you can see, the image isn't "cropped" to whole pixels. You can see pieces of pixels at the edges. For fun I also made one video with multiple animation segments (pause, in, pause, out, pause). 
That merely needs some logic in calculate_transform to figure out what time segment you're in. zoom_keyframes = [ # (time, zoom) (0.0, 15.0), (1.0, 15.0), (2.0, 16.0), (3.0, 16.0), (4.0, 15.0), (5.0, 15.0), ] def calculate_transform(timestamp): i0 = i1 = 0 for i, (tq, _) in enumerate(zoom_keyframes): if tq <= timestamp: i0 = i if timestamp <= tq: i1 = i break if i1 == i0: i1 = i0 + 1 # print(f"i0 {i0}, i1 {i1}") zoom_ta, zoom_za = zoom_keyframes[i0] zoom_tb, zoom_zb = zoom_keyframes[i1] alpha = (timestamp - zoom_ta) / (zoom_tb - zoom_ta) alpha = np.clip(alpha, 0, 1) alpha = ease_out_quart(alpha) z = zoom_za + alpha * (zoom_zb - zoom_za) T = translate2(*-anchor) T = scale2(s=z) @ T T = translate2(*+zoom_center) @ T return T | 3 | 4 |
78,876,878 | 2024-8-15 | https://stackoverflow.com/questions/78876878/system-identification-using-an-arx-model-with-gekko | The following is related to this question: Why doesn't the Gekko solver adapt to variations in the system? What I still don't understand is why, sometimes, when the outside temperature rises, the inside temperature remains constant. Normally, it should also increase since beta remains constant. Here are examples of this: In the two images, we can see that on the second day, the outside temperature increases and Ξ² remains constant, but the inside temperature decreases. I checked my data and found that it had been taken every 2 seconds, which represents 43,200 measurements per day. However, during the system identification with the ARX model, I had set the sampling time to 300 sec. I ran the corrected MPC code with 2 sec sampling from my previous question. # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array import matplotlib.pyplot as plt # import data tx = pd.read_csv(r'C:\Users\mpc\tx_b.csv') tz = pd.read_csv(r'C:\Users\mpc\tz.csv') data = pd.concat([tx,tz],axis=1) data_1 = data[0:8596800] data_2 = data[8596800:] #%% Initialize Model m = GEKKO(remote=False) # system identification ts = 2 t = np.arange(0,len(data_1)*ts, ts) u_id = data_1[['Tx_i','beta_i']] y_id = data_1[['Tz_i']] #meas : the time-series next step is predicted from prior measurements as in ARX na=10; nb=10 # ARX coefficients print('Identify model') start = time.time() yp,p,K = m.sysid(t,u_id,y_id,na,nb,objf=100,scale=False,diaglevel=0,pred='meas') print('temps de prediction :'+str(time.time()-start)+'s') #%% parametres K = array([[ 0.93688819, -12.22410568]]) p = {'a': array([[ 1.08945931], [-0.00243571], [-0.00247112], [-0.00273341], [-0.00296342], [-0.00319516], [-0.00343794], [-0.00366398], [-0.00394255], [-0.06661506]]), 'b': array([[[-0.05134201, -0.01035174], [ 0.00170311, -0.01551259], [ 0.00172715, -0.01178932], [ 0.00178147, -0.01051817], [ 0.00184694, -0.00821511], [ 0.00192371, -0.00570574], [ 0.00201409, -0.00344425], [ 0.00210016, -0.0014708 ], [ 0.00222189, 0.00021622], [ 0.03789636, 0.04235503]]]), 'c': array([0.0266222])} #%% I used the last day's external temperature data as a disturbance. 
T_externel = data_2[["Tx_i"]].values m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*2,2) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound m.beta.MV_STEP_HOR = 1 m.beta.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 17.5 # set point high level m.Tb.SPLO = 16.5 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 20 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 3 m.solve(disp=False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL for i in range(43200): print(i) else: # Solution failed beta = 0.0 # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(3,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{int}$') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.legend(loc=1); plt.grid() plt.ylabel('Tin (Β°C)') plt.subplot(3,1,2) plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.ylabel('Tex (Β°C)') plt.subplot(3,1,3) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.savefig('results7.png',dpi=300) plt.show() For the external temperature over one day, I took one day's worth of data (data_2) and concatenated it with itself to create a profile for two consecutive days. I haven't solved the problem yet. | There is a small error where m.d is redefined and the connection is broken with the m.arx() model. Here is a correction: # distrubance and parametres #m.d = m.Param(T_externel[0]) m.d.value = T_externel[0] There is also a dynamics issue. The response to beta (and likely external temperature) is very slow as shown in the step response here: This could be from using a different sampling time interval for the SysID (System Identification) and the controller. This is especially the case if you used a short time interval for system identification and use a longer time interval for the MPC application. Because it is a discrete ARX model, the dynamics will take longer for the inputs to affect the outputs. Response with Additional Information Thanks for confirming that the identification was performed with 2 second data and the MPC application with 300 second time intervals. This is the source of one of the issues. One method is to use 2 sec for both (as you have shown) or else use 300 sec for both. I recommend going to 300 sec intervals for both because of the slow dynamics and need to calculate over a long time horizon. There is a way to reduce the system identification data to 300 second intervals with a rolling average and sampling every 150 data points with davg.iloc[::150]. 
Here are the first 500 points of the identification. # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array import matplotlib.pyplot as plt # import data tx = pd.read_csv('tx_b.csv') tz = pd.read_csv('tz.csv') data = pd.concat([tx,tz],axis=1) # reduce from 2 sec to 5 min intervals # calculate rolling average davg = data.rolling(window=150).mean() # sample every 150 cycles (2 sec to 5 min) dnew = davg.iloc[::150] dnew.dropna(inplace=True) slice = int(len(dnew)*0.99) data_1 = dnew.iloc[:slice].copy() # 99% for training data_2 = dnew.iloc[slice:].copy() # 1% for testing #%% Initialize Model m = GEKKO(remote=False) # system identification ts = 300 t = np.arange(0,len(data_1)*ts, ts) u_id = data_1[['Tx_i','beta_i']] y_id = data_1[['Tz_i']] #meas : the time-series next step is predicted from prior measurements as in ARX na=10; nb=10 # ARX coefficients print('Identify model') start = time.time() yp,p,K = m.sysid(t,u_id,y_id,na,nb,objf=100,scale=False,diaglevel=0,pred='meas') print('temps de prediction :'+str(time.time()-start)+'s') # new parameters print(f'Parameters: {p}') # new gains print(f'Gains: {K}') # Plot the results nview = 500 plt.figure(figsize=(8,3.5)) plt.subplot(3,1,1) plt.plot(t[0:nview],data_1[['Tz_i']].iloc[0:nview],'r-',label=r'$T_{meas}$') plt.plot(t[0:nview],yp[0:nview],'b--',label=r'$T_{pred}$') plt.legend(loc=1); plt.grid() plt.ylabel('Tin (Β°C)') plt.subplot(3,1,2) plt.plot(t[0:nview],data_1['Tx_i'].iloc[0:nview],'g:',label=r'$T_{ext}$') plt.ylabel('Tex (Β°C)') plt.legend(loc=1); plt.grid() plt.subplot(3,1,3) plt.step(t[0:nview],data_1['beta_i'].iloc[0:nview],'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.savefig('results_id.png',dpi=300) plt.show() The new gains look good: Gains: [[ 0.97529892 -14.28674666]] The new parameters are: Parameters: {'a': array([[ 0.65059382], [-0.03027903], [ 0.06080624], [-0.02233488], [ 0.06877417], [ 0.37139369], [-0.22535397], [-0.01737772], [-0.03430238], [ 0.02420281]]), 'b': array([[[ 3.71568256e-01, -8.09407185e+00], [-7.62375332e-02, 4.97505860e+00], [-1.34657379e-01, -1.56110911e+00], [ 7.32255512e-02, 1.89452386e+00], [-2.50583697e-02, -1.68565102e+00], [-2.06554465e-01, 2.03970469e+00], [ 1.50625153e-01, 8.79190526e-01], [-9.20974935e-03, -1.34505749e-01], [ 1.58409954e-03, -6.32745151e-01], [ 4.79076587e-03, 1.21199721e-01]]]), 'c': array([2.08698176])} Here is a step response with T_externel. 
# Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array import matplotlib.pyplot as plt #%% parameters p = {'a': array([[ 0.65059382], [-0.03027903], [ 0.06080624], [-0.02233488], [ 0.06877417], [ 0.37139369], [-0.22535397], [-0.01737772], [-0.03430238], [ 0.02420281]]), 'b': array([[[ 3.71568256e-01, -8.09407185e+00], [-7.62375332e-02, 4.97505860e+00], [-1.34657379e-01, -1.56110911e+00], [ 7.32255512e-02, 1.89452386e+00], [-2.50583697e-02, -1.68565102e+00], [-2.06554465e-01, 2.03970469e+00], [ 1.50625153e-01, 8.79190526e-01], [-9.20974935e-03, -1.34505749e-01], [ 1.58409954e-03, -6.32745151e-01], [ 4.79076587e-03, 1.21199721e-01]]]), 'c': array([2.08698176])} m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d.value = 0 m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC n = 120 # 10 hours at 5 min sampling rate T_externel = np.zeros(n) T_externel[10:] = 15 m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 0 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound m.beta.MV_STEP_HOR = 1 m.beta.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 17.5 # set point high level m.Tb.SPLO = 16.5 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 20 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 3 m.solve(disp=True) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(3,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{int}$') plt.legend(loc=1); plt.grid() plt.ylabel('Tin (Β°C)') plt.subplot(3,1,2) plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.ylabel('Tex (Β°C)') plt.legend(loc=1); plt.grid() plt.subplot(3,1,3) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.tight_layout() plt.savefig('results_step.png',dpi=300) plt.show() Here is the final controller with a sample sinusoidal profile for the external temperature. 
# Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array import matplotlib.pyplot as plt total_minutes_in_day = 24 * 60 interval = 5 # minutes num_points = total_minutes_in_day // interval time_points = np.linspace(0, total_minutes_in_day, num_points) # Generate the time points T_externel = 8.5 + 11.5 * np.sin((2 * np.pi / total_minutes_in_day) * time_points - (np.pi / 2)) #%% parameters p = {'a': array([[ 0.65059382], [-0.03027903], [ 0.06080624], [-0.02233488], [ 0.06877417], [ 0.37139369], [-0.22535397], [-0.01737772], [-0.03430238], [ 0.02420281]]), 'b': array([[[ 3.71568256e-01, -8.09407185e+00], [-7.62375332e-02, 4.97505860e+00], [-1.34657379e-01, -1.56110911e+00], [ 7.32255512e-02, 1.89452386e+00], [-2.50583697e-02, -1.68565102e+00], [-2.06554465e-01, 2.03970469e+00], [ 1.50625153e-01, 8.79190526e-01], [-9.20974935e-03, -1.34505749e-01], [ 1.58409954e-03, -6.32745151e-01], [ 4.79076587e-03, 1.21199721e-01]]]), 'c': array([2.08698176])} m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # disturbance m.d = m.u[0] # disturbance and parameters m.d.value = 0 m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = time_points # 5 min (300 s) sample time # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DCOST = 1e-4 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound m.beta.MV_STEP_HOR = 1 m.beta.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 19 # set point high level m.Tb.SPLO = 17 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 20 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 3 m.solve(disp=True) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(3,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{int}$') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.legend(loc=1); plt.grid() plt.ylabel('Tin (°C)') plt.subplot(3,1,2) plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.ylabel('Tex (°C)') plt.legend(loc=1); plt.grid() plt.subplot(3,1,3) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.tight_layout() plt.savefig('results_mpc.png',dpi=300) plt.show() | 3 | 2
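The three listings above each solve a single horizon. As a quick sketch of how the identified ARX model would typically be used in a receding-horizon loop (re-anchoring the bias to each new measurement and applying only the first move), here is a minimal outline; `read_zone_temperature()` and `apply_beta()` are hypothetical stand-ins for real plant I/O, not part of the original code.

```python
def run_closed_loop(m, read_zone_temperature, apply_beta, n_cycles=12):
    """Hedged sketch: assumes `m` is the GEKKO model configured above (IMODE=6)."""
    beta_history = []
    for _ in range(n_cycles):
        # 1) feedback: re-anchor the model output to the latest measurement
        T_meas = read_zone_temperature()
        m.bias.value = T_meas - m.T.value[0]
        m.Tb.value = T_meas
        # (a fresh forecast for m.d.value would normally be loaded here as well)
        # 2) solve the MPC problem over the horizon
        m.solve(disp=False)
        # 3) apply only the first optimized move, then repeat at the next sample
        beta_now = m.beta.NEWVAL if m.options.APPSTATUS == 1 else 0.0
        apply_beta(beta_now)
        beta_history.append(beta_now)
    return beta_history
```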
78,879,657 | 2024-8-16 | https://stackoverflow.com/questions/78879657/selecting-rows-that-compose-largest-group-within-another-group-in-a-pandas-dat | Main problem I'm trying to find a way to select the rows that make up the largest sub-group inside another group in a Pandas DataFrame and I'm having a bit of a hard time. Visual example (code below) Here is a sample dataset to help explain exactly what it is I'm trying to do. There is code below to recreate this dataset yourself, if you'd like. Suppose I want to group this table by Col1 and figure out which unique value of Col2 has the most rows (within each group of Col1). Further, I don't want to just know which group is the largest - I want to find a way to select the rows from the original DataFrame that match that description. So, in this case, we can easily see that, for Col1=="Group A", the value of Col2 with most rows is "Type 3", and for Col1=="Group B", the value of Col2 with most rows is "Type 6". That means that I want to select the rows with RowID in [10005, 10006, 10007, 10008, 10009, 10010, 10011, 10012, 10013, 10014, 10015]. Therefore, the output I'm looking for would be the following: My clumsy attempt I found a solution, but it is tremendously convoluted. Here is a step-by-step explanation of what I did: Step 1: First, for each group of Col1, I want to tally up the number of rows that exist for each unique value of Col2. That's quite easy, I can just do a simple groupby(['Col1','Col2']) and see the size of each grouping. Here is how that looks: Notice that for Col1=="Group A", Col2=="Type 1" has 2 observations, Col2=="Type 2" has 2 observations, and Col2=="Type 3" has 6 observations - as expected from our original data. Step 2: This is trickier: for each group of Col1, I want to find the value of Col2 that has the biggest number of rows from Step 1. Here is how that looks: Notice that we only see the cases with "max rows". Step 3: Lastly, I want to filter the original data to show ONLY the rows that fit that specific grouping: the ones found in Step 2. 
Reproducible example and code Here's some code to illustrate my example: # Importing the relevant library import pandas as pd # Creating my small reproducible example my_df = pd.DataFrame({'RowID':[10001,10002,10003,10004,10005,10006,10007,10008,10009,10010,10011,10012,10013,10014,10015,10016,10017,10018,10019,10020], 'Col1':['Group A','Group A','Group A','Group A','Group A','Group A','Group A','Group A','Group A','Group A','Group B','Group B','Group B','Group B','Group B','Group B','Group B','Group B','Group B','Group B'], 'Col2':['Type 1','Type 1','Type 2','Type 2','Type 3','Type 3','Type 3','Type 3','Type 3','Type 3','Type 6','Type 6','Type 6','Type 6','Type 6','Type 3','Type 3','Type 2','Type 2','Type 2'], 'Col3':[100,200,300,400,500,600,700,800,900,1000,2000,1900,1800,1700,1600,1500,1400,1300,1200,1100], 'Col4':['Alice','Bob','Carl','Dave','Earl','Fred','Greg','Henry','Iris','Jasmine','Kris','Lonnie','Manny','Norbert','Otis','Pearl','Quaid','Randy','Steve','Tana']}) # Solving Step 1: finding the unique groupings and their relative sizes temp1 = my_df.groupby(['Col1','Col2']).agg({'RowID':'count'}).reset_index() # Solving Step 2: finding which grouping is the largest temp2 = temp1.groupby(['Col1']).agg({'RowID':'max'}).reset_index() # Solving Step 3: finding which rows of the original DataFrame match what was # found in Step 2 # Step 3 Part 1: Finding the actual combination of `Col1` & `Col2` that let to # the largest number of rows temp3 = (temp1 .rename(columns={'RowID':'RowID_count'}) .merge(temp2 .rename(columns={'RowID':'RowID_max'}), how='left', on='Col1') .assign(RowID_ismax = lambda _df: _df['RowID_count']== _df['RowID_max']) .query('RowID_ismax') .drop(columns=['RowID_count','RowID_max'])) # Step 3 Part 2: Finding the matching rows in the original dataset and # filtering it down result = (my_df .merge(temp3, how='left', on=['Col1','Col2']) .assign(RowID_ismax = lambda _df: _df['RowID_ismax'].fillna(False)) .query('RowID_ismax') .reset_index(drop=True) .drop(columns=['RowID_ismax'])) The solution above is EXTREMELY convoluted, full of assign and lambda statements alongside several sequential groupbys and reset_indexs, which suggest to me I'm approaching this the wrong way. Any help on this would be greatly appreciated. | A short code to perform this could value_counts + idxmax, then merge: keep = my_df[['Col1', 'Col2']].value_counts().groupby(level='Col1').idxmax() out = my_df.merge(pd.DataFrame(keep.tolist(), columns=['Col1', 'Col2'])) Output: RowID Col1 Col2 Col3 Col4 0 10005 Group A Type 3 500 Earl 1 10006 Group A Type 3 600 Fred 2 10007 Group A Type 3 700 Greg 3 10008 Group A Type 3 800 Henry 4 10009 Group A Type 3 900 Iris 5 10010 Group A Type 3 1000 Jasmine 6 10011 Group B Type 6 2000 Kris 7 10012 Group B Type 6 1900 Lonnie 8 10013 Group B Type 6 1800 Manny 9 10014 Group B Type 6 1700 Norbert 10 10015 Group B Type 6 1600 Otis Intermediates: # my_df[['Col1', 'Col2']].value_counts() Col1 Col2 Group A Type 3 6 Group B Type 6 5 Type 2 3 Group A Type 1 2 Type 2 2 Group B Type 3 2 Name: count, dtype: int64 # my_df[['Col1', 'Col2']].value_counts().groupby(level='Col1').idxmax() Col1 Group A (Group A, Type 3) Group B (Group B, Type 6) Name: count, dtype: object Alternatively, with groupby.transform (which would select multiple groups if there are multiple ones with the maximum size): s = my_df.groupby(['Col1', 'Col2']).transform('size') out = my_df[s.groupby(my_df['Col1']).transform('max').eq(s)] | 2 | 1 |
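One point worth seeing in action is the tie behaviour mentioned at the end: the `transform('size')` variant keeps every (Col1, Col2) combination that shares the maximum count, while the `idxmax` route keeps exactly one. A toy illustration (data invented for the tie case):

```python
import pandas as pd

df = pd.DataFrame({
    "Col1": ["A", "A", "A", "A"],
    "Col2": ["x", "x", "y", "y"],   # two sub-groups of equal size
    "Val":  [1, 2, 3, 4],
})

s = df.groupby(["Col1", "Col2"])["Val"].transform("size")
print(df[s.groupby(df["Col1"]).transform("max").eq(s)])  # keeps both x and y rows

counts = df[["Col1", "Col2"]].value_counts()
print(counts.groupby(level="Col1").idxmax())             # returns a single winner
```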
78,879,293 | 2024-8-16 | https://stackoverflow.com/questions/78879293/define-an-unused-new-symbol-inside-function-scope | I find difficult to really encapsulate a lot of logic in a reusable manner when writing sympy code I would like to write a function in the following way: def compute_integral(fn): s = symbols_new() # do not apply lexicographic identity, this symbol identity is unique return integrate( fn(s), (s, 0, 1)) If I just use the usual symbols() to instantiate s, I risk fn having the same symbol inside its defining expression somewhere, for instance the following can occur: def compute_integral(fn): s = symbols("s") return integrate(fn(s), (s, 0, 1)) ... import sympy as sp from sympy.abc import * t = sp.sin(3 + s) f = sp.lambdify([x], x**2 + t * s * x) compute_integral(f) #< ---- symbol s is already present in the function, now the integral will produce the wrong result What I'm doing right now to minimize this issue is less than ideal: compute_integral_local_var = 0 def compute_integral(fn): global compute_integral_local_var compute_integral_local_var += 1 s = symbols("local_{}".format(compute_integral_local_var)) return integrate(fn(s), (s, 0, 1)) I want to avoid these kind of collisions without having to add a special global variable. Any better approaches to achieve this type of encapsulation within the sympy API? | I'm using the latest sympy, version 1.13.2, and the code block you provided doesn't run: it raises an error. There seems to be some misconceptions that I'd like to clarify. Consider this code block: import sympy as sp from sympy.abc import s, x t = sp.sin(3 + s) print("t:", t) e = x**2 + t * s * x print("e:", e) # t: sin(s + 3) # e: s*x*sin(s + 3) + x**2 Here, I create two symbolic expressions, t and e. t contains a symbolic function, sin, and e is composed of t. You can consider e as a function, but in sympy terminology, e is a symbolic expression. Next, in your example you executed something like this: f = sp.lambdify([x], e) which creates a numerical function which will be evaluated by NumPy. Then, you attempted to integrate symbolically this numerical function, which is absolutely wrong. With SymPy, you can only perform symbolic integrations of symbolic expressions. If all you are trying to perform is a symbolic integration, then this command is sufficient: sp.integrate(e, (x, 0, 1)) # s*sin(s + 3)/2 + 1/3 If you'd like to integrate multiple symbolic expressions over the same range, then you can create a custom function: def compute_integral(expr, sym): return sp.integrate(expr, (sym, 0, 1)) print(compute_integral(e, x)) # s*sin(s + 3)/2 + 1/3 print(compute_integral(t, x)) # sin(s + 3) EDIT: Based on the little details you shared, python's lambda function and sympy's Dummy symbols might get what you want. A Dummy symbol is guaranteed to be unique: it is used when a temporary symbol is needed or when a name of a symbol is not important. def compute_integral(func): s = sp.Dummy() return sp.integrate(func(s), (s, 0, 1)) t = sp.sin(3 + s) f = lambda x: x**2 + t * s * x compute_integral(f) # s*sin(s + 3)/2 + 1/3 Please, please, please: don't use functions created with lambdify as arguments of sympy functions: it generally doesn't work. You provided the following example: from sympy.abc import r, s sp.lambdify([r], r**2)(s).diff(s) # out: 2*s You were lucky, the numerical function generated by lambdify only contains plain python operations (a multiplication, in this cae). 
Should the original symbolic expression contain other functions, like sin, cos, exp, etc..., you would get an error: sp.lambdify([r], sin(r))(s).diff(s) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) AttributeError: 'Symbol' object has no attribute 'sin' The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) Cell In[50], line 1 ----> 1 sp.lambdify([r], sin(r))(s).diff(s) File <lambdifygenerated-4>:2, in _lambdifygenerated(r) 1 def _lambdifygenerated(r): ----> 2 return sin(r) TypeError: loop of ufunc does not support argument 0 of type Symbol which has no callable sin method | 2 | 1 |
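To make the uniqueness guarantee of `Dummy` concrete, here is a small standalone check (standard sympy behaviour, nothing beyond the answer's own example is assumed):

```python
import sympy as sp

s = sp.Symbol("s")
d1, d2 = sp.Dummy("s"), sp.Dummy("s")

# Two Dummy symbols never compare equal, even with the same display name,
# and neither collides with an ordinary Symbol("s") used elsewhere.
print(d1 == d2)   # False
print(d1 == s)    # False

# So integrating over a Dummy leaves any pre-existing `s` in the integrand alone:
t = sp.sin(3 + s)
expr = d1**2 + t * s * d1
print(sp.integrate(expr, (d1, 0, 1)))   # s*sin(s + 3)/2 + 1/3
```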
78,877,275 | 2024-8-16 | https://stackoverflow.com/questions/78877275/python-class-mimic-makefile-dependency | Q: Is there a better way to do this, or is the idea itself wrong? I have a processing class that creates something with multiple construction steps, such that the next function depends on the previous ones. I want to have dependencies specified like those in a makefile, and when the dependency does not exist, construct it. I currently use a decorator to achieve it but it feels non-pythonic. Please take a look from typing import * from functools import wraps def step_method(dependency: Optional[dict[str, Callable]] = None): if dependency is None: dependency = {} def decorator(method): @wraps(method) def wrapper(self, *args, **kwargs): for attr, func in dependency.items(): if not getattr(self, attr): func(self) ret = method(self, *args, **kwargs) return ret return wrapper return decorator class StepClass: def __init__(self, base_val:int): self.base_val: int = base_val self.a = None self.b = [] self.c = None self.d = [] @step_method({}) def gen_a(self): self.a = self.base_val * 2 @step_method({'a': gen_a}) def create_b(self): self.b = [self.a] * 3 @step_method({ 'a': gen_a, 'b': create_b }) def gen_c(self): self.c = sum(self.b) * self.a @step_method({'c': gen_c}) def generate_d(self): self.d = list(range(self.c)) sc = StepClass(10) sc.base_val = 7 # allow changes before generating starts sc.b = [1, 2, 3] # allow dependency value injection sc.generate_d() print(sc.a, sc.b, sc.c, sc.d, sep='\n') I also wonder if it's possible to detect the usage of variables automatically and generate them through a prespecified dict of functions if they don't exist yet | This is a good use case of properties, with which you can generate values on demand so there's no need to build a dependency tree. As a convention the generated values are cached in attributes of names prefixed with an underscore. Since all your getter methods and setter methods are going to access attributes in a uniform way, you can programmatically create a getter and a setter method for each given calculation function when initializing a property subclass. class SettableCachedProperty(property): def __init__(self, func): def fget(instance): if (value := getattr(instance, name, None)) is None: setattr(instance, name, value := func(instance)) return value def fset(instance, value): setattr(instance, name, value) name = '_' + func.__name__ super().__init__(fget, fset) class StepClass: def __init__(self, base_val): self.base_val = base_val @SettableCachedProperty def a(self): return self.base_val * 2 @SettableCachedProperty def b(self): return [self.a] * 3 @SettableCachedProperty def c(self): return sum(self.b) * self.a so that (omitting d in your example for brevity): sc = StepClass(10) sc.base_val = 7 # allow changes before generating starts sc.b = [1, 2, 3] # allow dependency value injection print(sc.a, sc.b, sc.c, sep='\n') outputs: 14 [1, 2, 3] 84 Demo here | 2 | 2
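For comparison (not part of the answer above), the standard library's `functools.cached_property` already gives much of the same behaviour: it is a non-data descriptor, so a plain assignment stores the value in the instance dict and the getter is bypassed afterwards. A minimal sketch:

```python
from functools import cached_property

class StepClass:
    def __init__(self, base_val):
        self.base_val = base_val

    @cached_property
    def a(self):
        return self.base_val * 2

    @cached_property
    def b(self):
        return [self.a] * 3

    @cached_property
    def c(self):
        return sum(self.b) * self.a

sc = StepClass(10)
sc.base_val = 7   # changes made before anything is computed still take effect
sc.b = [1, 2, 3]  # direct injection works: the cached getter is simply skipped
print(sc.a, sc.b, sc.c, sep="\n")   # 14, [1, 2, 3], 84
```

The hand-rolled `SettableCachedProperty` is still useful if you want extra behaviour on assignment (validation, invalidating downstream steps, and so on).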
78,875,835 | 2024-8-15 | https://stackoverflow.com/questions/78875835/polars-rolling-corr-giving-weird-results | I was trying to implement rolling autocorrelation in polars, but got some weird results when there're nulls involved. The code is pretty simple. Let's say I have two dataframes df1 and df2: df1 = pl.DataFrame({'a': [1.06, 1.07, 0.93, 0.78, 0.85], 'lag_a': [1., 1.06, 1.07, 0.93, 0.78]}) df2 = pl.DataFrame({'a': [1., 1.06, 1.07, 0.93, 0.78, 0.85], 'lag_a': [None, 1., 1.06, 1.07, 0.93, 0.78]}) You can see that the only difference is that in df2, the first row for lag_a is None, because it's shifted from a. When I compute the rolling_corr for both dataframes, however, I got different results. # df1.select(pl.rolling_corr('a', 'lag_a', window_size=10, min_periods=5, ddof=1)) shape: (5, 1) ┌──────────┐ │ a │ │ --- │ │ f64 │ ╞══════════╡ │ null │ │ null │ │ null │ │ null │ │ 0.622047 │ └──────────┘ # df2.select(pl.rolling_corr('a', 'lag_a', window_size=10, min_periods=5, ddof=1)) shape: (6, 1) ┌───────────┐ │ a │ │ --- │ │ f64 │ ╞═══════════╡ │ null │ │ null │ │ null │ │ null │ │ null │ │ -0.219851 │ └───────────┘ The result from df1, i.e. 0.622047, is what I got from numpy.corrcoef as well. I wonder where the -0.219851 is coming from. | I think this is a bug in the Rust implementation of rolling_corr (in fairness, it is marked unstable in python). It looks like it naively applies rolling_mean without first applying the joint null mask. So the rolling mean of a that's used in the computation is df2.get_column("a").rolling_mean(window_size=10, min_periods=5) shape: (6,) Series: 'a' [f64] [ null null null null 0.968 0.948333 ] That's the correct rolling mean in a vacuum, but in this case the first row of df2's column a should be considered null because lag_a is null there, and so df2's rolling mean should be the same as df1's rolling mean, with an extra null up front.
78,876,898 | 2024-8-15 | https://stackoverflow.com/questions/78876898/using-the-python-protobuf-library-how-can-i-load-a-desc-or-proto-file-dynami | I've tried using a few different methods but all end up not working. I'm trying to read from a directory and iterate all the .desc files and inspect all the fields in each class. I'm trying to build a dependency tree with the outer classes being the parent nodes and the leaves being the nested classes. I'm trying to do this all dynamically based on a root folder containing the .desc files (or proto files). Nothing is statically typed. The error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf8 in position 197: 'utf-8' codec can't decode byte 0xf8 in position 197: invalid start byte in field: google.protobuf.FileDescriptorProto.name def gen1(desc_file, db=None): if not db: db = symbol_database.Default() with open(desc_file, 'rb') as fh: fds = _descriptor_pool.DescriptorPool().AddSerializedFile(fh.read()) # <- this errors generated = {} for prot in fds.file: fd = db.pool.Add(prot) for name in fd.message_types_by_name: mdesc = fd.message_types_by_name[name] Klass = reflection.MakeClass(mdesc) generated[(fd.package, name)] = db.RegisterMessage(Klass) return generated | The Protobuf hierarchy is confusing/complex. The following is a basic approach to enumerating the contents of a Protobuf descriptor file: from google.protobuf.descriptor_pb2 import FileDescriptorSet with open("descriptor.pb", 'rb') as descriptor: data = descriptor.read() fds = FileDescriptorSet.FromString(data) # file: FileDescriptorProto for file in fds.file: print(f"{file.package}/{file.name}") # service: ServiceDescriptorProto for service in file.service: print(service.name) # method: MethodDescriptorProto for method in service.method: print(method.name) # message: DescriptorProto for message_type in file.message_type: print(message_type.name) # field: FieldDescriptorProto for field in message_type.field: print(field.name) | 3 | 3 |
78,877,001 | 2024-8-15 | https://stackoverflow.com/questions/78877001/fast-index-mapping-between-two-numpy-arrays-with-duplicate-values | I'm trying to write a join operation between two Numpy arrays and was surprised to find Numpy's recfunctions.join_by doesn't handle duplicate values. The approach I'm taking is using the column to be joined and finding an index mapping between them. From looking online, majority of Numpy only solutions suffer the same problem of not being able to handle duplicates (you'll see what I mean down in the code section). I'm looking to stay entirely within the Numpy library, if at all possible to take advantage of vectorized operations, so ideally no native Python code, Pandas (for other reasons), or numpy-indexed. Below are a few questions I've looked at: A way to map one array onto another in numpy? Find index mapping between two numpy arrays Numpy: For every element in one array, find the index in another array Index mapping between two sorted partially overlapping numpy arrays For example, arrays X and Y which are to be joined using a column from each of them, x and y respectively. The mapping is defined as, and f is what I'm after mapping = f(x, y) x = y[mapping] So for example, x = np.array([1,1,2,100]) y = np.array([1,2,3,4,5,6,7]) mapping = [0, 0, 1, -] # '-' indicates masked x = y[mapping] From looking at similar questions onlinem, find the mapping from x to y there is np.where(np.isin(x,y)) which deduplicates values. There is also np.searchsorted(x,y) which doesn't handle duplicates in x at all. I'm wondering if there is something else that can be done. Below is not a correct mapping due to duplicates in x import numpy as np x = np.array([1,1,2,100]) y = np.array([1,2,3,4,5,6,7]) mapping = np.searchsorted(x, y) # [0 2 3 3 3 3 3] This is also not a correct mapping because mapping needs to be the same length as x. import numpy as np x = np.array([1,1,2,100]) y = np.array([1,2,3,4,5,6,7]) mapping = np.where(np.isin(x, y))[0] # [0, 1, 2] | Using np.isin() we can literally create a mask that shows us which values are already in the other array, when you have that you only need to figure out the indices. import numpy as np # Arrays to be joined x = np.array([1, 1, 2, 100, 4, 5, 3, 75]) y = np.array([1, 2, 3, 4, 5, 6, 7]) # Get mask with True and False values mask = np.isin(x, y) # [ True True True False True True True False] # Get indices of every element indices = np.searchsorted(y, x) # [0 0 1 7 3 4 2 7] # Match indices with mask mapping = np.where(mask, indices, -1) # [ 0 0 1 -1 3 4 2 -1] We create the mask, get the indices, and then match the indices with the mask. Values that are not in y get the value -1 This solution fully stays inside the Numpy library | 3 | 2 |
78,869,326 | 2024-8-14 | https://stackoverflow.com/questions/78869326/why-is-it-that-calling-standard-sum-on-a-numpy-array-produces-a-different-result | Observe in the following code, creating an numpy array and calling the builtin python sum function produces different results than numpy.sum How is numpy's sum function implemented? And why is the result different? test = [.1]*10 test = [np.float64(x) for x in test] test[5]= np.float64(-.9) d = [np.asarray(test) for x in range(0,60000)] sum(sum(d)) outputs np.float64(-1.7473212210461497e-08) but np.sum(d) outputs np.float64(9.987344284922983e-12) | Numpy uses pairwise summation:https://github.com/numpy/numpy/pull/3685 but python uses reduce summation. The answer is only partially related to FP inaccuracy because if I have an array of FP numbers and use the same algorithm to sum them, I should expect the same result if I sum them in the same order. | 3 | 2 |
78,873,113 | 2024-8-14 | https://stackoverflow.com/questions/78873113/different-behavior-in-custom-class-between-left-addition-right-addition-with-a-n | I am writing a class where one of the stored attributes is cast to an integer in the constuctor. I am also overloading left/right addition, where adding/subtracting an integer means to shift this attribute over by that integer. In principle, addition commutes in all contexts with my class, so as a user I would not expect there to be any difference between left and right addition. However, my code disagrees. I would naively expect addition by a numpy array to fail. The code behaves as expected for left addition, but erroneously runs with right addition! Here is an extemely minimal working example of what I mean: import numpy as np class foo: def __init__(self, val): self.val = int(val) def __add__(self, other): if isinstance(other, self.__class__): return foo(self.val + other.val) try: return foo(self.val + int(other)) except: raise ValueError(f"unsupported operand type(s) for +: {type(self).__name__} and {type(other).__name__}") __radd__ = __add__ def __repr__(self): return str(self.val) Then, when I run the following block: a = foo(6) b = np.arange(10) >>> print(f"left addition by 'a': {a + b}") I get an exception as expected. But when I run this code block, it runs just fine. >>> print(f"right addition by 'a': {b + a}") right addition by 'a': [6 7 8 9 10 11 12 13 14 15] # Adds by b element-wise It seems as though right addition is defaulting to numpy array's addition overload method, as expected from the documentation for radd (emphasis mine) These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |) with reflected (swapped) operands. These functions are only called if the left operand does not support the corresponding operation [3] and the operands are of different types. [4] For instance, to evaluate the expression x - y, where y is an instance of a class that has an __rsub__() method, type(y).__rsub__(y, x) is called if type(x).__sub__(x, y) returns NotImplemented. So I think that numpy is cleverly realizing that my own class implements addition with the dtype inside the numpy array and loops over it. For what it's worth, I don't dislike this functionality at all. I only dislike that my addition operation behaves differently between left and right ones. I'm looking for either an error in both cases (probably preferrable for simplicity) or to work in both cases. I'm not sure what the best way to unify their behavior is. Some naive ideas that come to mind (such as having the left addition call addition in the reversed order to invoke numpy's interpretation of things) seems like it might lead to some unintuitive behavior. I had hoped to get a better idea of what was happening by looking at the numpy source code for 'add' directly, but the documentation page for np.add doesn't have a link to its source code the same way others do (such as np.atleast_1d)... Are there any clean workarounds to this problem? --- As a side question, the way the addition operation is written above closely mirrors the structure of addition in my actual class, where the form looks kind of like this: def __add__(self, other): if isinstance(other, self.__class__): ... try: ... except: raise ValueError(...) My thought process was that if the two objects are the same type, I know how to handle this perfectly since I wrote the code for this class. 
But, if a user is trying to add some other type, for my purposes, all that really matters is that it has some reasonable interpretation as an integer. So I ended up with some weird mix between 'look before you leap' and 'easier to ask forgiveness than permission' coding styles where I'm not sure if it's sacrilegious or not. Is this considered bad coding style? | Numpy provides some hooks for this. In this case you probably want to implement class.__array_ufunc__() on your class. If you simply define it with None it will raise an exception: __array_ufunc__ = None Alternatively, you can actually implement something: def __array_ufunc__(self, ufunc, method, *inputs): print(f"{ufunc} called with {inputs}") return self # b + a just # prints: <ufunc 'add'> called with (array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), 6) # and returns itself | 2 | 3 |
78,872,477 | 2024-8-14 | https://stackoverflow.com/questions/78872477/pytorch-conv2d-outputs-infinity | My input x is a [1,256,60,120] shaped tensor. My Conv2d is defined as follows import torch.nn as nn conv2d = nn.Conv2d( 256, 256, kernel_size=2, stride=2, bias=False, ), Some instances I see that conv2d(x).isinf().any() is True. Note that x.max() = tensor(5140., device='cuda:0', dtype=torch.float16). x.min() = tensor(0., device='cuda:0', dtype=torch.float16), conv2d.weight.max() =tensor(1.5796, device='cuda:0') conv2d.weight.min() = tensor(-0.8045, device='cuda:0') Why do I get infinity? | This is a case of numerical overflow. Consider: import torch import torch.nn as nn # set seed torch.manual_seed(42) # random values between 0 and 5140, like your values x = (torch.rand(1, 256, 60, 120) * 5140) # create conv layer conv2d = nn.Conv2d( 256, 256, kernel_size=2, stride=2, bias=False, ) # set conv weights to a similar range to your values torch.nn.init.uniform_(conv2d.weight, a=-0.8, b=1.57) # compute output y = conv2d(x) print(y.max()) > tensor(1334073.6250, grad_fn=<MaxBackward1>) When I run the code above, I see a max output value of 1334073.6250. You are using fp16 values, which overflow at 32000. You're seeing inf values because your output values exceed the range of fp16. You should preprocess your inputs to have a more reasonable numeric range. Neural networks are very sensitive to numerical scale. Typically you want your input values to be roughly mean 0, variance 1. A max input value of 5140 is huge and likely to cause numeric issues in any situation. | 2 | 3 |
78,872,144 | 2024-8-14 | https://stackoverflow.com/questions/78872144/is-it-expected-to-have-a-epsilon-named-attribute-of-a-python-object-turned-into | If I create an attribute named Ο΅ for a Python object using a dot expression, the name is converted to Ξ΅ without any warning. Is this expected? If it is, where can I find a reference about this? Python 3.12.5 (main, Aug 7 2024, 13:49:14) [GCC 14.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> a = type('',(),{})() >>> a.Ο΅ = 100 >>> hasattr(a,'Ο΅') False >>> hasattr(a,'Ξ΅') True >>> setattr(a,'Ο΅',200) >>> hasattr(a,'Ο΅') True >>> a.Ο΅ 100 >>> | Those two are the same identifier in normal form NFKC. >>> unicodedata.normalize("NFKC", 'Ξ΅') == unicodedata.normalize("NFKC", 'Ο΅') True And as the docs say, All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. So β yes, it's expected. | 3 | 4 |
78,870,792 | 2024-8-14 | https://stackoverflow.com/questions/78870792/how-to-avoid-stubtests-symbol-is-not-present-at-runtime-error | Example Given the file ./mylib/minimalistic_repro.py class Foo: def __init__(self, param): self.param = param class Bar: def __init__(self, param): self.param = param def foobar(par1, par2): return par1.param + par2.param and its stub ./mylib/minimalistic_repro.pyi from typing import TypeAlias FooBar: TypeAlias = Foo | Bar class Foo: param: int def __init__(self, param: int) -> None: ... class Bar: param: int def __init__(self, param: int) -> None: ... def foobar(par1: FooBar, par2: FooBar) -> int: ... Note the type alias on the third line: FooBar: TypeAlias = Foo | Bar mypy --strict mylib, flake8 mylib and ruff check --select E,F,B,SIM all pass. When running stubtest however: python -m mypy.stubtest mylib I get the following error: error: mylib.minimalistic_repro.FooBar is not present at runtime β¦ My current workaround is to use an allowlist (stubtest --generate-allowlist). Question(s) β Is there a βbetterβ way to avoid this βerrorβ? / β¦ β β¦ Am I doing something fundamentally wrong? β¦ β β¦ and if not: Might this be worth a feature request? Other approaches β Of course I could declare def foobar(par1: Foo | Bar, par2: Foo | Bar), but my actual task (writing type hints for a third party pypi package) requires a union of up to 18 types. β I got the above example to run with stubtest by placing the FooBar type alias definition in a .py file (tp_aliases.py) and then reimporting. This approach failed to work in the case of with my actual pypi package type hinting task (tp_aliases.py is not part of the pypi package). | stubtest is complaining because it thinks your FooBar is a public API symbol, which might cause type checkers/IDE autocomplete to make incorrect assumptions and suggestions. The "correct" way to fix it is to make it private; that is, precede the name with an underscore: _FooBar: TypeAlias = Foo | Bar def foobar(par1: _FooBar, par2: _FooBar) -> int: ... For classes and functions, you can alternatively use typing.type_check_only: # type_check_only is not available at runtime and can only be used in stubs from typing import type_check_only @type_check_only def this_function_is_not_available_at_runtime() -> None: ... @type_check_only class AndSoDoesThisClass: ... There's also the command line option --ignore-missing-stub which will suppress all errors of this kind. | 3 | 4 |
78,871,375 | 2024-8-14 | https://stackoverflow.com/questions/78871375/selenium-screenshot-of-table-with-custom-font | I have the following Python code for loading the given page in selenium and taking a screenshot of the table. from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from PIL import Image import io import os chrome_options = Options() chrome_options.add_argument("--headless") chrome_options.add_argument("--window-size=1920x1080") chrome_options.add_argument("--force-device-scale-factor=1") driver = webdriver.Chrome(options=chrome_options) path = os.path.abspath("table_minimal.html") driver.get(f"file://{path}") table = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, "table"))) png = table.screenshot_as_png im = Image.open(io.BytesIO(png)) im.save("table.png") driver.quit() Here is the table_minimal.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <link href="https://fonts.googleapis.com/css2?family=Montserrat:wght@400;700&display=swap" rel="stylesheet"> <style> table { font-family: 'Montserrat'; font-size: 20px; } </style> </head> <body> <table><tr><td>Text in the table</td></tr></tbody></table> </body> </html> The above produces this image: If I remove the --headless flag then I get this instead: So first one does not use the custom font, the second does but detects incorrect table width. Is there some settings I am missing to make this work correctly? Preferably in headless mode, but now I am curious why it does not work in normal mode as well... | Note: Changed the font to Matemasie to make it stand out more in the screenshot. The issue is caused by the screenshot being taken before the custom font has loaded. There are some ways to wait for that to finish before continuing, eg: Selenium ChromeDriver "get" doesn't reliably load @import fonts time.sleep(2) also works on such a simple DOM :) Adding that to the code, and using table.screenshot() to take the screenshot gives us: path = os.path.abspath("table_minimal.html") driver.get(f"file://{path}") while True: script = '''return document.fonts.status;''' loaded = driver.execute_script(script) if loaded == 'loaded': break time.sleep(.5) table = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, "table"))) table.screenshot('./test3.png') driver.quit() Gives the expected result, with both --headless and without: | 2 | 2 |
78,869,863 | 2024-8-14 | https://stackoverflow.com/questions/78869863/how-fix-found-array-with-dim-4error-when-using-ml-algorthims-to-classify-image | I have a simple ML classification problem. I have 8 folders, each one representing a class, so I first load these images from the folders and assign labels, and then save it as a csv file (code below) def load_images_from_folder(root_folder): image_paths = [] images = [] labels = [] for label in os.listdir(root_folder): label_path = os.path.join(root_folder, label) if os.path.isdir(label_path): for filename in os.listdir(label_path): img_path = os.path.join(label_path, filename) if os.path.isfile(img_path) and filename.endswith(".jpg"): img = Image.open(img_path) img = img.resize((128, 128)) img_array = np.array(img) image_paths.append(img_path) images.append(img_array) labels.append(label) return image_paths, images, labels if __name__ == "__main__": root_folder_path = "./Datasets_1" image_paths, images, labels = load_images_from_folder(root_folder_path) I then convert images and labels to a DataFrame and load it data = {"Images": image_paths, "Labels": labels} df = pd.DataFrame(data) df.to_csv("original_data.csv", index=False) csv_file = "original_data.csv" df = pd.read_csv(csv_file) I also add a new column 'Encoded_Labels' to the DataFrame with the encoded labels and convert the 'Encoded_Labels' column to integers df['Encoded_Labels'] = encoded_labels df['Encoded_Labels'] = df['Encoded_Labels'].astype(int) Finally I split the dataset into training and testing sets and preprocess images for training train_df, test_df = train_test_split(df, test_size=0.2, random_state=42) def load_and_preprocess_images(file_paths, target_size=(128, 128)): images = [] for file_path in file_paths: img = Image.open(file_path) img = img.resize(target_size) img_array = np.array(img) / 255.0 # Normalize pixel values images.append(img_array) return np.array(images) X_train = load_and_preprocess_images(train_df['Images'].values) y_train = train_df['Encoded_Labels'].values X_test = load_and_preprocess_images(test_df['Images'].values) y_test = test_df['Encoded_Labels'].values The output shape of X_train is (20624, 128, 128, 3). Up to this point I have no problem and I can use it with DL models, but when I try to use ML models such as KNN, SVM, DT, etc., for example the code below from sklearn.svm import SVC svc = SVC(kernel='linear',gamma='auto') svc.fit(X_train, y_train) or knn_clf = KNeighborsClassifier() knn_clf.fit(X_train, y_train) y_pred = knn_clf.predict(X_test) accuracy = metrics.accuracy_score(y_test, y_pred) print("Accuracy of KNN Classifier : %.2f" % (accuracy*100)) I get this error ValueError: Found array with dim 4. SVC expected <= 2. How to fix this error? | In the example using sklearn.svm.SVC.fit(), the input is expected to be of shape (n_samples, n_features) (thus being 2-dimensional). In your case, each sample would be an image. To make your code technically work, you would thus need to flatten your X_train input and make each "raw" pixel value a feature, X_train_flat = X_train.reshape(X_train.shape[0], -1) which, in your example, would produce a (20624, 49152)-shaped array (as 128·128·3=49152), where each row is a flattened version of the corresponding image. 
What is often done instead of using the "raw" pixels as an input to SVM and similar classifiers, however, is using a set of "features" extracted from the images, to reduce the dimensionality of the data (i.e., in your example, using a (20624, d)-shaped array instead, where d<49152). This could be HOG features, for example, or the result of any other dimensionality reduction technique (it could even be the output of a neural network) β you might want to also have a look at this related question and its answers. | 3 | 1 |
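As a concrete starting point for the flatten-and-classify route described above, a scikit-learn pipeline keeps the reshaping and scaling explicit; the kernel and C values here are illustrative placeholders, and `X_train`/`y_train` are assumed to be the arrays built in the question.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Flatten (n_samples, 128, 128, 3) images into (n_samples, 49152) feature rows
X_train_flat = X_train.reshape(X_train.shape[0], -1)
X_test_flat = X_test.reshape(X_test.shape[0], -1)

clf = Pipeline([
    ("scale", StandardScaler()),         # raw pixel features benefit from scaling
    ("svc", SVC(kernel="rbf", C=1.0)),   # illustrative hyperparameters, tune them
])
clf.fit(X_train_flat, y_train)
print("test accuracy:", clf.score(X_test_flat, y_test))
```

Expect this to be slow on ~50k raw features per sample; the feature-extraction or dimensionality-reduction step mentioned above (HOG, PCA, or a small network) is usually what makes SVM/KNN practical on images.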
78,868,961 | 2024-8-14 | https://stackoverflow.com/questions/78868961/how-to-get-the-value-of-a-specified-index-number-from-the-sorting-of-a-column-an | import polars as pl df = pl.DataFrame( {"name": list("abcdef"), "age": [21, 31, 32, 53, 45, 26], "country": list("AABBBC")} ) df.group_by("country").agg( pl.col("name").sort_by("age").first().alias("age_sort_1"), pl.col("name").sort_by("age").get(2).alias("age_sort_2"), # OutOfBoundsError: index out of bounds # pl.col("name").sort_by("age").arr.get(2, null_on_oob=True).alias("age_2"), # SchemaError: invalid series dtype: expected `FixedSizeList`, got `str` pl.col("name").sort_by("age").last().alias("age_sort_-1") ) As shown in the code above, I want to get the name in each country whose age is in a specific order. However, Expr.get does not provide the null_on_oob parameter. How to automatically fill in null when an out-of-bounds situation occurs? In addition, the .arr.get method provides the null_on_oob parameter, but reports an error SchemaError: invalid series dtype: expected "FixedSizeList", got "str". I don't know what this error refers to and how to solve it. ps: The above code uses the repeated code pl.col("name").sort_by("age") many times. Is there a more concise method? | There's an open issue related to your question now - polars.Expr.take returns null if ComputeError: index out of bounds. For now you can either use shift() (maintain_order = True just to make it more readable): exp = pl.col.name.sort_by("age") ( df .group_by("country", maintain_order = True).agg( exp.first().alias("age_sort_1"), exp.shift(-2).first().alias("age_sort_2"), exp.last().alias("age_sort_-1"), ) ) ┌─────────┬────────────┬────────────┬─────────────┐ │ country │ age_sort_1 │ age_sort_2 │ age_sort_-1 │ │ --- │ --- │ --- │ --- │ │ str │ str │ str │ str │ ╞═════════╪════════════╪════════════╪═════════════╡ │ A │ a │ null │ b │ │ B │ c │ d │ d │ │ C │ f │ null │ f │ └─────────┴────────────┴────────────┴─────────────┘ Or, just aggregate your data into list, and then use .list.get() which allows you to use null_on_oob parameter: ( df .group_by("country").agg( pl.col.name.sort_by("age") ) .with_columns( pl.col.name .list.get(i, null_on_oob = True).alias(f"age_sort_{c}") for i, c in [(0, 1), (2, 2), (-1, -1)] ).drop("name") ) Alternatively you can use list.gather() to get the list of 3 elements you need, convert it to struct via list.to_struct() method and then unnest() it to columns: ( df .group_by("country").agg( pl.col.name.sort_by("age") ) .with_columns( pl.col.name .list.gather([0,2,-1], null_on_oob = True) .list.to_struct(fields=["age_sort_1","age_sort_2","age_sort_-1"]) ).unnest("name") ) | 7 | 6
78,868,163 | 2024-8-13 | https://stackoverflow.com/questions/78868163/does-poetry-for-python-use-a-nonstandard-pyproject-toml-how | I am considering introducing my organization to Poetry for Python, and I came across this claim: Avoid using the Poetry tool for new projects. Poetry uses non-standard implementations of key features. For example, it does not use the standard format in pyproject.toml files, which may cause compatibility issues with other tools. --Modern Good Practices for Python Development Is this true? I didn't immediately turn up anything searching. What does Poetry do that is nonstandard, in the pyproject.toml or anywhere else? | Update: Poetry 2.0.0 is released and has support for pyproject.toml project settings! Yes, Poetry as of writing still uses its own [tool.poetry.dependencies] table in pyproject.toml. This is in conflict with PEP621, which specifies among other things that a project's dependencies ought to be listed in [project.dependencies] instead. There is an issue tracking this here https://github.com/python-poetry/poetry/issues/3332 and it seems that as of writing support for PEP621 is on the way, with a draft pull request passing checks here: https://github.com/python-poetry/poetry/pull/9135. The main downside of this is that other tools do not understand which packages your poetry project depends on. I believe the built packages that you upload to PyPI etc do not suffer from this issue, because poetry during the build process creates correct package metadata with a fixed set of version-pinned dependencies, so pip install your-poetry-package would work just fine to install the dependencies. In conclusion, in poetry version 2.0, some time soon poetry might be compatible with pyproject.toml standards. | 3 | 6 |
78,842,605 | 2024-8-7 | https://stackoverflow.com/questions/78842605/16-byte-offset-in-mpeg-4-video-export-from-dicom-file | Short version: Where is the 16-byte offset coming from when exporting an MPEG-4 video stream from a DICOM file with Pydicom via the following code? (And, bonus question, is it always a 16-byte offset?) from pathlib import Path import pydicom in_dcm_filename: str = ... out_mp4_filename: str = ... ds = pydicom.dcmread(in_dcm_filename) Path(out_mp4_filename).write_bytes(ds.PixelData[16:]) # 16-byte offset necessary For reproducibility, one can use e.g. this DICOM file which I found in this old discussion on Google Groups (content warning: the video shows the open brain in a neurosurgical intervention). Long version I have a number of DICOM files containing surgical MPEG-4 video streams (transfer syntax UID 1.2.840.10008.1.2.4.102 Ββ MPEG-4 AVC/H.264 High Profile / Level 4.1). I wanted to export the video streams from the DICOM files for easier handling in downstream tasks. After a bit of googling, I found the following discussion, suggesting the use of dcmdump from DCMTK, as follows (which I was able to reproduce): Run dcmdump +P 7fe0,0010 <in_dcm_filename> +W <out_folder>. From the resulting two files in <out_folder>, mpeg4.dcm.0.raw and mpeg4.dcm.1.raw, discard the first one, which has a size of 0 bytes, and keep the second one (potentially changing its suffix to .mp4), which is a regular, playable video file. From what I saw in the dcmdump command, I concluded this was just a raw dump of tag 7fe0,0010 (which is the Pixel Data attribute)ΒΉ, so I thought I could reproduce this with Pydicom. My first attempt was using Path(out_mp4_filename).write_bytes(ds.PixelData) (see code sample above for complete details); however, I ended up with a file that could not be played. I then compared a hex dump of the dcmdump result and of the Pydicom result: $ hd ./dcmdump.mp4 | head 00000000 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 |... ftypisom....| 00000010 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 |isomiso2avc1mp41| 00000020 00 00 00 08 66 72 65 65 00 ce 97 1d 6d 64 61 74 |....free....mdat| ... $ hd ./pydicom.mp4 | head 00000000 fe ff 00 e0 00 00 00 00 fe ff 00 e0 3e bc ce 00 |............>...| 00000010 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 |... ftypisom....| 00000020 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 |isomiso2avc1mp41| ... From this I noticed that my Pydicom export contained 16 preceding extra bytes. Once I removed them via Path(out_mp4_filename).write_bytes(ds.PixelData[16:]), I got the exact same, playable video export as with dcmdump. So, again, my question is: Where do these 16 extra bytes come from, what is their meaning, and am I safe simply removing them? ΒΉ) Update: In hindsight, I should have gotten suspicious because of the two files that were created by dcmdump. | The reason why you see these bytes is that the pixel data is encapsulated. Using dcmdump shows this clearly: (7fe0,0010) OB (PixelSequence #=2) # u/l, 1 PixelData (fffe,e000) pi (no value available) # 0, 1 Item (fffe,e000) pi 00\00\00\20\66\74\79\70\69\73\6f\6d\00\00\02\00\69\73\6f\6d\69\73... # 13548606, 1 Item (fffe,e0dd) na (SequenceDelimitationItem) # 0, 0 SequenceDelimitationItem If you check the leading bytes that you strip, you can see that they contain the respective delimiter tags as shown in the dump output. You can also see that there are 2 items contained, the first of them empty - these are the ones you get using dcmtk. 
pydicom 2 To get the encapsulated contents, you can use encaps.defragment_data in pydicom 2.x, which returns all contained fragments in one data block: from pydicom import dcmread, encaps ds = dcmread("test_720.dcm") with open("test_720.mpeg4", "wb") as f: f.write(encaps.defragment_data(ds.PixelData)) Note that in general, the fragments are parts of multi-frame data (in the most common case, one fragment per frame), and you may want to handle them separately. In the case of MPEG4 there is only one continuous datastream with the video data, and merging any fragments this may be split into is the correct way to handle this. Note that the first (empty) item is the Basic Offset Table, that is required to be in the first item of the encapsulated data. It can be empty, and for the MPEG transfer syntax it is always empty. From the DICOM standard: The Basic Offset Table is not used because MPEG2 contains its own mechanism for describing navigation of frames. pydicom 3 In pydicom 3, encaps.defragment_data is deprecated in favor of encaps.generate_fragments, which will yield one fragment at a time. As @scaramallion pointed out in the comments, there are also more convenient new generator functions that yield only the fragments/frames with the actual data, excluding the offset table: generate_fragmented_frames and generate_frames. In this case you don't have to worry about the internal structure (e.g. the offset table): from pydicom import dcmread, encaps ds = dcmread("test_720.dcm") with open("test_720.mpeg4", "wb") as f: for frame in encaps.generate_frames(ds.PixelData): # for other use cases, you may save the frames separately f.write(frame) | 3 | 4 |
78,859,635 | 2024-8-12 | https://stackoverflow.com/questions/78859635/py2app-error-17-file-exists-when-running-py2app-for-the-first-time | I'm trying to make a simple password app on my desktop from a code I wrote. I've done this before with no troubles on another simple app. This is my setup: from setuptools import setup APP = ['main.py'] OPTIONS = { 'argv_emulation': True, 'iconfile':"logo.png", 'packages': ['tkinter', 'random', 'json', 'pyperclip'] } setup( app=APP, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) This is the last few lines of code that run in my terminal if I run Python 3 setup.py py2app /Users/cjsander/Desktop/Password/build/bdist.macosx-10.9-universal2/python3.12- standalone/app/collect/typing_extensions-4.12.2.dist-info/METADATA error: [Errno 17] File exists: '/Users/cjsander/Desktop/Password/build/bdist.macosx-10.9- universal2/python3.12-standalone/app/collect/packaging-24.1.dist-info' The Dist is never created, and even when I move folders that I attempt to run the code from or delete all previous Dist and Build folders this error is produced. I've tried to move the file location, delete previous DIST and BUILD folders, uninstalled and reinstalled py2app including not updating the install. I've also tried on other code files and the same thing happens. All resulted with the same error, even when the file doesn't exist. | This is a known issue in setuptools >= 70.0.2. Pin 70.3.0 to get past it. For example in a pyproject.tml: [tool.poetry.dependencies] python = "^3.12" pyqt5 = "^5.15.10" setuptools = "==70.3.0" # add this line | 4 | 4 |
78,866,188 | 2024-8-13 | https://stackoverflow.com/questions/78866188/ctypes-bitfield-sets-whole-byte | I have the following structure: class KeyboardModifiers(Structure): _fields_ = [ ('left_control', c_bool, 1), ('right_control', c_bool, 1), ('left_shift', c_bool, 1), ('right_shift', c_bool, 1), ('left_alt', c_bool, 1), ('right_alt', c_bool, 1), ('left_meta', c_bool, 1), ('right_meta', c_bool, 1), ('left_super', c_bool, 1), ('right_super', c_bool, 1), ('left_hyper', c_bool, 1), ('right_hyper', c_bool, 1), ] It represents a structure returned by a C function, the fields are properly set and their values returned, issue comes when setting a field. For example, if I were to do something like: my_keyboard_mods.left_shift = True The first 8 fields would add be set to True, similarly with the next 8. What seems to happen is that it sets the value for the whole byte not respecting the bitfield. My question is: I am doing something wrong, so what's wrong? This is a bug with ctypes, is there a workaround? Thanks. | It results from using c_bool. I'm not getting the whole byte set to 1, but bit set resulting in a non-zero byte results in a value of 1, not 0xFF as described. It may be an OS or implementation detail. For me it seems to normalize the boolean value to 0/1 based on zero/non-zero value of the byte. For your implementation it may be normalizing to 0/FF. FYI: make sure to post the entire code that reproduces the problem next time. from ctypes import * class KeyboardModifiers(Structure): _fields_ = [ ('left_control', c_bool, 1), ('right_control', c_bool, 1), ('left_shift', c_bool, 1), ('right_shift', c_bool, 1), ('left_alt', c_bool, 1), ('right_alt', c_bool, 1), ('left_meta', c_bool, 1), ('right_meta', c_bool, 1), ('left_super', c_bool, 1), ('right_super', c_bool, 1), ('left_hyper', c_bool, 1), ('right_hyper', c_bool, 1), ] k = KeyboardModifiers() k.left_control = True k.left_shift = True k.right_hyper = True print(bytes(k)) Expected b'\x05\0x08' but got the following output due to c_bool implementation treating non-zero bytes as 1: b'\x01\x01' Use c_ubyte or c_ushort instead. Note that using a type larger than two bytes would make the structure longer than two bytes if size is a concern: from ctypes import * class KeyboardModifiers(Structure): _fields_ = [ ('left_control', c_ubyte, 1), ('right_control', c_ubyte, 1), ('left_shift', c_ubyte, 1), ('right_shift', c_ubyte, 1), ('left_alt', c_ubyte, 1), ('right_alt', c_ubyte, 1), ('left_meta', c_ubyte, 1), ('right_meta', c_ubyte, 1), ('left_super', c_ubyte, 1), ('right_super', c_ubyte, 1), ('left_hyper', c_ubyte, 1), ('right_hyper', c_ubyte, 1), ] k = KeyboardModifiers() k.left_control = True k.left_shift = True k.right_hyper = True print(bytes(k)) Output: b'\x05\x08' Edit: I see now that each field seems to refer to bit 0 of its containing byte, so even though the byte value of the structure only ever has 0 or 1, the values of the field bits do all change to True when displayed. 
In any case the fix is the same...don't use c_bool: from ctypes import * class KeyboardModifiers(Structure): _fields_ = [ ('left_control', c_bool, 1), ('right_control', c_bool, 1), ('left_shift', c_bool, 1), ('right_shift', c_bool, 1), ('left_alt', c_bool, 1), ('right_alt', c_bool, 1), ('left_meta', c_bool, 1), ('right_meta', c_bool, 1), ('left_super', c_bool, 1), ('right_super', c_bool, 1), ('left_hyper', c_bool, 1), ('right_hyper', c_bool, 1), ] def __repr__(self): return (f'KeyboardModifiers(\n' f' left_control={self.left_control},\n' f' right_control={self.right_control},\n' f' left_shift={self.left_shift},\n' f' right_shift={self.right_shift},\n' f' left_alt={self.left_alt},\n' f' right_alt={self.right_alt},\n' f' left_meta={self.left_meta},\n' f' right_meta={self.right_meta},\n' f' left_super={self.left_super},\n' f' right_super={self.right_super},\n' f' left_hyper={self.left_hyper},\n' f' right_hyper={self.right_hyper})') k = KeyboardModifiers() k.left_control = True print(bytes(k)) print(k) Output: b'\x01\x00' KeyboardModifiers( left_control=True, right_control=True, left_shift=True, right_shift=True, left_alt=True, right_alt=True, left_meta=True, right_meta=True, left_super=False, right_super=False, left_hyper=False, right_hyper=False) | 4 | 3 |
78,864,879 | 2024-8-13 | https://stackoverflow.com/questions/78864879/make-a-full-width-table-using-borb | I'm trying to reduce the margins of a page using borb, that works, but my Table then is not taking the full width of the page. Please note that not only Table is shifted and not covering the full width, HorizontalRule as well etc... What's the fix for that ? from decimal import Decimal from borb.pdf import Document, Page, SingleColumnLayout, PageLayout, Paragraph, Image, PDF def main(): # Create document pdf = Document() # Add page page = Page() pdf.add_page(page) # create PageLayout page_layout: PageLayout = SingleColumnLayout(page) for i in [Decimal(0), Decimal(16), Decimal(32), Decimal(64)]: page_layout._margin_left = i page_layout._margin_right = i page_layout._margin_bottom = i page_layout._margin_top = i # Add Paragraph page_layout.add(Paragraph(f"padding set at {int(i)}")) page_layout.add( FixedColumnWidthTable( number_of_columns=3, number_of_rows=3 ) ) with open("output.pdf", "wb") as out_file_handle: PDF.dumps(out_file_handle, pdf) if __name__ == "__main__": main() Here's what I would like. | disclaimer: I am the author of borb The "issue" you are running into is that borb applies a sensible default for margins (on Image objects). Looking at the (constructor) code in Image: margin_bottom=margin_bottom if margin_bottom is not None else Decimal(5), margin_left=margin_left if margin_left is not None else Decimal(5), margin_right=margin_right if margin_right is not None else Decimal(5), margin_top=margin_top if margin_top is not None else Decimal(5), So you explicitly need to pass margin_left=Decimal(0) (and other) to have an Image whose total distance to the page-boundary is exactly as specified. Update (for adding Table objects): The main issue is that PageLayout also keeps track of the available width, which is used by a FixedColumnWidthTable but not by the Image. So, your code needs to change that as well. from decimal import Decimal from borb.pdf import Document, Page, SingleColumnLayout, PageLayout, Paragraph, Image, PDF, FixedColumnWidthTable def main(): # Create document pdf = Document() # Add page page = Page() pdf.add_page(page) # create PageLayout page_layout: PageLayout = SingleColumnLayout(page) for i in [Decimal(0), Decimal(16), Decimal(32), Decimal(64)]: # change margin # IMPORTANT: you are not supposed to do this after having created # the PageLayout page_layout._margin_left = i page_layout._margin_right = i page_layout._margin_bottom = i page_layout._margin_top = i page_layout._column_widths = [Decimal(595) - i - i] # Add Paragraph page_layout.add(Paragraph(f"padding set at {int(i)}")) page_layout.add( FixedColumnWidthTable( number_of_columns=2, number_of_rows=3 ).add(Paragraph("Lorem")) .add(Paragraph("Ipsum")) .add(Paragraph("Dolor")) .add(Paragraph("Sit")) .add(Paragraph("Amet")) .add(Paragraph("Consectetur")) ) with open("output.pdf", "wb") as out_file_handle: PDF.dumps(out_file_handle, pdf) if __name__ == "__main__": main() Which outputs the following PDF: | 2 | 0 |
78,850,669 | 2024-8-8 | https://stackoverflow.com/questions/78850669/create-a-custom-pydantic-class-requiring-value-to-be-in-an-interval | I'm working to build a function parameter validation library using pydantic. We want to be able to validate parameters' types and values. Types are easy enough, but I'm having a hard time creating a class to validate values. Specifically, the first class I want to build is one that requires the value to be in a user-defined interval. So far, I've written a decorator and a function-based version of ValueInInterval. However, I would prefer to use a class-based approach. Here's an MRE of my issue: from typing import Any from typing_extensions import Annotated from pydantic import Field, validate_call class ValueInInterval: def __init__( self, type_definition: Any, start: Any, end: Any, include_start: bool = True, include_end: bool = True, ): self.type_definition = type_definition self.start = start self.end = end self.include_start = include_start self.include_end = include_end def __call__(self): return Annotated[self.type_definition, self.create_field()] def create_field(self) -> Field: field_config = {} if self.include_start: field_config.update({"ge": self.start}) else: field_config.update({"gt": self.start}) if self.include_end: field_config.update({"le": self.end}) else: field_config.update({"lt": self.end}) return Field(**field_config) def __get_pydantic_core_schema__( self, handler, ): schema = handler(self.type_definition) if self.include_start: schema.update({"ge": self.start}) else: schema.update({"gt": self.start}) if self.include_end: schema.update({"le": self.end}) else: schema.update({"lt": self.end}) return schema @validate_call() def test_interval( value: ValueInInterval(type_definition=int, start=1, end=10), ): print(value) test_interval(value=1) # should succeed test_interval(value=0) # should fail Running this code in the PyCharm's Python console, I get the following error: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2023.2\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 56, in <module> File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\validate_call_decorator.py", line 56, in validate validate_call_wrapper = _validate_call.ValidateCallWrapper(function, config, validate_return, local_ns) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_validate_call.py", line 57, in __init__ schema = gen_schema.clean_schema(gen_schema.generate_schema(function)) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 512, in generate_schema schema = self._generate_schema_inner(obj) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 789, in _generate_schema_inner return self.match_type(obj) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 856, in match_type return self._callable_schema(obj) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 1692, in _callable_schema arg_schema = self._generate_parameter_schema(name, annotation, p.default, parameter_mode) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 1414, in _generate_parameter_schema schema = self._apply_annotations(source_type, annotations) File 
"C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 1890, in _apply_annotations schema = get_inner_schema(source_type) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_schema_generation_shared.py", line 83, in __call__ schema = self._handler(source_type) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 1869, in inner_handler from_property = self._generate_schema_from_property(obj, source_type) File "C:\projects\django-postgres-loader\venv\lib\site-packages\pydantic\_internal\_generate_schema.py", line 677, in _generate_schema_from_property schema = get_schema(source) File "<input>", line 40, in __get_pydantic_core_schema__ TypeError: __call__() takes 1 positional argument but 2 were given I strongly suspect that I've not written the __get_pydantic_core_schema__ correctly. Note that I'm using pydantic version 2.7 and Python 3.11. | I was able to find a solution - I had indeed messed up when writing the __get_pydantic_core_schema__ method. Instead of calling handler as a function, its .generate_schema() method must be used, i.e. I replaced schema = handler(self.type_definition) with schema = handler.generate_schema(self.type_definition) So here is the full class definition that I ultimately went with: from typing import Any, Union from typing_extensions import Annotated from pydantic import Field, GetCoreSchemaHandler, validate_call from pydantic_core import core_schema class ValueInInterval: def __init__( self, type_definition: Any, start: Any = None, end: Any = None, include_start: bool = True, include_end: bool = True, ): self.type_definition = type_definition self.start = start self.end = end self.include_start = include_start self.include_end = include_end def __get_pydantic_core_schema__( self, source_type, handler, ): schema = handler.generate_schema(self.type_definition) if self.start is not None: if self.include_start: schema.update({"ge": self.start}) else: schema.update({"gt": self.start}) if self.end is not None: if self.include_end: schema.update({"le": self.end}) else: schema.update({"lt": self.end}) return schema # usage example @validate_call() def test_interval( value: ValueInInterval(type_definition=int, start=1, end=10), ): print(value) test_interval(value=1) # passes test_interval(value=0) # fails | 2 | 0 |
78,845,834 | 2024-8-7 | https://stackoverflow.com/questions/78845834/selenium-webdriver-unexpectedly-exits-in-aws-mwaa | I'm trying to run selenium periodically within AWS MWAA but chromium crashes with status code -5 every single time. I've tried to google this status code without success. Any ideas as to what's causing this error? Alternatively, how should I be running selenium with AWS MWAA? One suggestion I saw was to run a selenium in a docker container along side airflow but that isn't possible with AWS MWAA. Code from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromiumService from webdriver_manager.chrome import ChromeDriverManager from webdriver_manager.core.os_manager import ChromeType from selenium.webdriver.chrome.options import Options options = Options() options.add_argument("--headless=new") driver = webdriver.Chrome( service=ChromiumService( ChromeDriverManager(chrome_type=ChromeType.CHROMIUM).install() ), options=options, ) Error: chromedriver exits with status code 5 >>> options = Options() >>> options.add_argument("--headless=new") >>> driver = webdriver.Chrome( ... service=ChromiumService( ... ChromeDriverManager(chrome_type=ChromeType.CHROMIUM).install() ... ), ... options=options, ... ) DEBUG:selenium.webdriver.common.driver_finder:Skipping Selenium Manager; path to chrome driver specified in Service class: /usr/local/airflow/.wdm/drivers/chromedriver/linux64/114.0.5735.90/chromedriver DEBUG:selenium.webdriver.common.service:Started executable: `/usr/local/airflow/.wdm/drivers/chromedriver/linux64/114.0.5735.90/chromedriver` in a child process with pid: 19414 using 0 to output -3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/airflow/.local/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 45, in __init__ super().__init__( File "/usr/local/airflow/.local/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 55, in __init__ self.service.start() File "/usr/local/airflow/.local/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 102, in start self.assert_process_still_running() File "/usr/local/airflow/.local/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 115, in assert_process_still_running raise WebDriverException(f"Service {self._path} unexpectedly exited. Status code was: {return_code}") selenium.common.exceptions.WebDriverException: Message: Service /usr/local/airflow/.wdm/drivers/chromedriver/linux64/114.0.5735.90/chromedriver unexpectedly exited. Status code was: -5 Versions selenium==4.21.0 webdriver-manager==4.0.2 chromedriver==114.0.5735.90 aws-mwaa-local-runner v2.8.1 To reproduce this error, you can download AWS MWAA localrunner v2.8.1, install the requirements above, bash into the container (docker exec -it {container_id} /bin/bash) and run the script. | Setup I mainly tried to make this work without root privileges due to a misunderstanding. Now there are two methods setup the environment! And yes, you need Chrome. Setting up without root privileges I'm proud to say this method does not require root privileges. The way you indicated it to me was that you couldn't run anything that needed it because you said you couldn't install programs. That's okay. Here's a working method. It now sounds like he's leaning more towards this method anyway. I have provided a setup Python script here (setup.py). Run it inside the environment, and it will set up everything for you. 
Basically what it does is it downloads Chrome, chromeDriver, and libraries that are needed for them to run that I installed using root privileges before. Then, it extracts them, allows them to be executable, and allows them to recognize the libraries. This is what it looks like: import subprocess, zipfile, os def unzip_file(name, path): """ Unzips a file Args: name (str): The name of the zip file to unzip path (str): The path to the extract directory """ print(f"Unzipping {name} to {path}...") # Open the ZIP file with zipfile.ZipFile(name, 'r') as zip_ref: # Extract all contents into the specified directory zip_ref.extractall(path) print("Extraction complete!") delete_file(name) def download_file(url): """ Downloads the file from a given url Args: url (str): The url to download the file from """ download = subprocess.run(["wget", f"{url}"], capture_output=True, text=True) # Print the output of the command print(download.stdout) def delete_file(path): """ Downloads the file from a given url Args: path (str): The path to the file to delete """ # Check if the file exists before attempting to delete if os.path.exists(path): os.remove(path) print(f"File {path} has been deleted.") else: print(f"The file {path} does not exist.") def write_to_bashrc(line): """ Downloads the file from a given url Args: line (str): The line to write """ # Path to the ~/.bashrc file bashrc_path = os.path.expanduser("~/.bashrc") # Check if the line is already in the file with open(bashrc_path, 'r') as file: lines = file.readlines() if line not in lines: with open(bashrc_path, 'a') as file: file.write(line) print(f"{line} has been added to ~/.bashrc") else: print("That is already in ~/.bashrc") if __name__ == '__main__': download_file("https://storage.googleapis.com/chrome-for-testing-public/127.0.6533.119/linux64/chrome-linux64.zip") unzip_file("chrome-linux64.zip", ".") subprocess.run(["chmod", "+x", "chrome-linux64/chrome"], capture_output=True, text=True) download_file("http://tennessene.github.io/chrome-libs.zip") unzip_file("chrome-libs.zip", "libs") download_file("https://storage.googleapis.com/chrome-for-testing-public/127.0.6533.119/linux64/chromedriver-linux64.zip") unzip_file("chromedriver-linux64.zip", ".") subprocess.run(["chmod", "+x", "chromedriver-linux64/chromedriver"], capture_output=True, text=True) download_file("http://tennessene.github.io/driver-libs.zip") unzip_file("driver-libs.zip", "libs") current_directory = os.path.abspath(os.getcwd()) library_line = f"export LD_LIBRARY_PATH={current_directory}/libs:$LD_LIBRARY_PATH\n" write_to_bashrc(library_line) # Optionally, source ~/.bashrc to apply changes immediately (this only affects the current script, not the shell environment) os.system("source ~/.bashrc") Setting up with root privileges First, I would install chrome. Here you can download the .rpm package directly from Google. wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm Make sure to install it sudo rpm -i google-chrome-stable_current_x86_64.rpm Next, I would just download chromeDriver. The builds are offered here. wget https://storage.googleapis.com/chrome-for-testing-public/127.0.6533.119/linux64/chromedriver-linux64.zip Extract it unzip chromedriver-linux64.zip Here's a little bit of background info before the last step. As you probably already know, AWS MWAA uses Amazon Linux 2 which is similar to CentOS/RHEL. 
How I was able to find the libraries needed (the libraries here are for Ubuntu), is I stumbled across one of the libraries I needed except it was for Oracle Linux. They were under different names (e.g. nss instead of libnss3). I then looked at Amazon's package repository and they were there, under similar names to Oracle Linux's packages. The libraries I ended up needing for chromeDriver were nss, nss-utils, nspr, and libxcb. Finally, install those pesky libraries sudo dnf update sudo dnf install nss nss-utils nspr libxcb A lot better than copying them over by hand! It should just work right away after that. Make sure your main.py looks something like mine though. Running the script Here is what my main python script ended up looking like (main.py): from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.wait import WebDriverWait def visit_url(url): """ Navigates to a given url. Args: url (str): The url of the site to visit (e.g., "https://stackexchange.com/"). """ print(f"Visiting {url}") driver.get(url) WebDriverWait(driver, 10).until( lambda driver: driver.execute_script('return document.readyState') == 'complete' ) if __name__ == '__main__': # Set up Chrome options options = Options() options.add_argument("--headless") # Run Chrome in headless mode options.add_argument("--no-sandbox") options.add_argument("--disable-dev-shm-usage") options.add_argument("--remote-debugging-port=9222") options.binary_location = "chrome-linux64/chrome" # ONLY for non-root install # Initialize the WebDriver driver = webdriver.Chrome(options=options, service=Service("chromedriver-linux64/chromedriver")) try: visit_url("https://stackoverflow.com/") # For debugging purposes (if you can even access it) driver.save_screenshot("stack_overflow.png") except Exception as e: print(f"An error occurred: {e}") finally: # Always close the browser print("Finished! Closing...") driver.close() driver.quit() It was very picky as far as getting it to recognize Chrome for the non-root install since it's not in its usual place. But, this is a basic script you can base your program off of. It saves a screenshot and you can watch it work at localhost:9222. Not exactly sure how you would view it though. | 2 | 1 |
78,864,778 | 2024-8-13 | https://stackoverflow.com/questions/78864778/how-can-i-serialize-versionedtransaction-in-solana-with-python | I built code that sends several txs in one Jito bundle with Node.js. I want to convert this code to Python. When I use a Jito bundle, I have to serialize the transactions. In Node.js I can use transaction.serialize(), but when I call transaction.serialize() in Python it raises an error: 'solders.transaction.VersionedTransaction' object has no attribute 'serialize' How can I serialize a VersionedTransaction in Solana with Python? | It looks like VersionedTransaction implements bytes at https://github.com/kevinheavey/solders/blob/c10419c29b7890e69572f0160e5e74406814048b/python/solders/transaction.pyi#L121, so you should be able to serialize the transaction with: bytes(transaction) | 2 | 2
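A minimal sketch of how the bytes(...) serialization could feed a bundle payload, assuming txs is a list of already-signed solders VersionedTransaction objects; whether the bundle endpoint wants base64 or base58 strings is an assumption to check against the Jito documentation.

```python
import base64

def encode_transactions(txs):
    # bytes(tx) plays the role of transaction.serialize() from web3.js
    return [base64.b64encode(bytes(tx)).decode("ascii") for tx in txs]
```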
78,863,608 | 2024-8-12 | https://stackoverflow.com/questions/78863608/django-5-update-or-create-reverse-one-to-one-field | On Django 4.x the code works as expected from django.db import models class Project(models.Model): rough_data = models.OneToOneField( "Data", related_name="rough_project", on_delete=models.SET_NULL, null=True, blank=True, ) final_data = models.OneToOneField( "Data", related_name="final_project", on_delete=models.SET_NULL, null=True, blank=True, ) class Data(models.Model): pass data, created = Data.objects.update_or_create( rough_project=project, defaults=data ) On Django 5.x: ValueError: The following fields do not exist in this model: rough_project I do not see any changes related to this in the changelog. In Django 4.2 and below it worked as one line with update_or_create and in 5.0 it stopped working, so instead I need to do something like this: data = getattr(project, "rough_data", None) if data: # update fields here else: # create Data object | In Django 4.x, it did not raise an exception but also did not save the relationship if the object was created, so it was actually not working as expected. Django ticket: https://code.djangoproject.com/ticket/34586 Django PR that fixed¹ it to raise an exception: https://github.com/django/django/pull/17112/files It appears to work as expected only if project.save() is called afterwards. ¹You could fix it to proactively save the relationship instead, by subclassing QuerySet and overriding the create method, and then specifying Data objects to use your Manager: class CreateSaveReverseOneToOneFieldsQuerySet(models.QuerySet): def create(self, **kwargs): """ Create a new object with the given kwargs, saving it to the database and returning the created object. Also save objects in reverse OneToOne fields; see https://stackoverflow.com/questions/78863608/django-5-update-or-create-reverse-one-to-one-field. """ obj = self.model(**kwargs) self._for_write = True obj.save(force_insert=True, using=self.db) # Save reverse OneToOne fields reverse_one_to_one_fields = frozenset(kwargs).intersection(self.model._meta._reverse_one_to_one_field_names) for field_name in reverse_one_to_one_fields: getattr(obj, field_name).save() return obj class DataManager(models.manager.BaseManager.from_queryset(CreateSaveReverseOneToOneFieldsQuerySet)): pass class Data(models.Model): objects = DataManager() | 3 | 5
78,856,057 | 2024-8-10 | https://stackoverflow.com/questions/78856057/fasthtml-htmx-attribute-is-not-translated-in-final-html | I was making a simple Input field in FastHTML. A function MakeInput returns the input field as I wish to clear the input field after processing the form. def MakeInput(): return Input( placeholder='Add new Idea', id='title', hx_swap_oob='true' ) @route('/todos') def get(): form = Form( Group( MakeInput(), Button('Save',hx_post='/',target_id='todo-list',hx_swap='beforeend') ) ) return Titled( 'List', Card( Ul(*todos(),id='todo-list'), header=form ) ) @route('/') def post(todo:Todo): return todos.insert(todo), MakeInput() But when the final HTML is rendered, the attribute becomes hx-swap:oob='true'. This prevents the htmx part from replacing the existing element and creates a new element instead. It should be hx-swap-oob='true'. Is there an issue with the way I am passing the attributes in FastHTML functions? | This is a bug that was fixed in version 0.3.1, see the following issue: https://github.com/AnswerDotAI/fasthtml/issues/254 Update to version 0.3.1 or higher and you should be fine. | 2 | 3
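A quick way to confirm which release is installed before and after upgrading; the distribution name python-fasthtml is an assumption about the package name on PyPI.

```python
from importlib.metadata import version

print(version("python-fasthtml"))  # the hx-swap-oob fix shipped in 0.3.1
```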
78,868,024 | 2024-8-13 | https://stackoverflow.com/questions/78868024/how-to-know-when-to-use-map-elements-map-batches-lambda-and-struct-when-using | import polars as pl import numpy as np df_sim = pl.DataFrame({ "daily_n": [1000, 2000, 3000, 4000], "prob": [.5, .5, .5, .6], "size": 1 }) df_sim = df_sim.with_columns( pl.struct(["daily_n", "prob", "size"]) .map_elements(lambda x: np.random.binomial(n=x['daily_n'], p=x['prob'], size=x['size'])) .cast(pl.Int32) .alias('events') ) df_sim However the following code would fail with the message "TypeError: float() argument must be a string or a number, not 'Expr'" df_sim.with_columns( np.random.binomial(n=col('daily_n'), p=col('prob'), size=col('size')) .alias('events') ) Why do some functions require use of struct(), map_elements() and lambda, while others do not? In my case below I am able to simply refer to polars columns as function arguments by using col(). def local_double(x): return(2*x) df_ab.with_columns(rev_2x = local_double(col("revenue"))) | Let's go back to what a context is/does. polars DataFrames (or LazyFrame) have contexts which is just a generic way of referring to with_columns, select, agg, and group_by. The inputs to contexts are Expressions. To a limited extent, the python side of polars can convert python objects into polars expressions. For example a datetime or an int are easily converted to a polars expression and so when you input col('a')*2. It converts that into an expression of col('a').mul(lit(2)). Functions that return expressions: Here's your function with type annotations. def local_double(x: pl.Expr) -> pl.Expr: return(2*x) It takes an Expr as input and returns another Expr as output. It doesn't do any work, it just gives polars a new Expr. Using this function is the same as doing df_ab.with_columns(rev_2x = 2*col("revenue")). In fact, polars isn't doing anything with your function when you do df_ab.with_columns(rev_2x = local_double(col("revenue"))) because the order of operations by python is going to resolve your function so that python can give polars its output as an input to polars' context. Why do we need map_batches and map_elements Remember that polars contexts are expecting expressions. One of the reasons polars is so fast and efficient is that behind the API is its own query language and processing engine. That language speaks in expressions. To "translate" from python that it doesn't already know you have to use one of the map_* functions. What they do is convert your expression into values. In the case of map_batches it will give the whole pl.Series to whatever function you choose all at once. In the case of map_elements it will give the function one python value at a time. They are the translation layer so that polars can interact with arbitrary functions. Why do we need to wrap columns in struct? Polars is designed to operate multiple expressions in parallel. That means that each expression doesn't know what any other expression is doing. As a side effect of this it means that no expression can be the input of another expression in the same context. This may seem like a limiting design but it's really not because of structs. They are a type of column which can contain multiple columns in one. If you're going to use a function that needs multiple inputs from your DataFrame then they give the way of converting multiple columns into just one expression. If you only need one column from your DataFrame to be handed to your function then you don't need to wrap it in a struct. 
(bonus) Besides functions that return Exprs are there other times we don't need map_*? Yes. Numpy has what they call Universal Functions, or ufunc. You can use a ufunc directly in a context giving it your col('a'), col('b') directly as inputs. For example, you can do df.with_columns(log_a = np.log(pl.col('a'))) and it'll just work. You can even make your own ufunc with numba which will also just work. The mechanism behind why ufuncs just work is actually the same as Functions that return expressions, but with more hidden steps. When a ufunc gets an input that isn't an np.array, instead of raising an error (as you got with np.random.binomial), it will check if the input has __array_ufunc__ as a method. If it does then it'll run that method. polars implements that method in pl.Expr so the above gets converted into df.with_columns(log_a = pl.col('a').map_batches(np.log)) If you have a ufunc that takes multiple inputs, it will even convert all of those inputs into a struct automatically. Why do you need to use lambda sometimes? You don't ever need lambda, it's just a way to make and use a function in one line. Instead of your example we could do this instead def binomial_elements(x: dict) -> float: return np.random.binomial(n=x['daily_n'], p=x['prob'], size=x['size']) df_sim.with_columns( pl.struct(["daily_n", "prob", "size"]) .map_elements(binomial_elements) .cast(pl.Int32) .alias('events') ) (bonus) When to use map_elements and when to use map_batches? Spoiler alert: Your example should be map_batches Anytime you're dealing with a vectorized function, map_batches is the better choice. I believe most (if not all) of numpy's functions are vectorized, as are scipy's. As such, your example would be more performant as: def binomial_batches(x: pl.Series) -> np.array: return np.random.binomial(n=x.struct['daily_n'], p=x.struct['prob']) df_sim.with_columns( pl.struct("daily_n", "prob") .map_batches(binomial_batches) .cast(pl.Int32) .alias('events') ) Notice that I took out the size parameter because numpy infers the output size from the size of daily_n and prob. Also, when you do map_batches on the Expr, it becomes a Series rather than a dict. To access the individual fields within the struct Series, you need to use the .struct namespace so that's a bit different syntax to be aware of between map_elements and map_batches. You could also do this as a lambda like df_sim.with_columns( pl.struct("daily_n", "prob") .map_batches(lambda x: np.random.binomial(n=x.struct['daily_n'], p=x.struct['prob'])) .cast(pl.Int32) .alias('events') ) One last overlooked thing about map_batches The function that you give map_batches is supposed to return a pl.Series except for in the above it returns an np.array. polars has pretty good interoperability with numpy so it's able to automatically convert the np.array into a pl.Series. One area where you might get tripped up is if you're using pyarrow.compute functions. Polars won't automatically convert that to pl.Series so you'd need to explicitly do it. As an aside: I made this gist of a decorator which will, in principle, take any function and make it look for the __array_ufunc__ method of inputs so that you don't have to use map_*. I say "in principle" because I haven't tested it extensively so don't want to over hype it. A note on np.random.binomial (response to comment) There are 2+1 modes of binomial (and really many np functions). What do I mean 2+1? You can give it a single value in each of n and p and then give it a size to get a 1d array with a length of size. 
This is essentially what your map_elements approach is doing. You can give it an array for n or p and nothing for size, then it'll give you a 1d array matching the size of the array you gave it for n. This is what the map_batches approach is doing. (the +1) You can combine the previous two modes and give it an array for n, p, and for size you give it a tuple where the first element is the number of simulations for each n and p with the second element of the tuple being the length of n and p. With that, it'll give you a 2d array with rows equal to the number of simulations and columns for each of the input length. You can get that 3rd mode in polars as long as you transpose to fit polars. That would look like this: df_sim.with_columns( pl.struct("daily_n", "prob") .map_batches(lambda x: ( np.random.binomial( n=x.struct['daily_n'], p=x.struct['prob'], size=(3,x.shape[0]) ).transpose() ) ) .alias('events') ) shape: (4, 4) ┌─────────┬──────┬──────┬────────────────────┐ │ daily_n │ prob │ size │ events │ │ --- │ --- │ --- │ --- │ │ i64 │ f64 │ i32 │ list[i64] │ ╞═════════╪══════╪══════╪════════════════════╡ │ 1000 │ 0.5 │ 1 │ [491, 493, 482] │ │ 2000 │ 0.5 │ 1 │ [1032, 966, 972] │ │ 3000 │ 0.5 │ 1 │ [1528, 1504, 1483] │ │ 4000 │ 0.6 │ 1 │ [2401, 2422, 2367] │ └─────────┴──────┴──────┴────────────────────┘ | 5 | 6
78,867,160 | 2024-8-13 | https://stackoverflow.com/questions/78867160/pypdf2-stalling-while-parsing-pdf-for-unknown-reason | I have a script in which I go through and parse a large collection of PDFs. I noticed that when I tried to parse a particular PDF, the script just stalls forever. But it doesn't throw up an error and as far as I can tell, the PDF is not corrupted. I can't tell what the issue is, but I can see that it happens on page 4. Is there a way to find out what is causing this issue, or to just skip the PDF if it is taking longer than one minute to parse? For reference, here is the PDF: https://go.boarddocs.com/fl/palmbeach/Board.nsf/files/CTWGW9459021/$file/22C-001R_2ND%20RENEWAL%20CONTRACT_TERRACON.pdf from PyPDF2 import PdfReader doc = "somefile.pdf" doc_text = "" try: print(doc) reader = PdfReader(doc) for i in range(len(reader.pages)): print(i) page = reader.pages[i] text = page.extract_text() doc_text += text except Exception as e: print(f"The file failed due to error {e}:") doc_text = "" | You should not use PyPDF2 any more unless really required and switch to pypdf instead - see the note on PyPI as well: https://pypi.org/project/PyPDF2/ Running the corresponding migrated code with the latest release does not show any performance issues: from pypdf import PdfReader doc = "78867160.pdf" doc_text = "" try: print(doc) reader = PdfReader(doc) for i, page in enumerate(reader.pages): print(i) text = page.extract_text() doc_text += text except Exception as e: print(f"The file failed due to error {e}:") doc_text = "" | 2 | 1 |
78,864,414 | 2024-8-13 | https://stackoverflow.com/questions/78864414/webpage-only-scrolls-once-using-selenium-despite-new-content-loading | I'm trying to scrape URLs from a dynamically loaded webpage that requires continuous scrolling to load all the content into the DOM. My approach involves running window.scrollTo(0, document.body.scrollHeight); in a loop using Selenium's execute_script function. After each scroll, I compare the number of URLs loaded before and after the scroll. If the number of URLs doesn't change, I assume the end of the page has been reached and break the loop. However, the script assumes that all content has been loaded into the DOM, even though I know new content is being loaded within the given timeout. Below is my code: def _scroll_page_to_bottom(self, timeout: int): # Todo: Fix Bugs while True: urls_before_scroll = self.browser.find_elements( By.XPATH, read_xpath(self.scrape_programs_urls.__name__, "programs_urls") ) self.browser.execute_script("window.scrollTo(0, document.body.scrollHeight);") # Wait for new content to be loaded try: WebDriverWait(self.browser, timeout).until( lambda _: len(self.browser.find_elements( By.XPATH, read_xpath(self.scrape_programs_urls.__name__, "programs_urls")) ) > len(urls_before_scroll) ) except TimeoutException: # If no new content is loaded within the timeout, assume we've reached the end of the page break Can anyone please guess what could be causing the issue in the above code? Edit: I did some debugging and found the issue is specifically related to the scroll functionality. When I execute window.scrollTo(0, document.body.scrollHeight); in the browser console, the page doesn't get scrolled to the bottom either, which explains why my code is not working. The site I am trying to scrape is https://hackerone.com/opportunities/all/search | The code below works well for scrolling down the page; try to embed it into your code: ele = driver.find_element(By.XPATH, '//div[contains(@class,"Pane-module_u1-pane__content")]') driver.execute_script('arguments[0].scrollIntoView(false);', ele) | 2 | 1
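A sketch of how the answer's two lines could be folded back into the question's loop, assuming the pane class-name fragment from the answer still matches the page and that programs_urls_xpath is the XPath the question builds via read_xpath.

```python
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

def scroll_pane_to_bottom(browser, timeout, programs_urls_xpath):
    pane_xpath = '//div[contains(@class,"Pane-module_u1-pane__content")]'
    while True:
        before = len(browser.find_elements(By.XPATH, programs_urls_xpath))
        # Scroll the inner scrollable pane instead of the window.
        pane = browser.find_element(By.XPATH, pane_xpath)
        browser.execute_script('arguments[0].scrollIntoView(false);', pane)
        try:
            WebDriverWait(browser, timeout).until(
                lambda d: len(d.find_elements(By.XPATH, programs_urls_xpath)) > before
            )
        except TimeoutException:
            break  # no new URLs appeared within the timeout
```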
78,860,938 | 2024-8-12 | https://stackoverflow.com/questions/78860938/why-doesnt-the-gekko-solver-adapt-to-variations-in-the-system | The following is related to this question : predictive control model using GEKKO I am still traying applying MPC to maintain the temperature of a room within a defined range(16,18), as Professor @John explained last time.the gain is normally listed as K=array([[0.93705481,β12.24012156]]). Thus, an increase in Ξ² by +1 leads to a decrease in T by -12.24, while an increase in Text by +1 leads to an increase in T by +0.937. In my case, I tried to implement an external temperature profile for a day (sinusoidal), and I modified the control to vary between 0 and 1, not just 0 or 1, since the equipment can operate within this range. I also adjusted m.bint.MV_STEP_HOR=1 so that the control is updated at each iteration. However, the control still does not react! Even when the external temperature increases or decreases, the control remains unchanged. I run the corrected code from my previous question: # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array total_minutes_in_day = 24 * 60 interval = 5 # minutes num_points = total_minutes_in_day // interval # GΓ©nΓ©rer les points de temps time_points = np.linspace(0, total_minutes_in_day, num_points) temperature = 8.5 + 11.5 * np.sin((2 * np.pi / total_minutes_in_day) * time_points - (np.pi / 2)) temperature = np.clip(temperature, -3, 20) plt.plot(time_points, temperature) plt.xlabel('Temps (minutes)') plt.ylabel('TempΓ©rature (Β°C)') plt.title('Variation de Tex sur une journΓ©e') plt.savefig('Tex pour une jrn') plt.grid(True) plt.show() K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} # i have used only 200 mes of T_externel T_externel = temperature m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] m.free(m.beta) m.bint = m.MV(0,lb=0,ub=1) m.Equation(m.beta==m.bint) # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.bint.STATUS = 1 # calculated by the optimizer m.bint.FSTATUS = 0 # use measured value m.bint.DCOST = 0.0 # Delta cost penalty for MV movement m.bint.UPPER = 1.0 # Upper bound m.bint.LOWER = 0.0 # Lower bound m.bint.MV_STEP_HOR = 1 m.bint.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 18 # set point high level m.Tb.SPLO = 16 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 20 # Temperature starts at 17 
m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 1 m.solve(disp=False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 | The solution that you showed was able to find a solution with a single step to keep within the temperature range (17-19). Tightening the temperature range (16.5-17.5) shows additional movement of the MV over the time period and shows that it is responsive. Unless there is a violation of this temperature range, the controller does not take action if there is even a small DCOST value to move the MV. The dynamics influence the speed of the response. A better insulated building will have a slower response to changes in the external temperature. One other suggestion is that the bint Manipulated Variable isn't needed when solving for continuous instead of integer solutions. Here is the complete code with bint removed and the tighter temperature target range: # Import library import numpy as np import pandas as pd import time from gekko import GEKKO from numpy import array import matplotlib.pyplot as plt total_minutes_in_day = 24 * 60 interval = 5 # minutes num_points = total_minutes_in_day // interval # GΓ©nΓ©rer les points de temps time_points = np.linspace(0, total_minutes_in_day, num_points) temperature = 8.5 + 11.5 * np.sin((2 * np.pi / total_minutes_in_day) * time_points - (np.pi / 2)) temperature = np.clip(temperature, -3, 20) K = array([[ 0.93705481, -12.24012156]]) p = {'a': array([[ 1.08945247], [-0.00242145], [-0.00245978], [-0.00272713], [-0.00295845], [-0.00319119], [-0.00343511], [-0.00366243], [-0.00394247], [-0.06665054]]), 'b': array([[[-0.05160235, -0.01039767], [ 0.00173511, -0.01552485], [ 0.00174602, -0.01179519], [ 0.00180031, -0.01052658], [ 0.00186416, -0.00822121], [ 0.00193947, -0.00570905], [ 0.00202877, -0.00344507], [ 0.00211395, -0.00146947], [ 0.00223514, 0.00021945], [ 0.03800987, 0.04243736]]]), 'c': array([0.0265903])} T_externel = temperature m = GEKKO(remote=False) m.y = m.Array(m.CV,1) m.u = m.Array(m.MV,2) m.arx(p,m.y,m.u) # rename CVs m.T = m.y[0] # rename MVs m.beta = m.u[1] # distrubance m.d = m.u[0] # distrubance and parametres m.d = m.Param(T_externel[0]) m.bias = m.Param(0) m.Tb = m.CV() m.Equation(m.Tb==m.T+m.bias) # steady state initialization m.options.IMODE = 1 m.solve(disp=False) # set up MPC m.d.value = T_externel m.options.IMODE = 6 # MPC m.options.CV_TYPE = 1 # the objective is an l1-norm (region) m.options.NODES = 2 # Collocation nodes m.options.SOLVER = 3 # IPOPT m.time = np.arange(0,len(T_externel)*300,300) # step time = 300s # Manipulated variables m.beta.STATUS = 1 # calculated by the optimizer m.beta.FSTATUS = 0 # use measured value m.beta.DCOST = 0.0 # Delta cost penalty for MV movement m.beta.UPPER = 1.0 # Upper bound m.beta.LOWER = 0.0 # Lower bound m.beta.MV_STEP_HOR = 1 m.beta.value = 0 # Controlled variables m.Tb.STATUS = 1 # drive to set point m.Tb.FSTATUS = 0 # receive measurement m.Tb.SPHI = 17.5 # set point high level m.Tb.SPLO = 16.5 # set point low level m.Tb.WSPHI = 100 # set point high priority m.Tb.WSPLO = 100 # set point low priority T_MEAS = 20 m.Tb.value = T_MEAS m.bias.value = T_MEAS - m.T.value[0] m.options.SOLVER = 3 m.solve(disp=False) if m.options.APPSTATUS == 1: # Retrieve new values beta = m.beta.NEWVAL else: # Solution failed beta = 0.0 # Plot the results plt.figure(figsize=(8,3.5)) plt.subplot(2,1,1) plt.plot(m.time,m.Tb.value,'r-',label=r'$T_{biased}$') plt.plot(m.time,m.T.value,'r--',label=r'$T_{unbiased}$') 
plt.plot(m.time,m.d.value,'g:',label=r'$T_{ext}$') plt.plot([0,m.time[-1]],[m.Tb.SPHI,m.Tb.SPHI],'k--',label='Upper Bound') plt.plot([0,m.time[-1]],[m.Tb.SPLO,m.Tb.SPLO],'k--',label='Lower Bound') plt.ylabel('Temperature (Β°C)') plt.legend(loc=1); plt.grid() plt.subplot(2,1,2) plt.step(m.time,m.beta.value,'b--',label=r'$\beta$') plt.ylabel('optimal control') plt.xlabel('Time (sec)') plt.legend(loc=1); plt.grid() plt.savefig('results.png',dpi=300) plt.show() | 2 | 1 |
78,850,049 | 2024-8-8 | https://stackoverflow.com/questions/78850049/pyinstaller-hidden-import-ffmpeg-python-not-found | Trying to convert Python scripts to exe with PyInstaller. In my code, I use ffmpeg-python: import ffmpeg .... def ffmpeg_save_clip(self,output_video: str, clip_start: str, clip_end: str): (ffmpeg .input(self.file.get_videopath(), ) .output(output_video, vcodec='copy', ss=clip_start, to=clip_end, acodec='copy') .global_args('-y') .run()) So Ii call PyInstaller from terminal with related flag: pyinstaller --windowed --hidden-import "ffmpeg-python" --noconsole --icon=app.ico --name "app" main.py I checked also: pip install ffmpeg-python Requirement already satisfied: ffmpeg-python in c:\python38\lib\site-packages (0.2.0) Requirement already satisfied: future in c:\python38\lib\site-packages (from ffmpeg-python) (0.18.3) But I get from PyInstaller: Hidden import 'ffmpeg-python' not found App works in visual-studio, but when I run pyinstaller exe and try to save clip, it freeze without response. Note: I also: try to add ffmpeg.exe into root folder of pyinstaller app. try to use .spec file with: binaries=[('C:\\ffmpeg\\bin\\ffmpeg.exe', 'ffmpeg/),('C:\\ffmpeg\\bin\\ffplay.exe','ffplay/'), ('C:\\ffmpeg\\bin\\ffprobe.exe', 'ffprobe/')], Nothing changes UPDATE Tank you @happy_code_egg, you explained me such a things. I tried what you said and I don't have error any more. But I can't make work ffmpeg anywhere on Pyinstaller exe. I added a try/except block to isolate problem (when I launch app on Visualstudio it works, but not when use exe). block is (very symilar to past def): def ffmpeg_save_clip(self,output_video: str, clip_start: str, clip_end: str): input = 'C:\Users\x\Desktop\input.mp4' output = 'C:\Users\x\Desktop\output.mp4' try: (ffmpeg .input(input) .output(output, vcodec='copy', ss=clip_start, to=clip_end, acodec='copy') .global_args('-y') .run(capture_stdout=True)) except Exception as e: self.logger.error('Video window -> ffmpeg_save_clip -> error ' + str(e)) self.logger.error('Video window -> ffmpeg_save_clip -> input file: ' + input) self.logger.error('Video window -> ffmpeg_save_clip -> output file ' + output) raise ValueError(str(e)) Note: (input and output fixed are only as simple path examples) When I open log I have: 2024-08-09 11:28:50,293 - ERROR - Video window -> ffmpeg_save_clip -> error [WinError 2] File not found error 2024-08-09 11:28:50,293 - ERROR - Video window -> ffmpeg_save_clip -> input file: C:\Users\x\Desktop\input.mp4 2024-08-09 11:28:50,293 - ERROR - Video window -> ffmpeg_save_clip -> output file C:\Users\x\Desktop\ouput.mp4 (I tried with '/' and also '\' to create path) File paths are correct and, on Visualstudio, they can be processed by ffmpeg (no error), but the exe seems to not find (input or output) I tried solution described here: Python ffmpeg won't accept path (without effects) I found also these discussions: github ffmpeg-python issue1 github ffmpeg-python issue2 in both discussions seems to be a problem with ffmpeg + ffmpeg-python libs cohexistence. I tried (as said) to uninstal both and reinstall ffmpeg-python, but nothing change. Note: I've also three other warning during Pyinstaller: (I write them only to complete context, but i think they have no repercussions on this problem.) 57283 ERROR: Hidden import 'fiona._shim' not found 57298 WARNING: Hidden import "fiona._shim" not found! 57392 WARNING: Hidden import "importlib_resources.trees" not found! 
tried with: --hidden-import fiona._shim --hidden-import fiona.schema tried also with: --add-data="fiona/*;fiona" but the warnings still remain. | For anyone who encountered this issue, I finally found a way to make it work. Thank you to @happy_code_egg for the support (and the clear explanations). To solve the ffmpeg-python inclusion in PyInstaller, follow what @happy_code_egg said. Anyway (as I wrote in the update section of my question), one problem remained: when using an ffmpeg-python command in the PyInstaller .exe I had: error [WinError 2] The system cannot find the file specified It did not refer to the video file, but to ffmpeg.exe itself (the app was not able to find ffmpeg.exe), as also said by @happy_code_egg in a comment. To make it work you can use the cmd flag so ffmpeg-python can find ffmpeg.exe: (ffmpeg ... ... .run(overwrite_output=True, cmd=r'c:\FFmpeg\bin\ffmpeg.exe') Adding the cmd flag, it works! To avoid installing ffmpeg on the host computer (every host computer that will use the PyInstaller .exe), put ffmpeg.exe into the root directory of the main Python script and set: rel_ffmpeg_path = os.path.dirname(__file__) + '\\ffmpeg.exe' ... (ffmpeg ... ... .run(overwrite_output=True, cmd=rel_ffmpeg_path) Hope this will help. | 4 | 1
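One caveat worth sketching on top of the accepted fix: in a one-file PyInstaller build, __file__ resolves inside the temporary extraction directory, so the bundled binary is normally located via sys._MEIPASS. The --add-binary spelling below is the Windows form and assumes ffmpeg.exe sits next to main.py at build time.

```python
# Assumed build step: pyinstaller --add-binary "ffmpeg.exe;." main.py
import os
import sys

def bundled_ffmpeg_path():
    # One-file builds unpack bundled files to sys._MEIPASS; fall back to the
    # script directory for one-folder builds or plain "python main.py" runs.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, "ffmpeg.exe")
```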
78,867,121 | 2024-8-13 | https://stackoverflow.com/questions/78867121/fill-several-polars-columns-with-a-constant-value | I am working with the following code... import polars as pl df = pl.DataFrame({ 'region': ['GB', 'FR', 'US'], 'qty': [3, 6, -8], 'price': [100, 102, 95], 'tenor': ['1Y', '6M', '2Y'], }) cols_to_set = ['price', 'tenor'] fill_val = '-' df.with_columns([pl.lit(fill_val).alias(c) for c in cols_to_set]) ...with the following output. shape: (3, 4) ┌────────┬─────┬───────┬───────┐ │ region │ qty │ price │ tenor │ │ --- │ --- │ --- │ --- │ │ str │ i64 │ str │ str │ ╞════════╪═════╪═══════╪═══════╡ │ GB │ 3 │ - │ - │ │ FR │ 6 │ - │ - │ │ US │ -8 │ - │ - │ └────────┴─────┴───────┴───────┘ Instead of using a list of pl.lit expressions, I thought I could use a single pl.lit(fill_val).alias(cols_to_set). However, this crashes with an error TypeError: argument 'name': 'list' object cannot be converted to 'PyString' Is there a way to simplify the above and set all columns in cols_to_set to a specific constant value fill_val? | you could use replace_strict(): df.with_columns(pl.col(cols_to_set).replace_strict(None, None, default = '-')) ┌────────┬─────┬───────┬───────┐ │ region │ qty │ price │ tenor │ │ --- │ --- │ --- │ --- │ │ str │ i64 │ str │ str │ ╞════════╪═════╪═══════╪═══════╡ │ GB │ 3 │ - │ - │ │ FR │ 6 │ - │ - │ │ US │ -8 │ - │ - │ └────────┴─────┴───────┴───────┘ | 3 | 3
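Another way the same fill could be written, keeping pl.lit but dropping the per-column alias calls; this assumes a polars version recent enough for keyword expressions in with_columns and should give the same result as the question's list comprehension.

```python
import polars as pl

df = df.with_columns(**{c: pl.lit(fill_val) for c in cols_to_set})
```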
78,845,218 | 2024-8-7 | https://stackoverflow.com/questions/78845218/fastapi-testclient-not-starting-lifetime-in-test | Example code: import os import asyncio from contextlib import asynccontextmanager from fastapi import FastAPI, Request @asynccontextmanager async def lifespan(app: FastAPI): print(f'Lifetime ON {os.getpid()=}') app.state.global_rw = 0 _ = asyncio.create_task(infinite_1(app.state), name='my_task') yield app = FastAPI(lifespan=lifespan) @app.get("/state/") async def inc(request: Request): return {'rw': request.app.state.global_rw} async def infinite_1(app_rw_state): print('infinite_1 ON') while True: app_rw_state.global_rw += 1 print(f'infinite_1 {app_rw_state.global_rw=}') await asyncio.sleep(10) This is all working fine, every 10 seconds app.state.global_rw is increased by one. Test code: from fastapi.testclient import TestClient def test_all(): from a_10_code import app client = TestClient(app) response = client.get("/state/") assert response.status_code == 200 assert response.json() == {'rw': 0} Problem that I have found is that TestClient(app) will not start async def lifespan(app: FastAPI):. Started with pytest -s a_10_test.py So, how to start lifespan in FastAPI TestClient ? P.S. my real code is more complex, this is just simple example for demonstration purposes. | The main reason that the written test fails is that it doesn't handle the asynchronous nature of the FastAPI app's lifespan context properly. In fact, the global_rw is not set due to improper initialization. If you don't want to utilize an AsyncClient like the one by httpx you can use pytest_asyncio and the async fixture, ensuring that the FastAPI app's lifespan context correctly works and global_rw is initialized properly. Here's the workaround: import pytest_asyncio import pytest import asyncio from fastapi.testclient import TestClient from fastapi_lifespan import app @pytest_asyncio.fixture(scope="module") def client(): with TestClient(app) as client: yield client @pytest.mark.asyncio async def test_state(client): response = client.get("/state/") assert response.status_code == 200 assert response.json() == {"rw": 1} await asyncio.sleep(11) response = client.get("/state/") assert response.status_code == 200 assert response.json() == {'rw': 2} You can also define a conftest.py to place the fixture there to have a clean test files. | 2 | 1 |
78,865,667 | 2024-8-13 | https://stackoverflow.com/questions/78865667/how-to-draw-a-rectangle-with-one-side-in-matplotlib | I want to draw a rectangle in matplotlib and I want only the top edge to show. I tried to draw a line on top of the rectangle to make it work, but I was not satisfied with the result. Here's my code: import matplotlib.pyplot as plt from matplotlib.axes import Axes from matplotlib.figure import Figure from matplotlib.patches import Rectangle LIM = 5 def configure_plot(lim: float, **kwargs) -> tuple[Figure, Axes]: fig: Figure ax: Axes fig, ax = plt.subplots(**kwargs) ax.axis("equal") ax.set_axis_off() ax.set_xlim(-lim, lim) ax.set_ylim(-lim, lim) fig.tight_layout() return fig, ax def main() -> None: fig: Figure ax: Axes fig, ax = configure_plot(LIM) width = 8 height = 3.5 color = "tab:blue" rect = Rectangle( (-width / 2, -height), width, height, facecolor=color, alpha=0.8, linewidth=3, ) ax.add_patch(rect) ax.plot([-width / 2, width / 2], [0, 0], color=color) plt.show() if __name__ == "__main__": main() This gives me this output: It's almost the result I am looking for, but the line of the edge continues slightly beyond the rectangle's corners which makes it look worse in my opinion. I could play with the line end points to shorten it and make it fit, but this doesn't sound like an ideal solution. Is there a way to draw a rectangle and only one of its edges in matplotlib? | You can set the CapStyle for the ax.plot line to butt (instead of the default, which is projecting). You can use the solid_capstyle kwarg for this. To demonstrate, I increased the line width to make the differences more obvious: solid_capstyle='projecting' ax.plot([-width / 2, width / 2], [0, 0], color=color, solid_capstyle='projecting', linewidth=5) solid_capstyle='butt' ax.plot([-width / 2, width / 2], [0, 0], color=color, solid_capstyle='butt', linewidth=5) | 1 | 4 |
78,863,932 | 2024-8-12 | https://stackoverflow.com/questions/78863932/runtimeerror-numpy-is-not-available-transformers | I basically just want to use the transformers pipeline() to classify data, but independent of which model I try to use, it returns the same error, stating Numpy is not available Code I'm running: pipe = pipeline("text-classification", model="AdamLucek/roberta-llama3.1405B-twitter-sentiment") sentiment_pipeline('Today is a great day!') # other model i've tried: sentiment_pipeline = pipeline(model="cardiffnlp/twitter-roberta-base-sentiment-latest", tokenizer="cardiffnlp/twitter-roberta-base-sentiment-latest") sentiment_pipeline('Today is a great day!') Error I receive: RuntimeError Traceback (most recent call last) Cell In[49], line 1 ----> 1 sentiment_pipeline('Today is a great day!') File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\text_classification.py:156, in TextClassificationPipeline.__call__(self, inputs, **kwargs) 122 """ 123 Classify the text(s) given as inputs. 124 (...) 153 If `top_k` is used, one such dictionary is returned per label. 154 """ 155 inputs = (inputs,) --> 156 result = super().__call__(*inputs, **kwargs) 157 # TODO try and retrieve it in a nicer way from _sanitize_parameters. 158 _legacy = "top_k" not in kwargs File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\base.py:1257, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1249 return next( 1250 iter( 1251 self.get_iterator( (...) 1254 ) 1255 ) 1256 else: -> 1257 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\base.py:1265, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params) 1263 model_inputs = self.preprocess(inputs, **preprocess_params) 1264 model_outputs = self.forward(model_inputs, **forward_params) -> 1265 outputs = self.postprocess(model_outputs, **postprocess_params) 1266 return outputs File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\pipelines\text_classification.py:208, in TextClassificationPipeline.postprocess(self, model_outputs, function_to_apply, top_k, _legacy) 204 outputs = model_outputs["logits"][0] 206 if self.framework == "pt": 207 # To enable using fp16 and bf16 --> 208 outputs = outputs.float().numpy() 209 else: 210 outputs = outputs.numpy() RuntimeError: Numpy is not available I already tried simply un- and reinstalling transformers and numpy and for both the most recent versions are installed (and should be compatible). Anyone has an idea on how to solve this? | Try: pip install "numpy<2" then restart the kernel. | 3 | 4 |
78,859,203 | 2024-8-11 | https://stackoverflow.com/questions/78859203/python-sqlite3-dump-source-to-mariadb-unknown-collation-utf8mb4-0900-ai-ci | I'm following Head First Python (3ed). I'm at the last chapter and have a bug I can't get past. I've got an sqlite3 database that I need to port to MariaDB. I've got the schema and data in separate files: sqlite3 CoachDB.sqlite3 .schema > schema.sql sqlite3 CoachDB.sqlite3 '.dump swimmers events times --data-only' > data.sql I've installed MariaDB via apt, granted privileges to a user, and logged in to get MariaDB prompt: mariadb -u swimuser -p swimDB source schema.sql; source data.sql; Now I run my python app locally that connects to the MariaDB but, and I'm not sure where exactly, the call fails with: mysql.connector.errors.DatabaseError: 1273 (HY000): Unknown collation: 'utf8mb4_0900_ai_ci' Back at MariaDB prompt: show collation like 'utf8%'; # shows no collation 'utf8mb4_0900_ai_ci' select * from INFORMATION_SCHEMA.SCHEMATA; # shows default is is utf8mb4 / utf8mb4_uca1400_ai_ci At shell prompt, file -i schema.sql shows it's plain-text in us-ascii. I tried opening schema.sql and data.sql in notepadqq and saving them as utf8 but I still got the same error. I dropped the database and recreated it with: CREATE DATABASE swimDB DEFAULT CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_unicode_520_ci; ...then again sourced schema.sql and data.sql and still got the same error. I saw a post where someone asked for this so... MariaDB [swimDB]> SHOW VARIABLES WHERE VALUE LIKE 'utf%'; +--------------------------+-------------------------------+ | Variable_name | Value | +--------------------------+-------------------------------+ | character_set_client | utf8mb3 | | character_set_collations | utf8mb4=utf8mb4_uca1400_ai_ci | | character_set_connection | utf8mb3 | | character_set_database | utf8mb4 | | character_set_results | utf8mb3 | | character_set_server | utf8mb4 | | character_set_system | utf8mb3 | | collation_connection | utf8mb3_general_ci | | collation_database | utf8mb4_uca1400_ai_ci | | collation_server | utf8mb4_uca1400_ai_ci | | old_mode | UTF8_IS_UTF8MB3 | +--------------------------+-------------------------------+ So I guess there's a character set or encoding problem with the data (?). At this point I'm lost, madly searching the interwebs for clues. Any help appreciated and sorry for the long post :) | MySQL Connector appears to be forcing a collation that does not exist in MariaDB (utf8mb4_0900_ai_ci). Per comments it appears that attempting to force a later MariaDB collation utf8mb4_uca1400_ai_ci on the connection appears to not resolve the connection. MariaDB Connector/Python its actually tested with MariaDB server. Conforming to the Python Database API Specification v2.0 (PEP 249) it should be compatible with the MySQL (Python) Connector as a replacement connection module for your MariaDB server and application. | 3 | 2 |
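A minimal connection sketch using MariaDB Connector/Python (pip install mariadb) as the drop-in the answer suggests; the credentials are placeholders and the swimmers table name is taken from the question's dump command.

```python
import mariadb

conn = mariadb.connect(
    user="swimuser",
    password="your-password",  # placeholder
    host="localhost",
    database="swimDB",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM swimmers")
print(cur.fetchone()[0])
conn.close()
```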
78,863,540 | 2024-8-12 | https://stackoverflow.com/questions/78863540/force-pyarrow-table-write-to-ignore-null-type-and-use-original-schema-type-for-a | I have this piece of code that appends two parts of the same data to a PyArrow table. The second write fails because the column gets assigned null type. I understand why it is doing that. Is there a way to force it to use the type in the table's schema, and not use the inferred one from the data in second write? import pandas as pd import pyarrow as pa import pyarrow.parquet as pq data = { 'col1': ['A', 'A', 'A', 'B', 'B'], 'col2': [0, 1, 2, 1, 2] } df1 = pd.DataFrame(data) df1['col3'] = 1 df2 = df1.copy() df2['col3'] = pd.NA pat1 = pa.Table.from_pandas(df1) pat2 = pa.Table.from_pandas(df2) writer = pq.ParquetWriter('junk.parquet', pat1.schema) writer.write_table(pat1) writer.write_table(pat2) My error on second write above: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 1094, in write_table raise ValueError(msg) ValueError: Table schema does not match schema used to create file: table: col1: string col2: int64 col3: null -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 578 vs. file: col1: string col2: int64 col3: int64 -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 577 | The problem is that the assignment of pd.NA leads to the incorrect dtype (object): df2 = df1.copy() df2['col3'] = pd.NA print(df2.dtypes) col1 object col2 int64 col3 object dtype: object Simply change it to Int64 first, using Series.astype: df2['col3'] = pd.NA df2['col3'] = df2['col3'].astype('Int64') Or in one statement, using pd.Series: df2['col3'] = pd.Series(pd.NA, dtype='Int64') Both leading to: pat2.schema col1: string col2: int64 col3: int64 -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 577 pat1.schema == pat2.schema # True | 2 | 2 |
78,863,088 | 2024-8-12 | https://stackoverflow.com/questions/78863088/dataframe-multi-columns-condition-with-groupby | I need to set a flag to 1 for every employee in the group (groupby MNGR, YEAR) if any of the employees in the group has any of the columns V1 or V2 or V3 or V4 is greater than 14. I can make it work if only checking 1 column but the V columns can be more than 10 columns. I have this code: DF1['flg'] = [1 for i in range(1, 5) if DF1[f'V{i}'].gt(14).any().groupby(by=['MNGR', 'YEAR']).replace({True: 1, False: 0}).reindex_like(DF1) ] But I got error: AttributeError: 'numpy.bool_' object has no attribute 'groupby' sample dataframes DF1 = pd.DataFrame({'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'MNGR': ['BOB', 'JIM', 'RHONDA', 'RHONDA', 'JIM', 'RHONDA', 'RHONDA'], 'YEAR': [2012, 2013, 2012, 2012, 2012, 2013, 2012], 'V1': [1,2,3,4,5,6,7], 'V2': [2,3,4,50,6,7,8], 'V3': [3,3,3,3,3,3,3], 'V4': [7,15,8,9,10,11,12]}) Expected output: EMPLID MNGR YEAR V1 V2 V3 V4 flg 0 12 BOB 2012 1 2 3 7 NaN 1 13 JIM 2013 2 3 3 15 1 2 14 RHONDA 2012 3 4 3 8 1 3 15 RHONDA 2012 4 50 3 9 1 4 16 JIM 2012 5 6 3 10 NaN 5 17 RHONDA 2013 6 7 3 11 NaN 6 18 RHONDA 2012 7 8 3 12 1 | You can break this transformation into two discrete steps: Find the rows where ANY V{1..4} columns are greater than 14 transform the groups ['MNGR', 'YEAR'] where ANY row meets the above import pandas as pd import numpy as np df = pd.DataFrame({ 'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'MNGR': ['BOB', 'JIM', 'RHONDA', 'RHONDA', 'JIM', 'RHONDA', 'RHONDA'], 'YEAR': [2012, 2013, 2012, 2012, 2012, 2013, 2012], 'V1': [1,2,3,4,5,6,7], 'V2': [2,3,4,50,6,7,8], 'V3': [3,3,3,3,3,3,3], 'V4': [7,15,8,9,10,11,12] }) df['flag'] = ( df.filter(regex=r'^V.+').gt(14).any(axis=1) # 1. .groupby([df['MNGR'], df['YEAR']]).transform('any') # 2. .astype(int) ) print(df) # EMPLID MNGR YEAR V1 V2 V3 V4 flag # 0 12 BOB 2012 1 2 3 7 0 # 1 13 JIM 2013 2 3 3 15 1 # 2 14 RHONDA 2012 3 4 3 8 1 # 3 15 RHONDA 2012 4 50 3 9 1 # 4 16 JIM 2012 5 6 3 10 0 # 5 17 RHONDA 2013 6 7 3 11 0 # 6 18 RHONDA 2012 7 8 3 12 1 | 2 | 4 |
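If you want the 1/NaN flag shown in the question's expected output rather than 1/0, a small variation of the same mask (a sketch reusing the `df` and imports above) is:

```python
# Same group-wise "any V column > 14" mask, but flag as 1 / NaN instead of 1 / 0.
mask = (
    df.filter(regex=r'^V.+').gt(14).any(axis=1)
      .groupby([df['MNGR'], df['YEAR']]).transform('any')
)
df['flg'] = np.where(mask, 1, np.nan)
```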
78,855,983 | 2024-8-10 | https://stackoverflow.com/questions/78855983/transform-an-exploded-data-frame-into-a-deeply-nested-dictionary-with-headers | The function I am using to convert my data frame into a nested dictionary strips the column names from the hierarchy, making the dictionary diffictult to naviagte. I have a large dataframe that looks similar to ths: exploded_df = pd.DataFrame({ 'school_code': [1, 1, 1, 1, 2, 2, 2, 2], 'school_name': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'], 'district_code': [10, 10, 10, 10, 20, 20, 20, 20], 'year': [2022, 2022, 2023, 2023, 2022, 2022, 2023, 2023], 'source': ['S1', 'S2', 'S1', 'S2','S1', 'S2', 'S1', 'S2'], 'enrollment_measure_name': ['M1', 'M2', 'M1', 'M2','M1', 'M2', 'M1', 'M2'], 'value': [100, 150, 120, 170, 100, 150, 90, 100] }) I have been trying to use the following function and several variations. `def frame_to_nested_dict(exploded_df, levels): if not levels: return exploded_df.groupby('enrollment_measure_name')['value'].apply(list).to_dict() # Group by the highest level of keys and recursively build the nested dictionary level = levels[0] exploded_df = exploded_df.dropna(subset=[level]) grouped = exploded_df.groupby(level) return { key: frame_to_nested_dict(group, levels[1:]) for key, group in grouped }` with levels = ['school_code', 'school_name', 'district_code', 'year', 'source'] frame_to_nested_dict(exploded_df, levels) output: {1: {'A': {10: {2022: {'S1': {'M1': [100]}, 'S2': {'M2': [150]}}, 2023: {'S1': {'M1': [120]}, 'S2': {'M2': [170]}}}}}, 2: {'B': {20: {2022: {'S1': {'M1': [100]}, 'S2': {'M2': [150]}}, 2023: {'S1': {'M1': [90]}, 'S2': {'M2': [100]}}}}}} The desired output would be: { 'school_code': { 1: { 'school_name': { 'A': { 'district_code': { 10: { 'year': { 2022: { 'source': { 'S1': { 'enrollment_measure_name': { 'M1': {'value': [100]} } }, 'S2': { 'enrollment_measure_name': { 'M2': {'value': [150]} } } } }, 2023: { 'source': { 'S1': { 'enrollment_measure_name': { 'M1': {'value': [120]} } }, 'S2': { 'enrollment_measure_name': { 'M2': {'value': [170]} } } } } } } } } } }, 2: { 'school_name': { 'B': { 'district_code': { 20: { 'year': { 2022: { 'source': { 'S1': { 'enrollment_measure_name': { 'M1': {'value': [100]} } }, 'S2': { 'enrollment_measure_name': { 'M2': {'value': [150]} } } } }, 2023: { 'source': { 'S1': { 'enrollment_measure_name': { 'M1': {'value': [90]} } }, 'S2': { 'enrollment_measure_name': { 'M2': {'value': [100]} } } } } } } } } } } } } | I found the following solution def fold_dataframe(df): columns = df.columns.tolist() if len(columns) < 2: return df columns = df.columns.tolist() grouped = df.groupby(columns[:-1], as_index=False).agg({columns[-1]: list}) grouped[grouped.columns[-1]] = grouped.apply( lambda row: {columns[-1]: row[columns[-1]]}, axis=1) grouped[grouped.columns[-2]] = grouped.apply( lambda row: {row[columns[-2]]: row[columns[-1]]}, axis=1) grouped.drop(columns=[grouped.columns[-1]], inplace=True) return grouped def fold_dataframe_loop(df): original_columns = df.columns.tolist() for _ in range(len(original_columns)): df = fold_dataframe(df) return df The first function takes the values of the column in position -1 (far right column), groups them, then sets them as a dictionary with the key being the column name and the values being the row values of the grouped column. The next step is to append the values from the updated column to the values of the column in the next position. 
Again setting the data as a dictionary with key of the column name in position -2 and the values from the data frame row in column -1. The second function then loops through the first function for each column until there is only one column. The resulting structure is technically still a data frame, but it suits my needs and could easily be moved into a true dictionary structure from this point. | 2 | 0 |
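For comparison, a recursive sketch that produces exactly the nested shape shown in the question — with the column name inserted at every level — could look like this; the function and argument names are my own, and it reuses the `exploded_df` and level order from the question.

```python
# Wrap every grouping level in a {column_name: {key: ...}} layer so the
# headers survive in the nested dictionary; leaves become {'value': [...]}.
def nest_with_headers(df, levels, value_col='value'):
    if not levels:
        return {value_col: df[value_col].tolist()}
    level = levels[0]
    return {
        level: {
            key: nest_with_headers(group, levels[1:], value_col)
            for key, group in df.groupby(level)
        }
    }

levels = ['school_code', 'school_name', 'district_code',
          'year', 'source', 'enrollment_measure_name']
nested = nest_with_headers(exploded_df, levels)
```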
78,861,446 | 2024-8-12 | https://stackoverflow.com/questions/78861446/scrape-the-latitude-and-longitude-from-the-website | I want to convert a list of zip codes into a DataFrame of latitude and longitude using data from this website: Free Map Tools. https://www.freemaptools.com/convert-us-zip-code-to-lat-lng.htm#google_vignette Hereβs my code, but itβs not returning the latitude and longitude data. How can I improve it ? import requests from bs4 import BeautifulSoup def get_lat_lng(zip_code): # URL of the form processing page url = 'https://www.freemaptools.com/convert-us-zip-code-to-lat-lng.htm' # Create a session to handle cookies and headers session = requests.Session() # Send a GET request to get the initial form and any hidden data response = session.get(url) response.raise_for_status() # Parse the page with BeautifulSoup soup = BeautifulSoup(response.text, 'html.parser') # Find form data (if needed) # Note: The actual form data extraction depends on how the website is structured # For simplicity, assume there's no hidden form data to worry about # Prepare the data to send in the POST request data = { 'zip': zip_code # Include any other required form fields here if necessary } # Send a POST request with the zip code data response = session.post(url, data=data) response.raise_for_status() # Parse the resulting page soup = BeautifulSoup(response.text, 'html.parser') # Extract latitude and longitude (you need to adjust these selectors based on the website's structure) lat = soup.find('span', {'id': 'latitude'}).text.strip() lng = soup.find('span', {'id': 'longitude'}).text.strip() return lat, lng # Example zip_code = ['97048','63640','63628'] latitude, longitude = get_lat_lng(zip_code) print(f'Latitude: {latitude}, Longitude: {longitude}') query latitude and longitude data from https://www.freemaptools.com/convert-us-zip-code-to-lat-lng.htm#google_vignette Querying a list of zip code,i.e. ['97048','63640','63628'], and obtain the latitude and longitude for each zip. It results in error message. | Try: import requests api_url = ( "https://api.promaptools.com/service/us/zip-lat-lng/get/?zip={}&key=17o8dysaCDrgvlc" ) zips = ["97048", "63640", "63628"] headers = { "Origin": "https://www.freemaptools.com", } for z in zips: url = api_url.format(z) data = requests.get(url, headers=headers).json() print(z, data) Prints: 97048 {'status': 1, 'output': [{'zip': '97048', 'latitude': '46.053228', 'longitude': '-122.971330'}]} 63640 {'status': 1, 'output': [{'zip': '63640', 'latitude': '37.747435', 'longitude': '-90.363484'}]} 63628 {'status': 1, 'output': [{'zip': '63628', 'latitude': '37.942778', 'longitude': '-90.484430'}]} | 2 | 1 |
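Since the original goal was a DataFrame of latitudes and longitudes, the answer's loop can also collect the API's `output` records and hand them to pandas. This is a sketch that reuses `api_url`, `zips` and `headers` from the answer and assumes the response shape shown in its printed output:

```python
import pandas as pd
import requests

# Build a DataFrame of zip / latitude / longitude from the same API responses.
rows = []
for z in zips:
    data = requests.get(api_url.format(z), headers=headers).json()
    if data.get("status") == 1:        # skip zips the API could not resolve
        rows.extend(data["output"])    # each item: {'zip', 'latitude', 'longitude'}

coords_df = pd.DataFrame(rows, columns=["zip", "latitude", "longitude"])
print(coords_df)
```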