Dataset columns:
question_id: int64 (range 59.5M to 79.4M)
creation_date: string (lengths 8 to 10)
link: string (lengths 60 to 163)
question: string (lengths 53 to 28.9k)
accepted_answer: string (lengths 26 to 29.3k)
question_vote: int64 (range 1 to 410)
answer_vote: int64 (range -9 to 482)
78,857,047
2024-8-10
https://stackoverflow.com/questions/78857047/working-with-known-indexes-on-a-numpy-array
I'm working on a Python project that generates random tile maps using NumPy. While my code is functional, it's currently too slow, and I'm struggling to vectorize it effectively to improve performance. I start with a small grid (e.g., 4x4) and place a few land tiles. The map is then upscaled by subdividing each tile into 4 smaller tiles. For each of the 4 new tiles, the code decides whether to keep it as land or convert it to water. This decision is based on the surrounding tiles: more nearby water tiles increase the chance of converting to water, and more nearby land tiles increase the chance of remaining land. This process results in a fuzzy, nearest-neighbor type of upscaling. This upscaling process is repeated, progressively increasing the grid size until reaching the desired map size (e.g., from 4x4 to 16x16 to 32x32 and so on). The final output is a landmass with borders that appear naturally textured due to the upscaling process. During the upscaling process, water tiles cannot turn into land, so they are not checked. Since only land tiles can potentially turn into water, and a tile surrounded entirely by land cannot become water, the code doesn’t need to check each land tile. This reduces the number of tiles that need to be checked. The code tracks border tiles (those adjacent to water) and only update those via "check_tile" method. However, I'm struggling to vectorize "check_tile" method, which consumes 90% of the execution time effectively with NumPy. Here is the code (requires numpy version >= 2.0): import numpy as np class MapDataGen: """ Procedurally generates a world map and returns a numpy array representation of it. Water proceeds from borders to inland. Water percentage increases with each iteration. """ def __init__(self, start_size: int, seed: int) -> None: """ Initialize starting world map. start_size: size of the map (start_size x start_size). seed: used for reproducible randomness. """ # Starting map, filled with 0, start_size by start_size big. self.world_map = np.zeros((start_size, start_size), dtype=np.uint8) # Random number generator. self.rng = np.random.default_rng(seed) # List to store border tile indexes. self.borders = [] self.newborders = [] def add_land(self, land_id, from_index, to_index): """ Add land to the world map at any time and at any map resolution. land_id: non-zero uint8, id of the new land tile (0 is reserved for water). from_index: starting index (inclusive) for the land area. to_index: ending index (exclusive) for the land area. """ row_size, column_size = self.world_map.shape from_row = max(0, from_index[0]) to_row = min(to_index[0], row_size) from_col = max(0, from_index[1]) to_col = min(to_index[1], column_size) self.world_map[ from_row:to_row, from_col:to_col, ] = land_id for row in range(from_row, to_row): for column in range(from_col, to_col): self.borders.append((row, column)) def neighbours(self, index, radius) -> np.ndarray: """ Returns neighbour tiles within the given radius of the index. index: tuple representing the index of the tile. radius: the radius to search for neighbours. """ row_size, column_size = self.world_map.shape return self.world_map[ max(0, index[0] - radius) : min(index[0] + radius + 1, row_size), max(0, index[1] - radius) : min(index[1] + radius + 1, column_size), ] def upscale_map(self) -> np.ndarray: """ Divide each tile into 4 pieces and upscale the map by a factor of 2. 
""" row, column = self.world_map.shape rs, cs = self.world_map.strides x = np.lib.stride_tricks.as_strided( self.world_map, (row, 2, column, 2), (rs, 0, cs, 0) ) self.newmap = x.reshape(row * 2, column * 2) # \/Old version\/. # self.newmap = np.repeat(np.repeat(self.worldmap, 2, axis=0), 2, axis=1) def check_tile(self, index: tuple): """ Texturize borders and update new borders. index: tuple representing the index of the tile to check. """ # Corresponding land id to current working tile. tile = self.world_map[index] # If tile at the index is surrounded by identical tiles within a 2-tile range. if np.all(self.neighbours(index, 2) == tile): # Don't store it; this tile cannot become water because it is surrounded by 2-tile wide same land as itself. pass else: # The values of unique tiles and their counts. values, counts = np.unique_counts(self.neighbours(index, 1)) # Calculate the probability of turning into other tiles for descended tiles. probability = counts.astype(np.float16) probability /= np.sum(probability) # Randomly change each of the 4 newly descended tiles of the original tile to either water or not. for row in range(2): for column in range(2): # One of the four descended tile's index. new_tile_index = (index[0] * 2 + row, index[1] * 2 + column) # Randomly chose a value according to its probability. random = self.rng.choice(values, p=probability) if random == 0: # If tile at the index became water. # Update it on the new map. self.newmap[new_tile_index] = random # Don't store it because it is a water tile and no longer a border tile. elif random == tile: # If the tile remained the same. # Store it because it is still a border tile. self.newborders.append(new_tile_index) else: # If the tile changed to a different land. # Update it on the new map. self.newmap[new_tile_index] = random # Store it because it is still a border tile. self.newborders.append(new_tile_index) def default_procedure(self): """ Default procedure: upscale (or zoom into) the map and texturize borders. """ self.upscale_map() for index in self.borders: self.check_tile(index) self.borders = self.newborders self.newborders = [] self.world_map = self.newmap For to see what it does: import time from matplotlib import pyplot as plt from matplotlib import colors wmg = MapDataGen(13, 3) wmg.add_land(1, (1, 1), (7, 7)) wmg.add_land(1, (8, 8), (12, 12)) plt.title(f"Starting Map") colormap = colors.ListedColormap( [[21.0 / 255, 128.0 / 255, 209.0 / 255], [227.0 / 255, 217.0 / 255, 159.0 / 255]] ) plt.imshow(wmg.world_map, interpolation="nearest", cmap=colormap) plt.savefig(f"iteration {0}.png") plt.show() for i in range(13): start = time.time() wmg.default_procedure() end = time.time() plt.title(f"iteration {i+1} took {end-start} seconds") plt.imshow(wmg.world_map, interpolation="nearest", cmap=colormap) plt.savefig(f"{i}.png") plt.show() Results: I checked numpy game of life implementations but they all check all grids, but this code knows which tile indexes to check so they are not applicable. How can I optimize this algorithm using NumPy, particularly with vectorization, to improve its performance? Are there any specific strategies or functions in NumPy that would help? Any suggestions or examples would be greatly appreciated!
Note: I kept the explanation to a minimum, but this answer is still long. If you're only looking for the code, just scroll to the end. In my opinion, the key to vectorization is to extract the parts that can be vectorized. But before that, there is one part that needs to be corrected. The following code is doing something very redundant. values, counts = np.unique_counts(self.neighbours(index, 1)) probability = counts.astype(np.float16) probability /= np.sum(probability) random = self.rng.choice(values, p=probability) It can be replaced with the following: random = self.rng.choice(self.neighbours(index, 1).ravel()) For example, if there are 9 neighbours and 3 of them are water, the probability of selecting the water will be 3/9. Note, however, that the result will not be the same as the original, even though it does the same thing mathematically. Now, since the operation you wanted to perform was randomly select one of the neighbours, you can directly access the target tile like this: neighbour_offset = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]) neighbour_index = self.rng.choice(neighbour_offset) + index random = self.world_map[tuple(neighbour_index)] You can see that rng.choice only needs to select one from the neighbour_offset, which does not change during the loop. Therefore, this can be calculated outside the loop. Below are the necessary changes: def check_tile(self, index: tuple, neighbour_indexes): """ Texturize borders and update new borders. index: tuple representing the index of the tile to check. """ # Corresponding land id to current working tile. tile = self.world_map[index] # If tile at the index is surrounded by identical tiles within a 2-tile range. if np.all(self.neighbours(index, 2) == tile): # Don't store it; this tile cannot become water because it is surrounded by 2-tile wide same land as itself. pass else: # Randomly change each of the 4 newly descended tiles of the original tile to either water or not. for row in range(2): for column in range(2): # One of the four descended tile's index. new_tile_index = (index[0] * 2 + row, index[1] * 2 + column) # Randomly chose a value according to its probability. neighbour_index = neighbour_indexes[row * 2 + column] random = self.world_map[tuple(neighbour_index)] if random == 0: # If tile at the index became water. # Update it on the new map. self.newmap[new_tile_index] = random # Don't store it because it is a water tile and no longer a border tile. elif random == tile: # If the tile remained the same. # Store it because it is still a border tile. self.newborders.append(new_tile_index) else: # If the tile changed to a different land. # Update it on the new map. self.newmap[new_tile_index] = random # Store it because it is still a border tile. self.newborders.append(new_tile_index) def default_procedure(self): """ Default procedure: upscale (or zoom into) the map and texturize borders. """ self.upscale_map() # Choose 4 random neighbours for each border tile. # Note that below is NOT taking into account the case where the land is at the edge of the map. 
neighbour_offset = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]) random_neighbour_offsets = self.rng.choice(neighbour_offset, size=(len(self.borders), 4)) random_neighbour_indexes = np.asarray(self.borders)[:, None, :] + random_neighbour_offsets for index, neighbour_index in zip(self.borders, random_neighbour_indexes): self.check_tile(index, neighbour_index) self.borders = self.newborders self.newborders = [] self.world_map = self.newmap (Note that in this code, the land tiles at the edge of the map are not taken into account. I have omitted this case here because it would prevent vectorization. If you need to deal with this, I recommend splitting the process into multiple steps and processing the edges separately.) With this, rng.choice is now vectorized. On my PC, this step alone made it 8 times faster than the original. You can continue this process repeatedly. I should probably explain everything, but it would be very long, so please let me skip to the end. This is the fully vectorized code: def default_procedure(self): """ Default procedure: upscale (or zoom into) the map and texturize borders. """ self.upscale_map() def take(arr, indexes): return arr[indexes[..., 0], indexes[..., 1]] def put(arr, indexes, values): arr[indexes[..., 0], indexes[..., 1]] = values land_tile_indexes = np.asanyarray(self.borders) land_tiles = take(self.world_map, land_tile_indexes) # Skip tiles that are surrounded by water. neighbour_25_windows = np.lib.stride_tricks.sliding_window_view( np.pad(self.world_map, pad_width=5 // 2, mode='edge'), # Pad the map in order to skip edge cases. window_shape=(5, 5), ) neighbour_25_windows = take(neighbour_25_windows, land_tile_indexes) # Filter only the border tiles. is_not_surrounded_tile = np.any(neighbour_25_windows != land_tiles[..., None, None], axis=(-2, -1)) # Filter land_tile_indexes by the condition. land_tile_indexes = land_tile_indexes[is_not_surrounded_tile] # Choose 4 random neighbours for each border tile. # Note that below is NOT taking into account the case where the land is at the edge of the map. neighbour_offset = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]) random_neighbour_offsets = self.rng.choice(neighbour_offset, size=(len(land_tile_indexes), 4)) random_neighbour_indexes = land_tile_indexes[:, None, :] + random_neighbour_offsets random_neighbour_tiles = take(self.world_map, random_neighbour_indexes) # Pre-calculate all the new tile indexes here. upscaling_offset = np.array([(dx, dy) for dx in range(2) for dy in range(2)]) new_tile_indexes = (land_tile_indexes * 2)[:, None, :] + upscaling_offset[None] # Put the new tile if it did NOT remain the same. # In other words, this covers below cases: # - If the tile became water. # - If the tile changed to a different land. condition = random_neighbour_tiles != take(self.world_map, land_tile_indexes)[..., None] put(self.newmap, new_tile_indexes[condition], random_neighbour_tiles[condition]) # Update the border tiles if it did NOT become water. # In other words, this covers below cases: # - If the tile remained the same. # - If the tile changed to a different land. target_indexes = new_tile_indexes[random_neighbour_tiles != 0] self.borders = target_indexes self.world_map = self.newmap At this point, the bottleneck is now self.upscale_map(). On my PC, this process took more than 80% of the runtime. The easiest way to solve this problem is to use OpenCV. def upscale_map(self) -> None: """ Divide each tile into 4 pieces and upscale the map by a factor of 2. 
""" self.newmap = cv2.resize(self.world_map, (0, 0), fx=2, fy=2, interpolation=cv2.INTER_NEAREST) Here is the result after 13 iterations. Finally, as an alternative, you could use Numba. With Numba, you might find it more intuitive to implement. However, combining Numba with classes can be a bit tricky, so you might need to rewrite a lot of your code anyway. Still, it’s worth looking into.
2
3
78,856,452
2024-8-10
https://stackoverflow.com/questions/78856452/is-a-bab-equivalent-to-minb-a
I came upon a bit of third-party Python code that read: count = [remaining, readlen][remaining > readlen] After staring at it for a bit, I have to ask: are there any cases where this construct is NOT equivalent to: count = min(readlen, remaining) i.e. are there any functional differences between the two?
Let's assume a == b. In this case, [a,b][a>b] returns a, since a>b evaluates to False, which is then cast to 0. But min(b, a) returns b (first argument). This is not relevant for numbers, but it may matter if a and b are mutable objects, like lists: >>> a = [1, 2] >>> b = [1, 2] >>> c = [a, b][a>b] >>> c is a True >>> c is b False >>> c.append(3) >>> a [1, 2, 3] >>> b [1, 2] With lists, it matters which of two equal objects get picked. Note that just switching the min arguments to min(a, b) would remove this difference.
2
6
78,856,047
2024-8-10
https://stackoverflow.com/questions/78856047/how-to-do-group-by-and-then-subtract-group-mean-from-each-entry-in-the-group-in
In pandas I can calculate the group-centered columns as follows for a dataframe with columns ['a','b','c']: df[['b','c']] = df[['b','c']].sub(df.groupby('a')[['b','c']].transform('mean')) What is the equivalent code in Polars? I tried implementing it with sub in Polars, but it expects an expression and not a dataframe. I need the original dataframe as output and not the grouped version.
TLDR. In polars, this is achieved with the window function pl.Expr.over. Concretely, your pandas code would look as follows. df.with_columns( pl.col("b", "c") - pl.col("b", "c").mean().over("a") ) Applied to some sample data, this could look as follows. import polars as pl df = pl.DataFrame({ "a": [0, 0, 0, 1, 1, 1], "b": [2, 2, 2, 3, 3, 3], "c": range(6), }) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 0 ┆ 2 ┆ 0 β”‚ β”‚ 0 ┆ 2 ┆ 1 β”‚ β”‚ 0 ┆ 2 ┆ 2 β”‚ β”‚ 1 ┆ 3 ┆ 3 β”‚ β”‚ 1 ┆ 3 ┆ 4 β”‚ β”‚ 1 ┆ 3 ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ df.with_columns( pl.col("b", "c") - pl.col("b", "c").mean().over("a") ) shape: (6, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ c β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ══════║ β”‚ 0 ┆ 0.0 ┆ -1.0 β”‚ β”‚ 0 ┆ 0.0 ┆ 0.0 β”‚ β”‚ 0 ┆ 0.0 ┆ 1.0 β”‚ β”‚ 1 ┆ 0.0 ┆ -1.0 β”‚ β”‚ 1 ┆ 0.0 ┆ 0.0 β”‚ β”‚ 1 ┆ 0.0 ┆ 1.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
3
78,856,001
2024-8-10
https://stackoverflow.com/questions/78856001/how-to-combine-two-columns-into-keyvalue-pairs-in-polars
I'm working with a Polars DataFrame, and I want to combine two columns into a dictionary format, where the values from one column become the keys and the values from the other column become the corresponding values. Here's an example DataFrame: import polars as pl df = pl.DataFrame({ "name": ["Chuck", "John", "Alice"], "surname": ["Dalliston", "Doe", "Smith"] }) I want to transform this DataFrame into a new column that contains dictionaries, where name is the key and surname is the value. The expected outcome should look like this: shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ name β”‚ surname β”‚ name_surname β”‚ β”‚ --- β”‚ --- β”‚ --- β”‚ β”‚ str β”‚ str β”‚ dict[str, str] β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ Chuck β”‚ Dallistonβ”‚ {"Chuck": "Dalliston"} β”‚ β”‚ John β”‚ Doe β”‚ {"John": "Doe"} β”‚ β”‚ Alice β”‚ Smith β”‚ {"Alice": "Smith"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I've tried the following code: df.with_columns( json = pl.struct("name", "surname").map_elements(json.dumps) ) But the result is not as expected. Instead of creating a dictionary with key-value, it produces: {name:Chuck,surname:Dalliston}
You can try this code snippet. This seems to be the closest you can get, as Polars does not have a native dict type. See reference: data_types_polaris import polars as pl df = pl.DataFrame( {"name": ["Chuck", "John", "Alice"], "surname": ["Dalliston", "Doe", "Smith"]} ) df = df.select( [ "name", "surname", ( pl.struct(["name", "surname"]).map_elements( lambda row: {row["name"]: row["surname"]}, return_dtype=pl.Object ) ).alias("name_surname"), ] ) print(df) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ name ┆ surname ┆ name_surname β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ object β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═══════════β•ͺ════════════════════════║ β”‚ Chuck ┆ Dalliston ┆ {'Chuck': 'Dalliston'} β”‚ β”‚ John ┆ Doe ┆ {'John': 'Doe'} β”‚ β”‚ Alice ┆ Smith ┆ {'Alice': 'Smith'} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
78,854,166
2024-8-9
https://stackoverflow.com/questions/78854166/convert-dataframe-to-nested-json-records
I have a spark dataframe as follows: ---------------------------------------------------------------------------------------------- | type | lctNbr | itmNbr | lastUpdatedDate | lctSeqId| T7797_PRD_LCT_TYP_CD| FXT_AIL_ID| pmyVbuNbr | upcId | vndModId| ____________________________________________________________________________ | prd_lct 145 147 2024-07-22T05:24:14 1 1 14 126 008236686661 35216 _____________________________________________________________________________ I want to group this data frame by type, lctNbr, itmNbr, and lastUpdatedDate. I jsut want each record to be in the below json format: "type": "prd_lct", "lctNbr": 145, "itmNbr": 147, "lastUpdatedDate": "2024-07-22T05:24:14", "locations": [ { "lctSeqId": 1, "prdLctTypCd": 1, "fxtAilId": "14" } ], "itemDetails": [ { "pmyVbuNbr": 126, "upcId": "008236686661", "vndModId": "35216" ] } I tried using to_json, collect_list and map_from_entries functions but i just keep getting errors when i do a show command and cant seem to get to the correct format.
You can groupby the desired fields, then aggregate F.collect_list(F.create_map(...)) to get the inner fields for locations and itemDetails. Sample data: pandasDF = pd.DataFrame({ "type": ["prd_lct","prd_lct","test"], "lctNbr": [145, 145, 148], "itmNbr": [147, 147, 150], "lastUpdatedDate": ["2024-07-22T05:24:14", "2024-07-22T05:24:14", "2024-07-22T05:24:15"], "lctSeqId": [1,2,3], "T7797_PRD_LCT_TYP_CD": [1,2,3], "FXT_AIL_ID": ["14","15","16"], "pmyVbuNbr": [126, 127, 128], "upcId": ["008236686661","008236686662","008236686663"], "vndModId": ["35216","35217","35218"] }) +-------+------+------+-------------------+--------+--------------------+----------+---------+------------+--------+ | type|lctNbr|itmNbr| lastUpdatedDate|lctSeqId|T7797_PRD_LCT_TYP_CD|FXT_AIL_ID|pmyVbuNbr| upcId|vndModId| +-------+------+------+-------------------+--------+--------------------+----------+---------+------------+--------+ |prd_lct| 145| 147|2024-07-22T05:24:14| 1| 1| 14| 126|008236686661| 35216| |prd_lct| 145| 147|2024-07-22T05:24:14| 2| 2| 15| 127|008236686662| 35217| | test| 148| 150|2024-07-22T05:24:15| 3| 3| 16| 128|008236686663| 35218| +-------+------+------+-------------------+--------+--------------------+----------+---------+------------+--------+ Resulting DataFrame and conversion to a list of JSON encoded strings. resultDF = sparkDF.groupby( 'type', 'lctNbr', 'itmNbr', 'lastUpdatedDate' ).agg( F.collect_list( F.create_map( F.lit('lctSeqId'), F.col('lctSeqId'), F.lit('prdLctTypCd'), F.col('T7797_PRD_LCT_TYP_CD'), F.lit('fxtAilId'), F.col('FXT_AIL_ID'), ) ).alias('locations'), F.collect_list( F.create_map( F.lit('pmyVbuNbr'), F.col('pmyVbuNbr'), F.lit('upcId'), F.col('upcId'), F.lit('vndModId'), F.col('vndModId'), ) ).alias('itemDetails') ) resultJSON = result.toJSON().collect() Since resultJSON will be a list of JSON encoded strings, you can convert it into a dictionary using the following: import ast result_dict = [ast.literal_eval(x) for x in resultJSON] [ { "type": "prd_lct", "lctNbr": 145, "itmNbr": 147, "lastUpdatedDate": "2024-07-22T05:24:14", "locations": [ { "lctSeqId": "1", "prdLctTypCd": "1", "fxtAilId": "14" }, { "lctSeqId": "2", "prdLctTypCd": "2", "fxtAilId": "15" } ], "itemDetails": [ { "pmyVbuNbr": "126", "upcId": "008236686661", "vndModId": "35216" }, { "pmyVbuNbr": "127", "upcId": "008236686662", "vndModId": "35217" } ] }, { "type": "test", "lctNbr": 148, "itmNbr": 150, "lastUpdatedDate": "2024-07-22T05:24:15", "locations": [ { "lctSeqId": "3", "prdLctTypCd": "3", "fxtAilId": "16" } ], "itemDetails": [ { "pmyVbuNbr": "128", "upcId": "008236686663", "vndModId": "35218" } ] }
4
0
78,854,478
2024-8-9
https://stackoverflow.com/questions/78854478/how-can-i-replace-null-values-in-polars-with-a-prefix-with-ascending-numbers
I am trying to replace null values in my dataframe column by a prefix and ascending numbers(to make each unique).ie df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ name ┆ asset_number β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════════║ β”‚ Office Chair ┆ null β”‚ β”‚ Office Chair ┆ null β”‚ β”‚ Office Chair ┆ null β”‚ β”‚ Office Chair ┆ CMP - 001 β”‚ β”‚ Office Chair ┆ CMP - 005 β”‚ β”‚ Office Chair ┆ null β”‚ β”‚ Table ┆ null β”‚ β”‚ Table ┆ CMP - 007 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ """) the null values should be replaced to something like PREFIX - 001,PREFIX - 002,...
df = df.with_columns( pl.col("asset_number").fill_null( "PREFIX - " + pl.int_range(pl.len()).cast(pl.String) ) )
3
3
78,854,275
2024-8-9
https://stackoverflow.com/questions/78854275/how-to-differentiate-and-split-os-environ-into-defaults-and-addons
Is it possible to split os.environ into default environment variables and custom addon variables? For example, using syntax of sets representing the key/ENV_VAR_NAME: custom_addon_env_var_keys = set(os.environ.keys()) - set(os.environ.default.keys()) # is there this `os.environ.default` kinda thing? Is there some kind of os.environ.default to get pure environment variables, after the program has started (that is, exclude all the custom keys I've added since the program started)? I've tried looking into the source code of os module, the environ is initialized by a function _createenviron, but it's been deleted as soon as the environ been initialized, so it's impossible to re-initialize a environ_2 and do the job, and copying the whole initializing function looks like a dumb way. The following code gives a brief-view of the expecting behavior. I'm using this for cross file variables, so saving a original copy at start is not what I need: import os # I'm using this for cross file variables # so it is impossible to save a original copy at start ORIGINAL_ENV_COPY = os.environ.copy() os.environ["MY_CUSTOM_ENV_KEY"] = "HelloWorld" # the "ORIGINAL_ENV_COPY" should be a method of getting the original version of "os.environ" custom_addon_env_var_keys = set(os.environ.keys()) - set(ORIGINAL_ENV_COPY.keys()) print(custom_addon_env_var_keys) # output: {"MY_CUSTOM_ENV_KEY"} Edit My apologies not giving enough information in the post: I have a file named environment.py that will initialize constants, and it will be packed into binary executable along with main.py, here's a part of it: import os os.environ["TIMEZONE"] = "Asia/Taipei" os.environ["PROJECT_NAME"] = "Project name" os.environ["APP_USER_MODEL_ID"] = f"{os.environ['PROJECT_NAME']}.App.User.Model.Id" os.environ["SERVER_URL"] = f"https://www.my.server.io/projects/{os.environ['PROJECT_NAME']}" os.environ["STORAGE_URL"] = f"https://www.my.server.io/projects/{os.environ['PROJECT_NAME']}/Storage" os.environ["MODULES_URL"] = f"https://www.my.server.io/projects/{os.environ['PROJECT_NAME']}/Modules" os.environ["USER_AGENT"] = random.choice([ "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36", "Mozilla/5.0 (Windows NT 10.0; WOW64) Gecko/20100101 Firefox/61.0", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)", "Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.5; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15" ]) if getattr(sys, "frozen", False): sys.argv = [arg for arg in sys.argv if not arg.startswith("-")] os.environ["EXECUTABLE_ROOT"] = os.path.dirname(sys.executable) else: os.environ["EXECUTABLE_ROOT"] = os.path.dirname(os.path.abspath(sys.modules["__main__"].__file__)) Why do I need to store variables and constants into os.environ? This environment.py will be packed into binary executable along with main.py, and therefore, to all other modules, they don't even know there's even a file called environment.py (since now, all that's left is main.exe) As you can see, some of the variables, e.g. 
USER_AGENT and EXECUTABLE_ROOT, take up quite a lot of space; if I don't store them, there will be a lot of duplicated code. Finally, why do I need to get the custom add-on env keys? Because I store variables in os.environ, when receiving crash reports I need to debug whether the problem is caused by any of them. Take USER_AGENT as an example: if one of the agents is no longer available, I have to know which one was chosen and raised the error, and then remove it from the list, instead of testing them one by one. And there are a lot of default environment variable keys, so sending everything in os.environ would be a mess; I only need the variable keys I've added. Answer (thanks to @zondo) Save the original os.environ under os using setattr, and to prevent a re-import from overwriting the original save (mentioned in the comment from @wim), I've added an if statement to guard against it. import os # if this is the first time running # there won't be a "default_environ" attr if not hasattr(os, "default_environ"): setattr(os, "default_environ", os.environ.copy()) # ... other variables
Python modules are only ever imported once within the same program instance. It doesn't matter how many different files import something; the first time it's imported is the only time it's run, and everything else reuses it. There are two ways to take advantage of this. First, the cleaner way, which is to choose a file that sets everything you want. For example, startup.py: import os environ = os.environ.copy() # other settings you want And then all your other files can use: import os import startup new_vars = set(os.environ) - set(startup.environ) The second way is really more of a hack, and I don't like it, but if the environ is the only thing you're doing, it would be easier. And that's just this: import os os.startup_environ = os.environ.copy() Again, since the os import will be the same across everything, that variable will be accessible everywhere.
3
1
78,852,566
2024-8-9
https://stackoverflow.com/questions/78852566/colormap-of-imshow-not-linked-to-surface-plot
I have created a figure with two subplots. On the left is a 3D surface plot and on the right side is 2D projection of the 3D plot with imshow. I want the colormap of the imshow plot to be linked to the 3D plot. But the scale does not fit. import numpy as np import matplotlib.pyplot as plt id = np.arange(-600,750,150) iq = np.arange(-600,750,150) xx,yy = np.meshgrid(id,iq) T = np.array([[-8988, -8697.5, -7847, -6923, -5610.5, -4536, -3374.35, -2572.85, -1987.65], [-8162, -7910, -7206.5, -6160, -4798.5, -3476.55, -2479.4, -1711.5, -1374.45], [-6513.5, -6814.5, -6398, -5218.5, -3706.5, -2362.15, -1414.7, -1013.6,-883.05], [-4224.5,-4669, -4686.5, -3755.5, -2217.6, -1079.4, -591.85, -313.005,-211.925], [0, 0, 0, 0, 0, 0, 0, 0, 0], [4420.5, 4833.5, 4749.5, 3787, 2221.1, 1054.55, 567.35, 280.42, 146.51], [6793.5, 7017.5, 6510, 5260.5, 3692.5, 2295.3, 1360.45, 914.55, 806.4], [8421, 8008, 7287, 6160, 4725, 3400.25, 2318.4, 1628.2, 1285.55], [9170, 8750, 7864.5, 6874, 5498.5, 4413.5, 3206.7, 2351.3, 1865.5]]) m = plt.cm.ScalarMappable() m.set_array(T) m.set_clim(-9000, 9000) # optional fig = plt.figure(1, figsize=(12,6)) ax = fig.add_subplot(121, projection='3d') ax.set_xlabel(r'$I_d$', fontsize=15) ax.set_ylabel(r'$I_q$', fontsize=15) ax.set_zlabel(r'$T$', fontsize=15) ax.plot_surface(xx, yy, T, cmap=m.cmap, norm = m.norm) ax2 = fig.add_subplot(122) ax2.imshow(T, cmap=m.cmap, norm=m.norm, extent=(np.min(xx), np.max(xx), np.min(yy), np.max(yy)), interpolation=None) cbar = fig.colorbar( plt.cm.ScalarMappable(), ax=ax2 ) plt.tight_layout() plt.show() I added a mappable to the plot but unfortunately the colormap is not linked to values of the z axis of the surface plot. The scale only shows values between 0 and 1. How can I connect the values of the T Matix to the colormap ?
There are two things here: The default origin for imshow (on your 2D plot) is "upper", meaning the vertical axis points downwards by default. Since you have the lower numbers at the bottom of your axis, you want to set origin="lower". You can use the mappable object that is created by imshow to create the colorbar, which will then make the colorbar limits match the data. Below I give the mappable returned by ax2.imshow the name im, which you can then use in fig.colorbar. im = ax2.imshow(T, cmap=m.cmap, norm=m.norm, origin='lower', extent=(np.min(xx), np.max(xx), np.min(yy), np.max(yy)), interpolation=None) cbar = fig.colorbar(im, ax=ax2)
2
1
78,852,837
2024-8-9
https://stackoverflow.com/questions/78852837/file-path-good-for-windows-and-android
I have a Python script that uses images from a folder. The script should work on both Windows and Android. The problem is that I have to specify a different file path for the same images (which are in the folder where the script is). Is there a path format I can use that is valid for both? For Android I am using "./image.png", but this does not work on Windows.
If you want a dynamic path, you can use os: import os path = os.path.join('folder1', 'folder2', '...', 'image.png') You can pass as many path components as you need, or just image.png.
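If the images sit next to the script (as described in the question), a common pattern is to anchor the path to the script's own location so it does not depend on the current working directory; a minimal sketch, assuming the file name image.png:

import os

# Folder containing this script, resolved at runtime.
script_dir = os.path.dirname(os.path.abspath(__file__))
# Portable path to an image stored next to the script.
image_path = os.path.join(script_dir, "image.png")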
2
3
78,852,009
2024-8-9
https://stackoverflow.com/questions/78852009/pandas-re-arranging-columns-with-date-year-as-header
I wish to re-arrange the date columns (with Month Year as the header) in descending order from left to right. All non-date columns are shifted to the extreme left, before the date columns begin. If possible, since the column headers are mixed-case, the processing needs to be case-insensitive. demo = { 'Name': ['Alice', 'Bob', 'Charlie', 'David'], 'Age': [25, 30, 35, 40], 'Apr 2022': ['New', 'Los', 'Chic', 'Hous'], 'aPRil 2024': ['York', 'Anges', 'cago', 'ston'], 'Vanilla': ['nfd', 'bdfh', 'tyii', 'liu'], 'aUg 2023': ['NewYork', 'LosAngeles', 'Chicago', 'Houston'], 'deC 2022': ['Neork', 'Logeles', 'Chago', 'Hoston'] } result = { 'Name': ['Alice', 'Bob', 'Charlie', 'David'], 'Age': [25, 30, 35, 40], 'Vanilla': ['nfd', 'bdfh', 'tyii', 'liu'], 'aPRil 2024': ['York', 'Anges', 'cago', 'ston'], 'aUg 2023': ['NewYork', 'LosAngeles', 'Chicago', 'Houston'], 'deC 2022': ['Neork', 'Logeles', 'Chago', 'Hoston'], 'Apr 2022': ['New', 'Los', 'Chic', 'Hous'] }
You can convert to_datetime with errors='coerce' as key to sort_index. In addition pass ascending=False, na_position='first' as parameters to get the dates in descending order and the non-dates first: df = pd.DataFrame(demo) result = df.sort_index( axis=1, key=lambda x: pd.to_datetime(x, errors='coerce', format='mixed'), ascending=False, na_position='first', ) Note depending on the pandas version, it might be needed to first preprocess the string to homogenize the case: key=lambda x: pd.to_datetime(x.str.title(), errors='coerce'). Output: Name Age Vanilla aPRil 2024 aUg 2023 deC 2022 Apr 2022 0 Alice 25 nfd York NewYork Neork New 1 Bob 30 bdfh Anges LosAngeles Logeles Los 2 Charlie 35 tyii cago Chicago Chago Chic 3 David 40 liu ston Houston Hoston Hous This will use the following intermediate to sort the columns: pd.to_datetime(df.columns, errors='coerce', format='mixed') DatetimeIndex(['NaT', 'NaT', '2022-04-01', '2024-04-01', 'NaT', '2023-08-01', '2022-12-01'], dtype='datetime64[ns]', freq=None)
2
1
78,850,636
2024-8-8
https://stackoverflow.com/questions/78850636/what-is-password-based-authentication-in-the-usercreationform-in-django-and-how
I made a form that inherits from the UserCreationForm and use class based view that inherits CreateView and when I use runserver and display the form, there is a section at the bottom Password-based authentication that I don't notice forms.py from django.contrib.auth import get_user_model from django.contrib.auth.forms import UserCreationForm class RegisterForm(UserCreationForm): """Form to Create new User""" class Meta: model = get_user_model() fields = ["username", "password1", "password2"] views.py from django.views.generic import CreateView from .forms import RegisterForm from django.urls import reverse_lazy class SignUp(CreateView): form_class = RegisterForm template_name = "register.html" success_url = reverse_lazy("core:Login") def form_valid(self, form): user = form.save() if user: login(self.request, user) return super().form_valid(form) register.html <h1>signup</h1> {{form}} And when I ran the code I saw this output So I didn't expect password-based authentication. My question is about What exactly is this? Should it be displayed here? How do I hide it?
From Django version 5.1 onwards the UserCreationForm has a usable_password field by default. This relates to the feature Django has for setting unusable passwords for users. This is useful in case you're using some kind of external authentication like Single Sign-On or LDAP. Since your form seems to be a user facing one and showing this field to your user doesn't make much sense (this particular form is by default geared more towards usage in the admin site) you should simply remove the field from your form by setting it to None: class RegisterForm(UserCreationForm): """Form to Create new User""" usable_password = None class Meta: model = get_user_model() fields = ["username", "password1", "password2"]
4
7
78,851,490
2024-8-9
https://stackoverflow.com/questions/78851490/is-it-possible-to-not-get-nan-for-the-first-value-of-pct-change
My DataFrame is: import pandas as pd df = pd.DataFrame( { 'a': [20, 30, 2, 5, 10] } ) Expected output is pct_change() of a: a pct_change 0 20 -50.000000 1 30 50.000000 2 2 -93.333333 3 5 150.000000 4 10 100.000000 I want to compare df.a.iloc[0] with 40 for the first value of pct_change. If I use df['pct_change'] = df.a.pct_change().mul(100), the first value is NaN. My Attempt: def percent(a, b): result = ((a - b) / b) * 100 return result.round(2) df.loc[df.index[0], 'pct_change'] = percent(df.a.iloc[0], 40) Is there a better/more efficient way?
You can use the fill_value keyword argument in pct_change. The pct_change documentation says: Additional keyword arguments are passed into DataFrame.shift or Series.shift. and Series.shift accepts a fill_value argument to fill missing rows. import pandas as pd df = pd.DataFrame({"a": [20, 30, 2, 5, 10]}) df["pct_change"] = df["a"].pct_change(fill_value=40).mul(100) print(df) Output: a pct_change 0 20 -50.000000 1 30 50.000000 2 2 -93.333333 3 5 150.000000 4 10 100.000000
5
5
78,850,424
2024-8-8
https://stackoverflow.com/questions/78850424/how-to-set-visual-studio-code-python-interpreter-to-a-python-virtual-environment
I have a python virtual environment in /Documents and a project in /Documents/Code/Python/example. I want to use this venv in my project but I can't get the vs code python interpreter to recognize that the venv in /Documents is a venv. I tried to use set the interpreter path by using the "Find..." button to manually select /Documents/.venv/bin/python with the finder. However, it is recognizing it as something other than a virtual environment as it looks different when compared to a venv in the same directory as the project. Here is the output below: 2024-08-08 16:53:55.325 [info] Discover tests for workspace name: example - uri: /Users/varun/Documents/Code/Python/example 2024-08-08 16:53:55.326 [info] Python interpreter path: /opt/homebrew/Cellar/[email protected]/3.12.4/Frameworks/Python.framework/Versions/3.12/bin/python3.12 How it looks when venv is in a parent directory How it looks when venv is in same directory I suspect that it is because /Documents/.venv/bin/python is referencing a "faulty" version of python (I'm not sure though). I'm able to activate and use the venv from the terminal to run the code but with vs code not recognizing the venv it isn't giving me any autofill suggestions. Please let me know if you need more info, thanks.
It could be that VS Code didn't find the correct interpreter path. Are you creating the virtual environment using the command python -m venv myvenv? Or try restarting VS Code after clearing the cache and then opening it again to create a new virtual environment. If it is still not recognized, you can set the path to the virtual environment in the .vscode/settings.json file. Add "python.pythonPath": "path/to/your/project/venv/bin/python" and "python.defaultInterpreterPath": "path/to/your/venv/bin/python". See if that solves the problem.
2
1
78,850,502
2024-8-8
https://stackoverflow.com/questions/78850502/does-shelve-write-to-disk-on-every-change
I wish to use shelve in an asyncio program and I fear that every change will cause the main event loop to stall. While I don't mind the occasional slowdown of the pickling operation, the disk writes may be substantial. How often does shelve sync to disk? Is it a blocking operation? Do I have to call .sync()? If I schedule the sync() to run under a different thread, a different asyncio task may modify the shelve at the same time, which violates the requirement of single-thread writes.
shelve, by default, is backed by the dbm module, in turn backed by some dbm implementation available on the system. Neither the shelve module, nor the dbm module, make any effort to minimize writes; an assignment of a value to a key causes a write every time. Even when writeback=True, that just means that new assignments are placed in the cache and immediately written to the backing dbm; they're written to make sure the original value is there, and the cache entry is made because the object assigned might change after assignment and needs to be handled just like a freshly read object (meaning it will be written again when synced or closed, in case it changed). While it's possible some implementation of the underlying dbm libraries might include some caching, AFAICT, most do try to write immediately (that is, pushing data to the kernel immediately without user-mode buffering), they just don't necessarily force immediate synchronization to disk (though it can be requested, e.g. with gdbm_sync). writeback=True will make it worse, because when it does sync, it's a major effort (it literally rewrites every object read or written to the DB since the last sync, because it has no way of knowing which of them might have been modified), as opposed to the small effort of rewriting a single key/value pair at a time. In short, if you're really concerned about blocking writes, you can't use unthreaded async code without potential blocking, but said blocking is likely short-lived as long as writeback=True is not involved (or as long as you don't sync/close it until performance considerations are no longer relevant). If you need to have truly non-blocking async behavior, all shelve interactions will need to occur under a lock in worker threads, and either writeback must be False (to avoid race conditions pickling data) or if writeback is True, you must take care to avoid modifying any object that might be in the cache during the sync/close.
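A minimal sketch of that last arrangement, assuming writeback=False; the file name and key are placeholders:

import asyncio
import shelve
import threading

db = shelve.open("state.db", writeback=False)
db_lock = threading.Lock()

def _put(key, value):
    # Runs in a worker thread; the lock keeps shelve access single-threaded.
    with db_lock:
        db[key] = value

async def save(key, value):
    # The event loop is not blocked while the dbm write happens.
    await asyncio.to_thread(_put, key, value)

async def main():
    await save("answer", 42)
    db.close()

asyncio.run(main())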
5
5
78,850,381
2024-8-8
https://stackoverflow.com/questions/78850381/how-to-export-dask-html-high-level-graph-to-disk
There is a way to generate an HTML high-level graph in a Jupyter notebook, as shown in Dask's documentation: https://docs.dask.org/en/stable/graphviz.html#high-level-graph-html-representation Taking the example from the docs, you put the following code in a Jupyter cell: import dask.array as da x = da.ones((15, 15), chunks=(5, 5)) y = x + x.T y.dask # shows the HTML representation in a Jupyter notebook And you get a nice interactive HTML view of the graph in the notebook. My question is whether there is a way to get the HTML for this graph outside of the Jupyter context. My immediate interest is to export a static HTML file to disk as a record of the graph that was executed for a task. I could also see other applications, such as embedding a widget in a GUI.
By default, the Notebook interface will display the _repr_html_() method's output for whatever it's trying to display. In the case of a Dask Array, the dask attribute is an instance of a HighLevelGraph, whose implementation is here: https://github.com/dask/dask/blob/ed5f68897b3a097f7c5ec1a9ec13ce49c112a544/dask/highlevelgraph.py#L840 That method should return a string, so you can call it directly and save the output to a file: from pathlib import Path Path("dask.html").write_text(y.dask._repr_html_()) There are also ways to use the IPython APIs to run through the process that the Notebook kernel actually goes through when it goes to display some data, but I didn't look those up πŸ˜€
3
2
78,847,944
2024-8-8
https://stackoverflow.com/questions/78847944/using-xlsxwriter-in-python-how-insert-27-digit-number-in-cell-and-display-it-as
I'm writing an Excel using XLSXWRITER package in Python. Formats are applied to cells and this all works, except for one cell when the value assigned is a 27 digit text string (extracted from some source). I've read How to apply format as 'Text' and 'Accounting' using xlsxwriter. It suggests to set the number format to '@', but when I try: WSFMT_TX_REFINFO = wb.add_format({'num_format': '@' , 'align': 'right' , 'border_color': WS_TX_BORDERCOLOR , 'right': WS_TX_BORDERSYTLE }) and write a cell with: refdata = '001022002024080400002400105' ws.write(wsRow, WS_COL_REF_INFO, refdata, WSFMT_TX_REFINFO) The cell is shown as 1.041E+24 and in the editor field as 1.041002024073E+24 If I change the format specification from '@' to 0, i.e. change WSFMT_TX_REFINFO = wb.add_format({'num_format': '@' to WSFMT_TX_REFINFO = wb.add_format({'num_format': 0 the cell is shown as 1022002024080400000000000 Note that the digits after the 14th are replaced by zeros. In the editor field it shows as 1.0220020240804E+24 What I need: The number shall be show as 27 digit string, exactly as found in refdata Note: There are cases, where refdata may contain alphanumeric strings in some cases, besides pure 27 digit strings. Any hint?
This may be a case where it is better to use write_string() instead of the more generic write().
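A minimal sketch of that suggestion; the workbook, worksheet and format setup here are placeholders rather than the question's full code:

import xlsxwriter

wb = xlsxwriter.Workbook("refinfo.xlsx")
ws = wb.add_worksheet()
fmt = wb.add_format({"align": "right"})

refdata = "001022002024080400002400105"
# write_string() always stores the value as text, so all 27 digits are kept
# instead of being converted to a float and displayed in E-notation.
ws.write_string(0, 0, refdata, fmt)
wb.close()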
1
4
78,849,429
2024-8-8
https://stackoverflow.com/questions/78849429/convert-the-same-local-time-to-utc-on-different-dates-respecting-the-local-dst
I have several local time points: import datetime from zoneinfo import ZoneInfo as zi wmr = datetime.time(hour=12, tzinfo=zi("GMT")) ecb = datetime.time(hour=14, minute=15, tzinfo=zi("CET")) jpx = datetime.time(hour=14, tzinfo=zi("Japan")) which I want to convert to UTC times given a date. E.g., local2utc(datetime.datetime(2024,1,1), wmr) ---> "2024-01-01 12:00:00" local2utc(datetime.datetime(2024,6,1), wmr) ---> "2024-06-01 11:00:00" (DST active) local2utc(datetime.datetime(2024,1,1), ecb) ---> "2024-01-01 13:15:00" local2utc(datetime.datetime(2024,6,1), ecb) ---> "2024-06-01 12:15:00" (DST active) local2utc(datetime.datetime(2024,1,1), jpx) ---> "2024-01-01 05:00:00" local2utc(datetime.datetime(2024,6,1), jpx) ---> "2024-06-01 05:00:00" (no DST in Japan) The following implementation def local2utc(date, time): local_dt = datetime.datetime.combine(date,time) tm = local_dt.utctimetuple() return datetime.datetime(*tm[:7]) seems for work for Japan and CET, but not for GMT/WET (because London is on BST/WEST in the summer). So, what do I do?
Python uses the IANA time zone database. The list of time zone names can be found here. According to this table, "GMT" is a time zone that has a 0 UTC offset and does not observe daylight saving. Perhaps "Europe/London" would give you the results you are looking for.
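A minimal sketch of that suggestion, converting via astimezone rather than the question's utctimetuple helper:

import datetime
from zoneinfo import ZoneInfo

# "Europe/London" observes DST, unlike the fixed-offset "GMT" zone.
wmr = datetime.time(hour=12, tzinfo=ZoneInfo("Europe/London"))

def local2utc(date, time):
    local_dt = datetime.datetime.combine(date, time)
    return local_dt.astimezone(datetime.timezone.utc).replace(tzinfo=None)

print(local2utc(datetime.date(2024, 1, 1), wmr))  # 2024-01-01 12:00:00
print(local2utc(datetime.date(2024, 6, 1), wmr))  # 2024-06-01 11:00:00 (BST)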
2
2
78,844,584
2024-8-7
https://stackoverflow.com/questions/78844584/have-regex-skip-a-match-if-it-occurs-within-1024-characters
I have the following replace: self.reply = raw_reply.replace(b"<" + rcid + b":", b"") where rcid is a command reference. raw_reply is a huge mass of data in bytes, e.g. <35:\x07\x98c\x45\x09 etc. I want it to remove all instances of, for example, <35:, but only if another one has not already been replaced within the previous 1024 characters. Is there a way to do this with regex? I've tried looking at exclusions and negative lookahead, but I'm not sure how to implement it when I want it to ignore any matches within 1024 characters of the previous match.
Use a regular expression that matches up to 1024 characters after the pattern you're replacing. Capture the excess 1024 characters in a capture group so you can copy them to the replacement. The next match will have to be after this, since overlapping matches are not processed. self.reply = re.sub(b"<" + rcid + b":(.{,1024})", br"\1", raw_reply, flags=re.DOTALL)
3
3
78,847,998
2024-8-8
https://stackoverflow.com/questions/78847998/list-to-dataframe-with-row-and-column-headers
I need to convert a list (including headers) to a Dataframe. If I do it directly using pl.DataFrame(list), the headers are created and everything is kept as a string. Moreover, the table is transposed, such that the first element in the list becomes the first column in the dataframe. Input list. [ ['Earnings estimate', 'Current qtr. (Jun 2024)', 'Next qtr. (Sep 2024)', 'Current year (2024)', 'Next year (2025)'], ['No. of analysts', '13', '11', '26', '26'], ['Avg. Estimate', '1.52', '1.62', '6.27', '7.23'], ['Low estimate', '1.36', '1.3', '5.02', '5.88'], ['High estimate', '1.61', '1.74', '6.66', '8.56'], ['Year ago EPS', '1.76', '1.36', '5.74', '6.27'], ] Expected output.
You can explicitly define the orient= to prevent transposition: pl.DataFrame(data[1:], orient="row", schema=data[0]) shape: (5, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Earnings estimate ┆ Current qtr. (Jun 2024) ┆ Next qtr. (Sep 2024) ┆ Current year (2024) ┆ Next year (2025) β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════════════β•ͺ══════════════════════β•ͺ═════════════════════β•ͺ══════════════════║ β”‚ No. of analysts ┆ 13 ┆ 11 ┆ 26 ┆ 26 β”‚ β”‚ Avg. Estimate ┆ 1.52 ┆ 1.62 ┆ 6.27 ┆ 7.23 β”‚ β”‚ Low estimate ┆ 1.36 ┆ 1.3 ┆ 5.02 ┆ 5.88 β”‚ β”‚ High estimate ┆ 1.61 ┆ 1.74 ┆ 6.66 ┆ 8.56 β”‚ β”‚ Year ago EPS ┆ 1.76 ┆ 1.36 ┆ 5.74 ┆ 6.27 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
2
78,845,357
2024-8-7
https://stackoverflow.com/questions/78845357/python-polars-how-can-i-convert-the-values-of-a-column-with-type-enum-into
The polars user guide suggests that enums have a physical, integer representation. Is it possible to access the integers associated with an enum value? For example, is there a nicer way to get the integer representation in the following example? import numpy as np import polars as pl np.random.seed(556) enum_vals = [ "".join([chr(c_code) for c_code in np.random.randint(97, 123, 3)]) for n in range(10) ] enum_dtype = pl.Enum(pl.Series(enum_vals)) ( pl.Series( "enum_vals", [enum_vals[x] for x in np.random.randint(0, len(enum_vals), 5)], dtype=enum_dtype, ) .to_frame() .with_columns( enum_repr=pl.col("enum_vals").map_elements( lambda x: enum_vals.index(x), return_dtype=pl.Int64() ) ) ) shape: (5, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ enum_vals ┆ enum_repr β”‚ β”‚ --- ┆ --- β”‚ β”‚ enum ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ loo ┆ 8 β”‚ β”‚ sby ┆ 5 β”‚ β”‚ cqm ┆ 3 β”‚ β”‚ cbn ┆ 2 β”‚ β”‚ vtk ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
For this, pl.Expr.to_physical exists. The documentation also lists the dtype of the underlying physical representation, which is pl.UInt32 for pl.Categorical / pl.Enum. df.with_columns( pl.col("enum_vals").to_physical() ) shape: (5, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ enum_vals β”‚ β”‚ --- β”‚ β”‚ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ 8 β”‚ β”‚ 5 β”‚ β”‚ 3 β”‚ β”‚ 2 β”‚ β”‚ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
5
78,845,297
2024-8-7
https://stackoverflow.com/questions/78845297/how-do-define-custom-auto-imports-for-pylance-visual-studio-code
When I type out something like np. I think this triggers Visual Studio Code + Pylance's (not sure) auto-import completion by suggesting that import numpy as np might be relevant. I would like to create similar custom auto-import/complete associations. For example: between pl and polars, so that if I type something like pl. then import polars as pl is given as an auto-import suggestion. How can I do this? Is this specific to the Pylance extension I am using, or something about Visual Studio Code? Please note that auto-import/import-completion is very different from custom code snippets, as covered in How to add custom code snippets in VSCode? The reason for this being: VS Code adds a new import statement (and therefore has to figure out if it is possible to resolve that import) at the top of the file. It does not add code where the cursor is, which is what a snippet would do. This functionality relies on a language server of some sort (hence my suspicion it is Pylance that is providing this functionality) both to resolve the import, and to insert the import statement at the appropriate location in the file.
This is tracked by an open enhancement request: Allow auto-import abbreviations to be configured #2589. I suggest that you give that discussion an upvote to show support for it. You can also subscribe to it to get notified about discussion and progress. Please avoid making noisy comments there like ones that just consist of "+1" / "bump".
3
1
78,845,657
2024-8-7
https://stackoverflow.com/questions/78845657/python-playwright-locator-not-returning-expected-value
I'm not getting the expected value returned from the below code. from playwright.sync_api import sync_playwright import time import random def main(): with sync_playwright() as p: browser = p.firefox.launch(headless=False) page = browser.new_page() url = "https://www.useragentlist.net/" page.goto(url) time.sleep(random.uniform(2,4)) test = page.locator('xpath=//span[1][@class="copy-the-code-wrap copy-the-code-style-button copy-the-code-inside-wrap"]/pre/code/strong').inner_text() print(test) count = page.locator('xpath=//span["copy-the-code-wrap copy-the-code-style-button copy-the-code-inside-wrap"]/pre/code/strong').count() print(count) browser.close() if __name__ == '__main__': main() page.locator().count() returns a value of 0, I have no issue getting the text from the lines above it, but I need to access all elements, what is wrong with my implementation of locator and count?
Your second locator XPath has no @class=, so it's different than the first one that works. Store the string in a variable so you don't have to type it twice or encounter copy-paste or stale data errors. In any case, your approach seems overcomplicated. Each user agent is in a <code> tag--just scrape that: from playwright.sync_api import sync_playwright # 1.44.0 def main(): with sync_playwright() as p: browser = p.firefox.launch() page = browser.new_page() url = "https://www.useragentlist.net/" page.goto(url, wait_until="domcontentloaded") agents = page.locator("code").all_text_contents() print(agents) browser.close() if __name__ == "__main__": main() Locators auto-wait so there's no need to sleep. Avoid XPaths 99% of the time--they're brittle and difficult to read and maintain. Just use CSS selectors or user-visible locators. The goal is to choose the simplest selector necessary to disambiguate the elements you want, and nothing more. span/pre/code/strong is a rigid hierarchy--if one of these changes, your code breaks unnecessarily. By the way, the user agents are in the static HTML, so unless you're trying to circumvent a block, you can do this faster with requests and Beautiful Soup: from requests import get # 2.31.0 from bs4 import BeautifulSoup # 4.10.0 response = get("https://www.useragentlist.net") response.raise_for_status() print([x.text for x in BeautifulSoup(response.text, "lxml").select("code")]) Better still (possibly), use a library like fake_useragent to generate your random user agent.
2
1
78,845,737
2024-8-7
https://stackoverflow.com/questions/78845737/convert-xml-file-into-dictionary-with-elementtree
I have an XML configuration file used by legacy software, which I cannot change or format. The goal is to use Python 3.9 and transform the XML file into a dictionary, using only xml.etree.ElementTree library. I was originally looking at this reply, which produces almost the expected results. Scenario.xml file contents: <Scenario Name="{{ env_name }}"> <Gateways> <Alpha Host="{{ host.alpha_name }}" Order="1"> <Config>{{ CONFIG_DIR }}/alpha.xml</Config> <Arguments>-t1 -t2</Arguments> </Alpha> <Beta Host="{{ host.beta_name }}" Order="2"> <Config>{{ CONFIG_DIR }}/beta.xml</Config> <Arguments>-t1</Arguments> </Beta> <Gamma Host="{{ host.gamma_name }}" Order="3"> <Config>{{ CONFIG_DIR }}/gamma.xml</Config> <Arguments>-t2</Arguments> <!--<Data Count="58" />--> </Gamma> </Gateways> </Scenario> Python code to convert XML file to dictionary: from pprint import pprint from xml.etree import ElementTree def format_xml_to_dictionary(element: ElementTree.Element): ''' Format xml to dictionary :param element: Tree element :return: Dictionary formatted result ''' try: return { **element.attrib, '#text': element.text.strip(), **{i.tag: format_xml_to_dictionary(i) for i in element} } except ElementTree.ParseError as e: raise e if __name__ == '__main__': tree = ElementTree.parse('Scenario.xml').getroot() scenario = format_xml_to_dictionary(tree) pprint(scenario) Functional code output with <!--<Data Count="58" />--> commented: $ python test.py {'#text': '', 'Gateways': {'#text': '', 'Alpha': {'#text': '', 'Arguments': {'#text': '-t1 -t2'}, 'Config': {'#text': '{{ CONFIG_DIR }}/alpha.xml'}, 'Host': '{{ host.alpha_name }}', 'Order': '1'}, 'Beta': {'#text': '', 'Arguments': {'#text': '-t1'}, 'Config': {'#text': '{{ CONFIG_DIR }}/beta.xml'}, 'Host': '{{ host.beta_name }}', 'Order': '2'}, 'Gamma': {'#text': '', 'Arguments': {'#text': '-t2'}, 'Config': {'#text': '{{ CONFIG_DIR }}/gamma.xml'}, 'Host': '{{ host.gamma_name }}', 'Order': '3'}}, 'Name': '{{ env_name }}'} I'm trying to address two issues: Scenario is missing from dictionary keys, because root node is already the Scenario tag, I'm not sure what I need to do, in order to make it part of dictionary If I uncomment <Data Count="58" />, I get the following error: AttributeError: 'NoneType' object has no attribute 'strip' I'm not sure what type of if/else condition I need to implement, I tried something like that, but it is setting all #text values to '' instead of stripping them: '#text': element.text.strip() if isinstance( element.text, ElementTree.Element ) else '',
To get Scenario into the result, use tree.tag as the key in the outermost dictionary when calling the function. To handle nodes with no text, add the #text key to the dictionary in a separate statement, so it can be conditional. def format_xml_to_dictionary(element: ElementTree.Element): ''' Format xml to dictionary :param element: Tree element :return: Dictionary formatted result ''' try: result = { **element.attrib, **{i.tag: format_xml_to_dictionary(i) for i in element} } if element.text: result['#text'] = element.text.strip() return result except ElementTree.ParseError as e: raise e if __name__ == '__main__': tree = ElementTree.parse('Scenario.xml').getroot() scenario = { tree.tag: format_xml_to_dictionary(tree) } pprint(scenario)
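As an illustration of the None-text fix (a sketch of the expected shape, heavily elided, not verbatim pprint output): with <Data Count="58" /> uncommented, the attribute-only element now appears under Gamma and no '#text' key is emitted for it, since its text is None:
{'Scenario': {'Name': '{{ env_name }}',
              'Gateways': {'Gamma': {'Data': {'Count': '58'}, ...},
                           ...}}}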
2
3
78,844,160
2024-8-7
https://stackoverflow.com/questions/78844160/getting-a-flat-view-of-a-nested-list
In Python, is it possible to get a flat view of a list of lists that dynamically adapts to changes to the original, nested list? To be clear, I am not looking for a static snapshot, but for a view that reflects changes. Further, the sub-lists should not be restricted to a primitive type, but be able to contain arbitrary objects, and not tied to a static size, but be allowed to shrink or expand freely. Simple example: a = ["a", "b", "c"] b = ["d", "e", "f"] view = flat_view([a, b]) # `view` should show ["a", "b", "c", "d", "e", "f"] b[0] = "x" # `view` should show ["a", "b", "c", "x", "e", "f"] The implementation of flat_view() is what I'm looking for.
You would need to create a class that holds a reference to the original lists. You don't want a copy of the lists, you just need a reference. This class knows how to access and update the values at each of the lists it holds. You can access any item in the list in O(log n) search complexity (binary search) without using any additional memory to store a flattened list.
Implementation
Imports for types to follow:
from typing import Any, Callable, Iterable, List, Tuple, Union
If you want the view to listen to changes in the underlying lists, you will need to create a ListWrapper that can delegate to the underlying list and notify the view that things changed in the lists. We want to make sure self._notify() is called whenever a list changes.
class ListWrapper:
    def __init__(self, lst: List[Any]):
        self._list: List[Any] = lst
        self.callbacks: List[Callable[[], None]] = []

    def __getitem__(self, index: int) -> Any:
        return self._list[index]

    def __setitem__(self, index: int, value: Any) -> None:
        self._list[index] = value
        self._notify()

    def __len__(self) -> int:
        return len(self._list)

    def append(self, item: Any) -> None:
        self._list.append(item)
        self._notify()

    def extend(self, iterable: Iterable[Any]) -> None:
        self._list.extend(iterable)
        self._notify()

    def insert(self, index: int, item: Any) -> None:
        self._list.insert(index, item)
        self._notify()

    def remove(self, item: Any) -> None:
        self._list.remove(item)
        self._notify()

    def pop(self, index: int = -1) -> Any:
        item = self._list.pop(index)
        self._notify()
        return item

    def clear(self) -> None:
        self._list.clear()
        self._notify()

    def _notify(self) -> None:
        for callback in self.callbacks:
            callback()

    def add_callback(self, callback: Callable[[], None]) -> None:
        self.callbacks.append(callback)
Here is a class which provides a "flat view" of multiple lists, where updates to the original lists are reflected in the flat view.
class FlatView:
    def __init__(self, lists: List[ListWrapper]) -> None:
        self.lists = lists
        self.update_lengths()
        for lst in self.lists:
            lst.add_callback(self.update_lengths)

    def update_lengths(self) -> None:
        self.sub_lengths = self._compute_overall_length()

    def _compute_overall_length(self) -> List[int]:
        lengths = [0]
        for lst in self.lists:
            lengths.append(lengths[-1] + len(lst))
        return lengths

    def __getitem__(self, index: int) -> Any:
        if index < 0 or index >= len(self):
            raise IndexError("list index out of range")
        list_index = self._find_list_index(index)
        sublist_index = index - self.sub_lengths[list_index]
        return self.lists[list_index][sublist_index]

    def _find_list_index(self, index: int) -> int:
        # Binary search to find the list that contains the index
        low, high = 0, len(self.sub_lengths) - 1
        while low < high:
            mid = (low + high) // 2
            if self.sub_lengths[mid] <= index < self.sub_lengths[mid + 1]:
                return mid
            elif index < self.sub_lengths[mid]:
                high = mid
            else:
                low = mid + 1
        return low

    def __len__(self) -> int:
        return self.sub_lengths[-1]

    def __repr__(self) -> str:
        return repr([item for lst in self.lists for item in lst])
Usage
The following wrapper function creates a FlatView instance for the provided list of lists. We wrap each list in a ListWrapper so that we can attach a callback function to update the view's overall lengths that are used to access the data.
def flat_view(lists: List[List[Any]]) -> Tuple[FlatView, List[ListWrapper]]:
    wrappers = [ListWrapper(lst) for lst in lists]
    return FlatView(wrappers), wrappers
Here is how you would use it. Please note that we need to modify the wrappers for the view to understand how the lists change.
For example, b[0] = "x" would not work. if __name__ == "__main__": a = ["a", "b", "c"] b = ["d", "e", "f"] view, [a_wrapper, b_wrapper] = flat_view([a, b]) print(view) # Output: ['a', 'b', 'c', 'd', 'e', 'f'] b_wrapper[0] = "x" print(view) # Output: ['a', 'b', 'c', 'x', 'e', 'f'] print(view[3], len(view)) # Output: 'x' 6 a_wrapper.append("y") print(view, len(view)) # Output: ['a', 'b', 'c', 'y', 'x', 'e', 'f'] 7 print(a, len(a)) # Output: ['a', 'b', 'c', 'y'] 4 print(b, len(b)) # Output: ['x', 'e', 'f'] 3
3
3
78,842,466
2024-8-7
https://stackoverflow.com/questions/78842466/split-a-polars-dataframe-into-multiple-chunks-with-groupby
Consider the following pl.DataFrames:
import datetime

import polars as pl

df_orig = pl.DataFrame(
    {
        "symbol": [*["A"] * 10, *["B"] * 8],
        "date": [
            *pl.datetime_range(
                start=datetime.date(2024, 1, 1),
                end=datetime.date(2024, 1, 10),
                eager=True,
            ),
            *pl.datetime_range(
                start=datetime.date(2024, 1, 1),
                end=datetime.date(2024, 1, 8),
                eager=True,
            ),
        ],
        "data": [*range(10), *range(8)],
    }
)

df_helper = pl.DataFrame({"symbol": ["A", "B"], "start_idx": [[0, 5], [0, 3]]})

chunk_size = 5

with pl.Config(tbl_rows=30):
    print(df_orig)
    print(df_helper)
shape: (18, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”
β”‚ symbol ┆ date                ┆ data β”‚
β”‚ ---    ┆ ---                 ┆ ---  β”‚
β”‚ str    ┆ datetime[ΞΌs]        ┆ i64  β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═════════════════════β•ͺ══════║
β”‚ A      ┆ 2024-01-01 00:00:00 ┆ 0    β”‚
β”‚ A      ┆ 2024-01-02 00:00:00 ┆ 1    β”‚
β”‚ A      ┆ 2024-01-03 00:00:00 ┆ 2    β”‚
β”‚ A      ┆ 2024-01-04 00:00:00 ┆ 3    β”‚
β”‚ A      ┆ 2024-01-05 00:00:00 ┆ 4    β”‚
β”‚ A      ┆ 2024-01-06 00:00:00 ┆ 5    β”‚
β”‚ A      ┆ 2024-01-07 00:00:00 ┆ 6    β”‚
β”‚ A      ┆ 2024-01-08 00:00:00 ┆ 7    β”‚
β”‚ A      ┆ 2024-01-09 00:00:00 ┆ 8    β”‚
β”‚ A      ┆ 2024-01-10 00:00:00 ┆ 9    β”‚
β”‚ B      ┆ 2024-01-01 00:00:00 ┆ 0    β”‚
β”‚ B      ┆ 2024-01-02 00:00:00 ┆ 1    β”‚
β”‚ B      ┆ 2024-01-03 00:00:00 ┆ 2    β”‚
β”‚ B      ┆ 2024-01-04 00:00:00 ┆ 3    β”‚
β”‚ B      ┆ 2024-01-05 00:00:00 ┆ 4    β”‚
β”‚ B      ┆ 2024-01-06 00:00:00 ┆ 5    β”‚
β”‚ B      ┆ 2024-01-07 00:00:00 ┆ 6    β”‚
β”‚ B      ┆ 2024-01-08 00:00:00 ┆ 7    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
shape: (2, 2)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ symbol ┆ start_idx β”‚
β”‚ ---    ┆ ---       β”‚
β”‚ str    ┆ list[i64] β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ A      ┆ [0, 5]    β”‚
β”‚ B      ┆ [0, 3]    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Now, I need to split the dataframe into two chunks of length 5 (chunk_size) grouped by the symbol column. The column start_idx indicates the rows at which to start each chunk within each group. That is, group A will be split into two chunks of length 5 starting at rows 0 and 5, while the chunks of group B start at rows 0 and 3. Finally, all chunks need to be concatenated on axis=0, whereby a new column split_idx indicates which split each row is coming from.
Here's what I am looking for: shape: (20, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ split_idx ┆ symbol ┆ date ┆ data β”‚ β”‚ ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═════════════════════β•ͺ══════║ β”‚ 0 ┆ A ┆ 2024-01-01 00:00:00 ┆ 0 β”‚ β”‚ 0 ┆ A ┆ 2024-01-02 00:00:00 ┆ 1 β”‚ β”‚ 0 ┆ A ┆ 2024-01-03 00:00:00 ┆ 2 β”‚ β”‚ 0 ┆ A ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ 0 ┆ A ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ 0 ┆ B ┆ 2024-01-01 00:00:00 ┆ 0 β”‚ β”‚ 0 ┆ B ┆ 2024-01-02 00:00:00 ┆ 1 β”‚ β”‚ 0 ┆ B ┆ 2024-01-03 00:00:00 ┆ 2 β”‚ β”‚ 0 ┆ B ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ 0 ┆ B ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ 1 ┆ A ┆ 2024-01-06 00:00:00 ┆ 5 β”‚ β”‚ 1 ┆ A ┆ 2024-01-07 00:00:00 ┆ 6 β”‚ β”‚ 1 ┆ A ┆ 2024-01-08 00:00:00 ┆ 7 β”‚ β”‚ 1 ┆ A ┆ 2024-01-09 00:00:00 ┆ 8 β”‚ β”‚ 1 ┆ A ┆ 2024-01-10 00:00:00 ┆ 9 β”‚ β”‚ 1 ┆ B ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ 1 ┆ B ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ 1 ┆ B ┆ 2024-01-06 00:00:00 ┆ 5 β”‚ β”‚ 1 ┆ B ┆ 2024-01-07 00:00:00 ┆ 6 β”‚ β”‚ 1 ┆ B ┆ 2024-01-08 00:00:00 ┆ 7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Keep in mind that list in column start_idx may be of variable length for each individual row. The length of each list determines the number of chunks for each group.
Here is a solution that fully stays within the polars expression API. The primary idea is to preprocess the helper dataframe into a dataframe of symbol, split_idx, and row_idx. Here, row_idx is the index of a row within a group defined by symbol and split index. It can serve as a "skeleton" and we can (after adding such a row index to df_orig) easily use it for a left-merge with df_orig. pl.Config().set_tbl_rows(-1) def preprocess_helper(df_helper: pl.DataFrame) -> pl.DataFrame: return ( df_helper .explode("start_idx") .with_columns( pl.int_range(pl.len()).over("symbol").alias("split_idx"), pl.int_ranges(pl.col("start_idx"), pl.col("start_idx") + chunk_size).alias("row_idx"), ) .explode("row_idx") ) ( preprocess_helper(df_helper) .join( df_orig.with_columns(pl.int_range(pl.len()).over("symbol").alias("row_idx")), on=["symbol", "row_idx"], how="left", ) .drop("row_idx", "start_idx") .sort("split_idx", "symbol") ) Note. The final pl.DataFrame.drop / pl.DataFrame.sort can be omitted if the exact columns / order of the output does not matter. shape: (20, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ symbol ┆ split_idx ┆ date ┆ data β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════════════════════β•ͺ══════║ β”‚ A ┆ 0 ┆ 2024-01-01 00:00:00 ┆ 0 β”‚ β”‚ A ┆ 0 ┆ 2024-01-02 00:00:00 ┆ 1 β”‚ β”‚ A ┆ 0 ┆ 2024-01-03 00:00:00 ┆ 2 β”‚ β”‚ A ┆ 0 ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ A ┆ 0 ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ B ┆ 0 ┆ 2024-01-01 00:00:00 ┆ 0 β”‚ β”‚ B ┆ 0 ┆ 2024-01-02 00:00:00 ┆ 1 β”‚ β”‚ B ┆ 0 ┆ 2024-01-03 00:00:00 ┆ 2 β”‚ β”‚ B ┆ 0 ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ B ┆ 0 ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ A ┆ 1 ┆ 2024-01-06 00:00:00 ┆ 5 β”‚ β”‚ A ┆ 1 ┆ 2024-01-07 00:00:00 ┆ 6 β”‚ β”‚ A ┆ 1 ┆ 2024-01-08 00:00:00 ┆ 7 β”‚ β”‚ A ┆ 1 ┆ 2024-01-09 00:00:00 ┆ 8 β”‚ β”‚ A ┆ 1 ┆ 2024-01-10 00:00:00 ┆ 9 β”‚ β”‚ B ┆ 1 ┆ 2024-01-04 00:00:00 ┆ 3 β”‚ β”‚ B ┆ 1 ┆ 2024-01-05 00:00:00 ┆ 4 β”‚ β”‚ B ┆ 1 ┆ 2024-01-06 00:00:00 ┆ 5 β”‚ β”‚ B ┆ 1 ┆ 2024-01-07 00:00:00 ┆ 6 β”‚ β”‚ B ┆ 1 ┆ 2024-01-08 00:00:00 ┆ 7 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
2
2
78,843,191
2024-8-7
https://stackoverflow.com/questions/78843191/numpy-on-small-arrays-elementary-arithmetic-operations-performances
I am not 100% positive that this question has a solution besides "that's the overhead, live with it", but you never know. I have a very simple set of elementary mathematical operations done on rather small 1D NumPy arrays (6 to 10 elements). The arrays' dtype is np.float32, while other inputs are standard Python floats. The differences in timings are reproducible on all machines I have (Windows 10 64 bit, Python 3.9.10 64 bit, NumPy 1.21.5 MKL). An example: def NumPyFunc(array1, array2, float1, float2, float3): output1 = (array2 - array1) / (float2 - float1) output2 = array1 + output1 * (float3 - float1) return output1, output2 Given these inputs: import numpy sz = 6 array1 = 3000.0 * numpy.random.uniform(size=(sz,)).astype(numpy.float32) array2 = 2222.0 * numpy.random.uniform(size=(sz,)).astype(numpy.float32) float1 = float(numpy.random.uniform(100000, 1e7)) float2 = float(numpy.random.uniform(100000, 1e7)) float3 = float(numpy.random.uniform(100000, 1e7)) I get on machine 1: %timeit NumPyFunc(array1, array2, float1, float2, float3) 3.33 Β΅s Β± 18 ns per loop (mean Β± std. dev. of 7 runs, 100000 loops each) And on machine 2: %timeit NumPyFunc(array1, array2, float1, float2, float3) 1.5 Β΅s Β± 19.4 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each) All nice and well, but I have to do these operations millions upon millions of times. One suggestion would be to use Numba LLVM JIT-compiler (which I know nothing about), but I heard it can get cumbersome to distribute an application with py2exe when Numba is involved. So I thought I'd make a simple Fortran subroutine and wrap it with f2py, just for fun: pure subroutine f90_small_arrays(n, array1, array2, float1, float2, float3, output1, output2) implicit none integer, intent(in) :: n real(4), intent(in), dimension(n) :: array1, array2 real(4), intent(in) :: float1, float2, float3 real(4), intent(out), dimension(n) :: output1, output2 output1 = (array2 - array1) / (float2 - float1) output2 = array1 + output1 * (float3 - float1) end subroutine f90_small_arrays and time it in a Python function like this: from f90_small_arrays import f90_small_arrays def FortranFunc(array1, array2, float1, float2, float3): output1, output2 = f90_small_arrays(array1, array2, float1, float2, float3) return output1, output2 I get on machine 1: %timeit FortranFunc(array1, array2, float1, float2, float3) 654 ns Β± 0.869 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each) And on machine 2: %timeit FortranFunc(array1, array2, float1, float2, float3) 286 ns Β± 5.92 ns per loop (mean Β± std. dev. of 7 runs, 1000000 loops each) Which is more than 5 times faster than NumPy, even though I am just doing basic math operations. While I get it that array creation has its own overhead, I wasn't expecting such a big ratio between the two. I have also tried to upgrade to NumPy 1.26.3, but it is actually 15% slower than NumPy 1.21.5... I can of course get the answer "just replace the NumPy code with the Fortran one", which will imply a loss of readability - the code doing the actual operation is in another file, a Fortran file. It may also be that there is nothing that can be done to narrow the gap between NumPy and Fortran, and the overhead of operations in NumPy arrays is what it is. But of course any ideas/suggestions are more than welcome :-) .
I am not 100% positive that this question has a solution besides "that's the overhead, live with it", but you never know.
Generally, Numpy is not optimized for computing small arrays (or arrays where the last target axis is small). For example, creating arrays, collecting them, analysing types, etc. is pretty expensive.
All nice and well, but I have to do these operations millions upon millions of times. One suggestion would be to use Numba (which I know nothing about), but I heard it can get cumbersome to distribute an application with py2exe when Numba is involved.
This is indeed a usual way to fix that. Note that Cython can be better for the packaging. However, Numba can easily speed up the creation of Numpy arrays (without fixing it completely), while this is more difficult in Cython. The main reason is that Numba uses its own implementation of Numpy and can optimize its code thanks to the JIT compiler, while CPython and Cython do not. The way to speed up Numpy operations in Cython is typically to create views on the arrays. Creating new small arrays and calling Numpy functions is still expensive. Alternatives include directly calling native functions (C, C++, Fortran, Rust, etc.), possibly with the help of some binding library/tool. Such native functions can be called from Cython, ctypes, CFFI, etc.
So I thought I'd make a simple Fortran subroutine
This is a pretty good solution regarding the target granularity (though calling it in a portable way can sometimes be a bit tedious). Note that the function does not create new arrays and switches from CPython to native code only once (not to mention the same applies to type checking). This matters regarding overheads.
While I get it that array creation has its own overhead, I wasn't expecting such a big ratio between the two.
Numpy needs to perform significantly more operations than your Fortran function. Numpy is designed to be generic, while your Fortran function addresses one specific problem. NumPyFunc performs at least 6 function calls to Numpy and Numpy cannot optimize that because of the CPython interpreter. Interpreters are painfully slow (especially CPython due to the very dynamic properties of the language and its numerous high-level features). It also creates 6 temporary arrays (some might be recycled now to speed up operations on large arrays, but this optimization introduces even more overhead). Each array (which is actually a view on a raw memory buffer that is dynamically reference counted) is a CPython object which needs to be allocated, reference counted and freed. During each operation, Numpy needs to dynamically check the type, shape, dimensions, strides and target axis of the input array objects. It also needs to check/perform some high-level features like wrap-around and broadcasting (for each dimension). Because of the numerous possible input parameters, Numpy performs this by creating generic internal multi-dimensional generators (1 per input array). This part of the Numpy code is sub-optimal (it could be optimized, though it is pretty complicated IMHO, so it has not been done much yet). For example, each of them is AFAIK dynamically allocated/freed. On top of that, Numpy is designed to be rather user-friendly, so it checks/reports issues like division by zero and supports special values like NaN, Inf, etc. (while trying to mitigate the associated overhead as much as possible). Last but not least, the CPython GIL (global interpreter lock) needs to be released and acquired again.
All of this is not an exhaustive list: there are more (technical) operations to do that I did not mention here (e.g. picking the best function to run so the operation is done in a SIMD-friendly way, the CPython fetching of Numpy module variables, etc.). Almost none of that is done by your Fortran code at runtime, and when it is, it is only done once instead of 6 times. Most of the operations are either not done or simply done at compile time. This is mainly possible because the code is specialized. This reasonably justifies the "5 times faster" code at this granularity (especially since a single cache miss requiring a fetch from the DRAM typically takes 50-100 ns).
It may also be that there is nothing that can be done to narrow the gap between NumPy and Fortran, and the overhead of operations in NumPy arrays is what it is.
While there are certainly ways to do micro-optimizations of the Numpy code, this will not result in a huge performance improvement. The key is simply not to use Numpy for arrays with only 6-10 items. In fact, calling a native function is already very expensive for that. Indeed, modern x86-64 CPUs can add 6-10 items of two arrays in only a few CPU cycles thanks to SIMD instructions! The slowest purely-computational part is certainly checking the size of the array and supporting non-power-of-two sizes without doing any out-of-bound accesses. The computing part of your Fortran function should take a very tiny fraction of the reported time! Without considering possible cache misses or DRAM accesses, it should certainly take no more than 5-30 ns! All the rest (90-98%) is actually pure overhead! At this scale, even a (non-inlined) native function call can be expensive, so there is not even a chance for Numpy/CPython to be that fast. To emphasise how fast your Fortran code can be, here is its assembly code (compiled with gfortran and -O2 optimizations):
f90_small_arrays_:
    mov r11, rdi
    mov rax, rcx
    movss xmm1, DWORD PTR [r8]
    mov rdi, rdx
    movss xmm2, DWORD PTR [rax]
    movsx rdx, DWORD PTR [r11]
    mov rcx, QWORD PTR [rsp+8]
    mov r10, QWORD PTR [rsp+16]
    subss xmm1, xmm2
    test edx, edx
    jle .L1
    sal rdx, 2
    xor eax, eax
.L3:
    movss xmm0, DWORD PTR [rdi+rax]
    subss xmm0, DWORD PTR [rsi+rax]
    divss xmm0, xmm1
    movss DWORD PTR [rcx+rax], xmm0
    add rax, 4
    cmp rdx, rax
    jne .L3
    movss xmm1, DWORD PTR [r9]
    xor eax, eax
    subss xmm1, xmm2
.L4:
    movss xmm0, DWORD PTR [rcx+rax]
    mulss xmm0, xmm1
    addss xmm0, DWORD PTR [rsi+rax]
    movss DWORD PTR [r10+rax], xmm0
    add rax, 4
    cmp rdx, rax
    jne .L4
.L1:
    ret
On my i5-9600KF CPU, the function prologue/epilogue takes 5-20 cycles. The first loop takes 3 cycles/item and the second takes 1.5 cycles/item. Note the number of cycles is so small because instructions are pipelined and executed in parallel. In the end, the whole function should take 50-100 cycles on my machine for 10 items, that is, roughly 10-20 ns! In practice, this code does not benefit from SIMD instructions. It can be even faster with -O3 (producing 128-bit SIMD code), since 4 items can be computed at once for nearly the same cost as 1. In practice, the arrays are so small that it should not result in more than a twice-faster (~10 ns) execution, and most of the time should not even really be spent in the main SIMD loop (<5 ns on my CPU). Moreover, the two loops can be merged.
In this case, I expect the main computing overhead to actually come from the misprediction of loop iterations (because the CPU does not know the size of the array unless this code is run very frequently with the same array size), not to mention that calling this function from CPython takes hundreds of nanoseconds. Thus, if you really care about performance, you should avoid calling the Fortran code many times and put as much work as possible in your native function(s) (work currently done in CPython). Moreover, providing the size of the arrays at compile time can also strongly improve performance in this case. Using -ffast-math can improve that even further (by using a fast-reciprocal instruction instead of a slow division), though you should be careful about its implications. The latter results in code of 36 instructions taking only 5-10 ns on my CPU (compared to ~300 ns for calling the Fortran function from CPython and ~3000 ns for the Numpy code). It is so fast that the performance of the native code is bound by the latency of the CPU instructions. Thus, my CPU can pipeline the computation of many arrays (of 10 items) simultaneously while taking only 2 ns per array! Newer CPUs can even reach 1 ns/array! This is much faster than executing any CPython statement (including a basic variable assignment) and also much faster than even a native function call. Not to mention that reference counting on a single CPython object/Numpy array or GIL operations are far slower than this too (the atomic operations performed usually take dozens of ns). This is why nearly all CPython modules cannot really help you (compiler-based modules are the best solution if you want to keep your main code in CPython, despite the huge overhead required to convert CPython types to native ones and call native functions from the interpreter).
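For reference, a minimal sketch of the Numba route mentioned above (assuming numba is installed; the function body is copied from the question, the decorator and the warm-up call are my additions):
import numba as nb
import numpy as np

@nb.njit(cache=True)  # compiled to native code; the CPython call overhead discussed above still applies
def numba_func(array1, array2, float1, float2, float3):
    output1 = (array2 - array1) / (float2 - float1)
    output2 = array1 + output1 * (float3 - float1)
    return output1, output2

# The first call triggers JIT compilation; subsequent calls reuse the compiled code.
a1 = np.random.rand(6).astype(np.float32)
a2 = np.random.rand(6).astype(np.float32)
numba_func(a1, a2, 1.0, 2.0, 3.0)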
3
2
78,843,497
2024-8-7
https://stackoverflow.com/questions/78843497/how-to-collect-process-local-state-after-multiprocessing-pool-imap-unordered-com
After using a Pool from Python's multiprocessing to parallelize some computationally intensive work, I wish to retrieve statistics that were kept local to each spawned process. Specifically, I have no real-time interest in these statistics, so I do not want to bear the overhead that would be involved with using a synchronized data structure in which statistics could be kept. I've found some suggestions where the idea would be to use a second pool.map() with a different function which returns the state local to its worker. I believe this to be incorrect since there is no guarantee that this second invocation would lead to exactly one job being distributed to every worker process in the pool. Is there a mechanism that would achieve this? Skeleton snippet where it's unclear what can be done after the imap_unordered() completes. import multiprocessing as mp import random local_stats = {"success": 0, "fails": 0} def do_work(_): if random.choice([True, False]): local_stats["success"] += 1 else: local_stats["fails"] += 1 if __name__ == "__main__": with mp.Manager() as manager: with mp.Pool(processes=2) as pool: results = list(pool.imap_unordered(do_work, range(1000))) # after .imap_unordered() completes, aggregate "local_stats" from each process in the pool # by either retrieving its local_stats, or having them push those stats to the main process # ???
IDK if this is the best solution (could you just log to a file as you go, then parse the files from each child afterwards?), but you mentioned ensuring a task is evenly distributed to all workers. This would commonly be achieved with a Barrier. It is somewhat difficult to pass certain things (like locks, queues, and such) to child processes, so we pass the barrier as an argument to the initialization function for all child processes. Here's an example:
import multiprocessing as mp
import random

local_stats = {"success": 0, "fails": 0}

def do_work(_):
    global local_stats
    if random.choice([True, False]):
        local_stats["success"] += 1
    else:
        local_stats["fails"] += 1

def init_pool(barrier):
    # save barrier to child process globals at pool init (barrier must be sent to child at process creation)
    global sync_barrier
    sync_barrier = barrier

def return_stats(_):  # needs dummy arg to call with "map" functions
    global sync_barrier
    sync_barrier.wait()  # wait for other processes to also be waiting
    # may raise BrokenBarrierError
    global local_stats
    return local_stats

if __name__ == "__main__":
    nprocs = 2  # re-use number of processes with barrier and pool constructor to make sure you use the same number
    barrier = mp.Barrier(nprocs)
    with mp.Pool(processes=nprocs, initializer=init_pool, initargs=(barrier,)) as pool:
        results = list(pool.imap_unordered(do_work, range(1000)))
        stats = list(pool.imap_unordered(return_stats, range(nprocs), 1))  # force chunksize to 1 to ensure 1 task per child process
        print(stats)
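If you then want a single set of totals rather than the per-worker list, a small follow-up could look like this (collections.Counter is my addition, not part of the original snippet):
from collections import Counter

# stats is the list of per-process dicts collected above
totals = sum((Counter(s) for s in stats), Counter())
print(dict(totals))  # e.g. {'success': 507, 'fails': 493}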
2
1
78,843,479
2024-8-7
https://stackoverflow.com/questions/78843479/use-pandas-operations-to-transpose-and-reindex
I have the following dataframe: Sample ID 'Deinococcus soli' Cha et al. 2014 16SrX (Apple proliferation group) 16SrXII (Stolbur group) 0 C1day1_barcode01 21 1 0 1 C1day21_barcode19 22 0 0 2 C3day1_barcode03 13 0 0 3 C3day14_barcode15 14 2 2 4 T1day21_barcode22 19 1 1 This is my desired output: Sample ID C1day1_barcode01 C1day21_barcode19 C3day1_barcode03 C3day14_barcode15 T1day21_barcode22 0 'Deinococcus soli' Cha et al. 2014 21 22 13 14 19 1 16SrX (Apple proliferation group) 1 0 0 2 1 2 16SrXII (Stolbur group) 0 0 0 2 1 I have used transpose and I tried inserting a new index & renaming the columns as well as resetting the index but in either cases I get something like this: 0 Sample ID 0 1 2 3 4 0 Sample ID C1day1_barcode01 C1day21_barcode19 C3day1_barcode03 C3day14_barcode15 T1day21_barcode22 1 'Deinococcus soli' Cha et al. 2014 21 22 13 14 19 2 16SrX (Apple proliferation group) 1 0 0 2 1 3 16SrXII (Stolbur group) 0 0 0 2 1 or Sample ID index C1day1_barcode01 C1day21_barcode19 C3day1_barcode03 C3day14_barcode15 T1day21_barcode22 0 'Deinococcus soli' Cha et al. 2014 21 22 13 14 19 1 16SrX (Apple proliferation group) 1 0 0 2 1 2 16SrXII (Stolbur group) 0 0 0 2 1 Though I tried renaming index to Sample ID I'm unable to remove the "Sample ID" title of the index even by setting index.name = None. What do I do?
set_index on the "Sample ID" column to set it aside, transpose to reshape, rename_axis to exchange the axis names, and reset_index to move back the IDs as column: out = (df.set_index('Sample ID').T .rename_axis(index='Sample ID', columns=None) .reset_index() ) Alternative: col = 'Sample ID' out = (df.set_index(col).T .rename_axis(index=col, columns=None) .reset_index() ) Output: Sample ID C1day1_barcode01 C1day21_barcode19 C3day1_barcode03 C3day14_barcode15 T1day21_barcode22 0 'Deinococcus soli' Cha et al. 2014 21 22 13 14 19 1 16SrX (Apple proliferation group) 1 0 0 2 1 2 16SrXII (Stolbur group) 0 0 0 2 1
2
2
78,843,358
2024-8-7
https://stackoverflow.com/questions/78843358/efficient-excel-sumif-equivalent-in-python
I am trying to figure out how to create the equivalent of SUMIF in Python. The solution I have currently works, but it is way too inefficient and it takes 20 minutes to run. What would be the most efficient way to reach the result that i want? Here is the what I am doing currently boiled down to a very simple form. In the "real" code there are many more conditions. **sales_data customer_1** Transactions | Product Dimension 4 | Product Dimension 2 | Product Dimension 3 | sum_of_sales -------------- | ------------------- | ------------------- | --------------------| ------------- 1 | 50 | F80 | ETQ546 | 80 2 | 50 | F80 | SAS978 | 20 3 | 50 | C36 | JBH148 | 10 4 | 50 | F80 | ETQ546 | 80 5 | 50 | F80 | SAS978 | 20 6 | 50 | C36 | JBH148 | 10 7 | 20 | A20 | OPW269 | 15 8 | 20 | A20 | DUW987 | 65 9 | 20 | v90 | OWQ897 | 47 **condition_types BEFORE ADDING SUMIF TO TABLE** Transactions | Type | Product Dimensions | -------------- | ------------------- | ------------------- | customer_1 | ABC | 50 | customer_1 | DEF | F80 | customer_1 | GHI | JBH148 | **condition_types AFTER ADDING SUMIF TO TABLE** Transactions | Type | Product Dimensions | sum_of_sales -------------- | ------------------- | ------------------- | ------------- customer_1 | ABC | 50 | 220 customer_1 | DEF | F80 | 200 customer_1 | GHI | JBH148 | 20 Define the sumif function def sumif(row, value_column): if row['Type'] == "ABC": filtered_data = sales_data.loc[ (sales_data['Product_dimension_4'] == row['Product Dimensions']) ] elif row['Type'] == "DEF" and row['Product Dimensions'] in sales_data['Product_dimension_2'].unique(): filtered_data = sales_data.loc[ (sales_data['Product_dimension_2'] == row['Product Dimensions']) ] elif row['Type'] == "GHI" and row['Product Dimensions'] in sales_data['Product_dimension_3'].unique(): filtered_data = sales_data.loc[ (sales_data['Product_dimension_3'] == row['Product Dimensions']) ] else: return 0 # Return 0 instead of an empty string for consistency return filtered_data[value_column].sum() Apply the sumif function using loc condition_types['sum_of_sales'] = condition_types.apply(lambda row: sumif(row, value_column="sum_of_sales"), axis=1) I hope this is clear enough and the example is not too complicated.
For a more general approach, you can melt the sales_data to get a relationship between 'Product Dimensions' and 'sum_of_sales', then groupby on 'Product Dimensions', aggregate with sum, and merge the result onto condition_types:
sales_data = (
    pd.melt(
        sales_data,
        id_vars=["sum_of_sales"],
        value_vars=sales_data.filter(like="Product Dimension").columns,
        value_name="Product Dimensions",
    )
    .groupby("Product Dimensions", as_index=False)["sum_of_sales"]
    .sum()
)

condition_types = condition_types.merge(sales_data, how="left")

  Transactions Type Product Dimensions  sum_of_sales
0   customer_1  ABC                 50           220
1   customer_1  DEF                F80           200
2   customer_1  GHI             JBH148            20

But if each 'Type' must respect a relationship with a specific 'Product Dimension' column, you can create a mapping and also use 'Type' to group and merge:
m = {
    "Product Dimension 4": "ABC",
    "Product Dimension 2": "DEF",
    "Product Dimension 3": "GHI",
}

sales_data = pd.melt(
    sales_data,
    id_vars=["Transactions", "sum_of_sales"],
    value_vars=sales_data.filter(like="Product Dimension").columns,
    var_name="Product Dimension Type",
    value_name="Product Dimensions",
)
sales_data["Type"] = sales_data["Product Dimension Type"].map(m)

sales_data = sales_data.groupby(["Product Dimensions", "Type"], as_index=False)[
    "sum_of_sales"
].sum()

condition_types = condition_types.merge(
    sales_data, on=["Product Dimensions", "Type"], how="left"
)
Output is the same for the provided data, but may differ if you have repeated values in different 'Product Dimension' columns.
3
2
78,820,308
2024-8-1
https://stackoverflow.com/questions/78820308/how-can-i-call-a-java-class-method-from-python
I am making an Android app in Python using briefcase from BeeWare that must start a service. And I have this code...
This is the relevant code from file MainActivity.java:
package org.beeware.android;

import com.chaquo.python.Kwarg;
import com.chaquo.python.PyException;
import com.chaquo.python.PyObject;
import com.chaquo.python.Python;
import com.chaquo.python.android.AndroidPlatform;

public class MainActivity extends AppCompatActivity {
    public static MainActivity singletonThis;

    protected void onCreate(Bundle savedInstanceState) {
        singletonThis = this;
        ... start Python
    }

    public void startMyService() {
        Intent intent = new Intent(this, MyService.class);
        startService(intent);
    }
And this is the relevant code from app.py that my intuition came up with:
from chaquopy import Java

class Application(toga.App):
    ...UI code here

    def start_tcp_service(self, widget):
        msg = 'START pressed !'
        print(msg); self.LogMessage(msg)
        self.CallJavaMethod('startMyService')

    def CallJavaMethod(self, method_name):
        MainActClass = Java.org.beeware.android.MainActivity
        MainActivity = MainActClass.singletonThis
        method = getattr(MainActivity, method_name)
        method()
Now, when I try to run the project with briefcase run android -u on my Android phone, through the USB debugging bridge, I get the error:
E/AndroidRuntime: java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.myapp/org.beeware.android.MainActivity}: com.chaquo.python.PyException: ModuleNotFoundError: No module named 'chaquopy'
It seems that there isn't any module with the name chaquopy. I tried to install it with pip, but it is not found. But then, how can I access the MainActivity methods from Python? What is the correct module to include?
I found here some documentation that says "The java module provides facilities to use Java classes and objects from Python code.". I tried to import java but this is not found either...
It seems that this page tells how to access Java from Python, but I don't understand all that is there, because this is my first interaction with Java and Android...
I found it! It goes like this... from org.beeware.android import MainActivity class Application(toga.App): # ...UI code here def start_tcp_service(self, widget): msg = 'START pressed!' print(msg); self.LogMessage(msg) self.CallJavaMethod('startMyService') def CallJavaMethod(self, method_name): MainActInst = MainActivity.singletonThis method = getattr(MainActInst, method_name) method()
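As a side note, if the method name is known up front, the getattr indirection isn't strictly needed; under the same import, a direct call like the following should also work (untested sketch):
from org.beeware.android import MainActivity

MainActivity.singletonThis.startMyService()  # direct call instead of getattr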
2
4
78,836,105
2024-8-5
https://stackoverflow.com/questions/78836105/why-isnt-the-pytest-addoption-hook-run-with-the-configured-testpaths-usag
Summary: I'm trying to set up a custom pytest option with the pytest_addoption feature. But when trying to configure my project with a project.toml file while using the said custom option, I'm getting the following error: $ pytest --foo foo ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...] pytest: error: unrecognized arguments: --foo inifile: /home/vmonteco/Code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject.toml/pyproject.toml rootdir: /home/vmonteco/Code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject.toml Why is this problem occurring despite the configured test path and how could I solve it? Used versions are: Python 3.10.13 Pytest 8.1.1 How to reproduce: Step 1 - before organizing the project, it works: I start with a very simple test in a single directory. $ tree . β”œβ”€β”€ conftest.py └── test_foo.py 1 directory, 2 files $ conftest.py: import pytest def pytest_addoption(parser): parser.addoption("--foo", action="store") @pytest.fixture def my_val(request): return request.config.getoption("--foo") test_foo.py: def test_foo(my_val): assert my_val == "foo" $ pytest --foo bar =============================== test session starts =============================== platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.5.0 rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/00_simplest_case collected 1 item test_foo.py F [100%] ==================================== FAILURES ===================================== ____________________________________ test_foo _____________________________________ my_val = 'bar' def test_foo(my_val): > assert my_val == "foo" E AssertionError: assert 'bar' == 'foo' E E - foo E + bar test_foo.py:2: AssertionError ============================= short test summary info ============================= FAILED test_foo.py::test_foo - AssertionError: assert 'bar' == 'foo' ================================ 1 failed in 0.01s ================================ $ Step 2 - When adding an empty pyproject.toml and reorganizing the project, it fails: $ tree . β”œβ”€β”€ my_project β”‚ └── my_tests β”‚ β”œβ”€β”€ conftest.py β”‚ β”œβ”€β”€ __init__.py β”‚ └── test_foo.py └── pyproject.toml 3 directories, 4 files $ pytest --foo bar ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...] pytest: error: unrecognized arguments: --foo inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml/pyproject.toml rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml $ Notes: In doubt, I also added an __init__.py file. The pyproject.toml seems to be recognized despite not having the required [tool.pytest.ini_options] table, thus apparently contradicting the documentation. 
However, there's a simple workaround that seems to work in this specific case: Just manually passing the test path to my tests as command line argument seems enough to make things work correctly again: $ pytest --foo bar my_project/my_tests =============================== test session starts =============================== platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.5.0 rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/01_adding_pyproject_toml configfile: pyproject.toml collected 1 item my_project/my_tests/test_foo.py F [100%] ==================================== FAILURES ===================================== ____________________________________ test_foo _____________________________________ my_val = 'bar' def test_foo(my_val): > assert my_val == "foo" E AssertionError: assert 'bar' == 'foo' E E - foo E + bar my_project/my_tests/test_foo.py:2: AssertionError ============================= short test summary info ============================= FAILED my_project/my_tests/test_foo.py::test_foo - AssertionError: assert 'bar' == 'foo' ================================ 1 failed in 0.02s ================================ $ But I'd like to avoid that and I'd rather have my project correctly configured. Step 3 - Unsuccessfully trying to configure testpaths in the pyproject.toml. The best explanation I found so far relies on the following points from the documentation: The necessity to put pytest_addoption in an "initial" conftest.py: This hook is only called for initial conftests. pytest_addoption documentation What is actually an "initial conftest.py": Initial conftest are for each test path the files whose path match <test_path>/conftest.py or <test_test>/test*/conftest.py. by loading all β€œinitial β€œconftest.py files: determine the test paths: specified on the command line, otherwise in testpaths if defined and running from the rootdir, otherwise the current dir for each test path, load conftest.py and test*/conftest.py relative to the directory part of the test path, if exist. Before a conftest.py file is loaded, load conftest.py files in all of its parent directories. After a conftest.py file is loaded, recursively load all plugins specified in its pytest_plugins variable if present. Plugin discovery order at tool startup So, if I understand well: my error seems to occurs because if I don't provide an explicit test path, the current directory is used. But in that case, my conftest.py is too deep into the arborescence to be used as an initial conftest. With my workaround, explicitly passing a deeper test path solves this by making the conftest "initial" again. From this, it would seem appropriate to try to translate my command line argument path into a bit of configuration (testpaths) as shown in the relevant documentation. But when trying to run my command again, I still get the same error: $ cat pyproject.toml [tool.pytest.ini_options] testpaths = [ "my_project/my_tests", ] $ tree . β”œβ”€β”€ my_project β”‚ └── my_tests β”‚ β”œβ”€β”€ conftest.py β”‚ β”œβ”€β”€ __init__.py β”‚ └── test_foo.py └── pyproject.toml 3 directories, 4 files $ pytest --foo bar ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...] 
pytest: error: unrecognized arguments: --foo
  inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve/pyproject.toml
  rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve

$
I also tried to use a different kind of configuration file:
$ cat pytest.ini
[pytest]
testpaths = my_project/my_tests
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
  inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/03_with_pytest_ini/pytest.ini
  rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/03_with_pytest_ini

$
But it still doesn't solve the problem, even though the equivalent command-line argument works. Why?
Addendum - raw output of new step 2 reproduction session:
Script started on 2024-08-06 03:23:13+02:00 [TERM="tmux-256color" TTY="/dev/pts/8" COLUMNS="126" LINES="69"]
$ tree
.
β”œβ”€β”€ my_project
β”‚   └── my_tests
β”‚       β”œβ”€β”€ conftest.py
β”‚       β”œβ”€β”€ __init__.py
β”‚       └── test_foo.py
└── pyproject.toml

3 directories, 4 files
$ cat pyproject.toml
$ cat my_project/my_tests/conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption("--foo", action="store")


@pytest.fixture
def my_val(request):
    return request.config.getoption("--foo")
$ cat my_project/my_tests/__init__.py
$ cat my_project/my_tests/test_foo.py
def test_foo(my_val):
    assert my_val == "foo"
$ python --version
Python 3.10.13
$ pytest --version
pytest 8.1.1
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
  inifile: /home/vmonteco/code/MREs/new_pytest_MRE/pyproject.toml
  rootdir: /home/vmonteco/code/MREs/new_pytest_MRE

Script done on 2024-08-06 03:24:49+02:00 [COMMAND_EXIT_CODE="4"]
link to pastebin
It looks like it indeed was a usage error (and also facepalm-worthy material): pytest --foo=bar rather than pytest --foo bar.
$ pytest --foo bar
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --foo
  inifile: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve/pyproject.toml
  rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve

$ pytest --foo=bar
=============================== test session starts ===============================
platform linux -- Python 3.10.13, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/vmonteco/code/MREs/pytest__addoption__pyproject_toml/02_attempt_to_solve
configfile: pyproject.toml
testpaths: my_project/my_tests
collected 1 item

my_project/my_tests/test_foo.py F                                          [100%]

==================================== FAILURES =====================================
____________________________________ test_foo _____________________________________

my_val = 'bar'

    def test_foo(my_val):
>       assert my_val == "foo"
E       AssertionError: assert 'bar' == 'foo'
E
E         - foo
E         + bar

my_project/my_tests/test_foo.py:2: AssertionError
============================= short test summary info =============================
FAILED my_project/my_tests/test_foo.py::test_foo - AssertionError: assert 'bar' == 'foo'
================================ 1 failed in 0.02s ================================
$
Addendum: Here's an explanation of why pytest is designed like that.
2
2
78,839,084
2024-8-6
https://stackoverflow.com/questions/78839084/how-can-i-remove-nulls-in-the-process-of-unpivoting-a-polars-dataframe
I have a large polars dataframe that I need to unpivot. This dataframe contains lots of null values (at least half). I want to drop the nulls while unpivoting the dataframe. I already tried to unpivot the dataframe first and then filter it with drop_nulls() or similar approaches. However, this is too memory-intensive (on a machine with about 1TB RAM). Is there any way in which I can filter the dataset already during the process of unpivot? Any help is appreciated! Sample data. # in reality, this dataset has about 160k rows and columns # (square matrix), and is about 100GB df = { "A": [None, 2, 3], "B": [None, None, 2], "C": [None, None, None], "names": ["A", "B", "C"] } df = pl.DataFrame(df) df.unpivot(index = "names", variable_name = "names_2", value_name = "distance") Output. shape: (9, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ names ┆ names_2 ┆ distance β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═════════β•ͺ══════════║ β”‚ A ┆ A ┆ null β”‚ β”‚ B ┆ A ┆ null β”‚ β”‚ C ┆ A ┆ null β”‚ β”‚ A ┆ B ┆ 2 β”‚ β”‚ B ┆ B ┆ null β”‚ β”‚ C ┆ B ┆ null β”‚ β”‚ A ┆ C ┆ 3 β”‚ β”‚ B ┆ C ┆ 2 β”‚ β”‚ C ┆ C ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This could then be filtered (e.g. using df = df.drop_nulls()), but I would like to get this desired result directly from the unpivot. Expected output. shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ names ┆ names_2 ┆ distance β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═════════β•ͺ══════════║ β”‚ A ┆ B ┆ 2 β”‚ β”‚ A ┆ C ┆ 3 β”‚ β”‚ B ┆ C ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Very likely you can make the operation more efficient in runtime and memory consumption by using pl.LazyFrames and polars' streaming engine.
By using pl.LazyFrames, the melt / unpivot and filter / drop_nulls won't be executed eagerly, but first aggregated into a query plan. When collecting the lazy DataFrame (i.e. materialising it into a pl.DataFrame), the query plan can be optimised, taking into account subsequent operations.
Streaming will enable the processing to not be done all at once, but executed in batches, ensuring that the processed batches don't grow larger than memory.
(
    df
    # convert to pl.LazyFrame
    .lazy()
    # create query plan
    .unpivot(
        index="names", variable_name="names_2", value_name="distance"
    )
    .drop_nulls()
    # collect pl.LazyFrame while using streaming engine
    .collect(streaming=True)
)
Note. Tentative tests on my machine give large improvements in runtime and memory consumption.
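If even the collected result risks being larger than memory, a variation on the same idea is to stream the result straight to disk instead of materialising it. sink_parquet is part of the lazy API; the output filename below is just a placeholder and I haven't benchmarked this on data of the stated size:
(
    df.lazy()
    .unpivot(index="names", variable_name="names_2", value_name="distance")
    .drop_nulls()
    .sink_parquet("distances.parquet")  # streams batches to disk, never collects the full result in RAM
)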
3
2
78,829,500
2024-8-3
https://stackoverflow.com/questions/78829500/querying-data-from-simbad-using-astroquery
I'm making a script in Python to get information for all objects from the NGC and IC catalogs. Actually, I already have this information from OpenNGC; however, the coordinates don't have the same precision, so I need to combine both dataframes.
What I want is: the name, RA in J2000, Dec in J2000 and the type. What I also would like, but seems still more difficult: the constellation and the magnitude (flux B).
What I'm getting: a lot of repeated results. For example, for the cluster NGC 188 I got a lot of lines, one for every object in this cluster, with the individual magnitudes. So I removed the magnitudes (flux B) to get only the objects. I will insert the magnitudes manually, if necessary, from the OpenNGC.
This is my code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from astroquery.simbad import Simbad
from astropy.table import Table

# Get all the NGC/IC objects
def query_deep_sky_objects(catalog):
    adql_query = f"""
    SELECT TOP 10000 main_id, otype, ra, dec
    FROM basic
    WHERE main_id LIKE 'NGC%' OR main_id LIKE 'IC%'
    ORDER BY main_id ASC
    """
    result = Simbad.query_tap(adql_query)
    return result

objects = query_deep_sky_objects('NGC')
objects.write('simbad_objects.csv', format='csv', overwrite=True)
I'm very tired, since I've spent all day searching for catalogs with enough precision. I also used another catalog taken from Vizier ('VII/118/ngc2000') which, like OpenNGC, doesn't have the precision I need. I'm making a sky atlas for my publications. Without the required precision, DSOs and stars appear out of place.
You're almost there. You can change a bit the ADQL by joining the table ident that contains all identifiers (and not only the main one), so that you don't miss any sources. The fluxes are in the table flux. I remove the duplicates with DISTINCT. SELECT DISTINCT TOP 100 main_id, otype, ra, dec, flux, flux_err, filter, flux.bibcode as flux_origin_of_value FROM basic JOIN ident on oid = ident.oidref JOIN flux on oid = flux.oidref WHERE (id LIKE 'NGC%' OR id LIKE 'IC%') AND filter = 'B' ORDER BY main_id ASC This gives me results like main_id |otype| ra | dec | flux |flux_err|filter|flux_origin_of_value -------------------------|-----|------------------|-------------------|------|--------|------|--------------------- "* 4 Cas" |"V*" |351.2094267145825 |62.28281016386695 |6.592 |0.015 |"B" |"2000A&A...355L..27H" "* 7 Sgr" |"*" |270.71291093894 |-24.28246728687 |5.86 | |"B" | "* 9 Mon" |"*" |96.75365908689 |-4.355646374689999 |6.34 |0.015 |"B" |"2000A&A...355L..27H" "* 9 Sgr" |"Em*"|270.96852085459 |-24.36073118167 |5.97 | |"B" |"2002yCat.2237....0D" "* 10 Mon" |"bC*"|96.98986680181083 |-4.7621476108591665|4.861 |0.014 |"B" |"2000A&A...355L..27H" "* 12 CMa" |"a2*"|101.75618136296002|-21.01540324902 |5.89 | |"B" |"2002yCat.2237....0D" "* 12 Mon" |"V*" |98.08001622857 |4.85599827378 |6.82 |0.007 |"B" |"1993A&AS..100..591O" "* 15 Mon" |"Be*"|100.24441512044264|9.895756366805493 |4.45 |0.01 |"B" |"2002A&A...384..180F" "* 18 Vul" |"SB*"|302.63973830658 |26.90416796478 |5.586 |0.014 |"B" |"2000A&A...355L..27H" "* 19 Vul" |"*" |302.94989153336996|26.808991088130004 |6.879 |0.015 |"B" |"2000A&A...355L..27H" "* 20 Vul" |"Be*"|303.00292297134 |26.47880660233 |5.81 | |"B" |"2002yCat.2237....0D" "* 61 And" |"s*b"|34.76855215623 |57.13549863046 |7.0 | |"B" |"2002yCat.2237....0D" "* 103 Tau" |"SB*"|77.02759314779 |24.265174608780004 |5.56 | |"B" |"2002yCat.2237....0D" "* b Per" |"SB*"|64.56092274545583 |50.295493065878055 |4.639 |0.014 |"B" |"2000A&A...355L..27H" "* c Pup" |"*" |116.31373389138793|-37.968585259939175|5.34 | |"B" |"2002yCat.2237....0D" "* d01 Pup" |"*" |114.86391100082002|-38.30802345296194 |4.65 | |"B" |"2002yCat.2237....0D" "* d02 Pup" |"Pu*"|114.93256487363001|-38.1392981022 |5.62 |0.01 |"B" |"2002A&A...384..180F" "* d03 Pup" |"Pu*"|114.94947883364 |-38.26065232835 |5.667 |0.014 |"B" |"2000A&A...355L..27H" "* d04 Pup" |"Be*"|114.99162956189 |-37.579424278110004|5.944 |0.014 |"B" |"2000A&A...355L..27H" "* iot Ori" |"SB*"|83.85827580502625 |-5.909888523506666 |2.53 | |"B" |"2002yCat.2237....0D" "* kap Cru" |"SB*"|193.45383256528 |-60.37624162151 |6.12 |0.01 |"B" |"2000A&A...355L..27H" "* mu. 
Nor" |"s*b"|248.52091401572375|-44.04531127887388 |4.99 | |"B" |"2002yCat.2237....0D" "* omi Vel" |"Pu*"|130.0733305508675 |-52.921909895522504|3.44 | |"B" | "* phi Cas" |"s*y"|20.020485277440002|58.23161323440999 |5.66 | |"B" |"2002yCat.2237....0D" "* tau CMa" |"SB*"|109.67702081318917|-24.954361319371664|4.25 | |"B" |"2002yCat.2237....0D" "* tet Car" |"SB*"|160.73917486416664|-64.39445022111111 |2.54 | |"B" |"2002yCat.2237....0D" "* tet02 Ori A" |"SB*"|83.84542183261 |-5.416064620030001 |6.3 | |"B" |"2002yCat.2237....0D" "* zet01 Sco" |"s*b"|253.49886304786997|-42.36202981513001 |5.31 | |"B" |"2002yCat.2237....0D" "2MASS J00225403-7205169"|"AB*"|5.725253264990001 |-72.08804841359 |13.802|0.003 |"B" |"2017A&A...607A.135W" "2MASS J00240156+6119328"|"RG*"|6.00651816708 |61.32580282815999 |14.68 | |"B" | "2MASS J00244007+6123511"|"RG*"|6.16707844506 |61.39751148121 |12.79 | |"B" | "2MASS J00254390-7206508"|"LP*"|6.43312095062 |-72.11409712722 |13.43 | |"B" | "2MASS J00294512+6011508"|"NIR"|7.438073724230001 |60.19745324021001 |15.37 | |"B" |"1961PUSNO..17..343H"
2
1
78,839,103
2024-8-6
https://stackoverflow.com/questions/78839103/how-to-return-plain-text-or-json-depending-on-condition
Is there a way to do something like this using FastAPI: @app.post("/instance/new", tags=["instance"]) async def MyFunction(condition): if condition: response = {"key": "value"} return response else: return some_big_plain_text The way it is coded now, the JSON is returned fine, but some_big_plain_text is not human friendly. If I do: @app.post("/instance/new", tags=["instance"], PlainTextResponse) I get an error when returning a JSON response.
You just need to return a FastAPI Response for the plain-text case:
from fastapi import FastAPI, Response

@app.post("/instance/new", tags=["instance"])
async def MyFunction(condition):
    if condition:
        response = {"key": "value"}
        return response
    else:
        return Response(content=some_big_plain_text, media_type="text/plain")
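Equivalently, fastapi.responses.PlainTextResponse (a thin wrapper that sets the media type for you) can be used for the text branch. A small sketch under the same assumptions as the snippet above (app, condition and some_big_plain_text defined as in the question):
from fastapi.responses import JSONResponse, PlainTextResponse

@app.post("/instance/new", tags=["instance"])
async def MyFunction(condition):
    if condition:
        return JSONResponse({"key": "value"})
    return PlainTextResponse(some_big_plain_text)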
3
0
78,814,860
2024-7-31
https://stackoverflow.com/questions/78814860/adding-status-text-to-a-textual-footer
I'm trying to create an editor where the Footer contains the usual bindings on the left and some status information on the right, for example the line number. The Footer in textual is very simple so I thought to extend it, but I'm unable to see both my label and the bindings of the base Footer.
This is my code:
class MyFooter(Footer):
    DEFAULT_CSS = """
    MyFooter {
        .right-label {
            text-align: right;
        }
    }
    """

    def compose(self) -> ComposeResult:
        for widget in super().compose():
            yield widget
        yield Label("This is the right side label", id="right-label")
To test it, you can use the first example of the tutorial:
from textual.app import App, ComposeResult
from textual.widgets import Header, Footer, Label

class MyFooter(Footer):
    DEFAULT_CSS = """
    MyFooter {
        .right-label {
            text-align: right;
        }
    }
    """

    def compose(self) -> ComposeResult:
        """Create child widgets for the footer."""
        for widget in super().compose():
            yield widget
        yield Label("This is the right side label", id="right-label")

class StopwatchApp(App):
    """A Textual app to manage stopwatches."""

    BINDINGS = [("d", "toggle_dark", "Toggle dark mode")]

    def compose(self) -> ComposeResult:
        """Create child widgets for the app."""
        yield Header()
        yield MyFooter()

    def action_toggle_dark(self) -> None:
        """An action to toggle dark mode."""
        self.dark = not self.dark

if __name__ == "__main__":
    app = StopwatchApp()
    app.run()
I'd recommend solving this by laying out multiple widgets, instead of overriding the Footer class. The Footer widget uses dock: bottom; layout: grid; grid-columns: auto, which makes this a little tricky. But you can wrap the Footer in a fixed-size container, and lay out your label next to that. from textual.app import App, ComposeResult from textual.widgets import Header, Footer, Label from textual.containers import Horizontal class StopwatchApp(App): """A Textual app to manage stopwatches.""" BINDINGS = [("d", "toggle_dark", "Toggle dark mode")] CSS = """ Horizontal#footer-outer { height: 1; dock: bottom; } Horizontal#footer-inner { width: 75%; } Label#right-label { width: 25%; text-align: right; } """ def compose(self) -> ComposeResult: """Create child widgets for the app.""" yield Header() with Horizontal(id="footer-outer"): with Horizontal(id="footer-inner"): yield Footer() yield Label("This is the right side label", id="right-label") def action_toggle_dark(self) -> None: """An action to toggle dark mode.""" self.dark = not self.dark if __name__ == "__main__": app = StopwatchApp() app.run()
3
4
78,839,287
2024-8-6
https://stackoverflow.com/questions/78839287/f2py-in-numpy-2-0-1-does-not-expose-variables-the-way-numpy-1-26-did-how-can-i
I used to run a collection of Fortran 95 subroutines from Python by compiling it via f2py. In the Fortran source I have a module with my global variables: MODULE GEOPLOT_GLOBALS IMPLICIT NONE INTEGER, PARAMETER :: N_MAX = 16 INTEGER, PARAMETER :: I_MAX = 18 INTEGER, PARAMETER :: J_MAX = 72 ... END MODULE GEOPLOT_GLOBALS The compiled file has the name "geoplot.cpython-312-darwin.so" and is in a subfolder named "geo". When using f2py in numpy 1.26, I could do this: import geo.geoplot as geo maxN = geo.geoplot_globals.n_max maxI = geo.geoplot_globals.i_max maxJ = geo.geoplot_globals.j_max Now, with numpy 2.0.1, I do the same but get the error message AttributeError: module 'geo.geoplot' has no attribute 'geoplot_globals' Which can be confirmed by listing the __dict__ attribute or using the getmembers module: They all list the Fortran subroutines and modules which contain source code, except for the geoplot_globals module which contains only variable declarations. So my question is: How am I supposed to access global Fortran variables from Python when using numpy 2.0? And please do not suggest to write all to a file in Fortran only to read it in Python. There should be a more direct way.
This must be a bug in f2py. See here: https://github.com/numpy/numpy/issues/27167 What got me unstuck is an ugly workaround: I simply added some useless dummy code to the module, like that: MODULE GEOPLOT_GLOBALS USE MOD_TYPES IMPLICIT NONE INTEGER, PARAMETER :: N_MAX = 16 INTEGER, PARAMETER :: I_MAX = 18 INTEGER, PARAMETER :: J_MAX = 72 ... CONTAINS SUBROUTINE DUMMY (UNSINN) INTEGER :: UNSINN OPEN(UNIT=29, FILE="FOR29.txt", STATUS = 'UNKNOWN') WRITE(29,"(I8)") UNSINN CLOSE (29) END SUBROUTINE DUMMY END MODULE GEOPLOT_GLOBALS Now the before missing module is carried over into Python and can be accessed in the usual way. >>> print(geo.geoplot_globals.__doc__) n_max : 'i'-scalar i_max : 'i'-scalar j_max : 'i'-scalar ...
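If you want to confirm what f2py actually exposed before and after the workaround, a quick diagnostic sketch (geo.geoplot is the compiled extension from the question, so this only runs in that environment):

import geo.geoplot as geo

# list everything the extension module exposes, hiding Python internals
exposed = [name for name in dir(geo) if not name.startswith("_")]
print(exposed)  # with the dummy subroutine in place, 'geoplot_globals' shows up again

maxN = geo.geoplot_globals.n_max  # and the parameters are reachable as before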
4
3
78,836,505
2024-8-5
https://stackoverflow.com/questions/78836505/how-to-evaluate-nested-boolean-logical-expressions-in-python
I'm working on a complex rule parser that has the following properties: A space character separates rules A "+" character indicates an "AND" operator A "," character indicates an "OR" operator A "-" indicates an optional element Tokens in parenthesis should be evaluated together I'm able to do simple rules but having trouble evaluating complex rules in nested parenthesis. Here's nested rule definition I'm trying to evaluate: definition = '((K00925 K00625),K01895) (K00193+K00197+K00194) (K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584) (K00399+K00401+K00402) (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))' Rule 1: ((K00925 K00625),K01895) This one is kind of tricky. Basically this rule either K00925 then separately K00625 OR just K01895 alone. Since it's all within a parenthesis set then that translates to (K00925 & K00625) OR K01895 as indicated by "," character. Rule 2: (K00193+K00197+K00194) All 3 items must be present as indicated by "+" sign Rule 3: (K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584) Everything except K00582 and K00583 because they are prefixed by "-" characters and when "+" is present then all items must be present Rule 4: (K00399+K00401+K00402) All 3 items must be present as indicated by "+" sign Rule 5: (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125)) This is simpler than it looks. Either (K22480+K22481+K22482) OR (K03388+K03389+K03390). For the last subrule, it is K08264+K08265,K03388+K03389+K03390+K14127+(Either K14126+K14128 OR K22516+K00125)) Here's the code I have that is almost correct: import re def rule_splitter(rule: str, split_characters: set = {"+", "-", ",", "(", ")", " "}) -> set: """ Split rule by characters. Args: rule (str): Boolean logical string. split_characters (list): List of characters to split in rule. Returns: set: Unique tokens in a rule. """ rule_decomposed = str(rule) if split_characters: for character in split_characters: character = character.strip() if character: rule_decomposed = rule_decomposed.replace(character, " ") unique_tokens = set(filter(bool, rule_decomposed.split())) return unique_tokens def find_rules(definition: str) -> list: """ Find and extract rules from the definition string. Args: definition (str): Complex boolean logical string with multiple rules. Returns: list: List of extracted rules as strings. """ rules = [] stack = [] current_rule = "" outside_rule = "" for char in definition: if char == '(': if stack: current_rule += char if outside_rule.strip(): rules.append(outside_rule.strip()) outside_rule = "" stack.append(char) elif char == ')': stack.pop() if stack: current_rule += char else: current_rule = f"({current_rule.strip()})" rules.append(current_rule) current_rule = "" else: if stack: current_rule += char else: outside_rule += char # Add any remaining outside_rule at the end of the loop if outside_rule.strip(): rules.append(outside_rule.strip()) return rules def evaluate_rule(rule: str, tokens: set, replace={"+": " and ", ",": " or "}) -> bool: """ Evaluate a string of boolean logicals. Args: rule (str): Boolean logical string. tokens (set): List of tokens in rule. replace (dict, optional): Replace boolean characters. Defaults to {"+":" and ", "," : " or "}. Returns: bool: Evaluated rule. 
""" # Handle optional tokens prefixed by '-' rule = re.sub(r'-\w+', '', rule) # Replace characters for standard logical formatting if replace: for character_before, character_after in replace.items(): rule = rule.replace(character_before, character_after) # Split the rule into individual symbols unique_symbols = rule_splitter(rule, replace.values()) # Create a dictionary with the presence of each symbol in the tokens token_to_bool = {sym: (sym in tokens) for sym in unique_symbols} # Parse and evaluate the rule using a recursive descent parser def parse_expression(expression: str) -> bool: expression = expression.strip() # Handle nested expressions if expression.startswith('(') and expression.endswith(')'): return parse_expression(expression[1:-1]) # Evaluate 'OR' conditions if ' or ' in expression: parts = expression.split(' or ') return any(parse_expression(part) for part in parts) # Evaluate 'AND' conditions elif ' and ' in expression: parts = expression.split(' and ') return all(parse_expression(part) for part in parts) # Evaluate individual token presence else: return token_to_bool.get(expression.strip(), False) return parse_expression(rule) def evaluate_definition(definition: str, tokens: set) -> dict: """ Evaluate a complex definition string involving multiple rules. Args: definition (str): Complex boolean logical string with multiple rules. tokens (set): Set of tokens to check against the rules. Returns: dict: Dictionary with each rule and its evaluated result. """ # Extract individual rules from the definition rules = find_rules(definition) # Evaluate each rule rule_results = {} for rule in rules: try: cleaned_rule = rule[1:-1] if rule.startswith('(') and rule.endswith(')') else rule # Remove outer parentheses if they exist result = evaluate_rule(cleaned_rule, tokens) except SyntaxError: # Handle syntax errors from eval() due to incorrect formatting result = False rule_results[rule] = result return rule_results # Example usage definition = '((K00925 K00625),K01895) (K00193+K00197+K00194) (K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584) (K00399+K00401+K00402) (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))' tokens = { 'K00925', 'K00625', # 'K01895', 'K00193', 'K00197', 'K00194', 'K00577', 'K00578', 'K00579', 'K00580', 'K00581', # 'K00582', 'K00584', 'K00399', 'K00401', 'K00402', 'K22480', # 'K22481', # 'K22482', 'K03388', 'K03389', 'K03390', # 'K08264', # 'K08265', 'K14127', 'K14126', 'K14128', 'K22516', # 'K00125' } result = evaluate_definition(definition, tokens) # result # {'((K00925 K00625),K01895)': False, # '(K00193+K00197+K00194)': True, # '(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)': True, # '(K00399+K00401+K00402)': True, # '(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))': True} Note that the implementation is splitting the first rule. 
Here is the following output I'm expecting: { '(K00925 K00625),K01895)':True, # Note this is True because of `K00925` and `K00625` together (b/c the parenthesis) OR K01895 being present '(K00193+K00197+K00194)':True, '(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)':True, # This is missing optional tokens '(K00399+K00401+K00402)':True, '(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125)':True, # This combination allows for {K03388, K03389, K03390, K14127} +{K14126+K14128} } Here's a graphical representation of the information flow for this complex rule definition: It should also be able to handle this rule: rule_edge_case='((K00134,K00150) K00927,K11389)' with the following query tokens: tokens = { 'K00134', 'K00927'}. This should be True because either (K00134 OR K00150) with K00927 are present which is sufficient. Edit 1: I had an error before and the last rule was actually True and not False. Edit 2: I've changed "items" to "tokens" and modified which tokens are used for evaluation to capture better edge cases.
A great tool for this job would be a proper parser for expression grammars. I'm using parsimonious for this answer, which allows you to define a BNF or eBNF like syntax for your grammar, to assist with decoding your DSL. edit I updated the grammar to check the or "," operator before checking for the rule break operator " ". This also means you check for the maybe_or in the brace_expression definition, rather than for the rules definition. Defining the grammar Given the rules, you've defined a few operators with different precedence. Break all values by the or operator "," Break all top level items by the rules " " Break all values by the and operator "+" or ignore operator "-" "+" and "-" have the same precedence Values may be defined in braces "(...)" These are to be recursively parsed by referencing the root definition. Given these rules, here's how you can define a grammar. from parsimonious.grammar import Grammar grammar = Grammar( """ maybe_or = (maybe_rule or+) / maybe_rule or = "," maybe_rule maybe_rule = (maybe_and rule+) / maybe_and rule = " " maybe_and maybe_and = (expression and+) / expression and = and_op expression and_op = "+" / "-" expression = brace_expression / variable brace_expression = "(" maybe_or ")" variable = ~r"[A-Z0-9]+" """ ) In this example, I've made explicit steps that allow following the operator precedence in the grammar. Depending on your descent parser, you may wish to order differently. In the case of parsimonious, items are greedily captured, meaning we want to evaluate the long form of the definition first. Defining containers Now that we have a grammar, we can construct containers to represent its hierarchy. We'll want to be able to test against the items set by evaluating each container against its children. At this point we can consider which of the operators need to be defined by separate containers. The and-operation "+" and rules operator " " have the same boolean evaluation, so we can combine them to a single And class. Other containers would be Or, Ignored and Var to represent the bool var codename. We'll also benefit from a str repr to see whats going on in the case of errors, which we can use to dump out a normalized version of the expression. from dataclasses import dataclass from typing import List, Protocol class Matcher(Protocol): def matches(self, variables: List[str]) -> bool: pass @dataclass class Var: value: str def __str__(self): return self.value def matches(self, variables: List[str]) -> bool: return self.value in variables @dataclass class Ignored: value: Matcher def __str__(self): return f"{self.value}?" def matches(self, variables: List[str]) -> bool: return True @dataclass class And: values: List[Matcher] def __str__(self): return "(" + "+".join(map(str, self.values)) + ")" def matches(self, variables: List[str]) -> bool: return all(v.matches(variables) for v in self.values) @dataclass class Or: values: List[Matcher] def __str__(self): return "(" + ",".join(map(str, self.values)) + ")" def matches(self, variables: List[str]) -> bool: return any(v.matches(variables) for v in self.values) Parsing the grammar With a grammar and a set of containers we can now begin to unpack the statement. We can use the NodeVistor class from parsimonious for this. Each definition in the grammar can be given its own handler method to use while unpacking values. You'll want to add lots of test cases for this class, as the parsing logic used is highly dependent on the grammar being evaluated. 
I've found if the grammar is logical, so will the decoder methods be on the NodeVistor implementation. from parsimonious.nodes import NodeVisitor class Visitor(NodeVisitor): def visit_maybe_rule(self, node, visited_children): # If there are multiple rules, combine them. children, *_ = visited_children if isinstance(children, list): return And([children[0], *children[1]]) return children def visit_rule(self, node, visited_children): # Strip out the " " rule operator child token return visited_children[1] def visit_maybe_or(self, node, visited_children): # If there are multiple or values, combine them. children, *_ = visited_children if isinstance(children, list): return Or([children[0], *children[1]]) return children def visit_or(self, node, visited_children): # Strip out the "," or operator child token return visited_children[1] def visit_maybe_and(self, node, visited_children): # If there are multiple and values, combine them. children, *_ = visited_children if isinstance(children, list): return And([children[0], *children[1]]) return children def visit_and(self, node, visited_children): # Strip out the operator child token, and # handle the case where we ignore values. if visited_children[0] == "-": return Ignored(visited_children[1]) return visited_children[1] def visit_and_op(self, node, visited_children): # get the text of the operator. return node.text def visit_expression(self, node, visited_children): # expressions only have one item return visited_children[0] def visit_brace_expression(self, node, visited_children): # Strip out the "(" opening and ")" closing braces return visited_children[1] def visit_variable(self, node, visited_children): # Parse the variable name return Var(node.text) def generic_visit(self, node, visited_children): # Catchall response. return visited_children or node Testing Following your example set, we can see each output and how it is being evaluated quite clearly. items = { 'K00925', 'K00193', 'K00197', 'K00194', 'K00577', 'K00578', 'K00579', 'K00580', 'K00581', 'K00582', 'K00584', 'K00399', 'K00401', 'K00402', 'K22480', 'K22481', 'K22482', 'K03388', 'K03389', 'K03390', 'K08264', 'K08265', 'K14127', 'K14126', 'K22516', } definitions = [ "((K00925 K00625),K01895)", "(K00193+K00197+K00194)", "(K00577+K00578+K00579+K00580+K00581-K00582-K00583+K00584)", "(K00399+K00401+K00402)", "(K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125))", "((K00134,K00150) K00927,K11389)", ] for definition in definitions: tree = grammar.parse(definition) val = Visitor().visit(tree) print(f"Test: {definition}") print(f"Parsed as: {val}") print(f"Result: {val.matches(items)}") print() Which outputs as expected: Test: ((K00925 K00625),K01895) Parsed as: ((K00925+K00625),K01895) Result: False Test: (K00193+K00197+K00194) Parsed as: (K00193+K00197+K00194) Result: True Test: (K00577+K00578+K00579+K00580 K00581-K00582-K00583+K00584) Parsed as: ((K00577+K00578+K00579+K00580)+(K00581+K00582?+K00583?+K00584)) Result: True Test: (K00399+K00401+K00402) Parsed as: (K00399+K00401+K00402) Result: True Test: (K22480+K22481+K22482,K03388+K03389+K03390,K08264+K08265,K03388+K03389+K03390+K14127+(K14126+K14128,K22516+K00125)) Parsed as: ((K22480+K22481+K22482),(K03388+K03389+K03390),(K08264+K08265),(K03388+K03389+K03390+K14127+((K14126+K14128),(K22516+K00125)))) Result: True Test: ((K00134,K00150) K00927,K11389) Parsed as: (((K00134,K00150)+K00927),K11389) Result: False
5
3
78,838,421
2024-8-6
https://stackoverflow.com/questions/78838421/ollama-with-rag-for-local-utilization-to-chat-with-pdf
I am trying to build ollama usage by using RAG for chatting with pdf on my local machine. I followed this GitHub repo: https://github.com/tonykipkemboi/ollama_pdf_rag/tree/main The issue is when I am running code, there is no error, but the code will stop at embedding and will stop after that. I have attached all possible logs along with ollama list. import logging from langchain_community.document_loaders import UnstructuredPDFLoader from langchain_community.embeddings import OllamaEmbeddings from langchain_text_splitters import RecursiveCharacterTextSplitter from langchain_community.vectorstores import Chroma from langchain.prompts import ChatPromptTemplate, PromptTemplate from langchain_core.output_parsers import StrOutputParser from langchain_community.chat_models import ChatOllama from langchain_core.runnables import RunnablePassthrough from langchain.retrievers.multi_query import MultiQueryRetriever # Configure logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') local_path = "D:/KnowledgeSplice/ollama_pdf_rag-main/WEF_The_Global_Cooperation_Barometer_2024.pdf" try: # Local PDF file uploads if local_path: loader = UnstructuredPDFLoader(file_path=local_path) data = loader.load() logging.info("Loading of PDF is done") else: logging.error("Upload a PDF file") raise ValueError("No PDF file uploaded") # Preview first page logging.info(f"First page content preview: {data[0].page_content[:500]}...") # Split and chunk text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100) logging.info("Text splitter created") chunks = text_splitter.split_documents(data) logging.info(f"Created {len(chunks)} chunks") # Add to vector database logging.info("Creating Vector db") try: embedding_model = OllamaEmbeddings(model="nomic-embed-text", show_progress=True) print("Embedding", embedding_model) vector_db = Chroma.from_documents( documents=chunks, embedding=embedding_model, collection_name="local-rag" ) logging.info("Local db created successfully") except Exception as e: logging.error(f"Error creating vector db: {e}") raise # Re-raise the exception to stop further execution # Verify vector database creation if vector_db: logging.info("Vector db verification successful") else: logging.error("Vector db creation failed") raise ValueError("Vector db creation failed") # LLM from Ollama local_model = "llama3" llm = ChatOllama(model=local_model) logging.info("LLM model loaded") QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI language model assistant. Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. 
Original question: {question}""", ) logging.info("Query prompt created") retriever = MultiQueryRetriever.from_llm( vector_db.as_retriever(), llm, prompt=QUERY_PROMPT ) logging.info("Retriever created") # RAG prompt template = """Answer the question based ONLY on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) logging.info("RAG prompt created") chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) logging.info("Chain created") response = chain.invoke("What are the 5 pillars of global cooperation?") logging.info("Chain invoked") logging.info(f"Response: {response}") except Exception as e: logging.error(f"An error occurred: {e}") The code is showing no error but did not work after embedding. Output: 2024-08-06 14:59:59,858 - INFO - Text splitter created 2024-08-06 14:59:59,861 - INFO - Created 11 chunks 2024-08-06 14:59:59,861 - INFO - Creating Vector db Embedding base_url='http://localhost:11434' model='nomic-embed-text' embed_instruction='passage: ' query_instruction='query: ' mirostat=None mirostat_eta=None mirostat_tau=None num_ctx=None num_gpu=None num_thread=None repeat_last_n=None repeat_penalty=None temperature=None stop=None tfs_z=None top_k=None top_p=None show_progress=True headers=None model_kwargs=None 2024-08-06 15:00:00,662 - INFO - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information. OllamaEmbeddings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 11/11 [00:27<00:00, 2.46s/it] Below is my ollama list : NAME ID SIZE MODIFIED nomic-embed-text:latest 0a109f422b47 274 MB 3 hours ago mistral:latest f974a74358d6 4.1 GB 17 hours ago phi3:latest d184c916657e 2.2 GB 2 weeks ago llama3:latest 365c0bd3c000 4.7 GB 2 weeks ago How to resolve this issue?
ChromaDB does not support large tokens of more than 768 I suggest we change the vector base to FAISS because the chroma has issues with dimensionality which is not comparable with the embedding model, to be precise the database chromadb allows 768 while embedding model offers 1028. Here is the reviewed code import logging import ollama from langchain.prompts import ChatPromptTemplate, PromptTemplate from langchain.retrievers.multi_query import MultiQueryRetriever from langchain_community.chat_models import ChatOllama from langchain_community.document_loaders import UnstructuredPDFLoader from langchain_community.embeddings import OllamaEmbeddings from langchain_community.vectorstores import FAISS from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain_text_splitters import RecursiveCharacterTextSplitter # Configure logging logging.basicConfig( level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s" ) local_path = "WEF_The_Global_Cooperation_Barometer_2024.pdf" try: # Local PDF file uploads if local_path: loader = UnstructuredPDFLoader(file_path=local_path) data = loader.load() logging.info("Loading of PDF is done") else: logging.error("Upload a PDF file") raise ValueError("No PDF file uploaded") # Preview first page # logging.info(f"First page content preview: {data[0].page_content[:500]}...") # Split and chunk text_splitter = RecursiveCharacterTextSplitter(chunk_size=7500, chunk_overlap=100) logging.info("Text splitter created") chunks = text_splitter.split_documents(data) logging.info(f"Created {len(chunks)} chunks") # Add to vector database logging.info("Creating Vector db") try: ollama.embeddings( model="mxbai-embed-large", # prompt='Llamas are members of the camelid family', ) embedding_model = (OllamaEmbeddings(model="mxbai-embed-large"),) vectorstore_db = FAISS.from_documents( documents=chunks, embedding=embedding_model ) vectorstore_db.save_local("faiss_index") vector_retriever = vectorstore_db.as_retriever() except Exception as e: logging.error(f"Error creating vector db: {e}") raise # Re-raise the exception to stop further execution # LLM from Ollama local_model = "mistral" llm = ChatOllama(model=local_model) print("local llm modal", local_model) logging.info("LLM model loaded") QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an AI language model assistant. Your task is to generate five different versions of the given user question to retrieve relevant documents from a vector database. By generating multiple perspectives on the user question, your goal is to help the user overcome some of the limitations of the distance-based similarity search. Provide these alternative questions separated by newlines. Original question: {question}""", ) logging.info("Query prompt created") retriever = MultiQueryRetriever.from_llm( vector_retriever, llm, prompt=QUERY_PROMPT # Use the correct retriever ) logging.info("Retriever created") # RAG prompt template = """Answer the question based ONLY on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) logging.info("RAG prompt created") chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) logging.info("Chain created") response = chain.invoke("What are the 5 pillars of global cooperation?") logging.info("Chain invoked") logging.info(f"Response: {response}") except Exception as e: logging.error(f"An error occurred: {e}")
4
3
78,828,009
2024-8-3
https://stackoverflow.com/questions/78828009/how-can-i-get-the-group-that-has-the-largest-streak-of-negative-numbers-in-a-col
This is an extension to this accepted answer. My DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100], 'b': [1, 2, 3, 4, 5, 10, 80, 90, 100, 99, 1, 12] } ) Expected output: a b 5 -3 10 6 -13 80 7 -3 90 8 -2 100 Logic: a) Selecting the longest streak of negatives in a. b) If for example there are two streaks that has same size, I want the one that has a greater sum of b. In df there are two streaks with size of 4 but I want the second one because sum of b is greater. My Attempt: import numpy as np s = np.sign(df['a']) df['g'] = s.ne(s.shift()).cumsum() df['size'] = df.groupby('g')['g'].transform('size') df['b_sum'] = df.groupby('g')['b'].transform('sum') Edit 1: I have provided an extra df to clarify the point. I want the negative streaks under any circumstance. In this df the positive streak is longer and its b is greater but I still want the last two rows which is the longest negative streak: df = pd.DataFrame( { 'a': [-0.65, 11, 18, 1, -2, -3], 'b': [1, 20, 30000, 4322, 300, 3] } ) #output 4 -2.00 300 5 -3.00 3 This is my attempt to get this output but if there are no negative rows in a dataframe then it throws an error: df['sign'] = np.sign(df.a) df['g'] = df.sign.ne(df.sign.shift()).cumsum() df = df.loc[df.a.lt(0)] out = df[df.g.eq(df.groupby('g')['b'].agg(['size', 'sum']) .query('size == size.max()')['sum'].idxmax())]
You can keep the same logic, just add one extra filtering step (e.g. with query) to get all max sizes, before getting the idxmax of sum of "b": # negative numbers m = df['a'].lt(0) # form groups g = m.ne(m.shift()).cumsum() out = df[g.eq(df[m] .groupby(g)['b'].agg(['size', 'sum']) .query('size == size.max()') ['sum'].idxmax())] Output: a b 5 -3 10 6 -13 80 7 -3 90 8 -2 100 Intermediate: df.groupby(g)['b'].agg(['size', 'sum']) size sum a 1 4 10 2 1 5 3 4 280 4 2 100 5 1 12 Or, using your approach (note that this wouldn't guarantee a unique group if two or more have the max length and the same sum of b): s = np.sign(df['a']) g = df.groupby(s.ne(s.shift()).cumsum()) s1 = g['a'].transform('size') s2 = g['b'].transform('sum') out = df[s1.eq(s1.max()) & s2.eq(s2.max())]
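If the frame might contain no negative values at all (the error case mentioned at the end of the question), you can guard the idxmax call. A sketch reusing m, g and df from the answer above:

if m.any():
    best = (df[m].groupby(g)['b'].agg(['size', 'sum'])
                 .query('size == size.max()')['sum'].idxmax())
    out = df[g.eq(best)]
else:
    out = df.iloc[0:0]  # empty frame with the same columns when there is no negative streak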
3
1
78,814,702
2024-7-31
https://stackoverflow.com/questions/78814702/toggle-geometry-layer-within-plotly-dash-mapbox
I've used the following post to plot maki symbols over a plotly mapbox. Plotly Mapbox Markers not rendering (other than circle) import dash from dash import Dash, dcc, html, Input, Output import dash_bootstrap_components as dbc import plotly.express as px import plotly.graph_objs as go import numpy as np import requests import svgpath2mpl, shapely.geometry, shapely.affinity from pathlib import Path from zipfile import ZipFile import pandas as pd import geopandas as gpd import json # download maki icons # https://github.com/mapbox/maki/tree/main/icons f = Path.cwd().joinpath("maki") if not f.is_dir(): f.mkdir() f = f.joinpath("maki.zip") if not f.exists(): r = requests.get("https://github.com/mapbox/maki/zipball/main") with open(f, "wb") as f: for chunk in r.iter_content(chunk_size=128): f.write(chunk) fz = ZipFile(f) fz.extractall(f.parent) def to_shapely(mpl, simplify=0): p = shapely.geometry.MultiPolygon([shapely.geometry.Polygon(a).simplify(simplify) for a in mpl]) p = shapely.affinity.affine_transform(p,[1, 0, 0, -1, 0, 0],) p = shapely.affinity.affine_transform(p,[1, 0, 0, 1, -p.centroid.x, -p.centroid.y],) return p # convert SVG icons to matplolib geometries and then into shapely geometries # keep icons in dataframe for further access... SIMPLIFY=.1 dfi = pd.concat( [ pd.read_xml(sf).assign( name=sf.stem, mpl=lambda d: d["d"].apply( lambda p: svgpath2mpl.parse_path(p).to_polygons() ), shapely=lambda d: d["mpl"].apply(lambda p: to_shapely(p, simplify=SIMPLIFY)), ) for sf in f.parent.glob("**/*.svg") ] ).set_index("name") # build a geojson layer that can be used in plotly mapbox figure layout def marker(df, marker="marker", size=1, color="green", lat=51.379997, lon=-0.406042): m = df.loc[marker, "shapely"] if isinstance(lat, float): gs = gpd.GeoSeries( [shapely.affinity.affine_transform(m, [size, 0, 0, size, lon, lat])] ) elif isinstance(lat, (list, pd.Series, np.ndarray)): gs = gpd.GeoSeries( [ shapely.affinity.affine_transform(m, [size, 0, 0, size, lonm, latm]) for latm, lonm in zip(lat, lon) ] ) return {"source":json.loads(gs.to_json()), "type":"fill", "color":color} This works fine when plotting straight onto the map using the method outlined in the post. But I want to include a component that allows the user to toggle these symbols off and on. I'm trying to append them to a single layer and using that to update the layout. Is it possible to do so? 
us_cities = pd.read_csv( 'https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv' ) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) app.layout = html.Div([ dcc.Checklist( id="symbol_on", options=[{"label": "Symbol", "value": True}], value=[], inline=True ), html.Div([ dcc.Graph(id="the_graph") ]), ]) @app.callback( Output("the_graph", "figure"), Input('symbol_on', 'value') ) def update_graph(symbol_on): fig = go.Figure() scatter = px.scatter_mapbox(data_frame = us_cities, lat = 'lat', lon = 'lon', zoom = 0, hover_data = ['State', 'lat', 'lon'] ) fig.add_traces(list(scatter.select_traces())) fig.update_layout( height = 750, mapbox=dict( style='carto-positron', ), ) star = marker( dfi, "star", size=.1, color="red", lon=[-70, -80, -90], lat=[30, 40, 45] ), airport = marker( dfi, "airport", size=.1, color="green", lon=[-70, -80, -90], lat=[30, 40, 45] ), layers = [] for lyr in symbol_on: layers.append(star) layers.append(airport) fig.update_layout(mapbox={"layers": layers}) return fig if __name__ == '__main__': app.run_server(debug=True, port = 8050)
The issue is the trailing commas: star = marker( dfi, "star", size=.1, color="red", lon=[-70, -80, -90], lat=[30, 40, 45] ), airport = marker( dfi, "airport", size=.1, color="green", lon=[-70, -80, -90], lat=[30, 40, 45] ), This causes both star and airport to be a tuple instead of a dict, and breaks the creation of the mapbox layer. If we remove the commas (and offset the airport markers slightly, just so that they are distinct from the star markers), the app renders correctly: star = marker( dfi, "star", size=.1, color="red", lon=[-70, -80, -90], lat=[30, 40, 45] ) airport = marker( dfi, "airport", size=.1, color="green", lon=[-71, -81, -91], lat=[31, 41, 41] )
3
3
78,829,984
2024-8-3
https://stackoverflow.com/questions/78829984/configuring-pytest-to-find-tests-across-multiple-project-directories
I'm looking to unit test all my AWS Lambda code in my project using pytest. Due to how I have to configure the directory structure to work with infrastructure as code tooling, each Lambda sits within it's own CloudFormation stack, I've got a pretty non-standard directory structure. I'm unable to get pytest to run all tests across all my Lambda functions - ideally I'd like this to work by just running 'pytest' in the root directory of the project. The directory structure is as follows (it's worth noting that changing the structure is not an option): - Project_Directory - stack1 - product.template.yaml - src - lambda1 - lambda_function.py - requirements.txt - tests - __init__.py - test_functions - __init__.py - test_lambda1.py - stack2 - product.template.yaml - src - lambda2 - lambda_function.py - requirements.txt - tests - __init__.py - test_functions - __init__.py - test_lambda2.py - conftest.py - pytest.ini Each test_lambda.py file imports from the lambda_function.py file as follows: from src.lambdax.lambda_function import func1, func2 When only a single stack is in the project directory pytest has no issue picking up the tests. However when a second stack is added pytest fails with the following error: ModuleNotFoundError: No module named 'tests.test_functions.test_lambda2' Also, when running pytest directly against each stack directory the tests run with no issue. That is running pytest stack1 and pytest stack2. Expectation: Running 'pytest' from project_directory yields all tests for all lambdas. I've tried adding testpaths = stack1 stack2 and testpaths = stack1/tests stack2/tests to pytest.ini to no success. What am I missing here, I'm guessing maybe some module namespace collision but I'm not sure how to resolve it! Any advice on this issue is much appreciated! Edit: This definitely looks to be a tests module namespace collision. After modifying the tests directory in stack2 to be called tests2 all tests run as expected. I'm still keen for advice here, I'd really like to avoid enforcing each stack to have a different name for the tests directory!
Managed to solve this after painstakingly going through different approaches. The solution is a combination of using the new(ish) importlib import mode and some custom sys.path manipulation in conftest.py. Configure pytest.ini like this: [pytest] addopts = --import-mode=importlib Add the following code to conftest.py: import os import sys def add_path(directory): src_path = os.path.abspath(directory) if src_path not in sys.path: sys.path.insert(0, src_path) add_path(os.path.join(os.path.dirname(__file__), "stack1")) add_path(os.path.join(os.path.dirname(__file__), "stack2")) The above conftest.py code could definitely be written a bit more nicely using pathlib - see the sketch below. Another nice consequence of using the importlib import mode is that the __init__.py files are no longer required in the tests and test_functions directories in each stack!
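A pathlib flavour of the same conftest.py, for reference (a sketch; the stack directory names are assumed to match the layout in the question):

import sys
from pathlib import Path

def add_path(directory: Path) -> None:
    resolved = str(directory.resolve())
    if resolved not in sys.path:
        sys.path.insert(0, resolved)

root = Path(__file__).parent
for stack in ("stack1", "stack2"):
    add_path(root / stack)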
4
1
78,841,209
2024-8-6
https://stackoverflow.com/questions/78841209/syntax-improvement-with-working-sympy-statement-novice-level-question
My question may be about how to avoid putting an array into another unneeded array in SymPy. There may be still more to the question I'm not aware of, though. But within my limitations, that is the question I have at hand. To make this question explicit and concrete, see the following... I want to compute the magnitude of a complex-valued expression. In effect: where H here is a complex expression and the asterisk means conjugation. The actual computation I want to perform is: I have a function in SymPy that computes the Butterworth terms. Here's some examples of what it produces (so that it is very clear what I'm starting with): Butterworth(2) [s**2 + sqrt(2)*s + 1] Butterworth(3) [s + 1, s**2 + s + 1] Butterworth(4) [s**2 + s*(sqrt(2)*sqrt(2 - sqrt(2)) + sqrt(2)*sqrt(sqrt(2) + 2) + 2*sqrt(sqrt(2) + 2))/4 + 1, s**2 - s*(-sqrt(2)*sqrt(sqrt(2) + 2) - 2*sqrt(2 - sqrt(2)) + sqrt(2)*sqrt(2 - sqrt(2)))/4 + 1] Butterworth(6) [s**2 + s*(sqrt(2) + sqrt(6))/2 + 1, s**2 + sqrt(2)*s + 1, s**2 - s*(-sqrt(6) + sqrt(2))/2 + 1] The function creates an array of 1st and 2nd order factors. I combine these using prod() and expand() in order to compose the fuller expression. The complex variable s is then replaced with I*omega, where omega is declared to be a real, positive variable. What I use right now is the following, in SymPy: -20*ln(sqrt(expand(prod([[i,conjugate(i)] for i in [expand(prod(Butterworth(4))).subs(s,I*omega)]][0])).subs(omega,1.8)),10).n() -20.4610324752877 And that is the correct answer for a low-pass 4th order Butterworth filter at a normalized frequency of 1.8. But to get there I had to use what I could figure out through some guesswork. Let me break down my thinking in the above statement: # expand out the product of the array of terms and replace complex `s` with `I*omega` expand(prod(Butterworth(4))).subs(s,I*omega) # put that into an array of just 1 element so I can apply a "for" statement [expand(prod(Butterworth(4))).subs(s,I*omega)] # construct an array of 2 elements, with the 2nd one being the conjugate [[i,conjugate(i)] for i in [expand(prod(Butterworth(4))).subs(s,I*omega)] # the unfortunate side effect of the above method is that it creates an array # within an array so the following extracts out the array I want. [[i,conjugate(i)] for i in [expand(prod(Butterworth(4))).subs(s,I*omega)][0] # ^^^^^^^^^^^^^^^^^ ^^^ # can I do that would allow me to avoid # something above here having to add this [0] here? # ... the above may be where I want some help # at this point I can just apply the prod(), expand(), sqrt(), etc: -20*ln(sqrt(expand(prod([[i,conjugate(i)] for i in [expand(prod(Butterworth(4))).subs(s,I*omega)]][0])).subs(omega,1.8)),10).n() -20.4610324752877 This question is about syntax. I'm in a learning-mode and trying to expand my knowledge of SymPy syntax and I would like to know if there is a simpler syntax for that single-line expression I've already worked out.
This is how I would approach it: from sympy import * init_printing() var("s") omega = symbols("omega", real=True, positive=True) # Butterworth(4) b4 = [ s**2 + s*(sqrt(2)*sqrt(2 - sqrt(2)) + sqrt(2)*sqrt(sqrt(2) + 2) + 2*sqrt(sqrt(2) + 2))/4 + 1, s**2 - s*(-sqrt(2)*sqrt(sqrt(2) + 2) - 2*sqrt(2 - sqrt(2)) + sqrt(2)*sqrt(2 - sqrt(2)))/4 + 1 ] H = prod(b4) -20 * log(sqrt(H * conjugate(H)).subs(s, I*omega).subs(omega, 1.8), 10).n() # -20.4610324752877 EDIT to satisfy comment: By removing the outer list, you create an iterator. Then, you can call next to retrieve the first and only element. -20 * log(sqrt(prod(next([i,conjugate(i)] for i in [expand(prod(b4)).subs(s,I*1.8)]))), 10).n()
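As a further possible shortening (my suggestion, not part of the original answer): sqrt(H * conjugate(H)) is simply Abs(H), which from sympy import * already provides, so the magnitude-in-dB expression collapses to a single Abs call (H, b4, s and omega as defined above):

H = prod(b4)
-20 * log(Abs(H.subs(s, I*omega)).subs(omega, 1.8), 10).n()
# -20.4610324752877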
2
3
78,841,211
2024-8-6
https://stackoverflow.com/questions/78841211/how-to-get-the-dimensions-of-a-toga-canvas-in-python
In a Python BeeWare project, I want to use toga.Canvas to draw some horizontal rectangles, but I don't know from where to get the Canvas width. I can't find any documentation for the toga.Canvas() dimensions on the internet... def redraw_canvas(self): x = 4; y = 4; for i in range(7): with self.canvas.context.Fill(color=self.clRowBkg) as fill: fill.rect(x, y, 100, self.row_height) y += self.row_height + 4
There doesn't appear to be any way to get this information from the public API, but there are some non-public properties used in the Toga Canvas example: canvas.layout.content_width canvas.layout.content_height These should work for any Toga widget, not just Canvas.
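A sketch of how the question's redraw_canvas could pick up those properties; self.canvas, self.row_height and self.clRowBkg are assumed from the question's code, and since layout.content_width is non-public it may change between Toga releases:

def redraw_canvas(self):
    # non-public property: the current drawable width of the canvas
    width = self.canvas.layout.content_width
    x = 4
    y = 4
    for i in range(7):
        with self.canvas.context.Fill(color=self.clRowBkg) as fill:
            # stretch each bar across the full canvas width, keeping a small margin
            fill.rect(x, y, width - 2 * x, self.row_height)
        y += self.row_height + 4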
2
0
78,841,010
2024-8-6
https://stackoverflow.com/questions/78841010/format-datetime-in-polars
I have a polars dataframe that contains a datetime column. I want to convert this column to strings in the format %Y%m. For example, all dates in January 2024 should be converted to "202401". from datetime import datetime import polars as pl data = { "ID" : [1,2,3], "dates" : [datetime(2024,1,2),datetime(2024,1,3),datetime(2024,1,4)], } df = pl.DataFrame(data) I have tried using strftime. However, the following AttributeError is raised. AttributeError: 'Expr' object has no attribute 'strftime'
Note that pl.Expr.dt.strftime is available under the pl.Expr.dt namespace. Hence, it is called on the dt attribute of an expression and not the expression directly. df.with_columns( pl.col("dates").dt.strftime("%Y%m") ) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ ID ┆ dates β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ════════║ β”‚ 1 ┆ 202401 β”‚ β”‚ 2 ┆ 202401 β”‚ β”‚ 3 ┆ 202401 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜
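If you want to keep the original datetime column and add the formatted strings alongside it, give the expression its own name with .alias (a small sketch; the new column name is arbitrary):

df.with_columns(pl.col("dates").dt.strftime("%Y%m").alias("yyyymm"))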
7
5
78,839,874
2024-8-6
https://stackoverflow.com/questions/78839874/why-if-cant-be-used-in-scipy-optimize-inequality-constraint
Consider a simple question using Scipy.optimize: Maximize(xy) s.t x^2+y^2=200. The right code is this : import numpy as np from scipy.optimize import minimize def objective(var_tmp): x, y = var_tmp return -x * y def constraint(var_tmp): x, y = var_tmp return 200 - (x ** 2 + y ** 2) initial_guess = [1, 1] constraints = {'type': 'ineq', 'fun': constraint} result = minimize(objective, initial_guess, constraints=constraints) optimal_x, optimal_y = result.x optimal_value = -result.fun print(f"Optimal x: {optimal_x}") print(f"Optimal y: {optimal_y}") print(f"Maximum xy value: {optimal_value}") Which gives the right answer 100. However, if the constraint is written as follows: def constraint(var_tmp): x, y = var_tmp if x ** 2 + y ** 2 <= 200: return 1 return -1 It will give infinity as an answer. Why is the case?
By default, SciPy uses SLSQP to minimize a problem which has constraints. (Several other minimizers have support for constraints; see the "Constrained Minimization" section of the minimize() documentation.) SLSQP requires that its constraints be differentiable. Here is a passage from the SLSQP paper showing this. In this context, f is the function you are minimizing, and g is your equality and inequality constraints. ...where the problem functions f : R^n -> R^1 and g : R^n -> R^m are assumed to be continuously differentiable and have no specific structure Source: Kraft D (1988), A software package for sequential quadratic programming. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace Center β€” Institute for Flight Mechanics, Koln, Germany. page 8, section 2.1.1 (I mentioned before that SciPy has multiple minimize methods which can handle constraints. However, I checked, and none of them seem to be able to handle non-differentiable constraints.)
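To see why the step-style constraint starves SLSQP of information, look at the finite-difference gradient the solver effectively works with (an illustrative sketch, not SciPy internals):

import numpy as np

def step_constraint(var):
    x, y = var
    return 1 if x**2 + y**2 <= 200 else -1

point = np.array([1.0, 1.0])
eps = 1e-8
grad = [(step_constraint(point + eps * d) - step_constraint(point)) / eps
        for d in np.eye(2)]
print(grad)  # [0.0, 0.0] -- flat almost everywhere, so the solver gets no direction to move in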
2
4
78,834,627
2024-8-5
https://stackoverflow.com/questions/78834627/replace-an-empty-value-with-nan-in-dataframe
I have a dataframe with empty values in some rows like this: ID Date Price Curr A Jan 21 (10,0) USD B Aug 8 (10,0) USD C Sep 29 (10,0) USD settle Aug 24 ( ,) where the last row has 2 empty values in Price and Curr columns. How can I either replace the empty values with nan so I can dropna() or drop the rows that contain empty values to get a dataframe like: ID Date Price Curr A Jan 21 (10,0) USD B Aug 8 (10,0) USD C Sep 29 (10,0) USD sample: data = { "ID": ["A", "B", "C", "settle"], "Date": ["Jan 21", "Aug 8", "Sep 29", "Aug 24"], "Price": [(10,0), (10,0), (10,0), ()], "Curr": ["USD", "USD", "USD", ""] } df = pd.DataFrame(data)
To drop the rows that are empty, you can try something like this: import pandas as pd data = { "ID": ["A", "B", "C", "settle"], "Date": ["Jan 21", "Aug 8", "Sep 29", "Aug 24"], "Price": [(10, 0), (10, 0), (10, 0), ()], "Curr": ["USD", "USD", "USD", ""] } df = pd.DataFrame(data) # Filter out rows where Price is an empty tuple df = df[df['Price'].apply(lambda x: len(x) > 0)] # Convert the 'Price' tuples to floats by taking the first element of each tuple df['Price'] = df['Price'].apply(lambda x: float(x[0])) print(df) print(df.dtypes) # Output ID Date Price Curr 0 A Jan 21 10.0 USD 1 B Aug 8 10.0 USD 2 C Sep 29 10.0 USD ID object Date object Price float64 Curr object dtype: object
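If you prefer the replace-with-NaN route mentioned in the question, a sketch along those lines (reusing pd and data from the answer above):

import numpy as np

df2 = pd.DataFrame(data)
# turn empty tuples and empty strings into NaN, then drop those rows
df2["Price"] = df2["Price"].apply(lambda x: x if len(x) > 0 else np.nan)
df2["Curr"] = df2["Curr"].replace("", np.nan)
df2 = df2.dropna(subset=["Price", "Curr"])
print(df2)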
2
1
78,832,340
2024-8-4
https://stackoverflow.com/questions/78832340/get-an-item-of-the-output-after-applying-str-split-to-a-polars-dataframe-column
how can i select last item of list in paths column after applying the str.split("/") function? dataNpaths = pl.scan_csv("test_data/file*.csv", has_header=True, include_file_paths = "paths").collect() dataNpaths.with_columns(pl.col("paths").str.split("/").alias("paths")) >>> dataNpaths.with_columns(pl.col("paths").str.split("/").alias("paths")) shape: (30, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column1 ┆ Column2 ┆ Column3 ┆ Column4 ┆ paths β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ f64 ┆ f64 ┆ list[str] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════β•ͺ══════════β•ͺ════════════════════════════║ β”‚ 0.603847 ┆ 0.509877 ┆ 0.091579 ┆ 0.43821 ┆ ["test_data", "file1.csv"] β”‚ β”‚ 0.572299 ┆ 0.817647 ┆ 0.087951 ┆ 0.397217 ┆ ["test_data", "file1.csv"] β”‚ β”‚ 0.886123 ┆ 0.159805 ┆ 0.766246 ┆ 0.083915 ┆ ["test_data", "file1.csv"] β”‚ β”‚ 0.142208 ┆ 0.413847 ┆ 0.043408 ┆ 0.147779 ┆ ["test_data", "file1.csv"] β”‚ β”‚ 0.105215 ┆ 0.924754 ┆ 0.309823 ┆ 0.724407 ┆ ["test_data", "file1.csv"] β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 0.381675 ┆ 0.849887 ┆ 0.498281 ┆ 0.733085 ┆ ["test_data", "file3.csv"] β”‚ β”‚ 0.697427 ┆ 0.950464 ┆ 0.999596 ┆ 0.645253 ┆ ["test_data", "file3.csv"] β”‚ β”‚ 0.49979 ┆ 0.172414 ┆ 0.679287 ┆ 0.091804 ┆ ["test_data", "file3.csv"] β”‚ β”‚ 0.668585 ┆ 0.640259 ┆ 0.932463 ┆ 0.579558 ┆ ["test_data", "file3.csv"] β”‚ β”‚ 0.077462 ┆ 0.802565 ┆ 0.966791 ┆ 0.29297 ┆ ["test_data", "file3.csv"] β”‚ but neither of these approaches worked dataNpaths.with_columns(pl.col("paths").str.split("/")[-1].alias("paths")) dataNpaths.with_columns(pl.col("paths").str.split("/",-1).alias("paths"))
You should use the list accessor .list, which is similar to .str: dataNpaths.with_columns(pl.col("paths").str.split("/").list[-1]) The .alias('paths') is superfluous, as you are reusing the old column name rather than creating a new column. Alternatively, since the last element of the list is needed, the same result can be obtained with a .list method: dataNpaths.with_columns(pl.col("paths").str.split("/").list.last()) Details of the comprehensive set of .list methods are given in LINK. These include .slice, .contains, .head, .tail, etc.
3
3
78,829,950
2024-8-3
https://stackoverflow.com/questions/78829950/in-python-what-is-the-space-complexity-of-list1-list2
This code (a solution to this LeetCode challenge) first iterates through a list nums, updating counts of integers 0, 1, 2, also called red, white, and blue respectively. nums is guaranteed to only have the integers 0, 1, and/or 2. After finding the counts, the code uses [::], a trick to modify a list in-place, to sort nums. def sortColors(nums: List[int]) -> None: red = white = blue = 0 for num in nums: match num: case 0: red += 1 case 1: white += 1 case 2: blue += 1 # [::] to modify nums in-place - Space O(1) nums[::] = ([0] * red) + ([1] * white) + ([2] * blue) I thought ([0] * red) + ([1] * white) + ([2] * blue) would be evaluated before modifying nums, meaning that list would have to be created and stored in memory before nums[::] = can proceed. To me, this makes sense, since in Python the right side of = is evaluated before variable assignment, which makes things like x = x + 1 work. So, under this understanding, at this point in the code, both the original nums list and the new list would be stored in memory. Because the new list will be the same length as nums, O(n) additional space is needed. However, LeetCode's analyzer said this code was O(1) space. The only thing I can think of is that the moment nums[::] = is called, the code ignores the original contents of nums and modifies nums in-place to the new list. How is this O(1) space, and is my understanding of space complexity and variable assignment correct?
Your assessment that the code you've shown takes O(n) extra space to temporarily build a new list with the sorted values is correct. It's possible to do a counting sort like this in O(1) space, but the code to implement it in Python is just a little more complicated: def sortColors(nums: List[int]) -> None: red = white = blue = 0 for num in nums: match num: case 0: red += 1 case 1: white += 1 case 2: blue += 1 assert len(nums) == red+white+blue for i in range(len(nums)): nums[i] = (i >= red) + (i >= red+white) # note: bool+bool->int This will use only O(1) extra memory, since only the red, white and blue counters take up space beyond the starting nums list. Note that a lot of other, more Pythonic styles of programming are likely to allocate O(n) temporary memory in the background, at least in some situations. For example, if you assign an itertools.repeat iterator to a slice (e.g. nums[:red] = itertools.repeat(0, red)), you won't consume O(n) memory directly in the same way as the original code (since the iterator takes only constant memory), but the slice assignment will dump the iterator into a temporary list in the background so that it can know if it needs to resize the list (it doesn't, but it can't know that in advance). Similarly, if you clear() the list and then extend() it with itertools.repeat iterators, the resizing logic of the list might temporarily consume O(n) extra memory as it copies the list items into a larger section of memory (thanks to user no comment who pointed this out in the comments). I'd also note that list.sort is probably faster for most reasonable lengths of list, even though its asymptotic complexity is worse than your code's counting sort! Having a very well written implementation in C is a big advantage! Of course, it probably does allocate at least some extra memory (probably O(n) space in the worst case, but I haven't checked).
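For reference, the slice-assignment variant discussed in the last paragraph looks like this (a sketch that keeps the same counting logic; whether the interpreter allocates hidden temporaries in the background is exactly the caveat described above):

import itertools

def sortColors_slices(nums):
    red = nums.count(0)
    white = nums.count(1)
    blue = nums.count(2)
    nums[:red] = itertools.repeat(0, red)
    nums[red:red + white] = itertools.repeat(1, white)
    nums[red + white:] = itertools.repeat(2, blue)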
2
4
78,823,898
2024-8-2
https://stackoverflow.com/questions/78823898/measure-balanceness-of-a-weighted-numpy-array
I have player A and B who both played against different opponents. player opponent days ago A C 1 A C 2 A D 10 A F 100 A F 101 A F 102 A G 1 B C 1 B C 2 B D 10 B F 100 B F 101 B F 102 B G 1 B G 2 B G 3 B G 4 B G 5 B G 6 B G 7 B G 8 First, I want to find the opponent that is the most common one. My definition of "most common" is not the total number of matches but more like the balanced number of matches. If for example, player 1 and 2 played respectively 99 and 1 time(s) against player 3 I prefer opponent 4 where A and B played both 49 times against. In order to measure the "balanceness" I write the following function: import numpy as np from collections import Counter def balanceness(array: np.ndarray): classes = [(c, cnt) for c, cnt in Counter(array).items()] m = len(classes) n = len(array) H = -sum([(cnt / n) * np.log((cnt / n)) for c, cnt in classes]) return H / np.log(m) This functions works as expected: >> balanceness(array=np.array([0, 0, 0, 1, 1, 1])) 1.0 If I run the function on the different opponents I see the following results: opponent balanceness n_matches C 1 4 D 1 2 F 1 6 G 0.5032583347756457 9 Clearly, opponent F is the most common one. However, the matches of A and B against F are relatively old. How should I incorporate a recency-factor into my calculation to find the "most recent common opponent"? Edit After thinking more about it I decided to weight each match using the following function def weight(days_ago: int, epilson: float=0.005) -> float: return np.exp(-1 * days_ago * epilson) I sum the weight of all the matches against each opponent opponent balanceness n_matches weighted_n_matches C 1 4 3.9701246258837 D 1 2 1.90245884900143 F 1 6 3.62106362790388 G 0.5032583347756457 9 8.81753570603108 Now, opponent C is the "most-recent balanced opponent". Nevertheless, this method ignores the "recentness" on a player-level because we sum the values. There could be a scenario where player 1 played recently a lot of matches against player 3 whereas player 2 faced player 3 in the distant past. How can we find the opponent that is the most balanced / equally-distributed between two players the opponent with the most recent matches against the two players
First, I think "balanceness" needs to consider how many days ago the matches were played. For example, suppose A and B played 1 match against C, both 100 days ago. Again, let A and B both play 1 match against E, 1 day and 199 days ago respectively. Although the number of matches is the same, their recency is different, and they shouldn't have the same balanceness score. By using the defined weight(days_ago) function, it will be as if A and B both played 0.60 matches against C, while they played 0.995 and 0.36 matches against E respectively. These two scenarios should have different balanceness. Second, just balanceness is obviously not enough. If A and B played 1 match each against D, both 100 years ago, and against E, both 200 years ago---both scenarios are equally "balanced". You need to define a "recency" score (between 0 and 1); I think average weight might work. And then you can combine the two metrics together in some way, e.g. B * R, or (B * R)/(B + R), or alpha * B + (1 - alpha) * R. import numpy as np import pandas as pd data = [ ["A", "C", 2], ["A", "D", 10], ["A", "F", 100], ["A", "F", 101], ["A", "F", 102], ["A", "G", 1], ["B", "C", 1], ["B", "C", 2], ["B", "D", 10], ["B", "F", 100], ["B", "F", 101], ["B", "F", 102], ["B", "G", 1], ["B", "G", 2], ["B", "G", 3], ["B", "G", 4], ["B", "G", 5], ["B", "G", 6], ["B", "G", 7], ["B", "G", 8] ] def weight(days_ago: int, epilson: float=0.005) -> float: return np.exp(-1 * days_ago * epilson) def weighted_balanceness(array: np.ndarray, weights: np.ndarray): classes = np.unique(array) cnt = np.array([weights[array == c].sum() for c in classes]) m = len(classes) n = weights.sum() H = -(cnt / n * np.log(cnt / n)).sum() return H / np.log(m) df = pd.DataFrame(data=data, columns=["player", "opponent", "days_ago"]) df["effective_count"] = weight(df["days_ago"]) scores = [] for opponent in df["opponent"].unique(): df_o = df.loc[df["opponent"] == opponent] player = np.where(df_o["player"].values == "A", 0, 1) balanceness = weighted_balanceness(array=player, weights=df_o["effective_count"]) recency = df_o["effective_count"].mean() scores.append([opponent, balanceness, recency]) df_out = pd.DataFrame(scores, columns=["opponent", "balanceness", "recency"]) df_out["br"] = df_out["balanceness"] * df_out["recency"] df_out["mean_br"] = 0.5 * df_out["balanceness"] + 0.5 * df_out["recency"] df_out["harmonic_mean_br"] = df_out["balanceness"] * df_out["recency"] / ( (df_out["balanceness"] + df_out["recency"])) print(df_out) This gives me the following: opponent balanceness recency br mean_br harmonic_mean_br 0 C 0.917739 0.991704 0.910125 0.954721 0.476644 1 D 1.000000 0.951229 0.951229 0.975615 0.487503 2 F 1.000000 0.603511 0.603511 0.801755 0.376368 3 G 0.508437 0.979726 0.498129 0.744082 0.334728 Note that D and F have perfect balanceness. They both played with A & B with same number of matches and same days ago. However, F played a while back (100-102 days ago), so they have a lower recency score, which hurts their combined scores. Depending on how you combine b and r, most likely D or C would be the best choice (C may win if you give more weight to recency).
6
3
78,836,766
2024-8-5
https://stackoverflow.com/questions/78836766/is-there-a-way-to-enforce-the-number-of-members-an-enum-is-allowed-to-have
Making an enum with exactly n many members is trivial if I've defined it myself: class Compass(enum.Enum): NORTH = enum.auto() EAST = enum.auto() SOUTH = enum.auto() WEST = enum.auto() ## or ## Coin = enum.Enum('Coin', 'HEADS TAILS') But what if this enum will be released into the wild to be subclassed by other users? Let's assume that some of its extra behaviour depends on having the right number of members so we need to enforce that users define them correctly. Here's my desired behaviour: class Threenum(enum.Enum): """An enum with exactly 3 members, a 'Holy Enum of Antioch' if you will. First shalt thou inherit from it. Then shalt though define members three, no more, no less. Three shall be the number thou shalt define, and the number of the members shall be three. Four shalt thou not define, neither define thou two, excepting that thou then proceed to three. Five is right out. Once member three, being the third member, be defined, then employest thou thy Threenum of Antioch towards thy problem, which, being intractible in My sight, shall be solved. """ ... class Triumvirate(Threenum): # success CEASAR = enum.auto() POMPEY = enum.auto() CRASSUS = enum.auto() class TeenageMutantNinjaTurtles(Threenum): # TypeError LEONARDO = 'blue' DONATELLO = 'purple' RAPHAEL = 'red' MICHELANGELO = 'orange' Trinity = Threenum('Trinity', 'FATHER SON SPIRIT') # success Schwartz = Threenum('Schwartz', 'UPSIDE DOWNSIDE') # TypeError Overriding _generate_next_value_() allows the enforcement of a maximum number of members, but not a minimum.
An easier approach would be to check the number of Enum members of a subclass in an __init_subclass__ method: class Threenum(enum.Enum): def __init_subclass__(cls): if len(cls) != 3: raise TypeError('Subclass of Threenum must have exactly 3 members.') Demo here You can also create such a class with a factory function: def exact_enum(number): class _ExactEnum(enum.Enum): def __init_subclass__(cls): if len(cls) != number: raise TypeError( f'This Enum subclass must have exactly {number} members.') return _ExactEnum Usage: class TeenageMutantNinjaTurtles(exact_enum(3)): # TypeError LEONARDO = 'blue' DONATELLO = 'purple' RAPHAEL = 'red' MICHELANGELO = 'orange' Demo here
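For completeness, a short usage sketch of the passing case (names taken from the question; assumes the exact_enum factory above is in scope):

import enum

class Triumvirate(exact_enum(3)):  # success: exactly three members
    CEASAR = enum.auto()
    POMPEY = enum.auto()
    CRASSUS = enum.auto()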
5
4
78,835,754
2024-8-5
https://stackoverflow.com/questions/78835754/is-pythons-list-clear-thread-safe
In Python, suppose one thread is appending/popping items to/from a list/collections.deque/similar built-in container, while another thread occasionally empties the container via its clear() method. Is this interaction thread-safe? Or is it possible for the clear() to interfere with a concurrent append()/pop() operation, leaving the list uncleared or corrupted? My interpretation of the accepted answer here suggests that the GIL should prevent such interference, at least for lists. Am I correct? As a related follow-up, if this is not thread-safe, I suppose I should use a queue.Queue instead. But what is the best (i.e., cleanest, safest, fastest) way to clear it from the second thread? See the comments on this answer for concerns about using the (undocumented) queue.Queue().queue.clear() method. Need I really use a loop to get() all the items one by one?
update Methods like list.clear are not atomic in the sense that other threads can add elements to the list (or other container) before the method returns to the current code. They are "thread safe" in the sense that they won't ever be in an inconsistent state that will cause an exception - but not "atomic". In other words: the list object will never be "broken", with or without the use of a Lock - but whatever ends up inside it is not deterministic. The following snippet inserts data before list.clear() returns, both from the same thread and from another thread: import threading, time class A: def __init__(self, container, delay=0.2): self.container = container self.delay = delay def __del__(self): time.sleep(self.delay) self.container.append("thing") def doit(): target = [] def interferer(): time.sleep(0.1) target.append("this tries to be first") target.append(A(target, delay=0.3)) t = threading.Thread(target=interferer) t.start() target.clear() return target In [37]: doit() Out[37]: ['this tries to be first', 'thing'] So, if one needs a "thread-safe" and "atomic" sequence, it has to be crafted from collections.abc.MutableSequence with the appropriate locks in the methods that perform mutations. original answer As put in the comments: all operations on built-in data structures are thread safe in Python - what has ensured this up to now is the GIL (global interpreter lock), which otherwise penalizes multi-threading code in Python. From Python 3.13 onwards, there will be the option of running Python code without the GIL, but it is a language guarantee that such operations on built-in data structures will remain thread-safe, through the use of finer-grained locking - check the Container Thread Safety section of PEP 703 (it not only explains the mechanism going forward, but also re-asserts the current status quo of these operations being effectively thread safe, though not "atomic"). However, depending on the code you have, you may wish to express the list modification with another operation instead of a method call - since some methods can't be atomic. The linked section of PEP 703 gives the example of list.extend, which, if used with a generator object, simply can't be atomic. So, to lessen the chances of someone changing your code in the future, clearing the list can be expressed with mylist[:] = () - I have the feeling one would think twice before replacing this with a method call that could lead to undesired race conditions.
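For illustration, a minimal sketch (not from the answer above) of the kind of wrapper the update alludes to: a list-like class built on collections.abc.MutableSequence whose mutating methods each run under a single lock acquisition.

import threading
from collections.abc import MutableSequence

class LockedList(MutableSequence):
    def __init__(self, iterable=()):
        self._lock = threading.RLock()
        self._items = list(iterable)

    def __getitem__(self, index):
        with self._lock:
            return self._items[index]

    def __setitem__(self, index, value):
        with self._lock:
            self._items[index] = value

    def __delitem__(self, index):
        with self._lock:
            del self._items[index]

    def __len__(self):
        with self._lock:
            return len(self._items)

    def insert(self, index, value):
        with self._lock:
            self._items.insert(index, value)

    # Explicit overrides so each whole operation happens under one lock
    # acquisition (the MutableSequence mixins would otherwise compose them
    # from several separately locked calls).
    def append(self, value):
        with self._lock:
            self._items.append(value)

    def pop(self, index=-1):
        with self._lock:
            return self._items.pop(index)

    def clear(self):
        with self._lock:
            self._items.clear()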
2
5
78,836,369
2024-8-5
https://stackoverflow.com/questions/78836369/what-is-the-fastest-solution-to-remove-lines-from-a-text-file
In my project, I need to develop a program in Python to insert a UUID in a device. There will be 1.000 devices produced per day, the UUIDs need to be inserted into each device, and only once. I have a large input file (100.000 lines) containing all the available UUIDs, as follows: str[32] str[32] ... str[32] str[32] Example: deadbeefdeadbeefdeadbeefdeadbeefdeadbeef coolcafebabecoolcafebabecoolcafecoolcafe ... deadbeefdeadbeefcoolcafebabecoolcafebabe coolcafebabecoolcafebabedeadbeefdeadbeef My program will be executed and closed every day. I need to make sure I won't use the same UUID twice. I was thinking of two solutions: Delete the line of the UUID in the input file once I have used it. Then, when I need a UUID, I only have to read the first line. Change the format of the input file by adding a byte on each line, that is equal to 0 when the UUID is free, and set it to 1 when it has been used. When my program is first executed, I read the file to get the index of the first 0, then I can keep track of the index and set the byte to 1 each time I consume one UUID. For example : deadbeefdeadbeefdeadbeefdeadbeefdeadbeef;1 coolcafebabecoolcafebabecoolcafecoolcafe;1 ... deadbeefdeadbeefcoolcafebabecoolcafebabe;0 coolcafebabecoolcafebabedeadbeefdeadbeef;0 It seems both solution imply having to rewrite the whole content of the file, am I right? Is there a better solution?
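For reference, a minimal sketch of the second scheme described above, assuming fixed-width lines of the form "uuid;flag": because every record has the same length, the flag byte can be overwritten in place with seek(), so the file never needs a full rewrite. The 40-character UUID width and the take_next_uuid name are assumptions for illustration.

LINE_LEN = 40 + 2 + 1  # 40-char uuid + ";0" + "\n"  (assumed layout)

def take_next_uuid(path: str) -> str:
    with open(path, "r+b") as f:
        index = 0
        while True:
            f.seek(index * LINE_LEN)
            line = f.read(LINE_LEN)
            if not line:
                raise RuntimeError("no free UUID left")
            if line[41:42] == b"0":          # flag byte right after the ';'
                f.seek(index * LINE_LEN + 41)
                f.write(b"1")                # mark as used, in place
                return line[:40].decode()
            index += 1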
Here's how to track used UUIDs to solve your issue: import os def next_uuid(in_file, used_file): used = set() if os.path.exists(used_file): with open(used_file) as f: used = set(l.strip() for l in f) with open(in_file) as f: for l in f: u = l.strip() if u not in used: with open(used_file, 'a') as uf: uf.write(f"{u}\n") return u raise Exception("No more UUIDs") in_file = 'uuids.txt' used_file = 'used.txt' next_u = next_uuid(in_file, used_file) print(f"Next UUID: {next_u}") This script grabs the next available UUID from a file. It keeps track of used UUIDs in a separate file to avoid duplicates. Basically, it works like this: Read all the used UUIDs into memory Go through the main UUID file When it finds a UUID that hasn't been used, it: Marks it as used Returns it If all UUIDs are used up, it throws an error The script is pretty quick because it only reads what it needs and just appends to the 'used' file instead of rewriting everything. It should work fine for your daily production run.
2
0
78,835,461
2024-8-5
https://stackoverflow.com/questions/78835461/i-am-trying-to-add-alt-text-to-images-inside-a-pdf-programatically
I have the ALT text generated just need to add it somehow to the images under the figure tag. A little background - I want to my my pdf accessible to the WCAG 2.1 AA standards and i am using adobe autotag feature to tag the pdf. It tags the images as /figure. I can totally extract the figures and generate alt text but I cant find a way to embed or add that alt text to the image and make it WCAG 2.1 AA compliant. I ultimately also want to add this to a lambda function in AWS. Is there any way I could do so? Thank you! I tried using multiple open source libraries pikepdf,pymupdf, and some more and also tried converting the pdf to html or xml but the issue with that is the pdf cant be converted back exactly to what it was. I also tried adding it directly in code but the file goes corrupt.
The MCID for Alt text is either allocated at the time of PDF generation (so, for this web HTML page, by the browser's PDF generator), or it can easily be assigned manually in a GUI while checking the other human-verified content. Thus Acrobat pre-flight is the simplest and easiest point to index Alt text during the mandatory PDF/UA post-production. In a web page there is a direct 1:1 relationship: the alt= attribute is combined directly with the img src. That direct association is not maintained in a PDF. <div class="gravatar-wrapper-32"> <img src="https://www.gravatar.com/avatar/f50f5b351c1d07d2a5a8f023e1731768?s=64&amp;d=identicon&amp;r=PG&amp;f=y&amp;so-version=2" alt="Aryan Khanna's user avatar" width="32" height="32" class="bar-sm"></div> Attempting to add all the interrelated production components inside a PDF stream is usually fraught with problems, since all existing file components need re-indexing and renumbering, which becomes a massive, slow internal task. Adding the required object and all of its dependents or ancestors (Parent = 119), and/or any child objects, midway through a PDF is not easy. This is object 120 of 156. The image can be anywhere in the file, as the image and the /Alt text are not directly related but are just numbers in a page index. In this case, the image was actually placed way back as the document's object number 11. 120 0 obj <</Type /StructElem /S /Figure /Alt (Aryan Khanna's user avatar) /P 119 0 R /K [ << /Type /MCR /Pg 2 0 R /MCID 34 >> ] /ID (node00000026) >> endobj To place the tag, find the image number and look for it in the page contents; here it is added as /X11. /X11 Do Now inject the related tag's /MCID 34 number before it: /P<</MCID 34>>BDC /X11 Do That is where the link to a tag is manually placed before the correct image as a child reference, so it will be seen as a tag for an image. However, since EVERY PDF needs AT LEAST two manual visual checks to verify images, it is easiest to check the image alt data at the same time.
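Not the manual workflow recommended above, but as a hedged sketch of what the same edit could look like in script form: walking an existing structure tree (for example one produced by Adobe's autotagger, as in the question) and writing an /Alt entry onto each /Figure StructElem. This assumes pikepdf is available and that a structure tree already exists; the alt_for lookup and the file names are placeholders.

import pikepdf

def tag_figures(in_path, out_path, alt_for):
    with pikepdf.open(in_path) as pdf:
        root = pdf.Root
        if "/StructTreeRoot" not in root:
            raise RuntimeError("PDF has no structure tree - autotag it first")

        counter = 0

        def walk(node):
            nonlocal counter
            if isinstance(node, pikepdf.Array):
                for child in node:
                    walk(child)
            elif isinstance(node, pikepdf.Dictionary):
                if node.get("/S") == pikepdf.Name("/Figure"):
                    # Write the alt text directly onto the Figure StructElem.
                    node.Alt = pikepdf.String(alt_for(counter))
                    counter += 1
                if "/K" in node:
                    walk(node.K)

        walk(root.StructTreeRoot)
        pdf.save(out_path)

tag_figures("tagged.pdf", "tagged_with_alt.pdf", lambda i: f"Figure {i + 1}")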
2
1
78,833,796
2024-8-5
https://stackoverflow.com/questions/78833796/reload-module-that-is-imported-into-init-py
What works: I have a package version_info in which I define a string version_info. When I increment version_info.version_info, the main code prints out the incremented value after the reload. What doesn't work: When I increment the value in version_info_sub.py, it is not updated upon reload. I suspect that the importlib.reload doesn't go accross the from .sub import version_info_sub statement. How can I achieve that also the value from sub.py gets reloaded? Main code: from importlib import reload import version_info ... reload(version_info) print(version_info.version_string) # successfully updated print(version_info.version_info_sub) # stays on the old value version_info/__init__.py: from .sub import version_info_sub version_string = "6" version_info/sub.py: version_info_sub = "6"
I think I got your problem. I made a reload_package function for you. import inspect from importlib import reload def reload_package(package_): modules_names_paths = inspect.getmembers(package_, inspect.ismodule) modules_names = [x[0] for x in modules_names_paths] for module_name in modules_names: module_ = getattr(package_, module_name) reload(module_) reload(package_) The problem is that you reload the package, but the sub module is not reloaded. You could reload sub directly as well, but in order to stay agnostic (the caller should not have to know about "sub"), you have to loop over the modules in the package, reload them, and then reload the package. This function does that. Happy coding!
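A possible usage sketch with the package layout from the question (assuming reload_package above is defined in, or imported into, the calling script):

import version_info

# ... edit version_info/sub.py and version_info/__init__.py on disk ...
reload_package(version_info)
print(version_info.version_string)      # updated
print(version_info.version_info_sub)    # now also updated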
2
1
78,831,434
2024-8-4
https://stackoverflow.com/questions/78831434/new-column-in-pandas-dataframe-using-least-squares-from-scipy-optimize
I have a Pandas dataframe that looks like the following: Race_ID Date Student_ID feature1 1 1/1/2023 3 0.02167131 1 1/1/2023 4 0.17349148 1 1/1/2023 6 0.08438952 1 1/1/2023 8 0.04143787 1 1/1/2023 9 0.02589056 1 1/1/2023 1 0.03866752 1 1/1/2023 10 0.0461553 1 1/1/2023 45 0.09212758 1 1/1/2023 23 0.10879326 1 1/1/2023 102 0.186921 1 1/1/2023 75 0.02990676 1 1/1/2023 27 0.02731904 1 1/1/2023 15 0.06020158 1 1/1/2023 29 0.06302721 3 17/4/2022 5 0.2 3 17/4/2022 2 0.1 3 17/4/2022 3 0.55 3 17/4/2022 4 0.15 I would like to create a new column using the following method: Define the following function using integrals import numpy as np from scipy import integrate from scipy.stats import norm import scipy.integrate as integrate from scipy.optimize import fsolve from scipy.optimize import least_squares def integrandforpi_i(xi, ti, *theta): prod = 1 for t in theta: prod = prod * (1 - norm.cdf(xi - t)) return prod * norm.pdf(xi - ti) def pi_i(ti, *theta): return integrate.quad(integrandforpi_i, -np.inf, np.inf, args=(ti, *theta))[0] for each Race_ID, the value for each Student_ID in the new column is given by solving a system of nonlinear equations using least_squares in scipy.optimize as follows: For example, for race 1, there are 14 students in the race and we will have to solve the following system of 14 nonlinear equations and we restrict the bound in between -2 and 2: def equations(params): t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = params return (pi_i(t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.02167131, pi_i(t2,t1,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.17349148, pi_i(t3,t2,t1,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.08438952, pi_i(t4,t2,t3,t1,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.04143787, pi_i(t5,t2,t3,t4,t1,t6,t7,t8,t9,t10,t11,t12,t13,t14) - 0.02589056, pi_i(t6,t2,t3,t4,t5,t1,t7,t8,t9,t10,t11,t12,t13,t14) - 0.03866752, pi_i(t7,t2,t3,t4,t5,t6,t1,t8,t9,t10,t11,t12,t13,t14) - 0.0461553, pi_i(t8,t2,t3,t4,t5,t6,t7,t1,t9,t10,t11,t12,t13,t14) - 0.09212758, pi_i(t9,t2,t3,t4,t5,t6,t7,t8,t1,t10,t11,t12,t13,t14) - 0.10879326, pi_i(t10,t2,t3,t4,t5,t6,t7,t8,t9,t1,t11,t12,t13,t14) - 0.186921, pi_i(t11,t2,t3,t4,t5,t6,t7,t8,t9,t10,t1,t12,t13,t14) - 0.02990676, pi_i(t12,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t1,t13,t14) - 0.02731904, pi_i(t13,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t1,t14) - 0.06020158, pi_i(t14,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t13,t14,t1) - 0.06302721) res = least_squares(equations, (1,1,1,1,1,1,1,1,1,1,1,1,1,1), bounds = ((-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2,-2), (2,2,2,2,2,2,2,2,2,2,2,2,2,2))) t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = res.x Solving gives t1,t2,t3,t4,t5,t6,t7,t8,t9,t10,t11,t12,t13,t14 = [1.38473533 0.25616609 0.6935956 1.07314877 1.30201502 1.10781642 1.01839475 0.64349646 0.54630158 0.20719836 1.23347391 1.27642131 0.879412 0.83338882] Similarly, for race 2, there are 4 students competing and we will have to solve the following system of 4 nonlinear equations: def equations(params): t1,t2,t3,t4 = params return (pi_i(t1,t2,t3,t4) - 0.2, pi_i(t2,t1,t3,t4) - 0.1, pi_i(t3,t2,t1,t4) - 0.55, pi_i(t4,t2,t3,t1) - 0.15) res = least_squares(equations, (1,1,1,1), bounds = ((-2,-2,-2,-2), (2,2,2,2))) t1,t2,t3,t4 = res.x which gives t1,t2,t3,t4 = [0.9209873 1.37615468 0.12293934 1.11735818]. 
Hence the desired outcome looks like Race_ID Date Student_ID feature1 new_column 1 1/1/2023 3 0.02167131 1.38473533 1 1/1/2023 4 0.17349148 0.25616609 1 1/1/2023 6 0.08438952 0.6935956 1 1/1/2023 8 0.04143787 1.07314877 1 1/1/2023 9 0.02589056 1.30201502 1 1/1/2023 1 0.03866752 1.10781642 1 1/1/2023 10 0.0461553 1.01839475 1 1/1/2023 45 0.09212758 0.64349646 1 1/1/2023 23 0.10879326 0.54630158 1 1/1/2023 102 0.186921 0.20719836 1 1/1/2023 75 0.02990676 1.23347391 1 1/1/2023 27 0.02731904 1.27642131 1 1/1/2023 15 0.06020158 0.879412 1 1/1/2023 29 0.06302721 0.83338882 3 17/4/2022 5 0.2 0.9209873 3 17/4/2022 2 0.1 1.37615468 3 17/4/2022 3 0.55 0.12293934 3 17/4/2022 4 0.15 1.11735818 I have no idea how to generate the new column. Also, my actual dataframe is much larger with many races so I would like to ask is there any way to speed up the computation too, thanks a lot.
Here is how to parametrize and automatize your regression for each group. First we load your dataset: import io import numpy as np import pandas as pd from scipy import integrate, stats, optimize data = pd.read_fwf(io.StringIO("""Race_ID Date Student_ID feature1 1 1/1/2023 3 0.02167131 1 1/1/2023 4 0.17349148 1 1/1/2023 6 0.08438952 1 1/1/2023 8 0.04143787 1 1/1/2023 9 0.02589056 1 1/1/2023 1 0.03866752 1 1/1/2023 10 0.0461553 1 1/1/2023 45 0.09212758 1 1/1/2023 23 0.10879326 1 1/1/2023 102 0.186921 1 1/1/2023 75 0.02990676 1 1/1/2023 27 0.02731904 1 1/1/2023 15 0.06020158 1 1/1/2023 29 0.06302721 3 17/4/2022 5 0.2 3 17/4/2022 2 0.1 3 17/4/2022 3 0.55 3 17/4/2022 4 0.15 """)) We slightly rewrite your functions: def integrand(x, ti, *theta): product = 1. for t in theta: product = product * (1 - stats.norm.cdf(x - t)) return product * stats.norm.pdf(x - ti) def integral(ti, *theta): return integrate.quad(integrand, -np.inf, np.inf, args=(ti, *theta))[0] Then we parametrize the system of equations: def change_order(parameters, i): return [parameters[i]] + parameters[0:i] + parameters[i+1:] def system(parameters, t): parameters = parameters.tolist() values = np.full(len(t), np.nan) for i in range(len(parameters)): parameters = change_order(parameters, i) values[i] = integral(*parameters) - t[i] return values At this stage we can solve any system of length n, now we create a function that allow us to use groupby in pandas: def solver(x): t = x["feature1"].values u = np.ones_like(t) solution = optimize.least_squares(system, u, bounds=[-2*u, 2*u], args=(t,)) return solution.x And we apply it to the dataframe: data["new_column"] = data.groupby("Race_ID").apply(solver, include_groups=False).explode().values Which leads to: Race_ID Date Student_ID feature1 new_column 0 1 1/1/2023 3 0.021671 1.383615 1 1 1/1/2023 4 0.173491 0.25823 2 1 1/1/2023 6 0.084390 0.695116 3 1 1/1/2023 8 0.041438 1.073675 4 1 1/1/2023 9 0.025891 1.301445 5 1 1/1/2023 1 0.038668 1.108209 6 1 1/1/2023 10 0.046155 1.019114 7 1 1/1/2023 45 0.092128 0.645103 8 1 1/1/2023 23 0.108793 0.548053 9 1 1/1/2023 102 0.186921 0.209302 10 1 1/1/2023 75 0.029907 1.23329 11 1 1/1/2023 27 0.027319 1.276228 12 1 1/1/2023 15 0.060202 0.880537 13 1 1/1/2023 29 0.063027 0.855987 14 3 17/4/2022 5 0.200000 0.920987 15 3 17/4/2022 2 0.100000 1.376155 16 3 17/4/2022 3 0.550000 0.122939 17 3 17/4/2022 4 0.150000 1.117358 Indeed the whole operation is rather slow for high dimensional groups. There are few things you can do to speed up the whole process: use numba for the solving step; parallelize group solving; But it still is an heavy operation if your dataset is huge or have high dimensional groups.
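As a sketch of the parallelisation hinted at above (not part of the original code), the per-group solve can be farmed out to a process pool; solve_group is a helper name introduced here, and it assumes solver and data are defined at module level so the workers can pickle/import them:

from concurrent.futures import ProcessPoolExecutor

def solve_group(item):
    race_id, frame = item
    return race_id, solver(frame)

if __name__ == "__main__":
    groups = list(data.groupby("Race_ID"))          # rows are contiguous per Race_ID
    with ProcessPoolExecutor() as pool:
        solutions = dict(pool.map(solve_group, groups))
    data["new_column"] = np.concatenate(
        [solutions[race_id] for race_id, _ in groups]
    )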
2
2
78,830,224
2024-8-4
https://stackoverflow.com/questions/78830224/how-can-i-get-around-an-unresolved-hostname-or-unrecognized-name-error-using-htt
I a trying to access a website's information programmatically, but on both Java and Python it is unable to resolve a hostname. If I specify the IP address, it changes the error to TLSV1_UNRECOGNIZED_NAME. This website is able to resolve without any additional work through any browser though. I have looked through a lot of potential solutions on here, but for Python it says this issue should have been resolved in 2.7 or 2.8, but I am using 3.10 and still getting that error. In Java, it claims that this is a known error, but the solutions presented such as removing the SNI header through a compilation option, or passing an empty hostname array to HTTPSURLConnection to cancel out the creation of the SNI header doesn't solve it. I have also tried setting the user agent to Mozilla as suggested by an answer on here, but that didn't change anything either. I am sure that it is something unusual about the website, but it is not one I own so I am unable to check much about its configuration. Specifically the website I am trying to see is: URL -> https://epic7db.com/heroes IP -> 157.230.84.20 DNS Lookup -> https://www.nslookup.io/domains/epic7db.com/webservers/ When using nslookup locally, I get back: nslookup epic7db.com Server: UnKnown Address: 10.0.0.1 Non-authoritative answer: Name: epic7db.com Address: 157.230.84.20 Any help would be appreciated as I am essentially throwing things at the wall to see what sticks at this point. EDIT: Adding code samples Python: import requests headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36'} # This is chrome, you can set whatever browser you like url = 'https://epic7db.com' a = requests.get(url,headers) print(a.content) Kotlin using Java's HttpsUrlConnection: import http.SSLSocketFactoryWrapper import java.net.URL import javax.net.ssl.* fun main() { HttpsURLConnection.setDefaultHostnameVerifier { hostName, session -> true } val url = URL("https://epic7db.com") val sslParameters = SSLParameters() val sniHostNames: MutableList<SNIHostName> = ArrayList<SNIHostName>(1) // sniHostNames.add(SNIHostName(url.getHost())) sslParameters.setServerNames(sniHostNames as List<SNIServerName>?) val wrappedSSLSocketFactory: SSLSocketFactory = SSLSocketFactoryWrapper(SSLContext.getDefault().socketFactory, sslParameters) HttpsURLConnection.setDefaultSSLSocketFactory(wrappedSSLSocketFactory) val conn = url.openConnection() as HttpsURLConnection conn.hostnameVerifier = HostnameVerifier { s: String?, sslSession: SSLSession? 
-> true } println(String(conn.inputStream.readAllBytes())) } Suggested helper class in Kotlin/Java: package http; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.net.UnknownHostException; import javax.net.ssl.SSLParameters; import javax.net.ssl.SSLSocket; import javax.net.ssl.SSLSocketFactory; public class SSLSocketFactoryWrapper extends SSLSocketFactory { private final SSLSocketFactory wrappedFactory; private final SSLParameters sslParameters; public SSLSocketFactoryWrapper(SSLSocketFactory factory, SSLParameters sslParameters) { this.wrappedFactory = factory; this.sslParameters = sslParameters; } @Override public Socket createSocket(String host, int port) throws IOException, UnknownHostException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port); setParameters(socket); return socket; } @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException, UnknownHostException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port, localHost, localPort); setParameters(socket); return socket; } @Override public Socket createSocket(InetAddress host, int port) throws IOException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(host, port); setParameters(socket); return socket; } @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(address, port, localAddress, localPort); setParameters(socket); return socket; } @Override public Socket createSocket() throws IOException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(); setParameters(socket); return socket; } @Override public String[] getDefaultCipherSuites() { return wrappedFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return wrappedFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException { SSLSocket socket = (SSLSocket) wrappedFactory.createSocket(s, host, port, autoClose); setParameters(socket); return socket; } private void setParameters(SSLSocket socket) { socket.setSSLParameters(sslParameters); } }
How can I get around an unresolved hostname or unrecognized name error using HTTP(S) in java or python? This is most likely not a problem in the Java or Python code1. A failure to resolve a DNS name is most likely caused by: not talking to the correct DNS server(s), or a failure in the authoritative server, or a change in the authoritative server that hasn't propagated yet. (And 3. will more likely manifest as the DNS name resolving to the wrong IP address, not a resolution failure. Read about DNS entry TTLs.) So ... you don't get around it >in the program<. You get around it by looking outside the program to figure out why the DNS resolution is failing. Then you make changes to fix that. A couple of reasons why a browser might work but a Java or Python application might not (or vice versa): The browser may be configured to send all HTTP / HTTPS requests through a proxy. The proxy may have different DNS access to the machine you are running your code on. In the case of Java (at least on Linux), Java has its own way of configuring the "local" DNS server. Normally that is not a problem, but sometimes it could mean that your application and browser are using different servers. When you tried to use the IP address in the URL, you bypassed the DNS lookup step. But then you ran into the problem that TLS certificates will not work with IP addresses. (You could "hack" around this, but the hacks have potentially serious security implications.) 1 - As others have pointed out, there is nothing wrong with the code you added to the question. Indeed, Thorbjørn attests that the code works for him exactly as written!
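To check what the Python process itself resolves (and compare it with nslookup or the browser), a small diagnostic sketch using only the standard library:

import socket

def check_resolution(hostname: str) -> None:
    # Uses the same resolver path as requests/urllib in this process.
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        print(f"{hostname}: resolution failed: {exc}")
        return
    for *_, sockaddr in infos:
        print(f"{hostname} -> {sockaddr[0]}")

check_resolution("epic7db.com")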
2
3
78,831,215
2024-8-4
https://stackoverflow.com/questions/78831215/how-to-plot-a-histogram-as-a-scatter-plot
How to plot a similar graph in python? import matplotlib.pylab as plt import numpy as np from scipy.stats import binom y = binom.rvs(n = 10, p = 0.5, size = 100) counts, bins = np.histogram(y, bins=50) plt.scatter(bins[:len(counts)], counts) plt.grid() plt.show()
First off, when the data is discrete, the bin edges should go in between the values. Simply setting bins=50 chops the distance between the lowest and the highest value into 50 equally-sized regions. Some of these regions might get no values if their start and end both lie between the same integers. To show the values in a scatter plot, you can use the centers of the bins as x-position, and the values 1, 2, ... till the count of the bin as the y position. import matplotlib.pyplot as plt import numpy as np from scipy.stats import binom y = binom.rvs(n=10, p=0.5, size=100) counts, bins = np.histogram(y, bins=np.arange(y.min() - 0.5, y.max() + 1, 1)) centers = (bins[:-1] + bins[1:]) / 2 for center, count in zip(centers, counts): plt.scatter(np.repeat(center, count), np.arange(count) + 1, marker='o', edgecolor='blue', color='none') plt.grid(axis='y') plt.ylim(ymin=0) plt.show() Here is an example with a continuous distribution and using filled squares instead of hollow circles as markers: import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator import numpy as np y = np.random.randn(150).cumsum() counts, bins = np.histogram(y, bins=30) centers = (bins[:-1] + bins[1:]) / 2 for center, count in zip(centers, counts): plt.scatter(np.repeat(center, count), np.arange(count) + 1, marker='s', color='crimson') plt.rc('axes', axisbelow=True) plt.grid(True, axis='y') plt.gca().yaxis.set_major_locator(MultipleLocator(1)) plt.show()
4
2
78,832,050
2024-8-4
https://stackoverflow.com/questions/78832050/405-method-not-allowed-error-for-post-request-by-flask-app
I have a simple web app to message (using Twilio API) selected respondents with the following code: app.py client = Client(account_sid, auth_token) @app.route('/') def index(): return render_template('index.html') @app.route('/send_sms',methods=['POST']) def send_sms(): message = request.form['message'] selected_groups = request.form.getlist('groups') selected_secretariat_member = request.form.get('selected_secretariat_member') # more code ... return redirect(url_for('index')) templates/index.html <div class="container mt-5"> <h1 class="text-center">Send Mass SMS</h1> <form method="post" action="{{ url_for('send_sms') }}"> <div class="form-group"> <label for="message">Message</label> <textarea class="form-control" id="message" name="message" rows="3" required></textarea> </div> <div class="form-group"> <label>Select Groups</label> <!-- Groups --> </div> <button type="submit" class="btn btn-primary">Send SMS</button> </form> </div> I was using the live server provided by the VS Code live server extension I believe. When I clicked on submit for my form, I received a 405 Error. When I look at the network section in developer tools, the Response Headers has a row stating which methods are allowed and only GET, HEAD, OPTIONS are allowed. I tried other solutions people had proposed on SO such as: Adding the following to web.config <system.webServer> <modules> <remove name="WebDAVModule" /> </modules> <handlers> <remove name="WebDAV" /> </handlers> </system.webServer> Defining the methods before the index() function @app.route('/', methods=['GET','POST']) def index(): return render_template('index.html') Checking the handler mappings to see if POST is part of their allowed Verbs Adding GET to the methods for send_sms() app.route('/send_sms',methods=['GET','POST']) def send_sms(): I am really confused on what to do. I used to have this error when learning HTML Forms and php but I brushed it off as I didn't have a dedicated web server. However this seems to be a problem with the methods in general. Appreciate the help.
I think I understand the problem. I think the procedures you're following are: You have your index.html open in VS Code. You clicked "Go Live" in order to start Live Server. When using your form, it returns a 405 in browser, like this: I see the exact same header information that you describe when I follow these procedures. The problem isn't with your methods in your Flask app. It is that you're using Live Server (the VS Code extension) to serve your index.html instead of running your app.py code. Try running your Flask app using flask run. My advice for procedures would be: Create a virtual environment. Activate the environment and have Flask installed. Open a shell at your project's root (where app.py lives) (I'm using a bash shell), and use flask run to run your app using the Flask development server. Go to your browser and make your form request. You shouldn't have any problem. #bash python -m venv .venv source .venv/bin/activate pip install --upgrade pip pip install flask flask run #or python app.py You will actually want to run your Flask app.py file to let Flask do the work of rendering your application. Remember that using the Flask development server is not intended for production, but if you want to scale your app, Gunicorn is a more robust WSGI server package that can serve your Flask app.py code. Like so: #bash pip install gunicorn gunicorn -w 4 -b 0.0.0.0:8000 app:app
2
2
78,830,658
2024-8-4
https://stackoverflow.com/questions/78830658/how-to-achieve-the-same-rolling-result-in-polars-as-in-pandas-with-duplicate-tim
I am working with both pandas and polars for rolling operations, but I am encountering different results when dealing with duplicate timestamps. I want to replicate the pandas behavior in polars. Here’s an example illustrating the issue with rolling_sum: Pandas Code: import pandas as pd import polars as pl data = { "timestamp": [ "2023-08-04 10:00:00", "2023-08-04 10:05:00", "2023-08-04 10:10:00", "2023-08-04 10:10:00", "2023-08-04 10:20:00", "2023-08-04 10:20:00", ], "value": [1, 2, 3, 4, 5, 6], } pddf = pd.DataFrame(data) pddf["timestamp"] = pd.to_datetime(pddf["timestamp"]) pddf.set_index("timestamp", inplace=True) pddf["rolling_sum"] = pddf["value"].rolling("10min").sum() print(pddf) Output: value rolling_sum timestamp 2023-08-04 10:00:00 1 1.0 2023-08-04 10:05:00 2 3.0 2023-08-04 10:10:00 3 5.0 2023-08-04 10:10:00 4 9.0 2023-08-04 10:20:00 5 5.0 2023-08-04 10:20:00 6 11.0 Polars Code: pldf = ( pl.DataFrame(data) .with_columns(pl.col("timestamp").str.strptime(pl.Datetime)) .sort("timestamp") .with_columns( pl.col("value") .rolling_sum_by(by="timestamp", window_size="10m") .alias("rolling_sum") ) ) print(pldf) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ value ┆ rolling_sum β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════║ β”‚ 2023-08-04 10:00:00 ┆ 1 ┆ 1 β”‚ β”‚ 2023-08-04 10:05:00 ┆ 2 ┆ 3 β”‚ β”‚ 2023-08-04 10:10:00 ┆ 3 ┆ 9 β”‚ β”‚ 2023-08-04 10:10:00 ┆ 4 ┆ 9 β”‚ β”‚ 2023-08-04 10:20:00 ┆ 5 ┆ 11 β”‚ β”‚ 2023-08-04 10:20:00 ┆ 6 ┆ 11 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Question: How can I achieve the same rolling_* result in polars as in pandas when dealing with duplicate timestamps? The pandas output includes separate rolling sum calculations for each row with the same timestamp, but polars aggregates them differently. Is there a way to configure polars to handle duplicate timestamps in a similar manner to pandas? My question is related to this polars issue. Thank you in advance for your help!
AFAIK there are currently no parameters of pl.DataFrame.rolling or pl.Expr.rolling_sum_by allowing to control the handling of duplicate values in the by column. This would probably make a good feature request. Until then, you could explicitly aggregate all values in the window into a list column. For duplicate values in the by column you'd take increasing subsets of the list column and aggregate afterwards. ( pl.DataFrame(data) .with_columns( pl.col("timestamp").str.strptime(pl.Datetime) ) .sort("timestamp") # aggregate values into list column .rolling(index_column="timestamp", period="10m").agg(pl.all()) # take sum of sliced list column to get desired behaviour .with_columns( pl.col("value") .list.slice( 0, pl.col("value").list.len() - pl.int_range(pl.len()).reverse().over("timestamp") ) .list.sum() .name.suffix("_rolling_sum") ) ) shape: (6, 3) ┌─────────────────────┬───────────┬───────────────────┐ │ timestamp ┆ value ┆ value_rolling_sum │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ list[i64] ┆ i64 │ ╞═════════════════════╪═══════════╪═══════════════════╡ │ 2023-08-04 10:00:00 ┆ [1] ┆ 1 │ │ 2023-08-04 10:05:00 ┆ [1, 2] ┆ 3 │ │ 2023-08-04 10:10:00 ┆ [2, 3, 4] ┆ 5 │ │ 2023-08-04 10:10:00 ┆ [2, 3, 4] ┆ 9 │ │ 2023-08-04 10:20:00 ┆ [5, 6] ┆ 5 │ │ 2023-08-04 10:20:00 ┆ [5, 6] ┆ 11 │ └─────────────────────┴───────────┴───────────────────┘
3
2
78,829,918
2024-8-3
https://stackoverflow.com/questions/78829918/is-there-any-way-to-know-when-im-passed-the-last-item-while-iterating-through-a
I'm trying to create a text representation of a JSONField that has some data structured as an array of dictionaries like this: [ { "key1":"value1", "key2":"value2" }, { "key3":"value3", "key4":"value4", "key5":"value5" } ] My goal is to represent this data in the Django template like this: ( key1=value1 & key2=value2 ) || ( key3=value3 & key4=value4 & key5=value5 ) Now I'd iterate through the array and see if I'm not hitting the last dictionary so I can add an || between the condition representation text since it's already an array list like: {% for dict in data %} // Do stuff with dict {% if data|last != dict %} || {% endif %} {% endfor %} However, since a dictionary doesn't have a last thing, I'm having a hard time to iterate through the "key,value" when I'm doing stuff with each dictionary object when I've got to append an "&" only if I'm not hitting the end of this dict items. {% for k,v in dict %} k=v // append "&" if this is not the last key being iterated? {% endfor %} Any suggestions/workarounds/ideas would be much appreciated :)
Just found it! Silly me, it seems Django already provides a pretty neat built-in forloop object for templates that works like a charm! I'll drop it here for anyone who might have the same problem: {% for k, v in dict.items %} {{ k }}={{ v }} {% if not forloop.last %} &amp; {% endif %} {% endfor %}
2
1
78,829,678
2024-8-3
https://stackoverflow.com/questions/78829678/is-there-a-way-to-do-this-with-a-list-comprehension
I have a list that looks something like this: data = ['1', '12', '123'] I want to produce a new list, that looks like this: result = ['$1', '1', '$2', '12', '$3', '123'] where the number after the $ sign is the length of the next element. The straightforward way to do this is with a for loop: result = [] for element in data: result += [f'${len(element)}'] + [element] but I was wondering if it's possible to do it in a more elegant way - with a list comprehension, perhaps? I could do result = [[f'${len(e)}', e] for e in data] but this results in a list of lists: [['$1', '1'], ['$2', '12'], ['$3', '123']] I could flatten that with something like result = sum([[f'${len(e)}', e] for e in data], []) or even result = [x for xs in [[f'${len(e)}', e] for e in data] for x in xs] but this is getting rather difficult to read. Is there a better way to do this?
You can do it with two loops: result = [item for s in data for item in (f"${len(s)}", s)] ['$1', '1', '$2', '12', '$3', '123']
3
7
78,817,543
2024-7-31
https://stackoverflow.com/questions/78817543/griddb-tql-invalid-column
I'm currently working with GridDB for a project involving IoT data, and I'm facing an issue with executing SQL-like queries using GridDB's TQL (Time Series SQL-like Query Language). Here is a brief description of what I am trying to achieve: I have a container in GridDB which stores IoT sensor data. I am trying to query this data using TQL to fetch records based on certain conditions. Here is a sample of my container schema and the data insertion code: import griddb_python as griddb factory = griddb.StoreFactory.get_instance() gridstore = factory.get_store( host='127.0.0.1', port=10001, cluster_name='defaultCluster', username='admin', password='admin' ) # Define container schema conInfo = griddb.ContainerInfo( name="sensorData", column_info_list=[ ["TimeStamp", griddb.Type.TIMESTAMP], ["Sensor_id", griddb.Type.STRING], ["Value", griddb.Type.DOUBLE] ], type=griddb.ContainerType.TIME_SERIES, row_key=True ) # Create container ts = gridstore.put_container(conInfo) ts.set_auto_commit(False) # Insert sample data import datetime ts.put([datetime.datetime.now(), "sensor_1", 25.5]) ts.put([datetime.datetime.now(), "sensor_2", 26.7]) ts.commit() Now, I am trying to execute the following TQL query to fetch records: query = ts.query("SELECT * FROM sensorData WHERE value > 26") rs = query.fetch() while rs.has_next(): data = rs.next() print(data) Im getting the following error though: InvalidColumnException: Column (value) not found I've checked the schema and value exists so i'm not sure if it is talking about some other column or something wrong with value exactly? Any help would be appreciated.
In some SQL databases, such as MySQL, column names are case-insensitive by default. TQL, however, is case-sensitive, so the column must be referenced exactly as it was defined. Change value to Value: query = ts.query("SELECT * FROM sensorData WHERE Value > 26")
2
2
78,828,192
2024-8-3
https://stackoverflow.com/questions/78828192/what-caused-python-3-13-0b3-compiled-with-gil-disabled-be-slower-than-3-12-0
I did a simple performance test on python 3.12.0 against python 3.13.0b3 compiled with a --disable-gil flag. The program executes calculations of a Fibonacci sequence using ThreadPoolExecutor or ProcessPoolExecutor. The docs on the PEP introducing disabled GIL says that there is a bit of overhead mostly due to biased reference counting followed by per-object locking (https://peps.python.org/pep-0703/#performance). But it says the overhead on pyperformance benchmark suit is around 5-8%. My simple benchmark shows a significant difference in the performance. Indeed, python 3.13 without GIL utilize all CPUs with a ThreadPoolExecutor but it is much slower than python 3.12 with GIL. Based on the CPU utilization and the elapsed time we can conclude that with python 3.13 we do multiple times more clock cycles comparing to the 3.12. Program code: from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor import datetime from functools import partial import sys import logging import multiprocessing logging.basicConfig( format='%(levelname)s: %(message)s', ) logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) cpus = multiprocessing.cpu_count() pool_executor = ProcessPoolExecutor if len(sys.argv) > 1 and sys.argv[1] == '1' else ThreadPoolExecutor python_version_str = f'{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}' logger.info(f'Executor={pool_executor.__name__}, python={python_version_str}, cpus={cpus}') def fibonacci(n: int) -> int: if n < 0: raise ValueError("Incorrect input") elif n == 0: return 0 elif n == 1 or n == 2: return 1 else: return fibonacci(n-1) + fibonacci(n-2) start = datetime.datetime.now() with pool_executor(8) as executor: for task_id in range(30): executor.submit(partial(fibonacci, 30)) executor.shutdown(wait=True) end = datetime.datetime.now() elapsed = end - start logger.info(f'Elapsed: {elapsed.total_seconds():.2f} seconds') Test results: # TEST Linux 5.15.0-58-generic, Ubuntu 20.04.6 LTS INFO: Executor=ThreadPoolExecutor, python=3.12.0, cpus=2 INFO: Elapsed: 10.54 seconds INFO: Executor=ProcessPoolExecutor, python=3.12.0, cpus=2 INFO: Elapsed: 4.33 seconds INFO: Executor=ThreadPoolExecutor, python=3.13.0b3, cpus=2 INFO: Elapsed: 22.48 seconds INFO: Executor=ProcessPoolExecutor, python=3.13.0b3, cpus=2 INFO: Elapsed: 22.03 seconds Can anyone explain why do I experience such a difference when comparing the overhead to the one from pyperformance benchmark suit? EDIT 1 I have tried with pool_executor(cpus) instead of pool_executor(8) -> still got the similar results. I watched this video https://www.youtube.com/watch?v=zWPe_CUR4yU and executed the following test: https://github.com/ArjanCodes/examples/blob/main/2024/gil/main.py Results: Version of python: 3.12.0a7 (main, Oct 8 2023, 12:41:37) [GCC 9.4.0] GIL cannot be disabled Single-threaded: 78498 primes in 6.67 seconds Threaded: 78498 primes in 7.89 seconds Multiprocessed: 78498 primes in 5.85 seconds Version of python: 3.13.0b3 experimental free-threading build (heads/3.13.0b3:7b413952e8, Jul 27 2024, 11:19:31) [GCC 9.4.0] GIL is disabled Single-threaded: 78498 primes in 61.42 seconds Threaded: 78498 primes in 32.29 seconds Multiprocessed: 78498 primes in 39.85 seconds so yet another test on my machine when we end up with multiple times slower performance. Btw. On the video we can see the similar overhead results as it is described in the PEP. 
EDIT 2 As @ekhumoro suggested I did configure the build with the following flags: ./configure --disable-gil --enable-optimizations and it seems the --enable-optimizations flag makes a significant difference in the considered benchmarks. The previous build was done with the following configuration: ./configure --with-pydebug --disable-gil. Tests results: Fibonacci benchmark: INFO: Executor=ThreadPoolExecutor, python=3.12.0, cpus=2 INFO: Elapsed: 10.25 seconds INFO: Executor=ProcessPoolExecutor, python=3.12.0, cpus=2 INFO: Elapsed: 4.27 seconds INFO: Executor=ThreadPoolExecutor, python=3.13.0, cpus=2 INFO: Elapsed: 6.94 seconds INFO: Executor=ProcessPoolExecutor, python=3.13.0, cpus=2 INFO: Elapsed: 6.94 seconds Prime numbers benchmark: Version of python: 3.12.0a7 (main, Oct 8 2023, 12:41:37) [GCC 9.4.0] GIL cannot be disabled Single-threaded: 78498 primes in 5.77 seconds Threaded: 78498 primes in 7.21 seconds Multiprocessed: 78498 primes in 3.23 seconds Version of python: 3.13.0b3 experimental free-threading build (heads/3.13.0b3:7b413952e8, Aug 3 2024, 14:47:48) [GCC 9.4.0] GIL is disabled Single-threaded: 78498 primes in 7.99 seconds Threaded: 78498 primes in 4.17 seconds Multiprocessed: 78498 primes in 4.40 seconds So the general gain from moving from python 3.12 multiprocessing to python 3.12 no-gil multi-threading are significant memory savings (we do have only a single process). When we compare CPU overhead for the machine with only 2 cores: [Fibonacci] Python 3.13 multi-threading against Python 3.12 multiprocessing: (6.94 - 4.27) / 4.27 * 100% ~= 63% overhead [Prime numbers] Python 3.13 multi-threading against Python 3.12 multiprocessing: (4.17 - 3.23) / 3.23 * 100% ~= 29% overhead
From the latest question edits, it seems the version of Python 3.13 used for testing was built with debug mode enabled and without optimisations. The former flag in particular can have a large impact on performance testing, whilst the latter has a much smaller, but still significant, impact. In general, it's best to avoid drawing any conclusions about performance issues when testing with development builds of Python.
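To verify how a given interpreter was built before benchmarking, something along these lines can help (sys._is_gil_enabled is an underscore-prefixed 3.13 API, so treat it as best-effort):

import sys, sysconfig

# The ./configure flags the interpreter was built with (e.g. --with-pydebug).
print(sysconfig.get_config_var("CONFIG_ARGS"))
# Non-zero/True means a debug build - unsuitable for performance testing.
print(sysconfig.get_config_var("Py_DEBUG"))
# On 3.13 free-threaded builds, report whether the GIL is currently enabled.
if hasattr(sys, "_is_gil_enabled"):
    print(sys._is_gil_enabled())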
5
4
78,828,636
2024-8-3
https://stackoverflow.com/questions/78828636/valueerror-while-saving-a-dataframe
I am facing hurdle while saving a pandas data frame to parquet file Code I am using - import pandas as pd import yfinance as yf start_date = "2022-08-06" end_date = "2024-08-05" ticker = 'RELIANCE.NS' data = yf.download(tickers=ticker, start=start_date, end=end_date, interval="1h") data.reset_index(inplace=True) data['Date'] = data['Datetime'].dt.date data['Time'] = data['Datetime'].dt.time data.to_parquet('./RELIANCE.parquet') The error it produces is - ValueError: Can't infer object conversion type: 0 Can someone tell me how to fix this. PS: Detailed error below- --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[1], line 15 12 data['Date'] = data['Datetime'].dt.date 13 data['Time'] = data['Datetime'].dt.time ---> 15 data.to_parquet('./RELIANCE.parquet') File ~/python_venv/lib/python3.10/site-packages/pandas/util/_decorators.py:333, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs) 327 if len(args) > num_allow_args: 328 warnings.warn( 329 msg.format(arguments=_format_argument_list(allow_args)), 330 FutureWarning, 331 stacklevel=find_stack_level(), 332 ) --> 333 return func(*args, **kwargs) File ~/python_venv/lib/python3.10/site-packages/pandas/core/frame.py:3113, in DataFrame.to_parquet(self, path, engine, compression, index, partition_cols, storage_options, **kwargs) 3032 """ 3033 Write a DataFrame to the binary parquet format. 3034 (...) 3109 >>> content = f.read() 3110 """ 3111 from pandas.io.parquet import to_parquet -> 3113 return to_parquet( 3114 self, 3115 path, 3116 engine, 3117 compression=compression, 3118 index=index, 3119 partition_cols=partition_cols, 3120 storage_options=storage_options, 3121 **kwargs, 3122 ) File ~/python_venv/lib/python3.10/site-packages/pandas/io/parquet.py:480, in to_parquet(df, path, engine, compression, index, storage_options, partition_cols, filesystem, **kwargs) 476 impl = get_engine(engine) 478 path_or_buf: FilePath | WriteBuffer[bytes] = io.BytesIO() if path is None else path --> 480 impl.write( 481 df, 482 path_or_buf, 483 compression=compression, 484 index=index, 485 partition_cols=partition_cols, 486 storage_options=storage_options, 487 filesystem=filesystem, 488 **kwargs, 489 ) 491 if path is None: 492 assert isinstance(path_or_buf, io.BytesIO) File ~/python_venv/lib/python3.10/site-packages/pandas/io/parquet.py:349, in FastParquetImpl.write(self, df, path, compression, index, partition_cols, storage_options, filesystem, **kwargs) 344 raise ValueError( 345 "storage_options passed with file object or non-fsspec file path" 346 ) 348 with catch_warnings(record=True): --> 349 self.api.write( 350 path, 351 df, 352 compression=compression, 353 write_index=index, 354 partition_on=partition_cols, 355 **kwargs, 356 ) File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:1304, in write(filename, data, row_group_offsets, compression, file_scheme, open_with, mkdirs, has_nulls, write_index, partition_on, fixed_text, append, object_encoding, times, custom_metadata, stats) 1301 check_column_names(data.columns, partition_on, fixed_text, 1302 object_encoding, has_nulls) 1303 ignore = partition_on if file_scheme != 'simple' else [] -> 1304 fmd = make_metadata(data, has_nulls=has_nulls, ignore_columns=ignore, 1305 fixed_text=fixed_text, 1306 object_encoding=object_encoding, 1307 times=times, index_cols=index_cols, 1308 partition_cols=partition_on, cols_dtype=cols_dtype) 1309 if custom_metadata: 1310 kvm = fmd.key_value_metadata or [] 
File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:904, in make_metadata(data, has_nulls, ignore_columns, fixed_text, object_encoding, times, index_cols, partition_cols, cols_dtype) 902 se.name = column 903 else: --> 904 se, type = find_type(data[column], fixed_text=fixed, 905 object_encoding=oencoding, times=times, 906 is_index=is_index) 907 col_has_nulls = has_nulls 908 if has_nulls is None: File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:122, in find_type(data, fixed_text, object_encoding, times, is_index) 120 elif dtype == "O": 121 if object_encoding == 'infer': --> 122 object_encoding = infer_object_encoding(data) 124 if object_encoding == 'utf8': 125 type, converted_type, width = (parquet_thrift.Type.BYTE_ARRAY, 126 parquet_thrift.ConvertedType.UTF8, 127 None) File ~/python_venv/lib/python3.10/site-packages/fastparquet/writer.py:357, in infer_object_encoding(data) 355 s += 1 356 else: --> 357 raise ValueError("Can't infer object conversion type: %s" % data) 358 if s > 10: 359 break ValueError: Can't infer object conversion type: 0 2022-08-08 1 2022-08-08 2 2022-08-08 3 2022-08-08 4 2022-08-08 ... 3398 2024-08-02 3399 2024-08-02 3400 2024-08-02 3401 2024-08-02 3402 2024-08-02 Name: Date, Length: 3403, dtype: object
Your code worked for me without any issues. I assume this issue is arising from the parquet engine that you are using to save the file as parquet (io.parquet.engine: pyarrow or fastparquet). Here are the versions of the libraries I have used: Pandas version: 2.2.2 PyArrow version: 16.1.0 To make sure you are using pyarrow or any other desired engine, specify it explicitly like this: data.to_parquet('./RELIANCE.parquet', engine='pyarrow') Here are the first two rows of the saved parquet file: {"Datetime":1659930300000,"Open":2532.25,"High":2572.75,"Low":2531.39990234375,"Close":2569,"Adj Close":2569,"Volume":0,"Date":"2022-08-08T00:00:00.000Z","Time":33300000000} {"Datetime":1659933900000,"Open":2568.60009765625,"High":2571,"Low":2562.10009765625,"Close":2567.300048828125,"Adj Close":2567.300048828125,"Volume":392815,"Date":"2022-08-08T00:00:00.000Z","Time":36900000000} If you would rather not change your parquet engine, you should investigate which data type conversions might go wrong. For example, you can try converting the time column to a string.
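As a sketch of that last suggestion (keeping fastparquet and sidestepping its type inference), the derived columns can be stored as plain strings before writing:

# Column names as in the question; the object columns now hold plain strings
# instead of datetime.date/datetime.time objects.
data["Date"] = data["Date"].astype(str)
data["Time"] = data["Time"].astype(str)
data.to_parquet("./RELIANCE.parquet", engine="fastparquet")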
3
0
78,828,151
2024-8-3
https://stackoverflow.com/questions/78828151/is-there-a-way-to-prevent-setup-from-launching-a-browser-for-each-test-method
I'm practicing writing test cases for web automation and I have written functions to test login, find my username in the user home page and test logout functionality of GitHub. However, I've learned through both experience and reading that setUp() is initiated before each test method, and my problem is that before every test method it opens a new browser. I want all my test methods to continue testing on the same browser in the same session. Here is my code to show you what I've tried: import unittest from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support.expected_conditions import element_to_be_clickable class GitHubLoginTest(unittest.TestCase): initialized = 0 completed = 0 def setUp(self): if self.initialized < 1: self.initialized = 1 chrome_options = Options() chrome_options.add_experimental_option("detach", True) self.driver = webdriver.Chrome(options=chrome_options) else: pass def test_login(self): driver = self.driver driver.get("https://github.com") driver.find_element(By.LINK_TEXT, "Sign in").click() username_box = WebDriverWait(driver, 10).until(element_to_be_clickable((By.ID, "login_field"))) username_box.send_keys("<username>") password_box = driver.find_element(By.NAME, "password") password_box.send_keys("<password>") password_box.submit() self.completed += 1 print(self.completed) print(self.initialized) def test_username_presence(self): print(self.completed) print(self.initialized) self.assertIn("SubjectofthePotentate", self.driver.page_source) self.driver.find_element(By.CLASS_NAME, "AppHeader-user").click() profile_label = self.driver.find_element(By.CLASS_NAME, "lh-condensed") user_label = profile_label.get_attribute("innerHTML") print(user_label) self.assertIn("SubjectofthePotentate", user_label) self.completed += 1 print(self.completed) def test_logout(self): self.driver.find_element(By.CLASS_NAME, "DialogOverflowWrapper") self.driver.find_element(By.ID, ":r11:").click() sign_out = WebDriverWait(self.driver, 10).until(element_to_be_clickable((By.NAME, "commit"))) sign_out.click() self.completed += 1 print(self.completed) def tearDown(self): if self.completed == 3: self.driver.close() else: pass if __name__ == "__main__": unittest.main() I tried creating attributes which I called initialization and completed to prevent setUp() from loading another browser and also to prevent tearDown() from closing the browser before all the tests are finished, but a new browser is opened three times, one for each test function. I noticed that when print(self.completed) and print(self.initialized) are executed in the test_login(self) method that they both equal 1, but when print(self.completed) and print(self.initialized) are executed again in the test method test_username_presence(self), self.completed is equal to 0 and self.initialized is equal to 1, so I think that means that the setUp(self) method is being executed before each test method and the attributes I defined at the class level are being reset for some reason. I've tried initializing these attributes using setUpClass(cls) and I also tried this: def __init__(self, other): super().__init__(other) self.completed = 0 self.initialized = 0 But I got the same results, multiple browsers and the last two are empty webpages with no URL. Anyone know how I can continue the tests on one browser in one session?
In python unittest you have 3 levels of setup / teardown. Method level: setUp / tearDown: called before / after each test method. Class level: setUpClass / tearDownClass: called before / after tests in an individual class. Module level: setUpModule / tearDownModule: called before / after tests in an individual module. So in your case you want to define class level setup / teardown. @classmethod def setUpClass(cls): pass @classmethod def tearDownClass(cls): pass See also, https://stackoverflow.com/a/23670844/4413446.
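Applied to the question's scenario, a minimal sketch could look like this (one browser for the whole class; note that unittest runs test methods in alphabetical order, so interdependent tests may need names that sort in the intended order):

import unittest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

class GitHubLoginTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before any test method in this class.
        options = Options()
        options.add_experimental_option("detach", True)
        cls.driver = webdriver.Chrome(options=options)

    @classmethod
    def tearDownClass(cls):
        # Runs once after all test methods have finished.
        cls.driver.quit()

    # test_login, test_username_presence, test_logout as in the question,
    # all using self.driver, which now refers to the single class-level browser.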
2
1
78,825,612
2024-8-2
https://stackoverflow.com/questions/78825612/trouble-resizing-widgets-in-custom-tkinter-when-going-into-another-page
I was trying to use the set_widget_scale function to resize my widgets based on the resolution of the window, when I resize the widgets on the main page it works fine when the resolution is chosen, when I go to the other page it raises this error: ("_tkinter.TclError: invalid command name ".!ctkoptionmenu.!dropdownmenu"") import customtkinter as ct class App(ct.CTk): def __init__(self): super().__init__() self.title("Test") self.geometry("640x480") self.Page1() def Page1(self): for i in self.winfo_children(): i.destroy() self.page1 = ct.CTkButton(self , text = "Page 2" , command = lambda: (self.Page2() , self.Resize())) self.page1.pack() self.menu = ct.CTkOptionMenu(self) self.menu.pack() def Page2(self): for i in self.winfo_children(): i.destroy() self.page2 = ct.CTkButton(self , text = "Page 1" , command = lambda: (self.Page1() , self.Resize1())) self.page2.pack() self.menu = ct.CTkOptionMenu(self) self.menu.pack() def Resize(self): ct.set_widget_scaling(2) def Resize1(self): ct.set_widget_scaling(0.5) if __name__ == "__main__": app = App() app.mainloop() This is an example of my problem that I am facing
This method avoids using destroy constantly. Instead, it uses frames to separate pages. You can destroy the frames when not used, as demonstrated by the added destroy button. import customtkinter as ct class App(ct.CTk): def __init__(self): super().__init__() self.title("Test") self.geometry("640x480") self.end_button = ct.CTkButton(self, text="DESTROY") self.end_button.pack(padx=10, pady=30) self.page1_frame = ct.CTkFrame(self) self.page1_button = ct.CTkButton(self.page1_frame, text="Page 1", command=self.show_page2) self.page1_button.pack() self.page1_menu = ct.CTkOptionMenu(self.page1_frame) self.page1_menu.pack() self.page2_frame = ct.CTkFrame(self) self.page2_button = ct.CTkButton(self.page2_frame, text="Page 1", command=self.show_page1) self.page2_button.pack() self.page2_menu = ct.CTkOptionMenu(self.page2_frame) self.page2_menu.pack() self.page1_frame.pack() self.end_button.configure(command=lambda: (self.page1_frame.destroy(), self.page2_frame.destroy())) def show_page2(self): ct.set_widget_scaling(0.5) self.page1_frame.pack_forget() self.page2_frame.pack() def show_page1(self): ct.set_widget_scaling(2) self.page2_frame.pack_forget() self.page1_frame.pack() if __name__ == "__main__": app = App() app.mainloop() I combined the resize and page functions, but it works even when using command=lambda: (self.resize_small(), self.show_page2()) hope this helps
2
1
78,827,639
2024-8-3
https://stackoverflow.com/questions/78827639/playwright-issue-with-the-footer-template-parsing
Playwright (Python) save a page as PDF function works fine when there's no customisation in the header or footer. However, when I try to introduce a custom footer, the values don't seem to get injected appropriately. Example code: from playwright.sync_api import sync_playwright def generate_pdf_with_page_numbers(): with sync_playwright() as p: browser = p.chromium.launch() context = browser.new_context() page = context.new_page() # Navigate to the desired page page.goto('https://example.com') # Generate PDF with page numbers in the footer pdf = page.pdf( path="output.pdf", format="A4", display_header_footer=True, footer_template=""" <div style="width: 100%; text-align: center; font-size: 10px;"> Page {{pageNumber}} of {{totalPages}} </div> """, margin={"top": "40px", "bottom": "40px"} ) browser.close() # Run the function to generate the PDF generate_pdf_with_page_numbers() I was expecting: Page 1 of 1 But actually I get: Page {{pageNumber}} of {{totalPages}} Do you see any issue with this code?
From the Playwright docs, header_template (which has the same format as footer_template) provides a number of magic class names you can use to inject certain pieces of data into the page: from playwright.sync_api import sync_playwright # 1.44.0 def generate_pdf_with_page_numbers(): with sync_playwright() as p: browser = p.chromium.launch() context = browser.new_context() page = context.new_page() page.goto("https://example.com") pdf = page.pdf( path="output.pdf", format="A4", display_header_footer=True, footer_template=""" <div style="width: 100%; text-align: center; font-size: 10px;"> Page <span class="pageNumber"></span> of <span class="totalPages"></span> <div class="title"></div> <div class="url"></div> <div class="date"></div> </div> """, margin={"top": "40px", "bottom": "40px"} ) browser.close() if __name__ == "__main__": generate_pdf_with_page_numbers() Run the code, open output.pdf and observe the title, date, URL and pages injected into the footer.
2
2
78,827,479
2024-8-2
https://stackoverflow.com/questions/78827479/any-efficient-way-to-create-a-heatmap-matrix
Given a set of coordinates and values like coors=np.array([[0,0],[1,1],[2,2]]) heat_values=[3,2,1] I would like to generate a matrix like mtx=[[3,0,0], [0,2,0], [0,0,1]] Are there any functions that can do the job?
I would use numpy.zeros and indexing: mtx = np.zeros(coors.max(0)+1, dtype=int) mtx[tuple(coors.T)] = heat_values Output: array([[3, 0, 0], [0, 2, 0], [0, 0, 1]]) NB. if heat_values is an array, better use dtype=heat_values.dtype. This should generalize to any dimension of coors: coors=np.array([[1,0,0],[0,1,1],[0,2,3]]) heat_values=[9,8,7] mtx = np.zeros(coors.max(0)+1, dtype=int) mtx[tuple(coors.T)] = heat_values array([[[0, 0, 0, 0], [0, 8, 0, 0], [0, 0, 0, 7]], [[9, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]) If you want to force a "square" output: coors=np.array([[0,0],[1,1],[2,3]]) heat_values=[1,2,3] mtx = np.zeros((np.max(coors)+1,)*coors.shape[1], dtype=int) mtx[tuple(coors.T)] = heat_values array([[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 0, 3], [0, 0, 0, 0]])
2
3
78,817,794
2024-7-31
https://stackoverflow.com/questions/78817794/how-to-preserve-input-and-output-values-when-adding-new-tabs
I'm encountering a peculiar issue when developing my Python shiny app. My app currently has the functionality to dynamically generate new tabs with the press of a navset tab called "+". However, after pressing "+", the state (including input and output values) of the previous tabs reset back to empty. Is there a way to preserve the state of any previously existing tabs? My code is outlined below: from shiny import App, Inputs, Outputs, Session, module, reactive, render, ui # Create a Module UI @module.ui def textbox_ui(panelNum): return ui.nav_panel( f"Tab {panelNum}", ui.input_text_area(id=f"test_text", label = "Enter some text"), ui.output_ui(f"display_text_{panelNum}"), value = f"Tab_{panelNum}" ) # Set up module server @module.server def textbox_server(input, output, session, panelNum): @output(id=f"display_text_{panelNum}") @render.text def return_text(): return input[f"test_text"]() # Set up app UI app_ui = ui.page_fluid( ui.tags.head( ui.tags.style( """ body { height: 100vh; overflow-y: auto !important; } """ ) ), ui.h2('Test App', align='center', fillable=True), ui.output_ui("tab_UI"), title = "Test App" ) # Set up server def server(input, output, session): # Set up reactive values navs = reactive.value(0) # Add tabs if the user presses "+" @reactive.effect @reactive.event(input.shiny_tabs) def add_tabs(): if input.shiny_tabs() == "+": navs.set(navs.get() + 1) @output @render.ui def tab_UI(): [textbox_server(str(x), panelNum=x+1) for x in range(navs.get())] ui.update_navs("shiny_tabs", selected = f"Tab_{navs.get()}") return ui.navset_tab( ui.nav_panel("Home", ui.card( ui.card_header("Overview"), ui.p("An example of the outputs clearing") ), value = "panel0" ), *[textbox_ui(str(x), panelNum=x+1) for x in range(navs.get())], ui.nav_panel("+"), id = "shiny_tabs" ) app = App(app_ui, server)
The reason for the reset is that the navset_tab is re-rendered each time a new nav_panel gets appended. So an approach would be better where we have the navset_tab outside of the server and then append a nav_panel on click without re-rendering everything. The difficulty is on the one hand that ui.insert_ui does not seem to be suitable for the insert and on the other hand that Shiny for Python currently does not carry functions for dynamic navs, see e.g. posit-dev/py-shiny#089. However, within the PR#90 is a draft for an insert function nav_insert which is suitable for this application. I adapted this below and re-wrote your app, we now only insert a new tab if the button is clicked, the rest stays stable. import sys from shiny import App, reactive, ui, Session, Inputs, Outputs, module, render from shiny._utils import run_coro_sync from shiny._namespaces import resolve_id from shiny.types import NavSetArg from shiny.session import require_active_session from typing import Optional, Union if sys.version_info >= (3, 8): from typing import Literal else: from typing_extensions import Literal # adapted from https://github.com/posit-dev/py-shiny/pull/90/files def nav_insert( id: str, nav: Union[NavSetArg, str], target: Optional[str] = None, position: Literal["after", "before"] = "after", select: bool = False, session: Optional[Session] = None, ) -> None: """ Insert a new nav item into a navigation container. Parameters ---------- id The ``id`` of the relevant navigation container (i.e., ``navset_*()`` object). nav The navigation item to insert (typically a :func:`~shiny.ui.nav` or :func:`~shiny.ui.nav_menu`). A :func:`~shiny.ui.nav_menu` isn't allowed when the ``target`` references an :func:`~shiny.ui.nav_menu` (or an item within it). A string is only allowed when the ``target`` references a :func:`~shiny.ui.nav_menu`. target The ``value`` of an existing :func:`shiny.ui.nav` item, next to which tab will be added. Can also be ``None``; see ``position``. position The position of the new nav item relative to the target nav item. If ``target=None``, then ``"before"`` means the new nav item should be inserted at the head of the navlist, and ``"after"`` is the end. select Whether the nav item should be selected upon insertion. session A :class:`~shiny.Session` instance. If not provided, it is inferred via :func:`~shiny.session.get_current_session`. 
""" session = require_active_session(session) li_tag, div_tag = nav.resolve( selected=None, context=dict(tabsetid="tsid", index="id") ) msg = { "inputId": resolve_id(id), "liTag": session._process_ui(li_tag), "divTag": session._process_ui(div_tag), "menuName": None, "target": target, "position": position, "select": select, } def callback() -> None: run_coro_sync(session._send_message({"shiny-insert-tab": msg})) session.on_flush(callback, once=True) @module.ui def textbox_ui(panelNum): return ui.nav_panel( f"Tab {panelNum}", ui.input_text_area(id=f"test_text_{panelNum}", label = "Enter some text"), ui.output_ui(f"display_text_{panelNum}"), value = f"Tab_{panelNum}" ) @module.server def textbox_server(input, output, session, panelNum): @output(id=f"display_text_{panelNum}") @render.text def return_text(): return input[f"test_text_{panelNum}"]() app_ui = ui.page_fluid( ui.tags.head( ui.tags.style( """ body { height: 100vh; overflow-y: auto !important; } """ ) ), ui.h2('Test App', align='center', fillable=True), ui.navset_tab( ui.nav_panel("Home", ui.card( ui.card_header("Overview"), ui.p("An example of the outputs not clearing") ), value = "Tab_0" ), ui.nav_panel("+"), id = "shiny_tabs" ), title = "Test App" ) def server(input: Inputs, output: Outputs, session: Session): navCounter = reactive.value(0) @reactive.effect @reactive.event(input.shiny_tabs) def add_tabs(): if input.shiny_tabs() == "+": navCounter.set(navCounter.get() + 1) id = str(navCounter.get()) idPrev = str(navCounter.get() - 1) nav_insert( "shiny_tabs", textbox_ui(id, id), target=f"Tab_{idPrev}", position="after", select=True ) textbox_server(id, id) app = App(app_ui, server)
2
1
78,826,115
2024-8-2
https://stackoverflow.com/questions/78826115/how-to-make-a-barplot-with-a-double-grouped-axis-using-pandas
I am working on a plot where I want to show two groups on one axis and a third group as fill-value. The problem is, that when I plot it, the y-axis shows values in tuples: data_dict = {'major_group': list(np.array([['A']*10, ['B']*10]).flat), 'minor_group': ['q ','r ','s ','t ']*5, 'legend_group':np.repeat(['d','e','f','g','h','i'],[7,3,1,5,1,3])} (pd.DataFrame(data= data_dict) .groupby(['major_group', 'minor_group','legend_group'], observed = True) .size() .unstack() .plot(kind='barh', stacked=True)) Result: However, I'm looking for something like this: How can this be achieved? Is there some major and minor axis label that can be set?
This code will create horizontal stacked bars, grouped hierarchically in the y-axis label: import matplotlib.pyplot as plt import pandas as pd import numpy as np from pandas import DataFrame def create_data() -> DataFrame: data_dict = { 'major_group': list(np.array([['A'] * 10, ['B'] * 10]).flat), 'minor_group': ['q', 'r', 's', 't'] * 5, 'legend_group': np.repeat(['d', 'e', 'f', 'g', 'h', 'i'], [7, 3, 1, 5, 1, 3]) } return pd.DataFrame(data=data_dict).groupby(['major_group', 'minor_group', 'legend_group'], observed=True).size().unstack() def plot_stacked_barh(df: DataFrame) -> None: fig, axes = plt.subplots(nrows=2, ncols=1, sharex=True) for i, major_group in enumerate(df.index.levels[0]): ax = axes[i] df.loc[major_group].plot(kind='barh', stacked=True, ax=ax, width=.8) if i == 0: handles, labels = ax.get_legend_handles_labels() ax.legend_.remove() ax.set_ylabel(major_group, weight='bold') ax.xaxis.grid(visible=True, which='major', color='black', linestyle='--', alpha=.4) ax.set_axisbelow(True) fig.legend(handles, labels, title='Legend Group') plt.tight_layout() fig.subplots_adjust(hspace=0) plt.show() def main() -> None: df = create_data() print(df) plot_stacked_barh(df) if __name__ == "__main__": main() For a similar vertical equivalent, look here.
2
1
78,824,983
2024-8-2
https://stackoverflow.com/questions/78824983/python-str-subclass-with-lazy-evaluation-of-its-value-for-argparse
I am building a command-line program that uses argparse. In the (assumed-to-be) rare case of a wrong call, argparse will show a description string supplied when creating the ArgumentParser. I want this description to show the version number of my program. I want to extract this from the pyproject.toml file via tomllib. Since this is an expensive operation (and even more so since I want to learn how to do it), I would like the description string to be evaluated lazily: only when it is actually to be printed. I have not yet found a way to do it even though I am willing to build a one-trick-pony object specialized for this particular value: collections.UserString could provide the lazy evaluation (via overriding __getattribute__ for the data attribute), but, alas, some code in argparse uses re.sub() on it, which appears to check isinstance(x, str), which a UserString does not fulfill. a subclass of str can override any operation done on a string -- but not perform lazy evaluation for a plain use of the entire string. (Is this true?) if ArgumentParser would use str(description) instead of description when it is about to print the description, one could supply an object that performs the lazy evaluation in its __str__ method. But, alas, ArgumentParser does not do this. Is there any approach that does the job?
In a roundabout way: don't set description, and only set it just before ArgumentParser would format the help. import argparse import time class MyArgParser(argparse.ArgumentParser): def format_help(self): if not self.description: print("Now thinking hard...") time.sleep(3) self.description = "An application!" return super().format_help() ap = MyArgParser() ap.add_argument("--foo", help="foo") args = ap.parse_args() print(args) This program is fast when run without arguments, or with --foo=bar, and takes its sweet time when you run with --help.
2
1
78,824,984
2024-8-2
https://stackoverflow.com/questions/78824984/cannot-use-selector-in-polars-dataframe-unpivot
I cannot use the pl.exclude() selector as index in pl.DataFrame.unpivot despite the documentation of this method type hinting the index parameter as follows. index: 'ColumnNameOrSelector | Sequence[ColumnNameOrSelector] | None' = None The follow code can be used to reproduce the error. import polars as pl df = pl.DataFrame({ 'foo' : ['one', 'two', 'three'], 'bar' : ['four', 'five', 'six'], 'baz' : [10, 20, 30], 'qux' : [50, 60, 70] }) column_list = ['baz', 'qux'] df.unpivot(column_list, index=pl.exclude(column_list)) TypeError: argument 'index': 'Expr' object cannot be converted to 'PyString' Moreover, using the columns explicitly works fine. df.unpivot(column_list, index=['foo', 'bar']) So, how can column selectors be used in pl.DataFrame.unpivot?
pl.exclude is a "regular" polars expression (of type pl.Expr). Selectors are special objects (also expressions, but of a specific subtype cs._selector_proxy_) that can be found in pl.selectors. import polars.selectors as cs column_list = ['baz', 'qux'] df.unpivot(column_list, index=cs.exclude(column_list)) shape: (6, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ foo ┆ bar ┆ variable ┆ value β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════β•ͺ══════════β•ͺ═══════║ β”‚ one ┆ four ┆ baz ┆ 10 β”‚ β”‚ two ┆ five ┆ baz ┆ 20 β”‚ β”‚ three ┆ six ┆ baz ┆ 30 β”‚ β”‚ one ┆ four ┆ qux ┆ 50 β”‚ β”‚ two ┆ five ┆ qux ┆ 60 β”‚ β”‚ three ┆ six ┆ qux ┆ 70 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
4
5
78,824,444
2024-8-2
https://stackoverflow.com/questions/78824444/django-url-reset-after-authentication-fail
if user is not None: login(request, user) return redirect('home') else: error_message = "Invalid username or password." return render(request, 'login.html', {'error_message': error_message}) Using the above code, if user enters wrong credentials, login.html shows up successfully with error message. But now the URL is http://localhost:8000/authentication_view/. Now if the user re-enters credentials, the URL will be http://localhost:8000/authentication_view/authentication_view/, and it creates Page Not Found error. If I use return redirect('login_view'), then the error message will not show. Could someone help me how to show login page with error message without adding Authentication View to the URL? I was expecting: if user provides wrong credentials, he gets error message and goes back to login page. Now he can re-enter credentials. But now login page is not usable because authentication view name is already added to the URL. New login attempt will add authentication view name twice the URL.
The issue is with the action attribute of the form tag in your HTML page. You must have something like <form method="post" action="authentication_view/"> To solve the issue, the best way is to remove the action from the form tag; by default an HTML form submits to the same page. Do something like this: <form method="post"> If you are loading the page from one URL and submitting the form to another, specify the complete URL. In your case it should be: <form method="post" action="/authentication_view">
2
1
78,824,119
2024-8-2
https://stackoverflow.com/questions/78824119/pytest-doesnt-found-my-settings-directory
I tried to start pytest but the settings file cannot be found by pytest. I'm in a virtualenv with Python 3.11.9 and pytest 8.3.2. ImportError: No module named 'drf.settings' pytest-django could not find a Django project (no manage.py file could be found). You must explicitly add your Django project to the Python path to have it picked up. Here is the structure of my project β”œβ”€β”€ README.md β”œβ”€β”€ drf β”‚ β”œβ”€β”€ drf β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ production_settings.py β”‚ β”‚ β”œβ”€β”€ settings.py β”‚ β”‚ β”œβ”€β”€ urls.py β”‚ β”‚ └── wsgi.py β”‚ β”œβ”€β”€ manage.py β”‚ └── tests β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ └── test_auth.cpython-311-pytest-8.3.2.pyc β”‚ β”œβ”€β”€ factory_boy β”‚ β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ β”‚ └── factory_models.cpython-311.pyc β”‚ β”‚ └── factory_models.py β”‚ └── test_auth.py β”œβ”€β”€ drf-uwsgi.ini β”œβ”€β”€ pytest.ini β”œβ”€β”€ requirements.in β”œβ”€β”€ requirements.txt and here is the content of pytest.ini [pytest] DJANGO_SETTINGS_MODULE = drf.settings python_files = test_*.py django_debug_mode = true pythonpath = .venv/bin/python What I have tried so far: adding __init__.py to the tests directory (seems not to be recommended and didn't work), deactivating and reactivating the virtualenv, changing drf.settings.py for drf.drf.settings (but nothing changed), and running pytest as a module with python -m pytest tests. Edit: at the moment I can run pytest without any problem, but I use a conftest for that and I'm searching for a solution based on pytest.ini. If you have any suggestions ;)
You need to add pythonpath within the pytest.ini as follows: [pytest] pythonpath = drf DJANGO_SETTINGS_MODULE = drf.settings python_files = test_*.py Or you can change the layout of the tests and move the pytest.ini from the project's root to the tests folder like this SO post. Note: use drf.settings instead of drf.settings.py
2
1
78,824,644
2024-8-2
https://stackoverflow.com/questions/78824644/what-is-the-best-way-to-return-the-group-that-has-the-largest-streak-of-negative
My DataFrame is: import pandas as pd df = pd.DataFrame( { 'a': [-3, -1, -2, -5, 10, -3, -13, -3, -2, 1, 2, -100], } ) Expected output: a 0 -3 1 -1 2 -2 3 -5 Logic: I want to return the largest streak of negative numbers. And if there are more than one streak that are the largest, I want to return the first streak. In df there are two negative streaks with size of 4, so the first one is returned. This is my attempt but whenever I use idxmax() in my code, I want to double check because it gets tricky sometimes in some scenarios. import numpy as np df['sign'] = np.sign(df.a) df['sign_streak'] = df.sign.ne(df.sign.shift(1)).cumsum() m = df.sign.eq(-1) group_sizes = df.groupby('sign_streak').size() largest_group = group_sizes.idxmax() largest_group_df = df[df['sign_streak'] == largest_group]
Your code is fine, you could simplify it a bit, avoiding the intermediate columns: # get sign s = np.sign(df['a']) # form groups of successive identical sign g = s.ne(s.shift()).cumsum() # keep only negative, get size per group and first group with max size out = df[g.eq(df[s.eq(-1)].groupby(g).size().idxmax())] Or, since you don't really care about the 0/+ difference: # negative numbers m = df['a'].lt(0) # form groups g = m.ne(m.shift()).cumsum() out = df[g.eq(df[m].groupby(g).size().idxmax())] Note: idxmax is always fine if you want the first match. Output: a 0 -3 1 -1 2 -2 3 -5
3
1
78,823,597
2024-8-2
https://stackoverflow.com/questions/78823597/space-complexity-of-dijkstra
I was working on some Leetcode questions for Dijkstra's algorithm and I do not quite understand the space complexity of it. I looked online but I found various answers and some were rather complicated so I wanted to know if I understood it correctly. # initialize maxheap maxHeap = [(-1, start_node)] heapq.heapify(maxHeap) # dijkstras algorithm visit = set() res = 0 while maxHeap: prob1,cur = heapq.heappop(maxHeap) visit.add(cur) # update result if cur == end_node: res = max(res, -1 * prob1) # add the neighbors to the priority queue for nei,prob2 in adj_list[cur]: if nei not in visit: heapq.heappush(maxHeap, (prob1 * prob2, nei)) Since I use a visit set and a priority queue to keep track of the nodes would the space complexity just simply be O(V) where V is the number of vertices in the graph? And if I had to generate an adjacency list in Python using a dict would that have a space complexity of O(E) where E is the number of edges?
Since I use a visit set and a priority queue to keep track of the nodes would the space complexity just simply be O(V) where V is the number of vertices in the graph? For what concerns the set: yes. In the worst case, the target is the last node that could be visited, and then set will in the end have an entry for every node that is reachable from the start node, so O(𝑉') where 𝑉' is the number of nodes in the connected component where you start the search. That is O(𝑉) when the graph is connected. For what concerns the priority queue: this is not guaranteed. As you only mark nodes as visited when you pull them from the queue, you can potentially have multiple occurrences of the same node in the queue at a given moment. The limit on the number of heappush calls is given by the number of edges. So we have a worst case space complexity of O(𝐸) for the priority queue. And if I had to generate an adjacency list in Python using a dict would that have a space complexity of O(E) where E is the number of edges? That depends on whether you create dict-keys for nodes that have no outgoing edges (with empty lists as values). If so, then it is Θ(𝑉+𝐸). If you omit keys that would represent such nodes (without outgoing edges), then it is indeed Θ(𝐸), but then your program needs to have a check whether adj_list[cur] exists or not. Other remarks You don't need to call heapify when your list has only one element. A list with one element is already a valid heap. When you pop a node from the queue, you should verify whether it was already visited. This could happen when a node was pushed on the queue multiple times (as it was encountered via different paths), before the first occurrence was popped from it. In that case you'll mark it as visited as the first occurrence is popped from the queue, but then it might eventually get popped a second time: and then you should not process it. When you find the target node, and have res, you should exit the loop Instead of assigning to res, it would be good practice to wrap this code in a function and return that value. So: def dijkstra(adj, start_node, end_node): # initialize maxheap maxHeap = [(-1, start_node)] # dijkstra's algorithm visit = set() while maxHeap: prob1, cur = heapq.heappop(maxHeap) if cur in visit: continue visit.add(cur) # update result if cur == end_node: return max(res, -1 * prob1) # add the neighbors to the priority queue for nei,prob2 in adj_list[cur]: if nei not in visit: heapq.heappush(maxHeap, (prob1 * prob2, nei)) return 0 # end_node not reachable
3
0
78,822,168
2024-8-1
https://stackoverflow.com/questions/78822168/use-polars-when-then-otherwise-on-multiple-output-columns-at-once
Assume I have this dataframe import polars as pl df = pl.DataFrame({ 'item': ['CASH', 'CHECK', 'DEBT', 'CHECK', 'CREDIT', 'CASH'], 'quantity': [100, -20, 0, 10, 0, 0], 'value': [99, 47, None, 90, None, 120], 'value_other': [97, 57, None, 91, None, 110], 'value_other2': [94, 37, None, 93, None, 115], }) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ item ┆ quantity ┆ value ┆ value_other ┆ value_other2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═══════β•ͺ═════════════β•ͺ══════════════║ β”‚ CASH ┆ 100 ┆ 99 ┆ 97 ┆ 94 β”‚ β”‚ CHECK ┆ -20 ┆ 47 ┆ 57 ┆ 37 β”‚ β”‚ DEBT ┆ 0 ┆ null ┆ null ┆ null β”‚ β”‚ CHECK ┆ 10 ┆ 90 ┆ 91 ┆ 93 β”‚ β”‚ CREDIT ┆ 0 ┆ null ┆ null ┆ null β”‚ β”‚ CASH ┆ 0 ┆ 120 ┆ 110 ┆ 115 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Now I want to set all value columns to 0 for all rows where value is null and quantity == 0. Right now I have this solution cols = ['value', 'value_other', 'value_other2'] df = df.with_columns([ pl.when(pl.col('value').is_null() & (pl.col('quantity') == 0)) .then(0) .otherwise(pl.col(col)) .alias(col) for col in cols ]) which correctly gives β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ item ┆ quantity ┆ value ┆ value_other ┆ value_other2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═══════β•ͺ═════════════β•ͺ══════════════║ β”‚ CASH ┆ 100 ┆ 99 ┆ 97 ┆ 94 β”‚ β”‚ CHECK ┆ -20 ┆ 47 ┆ 57 ┆ 37 β”‚ β”‚ DEBT ┆ 0 ┆ 0 ┆ 0 ┆ 0 β”‚ β”‚ CHECK ┆ 10 ┆ 90 ┆ 91 ┆ 93 β”‚ β”‚ CREDIT ┆ 0 ┆ 0 ┆ 0 ┆ 0 β”‚ β”‚ CASH ┆ 0 ┆ 120 ┆ 110 ┆ 115 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ However, I feel this is very inefficient as my when condition is executed for every value column. Is there a way to achieve this using only polar internal functions & without the native for-loop?
You can pass list of column names into pl.col() and when\then\otherwise accepts Expr which can contain multiple columns. cols = ['value', 'value_other', 'value_other2'] df.with_columns( pl .when((pl.col.quantity != 0) | pl.col.value.is_not_null()) .then(pl.col(cols)) .otherwise(0) ) # or df.with_columns( pl .when(pl.col.quantity != 0).then(pl.col(cols)) .when(pl.col.value.is_not_null()).then(pl.col(cols)) .otherwise(0) ) shape: (6, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ item ┆ quantity ┆ value ┆ value_other ┆ value_other2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═══════β•ͺ═════════════β•ͺ══════════════║ β”‚ CASH ┆ 100 ┆ 99 ┆ 97 ┆ 94 β”‚ β”‚ CHECK ┆ -20 ┆ 47 ┆ 57 ┆ 37 β”‚ β”‚ DEBT ┆ 0 ┆ 0 ┆ 0 ┆ 0 β”‚ β”‚ CHECK ┆ 10 ┆ 90 ┆ 91 ┆ 93 β”‚ β”‚ CREDIT ┆ 0 ┆ 0 ┆ 0 ┆ 0 β”‚ β”‚ CASH ┆ 0 ┆ 120 ┆ 110 ┆ 115 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
9
6
78,823,052
2024-8-1
https://stackoverflow.com/questions/78823052/what-does-python3-t-do
$ python3 -t -c 'print("hello world")' hello world What does -t do? It's not mentioned in python3 --help. Usually unknown options cause a non-zero exit code, like $ python3 -r Unknown option: -r usage: python3 [option] ... [-c cmd | -m mod | file | -] [arg] ... Try `python -h' for more information.
The -t option is a leftover from Python 2.x, where it warned about inconsistent use of tabs and spaces in the source code. That check was removed in Python 3.x, but the flag itself is still accepted. Therefore, when you run python3 -t -c 'print("hello world")', Python 3 just ignores the -t option and executes the code as usual.
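A small way to observe this behaviour from Python itself (a sketch using subprocess; the exact exit code for an unknown option may vary, so it is only checked for being non-zero):

import subprocess
import sys

# -t is parsed but ignored by Python 3, so this succeeds.
ignored = subprocess.run([sys.executable, "-t", "-c", "print('hello world')"])
print(ignored.returncode)  # 0

# A genuinely unknown option such as -r is rejected.
unknown = subprocess.run([sys.executable, "-r", "-c", "print('hello world')"])
print(unknown.returncode != 0)  # True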
5
3
78,821,557
2024-8-1
https://stackoverflow.com/questions/78821557/assigning-an-attribute-to-a-staticmethod-in-python
I have a scenario where I have objects with static methods. They are all built using an outside def build_hello() as class variables. def build_hello(name: str): @staticmethod def hello_fn(): return "hello my name is " # Assign an attribute to the staticmethod so it can be used across all classes hello_fn.first_name = name print(hello_fn() + hello_fn.first_name) # This works return hello_fn class World: hello_fn = build_hello("bob") # Error, function object has no attribute "first_name" World.hello_fn.first_name What is happening here? I am able to access the attribute of hello_fn() within the build_hello() function call. but when its added to my object, that attribute no longer lists. Also if I call dir() on the static method. I do not see it present: dir(World.hello_fn) ['__annotations__', '__builtins__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__getstate__', '__globals__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__type_params__']
Method retrieving from a class, in Python, be it regular instance methods, class methods or static methods, use an underlying mechanism to actually build the method object at the time it is requested (usually with the . operator, but also with a getattr(...) call): The objects that are bound to class namespaces have a __get__ method - it is this __get__ which builds the object that is retrieved and that will be called as the method - so, if it is a classmethod, the cls is inserted as first parameter, or the self argument for regular methods, and nothing inserted for staticmethods. The thing is that the __get__ method for a regular function will make it behave like a common instance methods. The @classmethod and @staticmethod decorators have a different behavior for the __get__ method which produce the desired final effects when calling the method. So, when you create a staticmethod wrapping your function, it is not this object that is retrieved when you do MyClass.mymethod - rather, it is whatever the __get__ method of @staticmethod returns - in this case, it returns the underlying function as is. TL;DR: put your attributes in the underlying function, before wrapping it with the staticmethod call, as that is what is returned by World.hello_fn: def build_hello(name: str): def hello_fn(): return "hello my name is " # Assign an attribute to the staticmethod so it can be used across all classes hello_fn.first_name = name print(hello_fn() + hello_fn.first_name) # This works return staticmethod(hello_fn) # Just wrap the function with the staticmethod decorator here! class World: hello_fn = build_hello("bob") World.hello_fn.first_name Alternatively, you can reach the function after applying the decorator through the attribute __func__ of the staticmethod object: def build_hello(name: str): @staticmethod def hello_fn(): return "hello my name is " hello_fn.__func__.first_name = name return hello_fn
5
4
78,820,057
2024-8-1
https://stackoverflow.com/questions/78820057/how-can-i-find-the-maximum-value-of-a-dynamic-window-and-the-minimum-value-below
This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [3, 1, 2, 5, 10, 3, 13, 3, 2], } ) Expected output is creating a a_max and a_min: a a_max a_min 0 3 NaN NaN 1 1 3 1 2 2 3 1 3 5 3 1 4 10 3 1 5 3 10 3 6 13 10 3 7 3 13 3 8 2 13 2 Logic: I explain the logic row by row. There is a dynamic window for this df that for the first instance of the window only the first row is considered. For the second instance of the window the first two rows are considered. Same as below: These are the first four windows. It expands accordingly. For each window I need to find the maximum value and after that I need to find the minimum value BELOW that maximum value. I start explaining it from the yellow window. For this window the max value is 3 and the min value BELOW it is 1. So that is why a_max and a_min for this window is 3 and 1. Now for the orange window the maximum value is 5 but since there are no values in this window BELOW this value that is less than 5, the previous a_max and a_min are repeated. And the logic continues for the rest of rows. This is my attempt: df['a_max'] = df.a.cummax() df['a_min'] = df.a.cummin()
This is a tricky one, I would use a cummax+shift, then mask+ffill to compute a_max. Then a_min is the groupby.cummin per group of identical a_max: # compute the shifted cummax cm = df['a'].cummax().shift() # a_max is the cummax except if the current row is larger df['a_max'] = cm.mask(df['a'].gt(cm)).ffill() # a_min is the cummin of the current group of a_max df['a_min'] = df.groupby('a_max')['a'].cummin() Output: a a_max a_min 0 3 NaN NaN 1 1 3.0 1.0 2 2 3.0 1.0 3 5 3.0 1.0 4 10 3.0 1.0 5 3 10.0 3.0 6 13 10.0 3.0 7 3 13.0 3.0 8 2 13.0 2.0 Intermediates: a a_max cummax shift mask ffill a_min 0 3 NaN 3 NaN NaN NaN NaN 1 1 3.0 3 3.0 3.0 3.0 1.0 2 2 3.0 3 3.0 3.0 3.0 1.0 3 5 3.0 5 3.0 NaN 3.0 1.0 4 10 3.0 10 5.0 NaN 3.0 1.0 5 3 10.0 10 10.0 10.0 10.0 3.0 6 13 10.0 13 10.0 NaN 10.0 3.0 7 3 13.0 13 13.0 13.0 13.0 3.0 8 2 13.0 13 13.0 13.0 13.0 2.0
2
2
78,816,181
2024-7-31
https://stackoverflow.com/questions/78816181/how-can-i-link-the-records-in-the-training-dataset-to-the-corresponding-model-pr
Using scikit-learn, I've set up a regression model to predict customers' maximum spend per transaction. The dataset I'm using looks a bit like this; the target column is maximum spend per transaction during the previous year: customer_number | metric_1 | metric_2 | target ----------------|----------|----------|------- 111 | A | X | 15 222 | A | Y | 20 333 | B | Y | 30 I split the dataset into training & testing sets, one-hot encode the features, train the model, and make some test predictions: target = pd.DataFrame(dataset, columns = ["target"]) features = dataset.drop("target", axis = 1) train_features, test_features, train_target, test_target = train_test_split(features, target, test_size = 0.25) train_features = pd.get_dummies(train_features) test_features = pd.get_dummies(test_features) model = RandomForestRegressor() model.fit(X = train_features, y = train_target) test_prediction = model.predict(X = test_features) I can output various measures of the model's accuracy (mean average error, mean squared error etc) using the relevant functions in scikit-learn. However, I'd like to be able to tell which customers' predictions are the most inaccurate. So I want to be able to create a dataframe which looks like this: customer_number | target | prediction | error ----------------|--------|----------- |------ 111 | 15 | 17 | 2 222 | 20 | 19 | 1 333 | 30 | 50 | 20 I can use this to investigate if there is any correlation between the features and the model making inaccurate predictions. In this example, I can see that customer 333 has the biggest error by far, so I could potentially infer that customers with metric_1 = B end up with less accurate predictions. I think I can calculate errors like this (please correct me if I'm wrong on this), but I don't know how to tie them back to customer number. error = abs(test_target - test_prediction) How can I get the desired result?
The error you are computing is the absolute error. When averaged it gives the Mean Absolute Error which is commonly used to evaluate regression models. You can read about the choice of an error metric here. This error vector is the length of your test dataset and its elements are in the same order as your records. Many people assign them back into the dataframe. Then, if you leave customer number in there, everything should line up. Starting with the DataFrame df and using idiomatic names for things: df_train, df_test = train_test_split(df) y_train, y_test = df_train["target"], df_test["target"] X_train = df_train.drop(["customer_number", "target"], axis=1) X_test = df_test.drop(["customer_number", "target"], axis=1) X_train = pd.get_dummies(X_train) X_test = pd.get_dummies(X_test) model = RandomForestRegressor() model.fit(X_train, y_train) df_test["prediction"] = model.predict(X_test) df_test["error"] = abs(df_test["target"] - df_test["prediction"])
3
2
78,818,244
2024-7-31
https://stackoverflow.com/questions/78818244/get-a-single-row-in-a-tuple-indexed-dataframe
I have a pandas DataFrame: >>> f = pd.DataFrame.from_dict({"r0":{"c0":1,"c1":2},("r",1):{"c0":3,"c1":4}},orient="index") c0 c1 r0 1 2 (r, 1) 3 4 I can get the 1st row: >>> list(f.loc["r0"].items()) [('c0', 1), ('c1', 2)] but not the second row because f.loc[("r",1)] raises KeyError. I suppose I can do >>> list(f.loc[[("r",1)]].iloc[0].items()) [('c0', 3), ('c1', 4)] but this is unspeakably ugly. What is the right way? PS. No, I do not want to use MultiIndex here.
Try using cross-section to get values from multiple indices. list(f.xs(("r", 1)).items())
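Applied to the frame from the question, the cross-section should give the second row directly (a small sketch; the printed values are whatever integer type pandas stored, shown here schematically):

import pandas as pd

f = pd.DataFrame.from_dict(
    {"r0": {"c0": 1, "c1": 2}, ("r", 1): {"c0": 3, "c1": 4}},
    orient="index",
)

# Cross-section by the tuple label, no list-wrapping or iloc needed.
row = f.xs(("r", 1))
print(list(row.items()))  # the pairs ('c0', 3) and ('c1', 4)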
2
5
78,816,780
2024-7-31
https://stackoverflow.com/questions/78816780/how-do-i-combine-mimetypefilters-and-namefilters-for-a-qfiledialog-using-pyqt6-o
Using PyQt6, I am investigating using QFileDialog directly without the use of one of the static functions (i.e. don't use QFileDialog.getOpenFileName). The issue that I am running into is creating a filter list that uses a combination of MIME types and named types. For example, say you want to set a filter for *.css and *.qss files. At there core, they are essentially the same file type, however MIME doesn't recognize *.qss. I really like the idea of using MIME types because it ensures that the many extension options of a file type are included (i.e. [*.jpg, *.jpeg, *.jpe] or [*.md, *.mkd, *.markdown]), but I also need to work with the files that are not included in MIME. The code snippet that I am working with is as follows: file_dialog = QFileDialog() file_dialog.setFileMode(QFileDialog.FileMode.ExistingFiles) file_dialog.setMimeTypeFilters(["text/css", "application/octet-stream"]) file_dialog.setNameFilter("Qt Style Sheet (*.qss)") if file_dialog.exec(): print(file_dialog.selectedFiles()) When this code executes, the .setNameFilter function completely overwrites the filters set with the .setMimeTypeFilters function. If I reverse the order of the filter setting, the same thing happens just in reverse. I have also tried adding the name filter to the MIME type list, but the name filter is just ignored. file_dialog.setMimeTypeFilters(["text/css", "Qt Style Sheet (*.qss)", "application/octet-stream"]) Anyone know of a way to have both filters without having to explicitly set all options and nix using .setMimeTypeFilters?
Combining mime-filters and name-filters is quite easy to achieve using QMimeDatabase. Doing things this way will allow you to merge glob patterns (e.g. *.qss with the css defaults), as well as getting full control over the final ordering of the filters. This won't normally be possible when using the QFileDialog methods. Below is a simple function that demonstrates the idea. The function takes a dict with mime-type/filter-name keys and glob-list values. If the function detects a mime-type, the glob-list will be merged with the built-in defaults. The return value is a list that can be passed to setNameFilters. The code logic is essentially the same as what QFileDialog itself uses, but tweaked slightly to allow greater flexibility: def combine_filters(filters): result = [] md = QMimeDatabase() for name, patterns in filters.items(): if (mt := md.mimeTypeForName(name)).isValid(): if mt.isDefault(): result.append('All Files (*)') continue name = mt.comment() patterns = sorted(mt.globPatterns() + list(patterns or ())) result.append(f'{name} ({" ".join(patterns)})') return result Example with merge: filters = combine_filters({ 'text/css': ['*.qss'], 'application/octet-stream': None, }) print('\n'.join(filters)) # CSS stylesheet (*.css *.qss) # All Files (*) Example without merge: filters = combine_filters({ 'text/css': None, 'Qt Style Sheet': ['*.qss'], 'application/octet-stream': None, }) print('\n'.join(filters)) # CSS stylesheet (*.css) # Qt Style Sheet (*.qss) # All Files (*)
2
3
78,817,557
2024-7-31
https://stackoverflow.com/questions/78817557/is-it-possible-to-solve-leetcode-1653-using-recursion
I am trying to solve LeetCode problem 1653. Minimum Deletions to Make String Balanced: You are given a string s consisting only of characters 'a' and 'b'​​​​. You can delete any number of characters in s to make s balanced. s is balanced if there is no pair of indices (i,j) such that i < j and s[i] = 'b' and s[j]= 'a'. Return the minimum number of deletions needed to make s balanced. Constraints: 1 <= s.length <= 105 s[i] is 'a' or 'b'​​. Example 1: Input: s = "aababbab" Output: 2 Explanation: You can either: Delete the characters at 0-indexed positions 2 and 6 ("aababbab" -> "aaabbb"), or Delete the characters at 0-indexed positions 3 and 6 ("aababbab" -> "aabbbb") I get it that the optimal solution is to use DP or some iterative approach, but I'm wondering if it's specifically possible via recursion. I initially did this: class Solution: def minimumDeletions(self, s: str) -> int: @lru_cache(None) def dfs(index, last_char): if index == len(s): return 0 if s[index] >= last_char: keep = dfs(index + 1, s[index]) delete = 1 + dfs(index + 1, last_char) return min(keep, delete) else: return 1 + dfs(index + 1, last_char) return dfs(0, 'a') But it was not pruning paths that are already exceeding a previously found minimum. Fair enough, so I tried this next: class Solution: def minimumDeletions(self, s: str) -> int: self.min_deletions = float('inf') memo = {} def dfs(index, last_char, current_deletions): if current_deletions >= self.min_deletions: return float('inf') if index == len(s): self.min_deletions = min(self.min_deletions, current_deletions) return 0 if (index, last_char) in memo: return memo[(index, last_char)] if s[index] >= last_char: keep = dfs(index + 1, s[index], current_deletions) delete = 1 + dfs(index + 1, last_char, current_deletions + 1) result = min(keep, delete) else: result = 1 + dfs(index + 1, last_char, current_deletions + 1) memo[(index, last_char)] = result return result return dfs(0, 'a', 0) It seemingly passes the test cases when I try to run it at 300ms, but when I try to submit the solution, I get a memory limit exceeded error. How can this be solved via recursion within the time limit?
It is the memory needed for your memo data structure that is the major contribution to the error you get. You could avoid keying by tuple, and pre-allocate your memo as two lists, like so: memo = { "a": [None] * len(s), "b": [None] * len(s) } ...and adapt your code to align with this structure. So: if memo[last_char][index] is not None: return memo[last_char][index] # ... # ... return memo[last_char][index] Then it will pass the tests. Unrelated to the memory consumption, but you are looking into too many possibilities. In the case s[index] == last_char there is no need to check the two variations (delete or not delete), as it is just fine to not delete. There is no need to consider the deletion case. So you could do: if s[index] > last_char: # Altered condition keep = dfs(index + 1, s[index], current_deletions) delete = 1 + dfs(index + 1, last_char, current_deletions + 1) result = min(keep, delete) elif s[index] == last_char: # No need to delete result = dfs(index + 1, last_char, current_deletions) else: result = 1 + dfs(index + 1, last_char, current_deletions + 1)
2
1
78,817,297
2024-7-31
https://stackoverflow.com/questions/78817297/how-to-control-recursion-depth-in-pydantic-s-model-dump-serialization
I have the following classes: class Info: data: str class Data: info: Info When I call model_dump on the Data class, pydantic will serialize the class recursively as described here This is the primary way of converting a model to a dictionary. Sub-models will be recursively converted to dictionaries. Is there any way to stop the recursive part or specify how deep we want the serialisation to go? My desired output would be something like the following: { "info": Info } instead of { "info": { "data":"some data" } } I searched the documentation for how to change this behaviour but didn't find anything.
Converting to a dictionary solves this: >>> data = Data(info=Info(data='test')) >>> dict(data) {'info': Info(data='test')}
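A self-contained version of that idea, assuming the classes are pydantic BaseModel subclasses (as the accepted answer implies): dict() only converts the top level, while model_dump() recurses into sub-models.

from pydantic import BaseModel


class Info(BaseModel):
    data: str


class Data(BaseModel):
    info: Info


data = Data(info=Info(data="test"))
print(dict(data))         # {'info': Info(data='test')} - sub-model kept as an object
print(data.model_dump())  # {'info': {'data': 'test'}}  - fully recursive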
3
3
78,816,988
2024-7-31
https://stackoverflow.com/questions/78816988/how-to-delete-row-with-max-min-values
I have a dataframe: one N th 0 A 5 1 1 Z 17 0 2 A 16 0 3 B 9 1 4 B 17 0 5 B 117 1 6 XC 35 1 7 C 85 0 8 Ce 965 1 I'm looking for a way to keep an alternating 0101 pattern in the third column (th) without doubling a 0 or a 1. So, I want to delete the row with the minimum value if there are two repeating 0s in column th, and the row with the maximum value if there are repeating 1s. My base consists of 1000000 rows. I expect to get a dataframe like this: one N th 0 A 5 1 1 Z 17 0 3 B 9 1 4 B 17 0 6 XC 35 1 7 C 85 0 8 Ce 965 1 What is the fastest (vectorized) way to do it? My attempts so far have produced no result.
using a custom groupby.idxmax You can swap the sign if "th" is 1 (to get the max instead of min), then set up a custom grouper (with diff or shift + cumsum) and perform a groupby.idxmax to select the rows to keep: out = df.loc[df['N'].mul(df['th'].map({0: 1, 1: -1})) .groupby(df['th'].ne(df['th'].shift()).cumsum()) .idxmax()] Variant with a different method to swap the sign and to compute the group: out = df.loc[df['N'].mask(df['th'].eq(1), -df['N']) .groupby(df['th'].diff().ne(0).cumsum()) .idxmax()] Output: one N th 0 A 5 1 1 Z 17 0 3 B 9 1 4 B 17 0 6 XC 35 1 7 C 85 0 8 Ce 965 1 Intermediates: one N th swap group max 0 A 5 1 -5 1 X 1 Z 17 0 17 2 X 2 A 16 0 16 2 3 B 9 1 -9 3 X 4 B 17 0 17 4 X 5 B 117 1 -117 5 6 XC 35 1 -35 5 X 7 C 85 0 85 6 X 8 Ce 965 1 -965 7 X using boolean masks The above code works for an arbitrary number of consecutive 0s or 1s. If you know that you only have up to 2 successive ones, you could also use boolean indexing, which should be significantly faster: # has the value higher precedence than the next? D = df['N'].mask(df['th'].eq(1), -df['N']).diff() # is the th different from the previous? G = df['th'].ne(df['th'].shift(fill_value=-1)) # rule for the bottom row m1 = D.gt(0) | G # rule for the top row # same rule as above but shifted up # D is inverted # comparison is not strict in case of equality m2 = ( D.le(0).shift(-1, fill_value=True) | G.shift(-1, fill_value=True) ) # keep rows of interest out = df.loc[m1&m2] Output: one N th 0 A 5 1 1 Z 17 0 3 B 9 1 4 B 17 0 6 XC 35 1 7 C 85 0 8 Ce 965 1 Intermediates: one N th D G m1 m2 m1&m2 0 A 5 1 NaN True True True True 1 Z 17 0 22.0 True True True True 2 A 16 0 -1.0 False False True False 3 B 9 1 -25.0 True True True True 4 B 17 0 26.0 True True True True 5 B 117 1 -134.0 True True False False 6 XC 35 1 82.0 False True True True 7 C 85 0 120.0 True True True True 8 Ce 965 1 -1050.0 True True True True More complex example with equal values: one N th D G m1 m2 m1&m2 0 A 5 1 NaN True True True True 1 Z 17 0 22.0 True True True True 2 A 16 0 -1.0 False False True False 3 B 9 1 -25.0 True True True True 4 B 17 0 26.0 True True True True 5 B 117 1 -134.0 True True False False 6 XC 35 1 82.0 False True True True 7 C 85 0 120.0 True True True True 8 Ce 965 1 -1050.0 True True True True 9 u 123 0 1088.0 True True True True # because of D.le(0) 10 v 123 0 0.0 False False True False # because or D.gt(0) NB. in case of equality, it is possible to select the first/second row or both or none, depending on the operator used (D.le(0), D.lt(0), D.gt(0), D.ge(0)). timings Although limited to maximum 2 consecutive "th", the boolean mask approach is ~4-5x faster. Timed on 1M rows: # groupby + idxmax 96.4 ms Β± 6.64 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) # boolean masks 22.2 ms Β± 1.48 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each)
3
3
78,815,902
2024-7-31
https://stackoverflow.com/questions/78815902/reading-writing-polars-data-frame-with-list-column-from-to-database
Writing a df with a list column like so df = pl.DataFrame({'a': [1,2,3], 'b':[['A','B'], ['C', 'D'], ['E', 'F']]}) df.write_database( "test", "sqlite:///test.db", if_table_exists = "replace", ) works fine, but then running pl.read_database_uri(query="SELECT * FROM test", uri="sqlite://test.db") gives the error RuntimeError: Invalid column type Blob at index: 1, name: b I cannot seem to get around this by using engine_options in write_database to specify that I want a list field (not a blob), nor by using schema_overrides in the read_database_uri. What is the correct way to write/read this sort of data frame with list column(s)?
I believe that in your case you would need to save the list of strings as a BLOB in SQLite and later somehow decode the bytes back to a list. However, there are two problems: polars' standard protocol connectorx can't read BLOB columns, thus you need to switch to adbc Even after switching to adbc it is not clear how to convert the BLOB back to bytes. Thus I would propose to join the list of strings with a separator, store the result as VARCHAR and convert back later: import polars as pl SEP = "\t" df = pl.DataFrame({'a': [1,2,3], 'b':[['A','B'], ['C', 'D'], ['E', 'F']]}) # mapping: List[str] -> str df = df.with_columns(b = pl.col('b').list.join(SEP)) df.write_database( "test", "sqlite:///test.db", if_table_exists = "replace", ) ff = pl.read_database_uri(query="SELECT a,b FROM test", uri="sqlite://test.db", ) # mapping: str -> List[str] ff = ff.with_columns(pl.col('b').str.split(SEP)) Alternatively: serialize the column to JSON, again save it in the database as VARCHAR, and deserialise it back later import json import polars as pl df = pl.DataFrame({'a': [1,2,3], 'b':[['A','B'], ['C', 'D'], ['E', 'F']]}) # dump to json f = lambda x: json.dumps(list(x)) df = df.with_columns(b = pl.col('b').apply(f)) df.write_database( "test", "sqlite:///test.db", if_table_exists = "replace", ) ff = pl.read_database_uri(query="SELECT a,b FROM test", uri="sqlite://test.db", ) ff = ff.with_columns(pl.col('b').str.json_decode()) Alternative 2: Prepare the column by converting it to a binary object before saving it in the SQL database. The column will be saved as a BLOB. import pickle import polars as pl df = pl.DataFrame({'a': [1,2,3], 'b':[['A','B'], ['C', 'D'], ['E', 'F']]}) # dump column to binary object df = df.with_columns(b = pl.col('b').apply(pickle.dumps)) df.write_database( "test", "sqlite:///test.db", if_table_exists = "replace", ) ff = pl.read_database_uri(query="SELECT a,b FROM test", uri="sqlite://test.db", engine='adbc', # need to use adbc engine instead of standard ) ff = ff.with_columns(pl.col('b').apply(pickle.loads))
3
2
78,816,652
2024-7-31
https://stackoverflow.com/questions/78816652/multiply-polars-columns-of-number-type-with-object-type-which-supports-mul
I have the following code. import polars as pl class Summary: def __init__(self, value: float, origin: str): self.value = value self.origin = origin def __repr__(self) -> str: return f'Summary({self.value},{self.origin})' def __mul__(self, x: float | int) -> 'Summary': return Summary(self.value * x, self.origin) def __rmul__(self, x: float | int) -> 'Summary': return self * x mapping = { 'CASH': Summary( 1, 'E'), 'ITEM': Summary(-9, 'A'), 'CHECK': Summary(46, 'A'), } df = pl.DataFrame({'quantity': [7, 4, 10], 'type': mapping.keys(), 'summary': mapping.values()}) The dataframe df looks as follows. shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ quantity ┆ type ┆ summary β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ object β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════════════║ β”‚ 7 ┆ CASH ┆ Summary(1,E) β”‚ β”‚ 4 ┆ ITEM ┆ Summary(-9,A) β”‚ β”‚ 10 ┆ CHECK ┆ Summary(46,A) β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Especially, the summary column contains a Summary class object, which supports multiplication. Now, I'd like to multiply this column with the quantity column. However, the naive approach raises an error. df.with_columns(pl.col('quantity').mul(pl.col('summary')).alias('qty_summary')) SchemaError: failed to determine supertype of i64 and object Is there a way to multiply these columns?
Remember that Polars is designed so that computations run in Rust, not Python, where it's like 1000x faster. If you have Python operations you want to run, you lose a lot of the benefit of using Polars in the first place. But, thankfully, Polars does have a very nice feature that is relevant here, which is β€œnative” processing of dataclasses. import polars as pl from dataclasses import dataclass @dataclass class Summary: value: float origin: str def __mul__(self, x: float | int) -> "Summary": return Summary(self.value * x, self.origin) def __rmul__(self, x: float | int) -> "Summary": return self * x mapping = { "CASH": Summary(1, "E"), "ITEM": Summary(-9, "A"), "CHECK": Summary(46, "A"), } df = pl.DataFrame( { "quantity": [7, 4, 10], "type": mapping.keys(), "summary": mapping.values(), } ) df Because Summary is a dataclass, you 1. don't need __init__ and __repr__ (they come for free), and 2. don't need to do anything special for Polars to struct-ify them. shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ quantity ┆ type ┆ summary β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ════════════║ β”‚ 7 ┆ CASH ┆ {1.0,"E"} β”‚ β”‚ 4 ┆ ITEM ┆ {-9.0,"A"} β”‚ β”‚ 10 ┆ CHECK ┆ {46.0,"A"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Now you can just do regular Polars struct ops: df.with_columns( qty_summary=pl.struct( pl.col("summary").struct.field("value") * pl.col("quantity"), pl.col("summary").struct.field("origin"), ) )
3
5
78,817,193
2024-7-31
https://stackoverflow.com/questions/78817193/how-is-type-not-a-keyword-in-python
In Python 3.12 we have type aliases like this: Python 3.12.4+ (heads/3.12:99bc8589f0, Jul 27 2024, 11:20:07) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> type S = str >>> S S By this syntax I assumed that, from now, the type word is considered a keyword, but it's not: >>> type = 2 >>> and also: >>> import keyword >>> keyword.iskeyword('type') False
The PEG parser introduced in Python 3.9 is a lot more flexible than the old parser, so it's just capable of handling this kind of thing. Trying to make type a keyword would have broken too much existing code, so they just... didn't. match/case is a similar story - making those keywords would have broken way too much code, such as everything that uses re.match. async used to be treated similarly, although since it was introduced back in 3.5, they had to use a tokenizer hack to get it to work - the parser wasn't powerful enough to handle the problem on its own.
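The standard library exposes this distinction directly (output shown for CPython 3.12, where type joined the soft keyword list):

import keyword

print(keyword.iskeyword("type"))      # False - not a hard keyword
print(keyword.issoftkeyword("type"))  # True  - only a keyword in certain grammar positions
print(keyword.softkwlist)             # ['_', 'case', 'match', 'type'] on 3.12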
4
7
78,815,235
2024-7-31
https://stackoverflow.com/questions/78815235/problem-with-seaborn-kdeplot-when-plotting-two-figures-side-by-side
I am trying to plot two 2d distributions together with their marginal distributions on the top and side of the figure like so: Now I want to combine the above figure with the following figure, such that they appear side by side: However, when doing so, the marginal distributions aren't plotted. Can anyone help? The code for plotting the above figure is given here: import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from scipy.stats import multivariate_normal import ot import ot.plot # Define the mean and covariance for two different multivariate normal distributions mean1 = [0, 0] cov1 = [[1, 0.5], [0.5, 1]] mean2 = [3, 3] cov2 = [[1, -0.5], [-0.5, 1]] n = 100 # Generate random samples from the distributions np.random.seed(0) samples1 = np.random.multivariate_normal(mean1, cov1, size=n) samples2 = np.random.multivariate_normal(mean2, cov2, size=n) df1 = pd.DataFrame(np.concatenate([samples1, samples2]), columns=['X', 'Y']) df1['Distribution'] = 'Target' df1['Distribution'].iloc[n:] = 'Source' # Create a custom palette with blue and red custom_palette = {'Target': 'blue', 'Source': 'red'} # Plotting side by side fig, axs = plt.subplots(1, 2, figsize=(12, 4)) # Jointplot using seaborn g = sns.kdeplot(data=df1, x="X", y="Y", hue="Distribution", kind="kde", space=0, fill=True, palette=custom_palette, ax=axs[0]) axs[0].set_xlim(-4, 6.5) axs[0].set_ylim(-4, 6.5) # axs[0].set_aspect('equal', adjustable='box') sns.move_legend(axs[0], "lower right") # Optimal Transport matching between the samples a, b = np.ones((n,)) / n, np.ones((n,)) / n # uniform distribution on samples M = ot.dist(samples2, samples1, metric='euclidean') G0 = ot.emd(a, b, M) ot.plot.plot2D_samples_mat(samples2, samples1, G0, c=[.5, .5, 1]) axs[1].plot(samples2[:, 0], samples2[:, 1], '+r', markersize=10, label='Source samples') # Increased marker size axs[1].plot(samples1[:, 0], samples1[:, 1], 'xb', markersize=10, label='Target samples') # Increased marker size axs[1].legend(loc=4) # Common labels and limits for ax in axs: ax.set(xlabel='X') ax.set_xlim([-4, 6.5]) ax.set_ylim([-4, 6.5]) # Remove y-axis from the second figure axs[1].set(ylabel='') axs[1].yaxis.set_visible(False) # Adjust layout and save plot as PDF fig.tight_layout() # Show plot plt.show()
You can't do this directly as the marginal distributions require a jointplot, which is a figure-level plot and cannot directly add extra axes. It's however fairly easy to modify the JointGrid code to add more axes. The key is to change: # add more space to accommodate an extra plot # gs = plt.GridSpec(ratio + 1, ratio + 1) gs = plt.GridSpec(ratio + 1, ratio + 1 + ratio) # change how the space is defined (example for the ax_joint) # ax_joint = f.add_subplot(gs[1:, :-1]) # use all width but last ax_joint = f.add_subplot(gs[1:, :ratio]) # use first "ratio" slots Which gives us: ratio = 5 space = .2 f = plt.figure(figsize=(12, 4)) gs = plt.GridSpec(ratio + 1, ratio + 1 + ratio) ax_joint = f.add_subplot(gs[1:, :ratio]) ax_marg_x = f.add_subplot(gs[0, :ratio], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:, ratio], sharey=ax_joint) ax_ot = f.add_subplot(gs[1:, ratio+1:], sharey=ax_joint) # Turn off tick visibility for the measure axis on the marginal plots plt.setp(ax_marg_x.get_xticklabels(), visible=False) plt.setp(ax_marg_y.get_yticklabels(), visible=False) plt.setp(ax_marg_x.get_xticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_yticklabels(minor=True), visible=False) plt.setp(ax_marg_x.yaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_x.yaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_x.get_yticklabels(), visible=False) plt.setp(ax_marg_y.get_xticklabels(), visible=False) plt.setp(ax_marg_x.get_yticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_xticklabels(minor=True), visible=False) ax_marg_x.yaxis.grid(False) ax_marg_y.xaxis.grid(False) utils = sns.axisgrid.utils utils.despine(ax=ax_marg_x, left=True) utils.despine(ax=ax_marg_y, bottom=True) for axes in [ax_marg_x, ax_marg_y]: for axis in [axes.xaxis, axes.yaxis]: axis.label.set_visible(False) f.tight_layout() f.subplots_adjust(hspace=space, wspace=space) sns.kdeplot(data=df1, x='X', y='Y', hue='Distribution', fill=True, palette=custom_palette, ax=ax_joint) sns.move_legend(ax_joint, 'lower right') sns.kdeplot(data=df1, x='X', hue='Distribution', fill=True, palette=custom_palette, legend=False, ax=ax_marg_x) sns.kdeplot(data=df1, y='Y', hue='Distribution', fill=True, palette=custom_palette, legend=False, ax=ax_marg_y) # Optimal Transport matching between the samples a, b = np.ones((n,)) / n, np.ones((n,)) / n # uniform distribution on samples M = ot.dist(samples2, samples1, metric='euclidean') G0 = ot.emd(a, b, M) ot.plot.plot2D_samples_mat(samples2, samples1, G0, c=[.5, .5, 1]) ax_ot.plot(samples2[:, 0], samples2[:, 1], '+r', markersize=10, label='Source samples') # Increased marker size ax_ot.plot(samples1[:, 0], samples1[:, 1], 'xb', markersize=10, label='Target samples') # Increased marker size ax_ot.legend(loc=4) Output: modifying JointGrid for full flexibility Another approach would be to create a subclass of JointGrid that can accept an existing Figure/GridSpec/Axes as input and use those instead of creating their own. In the example below, the JointGridCustom class would expect custom_gs=None (default) or custom_gs=(f, gs, ax_joint, ax_marg_x, ax_marg_y) to reuse existing objects. 
This will allow customization while letting seaborn handle the jointplot: f = plt.figure(figsize=(10, 5)) gs = plt.GridSpec(8, 8) ax_joint = f.add_subplot(gs[1:6, :3]) ax_marg_x = f.add_subplot(gs[0, :3], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:6, 3], sharey=ax_joint) ax_ot = f.add_subplot(gs[1:6, 5:], sharey=ax_joint) ax_bottom = f.add_subplot(gs[7:, :], sharey=ax_joint) g = JointGridCustom(data=df1, x='X', y='Y', hue='Distribution', space=0, palette=custom_palette, custom_gs=(f, gs, ax_joint, ax_marg_x, ax_marg_y) ) g.plot(sns.kdeplot, sns.kdeplot, fill=True) Example output: Full code: import matplotlib from inspect import signature from seaborn._base import VectorPlotter, variable_type, categorical_order from seaborn._core.data import handle_data_source from seaborn._compat import share_axis, get_legend_handles from seaborn import utils from seaborn.utils import ( adjust_legend_subtitles, set_hls_values, _check_argument, _draw_figure, _disable_autolayout ) from seaborn.palettes import color_palette, blend_palette class JointGridCustom(sns.JointGrid): """Grid for drawing a bivariate plot with marginal univariate plots. Many plots can be drawn by using the figure-level interface :func:`jointplot`. Use this class directly when you need more flexibility. """ def __init__( self, data=None, *, x=None, y=None, hue=None, height=6, ratio=5, space=.2, palette=None, hue_order=None, hue_norm=None, dropna=False, xlim=None, ylim=None, marginal_ticks=False, custom_gs=None, ): # Set up the subplot grid if custom_gs: f, gs, ax_joint, ax_marg_x, ax_marg_y = custom_gs assert isinstance(f, matplotlib.figure.Figure) assert isinstance(gs, matplotlib.gridspec.GridSpec) assert isinstance(ax_joint, matplotlib.axes.Axes) assert isinstance(ax_marg_x, matplotlib.axes.Axes) assert isinstance(ax_marg_y, matplotlib.axes.Axes) else: f = plt.figure(figsize=(height, height)) gs = plt.GridSpec(ratio + 1, ratio + 1) ax_joint = f.add_subplot(gs[1:, :-1]) ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint) self._figure = f self.ax_joint = ax_joint self.ax_marg_x = ax_marg_x self.ax_marg_y = ax_marg_y # Turn off tick visibility for the measure axis on the marginal plots plt.setp(ax_marg_x.get_xticklabels(), visible=False) plt.setp(ax_marg_y.get_yticklabels(), visible=False) plt.setp(ax_marg_x.get_xticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_yticklabels(minor=True), visible=False) # Turn off the ticks on the density axis for the marginal plots if not marginal_ticks: plt.setp(ax_marg_x.yaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_x.yaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_majorticklines(), visible=False) plt.setp(ax_marg_y.xaxis.get_minorticklines(), visible=False) plt.setp(ax_marg_x.get_yticklabels(), visible=False) plt.setp(ax_marg_y.get_xticklabels(), visible=False) plt.setp(ax_marg_x.get_yticklabels(minor=True), visible=False) plt.setp(ax_marg_y.get_xticklabels(minor=True), visible=False) ax_marg_x.yaxis.grid(False) ax_marg_y.xaxis.grid(False) # Process the input variables p = VectorPlotter(data=data, variables=dict(x=x, y=y, hue=hue)) plot_data = p.plot_data.loc[:, p.plot_data.notna().any()] # Possibly drop NA if dropna: plot_data = plot_data.dropna() def get_var(var): vector = plot_data.get(var, None) if vector is not None: vector = vector.rename(p.variables.get(var, None)) return vector self.x = get_var("x") self.y = get_var("y") self.hue = get_var("hue") for axis in "xy": name = 
p.variables.get(axis, None) if name is not None: getattr(ax_joint, f"set_{axis}label")(name) if xlim is not None: ax_joint.set_xlim(xlim) if ylim is not None: ax_joint.set_ylim(ylim) # Store the semantic mapping parameters for axes-level functions self._hue_params = dict(palette=palette, hue_order=hue_order, hue_norm=hue_norm) # Make the grid look nice utils.despine(f) if not marginal_ticks: utils.despine(ax=ax_marg_x, left=True) utils.despine(ax=ax_marg_y, bottom=True) for axes in [ax_marg_x, ax_marg_y]: for axis in [axes.xaxis, axes.yaxis]: axis.label.set_visible(False) f.tight_layout() f.subplots_adjust(hspace=space, wspace=space) f = plt.figure(figsize=(10, 5)) gs = plt.GridSpec(8, 8) ax_joint = f.add_subplot(gs[1:6, :3]) ax_marg_x = f.add_subplot(gs[0, :3], sharex=ax_joint) ax_marg_y = f.add_subplot(gs[1:6, 3], sharey=ax_joint) ax_ot = f.add_subplot(gs[1:6, 5:], sharey=ax_joint) ax_bottom = f.add_subplot(gs[7:, :], sharey=ax_joint) g = JointGridCustom(data=df1, x='X', y='Y', hue='Distribution', space=0, palette=custom_palette, custom_gs=(f, gs, ax_joint, ax_marg_x, ax_marg_y) ) g.plot(sns.kdeplot, sns.kdeplot, fill=True)
2
2
78,814,853
2024-7-31
https://stackoverflow.com/questions/78814853/create-a-widget-factory-in-qt
Hello~ I'm creating a set of custom widgets that extend the native widgets in Qt. My custom widgets are supposed to be constructed from a data source, and they all provide a custom function Foobar. For example: class CheckBox: public QCheckBox, public Control { Q_OBJECT public: CheckBox(QWidget* parent = 0); CheckBox(DataSource* data, QWidget* parent = 0); virtual ~CheckBox(); virtual void Foobar(); }; class ComboBox: public QComboBox, public Control { Q_OBJECT public: ComboBox(QWidget* parent = 0); ComboBox(DataSource* data, QWidget* parent = 0); virtual ~ComboBox(); virtual void Foobar(); }; Each widget class has a corresponding data source class that is responsible for creating the widget. For example: class CheckBoxDataSource: public DataSource { public: CheckBoxDataSource(); virtual ~CheckBoxDataSource(); virtual QWidget* createControl(QWidget* parent); }; class ComboBoxDataSource: public DataSource { public: ComboBoxDataSource(); virtual ~ComboBoxDataSource(); virtual QWidget* createControl(QWidget* parent); }; After these classes are exposed to Python, Python clients who have a list of various data sources are expected to use it like this: for data in dataSources: widget = data.createControl(parent) # returns a QCheckBox/QComboBox/etc # at a later time ... widget.Foobar() However, this is not going to work because data.createControl() will return an object of type QCheckBox or QCombox in PyQt, so clients won't be able to call widget.Foobar() as the Foobar() function isn't available in these classes. How can I work around this? So far I have tried these below but none of them works... Attempt 1 In Python, I tried casting the widget to the right C++ class so that I can call Foobar(). widget.__class__ = CheckBox widget.__class__ = ComboBox In some cases this works fine, in other cases I get an error AttributeError: 'CheckBox' object has no attribute 'Foobar' even though it's the correct type. Interestingly, if I put a breakpoint on the line and steps over it in the debugger, it always works, without the breakpoint it fails, so there seems to be a timing issue that I could not figure out. Anyway, mutating __class__ is dangerous so I probably should avoid it. Attempt 2 If I add virtual void Foobar() to the base class Control and declare createControl() to return a Control* instead of QWidget*. This should allow clients to call Foobar() in Python, but they also lose the ability to treat it as a QWidget. Attempt 3 widget = CheckBox(data.createControl(parent)) By wrapping the returned object in another constructor call, the widget in Python should now have the right type. But this will create 2 CheckBox widgets, not 1, so I have to "delete" one of them. Looks like the copy constructor is being called in this case. I don't think I can move it in Python. Sorry my brain is a huge mess at the moment, I think there should be a better factory design?
So you want to have QWidget*s with an additional Foobar interface. I am afraid it might not be possible in a straightforward C++ way without wrapper classes, e.g. something similar to the setupUi(parentPtr); pattern used by the Qt User Interface Compiler. Qt doesn't allow multiple inheritance from QObject, so you cannot design your class Control to be a QWidget, because then you could not inherit from both Control and QCheckBox. However, you can use the meta-object features (the Reflection pattern). If you declare your additional interface methods as Q_INVOKABLE:
class MyCheckBox : public QCheckBox {
    Q_OBJECT;
public:
    Q_INVOKABLE void Foobar() { qDebug() << "MyCheckBox::Foobar"; }
};
you can then call them with
QMetaObject::invokeMethod(myCheckBoxPtr, "Foobar", Qt::DirectConnection);
where myCheckBoxPtr is a QWidget* (or even a QObject* if you want to abstract further). C++ documentation, python documentation
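For the Python side, a rough PyQt5 sketch of the same idea (my own illustration, not from the original answer; the class below is a stand-in for the exposed C++ CheckBox, and the method is registered with the meta-object system via pyqtSlot so it can be looked up by name):
from PyQt5.QtCore import QMetaObject, Qt, pyqtSlot
from PyQt5.QtWidgets import QApplication, QCheckBox

class CheckBox(QCheckBox):
    @pyqtSlot()
    def Foobar(self):
        print("CheckBox.Foobar called")

app = QApplication([])
widget = CheckBox()  # in the real code this would come from data.createControl(parent)
# No cast is needed: the method is invoked by name through the meta-object system.
QMetaObject.invokeMethod(widget, "Foobar", Qt.DirectConnection)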
3
4
78,814,153
2024-7-31
https://stackoverflow.com/questions/78814153/i-cant-install-pyinstaller-from-command-line-in-debian-linux
I wanted Pyinstaller on my Debian machine, so I ran the following command: sudo pip3 install pyinstaller This returned the following error: error: externally-managed-environment Γ— This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. So then I ran this: sudo apt install python3-pyinstaller That returned this: Reading package lists... Done Building dependency tree... Done Reading state information... Done E: Unable to locate package python3-pyinstaller I don't know what else to try. pip has been doing this, but usually using apt, I can install Python packages pretty easily.
As the error message states, you would need to install through apt to install globally. If that's not available for whatever reason, the next best thing is to install the package into a virtual environment of some sort (e.g. with virtualenv env and then source env/bin/activate to activate the virtual environment). In my case there are some packages I would definitely not install in any other way and that I want to be able to use without activating any virtual environment whatsoever (e.g. bpytop, ps_mem), so, just as the error says, pass --break-system-packages when you run pip3 install pyinstaller. To clarify, the command should look like this: pip3 install pyinstaller --break-system-packages
2
1
78,802,547
2024-7-27
https://stackoverflow.com/questions/78802547/compute-minimum-area-convex-k-gon-in-2d
I am trying to solve the following problem: given a set of points P and a value k, find the area of the smallest convex k-gon defined by a subset of points S of P with |P| = n and |S| = k. I found this paper describing an algorithm solving the problem in O(kn^3) (which is exactly what I need, considering in my case k can be big). Specifically, you can see the algorithm in section 4 (Algorithm 3). Based on this paper, I have implemented the following python script: import math # Given a line defined by the points ab, it returns +1 if p is on the left side of the line, 0 if collinear and -1 if on the right side of the line def side(a, b, p): t = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) return math.copysign(1, t) def angle(v1, v2): v1_u = v1 / np.linalg.norm(v1) v2_u = v2 / np.linalg.norm(v2) return np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0)) # Computes the area of triangle abc def area(a, b, c): return abs(ConvexHull([a,b,c]).volume) # Computes the slope of a line defined by points a,b def slope(a, b): return (b[1] - a[1]) / (b[0] - a[0]) # Returns the list of points in P above pi ordered clockwise around pi def get_above_clockwise(pi, P, ref = np.array((-1, 0))): above = [p for p in P if p[1] > pi[1]] return sorted(above, key=lambda x: angle(ref, np.array(x) - np.array(pi))) # Returns the list of points in P ordered clockwise (by slope) around pj def get_slope_clockwise(pj, P): T = [p for p in P if p != pj] return sorted(T, key=lambda x: abs(slope(pj, x))) def get_min_kgon(P, K): D = {p:i for i, p in enumerate(P)} T = np.full((K+1, len(P), len(P), len(P)), np.inf, dtype=np.float32) total_min = np.inf for pi in P: T[2, D[pi]].flat = 0 PA = get_above_clockwise(pi, P) for k in range(3, K+1): for pj in PA: min_area = np.inf PA2 = get_slope_clockwise(pj, PA + [pi]) pi_idx = PA2.index(pi) PA2 = PA2[(pi_idx+1):] + PA2[:pi_idx] for pl in PA2: if pl == pj: continue if side(pi, pj, pl) == 1: min_area = min(min_area, T[k-1, D[pi], D[pl], D[pj]] + area(pi, pj, pl)) T[k, D[pi], D[pj], D[pl]] = min_area total_min = min(total_min, np.min(T[K, D[pi]].flat)) return total_min I have also implemented a function computing the exact solution for P and k using a brute-force approach. from itertools import combinations from scipy.spatial import ConvexHull def reliable_min_kgon(P, K): m = np.inf r = None for lst in combinations(P, K): ch = ConvexHull(lst) if ch.volume < m and len(ch.vertices) == K: m = ch.volume r = lst return m, r That being said, I have been stuck with this for a few days now. The solutions are sometimes correct and sometimes no. I guess the way I sort the points pl around pj is not correct (maybe?). I would really appreciate some help if any of you has experience with computational geometry or (even better) this paper/problem in particular. I am providing a point set P and k for which the result of get_min_kgon(P,k) != reliable_min_kgon(P, k). K = 5 P = [ (102, 466), (435, 214), (860, 330), (270, 458), (106, 87), (71, 372), (700, 99), (20, 871), (614, 663), (121, 130) ] Thank you for your time and help! P.S. you can expect P to be in general position and that only integer coordinates are allowed.
I changed the function get_slope_clockwise(pj, P). The original function cannot guarantee the correct order using the absolute value of the slope: absolute values can lead to points on opposite sides of the line (with positive and negative slopes) being treated as equivalent, which disrupts the desired clockwise order. In the modified version, T is sorted by the angle measured around pj: for points above pj the angle to the reference vector (always in the range [0, pi]) is used directly, and otherwise it is replaced by 2*pi minus that angle, so the full clockwise order around pj is preserved.
def get_slope_clockwise(pj, P):
    T = [p for p in P if p != pj]  # and p[0]>pj[0]
    # return sorted(T, key=lambda x: abs(slope(pj, x)))
    ref = np.array([-1, 0])
    # return sorted(T, key=lambda x: angle(ref, np.array(x) - np.array(pj)))
    # return sorted(T, key=lambda x: ((np.array(x) - np.array(pj))[1] < 0,angle(ref, np.array(x) - np.array(pj))))
    return sorted(
        T,
        key=lambda x: (
            # (np.array(x) - np.array(pj))[1] < 0,  # Check if the point is below pj
            angle(ref, np.array(x) - np.array(pj))
            if (np.array(x) - np.array(pj))[1] > 0
            else 2 * math.pi - angle(ref, np.array(x) - np.array(pj))  # Use 2*pi minus the angle if below pj
        )
    )
Then,
import random
P = [(random.randrange(10 ** 3), random.randrange(10 ** 3)) for _ in P]
get_min_kgon(P, K) == reliable_min_kgon(P, K)[0]
Before the change, the two functions indeed returned different values in some cases, such as:
P = [(551, 728), (393, 966), (668, 453), (113, 616), (322, 33), (736, 665), (893, 440), (54, 597), (67, 140), (599, 614)]
K = 5
Now, both functions return the same value in all my local test cases.
import matplotlib.pyplot as plt

result = reliable_min_kgon(P, K)
x, y = zip(*P)

# Plot the points
plt.figure(figsize=(8, 8))
plt.scatter(x, y, color='blue')

# Annotate the points
for i, (xi, yi) in enumerate(result[1]):
    plt.text(xi + 5, yi + 5, f'{i}', fontsize=12, color='red')

point0 = result[1][0]  # (551, 728)
point3 = result[1][3]  # (54, 597)
plt.plot([point0[0], point3[0]], [point0[1], point3[1]], color='green', linestyle='-', linewidth=2)

# Add grid, labels, and title
plt.grid(True)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Scatter Plot of Points in P')

# Set equal scaling
plt.gca().set_aspect('equal', adjustable='box')

# Show the plot
plt.show()
3
1
78,809,996
2024-7-30
https://stackoverflow.com/questions/78809996/how-to-load-templates-in-django-for-specific-application
So I am learning Django at in advanced level. I already know how to include templates from BASE_DIR where manage.py is located. However I wanted to know how to locate templates in the specific app in Django. For example, I have a project named mysite and an app called polls. I then specify the templates directory in settings.py: DIRS=[os.path.join(BASE_DIR, "templates")] However, I do not know how to set a templates directory specific to the polls app. This is the project structure: . β”œβ”€β”€ db.sqlite3 β”œβ”€β”€ manage.py β”œβ”€β”€ mysite β”‚ β”œβ”€β”€ asgi.py β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ settings.py β”‚ β”œβ”€β”€ urls.py β”‚ └── wsgi.py β”œβ”€β”€ polls β”‚ β”œβ”€β”€ admin.py β”‚ β”œβ”€β”€ apps.py β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ migrations β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ models.py β”‚ β”œβ”€β”€ static β”‚ β”‚ β”œβ”€β”€ images β”‚ β”‚ β”œβ”€β”€ scripts.js β”‚ β”‚ └── style.css β”‚ β”œβ”€β”€ tests.py β”‚ β”œβ”€β”€ urls.py β”‚ └── views.py └── templates β”œβ”€β”€ polls β”‚ └── index.html └── static β”œβ”€β”€ images β”œβ”€β”€ scripts.js └── style.css I wanted it to look like this. . β”œβ”€β”€ db.sqlite3 β”œβ”€β”€ manage.py β”œβ”€β”€ mysite β”‚ β”œβ”€β”€ asgi.py β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ settings.py β”‚ β”œβ”€β”€ urls.py β”‚ └── wsgi.py β”œβ”€β”€ polls β”‚ β”œβ”€β”€ admin.py β”‚ β”œβ”€β”€ apps.py β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ migrations β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ models.py β”‚ β”œβ”€β”€ static β”‚ β”‚ β”œβ”€β”€ images β”‚ β”‚ β”œβ”€β”€ scripts.js β”‚ β”‚ └── style.css β”‚ β”œβ”€β”€ templates <- New β”‚ β”‚ β”œβ”€β”€ polls β”‚ | β”‚ β”œβ”€β”€ index.html β”‚ β”œβ”€β”€ tests.py β”‚ β”œβ”€β”€ urls.py β”‚ └── views.py └── templates β”œβ”€β”€ polls β”‚ └── index.html └── static β”œβ”€β”€ images β”œβ”€β”€ scripts.js └── style.css
If you have the app-specific templates located at polls/templates/polls/, then you can add that path to TEMPLATES in settings.py as shown below:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'polls/templates')],  # here the app template path is set
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
And while rendering the template, the path should have a folder prefix, like render(request, 'polls/index.html', context). UPDATE: since APP_DIRS is set to True in the settings above, Django will automatically look in each app's <yourapp>/templates directory for templates even if the app template path is not added to DIRS explicitly.
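For completeness, a minimal sketch of a matching view (my own illustration; the view name and context contents are made up, only the template path follows the question's layout):
# polls/views.py
from django.shortcuts import render

def index(request):
    context = {"latest_question_list": []}  # placeholder data
    # Resolves to polls/templates/polls/index.html because APP_DIRS is True
    return render(request, "polls/index.html", context)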
2
3
78,794,422
2024-7-25
https://stackoverflow.com/questions/78794422/matplotlib-plot-not-responding-in-vscode-debug-mode
I'm encountering an issue when trying to plot a simple graph using Matplotlib in Python while in debug mode in VSCode. The plot works perfectly in normal execution mode, but when I set a breakpoint and run the code in debug mode, the plot window becomes unresponsive. Example Code: import matplotlib.pyplot as plt # Sample data x = [1, 2, 3, 4, 5] y = [10, 20, 25, 30, 40] # Plotting the data plt.plot(x, y) plt.title('Sample Plot') plt.xlabel('x-axis') plt.ylabel('y-axis') # Show plot plt.show() In normal execution mode, the plot window appears and is responsive. In debug mode with a breakpoint, the plot window appears but becomes unresponsive and eventually throws a "not responding" error. I tried using different Matplotlib backends, but none of them resolved the issue. I expect to be able to plot graphs using Matplotlib in debug mode vscode without the plot window becoming unresponsive.
I also had this issue. I got it working again by running Python 3.11.9 instead of 3.12.*, and I also had to switch to matplotlib 3.8.3 instead of 3.9.*. It had been driving me crazy for a couple of weeks. So, try setting up a virtual environment in VSCode with matplotlib 3.8.3 and switching your interpreter to 3.11.9 in that environment.
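To double-check that the debugger is really using the downgraded interpreter and matplotlib build, a quick sketch (mine, not from the original answer) you can run inside the debugged process:
import sys
import matplotlib

print(sys.version)               # expect a 3.11.x interpreter here
print(matplotlib.__version__)    # expect 3.8.3 here
print(matplotlib.get_backend())  # the backend actually selected in debug mode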
2
9
78,796,828
2024-7-26
https://stackoverflow.com/questions/78796828/i-got-this-error-oserror-winerror-193-1-is-not-a-valid-win32-application
I was trying to run a Python file today and got the error below. Does anyone know what the issue is and how to fix it?
Traceback (most recent call last):
  File "C:\Users\Al PC\PycharmProjects\Fe\report_auto-final-v2.7.py", line 60, in <module>
    driver = webdriver.Chrome(service=chrome_service, options=options)  # ChromeDriverManager()
  File "C:\Users\Al PC\PycharmProjects\SocialMedia\venv\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__
    super().__init__(
  File "C:\Users\Al PC\PycharmProjects\SocialMedia\venv\lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 53, in __init__
    self.service.start()
  File "C:\Users\Al PC\PycharmProjects\SocialMedia\venv\lib\site-packages\selenium\webdriver\common\service.py", line 105, in start
    self._start_process(self._path)
  File "C:\Users\Al PC\PycharmProjects\SocialMedia\venv\lib\site-packages\selenium\webdriver\common\service.py", line 206, in _start_process
    self.process = subprocess.Popen(
  File "C:\Users\Al PC\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 966, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\Al PC\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1435, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
OSError: [WinError 193] %1 is not a valid Win32 application
Process finished with exit code 1
If you construct the service like this:
service = ChromeService(ChromeDriverManager().install())
I noticed ChromeDriverManager().install() returns <path stuff>\chromedriver-win32\THIRD_PARTY_NOTICES.chromedriver instead of chromedriver.exe. This works for me:
chrome_install = ChromeDriverManager().install()
folder = os.path.dirname(chrome_install)
chromedriver_path = os.path.join(folder, "chromedriver.exe")
service = ChromeService(chromedriver_path)
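A hedged end-to-end sketch of how the corrected path plugs into the driver construction from the question (my own combination of the snippets above; imports added for completeness and the example URL is arbitrary):
import os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager

chrome_install = ChromeDriverManager().install()
chromedriver_path = os.path.join(os.path.dirname(chrome_install), "chromedriver.exe")

service = ChromeService(chromedriver_path)
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=service, options=options)
driver.get("https://example.com")
driver.quit()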
9
26
78,802,585
2024-7-27
https://stackoverflow.com/questions/78802585/abbreviating-dataclass-decorator-without-losing-intellisense
Scenario Suppose I want to create an alias for a dataclasses.dataclass decorator with specific arguments. For example: # Instead of repeating this decorator all the time: @dataclasses.dataclass(frozen=True, kw_only=True) class Entity: ... # I just write something like this: @struct class Entity: ... The static analyzer I am using is Pylance, in Visual Studio Code. I am using Python 3.11. Attempt 1: Direct Assignment (Runtime βœ…, Static Analysis ❌) My first instinct was to leverage the fact that functions are first-class citizens and simply assign the created decorator function to a custom name. This works at runtime, but Pylance no longer recognizes Entity as a dataclass, as evident from the static analysis error: struct = dataclasses.dataclass(frozen=True, kw_only=True) @struct class Entity: name: str value: int # STATIC ANALYZER: # Expected no arguments to "Entity" constructor Pylance(reportCallIssue) valid_entity = Entity(name="entity", value=42) # RUNTIME: # Entity(name='entity', value=42) print(valid_entity) Attempt 2: Wrapping (Runtime ❌, Static Analysis ❌) I then thought that maybe some information was being lost somehow if I just assign to another name (though I don't see why that would be the case), so I looked to wrapping it with functools. However, this still has the same behavior in static analysis and even causes a runtime error, when I apply @struct: import dataclasses import functools def struct(cls): decorator = dataclasses.dataclass(frozen=True, kw_only=True) decorated_cls = decorator(cls) functools.update_wrapper(decorated_cls, cls) return decorated_cls # No error reported by static analyzer, but runtime error at `@struct`: # AttributeError: 'mappingproxy' object has no attribute 'update' @struct class Entity: name: str value: int # STATIC ANALYZER: # Expected no arguments to "Entity" constructor Pylance(reportCallIssue) # RUNTIME: # (this line doesn't even get reached) valid_entity = Entity(name="entity", value=42) Full traceback: Traceback (most recent call last): File "C:\Users\***\temp.py", line 12, in <module> @struct ^^^^^^ File "C:\Users\***\temp.py", line 7, in struct functools.update_wrapper(decorated_cls, cls) File "C:\Users\***\AppData\Local\Programs\Python\Python311\Lib\functools.py", line 58, in update_wrapper getattr(wrapper, attr).update(getattr(wrapped, attr, {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'mappingproxy' object has no attribute 'update' Attempt 3: Wrapper Factory (Runtime βœ…, Static Analysis ❌) I then tried making struct a decorator factory instead and used functools.wraps() on a closure function that just forwards to the dataclass function. This now works at runtime, but Pylance still reports the same error as in Attempt 1: def struct(): decorator = dataclasses.dataclass(frozen=True, kw_only=True) @functools.wraps(decorator) def decorator_wrapper(*args, **kwargs): return decorator(*args, **kwargs) return decorator_wrapper @struct() class Entity: name: str value: int # STATIC ANALYZER: # Expected no arguments to "Entity" constructor Pylance(reportCallIssue) valid_entity = Entity(name="entity", value=42) # RUNTIME: # Entity(name='entity', value=42) print(valid_entity) I also found that using the plain dataclasses.dataclass function itself (no ()) has the exact same problem across all 3 attempts. Is there any way to get this to work without messing up IntelliSense? Optional follow-up: why did Attempt 2 fail at runtime?
Decorate struct() with dataclass_transform(frozen_default = True, kw_only_default = True): (playgrounds: Mypy, Pyright) # 3.11+ from typing import dataclass_transform # 3.10- from typing_extensions import dataclass_transform @dataclass_transform(frozen_default = True, kw_only_default = True) def struct[T](cls: type[T]) -> type[T]: return dataclass(frozen = True, kw_only = True)(cls) # By the way, you can actually pass all of them # to dataclass() in just one call: # dataclass(cls, frozen = True, kw_only = True) # It's just that this signature isn't defined statically. @struct class Entity: name: str value: int reveal_type(Entity.__init__) # (self: Entity, *, name: str, value: int) -> None valid_entity = Entity(name="entity", value=42) # fine valid_entity.name = "" # error: "Entity" is frozen dataclass_transform() is used to mark dataclass transformers (those that has similar behaviour to the built-in dataclasses.dataclass). It accepts a number of keyword arguments, in which: frozen_default = True means that the class decorated with @struct will be frozen "by default". kw_only_default = True means that the constructor generated will only have keyword arguments (aside from self) "by default". "By default" means that, unless otherwise specified via the frozen/kw_only arguments to the @struct decorator, the @struct-decorated class will behave as such. However, since struct itself takes no such arguments, "by default" here is the same as "always".
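Since the question targets Python 3.11, where the def struct[T] generic syntax is not yet available, an equivalent spelling with an explicit TypeVar (my own adaptation of the answer's code, same behaviour):
from dataclasses import dataclass
from typing import TypeVar, dataclass_transform  # dataclass_transform is in typing since 3.11

T = TypeVar("T")

@dataclass_transform(frozen_default=True, kw_only_default=True)
def struct(cls: type[T]) -> type[T]:
    return dataclass(frozen=True, kw_only=True)(cls)

@struct
class Entity:
    name: str
    value: int

valid_entity = Entity(name="entity", value=42)  # accepted by Pylance and at runtime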
4
3
78,806,812
2024-7-29
https://stackoverflow.com/questions/78806812/third-party-notices-chromedriver-exec-format-error-undetected-chromedriver
undetected_chromedriver with webdriver_manager was working well few days ago for scraping websites but out of nowhere it started throwing the error: OSError: [Errno 8] Exec format error: '/Users/pd/.wdm/drivers/chromedriver/mac64/127.0.6533.72/chromedriver-mac-x64/THIRD_PARTY_NOTICES.chromedriver' I am guessing it is related to recent update of webdriver_manager. This is the code: import undetected_chromedriver as uc from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.support import expected_conditions as EC def get_driver(): options = uc.ChromeOptions() # options.add_argument("--headless") options.add_argument("--no-sandbox") options.add_argument("--disable-dev-sim-usage") options.add_argument("--start-maximized") options.add_argument('--disable-popup-blocking') driver = uc.Chrome(driver_executable_path=ChromeDriverManager().install(), options=options, version_main=116) driver.maximize_window() return driver It would be really great if someone can help me on this, Thanks.
The command ChromeDriverManager().install() creates a new folder without the executable and retrieves the wrong file. First, you need to remove the .wdm folder and then reinstall webdriver-manager:
Windows location: r"C:\Users\{user}\.wdm"
Linux location: /home/{user}/.wdm
Mac location: /Users/{user}/.wdm
rm -rf /home/user/.wdm
pip uninstall webdriver-manager
pip install webdriver-manager
Now, after executing ChromeDriverManager().install(), you should only see a single folder with the executable. Check whether there really is a chromedriver executable inside this folder. Second, correct the file name if needed:
if 'THIRD_PARTY_NOTICES.chromedriver' in chromedriver_path:
    chromedriver_path = chromedriver_path.replace('THIRD_PARTY_NOTICES.chromedriver', 'chromedriver')
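Applied to the question's get_driver(), a hedged sketch with the file-name correction folded in (my own combination of the snippets above; the options and any version pinning are whatever matches your local Chrome):
import undetected_chromedriver as uc
from webdriver_manager.chrome import ChromeDriverManager

def get_driver():
    chromedriver_path = ChromeDriverManager().install()
    if 'THIRD_PARTY_NOTICES.chromedriver' in chromedriver_path:
        chromedriver_path = chromedriver_path.replace('THIRD_PARTY_NOTICES.chromedriver', 'chromedriver')
    options = uc.ChromeOptions()
    options.add_argument("--no-sandbox")
    driver = uc.Chrome(driver_executable_path=chromedriver_path, options=options)
    return driver

# driver = get_driver()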
18
31
78,808,819
2024-7-29
https://stackoverflow.com/questions/78808819/issue-with-polars-and-polars-talib
I'm fairly new to programming as a whole and I'm trying to learn using Polars with the polars_talib library. However when I import polars_talib I get the following error: ModuleNotFoundError Traceback (most recent call last) Cell In[19], line 8 6 import mintalib as mt 7 import pandas_ta as pta ----> 8 from polars.utils.udfs import _get_shared_lib_location 9 import polars_talib as plt ModuleNotFoundError: No module named 'polars.utils' And I can't for the life of me figure out why I'm getting this message. I expected the library to import without error, tried Googling for a fix but couldn't find anything useful. I've reinstalled Polars with all optional dependencies and polars ta_lib, still nothing.
This issue has been fixed in the latest version of the library. Please update to version 0.1.3, which now supports Polars v1, to resolve the problem.
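To confirm which versions are actually installed in the active environment, a small sketch using only the standard library (note: the distribution name may be registered as polars_talib or polars-talib depending on how it was published, so adjust the lookup if it fails):
from importlib.metadata import version

print(version("polars"))        # should be >= 1.0 for the fixed polars_talib
print(version("polars_talib"))  # should be >= 0.1.3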
2
0
78,788,533
2024-7-24
https://stackoverflow.com/questions/78788533/preventing-the-gibbs-phenomenon-on-a-reverse-fft
I am currently filtering some data and ran into trouble when filtering smaller frequencies from a large trend. The inverse FFTs seem to have large spikes at the beginning and the end. Here is the data before and after filtering smaller frequencies. I have looked into the mathematical phenomenon and it is called the Gibbs phenomenon. Is there a way around this to clear the data of some overlying frequencies without getting this effect? Or is there at least a workaround to keep the spikes as small as possible? Here is the code, by the way:
fourier_transformation = np.fft.fft(Sensor_4)
frequencies = np.fft.fftfreq(len(time), d=1/Values_per_second)
fourier_transformation[np.abs(frequencies) > 0.18] = 0
Sensor_4 = np.fft.ifft(fourier_transformation)
Following the suggestion of Martin Brown's comment, the following code subtracts a ramp before FFT and adds it back after IFFT (I needed to make up my own values for Sensor_4, Values_per_second, and time, as the corresponding variables were missing in the question, so you might need to tune the parameters to match your actual signal): import matplotlib.pyplot as plt import numpy as np # Create signal of Sensor_4 (similar to question) T = 600 Values_per_second = 100 # Just to have a value here time = np.arange(0, T, 1/Values_per_second) Sensor_4 = 2 * np.sin(.25 * (2 * np.pi * time / T)) - 2.25 Sensor_4 += .1 * np.sin(1.9 * (2 * np.pi * time / T)) Sensor_4 += .1 * np.sin(2 * np.pi * time * 0.18) frequencies = np.fft.fftfreq(len(time), d=1/Values_per_second) # Original version fourier_transformation = np.fft.fft(Sensor_4) fourier_transformation[np.abs(frequencies) > 0.18] = 0 filtered_no_ramp = np.real(np.fft.ifft(fourier_transformation)) # Using ramp, as suggested in # https://stackoverflow.com/questions/78788533#comment138912162_78788533 # (1) Create ramp, (2) subtract ramp, (3) FFT, (4) filter, (5) IFFT, (6) add ramp back ramp = np.linspace(Sensor_4[0], Sensor_4[-1], len(Sensor_4)) # (1) fourier_transformation = np.fft.fft(Sensor_4 - ramp) # (2, 3) fourier_transformation[np.abs(frequencies) > 0.18] = 0 # (4) filtered_ramp = np.real(np.fft.ifft(fourier_transformation)) + ramp # (5, 6) # Show result ax1 = plt.subplot(311, title="unfiltered") plt.plot(time, Sensor_4) plt.subplot(312, sharey=ax1, title="w/o ramp") plt.plot(time, filtered_no_ramp) plt.subplot(313, sharey=ax1, title="with ramp") plt.plot(time, filtered_ramp) plt.show() Here is the result, before filtering (top), filtering without (center) and with (bottom) ramp subtraction:
4
2