question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
79,294,496 | 2024-12-19 | https://stackoverflow.com/questions/79294496/writing-windows-registry-with-backslash-in-sub-key-name-in-python | I'm trying to write into the Windows registry key "SOFTWARE/Microsoft/DirectX/UserGpuPreferences" using the Python winreg module. The name of the sub-key needs to be the python executable path, which contains backslashes. However when using winreg.SetValue(), this instead adds a tree of keys following the components of the path, instead of a single sub-key whose name contains backslashes. import winreg path = ['SOFTWARE', 'Microsoft', 'DirectX', 'UserGpuPreferences'] registry = winreg.ConnectRegistry(None, winreg.HKEY_CURRENT_USER) key = winreg.OpenKey(registry, '\\'.join(path), 0, winreg.KEY_WRITE) sub_key = sys.executable value = 'GpuPreference=2;' winreg.SetValue(key, sub_key, winreg.REG_SZ, value) Also escaping the backslashes (sys.executable.replace('\\', '\\\\')) does not change it. Is there any way to insert such a registry value, using Python? | It works correctly with SetValueEx() | 1 | 2 |
79,291,758 | 2024-12-18 | https://stackoverflow.com/questions/79291758/trying-to-time-the-in-operator | I have a question about how the in operator works internally. So to test it I tried to graph the time it takes to find a member of a list, in different parts of the list. import time import matplotlib.pyplot as plt # list size size = 100000 # number of points n = 100000 # display for not having to count zeros if size >= 1000000: new_size = str(int(size/1000000))+"M" else: new_size = str(int(size/1000))+"K" if n >= 1000000: new_n = str(int(n/10000000))+"M" elif n >= 1000: new_n = str(int(n/1000))+"K" else: new_n = n lst = list(range(size)) result = [] for number in range(0,size+1,int(size/n)): start_time = time.time_ns() if number in lst: end_time = time.time_ns() total_time = (end_time-start_time)/1000000000 #convert ns to seconds result.append([total_time,number]) print(number,total_time) x_values, y_values = zip(*result) plt.scatter(x_values, y_values, c='red', marker='o',s=5) plt.xlabel('Time (sec)') plt.ylabel('Number') plt.title(f'List length: {new_size}\nNumber of points: {new_n}\n\nTime to find number in list') plt.grid(True) plt.show() From what I saw, in works internally by calling iter which sequentially takes all elements of the iterable, starting from the first. So I would expect that searching further in a list takes more and more time. The list has to be above a certain length so that python takes a long enough time to find the number that we can measure, but fiddling around with different list sizes and number of points I found out that the points cluster around vertical lines on multiples of 0.001s, which is very bizarre to me. I could see some horizontal lines forming due to how python works internally, but not vertical ones. I even used time.time_ns() to have even more time precision, but it still happens. I mean, how can it take the same time to find 539 and 94,598 inside a [0,1, ... 100000] list if it always starts from 0? | This test seems to suggest to me that searching for a number in a list takes longer and longer based on where the item falls in the list. import timeit setup = """ size = 1_000_000 size_minus_one = size - 1 data = list(range(size)) """ print(timeit.timeit("0 in data", setup=setup, number=1_000)) print(timeit.timeit("size_minus_one in data", setup=setup, number=1_000)) I assume some people are not satisfied with that answer as the vertical bars are not "explained". Let's do that as well. The vertical bars are an artifact of the resolution of time.time_ns(). Using time.perf_counter_ns() as recommended by @no-comment gives us the following result confirming that the time to find an item in a list is directly proportional to the location of the item in the list. Via this code: import time import matplotlib.pyplot as plt # list size size = 100000 # number of points n = 100000 # display for not having to count zeros if size >= 1000000: new_size = str(int(size/1000000))+"M" else: new_size = str(int(size/1000))+"K" if n >= 1000000: new_n = str(int(n/10000000))+"M" elif n >= 1000: new_n = str(int(n/1000))+"K" else: new_n = n clock = time.perf_counter_ns lst = list(range(size)) result = [] for number in range(0,size+1, int(size/n)): start_time = clock() if number in lst: result.append([clock() - start_time, number]) x_values, y_values = zip(*result) plt.scatter(x_values, y_values, c='red', marker='o',s=5) plt.xlabel('Time') plt.ylabel('Number') plt.title(f'List length: {new_size}\nNumber of points: {new_n}\n\nTime to find number in list') plt.grid(True) plt.show() | 2 | 2 |
79,293,171 | 2024-12-19 | https://stackoverflow.com/questions/79293171/import-of-a-function-from-nested-structure | Consider this directory structure: a-a | b | c | print.py Basically a-a/b/c. and print.py inside that directory. The contents of print.py looks like as : def print_5(): print("5") def print_10(): print("10") I want to import and use these functions into my current file at the level of a-a. Structure : ls a-a test.py How do I do that? Inside test.py, I tried importing all the functions as : from a-a.b.c import f print_5() And it gives error as SyntaxError: invalid syntax that I understand. So, I moved 'a-a' to 'a_a' and it started to give me ModuleNotFoundError: No module named 'a_a'. I know it can be a trivial thing, just not coming in place for me. | Assuming a-a is the root directory, you first need to update to a_a as you have already done. Python modules cannot contain -. You should also add an __init__.py in each directory to mark it as package that can be imported. Your directory structure should look like: a_a/ ├── __init__.py ├── b/ │ ├── __init__.py │ ├── c/ │ ├── __init__.py │ ├── print.py └── test.py You also need to ensure that the parent directory of a_a is in your python path. You can either run your script from the parent directory or update the PYTHONPATH environment variable. Your test.py would import like this: from a_a.b.c.print import print_5, print_10 | 2 | 2 |
79,290,203 | 2024-12-18 | https://stackoverflow.com/questions/79290203/groupby-a-df-column-based-on-more-than-3-columns | I have a df which has 3 columns: Region, Country and AREA_CODE. Region Country AREA_CODE AREA_SUB_CODE_1 AREA_SUB_CODE_2 =========================================================================== AMER US A1 A1_US_1 A1_US_2 AMER CANADA A1 A1_CA_1 A1_CA_2 AMER US B1 B1_US_1 B1_US_2 AMER US A1 A1_US_1 A1_US_2 Is there a way to get both AREA_SUB_CODE_1 and AREA_SUB_CODE_2 as a list under each of the previous column values, something like the below? { "AREA_SUB_CODE_1": { "AMER": { "US": { "A1": ["A1_US_1"], "B1": ["B1_US_1"] }, "CANADA": { "A1": ["A1_CA_1"], } } }, "AREA_SUB_CODE_2": { "AMER": { "US": { "A1": { "A1_US_1": ["A1_US_2"] }, "B1": { "B1_US_1": ["B1_US_2"] }, "CANADA": { "A1": { "A1_CA_1": ["A1_CA_2"], } } } }, } So far I have tried groupby on 3 columns and it works: for (k1, k2), v in df.groupby(['Region', 'Country'])['AREA_CODE']: tTmp.setdefault(k1, {})[k2] = sorted(v.unique()) But when I try to groupby 4 columns, it throws the error too many values to unpack (expected 2) for (k1, k2), v in df.groupby(['Region', 'Country', 'AREA_CODE'])['AREA_SUB_CODE_1']: tTmp.setdefault(k1, {})[k2] = sorted(v.unique()) How to apply groupby for 4 columns and 5 columns? Or any other way to achieve this? | I think we can achieve this with the following recursive function: f = lambda s: ({k: f(s[k]) for k in s.index.levels[0]} if s.index.nlevels > 1 else {k: s.loc[[k]].unique().tolist() for k in s.index.unique()}) Here, s is expected to be a pandas.Series with hierarchical indexing. At each indexing level, we map the keys to the corresponding depth of the resulting dictionary. At the last level, we extract unique values into a list. The double square brackets in s.loc[[k]] ensure the output is a series, the following unique method returns a numpy.ndarray with unique values of the series, and tolist converts the array into a Python list. If we know there's exactly one unique value at the final level, we can simplify the function: f = lambda s: {k: f(s[k]) for k in s.index.levels[0]} \ if s.index.nlevels > 1 \ else s.to_dict() In this case, we skip creating a list at the end. But if needed, we can insert additional mapping like s.map(lambda x: [x]).to_dict(). Before applying any of the functions above, we have to transform the data into a properly indexed series: inner = ['Region', 'Country', 'AREA_CODE'] values = df.melt(inner).set_index(['variable', *inner]).squeeze() Here, 'variable' is the default name for the new column with the rest of column names excluding the inner list after melting. The final answer is f(values). Let's see the example: df = pd.DataFrame({ 'Region': ['AMER', 'AMER', 'AMER', 'AMER'], 'Country': ['US', 'CANADA', 'US', 'US'], 'AREA_CODE': ['A1', 'A1', 'B1', 'A1'], 'AREA_SUB_CODE_1': ['A1_US_1x', 'A1_CA_1', 'B1_US_1', 'A1_US_1y'], 'AREA_SUB_CODE_2': ['A1_US_2', 'A1_CA_2', 'B1_US_2', 'A1_US_2']}) f = lambda s: ({k: f(s[k]) for k in s.index.levels[0]} if s.index.nlevels > 1 else {k: s.loc[[k]].unique().tolist() for k in s.index.unique()}) inner = ['Region', 'Country', 'AREA_CODE'] values = df.melt(inner, var_name='sub_code').set_index(['sub_code', *inner]).squeeze() answer = f(values) Note that in this example, we have 2 different values for the key set ('AREA_SUB_CODE_1', 'AMER', 'US', 'A1') and 2 equal ones for the key set ('AREA_SUB_CODE_2', 'AMER', 'US', 'A1'), so the second case will end up as a list with one value in the final answer: {'AREA_SUB_CODE_1': {'AMER': {'CANADA': {'A1': ['A1_CA_1']}, 'US': {'A1': ['A1_US_1x', 'A1_US_1y'], 'B1': ['B1_US_1']}}}, 'AREA_SUB_CODE_2': {'AMER': {'CANADA': {'A1': ['A1_CA_2']}, 'US': {'A1': ['A1_US_2'], 'B1': ['B1_US_2']}}}} If we drop the last record in the example data, then we can use the alternative function with s.to_dict() at the end. | 2 | 2 |
79,291,770 | 2024-12-18 | https://stackoverflow.com/questions/79291770/fill-pandas-columns-based-on-datetime-condition | Here is the sample code to generate a dataframe. import pandas as pd import numpy as np dates = pd.date_range("20241218", periods=9600,freq='1MIN') df = pd.DataFrame(np.random.randn(9600, 4), index=dates, columns=list("ABCD")) I want to fill all the columns with -1 for time between 1:35 to 1:45 for all the dates. Similarly I want to fill all the columns with -2 for the exact time of 1:00 for all the dates. For all other time values, the columns need to be filled with zeros. Please suggest the way forward. | Try: df.loc[df.between_time('01:35', '01:45').index] = -1 df.loc[df.index.time == pd.Timestamp('01:00').time()] = -2 Output can be verified with the similar: print(df.between_time('1:35', '1:45').head(15) ) print(df.loc[df.index.time == pd.Timestamp('01:00').time()]) | 1 | 2 |
79,291,557 | 2024-12-18 | https://stackoverflow.com/questions/79291557/python-pyproject-toml-arch-dependency-solved-on-install | In pyproject.toml we have a optional-dependencies for a windows package: [project.optional-dependencies] windows = [ "pywinpty>=2.0.14" ] To install: # on windows pip install .[windows] # on linux/mac we use the enclosed pty pip install . Is it possible so pip install . does this check automatic? Or uv pip install . (pywinpty is a rust package) | You can use PEP 496 – Environment Markers and PEP 508 – Dependency specification for Python Software Packages; they're usable in setup.py, setup.cfg, pyproject.toml, requirements.txt. In particular in pyproject.toml: [project] dependencies = [ "pywinpty>=2.0.14; sys_platform == 'win32'" ] | 1 | 3 |
79,289,700 | 2024-12-18 | https://stackoverflow.com/questions/79289700/how-to-add-labels-to-3d-plot | I have the following code which generates a 3D plot. I am trying to label the plot lines, but they are not ending up where I expect them. import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np # Data names = [ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L" ] wins = [ [0, 14, 20, 24, 29, 33, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39], [0, 7, 13, 17, 23, 27, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30], [0, 5, 8, 11, 15, 16, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19], [0, 7, 11, 17, 20, 25, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29], [0, 9, 14, 22, 29, 36, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42], [0, 6, 10, 16, 20, 24, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29], [0, 7, 13, 20, 26, 31, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34], [0, 10, 13, 18, 24, 29, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33], [0, 5, 11, 13, 16, 21, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26], [0, 12, 15, 18, 21, 25, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30], [0, 9, 11, 12, 17, 20, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26], [0, 15, 20, 25, 27, 33, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37, 37], ] weeks = [f"Week{i+1}" for i in range(18)] # X-axis categories (nb of weeks) # Convert names into numerical indices for the Y-axis x = np.arange(len(weeks)) # X-axis y = np.arange(len(names)) # Y-axis wins_array = np.array(wins) # wins accumulated per weeks and per names # Create a 3D plot fig = plt.figure(figsize=(18, 18)) ax = fig.add_subplot(111, projection='3d') # Plot each line and add labels for i, y_week in enumerate(y): ax.plot(x, wins_array[i], zs=y_week, zdir='x', label=weeks[i]) # Add shaded area below the line ax.bar(x, wins_array[i], zs=y_week, zdir='x', alpha=0.1) # Add labels directly on the curves for j, my_label in enumerate(wins_array[i]): ax.text(i, my_label, y_week, f"{my_label}", color="red", fontsize=8, ha="center") # Customize labels ax.set_zlabel('Wins') # Z-axis is wins ax.set_xticks(y) # X-ticks are names ax.set_xticklabels(names, rotation=45) # Name labels on X-axis ax.set_yticks(x) # Y-ticks are weeks ax.set_yticklabels(weeks) # Week labels on Y-axis plt.show() This is the current result: The result I am trying to achieve is something like this: How do I fix my code to achieve my desired result? | It is easier to separate the drawing and the text annotation: # Plot each line for i, y_week in enumerate(y): ax.plot(x, wins_array[i], zs=y_week, zdir='x', label=weeks[i]) # Add shaded area below the line ax.bar(x, wins_array[i], zs=y_week, zdir='x', alpha=0.1) # Add labels directly on the curves for i in y: for j, k in zip(x, wins_array[i]): ax.text(i, j, k, f"{k}", color="red", fontsize=8, ha="center") | 2 | 1 |
79,279,190 | 2024-12-13 | https://stackoverflow.com/questions/79279190/how-to-conditionally-format-data-in-great-tables | I am trying to conditionally format table data using Great Tables but not sure how to do it. I want to highlight all those cells (sort of a heatmap) whose values are higher than the Upper Range column. Data: import polars as pl gt_sample_df = pl.DataFrame({'Test': ['Test A','Test B','Test C','Test D','Test Z','Test E','Test F','Test X', 'Test G','Test H','Test I','Test J'], 'Lower Range': [35.3,2.5,85.0,0.0,None,3.0,200.0,None,3.0,400.0,None,7.0], 'Upper Range': [79.5,3.5,150.0,160.0,160.0,5.0,None,200.0,5.0,1000.0,150.0,30.0], '2024-11-10': [43.0,3.14,135.82,162.7,None,None,206.0,None,4.76,519.52,134.4,26.88], '2024-08-03': [36.0,4.31,152.98,None,175.5,5.94,None,211.0,None,512.08,112.6,22.52], '2024-06-17': [47.0,3.38,158.94,None,182.0,4.87,None,229.0,None,550.24,115.3,23.06], '2024-02-01': [44.0,3.12,136.84,None,154.1,4.51,None,198.0,None,465.04,86.3,17.26], '2023-10-16': [45.0,3.11,140.14,None,162.0,4.6,None,207.0,None,501.44,109.3,21.86], '2023-05-15': [42.0,3.8,159.58,None,192.0,5.57,None,234.0,None,597.68,162.1,32.42]}) gt_sample_df The various date columns in this dataframe gt_sample_df contain the results and I want to compare with the Upper Range and highlight those whose values are higher than the Upper Range column. There can be n number of date columns with any date so I can't use static names for columns. I have tried: from great_tables import GT, md, style, loc, google_font (GT(gt_sample_df) .tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], locations=loc.body(columns=pl.exclude(["Test",'Lower Range','Upper Range']), rows=pl.col(lambda x: x) > pl.col('Upper Range'))) ) from great_tables import GT, md, style, loc, google_font (GT(gt_sample_df) .tab_style(style=[style.text(color="Navy"), style.fill(color="red")], locations=loc.body(columns=[3:], rows=pl.col(lambda x: x) > pl.col('Upper Range'))) ) As I only want to highlight values in the date columns, I was trying to exclude the first 3 columns in the column selection, but it didn't work and I am not sure how to automatically compare values of all other date columns to the Upper Range column. Update: Column selection I am able to do, but I am not able to select the proper rows columns_required = gt_sample_df.select(pl.exclude(["Test",'Lower Range','Upper Range'])).columns (GT(gt_sample_df) .tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], locations=loc.body(columns=columns_required, rows=pl.col(lambda x: x) > pl.col('Upper Range'))) ) import polars.selectors as cs (GT(gt_sample_df) .tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], locations=loc.body(columns=cs.starts_with("20"), rows=pl.col(lambda x: x) > pl.col('Upper Range'))) ) columns_required = gt_sample_df.select(pl.exclude(["Test",'Lower Range','Upper Range'])).columns (GT(gt_sample_df) .tab_style(style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], for col_name in columns_required: locations=loc.body(columns=[col_name], rows=pl.col(col_name) > pl.col('Upper Range'))) ) this also didn't work. Desired Output probably something like this: | Edit: As of the last Great Tables release, this is now more natively supported with a mask argument to loc.body See the PR for further detail. import polars as pl import polars.selectors as cs from great_tables import GT, loc, style # define `gt_sample_df` as per example snippet required_columns = gt_sample_df.drop("Test", "Lower Range", "Upper Range").columns ( GT(gt_sample_df) # `gt_sample_df.style` works here too .tab_style( style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], locations=loc.body(mask=pl.col(required_columns) > pl.col("Upper Range")) # or with a selector instead (not needing the required_columns variable) # locations=loc.body( # mask=cs.exclude("Test", "Lower Range", "Upper Range") > pl.col("Upper Range") # ) ) ) Original answer: You were pretty close in your last snippet! For the locations argument to accept a list, it needs to be done in a list comprehension, or ahead of time, outside the tab_style method call. Link to an example of this in the docs can be found here import polars as pl from great_tables import GT, loc, style # define `gt_sample_df` as per example snippet required_columns = gt_sample_df.drop("Test", "Lower Range", "Upper Range").columns ( GT(gt_sample_df) # `gt_sample_df.style` works here too .tab_style( style=[style.text(color="Navy"), style.fill(color="PaleTurquoise")], locations=[ loc.body(columns=col_name, rows=pl.col(col_name) > pl.col("Upper Range")) for col_name in required_columns ], ) ) Edit: See this answer and this discussion on github for related solutions/discussions. Disclaimer: I am no great tables expert, so it may be possible there is a better way to do this | 3 | 2 |
79,280,500 | 2024-12-14 | https://stackoverflow.com/questions/79280500/how-to-save-the-clicked-map-coordinates-in-a-reactive-variable | I have a Shiny for Python app that I would like to make interactive to allow user to see various statistics depending on where they click on the map. I am using folium to display the map. I can't find a way to return the coordinates of the clicked spot back to shiny for further processing. I found this process relatively easy to implement using ipyleaflet but its dependency on ipywidgets conflicts with plotly which my app heavily depends on to display interactive charts relevant to the place clicked on. Here is a toy example that I would like to use for practice: from shiny import App, render, ui, reactive import folium def server(input, output, session): # Initialize reactive value for coordinates coords = reactive.Value({'lat': None, 'lng': None}) @output @render.ui def map(): m = folium.Map(location=[-1.9403, 29.8739], zoom_start=8) # Define a click event handler here that returns the coordinate to shiny return ui.HTML(m._repr_html_()) @reactive.Effect @reactive.event(input.map_click) def _(): # capture the coordinates returned by the click event handler on the map coords.set() return coords app_ui = ui.page_fluid( ui.h1("Folium Map that returns Coordinates of clicked spot"), ui.div( ui.output_ui("map"), style="height: 600px; width: 100%;" ) ) app = App(app_ui, server) | You can add a folium.MacroElement() to the map which captures the coordinates of the click location and sends them to Shiny. It can use a jinja2.Template() having this content: {% macro script(this, kwargs) %} function getLatLng(e){ var lat = e.latlng.lat.toFixed(6), lng = e.latlng.lng.toFixed(6); parent.Shiny.setInputValue('coords', [lat, lng], {priority: 'event'}); }; {{ this._parent.get_name() }}.on('click', getLatLng); {% endmacro %} What happens here is that we have a function getLatLng() which reads lat and lng and then sends these values to Shiny. Therefore, we use Shiny.setInputValue() (this link refers to an R article on Shiny because I did not find it documented for Python, but it's really similar). Important is the parent prefix because the map is rendered within an <iframe> and we need to refer to the global environment. The values are then received within an reactive.Value, here coords, and can be used for further processing. from shiny import App, render, ui, reactive import folium from jinja2 import Template app_ui = ui.page_fluid( ui.h1("Folium Map that returns Coordinates of clicked spot"), ui.output_code("result"), ui.div( ui.output_ui("map"), style="height: 600px; width: 100%;" ) ) def server(input, output, session): # Initialize reactive value for coordinates coords = reactive.Value({'lat': None, 'lng': None}) @output @render.ui def map(): m = folium.Map(location=[-1.9403, 29.8739], zoom_start=8) el = folium.MacroElement().add_to(m) el._template = Template( """ {% macro script(this, kwargs) %} function getLatLng(e){ var lat = e.latlng.lat.toFixed(6), lng = e.latlng.lng.toFixed(6); parent.Shiny.setInputValue('coords', [lat, lng], {priority: 'event'}); }; {{ this._parent.get_name() }}.on('click', getLatLng); {% endmacro %} """ ) return ui.HTML(m._repr_html_()) # processing the coordinates, e.g. rendering them @render.code @reactive.event(input.coords) def result(): return ( f""" Coordinates of click location: \n Latitude: {input.coords()[0]} \n Longitude: {input.coords()[1]}""" ) app = App(app_ui, server) | 3 | 3 |
79,288,622 | 2024-12-17 | https://stackoverflow.com/questions/79288622/django-vs-code-custom-model-manager-method-typing | I'm trying custom model managers to add annotations to querysets. My problem, which started as a little annoyance but I now realize can be an actual problem, is that VS Code does not recognise the methods defined in the custom model manager/queryset. Example: from django.db import models from rest_framework.generics import ListAPIView # models.py class CarQuerySet(models.QuerySet): def wiht_wheels(self): # NOTE: intentional typo pass # assume this does some annotation class Car(models.Model): objects = CarQuerySet.as_manager() # views.py class ListCarsView(ListAPIView): def get_queryset(self): return Car.objects.wiht_weels() # <--- white instead of yellow At first, I was just annoyed by the fact that wiht_weels is printed in white as opposed to the usual yellow for methods/functions. Then I was more annoyed because this means VS Code will not give me any hints as to what args the method expects or what it returns. Finally, I accidentally made a typo on a name of one of these custom model methods, I hit refactor->rename, but it only renamed it in place, not on the places where it is used (views), probably because VS Code doesn't understand that method is being used anywhere. Is there a solution to this? | Add a few type annotations to your class methods: from django.db import models from rest_framework.generics import ListAPIView # models.py class CarQuerySet(models.QuerySet): def with_wheels(self) -> "CarQuerySet": # quotes here pass class CarManager(models.Manager): def get_queryset(self) -> CarQuerySet: # update the definition here return CarQuerySet(self.model, using=self._db) def with_wheels(self) -> CarQuerySet: # redefine method here, delegating to the queryset return self.get_queryset().with_wheels() class Car(models.Model): objects: CarManager = CarManager() # add a type hint here VS Code will now show you your custom queryset methods anytime you invoke them from the Car model: Car.objects.with... # try this | 1 | 2 |
79,285,419 | 2024-12-16 | https://stackoverflow.com/questions/79285419/deployed-my-fastapi-app-to-azure-but-cannot-access-the-routes | I successfully deployed my FastAPI app to Azure, but when I try to access the routes, it says 404 not found. However, when I tested the same routes locally, they worked. My db is hosted on Azure. I tried adding a startup.sh file with this command: gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app, without any luck. This is my database.py: import pyodbc import logging import sys # Set up logging to output to stdout (which Azure captures) logging.basicConfig(level=logging.INFO, stream=sys.stdout, format='%(asctime)s - %(levelname)s - %(message)s') logger = logging.getLogger(__name__) def get_db_connection(): server = '//' # Azure SQL server name database = '//' # Database name username = '//' # Username password = '//' # Password driver = '{ODBC Driver 18 for SQL Server}' # Driver for SQL Server # Create a connection string connection_string = f'DRIVER={driver};SERVER={server};PORT=1433;DATABASE={database};UID={username};PWD={password}' try: # Log the attempt to connect logger.info("Attempting to connect to the database...") # Establish the connection conn = pyodbc.connect(connection_string) # Log success logger.info("Successfully connected to the database.") return conn except Exception as e: # Log any connection error logger.error(f"Failed to connect to the database: {e}") raise My main.py: from fastapi import FastAPI, Query, HTTPException import pyodbc from database import get_db_connection import traceback import logging # Set up logging logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) app = FastAPI() def get_user_organizational_unit_code() -> str: # This is the hardcoded default organizational unit code return "0000-0010" @app.get("/chargers") async def get_chargers(organizationalUnitCode: str = Query(..., description="The organizational unit code to filter chargers")): """ Fetch chargers based on the provided organizational unit code. """ try: if not organizationalUnitCode: raise HTTPException(status_code=400, detail="No organizationalUnitCode provided.") # Log the received organizational unit code logger.info(f"Received organizationalUnitCode: {organizationalUnitCode}") # Establish the connection to the database conn = get_db_connection() cursor = conn.cursor() # Log the query being executed logger.info(f"Executing query for organizationalUnitCode: {organizationalUnitCode}") # Query chargers based on the organizational unit code cursor.execute(""" SELECT * FROM [dbo].[chargepointStatus] WHERE organizationalUnitCode = ? """, organizationalUnitCode) # Fetch all rows from the query result rows = cursor.fetchall() # Log the number of results found logger.info(f"Query returned {len(rows)} results for organizationalUnitCode: {organizationalUnitCode}") # If no rows are returned, raise a 404 exception if not rows: logger.warning(f"No chargers found for the provided organizational unit code: {organizationalUnitCode}") return {"detail": "No chargers found for the provided organizational unit code."} # Format the result chargers = [{ "ID": row[0], "chargepointID": row[1], "name": row[2], "connector": row[3], "location": row[4], "status": row[5], "statusError": row[6], "statusTime": row[7], "networkStatus": row[8], "networkStatusTime": row[9], "mailContactOffline": row[10], "mailContactStatus": row[11], "mailContactOfflineLate": row[12], "organizationalUnitCode": row[13], "organizationalUnitName": row[14], } for row in rows] # Log the result logger.info(f"Returning {len(chargers)} chargers for organizationalUnitCode: {organizationalUnitCode}") # Close the cursor and the connection cursor.close() conn.close() return {"chargers": chargers} except HTTPException as http_err: # Log HTTP-specific errors logger.error(f"HTTP error occurred: {http_err.detail}") return {"detail": http_err.detail} except Exception as e: # Log the exception for debugging logger.error(f"Error: {e}") logger.error(traceback.format_exc()) return {"detail": "Internal Server Error", "error": str(e)} My yaml file for deployment: name: Build and deploy Python app to Azure Web App - watkanikladenapi on: push: branches: - main workflow_dispatch: jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Set up Python version uses: actions/setup-python@v5 with: python-version: '3.10' - name: Clear pip cache run: pip cache purge - name: Install system dependencies for pyodbc run: sudo apt-get install -y unixodbc-dev g++ python3-dev - name: Create and start virtual environment run: | python -m venv venv source venv/bin/activate - name: Install dependencies run: pip install -r requirements.txt # Optional: Add step to run tests here (PyTest, Django test suites, etc.) - name: Zip artifact for deployment run: zip release.zip ./* -r - name: Upload artifact for deployment jobs uses: actions/upload-artifact@v4 with: name: python-app path: | release.zip !venv/ deploy: runs-on: ubuntu-latest needs: build environment: name: 'Production' url: ${{ steps.deploy-to-webapp.outputs.webapp-url }} permissions: id-token: write # This is required for requesting the JWT steps: - name: Download artifact from build job uses: actions/download-artifact@v4 with: name: python-app - name: Unzip artifact for deployment run: unzip release.zip - name: Login to Azure uses: azure/login@v2 with: client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID= }} tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID }} subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID}} - name: 'Deploy to Azure Web App' uses: azure/webapps-deploy@v3 id: deploy-to-webapp with: app-name: 'watkanikladenapi' slot-name: 'Productio ' and my requirements.txt: fastapi==0.95.0 uvicorn==0.20.0 pyodbc==4.0.34 --find-links https://github.com/mkleehammer/pyodbc/releases gunicorn==20.0.4 | I've successfully deployed your code to Azure Web App and have been able to access the routes. I configured the below startup command in my Azure App Service, and then it worked. gunicorn --worker-class uvicorn.workers.UvicornWorker --timeout 600 --access-logfile '-' --error-logfile '-' main:app My workflow file: name: Build and deploy Python app to Azure Web App - fastapidb on: push: branches: - main workflow_dispatch: jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Set up Python version uses: actions/setup-python@v5 with: python-version: '3.12' - name: Create and start virtual environment run: | python -m venv venv source venv/bin/activate - name: Install dependencies run: pip install -r requirements.txt - name: Zip artifact for deployment run: zip release.zip ./* -r - name: Upload artifact for deployment jobs uses: actions/upload-artifact@v4 with: name: python-app path: | release.zip !venv/ deploy: runs-on: ubuntu-latest needs: build environment: name: 'Production' url: ${{ steps.deploy-to-webapp.outputs.webapp-url }} permissions: id-token: write steps: - name: Download artifact from build job uses: actions/download-artifact@v4 with: name: python-app - name: Unzip artifact for deployment run: unzip release.zip - name: Login to Azure uses: azure/login@v2 with: client-id: ${{ secrets.AZUREAPPSERVICE_CLIENTID_12D6966CF4C5454E9FE78BB4C6996709 }} tenant-id: ${{ secrets.AZUREAPPSERVICE_TENANTID_937C88E501F7462AA806F8E129035DAF }} subscription-id: ${{ secrets.AZUREAPPSERVICE_SUBSCRIPTIONID_F8A2F86EDA3644A9B8C33A1CD25C0D7C }} - name: 'Deploy to Azure Web App' uses: azure/webapps-deploy@v3 id: deploy-to-webapp with: app-name: 'fastapidb' slot-name: 'Production' I've successfully deployed the app to Azure Web App using GitHub actions. Output: | 2 | 1 |
79,289,598 | 2024-12-17 | https://stackoverflow.com/questions/79289598/finding-all-1-d-arrays-within-a-numpy-array | Given a numpy array of dimension n with each direction having length m, I would like to iterate through all 1-dimensional arrays of length m. For example, consider: import numpy as np x = np.identity(4) array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]) then I would like to find all all 1-dimensional arrays with length 4. So the result should include all 4 rows, all 4 columns, and the 2 diagonals: [x[i,:] for i in range(4)] + [x[:,i] for i in range(4)] + [np.array([x[i,i] for i in range(4)])] + [np.array([x[3-i,i] for i in range(4)])] It's unclear to me how to generalize this to higher dimensional arrays since the position of the ":" in the slice needs to iterate as well. In a higher dimensional analogue with slices = [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2], [2, 0], [2, 1], [2, 2]] we can get [x[i,j,:] for (i,j) in slices] but then I'm not sure how to proceed to iterate through the permutations of [i,j,:]. | Although the comprehensions are readable, for large arrays you probably want to use what numpy gives you: import numpy as np n = 4 # random n*n numpy array arr = np.random.rand(n, n) print(arr) # this has all the data you need, relatively efficiently - but not in 1D shape result = ( np.vsplit(arr, n) + np.hsplit(arr, n) + [arr.diagonal()] + [arr[np.arange(n), np.arange(n - 1, -1, -1)]] ) # you can just flatten them as you used them: for xs in result: print(xs.ravel()) # or flatten them into a new result: result_1d = [xs.ravel() for xs in result] Edit: user @Matt correctly pointed out in the comments that this solution only works for the case of a 2-dimensional array. Things get a bit more complicated for an arbitrary number of dimensions n with size m across all dimensions. This works, but given the complexity, can probably be improved upon for simplicity: import numpy as np import itertools as it # random n*n numpy array m = 2 n = 3 arr = np.random.rand(*([m] * n)) print(arr) def get_all_lines(arr): ndim = arr.ndim size = arr.shape[0] # assuming the same size for all dimensions # generate each 1d slice along and across each axis for fixed in it.product(range(size), repeat=ndim - 1): for axis in range(ndim): yield arr[fixed[:axis] + (slice(None, ),) + fixed[axis:]] # generate each 1d diagonal for each combination of axes for d_dim in range(2, ndim+1): # d_dim is the number of varying axes for fixed in it.product(range(size), repeat=(ndim - d_dim)): # fixed indices for the other axes of, od = 0, 0 # offsets for accessing fixed values and directions # each varying axis can be traversed in one of two directions for d_tail in it.product((0, 1), repeat=d_dim - 1): # dir is the direction for each varying axis d = (1, *d_tail)[::-1] # deduplicate and reverse the direction for axes in it.combinations(range(ndim), d_dim): # axes to vary fm = d_dim * (d_dim + 1) // 2 - sum(axes) # first dimension with a fixed index dm = min(axes) # first dimension with a varying index yield [ arr[*[fixed[of := 0 if j == fm else of + 1] if j not in axes else (i if d[od := 0 if j == dm else od + 1] else size - (i + 1)) for j in range(ndim)]] for i in range(size) ] lines = get_all_lines(arr) for line in lines: print(line) The mentioned "deduplication" avoids including each diagonal twice (once in both directions). Also note that this yields 1d arrays as well as lists of numbers, you can of course cast these appropriately. | 2 | 1 |
79,288,467 | 2024-12-17 | https://stackoverflow.com/questions/79288467/python-selenium-impossible-to-close-a-frame-using-xpath-or-class-name | I'm trying to close a frame on this page. What I want is to click in here: It seems to be easy, but so far the following code (which should work) has failed: import selenium.webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = selenium.webdriver.Chrome() driver.maximize_window() driver.get('https://www.bvc.com.co/variable-income-local-market/cemargos?tab=issuer-information') #X(close) bvc frame xpath = '//*[@id="__next"]/div/div[1]/div/div[1]/div/div/div/div/div[3]/div/div/div/div[3]/div[2]/span' class_name = 'sc-843139d2-14 iVPGqd' # Trying with XPath if 1: try: WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, xpath))).click() except: driver.find_element(By.XPATH, xpath).click() # Trying with class_name if 1: try: WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CLASS_NAME, class_name))).click() except: driver.find_element(By.CLASS_NAME, class_name).click() The output using XPath: raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException: Message: Stacktrace: #0 0x64a95375031a <unknown> ... The output using class_name: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".sc-843139d2-14 iVPGqd"} | when you do driver.find_element(By.CLASS_NAME, 'sc-843139d2-14 iVPGqd') selenium translates (poorly, by adding one dot) your class name into a selector, so it becomes driver.find_element(By.CSS_SELECTOR, '.sc-843139d2-14 iVPGqd') which means: find a iVPGqd tag (yes tag) contained within something with a class sc-843139d2-14. you don't want that. if you want to specify two classes for the same element you must collate them with no spaces between and use the proper locator CSS_SELECTOR driver.find_element(By.CSS_SELECTOR, '.sc-843139d2-14.iVPGqd') | 1 | 1 |
79,275,886 | 2024-12-12 | https://stackoverflow.com/questions/79275886/speed-up-numpy-looking-for-best-indices | I have a numpy array that maps x-y-coordinates to the appropriate z-coordinates. For this I use a 2D array that represents x and y as its axes and contains the corresponding z values: import numpy as np x_size = 2000 y_size = 2500 z_size = 400 rng = np.random.default_rng(123) z_coordinates = np.linspace(0, z_size, y_size) + rng.laplace(0, 1, (x_size, y_size)) So each of the 2000*2500 x-y-points is assigned a z-value (float between 0 and 400). Now I want to look up for each integer z and integer x which is the closest y-value, essentially creating a map that is of shape (x_size, z_size) and holds the best y-values. The simplest approach is creating an empty array of target shape and iterating over each z value: y_coordinates = np.empty((x_size, z_size), dtype=np.uint16) for i in range(z_size): y_coordinates[:, i] = np.argmin( np.abs(z_coordinates - i), axis=1, ) however this takes about 11 s on my machine, which unfortunately is way too slow. Surely using a more vectorised approach would be faster, such as: y_coordinates = np.argmin( np.abs( z_coordinates[..., np.newaxis] - np.arange(z_size) ), axis=1, ) Surprisingly this runs about 60% slower than the version above (tested at 1/10th size, since at full size this uses excessive memory). Also wrapping the code blocks in functions and decorating them with numba's @jit(nopython=True) doesn't help. How can I speed up the calculation? | This answer provides an algorithm with an optimal complexity: O(x_size * (y_size + z_size)). This algorithm is the fastest one proposed so far (by a large margin). It is implemented in Numba using multiple threads. Explanation of the approach The idea is that there is no need to iterate over all Z values: we can iterate over z_coordinates line by line, and for each line of z_coordinates, we fill an array used to find the nearest value for each possible z. The best candidate for the value z is stored in arr[z]. In practice, there are tricky corner cases making things a bit more complicated. For example, due to rounding, I decided to fill the neighbours of arr[z] (i.e. arr[z-1] and arr[z+1]) so to make the algorithm simpler. Moreover, when there are not enough values so arr cannot be fully filled by all the values in a line of z_coordinates, we need to fill the holes in arr. In some more complicated cases (combining rounding issues with holes in arr), we need to correct the values in arr (or operate on more distant neighbours which is not efficient). The number of steps in the correction function should always be a small constant, certainly <= 3 (it never reached 3 in practice in my tests). Note that, in practice, no corner case happens on the specific input dataset provided. Each line is computed in parallel using multiple threads. I assume the array is not too small (to avoid dealing with more corner cases in the code and keep it simpler) which should not be an issue. I also assume there are no special values like NaN in z_coordinates. Resulting code Here is the final code: import numba as nb import numpy as np # Fill the missing values in the value-array if there are not enough values (e.g. pretty large z_size) # (untested) @nb.njit('(float64[::1], uint16[::1], int64)') def fill_missing_values(all_val, all_pos, z_size): i = 0 while i < z_size: # If there is a missing value if all_pos[i] == 0xFFFF: j = i while j < z_size and all_pos[j] == 0xFFFF: j += 1 if i == 0: # Fill the hole based on 1 value (lower bound) assert j+1 < z_size and all_pos[j] == 0xFFFF and all_pos[j] != 0xFFFF for i2 in range(i, j): all_val[i2] = all_val[j+1] all_pos[i2] = all_pos[j+1] elif j == z_size: # Fill the hole based on 1 value (upper bound) assert i-1 >= 0 and all_pos[i-1] != 0xFFFF and all_pos[i] == 0xFFFF for i2 in range(i, j): all_val[i2] = all_val[i-1] all_pos[i2] = all_pos[i-1] else: assert i-1 >= 0 and j < z_size and all_pos[i-1] != 0xFFFF and all_pos[j] != 0xFFFF lower_val = all_val[i-1] lower_pos = all_pos[i-1] upper_val = all_val[j] upper_pos = all_pos[j] # Fill the hole based on 2 values for i2 in range(i, j): if np.abs(lower_val - i2) < np.abs(upper_val - i2): all_val[i2] = lower_val all_pos[i2] = lower_pos else: all_val[i2] = upper_val all_pos[i2] = upper_pos i = j i += 1 # Correct values in very pathological cases where z_size is big so there are not enough # values added to the value-array causing some values of the value-array to be incorrect. # The number of `while` iteration should be always <= 3 in practice @nb.njit('(float64[::1], uint16[::1], int64)') def correct_values(all_val, all_pos, z_size): while True: stop = True for i in range(0, z_size-1): current = np.abs(all_val[i] - i) if np.abs(all_val[i+1] - i) < current: all_val[i] = all_val[i+1] all_pos[i] = all_pos[i+1] stop = False for i in range(1, z_size): current = np.abs(all_val[i] - i) if np.abs(all_val[i-1] - i) < current: all_val[i] = all_val[i-1] all_pos[i] = all_pos[i-1] stop = False if stop: break @nb.njit('(float64[:,::1], int64)', parallel=True) def compute_fastest(z_coordinates, z_size): x_size, y_size = z_coordinates.shape assert y_size >= 2 and z_size >= 2 y_coordinates = np.empty((x_size, z_size), dtype=np.uint16) for x in nb.prange(x_size): all_pos = np.full(z_size, 0xFFFF, dtype=np.uint16) all_val = np.full(z_size, np.inf, dtype=np.float64) for y in range(0, y_size): val = z_coordinates[x, y] #assert not np.isnan(val) if val < 0: # Lower bound i = 0 if np.abs(val - i) < np.abs(all_val[i] - i): all_val[i] = val all_pos[i] = y elif val >= z_size: # Upper bound i = z_size - 1 if np.abs(val - i) < np.abs(all_val[i] - i): all_val[i] = val all_pos[i] = y else: # Inside the array of values offset = np.int32(val) for i in range(max(offset-1, 0), min(offset+2, z_size)): if np.abs(val - i) < np.abs(all_val[i] - i): all_val[i] = val all_pos[i] = y fill_missing_values(all_val, all_pos, z_size) correct_values(all_val, all_pos, z_size) for i in range(0, z_size): y_coordinates[x, i] = all_pos[i] return y_coordinates Performance results Here are performance results on my machine with an i5-9600KF CPU (6 cores), Numpy 1.24.3, Numba 58.1, on Windows, for the provided input: Naive fully vectorized code in the question: 113000 ms (slow due to swapping) Naive loop in the question: 8460 ms ZLi's implementation: 1964 ms Naive Numba parallel code with loops: 402 ms PaulS' implementation: 262 ms This Numba code: 12 ms <---------- Note the fully-vectorized code in the question uses so much memory it causes memory swapping. It completely saturates my 32 GiB of RAM (about 24 GiB was available in practice) which is clearly not reasonable! Note that PaulS' implementation is about equally fast with 32-bit and 64-bit on my machine. This is probably because the operation is compute-bound on my machine (dependent on the speed of the RAM). This Numba implementation is 705 times faster than the fastest implementation in the question. It is also 22 times faster than the best answer so far! It also uses a tiny amount of additional RAM for the computation (<1 MiB). | 8 | 3 |
79,289,546 | 2024-12-17 | https://stackoverflow.com/questions/79289546/type-hints-lost-when-a-decorator-is-wrapped-as-a-classmethod | Consider the following code: from typing import Any, Callable, Coroutine class Cache[**P, R]: @classmethod def decorate(cls, **params): def decorator(f: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]]: return f # in the real world, we instantiate a Cache here return decorator @Cache.decorate() async def some_function(i: int) -> int: return i + 1 cached_function = Cache.decorate()(some_function) If I ask pyright the type of Cache.decorate before the @classmethod wrapper (inspecting the word decorate in the above code), it returns: (method) def decorate( cls: type[Self@Cache[P@Cache, R@Cache]], **params: Unknown ) -> ((f: ((**P@Cache) -> (Coroutine[Any, Any, R@Cache])) -> ((**P@Cache) -> Coroutine[Any, Any, R@Cache])) That looks to me like it understands that P (the argument types) and R (the return types) are plumbed through correctly. However, if I ask it to introspect Cache.decorate in the context where it's being used as a decorator, it returns: (method) def decorate(**params: Unknown) -> ((f: ((...) -> Coroutine[Any, Any, Unknown])) -> ((...) -> Coroutine[Any, Any, Unknown])) ...which is to say, the relationship between input types and output types has been entirely discarded! | decorator depends on two contextual type variables: P and R. In the context of the Cache class, these are not known, but assumed to be known by Pyright at call time. However, when Cache.decorate() is called, Pyright does not get enough information to resolve P and R (no explicit type arguments and no arguments), so these two are resolved as Unknown. A simple fix is to parametrize Cache explicitly: (playground) @Cache[[int], int].decorate() async def some_function(i: int) -> int: return i + 1 reveal_type(some_function) # (int) -> Coroutine[Any, Any, int] However, this does not get you to the root of the problem, which is that you are not correctly specifying what you want. Cache is generic over P and R, so there must exist a way in which Pyright can infer the types correspond to those parameters. Normally, these are handled by passing arguments to the function creating that class, commonly __new__/__init__ and factory @classmethods. # Hypothetical usages Cache(return_value, *args, **kwargs) Cache.factory(return_value, *args, **kwargs) # This is also possible, as shown, but rarer Cache[[int], int]() decorate() is a factory method, but it does not create instances of Cache on its own. It takes some arguments, but these do not affect the type of the decorated some_function() nor the type of the to-be-created Cache. Instead, decorate() is meant to create decorators that themselves create instances of Cache. It is these decorators that you want to parametrize, because they are the ones receiving the decorated functions and responsible for creating Caches. (playground) class Cache[**P, R]: @classmethod def decorate(cls, **params: Any): def decorator[**P2, R2](f: Callable[P2, Coroutine[Any, Any, R2]]) -> ...: # ^^^^^^^^^^ return f return decorator reveal_type(Cache.decorate()) # (f: (**P2@decorator) -> Coroutine[Any, Any, R2@decorator]) -> ((**P2@decorator) -> Coroutine[Any, Any, R2@decorator]) reveal_type(some_function) # (i: int) -> Coroutine[Any, Any, int] | 1 | 2 |
79,289,373 | 2024-12-17 | https://stackoverflow.com/questions/79289373/are-there-alternatives-to-a-for-loop-when-parsing-free-text-in-python-pyspark | I have to read in data in Databricks Python/PySpark but the format is not the usual CSV or JSON so I have to iterate over a for loop. As a result it's very slow. The data looks like this, for millions of rows. It's not the same format each row, although there are certain common formats: HEADER0123 a bunch of spaces ACCTNUM999787666 more numbers ABC2XYZ some text So to parse I read from s3 as text and get the data by character position: raw_text = (spark.read .format("text") .option("mode", "PERMISSIVE") .option("header", "false") .option("inferSchema","false") .load(my_path)) my_list = [] input = raw_text.collect() for row in input: line = row[0].strip() header = line[0:6] acct = line[6:9] my_list.append(header, acct) etc. Then later I create dataframes: df = spark.createDataFrame(my_list, "header string, acct int") Even though I have experience with Spark dataframes this is the only way I can think of due to the unusual format. Is there a way to leverage Spark to process this kind of data? Or a way that doesn't require a for loop? | you're looking for substring() my_list = [] input = raw_text.collect() for row in input: line = row[0].strip() header = line[0:6] acct = line[6:9] my_list.append(header, acct) df = spark.createDataFrame(my_list, "header string, acct int") is same as df = ( raw_text .withColumn('header', F.substring('value', 0, 6)) .withColumn('acct', F.substring('value', 7, 3)) .drop('value') ) Also note that if each line in your input file is fixed length and has header and account fields separated by space then you can still read it as a csv. spark.read.option("delimiter", " ").csv(file) # or spark.read.csv(file, sep=' ') | 1 | 3 |
79,277,656 | 2024-12-13 | https://stackoverflow.com/questions/79277656/error-post-got-an-unexpected-keyword-argument-proxies | I'm using youtube-search-python to get the URLs of an array with song names but this error keeps popping up: post() got an unexpected keyword argument 'proxies' I'm new to Python and I've been looking around but I couldn't find anything useful for fixing this error (at least that I understood). This is the code that is throwing the error: elif "open.spotify.com" == url_procesed.hostname: try: playlist = Playlist(client_id, client_secret) playlist_tracks = playlist.get_playlist_tracks(url) link_list = playlist.search_for_songs(playlist_tracks) print(link_list) Here's the error stack trace: Traceback (most recent call last): File "G:\Bot_v1.1.2\spoty_handler.py", line 69, in <module> playlist.search_for_songs(song_array) File "G:\Bot_v1.1.2\spoty_handler.py", line 48, in search_for_songs search = VideosSearch(song, limit=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\TarjetaCiudadana\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\youtubesearchpython\search.py", line 148, in __init__ self.sync_create() File "C:\Users\TarjetaCiudadana\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\youtubesearchpython\core\search.py", line 29, in sync_create self._makeRequest() File "C:\Users\TarjetaCiudadana\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\youtubesearchpython\core\search.py", line 51, in _makeRequest request = self.syncPostRequest() ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\TarjetaCiudadana\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\youtubesearchpython\core\requests.py", line 20, in syncPostRequest return httpx.post( ^^^^^^^^^^^ TypeError: post() got an unexpected keyword argument 'proxies' The error must be in playlist.search_for_songs because it doesn't print anything. Here is the code for all of it: import os import spotipy from spotipy.oauth2 import SpotifyClientCredentials from pytube import YouTube from youtubesearchpython import VideosSearch from dotenv import load_dotenv class Playlist: def __init__(self, client_id, client_secret): self.client_id = client_id self.client_secret = client_secret load_dotenv() def get_playlist_tracks(self, playlist_url): auth_manager = SpotifyClientCredentials(client_id=self.client_id, client_secret=self.client_secret) sp = spotipy.Spotify(auth_manager=auth_manager) results = sp.playlist_tracks(playlist_url) artists_songs_dict = [] for item in results['items']: track = item['track'] song_name = track['name'] #artist_names = [artist['name'] for artist in track['artists']] artists_songs_dict.append(song_name) return artists_songs_dict def search_for_songs(self, song_dict): link_list = [] for songs in song_dict: for song in songs: search = VideosSearch(song, limit=1) results = search.result() link = results['result'][0]['link'] link_list.append(link) return link_list TOKEN_1 = os.getenv("MY_TOKEN_1") TOKEN_2= os.getenv("MY_TOKEN_2") playlist= Playlist(client_id, client_secret) song_array= playlist.get_playlist_tracks("https://open.spotify.com/playlist/0drb98YI5Kk0ENtWXIS67y?si=bEHhpXlUThSyKkxNxkMCeg") playlist.search_for_songs(song_array) | I believe you are using httpx version 0.28.0 or above. In this version, the post method really doesn't have the proxies parameter declared. Compare this: >>> httpx.__version__ '0.27.2' >>> 'proxies' in inspect.getargs(httpx.post.__code__).args True versus >>> httpx.__version__ '0.28.1' >>> 'proxies' in inspect.getargs(httpx.post.__code__).args False You might want to consider downgrading httpx to version 0.27 if possible, like so: pip install --force-reinstall 'httpx<0.28' I think a reasonable approach would be to try it in a fresh, separate environment first. | 3 | 6 |
79,281,902 | 2024-12-15 | https://stackoverflow.com/questions/79281902/regex-to-match-a-whole-number-not-ending-in-some-digits | I've not been able to construct a pattern which can return an whole numbers that don't end in a sequence of digits. The numbers could be of any length, even single digits, but they will always be whole. Additionally, multiple numbers could be on the same line of text and I want to match them all. The numbers are always followed by either a single space or the end of the line or the end of the text. I'm matching in python 3.12 For example, over the text '12345 67890 123175 9876', let's say I want to get all numbers not ending in 175. I would want the following matches: 12345 67890 9876 I've tried using the following: \d+(?<!175)(\b|$), which matched 3 empty strings, text = "12345 67890 123175 9876" matches = findall(r"\d+(?<!175)(\b|$)", text) print(matches) > ['', '', ''] \d+(?!175)(\b|$), which matched 4 empty strings, text = "12345 67890 123175 9876" matches = findall(r"\d+(?!175)(\b|$)", text) print(matches) > ['', '', '', ''] \d+(?<!175), which matched all 4 numbers matches = findall(r"\d+(?<!175)", text) > ['12345', '67890', '12317', '9876'] \d+(?:175), which matched only the number ending in 175 matches = findall(r"\d+(?:175)", text) > ['123175'] | You can use is a negative lookbehind .*(?<!a) that ensures the string does not end with a. \d++(?<!175) Test here. Note that Possessive Quantifier (++) has been introduced in Python 3.11. Your 2nd approach from revision 1 was close, but not correct since the Greedy quantifier (+) would eat up all the digits, and then try to backtrack. | 1 | 3 |
79,287,522 | 2024-12-17 | https://stackoverflow.com/questions/79287522/compute-percentage-of-positive-rows-in-a-group-by-polars-dataframe | I need to compute the percentage of positive values in the value column grouped by the group column. import polars as pl df = pl.DataFrame( { "group": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"], "value": [2, -1, 3, 1, -2, 1, 2, -1, 3, 2], } ) shape: (10, 2) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════╪═══════╡ │ A ┆ 2 │ │ A ┆ -1 │ │ A ┆ 3 │ │ A ┆ 1 │ │ A ┆ -2 │ │ B ┆ 1 │ │ B ┆ 2 │ │ B ┆ -1 │ │ B ┆ 3 │ │ B ┆ 2 │ └───────┴───────┘ In group A there are 3 out of 5 positive values (60%), while in column B there are 4 out 5 positive values (80%). Here's the expected dataframe. ┌────────┬──────────────────┐ │ group ┆ positive_percent │ │ --- ┆ --- │ │ str ┆ f64 │ ╞════════╪══════════════════╡ │ A ┆ 0.6 │ │ B ┆ 0.8 │ └────────┴──────────────────┘ | You could use a custom group_by.agg with Expr.ge and Expr.mean. This will convert the values to False/True depending on the sign, then compute the proportion of True by taking the mean: df.group_by('group').agg(positive_percent=pl.col('value').ge(0).mean()) Output: ┌───────┬──────────────────┐ │ group ┆ positive_percent │ │ --- ┆ --- │ │ str ┆ f64 │ ╞═══════╪══════════════════╡ │ A ┆ 0.6 │ │ B ┆ 0.8 │ └───────┴──────────────────┘ Intermediates: ┌───────┬───────┬───────┬──────┐ │ group ┆ value ┆ ge(0) ┆ mean │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ bool ┆ f64 │ ╞═══════╪═══════╪═══════╪══════╡ │ A ┆ 2 ┆ true ┆ 0.6 │ # │ A ┆ -1 ┆ false ┆ 0.6 │ # group A │ A ┆ 3 ┆ true ┆ 0.6 │ # (True+False+True+True+False)/5 │ A ┆ 1 ┆ true ┆ 0.6 │ # = 3/5 = 0.6 │ A ┆ -2 ┆ false ┆ 0.6 │ # │ B ┆ 1 ┆ true ┆ 0.8 │ │ B ┆ 2 ┆ true ┆ 0.8 │ │ B ┆ -1 ┆ false ┆ 0.8 │ │ B ┆ 3 ┆ true ┆ 0.8 │ │ B ┆ 2 ┆ true ┆ 0.8 │ └───────┴───────┴───────┴──────┘ | 2 | 3 |
79,286,464 | 2024-12-17 | https://stackoverflow.com/questions/79286464/uv-python-packing-how-to-set-environment-variables-in-virtual-envrionments | How do I set environment variables in virtual environment creating by UV? I try setting it in .venv/scripts/activate_this.py and it doesn't work. | You can tell the uv run command to load environment variables from a file, either by using the --env-file option: uv run --env-file=.env myscript.py Or by setting the UV_ENV_FILE environment variable: export UV_ENV_FILE=.env uv run myscript.py You will find more details in the documentation. | 1 | 3 |
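A minimal sketch of how the pieces fit together (the file contents and variable name are made up):

# .env
MY_SETTING=hello

# myscript.py
import os
print(os.environ["MY_SETTING"])

Running uv run --env-file=.env myscript.py should then print hello, since uv exports the variables before the interpreter starts.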
79,285,068 | 2024-12-16 | https://stackoverflow.com/questions/79285068/setting-slice-of-column-to-list-of-values-on-polars-dataframe | In the code below I'm creating a polars- and a pandas dataframe with identical data. I want to select a set of rows based on a condition on column A, then update the corresponding rows for column C. I've included how I would do this with the pandas dataframe, but I'm coming up short on how to get this working with polars. The closest I've gotten is by using when-then-otherwise, but I'm unable to use anything other than a single value in then. import pandas as pd import polars as pl df_pd = pd.DataFrame({'A': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y'], 'B': [1, 1, 2, 2, 1, 1, 2, 2], 'C': [1, 2, 3, 4, 5, 6, 7, 8]}) df_pl = pl.DataFrame({'A': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y'], 'B': [1, 1, 2, 2, 1, 1, 2, 2], 'C': [1, 2, 3, 4, 5, 6, 7, 8]}) df_pd.loc[df_pd['A'] == 'x', 'C'] = [-1, -2, -3, -4] df_pl ??? | If you wrap the values in a pl.lit Series, you can index the values with Expr.get values = pl.lit(pl.Series([-1, -2, -3, -4])) idxs = pl.when(pl.col.A == 'x').then(1).cum_sum() - 1 df.with_columns(C = pl.coalesce(values.get(idxs), 'C')) shape: (8, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ x ┆ 1 ┆ -1 │ │ x ┆ 1 ┆ -2 │ │ x ┆ 2 ┆ -3 │ │ x ┆ 2 ┆ -4 │ │ y ┆ 1 ┆ 5 │ │ y ┆ 1 ┆ 6 │ │ y ┆ 2 ┆ 7 │ │ y ┆ 2 ┆ 8 │ └─────┴─────┴─────┘ These are the steps expanded. The indices are created, used to .get() and .coalesce() combines in the values from the other column. df.with_columns( idxs = idxs, values = values.get(idxs), D = pl.coalesce(values.get(idxs), 'C') ) shape: (8, 6) ┌─────┬─────┬─────┬──────┬────────┬─────┐ │ A ┆ B ┆ C ┆ idxs ┆ values ┆ D │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i32 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪══════╪════════╪═════╡ │ x ┆ 1 ┆ 1 ┆ 0 ┆ -1 ┆ -1 │ │ x ┆ 1 ┆ 2 ┆ 1 ┆ -2 ┆ -2 │ │ x ┆ 2 ┆ 3 ┆ 2 ┆ -3 ┆ -3 │ │ x ┆ 2 ┆ 4 ┆ 3 ┆ -4 ┆ -4 │ │ y ┆ 1 ┆ 5 ┆ null ┆ null ┆ 5 │ │ y ┆ 1 ┆ 6 ┆ null ┆ null ┆ 6 │ │ y ┆ 2 ┆ 7 ┆ null ┆ null ┆ 7 │ │ y ┆ 2 ┆ 8 ┆ null ┆ null ┆ 8 │ └─────┴─────┴─────┴──────┴────────┴─────┘ | 3 | 3 |
79,279,855 | 2024-12-14 | https://stackoverflow.com/questions/79279855/have-numpy-concatenate-return-proper-subclass-rather-than-plain-ndarray | I have a numpy array subclass, and I'd like to be able to concatenate them. import numpy as np class BreakfastArray(np.ndarray): def __new__(cls, n=1): dtypes=[("waffles", int), ("eggs", int)] obj = np.zeros(n, dtype=dtypes).view(cls) return obj b1 = BreakfastArray(n=1) b2 = BreakfastArray(n=2) con_b1b2 = np.concatenate([b1, b2]) print(b1.__class__, con_b1b2.__class__) this outputs <class '__main__.BreakfastArray'> <class 'numpy.ndarray'>, but I'd like the concatenated array to also be a BreakfastArray class. It looks like I probably need to add a __array_finalize__ method, but I can't figure out the right way to do it. | Expanding simon's solution, this is what I settled on so other numpy functions fall-back to standard ndarray (so, numpy.unique(b2["waffles"]) works as expected). Also a slight change to concatenate so it will work for any subclasses as well. import numpy as np HANDLED_FUNCTIONS = {} class BreakfastArray(np.ndarray): def __new__(cls, *args, n=1, **kwargs): dtypes=[("waffles", int), ("eggs", int)] obj = np.zeros(n, dtype=dtypes).view(cls) return obj def __array_function__(self, func, types, args, kwargs): # If we want "standard numpy behavior", # convert any BreakfastArray to ndarray views if func not in HANDLED_FUNCTIONS: new_args = [] for arg in args: if issubclass(arg.__class__, BreakfastArray): new_args.append(arg.view(np.ndarray)) else: new_args.append(arg) return func(*new_args, **kwargs) if not all(issubclass(t, BreakfastArray) for t in types): return NotImplemented return HANDLED_FUNCTIONS[func](*args, **kwargs) def implements(numpy_function): def decorator(func): HANDLED_FUNCTIONS[numpy_function] = func return func return decorator @implements(np.concatenate) def concatenate(arrays): result = arrays[0].__class__(n=sum(len(a) for a in arrays)) return np.concatenate([np.asarray(a) for a in arrays], out=result) | 1 | 0 |
79,285,449 | 2024-12-16 | https://stackoverflow.com/questions/79285449/find-average-rate-per-group-in-specific-years-using-groupby-transform | I'm trying to find a better/faster way to do this. I have a rather large dataset (~200M rows) with individual dates per row. I want to find the average yearly rate per group from 2018 to 2019. I know I could create a small df with the results and merge it back in but, I was trying to find a way to use transform. Not sure if it would just be faster to merge. Extra points for one-liners. Sample data rng = np.random.default_rng(seed=123) df = pd.DataFrame({'group':rng.choice(list('ABCD'), 100), 'date':[(pd.to_datetime('2018')+pd.Timedelta(days=x)).normalize() for x in rng.integers(0, 365*5, 100)], 'foo':rng.integers(1, 100, 100), 'bar':rng.integers(50, 200, 100)}) df['year'] = df['date'].dt.year This works #find average 2018 and 2019 'foo' and 'bar' for col in ['foo', 'bar']: for y in [2018, 2019]: df[col+'_'+str(y)+'_total'] = df.groupby('group')['year'].transform(lambda x: df.loc[x.where(x==y).dropna().index, col].sum()) #find 2018 and 2019 rates for y in [2018, 2019]: df['rate_'+str(y)] = df['foo_'+str(y)+'_total'].div(df['bar_'+str(y)+'_total']) #find average rate df['2018_2019_avg_rate'] = df[['rate_2018', 'rate_2019']].mean(axis=1) Thing's I've tried that don't quite work (I'm using apply to test if it works before switching to transform) #gives yearly totals for each year and each column, but further 'apply'ing to find rates then averaging isn't working after I switch to transform df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[x.where(x.between(2018, 2019)).dropna().index, ['foo', 'bar']].sum()) #close but is averaging too early df.groupby(['group', 'year'])['year'].apply(lambda x: df.loc[i, 'foo'].sum()/denom if (denom:=df.loc[i:=x.where(x.between(2018, 2019)).dropna().index, 'bar'].sum())>0 else np.nan) | You can't perform multiple filtering/aggregations efficiently with a groupby.transform. You will have to loop. A more efficient approach would be to combine a pivot_table + merge: cols = ['foo', 'bar'] years = [2018, 2019] tmp = (df[df['year'].isin(years)] .pivot_table(index='group', columns='year', values=cols, aggfunc='sum') [cols] .pipe(lambda x: x.join(pd.concat({'rate': x['foo'].div(x['bar'])}, axis=1))) ) avg_rate = tmp['rate'].mean(axis=1) tmp.columns = tmp.columns.map(lambda x: f'{x[0]}_{x[1]}_total') tmp[f'{"_".join(map(str, years))}_avg_rate'] = avg_rate out = df.merge(tmp, left_on='group', right_index=True) Output: group date foo bar year foo_2018_total foo_2019_total bar_2018_total bar_2019_total rate_2018_total rate_2019_total 2018_2019_avg_rate 0 A 2022-03-11 59 91 2022 343 270 972 875 0.352881 0.308571 0.330726 1 C 2018-08-22 56 52 2018 175 325 331 902 0.528701 0.360310 0.444506 2 C 2019-04-24 47 89 2019 175 325 331 902 0.528701 0.360310 0.444506 3 A 2019-04-16 43 102 2019 343 270 972 875 0.352881 0.308571 0.330726 4 D 2019-11-25 3 56 2019 126 222 224 696 0.562500 0.318966 0.440733 5 A 2018-01-06 86 148 2018 343 270 972 875 0.352881 0.308571 0.330726 ... 99 B 2018-02-25 32 90 2018 253 204 703 400 0.359886 0.510000 0.434943 | 2 | 2 |
79,284,760 | 2024-12-16 | https://stackoverflow.com/questions/79284760/pymongo-async-client-not-raising-exception-when-connection-fails | It seems that a pymongo 4.10 async client does not raise an exception when there is a problem with the connection. Taken from the doc, a test without any mongo DB running locally yields: >>> import asyncio >>> from pymongo import AsyncMongoClient >>> client = AsyncMongoClient('mongodb://localhost:27017/') >>> asyncio.run(client.aconnect()) # no errors When activating debug logs I see the connection being refused, but I would expect an exception to be raised. >>> import logging >>> logging.basicConfig(level='DEBUG') >>> asyncio.run(client.aconnect()) DEBUG:asyncio:Using selector: KqueueSelector DEBUG:pymongo.topology:{"topologyId": {"$oid": "676020be62e71d3fe6f27721"}, "serverHost": "localhost", "serverPort": 27017, "awaited": false, "durationMS": 2.786167000522255, "failure": "\"AutoReconnect('localhost:27017: [Errno 61] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')\"", "message": "Server heartbeat failed"} I would expect the DEBUG log error to be an exception. Am I misunderstanding something with the async client? | The mongo client uses connection pools etc. in the background; even though you tell it to explicitly connect (why?), it doesn't raise an exception for failing to connect until you actually try to read or write from/to the DB. But you can check if/where it's connected: >>> list(client.nodes) [('10.0.0.1', 27017)] The result will be an empty list if aconnect fails. But if you try any communication such as: >>> await client.server_info() ... you will get an exception: pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 1.9985503089847043s, Topology Description: <TopologyDescription id: 6760281eda9a9980ea35e425, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection refused (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]> The pymongo async driver is just built like this... why the tutorial tells you to use aconnect(), I have no idea. I didn't even know it existed. Btw, you can use python -m asyncio to get a REPL where you can run async commands without asyncio.run() | 1 | 2 |
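If you want to fail fast at startup, a small sketch along these lines forces a round trip and surfaces the failure immediately (serverSelectionTimeoutMS just shortens the wait; adjust to taste):

import asyncio
from pymongo import AsyncMongoClient
from pymongo.errors import ServerSelectionTimeoutError

async def check_connection():
    client = AsyncMongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=2000)
    try:
        await client.server_info()  # any real command works; this one is cheap
        print("connected:", list(client.nodes))
    except ServerSelectionTimeoutError as exc:
        print("cannot reach MongoDB:", exc)

asyncio.run(check_connection())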
79,278,950 | 2024-12-13 | https://stackoverflow.com/questions/79278950/how-should-i-configure-a-pathfinding-algororithim-for-my-new-level-generation-pr | my problem is that I have a 2D array like this: [["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"]] the player starts in the to right (represented as "X") and the goal is to get to the door ("[") . I've already made the game and player movement, but I'm trying to make a level generator so that I don't have to manually make levels, Ive already made the level gen, I just need an algorithm to check whether or not the level is possible to play, (sometimes the door isn't reachable) i tried to make my own (quite janky) pathfinding algorithm and it really just didn't work. How do I go about making such a function, to check the levels for playability? code for the game below: import sys import tty import termios import os import random #instalize base variables for the game levelnum = 1 op = 1 atk = 1 hp = 20 ehp = 5 XX = 0 XY = 0 keytf = False level = [["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "["], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "["], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"]] #key getting function def get_key(): # get the key pressed fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) try: tty.setraw(fd) ch = sys.stdin.read(1) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) return ch #print level function def printlevel(): # Print the current level level[XY][XX] = "X" print("\033[H", end="") # Clear the terminal for i in range(10): print(" ".join(level[i])) print("Stats:") print("HP: " + str(hp)) print("ATK: " + str(atk)) print("Enemy HP: " + str(ehp)) #level storage def newlevel(levelnum): global level if levelnum == 1: level = [["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "["], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "["], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"]] elif levelnum == 2: level = [["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "|", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "e", "#", "#", "#", "["], ["#", "#", "#", "#", "|", "|", "#", "#", "#", "["], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", 
"#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"]] elif levelnum == 3: level = [["#", "|", "#", "e", "#", "|", "#", "e", "#", "#"], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["H", "|", "H", "|", "H", "|", "H", "|", "#", "["], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "["], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["#", "|", "#", "|", "#", "|", "#", "|", "#", "#"], ["#", "e", "#", "|", "#", "e", "#", "|", "#", "#"]] elif levelnum == 4: level = [["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "|", "|", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "D", "#", "#", "#", "["], ["#", "|", "|", "|", "|", "|", "#", "#", "#", "["], ["#", "#", "#", "|", "|", "#", "#", "#", "#", "#"], ["|", "|", "#", "|", "|", "#", "#", "#", "#", "#"], ["|", "K", "#", "|", "|", "#", "#", "#", "#", "#"], ["|", "|", "|", "|", "|", "#", "#", "#", "#", "#"]] #check your next move def movechecker(move, op): global XY, XX, levelnum, ehp, atk, hp, keytf level[XY][XX] = "#" if move == "[": XX = 0 XY = 0 levelnum += 1 newlevel(levelnum) elif move == "e" and ehp >= 1: ehp -= atk hp -= 1 elif move == "H": hp += 10 moveit(op) elif move == "K": keytf = True moveit(op) elif move == "D" and keytf == True: moveit(op) elif ehp < 1: moveit(op) ehp = 5 else: moveit(op) printlevel() #move the player def moveit(op): global XY, XX if op == 1: # Move right XX += 1 elif op == 2: # Move left XX -= 1 elif op == 3: # Move up XY -= 1 elif op == 4: # Move down XY += 1 level[XY][XX] = "X" printlevel() printlevel() #main game loop while True: # Main game loop key = get_key() if key == "d" and XX < 9 and level[XY][XX+1] != "|": move = level[XY][XX+1] op = 1 movechecker(move, op) elif key == "a" and XX > 0 and level[XY][XX-1] != "|": move = level[XY][XX-1] op = 2 movechecker(move, op) elif key == "w" and XY > 0 and level[XY-1][XX] != "|": move = level[XY-1][XX] op = 3 movechecker(move, op) elif key == "s" and XY < 9 and level[XY+1][XX] != "|": move = level[XY+1][XX] op = 4 movechecker(move, op) level gen: import random level = [["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"]] def getwall(): if random.randint(1, 100) < 35: return "|" return "#" def genlevel(): for i in range (10): for j in range (10): level [i][j] = getwall() level [0][0] = " " level [9][9] = "[" | Your approach is to generate a random grid, and then verify whether it is one where you can reach the exit from the entry point. I suppose if this is not possible, you will generate another random grid, and continue like that until you have found a valid one. I would suggest to approach this differently. You can enhance the generation part to always generate a grid that has a solution. 
I would also suggest to avoid cycles, which also means you'd avoid trivial "spaces" that have no walls, like this: # # # # You could use one of the algorithms suggested on Wikipedia - Maze generation algorithm, such as the depth-first traversal, using an explicit stack: from random import choice, randrange WALL = "|" FREE = " " # A temporary value. Will not occur in a fully generated level REACHABLE = "#" ENTRY = "[" # The single cell where the player starts EXIT = "]" # The target cell that the player should reach def gen_level(size): size |= 1 # ensure it is odd # start with all cells surrounded by walls level = [ [ (WALL, FREE)[(x & y) % 2] for x in range(size) ] for y in range(size) ] stack = [(1, 1)] level[1][1] = REACHABLE while stack: x1, y1 = stack[-1] try: x2, y2 = choice([ (x2, y2) for dx, dy in ((-2, 0), (2, 0), (0, -2), (0, 2)) for x2, y2 in ((x1 + dx, y1 + dy), ) if 0 <= x2 < size and 0 <= y2 < size and level[y2][x2] == FREE ]) level[y2][x2] = level[(y1+y2)//2][(x1+x2)//2] = REACHABLE stack.append((x2, y2)) except IndexError: stack.pop() level[1 + randrange(size//2) * 2][0] = ENTRY level[1 + randrange(size//2) * 2][-1] = EXIT return level Here is how you would call the above function: level = gen_level(20) for line in level: print(*line) And here is one possible output that this produces: | | | | | | | | | | | | | | | | | | | | | | # # # # # # # | # # # # # # # # # | # | | | | | | | | # | | | # | | | | | # | # | | # | # # # | # # # | # # # | # # # | # | | # | # | # | | | # | | | # | # | | | # | | # # # | # # # | # # # | # | # # # | # | | # | | | # | | | | | # | # | | | # | # | | # # # | # # # # # # # | # # # | # # # ] | | | # | | | | | | | | | # | # | | | # | [ # | # # # # # | # # # # # | # # # | # | | # | | | | | # | # | | | | | | | # | # | | # # # # # | # | # # # # # # # | # | # | | # | | | # | # | | | | | | | # | # | | | | # # # | # # # | # # # # # | # | # # # | | | | # | | | | | # | | | # | | | | | # | | # | # | # # # | # # # | # # # # # # # | | # | # | # | # | | | # | | | | | | | # | | # | # # # | # # # | # # # | # | # # # | | # | | | | | | | # | | | # | # | # | | | | # # # # # # # # # # # # # | # # # # # | | | | | | | | | | | | | | | | | | | | | | Note that the actual height/width is odd (not 20, but 21): this is a consequence of the choice to have all (x, y) reachable that have odd x and odd y. 
Unrelated, but the output is a bit more "readable" when you use block-characters for walls, and a very light character (like a dot) for the reachable cells, like using: WALL = "█" REACHABLE = "·" ...and then format the output as follows: for line in level: print(" ".join(line).replace(WALL + " " + WALL, WALL * 3) .replace(WALL + " " + WALL, WALL * 3)) Then you get an output like this: █████████████████████████████████████████ █ · · · █ · · · · · · · · · · · · · █ · █ █████ · █████████ · █████████ · █ · █ · █ █ · █ · █ · · · █ · █ · · · █ · █ · · · █ █ · █ · █ · █ · █ · █████ · █ · █████████ █ · █ · · · █ · █ · · · · · █ · █ · · · █ █ · █████████ · █████████ · █ · █ · █ · █ [ · · · · · █ · █ · · · █ · █ · · · █ · █ █ · █████████ · █ · █ · █████████████ · █ █ · · · · · █ · · · █ · █ · · · · · · · █ █ · █████ · █████████ · █ · █████████████ █ · █ · · · █ · · · · · █ · █ · · · · · █ █████ · █████ · █████████ · █ · █████ · █ █ · · · █ · · · █ · · · · · █ · █ · █ · █ █ · █████ · █████ · █████████ · █ · █ · █ █ · █ · · · █ · · · · · · · · · · · █ · █ █ · █ · █████ · █████████████████ · █ · █ █ · █ · █ · · · · · · · █ · · · █ · █ · █ █ · █ · █████████████████ · █ · █████ · █ █ · · · · · · · · · · · · · █ · · · · · ] █████████████████████████████████████████ | 2 | 1 |
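If you would rather keep your random generator and simply reject unplayable grids, a plain flood fill (BFS) over the walkable cells is enough. A sketch for your 10x10 format, assuming "|" is the only blocking tile and "[" marks the door:

from collections import deque

def is_playable(level, start=(0, 0)):
    rows, cols = len(level), len(level[0])
    seen = {start}
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if level[y][x] == "[":
            return True  # the door is reachable from the start cell
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and (ny, nx) not in seen and level[ny][nx] != "|":
                seen.add((ny, nx))
                queue.append((ny, nx))
    return False  # explored everything reachable without finding the door

genlevel() could then loop until is_playable(level) returns True; the default start matches the cell your generator clears at level[0][0].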
79,282,130 | 2024-12-15 | https://stackoverflow.com/questions/79282130/split-a-pandas-column-of-lists-with-different-lengths-into-multiple-columns | I have a Pandas DataFrame that looks like: ID result 1 [.1,.5] 2 [.4,-.2,-.3,.1,0] 3 [0,.1,.6] How can I split this column of lists into multiple columns? Desired result: ID result_1 result_2 result_3 result_4 result_5 1 .1 .5 NaN NaN NaN 2 .4 -.2 -.3 .1 0 3 0 .1 .6 NaN NaN I have dug into it a little and found this: Split a Pandas column of lists into multiple columns, but this only seems to apply to lists with a constant number of elements. Thank you so much in advance. | You can do this as suggested in the linked post. import pandas as pd # your example code data = {"ID": [1, 2, 3], "result": [[0.1, 0.5], [0.4, -0.2, -0.3, 0.1, 0], [0, 0.1, 0.6]]} df = pd.DataFrame(data) print(df) answer out = df[['ID']].join( pd.DataFrame(df['result'].tolist()) .rename(columns=lambda x: f'result_{x + 1}') ) out: ID result_1 result_2 result_3 result_4 result_5 0 1 0.1 0.5 NaN NaN NaN 1 2 0.4 -0.2 -0.3 0.1 0.0 2 3 0.0 0.1 0.6 NaN NaN | 1 | 1 |
79,281,240 | 2024-12-14 | https://stackoverflow.com/questions/79281240/why-does-the-basehttprequesthandler-rfile-read-delay-execution | I am making a simple server in python using http.server package. My goal is to log the data of POST from client to server. The problem I am having is rfile.read() is delaying execution until next POST request or if the connection is disconnected. However this problem doesn't occur if the length of the content is specified as the argument in rfile.read(). The below is the subclass for BaseHTTPRequestHandler of http.server. class Reqhand(BaseHTTPRequestHandler): def do_POST(self): self.send_response(200) #The lines below don't execute until the next POST req req = self.rfile.read() print(req) The below code executes as intended in a single POST request. class Reqhand(BaseHTTPRequestHandler): def do_POST(self): self.send_response(200) content_len = int(self.headers.get_all('Content-Length')[0]) req = self.rfile.read(content_len) print(req) | This happens because rfile.read() waits indefinitely until the client closes the connection or signals the end of the data stream. This behavior is by design when the Content-Length header is not provided, as the server does not know how much data to expect. To handle this, you should always rely on the Content-Length header in HTTP post requests to determine how much data to read. Make sure the client sends the Content-Length header in its post request. | 1 | 2 |
79,280,773 | 2024-12-14 | https://stackoverflow.com/questions/79280773/runtimeerror-trying-to-backward-through-the-graph-a-second-time-on-loss-tensor | I have the following training code. I am quite sure I call loss.backward() just once, and yet I am getting the error from the title. What am I doing wrong? Note that the X_train_tensor is output from another graph calculation, so it has required_grad=True as you can see in the print statement. Is this the source of the problem, and if so, how can I change it? It won't allow me to toggle it directly on the tensor. for iter in range(max_iters): start_ix = 0 loss = None while start_ix < len(X_train_tensor): loss = None end_ix = min(start_ix + batch_size, len(X_train_tensor)) out, loss, accuracy = model(X_train_tensor[start_ix:end_ix], y_train_tensor[start_ix:end_ix]) # every once in a while evaluate the loss on train and val sets if (start_ix==0) and (iter % 10 == 0 or iter == max_iters - 1): out_val, loss_val, accuracy_val = model(X_val_tensor, y_val_tensor) print(f"step {iter}: train loss={loss:.2f} train_acc={accuracy:.3f} | val loss={loss_val:.2f} val_acc={accuracy_val:.3f} {datetime.datetime.now()}") optimizer.zero_grad(set_to_none=True) print (iter, start_ix, X_train_tensor.requires_grad, y_train_tensor.requires_grad, loss.requires_grad) loss.backward() optimizer.step() start_ix = end_ix + 1 This is the error: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. Update: this is where the model input tensors are coming from, as output of other (autoencoder) model: autoencoder.eval() with torch.no_grad(): # it seems like adding this line solves the problem? X_train_encoded, loss = autoencoder(X_train_tensor) X_val_encoded, loss = autoencoder(X_val_tensor) X_test_encoded, loss = autoencoder(X_test_tensor) Adding the with torch.no_grad() line above solves the issue, but I don't understand why. Does it actually change how the outputs are generated, how does that work? | From what I understand, the X_train_tensor is output from the autoencoder. When you do not run torch.no_grad() during the encoding step, a computational graph is created for the outputs of the autoencoder, which links the autoencoder's operations and weights to the encoded tensors. In your code, since the model's output uses the X_train_tensor, the model's loss is connected to the autoencoder's computational graph. When you call loss.backward() the first time, PyTorch traverses the entire computational graph, including the autoencoder, to compute gradients and then clears the graph. When you call loss.backward() in the second iteration of the loop, you are attempting to traverse the cleared autoencoder's computational graph. torch.no_grad() prevents PyTorch from creating the autoencoder computational graph or linking the resulting loss to the autoencoder. | 2 | 2 |
79,279,588 | 2024-12-13 | https://stackoverflow.com/questions/79279588/interpolating-time-series-data-for-step-values | I have time series data that looks like this (mm/dd hh:mm): 3.100 12/14 05:42 3.250 12/14 05:24 3.300 12/14 05:23 3.600 12/14 02:45 3.700 12/13 10:54 3.600 12/12 13:19 3.900 12/12 10:43 I need to interpolate it at 1 minute intervals. It will be a step chart, so the values should be the same until the new value. | If your goal is to make a step plot, no need to interpolate, just use matplotlib.pyplot.step: import matplotlib.pyplot as plt s = pd.Series(['12/14 05:42', '12/14 05:24', '12/14 05:23', '12/14 02:45', '12/13 10:54', '12/12 13:19', '12/12 10:43'], index=[3.1, 3.25, 3.3, 3.6, 3.7, 3.6, 3.9]) plt.step(pd.to_datetime(s, format='%m/%d %H:%M'), s.index) NB. assuming here the values are the index and the dates the series' values, which is a bit counterintuitive. Better use the date as index. Output: Also, be aware that without a year, the default will be to use 1900 during the conversion to datetime, which might be unwanted. Better be explicit and add the exact year. If you really want to interpolate, use the date as index and asfreq: s = pd.Series([3.1, 3.25, 3.3, 3.6, 3.7, 3.6, 3.9], index=['12/14 05:42', '12/14 05:24', '12/14 05:23', '12/14 02:45', '12/13 10:54', '12/12 13:19', '12/12 10:43']) s.index = pd.to_datetime(s.index, format='%m/%d %H:%M') out = s.asfreq('min', method='ffill') Output: 1900-12-12 10:43:00 3.9 1900-12-12 10:44:00 3.6 1900-12-12 10:45:00 3.6 1900-12-12 10:46:00 3.6 1900-12-12 10:47:00 3.6 ... 1900-12-14 05:38:00 3.1 1900-12-14 05:39:00 3.1 1900-12-14 05:40:00 3.1 1900-12-14 05:41:00 3.1 1900-12-14 05:42:00 3.1 Freq: T, Length: 2580, dtype: float64 | 2 | 0 |
79,280,091 | 2024-12-14 | https://stackoverflow.com/questions/79280091/behavior-of-df-map-inside-another-df-apply | I find this code very interesting. I modified the code a little to improve the question. Essentially, the code uses a DataFrame to format the style of another DataFrame using pd.style. t1 = pd.DataFrame({'x':[300,200,700], 'y':[100,300,200]}) t2 = pd.DataFrame({'x':['A','B','C'], 'y':['C','B','D']}) def highlight_cell(val, props=''): return props if val > 200 else '' t2.style.apply(lambda x: t1.map(highlight_cell, props='background-color:yellow'), axis=None) But can anyone explain how the last line works? I couldn't find Pandas documentation that clarifies the behavior of df.map() inside another df.apply(). To me, the code reads like for each item in t1, apply highlight_cell() to the entire t2 at once, and then return the whole thing, as illustrated in this pseudocode. for x in all_items_in_t1: yield [highlight_cell(y) for y in all_items_in_t2] However, the output is saying for each item in t1, apply highlight_cell() only to the corresponding item in t2 that has the same (x, y) location as that item in t1, like this. for x, y in zip(all_items_in_t1, all_items_in_t2): yield highlight_cell(y) I'm still having trouble understanding this pattern because it seems a bit confusing. Can anyone explain it more clearly? | DataFrame.style.apply is used here, not DataFrame.apply. By using the parameter axis=None, the callable is applied once (not per cell) on the whole DataFrame. Since the callable is a lambda, this essentially means we run: t1.map(highlight_cell, props='background-color:yellow') and use the output as format. x y 0 background-color:yellow 1 background-color:yellow 2 background-color:yellow Note that using DataFrame.map here is not needed (and inefficient), better go for a vectorial approach: t2.style.apply(lambda x: np.where(t1>200, 'background-color:yellow', ''), axis=None) | 1 | 3 |
79,277,671 | 2024-12-13 | https://stackoverflow.com/questions/79277671/waiting-for-a-pyqt-pyside-qtcore-qthread-to-finish-before-doing-something | I have a data acquisition thread which samples and processes data which it then emits as a signal to a receiver. Now, when that thread is stopped, how can I ensure it has finished the current loop and emitted its signal before proceeding (and e.g. emitting a summary signal)? import sys import time from PySide6.QtCore import Signal, Slot from PySide6 import QtCore from PySide6 import QtWidgets ##============================================================================== class EmitterClassThreaded(QtCore.QThread): ## Define a signal that emits a dictionary data_signal = Signal(dict) ##-------------------------------------------------------------------------- def __init__(self): super().__init__() self.counter = 0 self.t_start = time.time() self.running = True ## Connect the signal to a method within the same class self.data_signal.connect(self.handle_data) ##-------------------------------------------------------------------------- def run(self): while self.running: self.counter += 1 now = time.time() - self.t_start data = {'counter': self.counter, 'timestamp': f"{now:.1f}"} time.sleep(1) # <------ doing something here which takes time self.data_signal.emit(data) ##-------------------------------------------------------------------------- def stop(self): self.running = False ##-------------------------------------------------------------------------- @Slot(dict) def handle_data(self, data): print(f"EmitterClassThreaded received data: {data}") ##============================================================================== class ReceiverClass(): def __init__(self): super().__init__() ##-------------------------------------------------------------------------- @Slot(dict) def handle_data(self, data): print(f"ReceiverClass received data: {data}") ##============================================================================== class MainWindow(QtWidgets.QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("Example ThreadedEmitter-Receiver") self.setGeometry(100, 100, 400, 200) self.label = QtWidgets.QLabel("Waiting for signal...", self) self.label.move(150, 80) self.stop_button = QtWidgets.QPushButton("Stop Emitter", self) self.stop_button.move(150, 120) self.stop_button.clicked.connect(self.stop_emitter) self.emitter = EmitterClassThreaded() self.emitter.data_signal.connect(self.handle_data) self.receiver = ReceiverClass() ## Connect the signal from EmitterClass to the method in ReceiverClass self.emitter.data_signal.connect(self.receiver.handle_data) ## Start the emitter thread self.emitter.start() self.emitter.running = True ##-------------------------------------------------------------------------- @Slot(dict) def handle_data(self, data): self.label.setText(f"Counter: {data['counter']}\nTimestamp: {data['timestamp']}") ##-------------------------------------------------------------------------- def stop_emitter(self): print("ReceiverClass: Stopping the emitter thread...") self.emitter.stop() ## TODO: Wait for the thread to finish (incl. 
emitting the last signal) before proceeding print("Creating own data to emit.") self.emitter.data_signal.emit({'counter': -999, 'timestamp': 0}) ##****************************************************************************** ##****************************************************************************** if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec()) In my current example, the last signal from the thread always overwrites that summary signal. Thanks in advance! | The problem is caused by the fact that you're emitting the signal from another object, and, more specifically, from another thread. In general, it's normally preferred to emit signals directly from "within" their object, and emitting them externally is generally discouraged (but not forbidden nor completely wrong in principle). Note, though, that it's also important to be aware of the thread from which the signal is emitted. For instance, trying to do the following will not solve the problem: class EmitterClassThreaded(QtCore.QThread): ... def stop(self): self.running = False self.data_signal.emit({'counter': -999, 'timestamp': 0}) That code won't change anything, because stop() is being directly called from the main thread, and the fact that stop() is a member of the QThread instance is irrelevant. Remember that QThreads are objects that manage execution OS threads, they are not "the thread": directly calling someMethod() on a QThread instance will not cause that method to be executed in the related thread. As you correctly assumed, when you emit the signal from the main thread, the other thread is still running (doing whatever you simulated with time.sleep()), therefore a further signal from the thread will be emitted afterwards. Depending on the situations, many alternatives exist. Use the finished signal The simpler solution is to make use of the finished signal that QThread provides. That signal is always emitted once the thread is finished, even when not using QThread's own event loop by overriding run(). A possible approach, then, could be this: class EmitterClassThreaded(QtCore.QThread): data_signal = Signal(dict) def __init__(self): ... self.finished.connect(self.emitFinal) def emitFinal(self): self.data_signal.emit({'counter': -999, 'timestamp': 0}) ... Emit the signal only if the thread must continue You could change the logic of the while loop by checking the running flag before trying to emit the signal: def run(self): while True: self.counter += 1 now = time.time() - self.t_start data = {'counter': self.counter, 'timestamp': "{:.1f}".format(now)} time.sleep(1) # <------ doing something here which takes time if self.running: self.data_signal.emit(data) else: self.data_signal.emit({'counter': -999, 'timestamp': 0}) break In case you still need the last computed data, you may emit that in any case, and eventually exit the loop after emitting the "final" signal: ... 
self.data_signal.emit(data) if not self.running: self.data_signal.emit({'counter': -999, 'timestamp': 0}) break Alternatively, since self.counter is a simple reference to an integer (and, therefore, thread safe), you may even change the above just by checking the self.counter value to -999 and check whether self.counter < 0: class EmitterClassThreaded(QtCore.QThread): data_signal = Signal(dict) def __init__(self): super().__init__() self.counter = 0 self.t_start = time.time() # no self.running def run(self): while True: self.counter += 1 now = time.time() - self.t_start data = {'counter': self.counter, 'timestamp': "{:.1f}".format(now)} time.sleep(1) # <------ doing something here which takes time self.data_signal.emit(data) if self.counter < 0: self.data_signal.emit({'counter': -999, 'timestamp': 0}) break def stop(self): self.counter = -999 ... Use wait() after trying to "stop" the thread In some cases, it may be necessary to completely block everything until the thread is done, which can be achieved by using QThread.wait(). Note that the documentation says that it "Blocks the thread until [...]". In this case "the thread" is the calling thread; consider the following change: def stop_emitter(self): print("ReceiverClass: Stopping the emitter thread...") self.emitter.stop() self.emitter.wait() print("Creating own data to emit.") self.emitter.data_signal.emit({'counter': -999, 'timestamp': 0}) This is perfectly valid, in theory, because it works similarly to Python's thread.join(). Unfortunately, it's also discouraged in a case like this, because its blocking nature means that calling it in the main thread will block the main event loop, resulting in UI freeze until the thread has finished. A possible alternative would be to use wait() with a small interval and ensure that the main app processEvents() is called: def stop_emitter(self): print("ReceiverClass: Stopping the emitter thread...") self.emitter.stop() while not self.emitter.wait(10): QApplication.processEvents() QApplication.processEvents() print("Creating own data to emit.") self.emitter.data_signal.emit({'counter': -999, 'timestamp': 0}) Note that the further call to processEvents outside of the loop is necessary, because there will still be pending events, most importantly, the last signal from the exiting thread (which has been queued). Use terminate() (no, don't) Unlike Python, QThread provides the terminate() function, so you could add self.emitter.terminate() right after self.emitter.stop(). In reality, killing threads is considered a bad pattern, and highly discouraged in any language. You may try to use it, at your own risk, but only IF you have deep understanding of how threading works in all involved systems and possible permutations (including hardware and software aspects), and full awareness of the objects used in the actual execution within the thread. That's a huge "if": if you're here because you're asking yourself if you could use terminate(), then it most certainly means that you should not, because that's one choice you can only take if your experiences tell you that you are fully aware that it is the case of using it (and if you are, you probably wouldn't be reading this while looking for suggestions). So: no, do not use terminate(). Define and understand what is the actual task done in the thread It's important to note that threads have important limitations, especially when dealing with Python (see Global Interpreter Lock). 
Simply put, if whatever is done within the run() override (or any function connected to the started signal of the thread) is purely CPU bound, there is fundamentally no benefit in using threading: it just introduces further complications, and does not provide actual concurrency. The only cases in which threading makes sense is when using IO (eg: file read/write, network access, etc.) or calls that do not require waiting for their results (but still need to be executed in separate threads). If what you do within run() is a long and heavy computation, then you only have two options: if it is or can be "broken" into smaller parts (eg. a long for loop), then ensure that sleep calls are frequently added at small intervals; use multiprocessing; Note that, in the first case, this can still be achieved without using threading (just replace sleep with QApplication.processEvents()). To clarify, consider the following example: class EmitterClassThreaded(QtCore.QThread): ... def run(self): while self.running: self.counter += 1 now = time.time() - self.t_start data = {'counter': self.counter, 'timestamp': f"{now:.1f}"} for i in range(1000): # some relatively long and complex computation time.sleep(.01) # temporarily release control to other threads self.data_signal.emit(data) In this case, while perfectly reasonable, you're not actually using advantages threading could provide, it's just a different code structure that "coincidentally uses" threading, but without real benefit. The same could be achieved with a simple class (without threading), that calls QApplication.processEvents() instead of time.sleep(.01). If properly written, it could even be more efficient, because it wouldn't need to always wait that interval if the main event loop doesn't have queued events that require processing. Unrelated considerations about decorators The PySide Slot, Property and Signal decorators (along with the pyqt* prefix based decorators in PyQt) only make sense for QObject based classes. Those decorators should only be used in QObject subclasses (including Python object subclasses used in mix-ins with QObject based ones), otherwise they are completely useless and should be avoided to begin with: the Slot (or pyqtSlot) decorator is rarely necessary, as it almost always provides very little benefits, and is only required in very specific cases (dealing with complex threading based scenarios, or when implementing QtDesigner plugins); the Property (or pyqtProperty) decorator behaves like a Python property: if the class is never used as/within a QObject one, then just use @property; non QObject instances cannot have Qt signals, and cannot therefore be emitted; See this related post for further details: Why do I need to decorate connected slots with pyqtSlot?. | 2 | 1 |
79,279,966 | 2024-12-14 | https://stackoverflow.com/questions/79279966/why-this-nested-loop-generator-does-not-seem-to-be-working | I was trying this: tuple(map(tuple, tuple(((x,y) for x in range(5)) for y in range(3)))) I got this: (((0, 2), (1, 2), (2, 2), (3, 2), (4, 2)), ((0, 2), (1, 2), (2, 2), (3, 2), (4, 2)), ((0, 2), (1, 2), (2, 2), (3, 2), (4, 2))) but I expect: (((0, 0), (1, 0), (2, 0), (3, 0), (4, 0)), ((0, 1), (1, 1), (2, 1), (3, 1), (4, 1)), ((0, 2), (1, 2), (2, 2), (3, 2), (4, 2))) | You're forcing the evaluations in the wrong order. You're building a generator of generators, building a tuple out of the outer generator to build a tuple of generators, and then building tuples out of the inner generators. By the time you start working with the inner generators, the outer generator has finished iteration, so y is already 2, the value from the last iteration. This is the y value used the entire time you're iterating over the inner generators, so it's the y value used every time you evaluate (x, y). You need to call tuple on the inner generators as they're produced, to iterate over them and evaluate (x, y) before y changes: tuple(tuple((x, y) for x in range(5)) for y in range(3)) | 2 | 2 |
79,279,831 | 2024-12-13 | https://stackoverflow.com/questions/79279831/meaning-of-in-python-in-reserved-classes-of-identifiers | The python documentation writes about _* "Not imported by from module import *." What do they mean with that? https://docs.python.org/3/reference/lexical_analysis.html#:~:text=_*,import%20*. | The documentation uses * as a wildcard, meaning it substitutes for anything, similar to the way wildcards work in the shell. So when it says _*, it means any identifier beginning with _. So when you do from module import *, it imports all the top-level names in the module except those that begin with _. When writing a module, you use _XXX names to create private variables/functions/classes. | 1 | 3 |
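A tiny made-up illustration (module and names are hypothetical):

# shapes.py
area = 42
_scratch = "internal"

# client code
from shapes import *
print(area)      # works
print(_scratch)  # NameError: not brought in by the star import

If the module defines __all__, that list takes over and decides exactly what from shapes import * exposes, underscores or not.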
79,279,391 | 2024-12-13 | https://stackoverflow.com/questions/79279391/predecessors-from-scipy-depth-first-order | I use scipy version 1.14.1 to traverse the minimum spanning tree in depth-first order, but I do not understand some results, namely the predecessors returned by scipy are not correct. Here is an illustration for the following graph: The following code import numpy as np from scipy.sparse import coo_matrix from scipy.sparse.csgraph import minimum_spanning_tree from scipy.sparse.csgraph import depth_first_order rows = np.array([0, 1, 2, 2, 4, 9, 2, 2, 10, 10, 8 ]) cols = np.array([1, 2, 3, 4, 9, 5, 6, 10, 11, 8, 7 ]) # construct undirected graph X = coo_matrix( (12,12)) X.col = np.concatenate( (rows, cols), axis=0) X.row = np.concatenate( (cols, rows), axis=0) X.data = np.ones(len(X.row)) # the minimum spanning tree is the graph itself tree = minimum_spanning_tree(X) print(tree) # traversing the graph print(depth_first_order(tree, i_start=0, directed=False, return_predecessors=True)) gives the minimum spanning tree (the graph itself in fact): Coords Values (0, 1) 1.0 (1, 2) 1.0 (2, 3) 1.0 (2, 4) 1.0 (2, 6) 1.0 (2, 10) 1.0 (4, 9) 1.0 (5, 9) 1.0 (7, 8) 1.0 (8, 10) 1.0 (10, 11) 1.0 and the depth-first order: [ 0, 1, 2, 3, 4, 9, 5, 6, 10, 11, 8, 7] predecessors: [-9999, 0, 1, 2, 2, 9, 2, 8,10, 4, 2, 10] So it says that 9 has 9 as ancestor, but it is 4, and from that position on results are not coherent. Thanks for any help. | From the documentation, the function depth_first_order() returns the following two lists: node_array ndarray, one dimension The depth-first list of nodes, starting with specified node. The length of node_array is the number of nodes reachable from the specified node. predecessors ndarray, one dimension Returned only if return_predecessors is True. The length-N list of predecessors of each node in a depth-first tree. If node i is in the tree, then its parent is given by predecessors[i]. If node i is not in the tree (and for the parent node) then predecessors[i] = -9999. It basically implies that the second list predecessors returned has nothing to do with the first list node_array returned. Rather, the 2nd list is to be read as: [π[i] if i ∈ G else -9999 for all i], where π[i] is parent of node i and the graph G is a (spanning) tree here. The list predecessors = [-9999, 0, 1, 2, 2, 9, 2, 8,10, 4, 2, 10] is to be read as follows: π[0] = -9999 (since node 0 has no parent in G) π[1] = 0 (parent of node 1 is node 0) π[2] = 1 π[3] = 2 π[4] = 2 π[5] = 9 π[6] = 2 π[7] = 8 ... You can obtain the same result with networkx.dfs_predecessors(), more explicitly as a dictionary, where key is a node and value is the parent of the node (obtained using dfs traversal): import networkx as nx nx.dfs_predecessors(tree, source=0) # {1: 0, 2: 1, 3: 2, 4: 2, 9: 4, 5: 9, 6: 2, 10: 2, 11: 10, 8: 10, 7: 8} | 2 | 1 |
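To see the parent of each node in the order it was visited, you can index the predecessors array with node_array, continuing the question's code:

node_array, predecessors = depth_first_order(tree, i_start=0, directed=False, return_predecessors=True)
for node in node_array[1:]:  # skip the start node, which has no parent
    print(f"parent of {node} is {predecessors[node]}")

Among other lines this prints parent of 9 is 4, which is the relationship the question expected.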
79,279,025 | 2024-12-13 | https://stackoverflow.com/questions/79279025/how-to-process-a-massive-file-in-parallel-in-python-while-maintaining-order-and | I'm working on a Python project where I need to process a very large file (e.g., a multi-gigabyte CSV or log file) in parallel to speed up processing. However, I have three specific requirements that make this task challenging: Order Preservation: The output must strictly maintain the same line order as the input file. Memory Efficiency: The solution must avoid loading the entire file into memory (e.g., by reading it line-by-line or in chunks). Concurrency: The processing should leverage parallelism to handle CPU-intensive tasks efficiently. My Current Approach I used concurrent.futures.ThreadPoolExecutor to parallelize the processing, but I encountered the following issues: While executor.map produces results in the correct order, it seems inefficient because tasks must wait for earlier ones to complete even if later tasks finish earlier. Reading the entire file using file.readlines() consumes too much memory, especially for multi-gigabyte files. Here’s an example of what I tried: import concurrent.futures def process_line(line): # Simulate a CPU-bound operation return line.upper() with open("large_file.txt", "r") as infile: lines = infile.readlines() with concurrent.futures.ThreadPoolExecutor() as executor: results = list(executor.map(process_line, lines)) with open("output.txt", "w") as outfile: outfile.writelines(results) While this code works for small files, it fails for larger ones due to memory constraints and potential inefficiencies in thread usage. Desired Solution I’m looking for a solution that: Processes lines in parallel to leverage multiple CPU cores or threads. Ensures that output lines are written in the same order as the input file. Reads and processes the file in a memory-efficient way (e.g., streaming or chunk-based processing). Additionally, I would like to understand: Whether ThreadPoolExecutor or ProcessPoolExecutor is more appropriate for this scenario, considering the potential CPU-bound nature of the tasks. Best practices for buffering and writing results to an output file without consuming too much memory. Key Challenges* How can I assign unique identifiers to each line (or chunk) to maintain order without introducing significant overhead? Are there existing libraries or design patterns in Python that simplify this kind of parallel processing for large files? Any insights, examples, or best practices to tackle this problem would be greatly appreciated! | No one can really answer whether using ThreadPoolExecutor or ProcessPoolExecutor will be faster without knowing exactly what each task does. you need to try both and Benchmark the time taken by each to find which is better. this code can help you figure that out yourself, it is based on this answer, but it uses a queue to limit the lines being read, so you don't risk having the entire file in memory if the processing is slow. also writing to the output file is done by its own thread, reading and writing to files (IO) releases the GIL, so they can both happen in parallel. 
import concurrent.futures import os import queue import threading from io import IOBase import time from typing import Optional def process_line(line: str): # Simulate some CPU-bound work on the line for i in range(int(1e6)): pass return line.upper() def writer_task(out_file: IOBase, writer_queue: queue.Queue): while True: fut: Optional[concurrent.futures.Future] = writer_queue.get() if fut is None: break line = fut.result() out_file.write(line) print("line written") # Wrap main script behavior in main function def main(): t1 = time.time() with open("large_file.txt") as infile, open("output.txt", "w") as outfile: with concurrent.futures.ThreadPoolExecutor() as executor: writer_queue = queue.Queue(maxsize=os.cpu_count() * 2 + 10) writer = threading.Thread(target=writer_task, args=(outfile, writer_queue), daemon=True) writer.start() for line in infile: print("line read") writer_queue.put(executor.submit(process_line, line)) writer_queue.put(None) # signal file end writer.join() t2 = time.time() print(f"time taken = {t2-t1}") # Invoke main function only when run as script, not when imported or invoked # as part of spawn-based multiprocessing if __name__ == '__main__': main() you can easily swap ThreadPoolExecutor for ProcessPoolExecutor and Measure which one is better. you might want to delete the print("line written") and its counterpart as they are only for illustrative purpose. for something as small as just line.upper, then just processing it on the main thread will be faster than either option. FYI: don't use this code in production, if an exception happens in the writer then your app will be stuck forever, you need to catch whatever fut.result() could throw. | 2 | 3 |
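If the per-line work really is CPU-bound, an alternative sketch (different from the approach above, and with made-up chunk sizes) feeds fixed-size chunks to a ProcessPoolExecutor; executor.map keeps results in input order, and only one chunk of lines is held in memory at a time:

from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def process_line(line: str) -> str:
    return line.upper()  # placeholder for the real CPU-bound work

def main():
    chunk_size = 10_000
    with open("large_file.txt") as infile, open("output.txt", "w") as outfile, \
            ProcessPoolExecutor() as executor:
        # read at most chunk_size lines, process them in order, write, repeat
        while chunk := list(islice(infile, chunk_size)):
            outfile.writelines(executor.map(process_line, chunk, chunksize=256))

if __name__ == "__main__":
    main()

The trade-off is that workers idle briefly at each chunk boundary; a larger chunk_size reduces that overhead at the cost of more memory.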
79,279,060 | 2024-12-13 | https://stackoverflow.com/questions/79279060/how-to-use-numpy-where-in-a-pipe-function-for-pandas-dataframe-groupby | Here is a script to simulate the issue I am facing: import pandas as pd import numpy as np data = { 'a':[1,2,1,1,2,1,1], 'b':[10,40,20,10,40,10,20], 'c':[0.3, 0.2, 0.6, 0.4, 0.5, 0.2, 0.8], 'd':[3, 1, 5, 1, 7, 2., 2.], } df = pd.DataFrame.from_dict(data) # I apply some custom function to populate column 'e'. # For demonstration, I am using a very simple function here. df['e']=df.apply(lambda x: x['c']<=0.3, axis=1) # This is the column I need to obtain using groupby and pipe/transform df['f']=[2., 1., 0., 2., 1., 2., 0.] print(df) Output: a b c d e f 0 1 10 0.3 3.0 True 2.0 1 2 40 0.2 1.0 True 1.0 2 1 20 0.6 5.0 False 0.0 3 1 10 0.4 1.0 False 2.0 4 2 40 0.5 7.0 False 1.0 5 1 10 0.2 2.0 True 2.0 6 1 20 0.8 2.0 False 0.0 The logic to be used to find column f is as follows: For each group of df.groupby(['a', 'b']): select entries with True value for e. if there are any item in the selected array: find entry with minimum d and return d (in real application, d needs to be manipulated in conjunction with other columns, and the result would be returned) else: return 0 What I have tried: def func(x): print(type(x)) print(x) print('-'*50) ind=np.where(x['e']) #<--- How can I implement this? if len(ind)>0: ind_min=np.argmin(x.iloc[ind]['d']) return x.iloc[ind[ind_min]]['d'] else: return 0 df['g']=df.groupby(['a', 'b']).pipe(func) Output: <class 'pandas.core.groupby.generic.DataFrameGroupBy'> <pandas.core.groupby.generic.DataFrameGroupBy object at 0x000001B348735550> -------------------------------------------------- ... ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (3, 2) + inhomogeneous part. I get the mentioned error on line: ind=np.where(x['e']) #<--- How can I implement this? So, how can apply np.where on a pandas.core.groupby.generic.DataFrameGroupBy object? | You're presenting the XY problem. Here's one approach: cond = df['c'] <= 0.3 df['f'] = ( df.assign(filtered_d=df['d'].where(cond)) .groupby(['a', 'b'])['filtered_d'] .transform('min') .fillna(0) ) Output: a b c d e f 0 1 10 0.3 3.0 True 2.0 1 2 40 0.2 1.0 True 1.0 2 1 20 0.6 5.0 False 0.0 3 1 10 0.4 1.0 False 2.0 4 2 40 0.5 7.0 False 1.0 5 1 10 0.2 2.0 True 2.0 6 1 20 0.8 2.0 False 0.0 Explanation / Intermediate First, apply Series.where to column 'd' based on the boolean series to keep only the values that we want to consider for min: # df['d'].where(cond) 0 3.0 1 1.0 2 NaN 3 NaN 4 NaN 5 2.0 6 NaN Name: d, dtype: float64 Use df.assign to include the result (here as filtered_d) and apply df.groupby + groupby.transform to get 'min'. Finally, add Series.fillna and assign as a new column. An alternative way to do this could be: cond = df['c'] <= 0.3 df['f'] = ( df.merge( df[cond] .groupby(['a', 'b'], as_index=False) .agg(f=('d', 'min')), on=['a', 'b'], how='left' ).assign(f=lambda x: x['f'].fillna(0)) ) Explanation Use boolean indexing + df.groupby + groupby.agg + named aggregation. Merge with df via df.merge + chain df.assign to apply Series.fillna. | 2 | 2 |
79,278,351 | 2024-12-13 | https://stackoverflow.com/questions/79278351/polars-how-to-field-fill-null-for-whole-column | This code not fill null values in column. I want to some fields to forward and backward fill nulls. import polars as pl df1 = pl.LazyFrame({ "dt": [ "2024-08-30", "2024-08-02", "2024-09-03", "2024-09-04" ], "df1": { "a": [0.1, 0.2, 0.3, 0.1], "b": [0, -1, 2, 1] }, }).with_columns( pl.col("dt").str.to_datetime("%Y-%m-%d") ) df2 = pl.LazyFrame({ "dt": [ "2024-08-29", "2024-08-30", "2024-09-02", "2024-09-03" ], "df2":{ "a": [100, 120, -80, 20], "b": [1, -2, 0, 0] }, }).with_columns( pl.col("dt").str.to_datetime("%Y-%m-%d") ) df = pl.concat([df1, df2], how="align") df = df.with_columns( *[ pl.col(c).struct.with_fields( pl.field("a").forward_fill().backward_fill(), pl.field("b").forward_fill().backward_fill(), ) for c in ["df1", "df2"] ] ) print(df.collect()) Null values appear in the output. I would expect them to be forward and backward filled, but they aren't. ┌─────────────────────┬───────────┬───────────┐ │ dt ┆ df1 ┆ df2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ struct[2] ┆ struct[2] │ ╞═════════════════════╪═══════════╪═══════════╡ │ 2024-08-02 00:00:00 ┆ {0.2,-1} ┆ null │ │ 2024-08-29 00:00:00 ┆ null ┆ {100,1} │ │ 2024-08-30 00:00:00 ┆ {0.1,0} ┆ {120,-2} │ │ 2024-09-02 00:00:00 ┆ null ┆ {-80,0} │ │ 2024-09-03 00:00:00 ┆ {0.3,2} ┆ {20,0} │ │ 2024-09-04 00:00:00 ┆ {0.1,1} ┆ null │ └─────────────────────┴───────────┴───────────┘ I would expected this output: ┌─────────────────────┬───────────┬───────────┐ │ dt ┆ df1 ┆ df2 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ struct[2] ┆ struct[2] │ ╞═════════════════════╪═══════════╪═══════════╡ │ 2024-08-02 00:00:00 ┆ {0.2,-1} ┆ {100,1} │ │ 2024-08-29 00:00:00 ┆ {0.2,-1} ┆ {100,1} │ │ 2024-08-30 00:00:00 ┆ {0.1,0} ┆ {120,-2} │ │ 2024-09-02 00:00:00 ┆ {0.1,0} ┆ {-80,0} │ │ 2024-09-03 00:00:00 ┆ {0.3,2} ┆ {20,0} │ │ 2024-09-04 00:00:00 ┆ {0.1,1} ┆ {20,0} │ └─────────────────────┴───────────┴───────────┘ How to do that and why null values in column aren't filled? Probably it could be by pl.col(c).forward_fill().backward_fill(), but what if I want only one field to be filled? | The reason .struct.with_fields doesn't do what you want is because structs still have outer nullability, and with_fields does not have a special case where the outer nullability is ignored if all fields are replaced. So instead of using with_fields to update fields, completely replace the struct column with a new one which resets the outer nullability: out = df.with_columns( *[ pl.struct( pl.col(c).struct.field("a").forward_fill().backward_fill(), pl.col(c).struct.field("b").forward_fill().backward_fill(), ).alias(c) for c in ["df1", "df2"] ] ) | 1 | 3 |
79,276,109 | 2024-12-12 | https://stackoverflow.com/questions/79276109/poetry-install-failing-with-sslerror-max-retries-exceeded-on-github-httpsconnec | I am encountering an error when running the poetry install command in my Python project. The error message is as follows: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: ... (Caused by SSLError(FileNotFoundError(2, 'No such file or directory'))) I have tried the following troubleshooting steps, but none have resolved the issue: Verified my internet connection. Checked that I have the latest version of Poetry installed. Ensured my Git and OpenSSL installations are up-to-date. Confirmed that the repository URL is correct and accessible from a browser. Despite all this, the error persists. It seems related to SSL or Git configuration, but I am unsure how to resolve it. Environment Details: OS: 14.7.1 (23H222) Python version: 3.11.9] Poetry version: 1.8.5 Git version: 2.45.2 Has anyone encountered this issue before or can provide guidance on how to resolve it? | Here’s how I resolved the issue: 1. Install or Update certifi in the Virtual Environment Make sure the certifi package is installed and up-to-date: pip install --upgrade certifi Locate the cacert.pem File Use the following command to find where the cacert.pem file is located: python -m certifi 2. Copy the cacert.pem File to the Expected Location From the error message (I tried to clone the repo from github using https), I saw that the virtual environment was looking for the cacert.pem file here: /Users/user/.../certifi/cacert.pem If the file is missing, copy it from another working virtual environment. For example: mkdir -p /Users/user/.../certifi/ cp /path/to/working/venv/lib/python3.11/site-packages/certifi/cacert.pem /Users/user/.../cacert.pem 3. Retry the Command After copying the cacert.pem file to the correct location, I retried the poetry install command and it worked!. | 1 | 2 |
79,277,527 | 2024-12-13 | https://stackoverflow.com/questions/79277527/why-does-the-getattrclassinstance-http-method-names-function-in-django | I have a view being inherited by APIView and I have only implemented the GET method in it. class MyView(APIView): def get(self, request, id): # do something But when I call getattr(ClassInstance, "http_method_names", []) I get list of all the HTTP methods even though they are not implemented. | The .http_method_names [Django-doc] is an attribute that, by default lists all HTTP methods, so: ["get", "post", "put", "patch", "delete", "head", "options", "trace"] Normally it will always return that list, unless you override it. This can be useful if you inherit from a view that for example has defined a .post(…) method, but you don't want to expose that, like: class MyView(SomethingWithPostMethodView): http_method_names = ['get', 'head', 'options'] def get(self, request, id): # … pass Or you could, technically speaking, expand the http_method_names list with an custom method, although I would strongly advise not to do this. In order to find out what methods the view implements, you can use the _allowed_methods(…), which thus checks if the corresponding .get(…) method is allowed or not. You thus can not introspect the methods allowed with: getattr(ClassInstance, "http_method_names", []) You can just create an instance and work with _allowed_methods(): ClassInstance()._allowed_method() If ClassInstance is thus a reference to the type itself, or use ClassInstance._allowed_method() if it is a view object already. | 1 | 2 |
79,276,400 | 2024-12-12 | https://stackoverflow.com/questions/79276400/how-to-get-the-index-of-a-text-node-in-beautifulsoup | How can I get the source index of a text node in an HTML string? Tags have sourceline and sourcepos which is useful for this, but NavigableString does not have any directly-helpful properties like that (as far as I can find) I've thought about using def get_index(text_node: NavigableString) -> int: return text_node.next_element.sourcepos - len(text_node) But this will not work perfectly because the length of the closing tag is unpredictable, e.g. >>> get_index(BeautifulSoup('<p>hello</p><br>', 'html.parser').find(text=True)) 7 Is incorrect, and '<p>hello</p >' is also valid HTML and will produce an even more incorrect result, and I'm not sure how to solve this kind of case using the tools I've found so far in BeautifulSoup. I would also be interested in an lxml or Python html module answer if they have simple solutions. Desired results: >>> get_index(BeautifulSoup('hello', 'html.parser').find(text=True)) 0 >>> get_index(BeautifulSoup('<p>hello</p><br>', 'html.parser').find(text=True)) 3 >>> get_index(BeautifulSoup('<!-- hi -->hello', 'html.parser').find(text=True)) 11 >>> get_index(BeautifulSoup('<p></p ><p >hello<br>there</p>', 'html.parser').find(text=True)) 12 >>> get_index(BeautifulSoup('<p></p ><p >hello<br>there</p>', 'html.parser').find_all(string=True)[1]) 21 | Using html.parser: class MyHTMLParser(HTMLParser): def handle_data(self, data: str): line, col = self.getpos() previous_lines = ''.join(html_string.splitlines(True)[:line - 1]) index = len(previous_lines) + col print(data, 'at', index) parser = MyHTMLParser() parser.feed(html_string) | 1 | 2 |
79,276,761 | 2024-12-12 | https://stackoverflow.com/questions/79276761/how-to-convert-the-column-with-lists-into-one-hot-encoded-columns | Assume, there is one DataFrame such as following import pandas as pd import numpy as np df = pd.DataFrame({'id':range(1,4), 'items':[['A', 'B'], ['A', 'B', 'C'], ['A', 'C']]}) df id items 1 [A, B] 2 [A, B, C] 3 [A, C] Is there an efficient way to convert above DataFrame into the following (one-hot encoded columns)? Many Thanks in advance! id items A B C 1 [A, B] 1 1 0 2 [A, B, C] 1 1 1 3 [A, C] 1 0 1 | SOLUTION 1 A possible solution, whose steps are: First, the explode function is used to transform each item of a list-like to a row, replicating the index values. Then, the to_numpy method converts the resulting dataframe to a numpy array, and .T transposes this array. The crosstab function computes a simple cross-tabulation of factors, which, in this case, are the transposed columns of the exploded dataframe. The reset_index method is used to reset the index of the dataframe, turning the index into a column named id. Finally, the original dataframe df is merged with this transformed dataframe using the merge function. df.merge( pd.crosstab(*df.explode('items').to_numpy().T) .reset_index(names='id')) SOLUTION 2 Another possible solution, whose steps are: First, the explode function is used to transform each item of a list-like to a row, replicating the index values. Then, the pivot_table function is applied to reshape the data based on the unique values in the items column, aggregating the count of each id for every item. The fill_value=0 ensures that any missing combinations are filled with zeros. The rename_axis method is used to remove the axis name for the columns. Finally, reset_index is called to reset the index of the dataframe, turning the index into a column. The original dataframe df is then merged with this transformed dataframe using the merge function. df.merge( df.explode('items') .pivot_table(index='id', columns='items', values='id', aggfunc=len, fill_value=0) .rename_axis(None, axis=1).reset_index()) Output: id items A B C 0 1 [A, B] 1 1 0 1 2 [A, B, C] 1 1 1 2 3 [A, C] 1 0 1 | 2 | 2 |
79,276,013 | 2024-12-12 | https://stackoverflow.com/questions/79276013/how-to-increase-the-space-between-the-subplots-and-the-figure | I'm using a python code to plot 3D surface. However, the z-axis label get cut by the figure. Here is the code : import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=(12, 10), facecolor='lightblue') x = np.linspace(0, 10) y = np.linspace(0, 10) X, Y = np.meshgrid(x, y) for idx in range(4): Z = np.cos(X) - np.sin(np.pi*idx/4 * Y) ax3D = fig.add_subplot(2, 2, idx+1, projection='3d') ax3D.plot_surface(X, Y, Z, cmap="viridis") ax3D.set_zlabel("Title") plt.show() The result : 3D surface plots Is it a possible to include the axis label in the axe ? Or to increase the space subplots and the figure ? I have tried different options such as : plt.subplots_adjust(left=0, bottom=0, right=0.8, top=0.7, wspace=0.5, hspace=0.2) or fig.tight_layout(); but nothing seems to resolve my problem. | One solution is to zoom out to decrease the size of each subplot (set_box_aspect). One can also play with the three angles that defines the view: elevation, azimuth, and roll (view_init). fig = plt.figure(figsize=(12/2, 10/2), facecolor='lightblue') x = np.linspace(0, 10) y = np.linspace(0, 10) X, Y = np.meshgrid(x, y) for idx in range(4): Z = np.cos(X) - np.sin(np.pi*idx/4 * Y) ax3D = fig.add_subplot(2, 2, idx+1, projection='3d') ax3D.view_init(elev=30, azim=70, roll=0) ax3D.set_box_aspect(aspect=(1,1,1), zoom=0.8) ax3D.plot_surface(X, Y, Z, cmap="viridis") ax3D.set_zlabel("Title") fig.tight_layout() plt.show() | 2 | 2 |
79,276,537 | 2024-12-12 | https://stackoverflow.com/questions/79276537/calling-a-wrapped-static-method-using-self-instead-of-class-name-passes-self-as | This question is related to Calling a static method with self vs. class name but I'm trying to understand the behavior when you wrap a static method so I can fix my wrapper. For example: import functools def wrap(f): @functools.wraps(f) def wrapped(*args, **kwargs): print(f"{f.__name__} called with args: {args}, kwargs: {kwargs}") return f(*args, **kwargs) return wrapped class Test: @staticmethod def static(): print("hello") def method(self): self.static() Test.static = wrap(Test.static) Test().method() will produce: static called with args: (<__main__.Test object at 0x1050b3fd0>,), kwargs: {} Traceback (most recent call last): File "/Users/aiguofer/decorator_example.py", line 20, in <module> Test().meth() File "/Users/aiguofer/decorator_example.py", line 16, in meth self.static() File "/Users/aiguofer/decorator_example.py", line 7, in wrapped return f(*args, **kwargs) TypeError: Test.static() takes 0 positional arguments but 1 was given However, if I change self.static() -> Test.static(), we get the expected output: static called with args: (), kwargs: {} hello My use case is that I need to wrap some methods from an external library, including a staticmethod on a class. Within that class, they call the static method from an instance method using self.<method_name>, which is causing the above issue in my wrapper. I thought I might be able to deal with this issue with a isinstance(f, staticmethod) but that seems to return False. I'd love to understand what is happening as well as potential solutions to this problem! | Method access in Python works by using the descriptor protocol to customize attribute access. When you access a staticmethod, it uses the descriptor protocol to make the attribute access return the underlying function. That's why isinstance(f, staticmethod) reported False, in the versions of your code where you tried that. Then when you try to assign Test.static = wrap(Test.static), wrap returns an ordinary function object. When you access one of those on an instance, they use the descriptor protocol to return a method object, with the first argument bound to the instance. You need to create a staticmethod object, to get staticmethod descriptor handling. You can bypass the descriptor protocol with inspect.getattr_static: import inspect import types def wrap_thing(thing): if isinstance(thing, types.FunctionType): return wrap(thing) elif isinstance(thing, staticmethod): return staticmethod(wrap(thing.__func__)) elif isinstance(thing, classmethod): return classmethod(wrap(thing.__func__)) elif isinstance(thing, property): fget, fset, fdel = [None if attr is None else wrap(attr) for attr in [thing.fget, thing.fset, thing.fdel]] return property(fget, fset, fdel, thing.__doc__) else: raise TypeError(f'unhandled type: {type(thing)}') Test.static = wrap_thing(inspect.getattr_static(Test, 'static')) | 2 | 4 |
79,274,712 | 2024-12-12 | https://stackoverflow.com/questions/79274712/numpy-matrix-tiling-and-multiplication-combination | I'm looking for a function capable of taking a m x n array, which repeats each row n times over a identity-like grid of m size. For demo: input = [[a1, b1, c1], [a2, b2, c2]] output = [[a1, b1, c1, 0, 0, 0], [a1, b1, c1, 0, 0, 0], [a1, b1, c1, 0, 0, 0], [ 0, 0, 0, a2, b2, c2], [ 0, 0, 0, a2, b2, c2], [ 0, 0, 0, a2, b2, c2]] Last time I asked something similar I was told of the Kronecker product, is there some similar function? | While looking similar, it appears to me that the given problem cannot be solved using a Kronecker product: with the latter, you could only manage to get repetitions of your complete input matrix as blocks of the result matrix. I stand corrected: For a solution that employs the Kronecker product, see @ThomasIsCoding's answer. In any case, what you need is the individual rows, repeated, as blocks of the result matrix. So that is what the code below does: it constructs a block matrix from the rows, using scipy.linalg.block_diag(), then repeats them as required, using numpy.repeat(). Note that block_diag() expects individual blocks as individual arguments, which is why a is unpacked with the * prefix. import numpy as np import scipy a = np.asarray([[11, 21, 31], [21, 22, 32], [31, 32, 33]]) print(a) # [[11 21 31] # [21 22 32] # [31 32 33]] blocked = scipy.linalg.block_diag(*a) print(blocked) # [[11 21 31 0 0 0 0 0 0] # [ 0 0 0 21 22 32 0 0 0] # [ 0 0 0 0 0 0 31 32 33]] result = np.repeat(blocked, a.shape[1], axis=0) print(result) # [[11 21 31 0 0 0 0 0 0] # [11 21 31 0 0 0 0 0 0] # [11 21 31 0 0 0 0 0 0] # [ 0 0 0 21 22 32 0 0 0] # [ 0 0 0 21 22 32 0 0 0] # [ 0 0 0 21 22 32 0 0 0] # [ 0 0 0 0 0 0 31 32 33] # [ 0 0 0 0 0 0 31 32 33] # [ 0 0 0 0 0 0 31 32 33]] | 4 | 4 |
79,273,432 | 2024-12-11 | https://stackoverflow.com/questions/79273432/python-multiprocessing-gets-slower-with-additional-cpus | I'm trying to parallelize code that should be embarrassingly parallel and it just seems to get slower the more processes I use. Here is a minimally (dys)functional example: import os import time import random import multiprocessing from multiprocessing import Pool, Manager, Process import numpy as np import pandas as pd def pool_func( number: int, max_number: int ) -> dict: pid = str(multiprocessing.current_process().pid) print('[{:2d}/{:2d} {:s}] Starting ...'.format(number, max_number, pid)) t0 = time.time() # # the following takes ~10 seconds on a separate node # for i in range(2): # print('[{:d}] Passed loop {:d}/2...'.format(number, i+1)) # time.sleep(5) # the following takes ~3.3 seconds on a separate node n = 1000 for _ in range(50): u = np.random.randn(n, n) v = np.linalg.inv(u) t1 = time.time() print('[{:2d}/{:2d} {:s}] Finished in {:.1f} seconds.'.format(number, max_number, pid, t1 - t0)) return {} if __name__ == "__main__": runs = [] count = 0 while count < 50: runs.append( (count, 50) ) count += 1 print(f"Number of runs to perform: {len(runs):d}") num_cpus = 4 print(f"Running job with {num_cpus:d} CPUs in parallel ...") # with Pool(processes=num_cpus) as pool: with multiprocessing.get_context("spawn").Pool(processes=num_cpus) as pool: results = pool.starmap(pool_func, runs) print('Main process done.') There are three features I want to point out. First, num_cpus can be changed to increase the number of workers in the pool. Second, I can change from the default 'fork' pool to the 'spawn' method, this doesn't seem to change anything. Finally, inside pool_func, the process that is running can be either a CPU intensive matrix inversion or a CPU-absent wait function. When I use the wait function, the processes run in approximately the correct time, about 10 seconds per process. When I use the matrix inversion, the process time increases with the number of processes in the following, approximate, way: 1 CPU : 3 seconds 2 CPUs: 4 seconds 4 CPUs: 30 seconds 8 CPUs: 95 seconds Here is a partial output of the script above, run as-is: Number of runs to perform: 50 Running job with 4 CPUs in parallel ... [ 0/50 581194] Starting ... [ 4/50 581193] Starting ... [ 8/50 581192] Starting ... [12/50 581191] Starting ... [ 0/50 581194] Finished in 24.7 seconds. [ 1/50 581194] Starting ... [ 4/50 581193] Finished in 29.3 seconds. [ 5/50 581193] Starting ... [12/50 581191] Finished in 30.3 seconds. [13/50 581191] Starting ... [ 8/50 581192] Finished in 32.2 seconds. [ 9/50 581192] Starting ... [ 1/50 581194] Finished in 26.9 seconds. [ 2/50 581194] Starting ... [ 5/50 581193] Finished in 30.3 seconds. [ 6/50 581193] Starting ... [13/50 581191] Finished in 30.8 seconds. [14/50 581191] Starting ... [ 9/50 581192] Finished in 32.8 seconds. [10/50 581192] Starting ... ... The process ids look unique to me. Clearly, there is some problem with scaling as adding more CPUs is causing the processes to run slower. There isn't any I/O at all in the processes that are being timed. These are pedestrian processes that I expected would work right out of the box. I have no idea why this is not working as I think it should. Why does this script have individual processes that take longer when I use more CPUs? When I run this on my macos laptop, it works as expected. But has similar scaling issues on a different remote linux computer I have access to. 
This might be a platform-specific problem, but I'll leave it up in case someone has seen it before and knows how to fix it. | One thing that can cause this kind of problem is nested parallelism, when each process in your process pool starts multiple threads to speed up operations within that process. You can investigate whether this is happening by looking at the one-minute load average. A program like htop can show you this. Run your program with varying process counts. If there is one thread per process, then you get a load average close to the number of processes after the program has run for a minute or so. If you have many threads per process, then you can get load averages which are much higher than the number of processes. If this happens, you can frequently get better results by limiting or eliminating NumPy parallelism, so that the number of threads times the number of processes is equal to your number of CPU cores. There are two ways you can do this. threadpoolctl can adjust the thread count in NumPy. You can do this once at process startup, or limit parallelism only in sections that you know won't benefit from parallelism. Example: from threadpoolctl import threadpool_limits import numpy as np with threadpool_limits(limits=1, user_api='blas'): # In this block, calls to the blas implementation (like openblas or MKL) # will be limited to use only one thread. They can thus be used jointly # with thread-parallelism. a = np.random.randn(1000, 1000) a_squared = a @ a Be aware that these limits are not shared between processes, so you'll need to call threadpool_limits() in the subprocess, not the parent process. Most BLAS libraries will read configuration for the number of threads from an environment variable, and limit their thread pool accordingly. You can see a common list of environment variables here. I am not a big fan of this approach, for two reasons. The first reason is that this depends on library load order. The environment variable needs to be set before NumPy is imported. The second reason is that it doesn't allow you to dynamically set threadpool size, which makes it harder to make targeted, small fixes. | 3 | 3 |
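To make the environment-variable route above concrete, here is a minimal sketch. Which variable actually matters depends on the BLAS backend your NumPy build links against, so setting the common ones is the usual shotgun approach; the key constraint is that they must be set before NumPy is imported in each worker process.

```python
# Sketch: cap BLAS/OpenMP threads via environment variables.
# Must run before numpy (and its BLAS backend) is imported.
import os

for var in (
    "OMP_NUM_THREADS",         # OpenMP, used by many BLAS builds
    "OPENBLAS_NUM_THREADS",    # OpenBLAS
    "MKL_NUM_THREADS",         # Intel MKL
    "VECLIB_MAXIMUM_THREADS",  # Apple Accelerate
    "NUMEXPR_NUM_THREADS",     # numexpr
):
    os.environ.setdefault(var, "1")

import numpy as np  # imported only after the limits are in place
```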
79,275,860 | 2024-12-12 | https://stackoverflow.com/questions/79275860/joining-two-dataframes-that-share-index-columns-id-columns-but-not-data-col | I find myself doing this: import polars as pl import sys red_data = pl.DataFrame( [ pl.Series("id", [0, 1, 2], dtype=pl.UInt8()), pl.Series("red_data", [1, 0, 1], dtype=pl.UInt8()), ] ) blue_data = pl.DataFrame( [ pl.Series("id", [0, 2, 3], dtype=pl.UInt8()), pl.Series("blue_data", [0, 1, 1], dtype=pl.UInt8()), ] ) # in both red and blue red_and_blue = red_data.join(blue_data, on=["id"]) # in red, but not blue red_not_blue = red_data.join(blue_data, on=["id"], how="anti").with_columns( blue_data=pl.lit(None, dtype=pl.UInt8()) ) # in blue, but not red blue_not_red = blue_data.join(red_data, on=["id"], how="anti").with_columns( red_data=pl.lit(None, dtype=pl.UInt8()) ) columns = ["id", "red_data", "blue_data"] sys.displayhook( pl.concat( [ red_and_blue.select(columns), red_not_blue.select(columns), blue_not_red.select(columns), ] ) ) shape: (4, 3) ┌─────┬──────────┬───────────┐ │ id ┆ red_data ┆ blue_data │ │ --- ┆ --- ┆ --- │ │ u8 ┆ u8 ┆ u8 │ ╞═════╪══════════╪═══════════╡ │ 0 ┆ 1 ┆ 0 │ │ 2 ┆ 1 ┆ 1 │ │ 1 ┆ 0 ┆ null │ │ 3 ┆ null ┆ 1 │ └─────┴──────────┴───────────┘ | It seems like you are looking for a simple pl.DataFrame.join with how="full" and coalesce=True. red_data.join(blue_data, on="id", how="full", coalesce=True) shape: (4, 3) ┌─────┬──────────┬───────────┐ │ id ┆ red_data ┆ blue_data │ │ --- ┆ --- ┆ --- │ │ u8 ┆ u8 ┆ u8 │ ╞═════╪══════════╪═══════════╡ │ 0 ┆ 1 ┆ 0 │ │ 1 ┆ 0 ┆ null │ │ 2 ┆ 1 ┆ 1 │ │ 3 ┆ null ┆ 1 │ └─────┴──────────┴───────────┘ | 1 | 3 |
79,275,745 | 2024-12-12 | https://stackoverflow.com/questions/79275745/odd-boolean-expression | I'm trying to debug (rewrite?) someone else's Python/cherrypy web app, and I ran across the following 'if' statement: if not filename.endswith(".dat") and ( filename.endswith(".dat") or not filename.endswith(".cup") ): raise RuntimeError( "Waypoint file {} has an unsupported format.".format( waypoint_file.filename ) ) I think this is the same as: if not A and (A or not B): If so, then: if A = False, then it reduces to if True and (False or not B): if True and not B = not B if A = True, then it reduces to if False: i.e. the if block will never execute I'm pretty sure that the intent of the if block is to warn the user that the extension of the file in question is neither .DAT nor .CUP, but it doesn't look to me that it actually executes that intent. I think the if block should be: if(not .DAT and not .CUP) = if not(.DAT or .CUP) Is that correct? | As you have two variables which could only have one of two values you can easily test each case, for example by doing for a in [False, True]: for b in [False, True]: print(a, b) if not a and (a or not b): print("Condition hold") else: print("Condition does not hold") which gives output False False Condition hold False True Condition does not hold True False Condition does not hold True True Condition does not hold As you can see it does only hold when both are False, therefore it is equivalent to if not a and not b: However in this particular case you do not even needs 2 conditions, as endswith argument can also be a tuple of suffixes to look for therefore you can write if not filename.endswith((".dat", ".cup")): print("Unsupported format") | 2 | 2 |
79,275,441 | 2024-12-12 | https://stackoverflow.com/questions/79275441/do-programers-need-to-manually-implement-optimization-such-as-loop-unfolding-et | I am recently learning some HPC topics and get to know that modern C/C++ compilers is able to detect places where optimization is entitled and conduct it using corresponding techniques such as SIMD, loop unfolding, etc, especially under flag -O3, with a tradeoff between runtime performance vs compile time and object file size. Then it immediately occurred to me that CPython interprets and executes on-fly, so I assume it cannot afford to conduct those compiler features because compiling time for it is equivalently runtime, so I did a toy experiment below: import time, random n = 512 A = [[random.random() for _ in range(n)] for _ in range(n)] B = [[random.random() for _ in range(n)] for _ in range(n)] C = [[0] * n for _ in range(n)] def matMul( A, B, C ): """ C = A .* B """ for i in range(0, n - 4, 4): for j in range(0, n - 4, 4): for k in range(0, n - 4, 4): C[i][j] = A[i][k] * B[k][j] C[i + 1][j + 1] = A[i + 1][k + 1] * B[k + 1][j + 1] C[i + 2][j + 2] = A[i + 2][k + 2] * B[k + 2][j + 2] C[i + 3][j + 3] = A[i + 3][k + 3] * B[k + 3][j + 3] C[i + 4][j + 4] = A[i + 4][k + 4] * B[k + 4][j + 4] # return C start = time.time() matMul( A, B, C ) end = time.time() print( f"Elapsed {end - start}" ) With the loop unfolding, the program finishes within 3 seconds, without it, it takes up to almost 20 seconds. Does that mean one needs to pay attention and manually implement those opt techniques when writing Python code? or does Python offer the optimization under any special setting? | Loop unrolling is useful because it can (1) reduce the overhead spent managing the loop and (2) at the assembly level, it let the processor run faster by eliminating branch penalties, keeping the instruction pipeline full, etc. (2) doesn't really apply to an interpreted language implementation like Python - it's already doing lots of branching and decision-making at the assembly level. It might gain you with (1), but my gut feeling is that that time is often dwarfed by interpreter overhead. The golden rule of performance optimization is to first measure and confirm that the bottleneck is where you think it is. Incidentally, your code has a bug: C[i][j] = A[i][k] + B[k][j] C[i + 1][j + 1] = A[i + 1][k + 1] + B[k + 1][j + 1] C[i + 2][j + 2] = A[i + 2][k + 2] + B[k + 2][j + 2] C[i + 3][j + 3] = A[i + 3][k + 3] + B[k + 3][j + 3] C[i + 4][j + 4] = A[i + 4][k + 4] + B[k + 4][j + 4] It processes cells (0, 0), (1, 1), (2, 2), (3, 3), and (4, 4) (even though (4, 4) will also be processed on the next iteration), but not (0, 1), (0, 2), (1, 0)... (That's the other reason for the golden rule of performance optimization: it's easy to make mistakes by optimizing code that doesn't need it.) As @Mat said, the standard approach for Python in particular is to use NumPy, which uses an optimized C implementation. All the above applies to CPython, the standard Python implementation. There are other Python implementations like Cython that offer their own optimizations; I'm less familiar with those. | 1 | 3 |
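To make the NumPy suggestion at the end of that answer concrete, here is a minimal sketch. Note it computes the true 512×512 matrix product (which the hand-unrolled loop in the question does not), so it illustrates the approach rather than reproducing the original code.

```python
import time
import numpy as np

n = 512
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.time()
C = A @ B  # delegates to an optimized C/BLAS kernel; no manual unrolling needed
print(f"Elapsed {time.time() - start:.4f} s")
```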
79,275,036 | 2024-12-12 | https://stackoverflow.com/questions/79275036/pandas-dataframe-multiindex-calculate-mean-and-add-additional-column-to-each-l | Given the following dataframe: Year 2024 2023 2022 Header N Result SD N Result SD N Result SD Vendor A 5 20 3 5 22 4 1 21 3 B 4 25 2 4 25 3 4 26 5 C 9 22 3 9 27 1 3 23 3 D 3 23 5 3 16 2 5 13 4 E 5 27 2 5 21 3 3 19 5 I would like to calculate for each year the mean value of the results column and then create a column, where the relative deviation to the mean is displayed (e.g. Results Value / mean-value * 100). The N and SD column were just included for completeness and is not needed for the calculation. Year 2024 2023 2022 Header N Result SD Deviation N Result SD Deviation N Result SD Deviation Vendor A 5 20 3 85.5 5 22 4 99.1 1 21 3 .. B 4 25 2 106 4 25 3 113 4 26 5 .. C 9 22 3 .. 9 27 1 .. 3 23 3 .. D 3 23 5 .. 3 16 2 .. 5 13 4 .. E 5 27 2 .. 5 21 3 .. 3 19 5 .. How what i be able to achieve that? Thanks a lot in advance! | Use DataFrame.xs for select Result labels in MultiIndex, divide by mean and append to original in concat, last for correct position add DataFrame.sort_index with parameter sort_remaining=False: df1 = df.xs('Result', axis=1, level=1, drop_level=False) out = (pd.concat([df, df1.div(df1.mean()).mul(100) .rename(columns={'Result':'Deviation'})], axis=1) .sort_index(axis=1, ascending=False, level=0, sort_remaining=False)) print (out) 2024 2023 2022 \ N Result SD Deviation N Result SD Deviation N Result SD A 5 20 3 85.470085 5 22 4 99.099099 1 21 3 B 4 25 2 106.837607 4 25 3 112.612613 4 26 5 C 9 22 3 94.017094 9 27 1 121.621622 3 23 3 D 3 23 5 98.290598 3 16 2 72.072072 5 13 4 E 5 27 2 115.384615 5 21 3 94.594595 3 19 5 Deviation A 102.941176 B 127.450980 C 112.745098 D 63.725490 E 93.137255 Another loop idea: for x in df.columns.levels[0]: df[(x, 'Deviation')] = df[(x, 'Result')].div(df[(x, 'Result')].mean()).mul(100) out = df.sort_index(axis=1, ascending=False, level=0, sort_remaining=False) print (out) 2024 2023 2022 \ N Result SD Deviation N Result SD Deviation N Result SD A 5 20 3 85.470085 5 22 4 99.099099 1 21 3 B 4 25 2 106.837607 4 25 3 112.612613 4 26 5 C 9 22 3 94.017094 9 27 1 121.621622 3 23 3 D 3 23 5 98.290598 3 16 2 72.072072 5 13 4 E 5 27 2 115.384615 5 21 3 94.594595 3 19 5 Deviation A 102.941176 B 127.450980 C 112.745098 D 63.725490 E 93.137255 | 1 | 1 |
79,274,733 | 2024-12-12 | https://stackoverflow.com/questions/79274733/ifdrational-is-not-json-serializable-using-pillow | I am using PIL in python to extract the metadata of an image. Here is my code: import json from PIL import Image, TiffImagePlugin import PIL.ExifTags img = Image.open("/home/user/DSCN0010.jpg") dct = { PIL.ExifTags.TAGS[k]: float(v) if isinstance(v, TiffImagePlugin.IFDRational) else v for k, v in img._getexif().items() if k in PIL.ExifTags.TAGS } print(json.dumps(dct)) I'm getting the following error: Error processing EXIF data: Object of type IFDRational is not JSON serializable As you can see in the code, I cast all the values of type IFDRational to float but I'm still getting the error. Here is the link to the image: https://github.com/ianare/exif-samples/blob/master/jpg/gps/DSCN0010.jpg | The problem is that you are just casting to float the IFDRational values that are directly in the root of the EXIF items. However, it looks like one of those items, called GPSInfo, is a dict that contains internally more IFDRational values. You would need a function to sanitise the values, which would ideally iterate recursively all possible nested data so that conversion is done at all levels. An initial idea would be to do it like this: import json import PIL.ExifTags from PIL import Image, TiffImagePlugin img = Image.open("/home/user/DSCN0010.jpg") def sanitise_value(value): # Base case: IFDRational to float if isinstance(value, TiffImagePlugin.IFDRational): return float(value) # Dict case: sanitise all values if isinstance(value, dict): for k, v in value.items(): value[k] = sanitise_value(v) # List/tuple case: sanitise all values and convert to list, # as a tuple in JSON will anyway be a list elif isinstance(value, (list, tuple)): value = [sanitise_value(i) for i in value] # Extra case: some values are byte-strings, so you have to # decode them in order to make them JSON serializable. I # decided to use 'replace' in case some bytes cannot be # decoded, but there are other options elif isinstance(value, (bytes, bytearray)): value = value.decode("utf-8", "replace") return value dct = { PIL.ExifTags.TAGS[k]: sanitise_value(v) # <-- We use here the sanitising function for k, v in img._getexif().items() if k in PIL.ExifTags.TAGS } print(json.dumps(dct)) This would work with that image, but feel free to test with other scenarios just in case it's still not a universal solution. Plus, keep in mind the comment regarding the byte-strings, because you might need to decode them in a different way depending on your needs. For instance, you might prefer to decode it as latin-1 instead of utf-8, or use other type of error handling as stated here. Hope this helps! | 2 | 2 |
79,274,376 | 2024-12-12 | https://stackoverflow.com/questions/79274376/slice-a-numpy-2d-array-using-another-2d-array | I have a 2D array of (4,5) and another 2D array of (4,2) shape. The second array contains the start and end indices that I need to filter out from first array i.e., I want to slice the first array using second array. np.random.seed(0) a = np.random.randint(0,999,(4,5)) a array([[684, 559, 629, 192, 835], [763, 707, 359, 9, 723], [277, 754, 804, 599, 70], [472, 600, 396, 314, 705]]) idx = np.array([[2,4], [0,3], [2,3], [1,3] ]) Expected output - can be either of following two formats. Only reason for padding with zeros is that variable length 2d arrays are not supported. [[629, 192, 835, 0, 0], [763, 707, 359, 9, 0], [804, 599, 0, 0, 0], [600, 396, 314, 0, 0] ] [[0, 0, 629, 192, 835], [763, 707, 359, 9, 0], [0, 0, 804, 599, 0], [0, 600, 396, 314, 0] ] | Another possible solution, which uses: np.arange to create a range of column indices based on the number of columns in a. A boolean mask m is created using logical operations to check if each column index falls within the range specified by idx. The np.newaxis is used to align dimensions for broadcasting. np.where is used to create a_mask, where elements in a are replaced with 0 if the corresponding value in m is False. np.argsort is used to get the indices that would sort each row of m (negated) in ascending order. np.take_along_axis is used to rearrange the elements of a_mask based on the sorted indices. cols = np.arange(a.shape[1]) m = (cols >= idx[:, 0, np.newaxis]) & (cols <= idx[:, 1, np.newaxis]) a_mask = np.where(m, a, 0) sort_idx = np.argsort(~m, axis=1) np.take_along_axis(a_mask, sort_idx, axis=1) NB: Notice that a_mask contains the unsorted version of the solution (that is essentially the approach followed by @mozway). Output: array([[629, 192, 835, 0, 0], [763, 707, 359, 9, 0], [804, 599, 0, 0, 0], [600, 396, 314, 0, 0]]) # a_mask array([[ 0, 0, 629, 192, 835], [763, 707, 359, 9, 0], [ 0, 0, 804, 599, 0], [ 0, 600, 396, 314, 0]]) | 5 | 4 |
79,273,994 | 2024-12-12 | https://stackoverflow.com/questions/79273994/pandas-multi-index-subset-selection | import pandas as pd import numpy as np # Sample data index = pd.MultiIndex.from_tuples([ ('A', 'a1', 'x'), ('A', 'a1', 'y'), ('A', 'a2', 'x'), ('A', 'a2', 'y'), ('B', 'b1', 'x'), ('B', 'b1', 'y'), ('B', 'b2', 'x'), ('B', 'b2', 'y') ], names=['level_1', 'level_2', 'level_3']) data = np.random.randn(len(index)) df = pd.DataFrame(data, index=index, columns=['value']) Say I have for example the above dataframe, which is multi-indexed with 3 levels. Now, my goal is to select a subset from this dataframe, where the first two levels of index comes from a subset of the cartesian product (A,B) * (a1, a2, b1, b2), say S = [(A, a1), (B, b2)]. I want to keep third level of the multi-index. I expect the result to be like Original DataFrame: value level_1 level_2 level_3 A a1 x 0.123456 y 0.234567 a2 x 0.345678 y 0.456789 B b1 x 0.567890 y 0.678901 b2 x 0.789012 y 0.890123 Subset DataFrame: value level_1 level_2 level_3 A a1 x 0.123456 y 0.234567 B b2 x 0.789012 y 0.890123 | Use DataFrame.droplevel for remove 3 level, so possible filter by subset by Index.isin in boolean indexing: S = [('A', 'a1'), ('B', 'b2')] out = df[df.droplevel(2).index.isin(S)] print (out) value level_1 level_2 level_3 A a1 x 0.545790 y -1.298511 B b2 x 0.018436 y -1.076408 | 2 | 2 |
79,273,312 | 2024-12-11 | https://stackoverflow.com/questions/79273312/mismatch-between-the-volume-shape-and-the-axes-grid-in-matplotlib | I have written a script to visualize a 3D volume using Matplotlib. The decay volume is explicitly centered at x = y = 0, but the grid displayed appears displaced relative to the volume. This seems to be an issue with the grid, not the decay volume definition. The script is provided below, and I also attach the result of its execution. Could you please tell me what might be causing the grid to be misaligned, and how can I fix it? I believe the issue is with how the grid limits or alignment are set. The script: # funcs/setup.py from mpl_toolkits.mplot3d.art3d import Poly3DCollection import matplotlib.pyplot as plt z_min = 32 z_max = 82 Delta_x_in = 1 Delta_x_out = 4 Delta_y_in = 2.7 Delta_y_out = 6.2 def x_max(z): return (Delta_x_in/2 * (z - z_max) / (z_min - z_max) + Delta_x_out/2 * (z - z_min) / (z_max - z_min)) def y_max(z): return (Delta_y_in/2 * (z - z_max) / (z_min - z_max) + Delta_y_out/2 * (z - z_min) / (z_max - z_min)) def plot_decay_volume(ax): x_min_zmin = -x_max(z_min) x_max_zmin_val = x_max(z_min) y_min_zmin = -y_max(z_min) y_max_zmin_val = y_max(z_min) x_min_zmax = -x_max(z_max) x_max_zmax_val = x_max(z_max) y_min_zmax = -y_max(z_max) y_max_zmax_val = y_max(z_max) vertices = [ [x_min_zmin, y_min_zmin, z_min], [x_max_zmin_val, y_min_zmin, z_min], [x_max_zmin_val, y_max_zmin_val, z_min], [x_min_zmin, y_max_zmin_val, z_min], [x_min_zmax, y_min_zmax, z_max], [x_max_zmax_val, y_min_zmax, z_max], [x_max_zmax_val, y_max_zmax_val, z_max], [x_min_zmax, y_max_zmax_val, z_max] ] edges = [ [vertices[0], vertices[1]], [vertices[1], vertices[2]], [vertices[2], vertices[3]], [vertices[3], vertices[0]], [vertices[4], vertices[5]], [vertices[5], vertices[6]], [vertices[6], vertices[7]], [vertices[7], vertices[4]], [vertices[0], vertices[4]], [vertices[1], vertices[5]], [vertices[2], vertices[6]], [vertices[3], vertices[7]] ] for edge in edges: xs, ys, zs = zip(*edge) ax.plot(xs, ys, zs, color='gray', linewidth=1) faces = [ [vertices[0], vertices[1], vertices[2], vertices[3]], [vertices[4], vertices[5], vertices[6], vertices[7]], [vertices[0], vertices[1], vertices[5], vertices[4]], [vertices[1], vertices[2], vertices[6], vertices[5]], [vertices[2], vertices[3], vertices[7], vertices[6]], [vertices[3], vertices[0], vertices[4], vertices[7]] ] face_collection = Poly3DCollection(faces, linewidths=0.5, edgecolors='gray', alpha=0.3) face_collection.set_facecolor('lightgray') ax.add_collection3d(face_collection) def visualize_decay_volume(): fig = plt.figure(figsize=(10, 8)) ax = fig.add_subplot(111, projection='3d') plot_decay_volume(ax) ax.set_xlabel('X (m)') ax.set_ylabel('Y (m)') ax.set_zlabel('Z (m)') x_lim = max(abs(x_max(z_min)), abs(x_max(z_max))) + 1 y_lim = max(abs(y_max(z_min)), abs(y_max(z_max))) + 1 ax.set_xlim(-x_lim, x_lim) ax.set_ylim(-y_lim, y_lim) ax.set_zlim(z_min - 5, z_max + 5) ax.set_title('Decay Volume') plt.show() if __name__ == "__main__": visualize_decay_volume() The output: I have removed the axes scaling, but got a similar problem: | This is simply a result of your choice of z-limits. You've chosen to adjust the limits so the volume is not on the bottom plane, so the perspective makes it look like the volume isn't centered. If you adjust ax.set_zlim(z_min - 5, z_max + 5) to be ax.set_zlim(z_min, z_max + 5) you will see that the volume appears centered. | 2 | 4 |
79,272,800 | 2024-12-11 | https://stackoverflow.com/questions/79272800/rearrange-and-encode-columns-in-pandas | i have data structured like this (working with pandas): ID|comp_1_name|comp_1_percentage|comp_2_name|comp_2_percentage| 1| name_1 | 13 | name_2 | 33 | 2| name_3 | 15 | name_1 | 46 | There are six comp_name/comp_percentage couples. Names are not equally distributed in all six "*_name" columns. I would like to obtain that kind of trasformation: ID|name_1|name_2|name_3| 1| 13 | 33 | 0 | 2| 46 | 0 | 15 | I tried transposing (.T) both the entire dataframe and isolating the comp_name, comp_percentage couples, but to no avail. | You can try using pd.wide_to_long with a little column renaming and shaping dataframe: # Renaming columns to move name and percentage to the front for pd.wide_to_long dfr = df.rename(columns=lambda x: '_'.join(x.rsplit('_', 1)[::-1])) (pd.wide_to_long(dfr, ['name_', 'percentage_'], 'ID', 'No', suffix='.*') .reset_index() .pivot(index='ID', columns='name_', values='percentage_') .fillna(0) .reset_index() .rename_axis(None, axis=1) .astype(int)) Output: ID name_1 name_2 name_3 0 1 13 33 0 1 2 46 0 15 | 1 | 2 |
79,273,078 | 2024-12-11 | https://stackoverflow.com/questions/79273078/python-dataframe-slicing-by-row-number | all Python experts, I'm a Python newbie, stuck with a problem which may look very simple to you. Say I have a data frame of 100 rows, how can I split it into 5 sub-frames, each of which contains the rows of 5n+0, 5n+1, 5n+2, 5n+3 and 5n+4 respectively? For instance, the 0th, 5th, 10th up to the 95th will go to one sub-frame, 1st, 6th, 11th up to 96th will go to the 2nd sub-frame and the 4th, 9th, 14th up to 99th will go to the 5th sub-frame? This is what I have tried: grouped = l_df.groupby(l_df.index//5) a_df = grouped.get_group(1) b_df = grouped.get_group(2) c_df = grouped.get_group(3) d_df = grouped.get_group(4) But each of my group got only 5 rows. Any suggestions? Thanks much in advance! | Just use iloc and slice as you would do with a list i.e. start:end:step. Example: df = pd.DataFrame({"A":range(100)}) display(df.T) display(df.iloc[0::5].T) display(df.iloc[1::5].T) display(df.iloc[2::5].T) # ... | 2 | 0 |
79,273,153 | 2024-12-11 | https://stackoverflow.com/questions/79273153/convert-float-base-2-to-base-10-to-100-decimal-places | I'm trying to find more decimal places to the 'Prime Constant'. The output maxes out at 51 decimal places after using getcontext().prec=100 from decimal import * getcontext().prec = 100 base = 2 s = "0.0110101000101000101000100000101000001000101000100000100000101000001000101000001000100000100000001000101000101000100000000000001000100000101000000000101000001000001000100000100000101000000000101000101000000000001000000000001000101000100000101000000000100000100000100000101000001000101000000000100000000000001000101000100000000000001000001000000000101000100000100000001000001000001000100000100000001000100000001000000000101000000000101000001000100000100000001000101000100000000000100000001000100000001000100000100000000000101000000000000000001000001000000000100000100000101000001000000000100000100000101000001000001000101000000000001000000000101000100000100000101000000000001000100000100000001000000000100000001000000000100000001000001000001000100000001000001000100000001000100000000000001000000000100000000000101000000000101000101000000000100000000000001000101000100000000000001000101000100000000000000000001000100000001000000000100000001000100000100000100000000000001000100000100000100000001000001000000000001000100000101000" split = s.split(".") joinsplit = "".join(split) position = len(split[1]) if len(split) > 1 else 0 divisor = base ** position base10 = Decimal(int(joinsplit, base=base) / divisor) print(base10) | Try base10 = Decimal(int(joinsplit, base=base)) / Decimal(divisor) You need to have the Decimal package doing the division. | 1 | 1 |
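A standalone illustration of why the accepted fix works, separate from the prime-constant computation itself: getcontext().prec only governs arithmetic between Decimal objects, so dividing plain Python numbers first goes through a 64-bit float and caps the meaningful precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 100

print(Decimal(1) / Decimal(3))  # 100 significant digits, as requested
print(Decimal(1 / 3))           # exact value of the float 1/3 -- only ~17 digits are meaningful
```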
79,273,143 | 2024-12-11 | https://stackoverflow.com/questions/79273143/what-does-colon-do-in-python-dictionaries | I stumbled upon this code while debugging something. Is the second line valid Python code? I tried running this and it ran successfully without any errors. Shouldn't I be getting a syntax error, since = is used for assigning values in dictionaries? dict1 = {'temp':10} dict1['temp'] : 5 | What you're seeing is the type annotation syntax. In your case it does absolutely nothing, but it is not invalid. Annotations only need to be valid expressions that evaluate without raising exceptions when the line is executed; nothing is assigned to the dictionary. | 1 | 2 |
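A short demonstration of the difference, based on the snippet in the question: the colon line is an annotation that leaves the dictionary untouched, while = is what actually stores a value.

```python
dict1 = {'temp': 10}

dict1['temp'] : 5      # annotation only -- nothing is stored
print(dict1['temp'])   # 10, unchanged

dict1['temp'] = 5      # assignment actually updates the dict
print(dict1['temp'])   # 5

x: int = 7             # the usual place you see ':' like this: a type hint
```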
79,270,470 | 2024-12-11 | https://stackoverflow.com/questions/79270470/error-during-python-setup-py-develop-and-pip-install-r-requirements-txt | I'm encountering an issue while trying to install the dependencies for my Python project using setup.py and pip install -r requirements.txt in Windows Poweshell. Here's my setup.py: import setuptools with open("README.md","r") as fh: long_description = fh.read() setuptools.setup( name = "titanic-prediction", version = "0.0.1", author = "author", author_email = "[email protected]", description = "simple repo for exercise", long_description = "long_description", long_description_content_type = "text/markdown", url = "", packages = setuptools.find_packages(), classifiers = ["Programming Language :: Python :: 3"], install_requires = [ "matplotlib", "numpy", "pandas == 1.1.4", "scikit-learn == 0.22", #(>= 3.5) "seaborn == 0.11.0"], python_requires = ">=3.7" ) and this is my requirements.txt install_requires =[ "matplotlib", "numpy", "pandas==1.1.4", "pydantic==1.6.1", "scikit-learn", "seaborn==0.11.0", ], Then I try to run python setup.py develop, but I got this error: running develop D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\develop.py:41: EasyInstallDeprecationWarning: easy_install command is deprecated. !! ******************************************************************************** Please avoid running ``setup.py`` and ``easy_install``. Instead, use pypa/build, pypa/installer or other standards-based tools. See https://github.com/pypa/setuptools/issues/917 for details. ******************************************************************************** !! easy_install.initialize_options(self) D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\_distutils\cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated. !! ******************************************************************************** Please avoid running ``setup.py`` directly. Instead, use pypa/build, pypa/installer or other standards-based tools. See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. ******************************************************************************** !! self.initialize_options() running egg_info writing titanic_prediction.egg-info\PKG-INFO writing dependency_links to titanic_prediction.egg-info\dependency_links.txt writing requirements to titanic_prediction.egg-info\requires.txt writing top-level names to titanic_prediction.egg-info\top_level.txt reading manifest file 'titanic_prediction.egg-info\SOURCES.txt' writing manifest file 'titanic_prediction.egg-info\SOURCES.txt' running build_ext Creating d:\data analyst\github\lokalhangatt\pacmann - git\minggu_4\4. advanced data manipulation\pertemuan 8 - data preprocessing pipeline\virenv_preprocessing\lib\site-packages\titanic-prediction.egg-link (link to .) titanic-prediction 0.0.1 is already the active version in easy-install.pth Installed d:\data analyst\github\lokalhangatt\pacmann - git\minggu_4\4. 
advanced data manipulation\pertemuan 8 - data preprocessing pipeline\virenv_preprocessing\simple_pipeline Processing dependencies for titanic-prediction==0.0.1 Searching for scikit-learn==0.22 Reading https://pypi.org/simple/scikit-learn/ Downloading https://files.pythonhosted.org/packages/4f/2c/04e10167991ed6209fb251a212ca7c3148006f335f4aadf1808db2cbeda8/scikit-learn-0.22.tar.gz#sha256=314abf60c073c48a1e95feaae9f3ca47a2139bd77cebb5b877c23a45c9e03012 Best match: scikit-learn 0.22 Processing scikit-learn-0.22.tar.gz Writing C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.cfg Running scikit-learn-0.22\setup.py -q bdist_egg --dist-dir C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\egg-dist-tmp-a2evsbeb C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html from pkg_resources import parse_version Partial import of sklearn during the build process. Traceback (most recent call last): File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 195, in check_package_status module = importlib.import_module(package) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\Python312\Lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1387, in _gcd_import File "<frozen importlib._bootstrap>", line 1360, in _find_and_load File "<frozen importlib._bootstrap>", line 1324, in _find_and_load_unlocked ModuleNotFoundError: No module named 'numpy' Traceback (most recent call last): File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 167, in save_modules yield saved File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 209, in setup_context yield File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 270, in run_setup _execfile(setup_script, ns) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 52, in _execfile exec(code, globals, locals) File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 303, in <module> File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 291, in setup_package File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 219, in check_package_status ImportError: numpy is not installed. scikit-learn requires numpy >= 1.11.0. Installation instructions are available on the scikit-learn website: http://scikit-learn.org/stable/install.html During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. 
ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\simple_pipeline\setup.py", line 6, in <module> setuptools.setup( File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\__init__.py", line 117, in setup return distutils.core.setup(**attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\_distutils\core.py", line 183, in setup return run_commands(dist) ^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\_distutils\core.py", line 199, in run_commands dist.run_commands() File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\_distutils\dist.py", line 954, in run_commands self.run_command(cmd) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\dist.py", line 995, in run_command super().run_command(command) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\_distutils\dist.py", line 973, in run_command cmd_obj.run() File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\develop.py", line 35, in run self.install_for_development() File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\develop.py", line 127, in install_for_development self.process_distribution(None, self.dist, not self.no_deps) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 788, in process_distribution distros = WorkingSet([]).resolve( ^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pkg_resources\__init__.py", line 892, in resolve dist = self._resolve_dist( ^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pkg_resources\__init__.py", line 928, in _resolve_dist dist = best[req.key] = env.best_match( ^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pkg_resources\__init__.py", line 1266, in best_match return self.obtain(req, installer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. 
ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pkg_resources\__init__.py", line 1302, in obtain return installer(requirement) if installer else None ^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 710, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 737, in install_item dists = self.install_eggs(spec, download, tmpdir) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 934, in install_eggs return self.build_and_install(setup_script, setup_base) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 1206, in build_and_install self.run_setup(setup_script, setup_base, args) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\command\easy_install.py", line 1192, in run_setup run_setup(setup_script, args) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 260, in run_setup with setup_context(setup_dir): File "C:\Program Files\Python312\Lib\contextlib.py", line 158, in __exit__ self.gen.throw(value) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 201, in setup_context with save_modules(): File "C:\Program Files\Python312\Lib\contextlib.py", line 158, in __exit__ self.gen.throw(value) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 180, in save_modules saved_exc.resume() File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 154, in resume raise exc.with_traceback(self._tb) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 167, in save_modules yield saved File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 209, in setup_context yield File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. 
ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 270, in run_setup _execfile(setup_script, ns) File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\setuptools\sandbox.py", line 52, in _execfile exec(code, globals, locals) File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 303, in <module> File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 291, in setup_package File "C:\Users\ADMINI~1\AppData\Local\Temp\easy_install-dxw9q95a\scikit-learn-0.22\setup.py", line 219, in check_package_status ImportError: numpy is not installed. scikit-learn requires numpy >= 1.11.0. Installation instructions are available on the scikit-learn website: http://scikit-learn.org/stable/install.html Then when I try to run requirements.txt, it shows an error like this: ERROR: Invalid requirement: '–r': Expected package name at the start of dependency specifier –r ^ Is there any mistakes from my setup.py or requirements.txt files? And how to solve it? | You use a wrong character for -r option; Python options parser (and any othe options parsers) expects simple dash (ascii minus) but you use n-dash. Compare these: -r (simple dash, minus) –r (n-dash) —r (m-dash) Use simple dash, minus: pip install -r | 1 | 1 |
79,271,959 | 2024-12-11 | https://stackoverflow.com/questions/79271959/upos-mappings-tensorflow-datasets-tdfs | I am using the tensorflow tfds dataset xtreme/pos which I retrieve using the code below. It is annotated with universal part of speech (UPOS) labels. These are int values. It's fairly easy to map them back to their part of speech by creating my own mapping (0 = ADJ, 7 = NOUN, etc.), but I was wondering if there is a way of retrieving these class mappings from the tfds dataset? (orig_train, orig_dev, orig_test), ds_info = tfds.load( 'xtreme_pos/xtreme_pos_en', split=['train', 'dev', 'test'], shuffle_files=True, with_info=True ) | One way is to dig into the TensorFlow Datasets code to see where the list of POS tags is defined and then import it to use in your own code. You can find the list of POS tags in the GitHub code of TensorFlow Datasets (the UPOS constant): https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conllu_dataset_builder_utils.py#L31 The item order is their index, so with display(pd.Series(UPOS)) you get: Another way would be to extract the items from the upos column of tfds.as_dataframe (taking a few rows, concatenating the upos values, splitting by the separating character and taking the set() to get the unique values). | 1 | 2 |
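As a hedged follow-up, the UPOS list can usually be imported straight from the module linked in the answer; the import path below is taken from that file's location and may move between tensorflow-datasets versions, so treat it as an assumption and copy the list manually if the import fails:

```python
# Assumes the UPOS constant still lives at the path cited in the answer.
from tensorflow_datasets.core.dataset_builders.conll.conllu_dataset_builder_utils import UPOS

# The position of each tag in UPOS is its integer class id.
id_to_upos = dict(enumerate(UPOS))
print(id_to_upos[0], id_to_upos[7])  # expected to match the question's mapping (e.g. ADJ, NOUN) if the order is unchanged
```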
79,271,631 | 2024-12-11 | https://stackoverflow.com/questions/79271631/why-reference-count-of-none-object-is-fixed | I was experimenting with the refcount of objects, and I noticed the reference count of the None object does not change when I bind identifiers to None. I observed this behavior in Python 3.13. Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.getrefcount(None) 4294967295 >>> list_of_nones = [None for _ in range(100)] >>> sys.getrefcount(None) 4294967295 >>> del list_of_nones >>> sys.getrefcount(None) 4294967295 This behavior is in contrast with the behavior of Python 3.10: Python 3.10.15 (main, Sep 7 2024, 00:20:06) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> >>> sys.getrefcount(None) 4892 >>> list_of_nones = [None for _ in range(100)] >>> sys.getrefcount(None) 4990 >>> >>> del list_of_nones >>> >>> sys.getrefcount(None) 4890 In 3.10, the reference count of None decreases and increases as identifiers are bound to it and deleted. But in 3.13, the reference count is always fixed. Can someone explain this behavior? | This is due to PEP 683. To avoid any need to ever write to the memory of certain "immortal" objects, like None, those objects now have a fixed refcount that never changes, no matter how many actual references to the object exist. This helps with multi-threaded and multi-process performance, avoiding issues with things like cache invalidation and copy-on-write. | 2 | 3 |
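To see the contrast PEP 683 introduces, here is a small sketch comparing an ordinary object with the immortal None (the exact saturated constant is a CPython implementation detail):

```python
import sys

# On CPython 3.12+ immortal objects report a fixed, saturated refcount.
print(sys.getrefcount(None))           # a large constant such as 4294967295

# Ordinary objects are still reference counted normally.
obj = object()
before = sys.getrefcount(obj)
refs = [obj for _ in range(100)]
print(sys.getrefcount(obj) - before)   # 100: the new references show up
del refs
print(sys.getrefcount(obj) - before)   # 0: back to the starting count
```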
79,271,271 | 2024-12-11 | https://stackoverflow.com/questions/79271271/fill-in-rows-to-dataframe-based-on-another-dataframe | I have 2 dataframes that look like this: import pandas as pd data = {'QuarterYear': ["Q3 2023", "Q4 2023", "Q1 2024", 'Q2 2024', "Q3 2024", "Q4 2024"], 'data1': [5, 6, 2, 1, 10, 3], 'data2': [12, 4, 2, 7, 2, 9], 'data3': [2, 42, 2, 6, 2, 4]} df = pd.DataFrame(data) This looks like: QuarterYear data1 data2 data3 0 Q3 2023 5 12 2 1 Q4 2023 6 4 42 2 Q1 2024 2 2 2 3 Q2 2024 1 7 6 4 Q3 2024 10 2 2 5 Q4 2024 3 9 4 data1 = {'QuarterYear': ["Q4 2023", 'Q2 2024', "Q3 2024"], 'data1': [5, 9, 10], 'data2': [7, 7, 3], 'data3': [2, 11, 3]} df1 = pd.DataFrame(data1) This looks like: QuarterYear data1 data2 data3 0 Q4 2023 5 7 2 1 Q2 2024 9 7 11 2 Q3 2024 10 3 3 What I would like to do is get df1 and make it the same size as df1, that is fill in for the other quarters with values for data1,2, and 3 being 0 if they are not already in df1. So the end result should look like this: QuarterYear data1 data2 data3 0 Q3 2023 0 0 0 1 Q4 2023 5 7 2 2 Q1 2024 0 0 0 3 Q2 2024 9 7 11 4 Q3 2024 10 3 3 5 Q4 2024 0 0 0 | Use DataFrame.set_index with DataFrame.reindex: out = (df1.set_index('QuarterYear') .reindex(df['QuarterYear'], fill_value=0) .reset_index()) print (out) QuarterYear data1 data2 data3 0 Q3 2023 0 0 0 1 Q4 2023 5 7 2 2 Q1 2024 0 0 0 3 Q2 2024 9 7 11 4 Q3 2024 10 3 3 5 Q4 2024 0 0 0 Another idea: out = df1.merge(df[['QuarterYear']], how='right').fillna(0) print (out) QuarterYear data1 data2 data3 0 Q3 2023 0.0 0.0 0.0 1 Q4 2023 5.0 7.0 2.0 2 Q1 2024 0.0 0.0 0.0 3 Q2 2024 9.0 7.0 11.0 4 Q3 2024 10.0 3.0 3.0 5 Q4 2024 0.0 0.0 0.0 Thank you @ouroboros1 for comment: In both cases, you're assuming that df['QuarterYear'] at least contains all values in df1['QuarterYear']. In OP's example, that the case, but if it were not, both methods will actually filter out rows from df1. That does not seem to be what the OP wants. E.g. change "Q3 2024" to "Q3 2025" for df, and "Q3 2024" will be gone from the result. So, with reindex, you would in that case need a union, with merge outer. out = df1.merge(df[['QuarterYear']], how='outer').fillna(0) print (out) QuarterYear data1 data2 data3 0 Q1 2024 0.0 0.0 0.0 1 Q2 2024 9.0 7.0 11.0 2 Q3 2023 0.0 0.0 0.0 3 Q3 2024 0.0 0.0 0.0 4 Q3 2025 10.0 3.0 3.0 5 Q4 2023 5.0 7.0 2.0 6 Q4 2024 0.0 0.0 0.0 out = (df1.set_index('QuarterYear') .reindex(pd.Index(df['QuarterYear']).union(df1['QuarterYear']), fill_value=0) .reset_index()) print (out) QuarterYear data1 data2 data3 0 Q1 2024 0 0 0 1 Q2 2024 9 7 11 2 Q3 2023 0 0 0 3 Q3 2024 0 0 0 4 Q3 2025 10 3 3 5 Q4 2023 5 7 2 6 Q4 2024 0 0 0 If necessary, here is solution also for sorting by quarters: out = (df1.set_index('QuarterYear') .reindex(pd.Index(df['QuarterYear']).union(df1['QuarterYear']), fill_value=0) .reset_index() .sort_values('QuarterYear', key=lambda x: pd.to_datetime(x.str[-4:] + x.str[:2], format='mixed'), ignore_index=True)) print (out) QuarterYear data1 data2 data3 0 Q3 2023 0 0 0 1 Q4 2023 5 7 2 2 Q1 2024 0 0 0 3 Q2 2024 9 7 11 4 Q3 2024 0 0 0 5 Q4 2024 0 0 0 6 Q3 2025 10 3 3 | 1 | 3 |
79,270,601 | 2024-12-11 | https://stackoverflow.com/questions/79270601/error-during-installation-with-pip-install | I'm encountering an issue while trying to install the dependencies for my Python project using pip install pandas==1.1.4 in Windows Poweshell. Then I got an Error like the picture below: DEPRECATION: Loading egg at d:\data analyst\github\lokalhangatt\pacmann - git\minggu_4\4. advanced data manipulation\pertemuan 8 - data preprocessing pipeline\virenv_preprocessing\lib\site-packages\seaborn-0.11.0-py3.12.egg is deprecated. pip 25.1 will enforce this behaviour change. A possible replacement is to use pip for package installation. Discussion can be found at https://github.com/pypa/pip/issues/12330 Collecting pandas==1.1.4 Downloading pandas-1.1.4.tar.gz (5.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 491.1 kB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [32 lines of output] <string>:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html <string>:45: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. Traceback (most recent call last): File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module> main() File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Data Analyst\Github\lokalhangatt\Pacmann - Git\Minggu_4\4. 
ADVANCED DATA MANIPULATION\Pertemuan 8 - Data Preprocessing Pipeline\virenv_preprocessing\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-zwgg3cb7\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-zwgg3cb7\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires self.run_setup() File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-zwgg3cb7\overlay\Lib\site-packages\setuptools\build_meta.py", line 522, in run_setup super().run_setup(setup_script=setup_script) File "C:\Users\Administrator\AppData\Local\Temp\pip-build-env-zwgg3cb7\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup exec(code, locals()) File "<string>", line 792, in <module> File "<string>", line 759, in setup_package File "C:\Users\Administrator\AppData\Local\Temp\pip-install-um_nsy30\pandas_cfb0da7acd094bb49eb02e9b0460de24\versioneer.py", line 1439, in get_version return get_versions()["version"] ^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-install-um_nsy30\pandas_cfb0da7acd094bb49eb02e9b0460de24\versioneer.py", line 1368, in get_versions cfg = get_config_from_root(root) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Administrator\AppData\Local\Temp\pip-install-um_nsy30\pandas_cfb0da7acd094bb49eb02e9b0460de24\versioneer.py", line 400, in get_config_from_root parser = configparser.SafeConfigParser() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'configparser' has no attribute 'SafeConfigParser'. Did you mean: 'RawConfigParser'? [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. It says that the error originates from a subprocess, and is likely not a problem with pip. But I don't understand what it means. | You are getting the DEPRECATION warning and subsequent Error message because you're trying to install an older release of this package with an unsupported python version. You seem to be using Python 3.12 (as seen from the warning) but pandas 1.1.4 is compatible with python 3.6 - 3.9. One way to fix this is to install the stable release of pandas which supports python 3.12 or if you must use the older version, downgrade your python to any of the supported version mentioned above. | 2 | 1 |
79,263,771 | 2024-12-9 | https://stackoverflow.com/questions/79263771/where-does-scipys-adaptive-step-size-method-for-finite-differences-originate | Inside the KrylovJacobian class from SciPy, there is this method: def _update_diff_step(self): mx = abs(self.x0).max() mf = abs(self.f0).max() self.omega = self.rdiff * max(1, mx) / max(1, mf) which would be the same as: This modifies the step size that the finite difference method uses, however, I cannot find the origin of this expression, or why it works. Does anybody know the source of this method or the reasoning behind it? | Reading the journal articles which SciPy cites on the documentation page,* I cannot find any choice of omega which is exactly equivalent to what SciPy is doing. However, there are a couple of cases which are similar. High-level rationale Does anybody know the source of this method or the reasoning behind it? Reading D.A. Knoll and D.E. Keyes, J. Comp. Phys. 193, 357 (2004). DOI:10.1016/j.jcp.2003.08.010, one of the article SciPy cites, I found a high-level rationale for SciPy's choices. As shown above, the Jacobian-vector product approximation is based on a Taylor series expansion. Here, we discuss various options for choosing the perturbation parameter, ε in Eq. (10), [Editor's note: this is the variable which SciPy calls omega, divided by the norm of v.] which is obviously sensitive to scaling, given u and v. If ε is too large, the derivative is poorly approximated and if it is too small the result of the finite difference is contaminated by floating-point roundoff error. The best ε to use for a scalar finite-difference of a single argument can be accurately optimized as a balance of these two quantifiable trade-offs. However, the choice of ε for the vector finite difference is as much of an art as a science. So there are two concerns being balanced here: Change of slope: If the step size is big, then the jacobian of the function may change between f(x) and f(x + step). Roundoff: If x is big, then the step size must also be big, or there will be roundoff error when x + step is computed. Ideally, to address the first concern, you would look at the second derivative of the function. However, we don't know the first or second derivative of the function. That's the whole point of finding the step size. I think that it is looking at the size of f(x) as the next best thing: if f(x) is big, then either the user put in a really bad guess for x when they started the solver, or this is an area of the function where the function changes rapidly. Roundoff is addressed similarly, where if x is big, then the step will be big as well. Comparison to existing approaches SciPy's method is most similar to equation (11), from the same paper. In this equation, n represents the number of dimensions of the problem, v represents the point where we are trying to find the Jacobian-vector product, u represents the direction of the product, and b represents an arbitrary constant which is approximately the square root of machine epsilon. (Note: this is similar to self.rdiff, which defaults to the square root of machine epsilon.) We can algebraically manipulate this to find the similarities and differences between this and SciPy's formula. 
# difference between SciPy's omega and Knoll's epsilon epsilon = omega / norm(v) epsilon = 1/(n*norm(v)) * (sum(b * abs(u[i]) for i in range(n)) + b) # Combine the two equations omega / norm(v) = 1/(n*norm(v)) * (sum(b * abs(u[i]) for i in range(n)) + b) # Multiply both sides by norm of v omega = 1/n * (sum(b * abs(u[i]) for i in range(n)) + b) # Factor out b omega = b/n * (sum(abs(u[i]) for i in range(n)) + 1) # Move n into parens omega = b * (sum(abs(u[i]) for i in range(n))/n + 1/n) # Recognize sum(...)/n as mean omega = b * (mean(abs(u)) + 1/n) This is somewhat similar to what SciPy is doing, except: It uses mean instead of max. It avoids taking a step of zero size by adding 1/n rather than using max(1, ...). It doesn't adjust for f(x). Avoiding infinite or zero step sizes There are two step sizes which must be avoided at all costs: zero and infinity. If one takes a step of zero, then this will cause a division by zero. A step size of infinity will tell us nothing about the local Jacobian. This is a problem, because both x and f(x) can be zero. The max(1, ...) step is likely placed there to avoid this. Conclusion I can't find a pre-existing journal paper which takes the same approach. I suspect that this equation is just an approach which is experimentally justified and works in practice. *Note: I only read the papers by D.A. Knoll and D.E. Keyes, and A.H. Baker and E.R. Jessup and T. Manteuffel. The first reference on that page was added after omega was chosen, so I did not read it. | 2 | 1 |
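For a concrete feel of the two rules discussed above, here is a small illustrative comparison with made-up x0, f0 and direction vectors (the variable names are mine, not SciPy's; rdiff/b defaults to the square root of machine epsilon):

```python
import numpy as np

rdiff = b = np.sqrt(np.finfo(float).eps)  # default relative step
x0 = np.array([2.0, -30.0, 0.5])          # current iterate (made up)
f0 = np.array([1e-3, 4.0, -0.2])          # residual at x0 (made up)
direction = np.array([1.0, 0.0, 2.0])     # vector multiplied by the Jacobian
n = x0.size

# SciPy's rule, as in the _update_diff_step method quoted in the question.
omega_scipy = rdiff * max(1, abs(x0).max()) / max(1, abs(f0).max())

# Knoll & Keyes Eq. (11), rearranged as in the derivation above:
# omega = b * (mean(|direction|) + 1/n)
omega_knoll = b * (np.mean(np.abs(direction)) + 1.0 / n)

print(f"SciPy omega:       {omega_scipy:.3e}")
print(f"Knoll-style omega: {omega_knoll:.3e}")
```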
79,264,683 | 2024-12-9 | https://stackoverflow.com/questions/79264683/error-loading-pytorch-model-checkpoint-pickle-unpicklingerror-invalid-load-ke | I'm trying to load the weights of a Pytorch model but getting this error: _pickle.UnpicklingError: invalid load key, '\x1f'. Here is the weights loading code: import os import torch import numpy as np # from data_loader import VideoDataset import timm device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Device being used:', device) mname = os.path.join('./CDF2_0.pth') checkpoints = torch.load(mname, map_location=device) print("Checkpoint loaded successfully.") model = timm.create_model('legacy_xception', pretrained=True, num_classes=2).to(device) model.load_state_dict(checkpoints['state_dict']) model.eval() I have tried with different Pytorch versions. I have tried to inspect the weights by changing the extension to .zip and opening with archive manager but can't fix the issue. Here is a public link to the weights .pth file, I'm trying to load. Any help is highly appreciated as I have around 40 trained models that took around one month for training! | The error is typical when trying to open a gzip file as if it was a pickle or pytorch file, because gzips start with a 1f byte. But this is not a valid gzip: it looks like a corrupted pytorch file. Indeed, looking at hexdump -C file.pt | head (shown below), most of it looks like a pytorch file (which should be a ZIP archive, not gzip, containing a python pickle file named data.pkl). But the first few bytes are wrong: instead of starting like a ZIP file as it should (bytes 50 4B or ASCII PK), it starts like a gzip file (1f 8b 08 08). In fact it's exactly as if the first 31 bytes were replaced with a valid, empty gzip file (with a timestamp ff 35 29 67 pointing to November 4, 2024 9:00:47 PM GMT). Your file: 00000000 1f 8b 08 08 ff 35 29 67 02 ff 43 44 46 32 5f 30 |.....5)g..CDF2_0| 00000010 2e 70 74 68 00 03 00 00 00 00 00 00 00 00 00 44 |.pth...........D| 00000020 46 32 5f 30 2f 64 61 74 61 2e 70 6b 6c 46 42 0f |F2_0/data.pklFB.| 00000030 00 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a |.ZZZZZZZZZZZZZZZ| 00000040 80 02 7d 71 00 28 58 08 00 00 00 62 65 73 74 5f |..}q.(X....best_| 00000050 61 63 63 71 01 63 6e 75 6d 70 79 2e 63 6f 72 65 |accq.cnumpy.core| ... (inspecting the pickle data we can see a dictionary {"best_acc": ..., "state_dict": ...}) with the typical contents of a checkpoint of a pytorch model). A valid zipped pickle produced by torch.save({"best_acc": np.array([1]), "state_dict": ...}, "CDF2_0.pth"): 00000000 50 4b 03 04 00 00 08 08 00 00 00 00 00 00 00 00 |PK..............| 00000010 00 00 00 00 00 00 00 00 00 00 0f 00 13 00 43 44 |..............CD| 00000020 46 32 5f 30 2f 64 61 74 61 2e 70 6b 6c 46 42 0f |F2_0/data.pklFB.| 00000030 00 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a |.ZZZZZZZZZZZZZZZ| 00000040 80 02 7d 71 00 28 58 08 00 00 00 62 65 73 74 5f |..}q.(X....best_| 00000050 61 63 63 71 01 63 6e 75 6d 70 79 2e 63 6f 72 65 |accq.cnumpy.core| ... 
A gzip containing an empty file with the same name and timestamp (with gzip --best) has 31 bytes, the same as your file's prefix (except for the two 'Operating System' bytes): 00000000 1f 8b 08 08 ff 35 29 67 02 03 43 44 46 32 5f 30 |.....5)g..CDF2_0| 00000010 2e 70 74 68 00 03 00 00 00 00 00 00 00 00 00 |.pth...........| Edit: Here's a script that might fix such files in general: #!/usr/bin/env python3 import os import sys from pathlib import Path from shutil import copy2 from tempfile import TemporaryDirectory import numpy as np import torch CHUNK_SIZE = 4 def main(orig_path: Path) -> None: fixed_path = orig_path.with_suffix(".fixed.pth") copy2(orig_path, fixed_path) with TemporaryDirectory() as temp_dir: temp_path = Path(temp_dir) / orig_path.name torch.save({"best_acc": np.array([1]), "state_dict": {}}, temp_path) with open(temp_path, "rb") as f_temp: with open(fixed_path, "rb+") as f_fixed: while True: content = f_fixed.read(CHUNK_SIZE) replacement = f_temp.read(CHUNK_SIZE) if content == replacement: break print(f"Replacing {content!r} with {replacement!r}") f_fixed.seek(-CHUNK_SIZE, os.SEEK_CUR) f_fixed.write(replacement) if __name__ == "__main__": assert len(sys.argv) == 2, "Expected exactly one argument (the path to the broken .pth file)." main(Path(sys.argv[1])) | 3 | 6 |
79,255,009 | 2024-12-5 | https://stackoverflow.com/questions/79255009/memory-problem-when-serializing-zipped-files-in-pyspark-on-databricks | I want to unzip many files in 7z format in PySpark on Databricks. The zip files contain several thousand tiny files. I read the files using binary File and I use a UDF to unzip the files: schema = ArrayType(StringType()) @F.udf(returnType=schema) def unzip_content_udf(content): extracted_file_contents= [] with py7zr.SevenZipFile(io.BytesIO(content), mode='r') as z: for name, bytes_stream in z.readall().items(): if name.startswith("v1") or name.startswith("v2"): unzipped_content = bytes_stream.read().decode(ENCODING) extracted_file_contents.append(unzipped_content) return extracted_file_contents df = spark.read.format("binaryFile").load("/mnt/file_pattern*") df = df.withColumn("unzipped_files", unzip_content_udf(F.col("content"))) df.write.mode("overwrite").parquet("/mnt/test_dump_unzipped") This works well for smaller files, but if I specify one of the larger files (150 MB zipped, 4.5GB unzipped) the process dies and I get: Py4JJavaError: An error occurred while calling o1665.parquet. ValueError: can not serialize object larger than 2G I guess, it makes sense since the serialization limit is smaller than the unzipped file size. Do you have any ideas on how to either increase the limit or chunk the size of the unzip operation below the limit? | Typical stackoverflow answer would be: You're doing it wrong. This seems like a misuse of Spark as you're not really using spark features. You're mostly using it to distribute unzipping across multiple nodes of a cluster. E.g. you could've used dispy instead. can not serialize object larger than 2G IMO it's very reasonable to say if a single row in your Dataframe/table is more than 2GB, then you have some data modelling issues. Primary problems in trying to do this using a udf are: a udf can not return multiple rows (i.e. split one zip file's 3GB contents into 3x1GB rows) a row can not be bigger than 2GB. Java byte-Array has max capacity of 2GB. I don't know if this is the reason for Spark's limitation, but what it means is that you might not be able to throw more money/hardware at this problem and change some spark config to serialize 4.5 GB, i.e. run the code you posted as is on bigger hardware. Overall, IMO you're running into the issues because you're using wrong tool (spark) to do this. Options: If you're really not in a hurry (performance) then just use a ThreadPoolExecutor or something and unzip all files using simple multi-threaded-python code. Catch is it doesn't scale horizontally. If you have zillions of files and petabytes of data and option 1 would take years: Write simple python program to list all files and then issue ssh commands to bunch of worker nodes to unzip the files in parallel across a cluster. Listing can be done in parallel (ThreadPoolExecutor). Use something like dispy instead, for unzipping files. Redistribute unzipped files in groups smaller than 2GB. Then use Spark to read the redistributed-unzipped-files and write back as parquet. There are other frameworks if you like. Using udf. Make it a 2 pass process. First pass to create sub-groups of inner files and second to actually decode. @F.udf(returnType=ArrayType(StringType())) def create_sub_groups(zip_path): # parse metadata of zip_path to get list of files and their sizes inside it. No content decoding. 
# create sub groups of inner files such that each group's total size is less than 2GB (or a few MB IMO) # return sub-groups @F.udf(ArrayType(StringType())) def unzip_sub_group(...): # similar to unzip_content_udf, but only decodes files in sub-group df = spark.read.format("binaryFile").load("/mnt/file_pattern*") df.select('path', 'length', 'content').show() # +-----------------------+------+--------------------+ # |path |length|content | # +-----------------------+------+--------------------+ # |file:/path/to/file1.7z |123456|[byte array content]| # |file:/path/to/file2.7z |123456|[byte array content]| # +-------------------+---+------+--------------------+ df = df.withColumn('sub_groups', create_sub_groups(df.path)) df.show() # +-----------------------+------+--------------------+-----------------------------------+ # |path |length|content |sub_groups | # +-----------------------+------+--------------------+-----------------------------------+ # |file:/path/to/file1.7z |123456|[byte array content]|[[v1f1, v1f2,..], [v1f3, v2f1,..]] | # |file:/path/to/file2.7z |123456|[byte array content]|[[v1f5, v2f4,..], [v2f3, v1f9,..]] | # +-------------------+---+------+--------------------+-----------------------------------+ df = df.withColumn('sub_group', F.explode(df.sub_groups)).drop('sub_groups') df.show() # +-----------------------+------+--------------------+----------------+ # |path |length|content |sub_group | # +-----------------------+------+--------------------+----------------+ # |file:/path/to/file1.7z |123456|[byte array content]|[v1f1, v1f2,..] | # |file:/path/to/file1.7z |123456|[byte array content]|[v1f3, v2f1,..] | # |file:/path/to/file2.7z |123456|[byte array content]|[v1f5, v2f4,..] | # |file:/path/to/file2.7z |123456|[byte array content]|[v2f3, v1f9,..] | # +-------------------+---+------+--------------------+----------------+ df = df.withColumn("unzipped_files", unzip_sub_group(df.path, df.sub_group, df.content)) df.write.mode("overwrite").parquet("/mnt/test_dump_unzipped") | 1 | 1 |
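To make the create_sub_groups pseudocode a bit more concrete, here is a hedged sketch that reads only the 7z metadata (names and uncompressed sizes) and packs inner files into groups under a size budget. Assumptions: it takes the binary content column rather than a path, the 500 MB budget is an arbitrary choice well under the 2 GB row limit, and the FileInfo attribute names are how I recall py7zr's API, so verify them against your version:

```python
import io
import py7zr
import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, StringType

MAX_GROUP_BYTES = 500 * 1024 * 1024  # assumption: stay well under the 2 GB row limit

@F.udf(returnType=ArrayType(ArrayType(StringType())))
def create_sub_groups(content):
    # Read archive metadata only (no content decoding).
    with py7zr.SevenZipFile(io.BytesIO(content), mode="r") as z:
        entries = [
            (info.filename, info.uncompressed)          # FileInfo fields as I recall them
            for info in z.list()
            if info.filename.startswith(("v1", "v2"))   # same filter as the question's UDF
        ]
    groups, current, current_size = [], [], 0
    for name, size in entries:
        if current and current_size + size > MAX_GROUP_BYTES:
            groups.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        groups.append(current)
    return groups
```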
79,250,961 | 2024-12-4 | https://stackoverflow.com/questions/79250961/joining-with-condition-in-pandas-like-in-sql-on-clause | I want to write the below type of query in Python. But basic python filtering acts like I did use WHERE clause in SQL, not ON for filtering. Could anyone please help? Appreciate for your support. select * from t1 left join t2 on t1.key = t2.key and t2.x2 <= t1.x and t2.y2 > t1.y I tried the below Python code, and it is not working same with SQL query. df = t1.merge(t2, how = 'left', on = 'key') df = df[df[x2] <= df[x]] df = df[df[y2] > df[y]] | Merge the dataframes, then check which locs does not match the condition, then set those locs as NaN. While converting to NaN the datatype becomes float, so you might need to convert them to integer. t1_data = {'key': [1, 2, 3, 4], 'x': [5, 6, 7, 8], 'y': [10, 11, 12, 13]} t2_data = {'key': [2, 3, 4], 'x2': [6, 7, 8], 'y2': [16, 17, 18]} t1 = pd.DataFrame(t1_data) t2 = pd.DataFrame(t2_data) merged = pd.merge(t1, t2, on='key', how='left') condition = (merged['x2'] <= merged['x']) & (merged['y2'] > merged['y']) # This identifies the rows which does not satisfy the condition, and then set x2 and y2 to NaN merged.loc[~condition & merged['y2'].notna(), ['x2', 'y2']] = pd.NA Pandas Output key x y x2 y2 0 1 5 10 NaN NaN 1 2 6 11 6.0 16.0 2 3 7 12 7.0 17.0 3 4 8 13 8.0 18.0 The output with SQL matches with the Pandas output. Fiddle SQL Output key x y x2 y2 1 5 10 null null 2 6 11 6 16 3 7 12 7 17 4 8 13 8 18 | 2 | 2 |
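A small optional follow-up to the note about NaN turning the joined columns into floats: pandas' nullable Int64 dtype keeps integer values alongside missing markers (sketch with toy data):

```python
import pandas as pd

t1 = pd.DataFrame({"key": [1, 2], "x": [5, 6], "y": [10, 11]})
t2 = pd.DataFrame({"key": [2], "x2": [6], "y2": [16]})

merged = pd.merge(t1, t2, on="key", how="left")
# Unmatched rows hold NaN, so x2/y2 become float64; Int64 restores integer semantics with <NA>.
merged[["x2", "y2"]] = merged[["x2", "y2"]].astype("Int64")
print(merged)
print(merged.dtypes)
```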
79,269,901 | 2024-12-10 | https://stackoverflow.com/questions/79269901/custom-link-on-column | I am working with django-tables2 to display some patient information on a page. I am creating the table like this: class PatientListView(tables.Table): name = tables.Column('Practice') patientid = tables.Column() firstname = tables.Column() lastname = tables.Column() dob = tables.Column() addressline1 = tables.Column() addressline2 = tables.Column() city = tables.Column() state = tables.Column() zipcode = tables.Column() class Meta: template_name = 'django_tables2/bootstrap.html' and then I am populating the table in my view with the result of an sql query like this: table = PatientListView(patients) I would like to ideally make each row of the table clickable so clicking anywhere on the table row would take me to a separate url defined by me. I would also settle for having a specific cell to click that would take me to a separate url. I have seen the linkify option, but from what I've read of the documentation it looks like linkify does redirects to django model pages, but I am not using models for this database as the database is created and managed by another application, and I am just reading and displaying that information. If django-tables2 is not the right solution for this issue I am open to hearing suggestions of other ways I can accomplish my goal. | Option 1: turn every column into a link You can make a callable that converts the record to the link, and add that to all columns, so: def get_link(record): return f'www.example.com/patients/{record.patientid}' class PatientListView(tables.Table): name = tables.Column('Practice', linkify=get_link) patientid = tables.Column(linkify=get_link) firstname = tables.Column(linkify=get_link) lastname = tables.Column(linkify=get_link) dob = tables.Column(linkify=get_link) addressline1 = tables.Column(linkify=get_link) addressline2 = tables.Column(linkify=get_link) city = tables.Column(linkify=get_link) state = tables.Column(linkify=get_link) zipcode = tables.Column(linkify=get_link) class Meta: template_name = 'django_tables2/bootstrap.html' Option 2: make the row clickable Another option is to generate a data-href attribute and use JavaScript then to make it behave like a link, with: def get_link(record): return f'www.example.com/patients/{record.patientid}' class PatientListView(tables.Table): name = tables.Column('Practice') patientid = tables.Column() firstname = tables.Column() lastname = tables.Column() dob = tables.Column() addressline1 = tables.Column() addressline2 = tables.Column() city = tables.Column() state = tables.Column() zipcode = tables.Column() class Meta: row_attrs = {'data-href': get_link} and then add some JavaScript: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js"> </script> $(function() { $('tr[data-href]').on('click', function() { window.location = $(this).data('href'); }); }); and perhaps style the row with: tr[data-href] { cursor: pointer; } | 1 | 1 |
79,261,741 | 2024-12-8 | https://stackoverflow.com/questions/79261741/text-recognition-with-pytesseract-and-cv2-or-other-libs | Please download the png file and save it as 'sample.png'. I want to extract english characters in the png file. import cv2 import pytesseract img = cv2.imread("sample.png") gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) thr = cv2.adaptiveThreshold(gry, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 23, 100) bnt = cv2.bitwise_not(thr) txt = pytesseract.image_to_string(bnt, config="--psm 6") res = ''.join(i for i in txt if i.isalnum()) print(res) The output is ee Another try: import cv2 import pytesseract pytesseract.pytesseract.tesseract_cmd = r'/bin/tesseract' image = cv2.imread('sample.png') gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) inverted_image = cv2.bitwise_not(gray_image) binary_image = cv2.adaptiveThreshold(inverted_image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2) denoised_image = cv2.medianBlur(binary_image, 3) kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 4)) eroded_image = cv2.erode(denoised_image, kernel) mask = (denoised_image == 255) & (eroded_image == 0) denoised_image[mask] = 0 cv2.imwrite('preprocessed_image.png', denoised_image) text = pytesseract.image_to_string(denoised_image, config='--psm 6') print("result:", text.strip()) Get more accurate result than the first: result:CRSP It is 5 instead of S in the sample.png. How can I improve the code then? Where is the number 5 then? | When working with images containing grid lines and noise, it's important to preprocess the image effectively to improve OCR accuracy. I've added some line removal, denoising, and text amplification. You might need to tweak the parameters a little bit but i've got the expected result from your sample image and also tried other samples with the same grid pattern, some different fonts and different colors and all worked correctly. You may need to change dilation kernel and line detection params to achieve a more accurate result. Here's a Google Colab notebook for trial and error Important Note: The reason that rizzling's code didn't work for you was the tesseract version. 
I have made sure the notebook linked above is using tesseract 5.4.1, which is the exact version i'm using on my machine import cv2 import numpy as np import pytesseract from PIL import Image import matplotlib.pyplot as plt def preprocess_image(image_path): # Read the image img = cv2.imread(image_path) # Convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Removing gridlines (most important step) kernel = np.ones((2,2), np.uint8) gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel) # Thresholding _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) # Denoise denoised = cv2.fastNlMeansDenoising(thresh) # Dilate kernel = np.ones((1,3),np.uint8) dilated = cv2.dilate(denoised, kernel, iterations=1) return dilated def perform_ocr(image_path): # Preprocess the image processed_image = preprocess_image(image_path) # Configure Tesseract parameters custom_config = r'--oem 3 --psm 6 -c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' # Perform OCR try: text = pytesseract.image_to_string(processed_image, config=custom_config) return text.strip() except Exception as e: print(f"Error during OCR: {str(e)}") return None # Make sure you upload the image to colab for testing image_path = 'sample.png' # Perform OCR and get the text extracted_text = perform_ocr(image_path) if extracted_text: print("Extracted Text:") print(extracted_text) else: print("Failed to extract text") # Optional: Display the processed image processed = preprocess_image(image_path) plt.imshow(processed, cmap='gray') plt.axis('off') plt.show() | 4 | 0 |
79,263,593 | 2024-12-9 | https://stackoverflow.com/questions/79263593/unable-to-login-subdomain-of-django-tenant-rest-framework-token-doesnt-query | I have a multi tenant app using the library django-tenants. When logging into the core URL with the created superuser, the login page works completely fine. When using the subdomain address to login, the page returns a 500 error when using the correct username and password that exists and the console says Uncaught (in promise) SyntaxError: Unexpected token '<', " <!doctype "... is not valid JSON. When using an login username that doesn't exist, the output is 400 Bad Request. The odd part, is I can access the admin panel of a subdomain, log in completely fine, change the URL to go to a part of the app - and the user is now logged in. After being logged in, I can submit forms completely fine and data is being captured, it is solely the login page on a subdomain that has an error submitting. I am lost since subsequent data forms submit and go correctly, and there are no invalid tokens (from what I see) views.py def dual_login_view(request): """Authenticate users with both Django Authentication System and Django Rest Framework""" if request.method == "POST": username = request.POST.get("login") password = request.POST.get("password") user = authenticate(request, username=username, password=password) if user is not None: # Login user and create session login(request, user) # Create or get API token token, created = Token.objects.get_or_create(user=user) response_data = {"status": "success", "token": token.key} return JsonResponse(response_data) else: return JsonResponse( {"status": "error", "message": "Username with the password doesn't exist!"}, status=400 ) return JsonResponse( {"status": "error", "message": "Invalid request method"}, status=405 ) class CustomLoginView(LoginView): def form_invalid(self, form): for error in form.errors.values(): for e in error: messages.error(self.request, e) return super().form_invalid(form) class UserLoginAPIView(APIView): def post(self, request): if not (request.user and request.user.is_authenticated): return Response( {"error": "User not recognized"}, status=status.HTTP_401_UNAUTHORIZED ) try: token, created = Token.objects.get_or_create(user=request.user) return Response({"token": f"Token {token.key}"}, status=status.HTTP_200_OK) except Exception as e: print(e) return Response( {"error": "Invalid credentials"}, status=status.HTTP_401_UNAUTHORIZED ) class UserLoginAPIView(APIView): def post(self, request): """Handle user login and return an authentication token.""" username = request.data.get("username") password = request.data.get("password") try: user = MSPAuthUser.objects.get(username=username) # Verify password if not user.check_password(password): raise ValueError("Invalid credentials") token, created = Token.objects.get_or_create(user=user) return Response({"token": f"Token {token.key}"}, status=status.HTTP_200_OK) except ObjectDoesNotExist: return Response( {"error": "Invalid credentials"}, status=status.HTTP_401_UNAUTHORIZED ) except ValueError as e: return Response({"error": str(e)}, status=status.HTTP_401_UNAUTHORIZED) login.js <script> let form = document.querySelector('form'); form.addEventListener('submit', function(event) { event.preventDefault(); var formData = new FormData(this); form.classList.add('was-validated') if (!form.checkValidity()) { event.preventDefault() event.stopPropagation() return; } fetch("{% url 'dual_login' %}", { method: 'POST', body: formData, headers: { 'X-CSRFToken': 
formData.get('csrfmiddlewaretoken'), } }).then(response => response.json()) .then(data => { if (data.status === 'success') { // Store the token (e.g., in local storage or a cookie) localStorage.setItem('token', data.token); // Redirect or update the UI window.location.href = '/'; } else { Swal.fire({ icon: 'error', title: data.message, showConfirmButton: false, timer: 1500, }) } }); }); </script> EDIT Adding network response: NETWORK RESPONSE: 1 requests 757 B transferred 145 B resources Request URL: https://subdomain.primarydomain.com Request Method: POST Status Code: 500 Internal Server Error Remote Address: 1xx.1xx.xxx.xxx:443 Referrer Policy: same-origin HTTP/1.1 500 Internal Server Error Server: nginx/1.26.0 (Ubuntu) Date: Fri, 13 Dec 2024 18:41:39 GMT Content-Type: text/html; charset=utf-8 Content-Length: 145 Connection: keep-alive Vary: Cookie, origin X-Frame-Options: DENY X-Content-Type-Options: nosniff Referrer-Policy: same-origin Cross-Origin-Opener-Policy: same-origin access-control-allow-origin: https://probleu.rocket-command.com Set-Cookie: csrftoken=Wx.....jO; expires=Fri, 12 Dec 2025 18:41:39 GMT; Max-Age=31449600; Path=/; SameSite=Lax; Secure Strict-Transport-Security: max-age=31536000; includeSubDomains The payload includes username, password, as well as the CSRF token. EDIT 2 I created a log of the application and it spat back the following error: During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/MSPDashboard/msp-dashboard/venv/lib/python3.12/site-packages/django/db/backends/base/base.py", line 313> return self.connection.commit() ^^^^^^^^^^^^^^^^^^^^^^^^ psycopg2.errors.ForeignKeyViolation: insert or update on table "authtoken_token" violates foreign key constraint "aut> DETAIL: Key (user_id)=(d34fd88e-b7bb-4f5c-880a-43f1cb98230d) is not present in table "accounts_mspauthuser". This line appears to be the error since the user does exist (photo below) just not in the public schema: token, created = Token.objects.get_or_create(user=user) So the question becomes, how can I use from rest_framework.authtoken.models to query from the proper database schema? Proof user exists: | The Uncaught (in promise) SyntaxError: Unexpected token '<', " <!doctype "... is not valid JSON error occurs because the Token call raises an uncaught exception, as you noticed. Which in turn means the token object doesn't contain a token, it contains the HTML response displaying the exception response. And you have rightly deduced that the solution is to fix the Token call. how can I use from rest_framework.authtoken.models to query from the proper database schema? By the time you make the Token call, you've already authenticated the user so the next step is to look up their tenant association. Then you can use the schema_context() context manager to direct the query to the correct schema. 
from django_tenants.utils import schema_context from .tenants import Tenant # Or whatever your tenant model is def dual_login_view(request): """Authenticate users with both Django Authentication System and Django Rest Framework""" if request.method == "POST": username = request.POST.get("login") password = request.POST.get("password") user = authenticate(request, username=username, password=password) if user is not None: # Login user and create session login(request, user) # Retrieve tenant information tenant = user.tenant # Or whatever is appropriate for you to look up the relationship between the user and the tenant schema_name = tenant.schema_name # Create or get API token, schema-aware with schema_context(schema_name): token, created = Token.objects.get_or_create(user=user) response_data = {"status": "success", "token": token.key} return JsonResponse(response_data) else: return JsonResponse( {"status": "error", "message": "Username with the password doesn't exist!"}, status=400 ) return JsonResponse( {"status": "error", "message": "Invalid request method"}, status=405 ) | 1 | 1 |
79,268,152 | 2024-12-10 | https://stackoverflow.com/questions/79268152/why-does-beautifulsoup-output-self-closing-tags-in-html | I've tried with 3 different parsers: lxml, html5lib, html.parser All of them output invalid HTML: >>> BeautifulSoup('<br>', 'html.parser') <br/> >>> BeautifulSoup('<br>', 'lxml') <html><body><br/></body></html> >>> BeautifulSoup('<br>', 'html5lib') <html><head></head><body><br/></body></html> >>> BeautifulSoup('<br>', 'html.parser').prettify() '<br/>\n' All of them have /> "self-closing" void tags. How can I get BeautifulSoup to output HTML that has void tags without />? | Use the html5 formatter: If you pass in formatter="html5", it’s the same as formatter="html", but Beautiful Soup will omit the closing slash in HTML void tags like “br”: from bs4 import BeautifulSoup BeautifulSoup('<br>', 'html.parser').decode(formatter="html5") Which outputs: '<br>' | 2 | 1 |
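Since the question also used prettify(), it is worth noting that the same formatter argument works there too (assuming a reasonably recent Beautiful Soup, which is where the "html5" formatter exists):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<br>', 'html.parser')
print(soup.prettify(formatter="html5"))  # '<br>' followed by a newline, no self-closing slash
print(soup.decode(formatter="html5"))    # '<br>'
```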
79,269,686 | 2024-12-10 | https://stackoverflow.com/questions/79269686/alternate-background-colors-in-styled-pandas-df-that-also-apply-to-multiindex-in | SETUP I have the following df: import pandas as pd import numpy as np arrays = [ np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]), np.array(["one", "two", "total", "one", "two", "total"]), ] df = pd.DataFrame(np.random.randn(6, 4), index=arrays) df.index.set_names(['item','count'],inplace=True) def style_total(s): m = s.index.get_level_values('count') == 'total' return np.where(m, 'font-weight: bold; background-color: #D2D2D2', None) def style_total_index(s): return np.where(s == 'total', 'font-weight: bold; background-color: #D2D2D2','') (df .style .apply_index(style_total_index) .apply(style_total) ) WHAT I WANT TO DO/DESIRED OUTPUT I would like to apply alternating background colors to each row (as well as the MultiIndex) while still keeping the separately colored and formatted total row. Here is a visual example of what I am trying to accomplish: As you can see, all rows as well as the MultiIndex are alternating colors with the total keeping its own custom formatting. WHAT I HAVE TRIED I have tried a whole bunch of things. I came across this question and this question that both use set_table_styles(), however, this issue on the pandas Github says that "Styler.set_table_styles is not exported to excel. This will not be change...". So, set_table_styles() is not an option here. How can I go about achieving the desired output? | Here's one approach: Using np.resize See below for itertools.cycle option. def style_total_index(s, colors, color_map, total): # `s` is series with index level values as *values*, name 0 for level 0, etc. level = s.name if level == 0: # level 0 is quite easy: # check if `s` equals its shift + `cumsum` result. # `result % 2 == 1` gets us `True` for 1st ('fruit'), 3rd, etc. value, # and `False` for 2nd ('vegetable'), 4th, etc. # these boolean values we map onto the colors with `color_map`. style = (s.ne(s.shift()).cumsum() % 2 == 1).map(color_map) else: # level 1: # check `s` == 'total' + shift (with `False` for 1st NaN) + `cumsum`. # `groupby` and `transform` to resize `colors` to match size of group. # mask to add `total` style style = s.eq('total').shift(fill_value=False).cumsum() style = style.groupby(style).transform(lambda x: np.resize(colors, len(x)) ) style = style.mask(s == 'total', total) return style def style_total(s, colors, total): # `s` is series (a column) with index like `df` # so, similar to `level=1` above, but access `s.index.get_level_values(1)` # and convert that to a series before checking equal to 'total'. # rest the same. style = (s.index.get_level_values(1).to_series().eq('total') .shift(fill_value=False).cumsum() ) style = style.groupby(style).transform(lambda x: np.resize(colors, len(x))) style = style.mask(s.index.get_level_values(1) == 'total', total) return style.values # using `np.random.seed(0)` for reproducibility colors = ['background-color: #CFE2F3', 'background-color: #FFF2CC'] total = 'font-weight: bold; background-color: #D2D2D2' color_map = {k: color for k, color in zip([True, False], colors)} df_styled = (df .style .apply_index(style_total_index, colors=colors, color_map=color_map, total=total) .apply(style_total, colors=colors, total=total) ) df_styled Output: Export to Excel (for the header setting, cf. 
here): # removing default formatting for header from pandas.io.formats import excel excel.ExcelFormatter.header_style = None df_styled.to_excel('df_styled.xlsx') Output Excel: The above method is set up to handle the situation in which unique level-0 values might have an uneven number of associated rows. That is to say, I have assumed that each level-0 needs to start with #CFE2F3 (light blue), regardless of whether the previous group of rows ended with that color. E.g., suppose we add an extra row only for 'fruit', the above gets us: If you just want to 'cycle' through regardless of the transition to a new group, here's an approach that uses cycle from itertools: from itertools import cycle def style_total_index(s, colors, color_map, total): level = s.name if level == 0: style = (s.ne(s.shift()).cumsum() % 2 == 1).map(color_map) else: colors_cycle = cycle(colors) style = [next(colors_cycle) if i != 'total' else total for i in df.index.get_level_values(level)] return style def style_total(s, colors, total): level = 'count' colors_cycle = cycle(colors) style = [next(colors_cycle) if i != 'total' else total for i in df.index.get_level_values(level)] return style colors = ['background-color: #CFE2F3', 'background-color: #FFF2CC'] total = 'font-weight: bold; background-color: #D2D2D2' color_map = {k: color for k, color in zip([True, False], colors)} df_styled = (df .style .apply_index(style_total_index, colors=colors, color_map=color_map, total=total) .apply(style_total, colors=colors, total=total) ) df_styled Output: Of course, both of these methods are adjustable to reach whatever alternation with a bit of tweaking. | 1 | 4 |
79,269,716 | 2024-12-10 | https://stackoverflow.com/questions/79269716/attributeerror-pathway-object-has-no-attribute-hidden | I am attempting to create a nested dictionary 'world'. Due to the planned size and complexity I am hoping to automate the creation a bit. However, when I attempt to run it, I get "AttributeError: 'Pathway' object has no attribute 'hidden'". The intended structure of the dictionary is as follows **world village a. rooms i. villageEntrance 1. hidden : false 2. pathways: [p1,etc] ii. etc b. alignment : good c. etc void a. rooms i. voidRoom 1. hidden : false 2. pathways: [p1] b. alignment : neutral c. etc** class Pathway(dict): def __init__(self, hidden, travelType): self.dict = { self.hidden : hidden, self.travelType : travelType } class Room(dict): def __init__(self, hidden, pathways): self.dict = { self.hidden : hidden, self.pathways : pathways } class Location(dict): def __init__(self, rooms, alignment): self.dict = { self.alignment : alignment, self.rooms : rooms } p1 = Pathway(True, "teleport") voidRoom = Room(True, [p1]) villageEntrance = Room(False, [p1]) void = Location([voidRoom], "neutral") village = Location([villageEntrance], "good") world = { "void" : void, "village" : village } def locationDataRetrieval(path): val = world.get(path) return val print(world) I looked up proper syntax for dictionaries and as far as I can see I have it correct. I also attempted removing the pathway class but the error moved on to the next class. | This is python, not javascript. class Pathway(dict): def __init__(self, hidden, travelType): self.dict = { self.hidden : hidden, self.travelType : travelType } PEP 8 asks you to instead name it travel_type. It's not strictly necessary to define all attributes in the __init__() constructor, but it's good practice, it's considered polite. Here, you're attempting to dereference self.hidden before you have assigned anything to it. A typical ctor would look like: def __init__(self, hidden, travel_type): self.hidden = hidden self.travel_type = travel_type After that it's perfectly fine to dereference self.hidden. You can even put it in a dict if you like. It's unclear why that would be desirable, though. I recommend you settle on object attributes, or settle on dicts, without trying to mix them. Storing the same thing in two places tends to lead to maintenance trouble later on. A @dataclass decorator can save you the trouble of writing a ctor: from dataclasses import dataclass @dataclass class Pathway: hidden: bool travel_type: str | 1 | 1 |
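Extending the dataclass suggestion to all three classes gives a sketch like the following (names are snake_cased per the PEP 8 point above; the list[...] annotations need Python 3.9+):

```python
from dataclasses import dataclass, field

@dataclass
class Pathway:
    hidden: bool
    travel_type: str

@dataclass
class Room:
    hidden: bool
    pathways: list[Pathway] = field(default_factory=list)

@dataclass
class Location:
    rooms: list[Room] = field(default_factory=list)
    alignment: str = "neutral"

p1 = Pathway(hidden=True, travel_type="teleport")
void_room = Room(hidden=True, pathways=[p1])
village_entrance = Room(hidden=False, pathways=[p1])

world = {
    "void": Location(rooms=[void_room], alignment="neutral"),
    "village": Location(rooms=[village_entrance], alignment="good"),
}

def location_data_retrieval(path):
    return world.get(path)

print(location_data_retrieval("village"))
```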
79,269,012 | 2024-12-10 | https://stackoverflow.com/questions/79269012/how-to-style-all-cells-in-a-row-of-a-specific-multiindex-value-in-pandas | SETUP I have the following df: import pandas as pd import numpy as np arrays = [ np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]), np.array(["one", "two", "total", "one", "two", "total"]), ] df = pd.DataFrame(np.random.randn(6, 4), index=arrays) df.index.set_names(['item','count'],inplace=True) WHAT I AM TRYING TO DO I am trying to style df so that each cell where count == 'total' is bolded. WHAT I HAVE TRIED I was able to index all rows where count == 'total' with the following code: idx = pd.IndexSlice totals = df.loc[idx[:, 'total'],:] but when I try to apply a function: def df_style(val): return "font-weight: bold" df.style.applymap(df_style,subset=totals) I get the following error: KeyError: 0 How can I style this df so that all cells where count == 'total' are bolded? Here is a similar question, albeit with just a regular index rather than MultiIndex. | Here's one approach: # used `np.random.seed(0)` for reproducibility def highlight_total(s): m = s.index.get_level_values('count') == 'total' return np.where(m, 'font-weight: bold', None) df.style.apply(highlight_total) Output: Explanation For level 'count' (or: '1'), check where index.get_level_values equals 'total'. Pass the result to np.where. Use inside Styler.apply. Specifically traversing the df row-wise, you could do: def highlight_total(s): return ['font-weight: bold']*len(s) if s.name[1] == 'total' else [None]*len(s) df.style.apply(highlight_total, axis=1) So, here we are accessing s.name[1], with name understood as ('fruit', 'total'), ('vegetable', 'total'), etc. Leads to the same result. | 1 | 2 |
79,268,477 | 2024-12-10 | https://stackoverflow.com/questions/79268477/sort-normalized-stacked-bar-chart-by-dataframe-order-with-altair | How can I keep the order of my stacked bars chart from my Dataframe ? The head of my Dataframe looks like this : The countries are ordered as I want them to be and I can handle it by setting sort=None. But I want to order lineages in the stacked bar by sequences_number, only keeping the 'Others' value at the end, as it is in my Dataframe. By using sort=None in the X encoding channel, it still sort lineages alphabetically. So I added a 'rank' column in my Dataframe in order to use the sort=alt.SortField("rank:Q", order="ascending") but it still order lineage alphabetically instead of by sequences_number (with 'Others' at the end). Here is my code : histo_base_proportion = ( alt.Chart( top10_selection_africa ).encode( alt.Y( "country:N", scale=alt.Scale(padding=0.3), sort=None #sorting works here ), alt.X( "sequences_number:Q", sort=alt.SortField("rank:Q", order="ascending"), #but does not work here title = "Lineages", stack="normalize" ), alt.Color( "lineage" ).scale( domain=fixed_domain_lineages, range=fixed_range_lineages ).legend(None), tooltip=[ {"type": "nominal", 'title': 'Country',"field": "country"}, {"type": "nominal", 'title': 'Lineage',"field": "lineage"}, {"type": "quantitative", 'title': 'Nombres', "field": "sequences_number"} ] ).mark_bar( size=12 ).properties( width=800, title="Proportion of sequenced genomes lineage by country in " + year_title ) ) Here is what I get : This represents the lineages number proportion by country. Top 5 lineages is shown per country, lineages outside of top 5 are in 'Others' category, in black color. In my case, I would have wanted the first green lineage of Rwanda to be just before 'Others' (black color) or the dark blue lineage of Comoros to be just before 'Others' (black color), for examples. How can I do that ? Thanks in advance. | You can use the order encoding instead of sort to order the stacked segments as in this example: import altair as alt from vega_datasets import data source = data.barley() alt.Chart(source).mark_bar().encode( x='sum(yield)', y='variety', color='site', order=alt.Order( # Sort the segments of the bars by this field 'site', sort='ascending' ) ) You can read more about how order works in the documentation | 1 | 2 |
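Applied to a chart shaped like the one in the question, the order encoding would look roughly like this; the toy dataframe and its rank column (largest rank for 'Others' so it stacks last) stand in for top10_selection_africa:

```python
import altair as alt
import pandas as pd

df = pd.DataFrame({
    "country": ["Rwanda"] * 3 + ["Comoros"] * 3,
    "lineage": ["B.1", "A.23.1", "Others", "B.1.1", "C.36", "Others"],
    "sequences_number": [50, 30, 10, 40, 25, 5],
    "rank": [1, 2, 99, 1, 2, 99],
})

chart = (
    alt.Chart(df)
    .mark_bar(size=12)
    .encode(
        y=alt.Y("country:N", sort=None),
        x=alt.X("sequences_number:Q", stack="normalize"),
        color="lineage:N",
        # `order` (not `sort` on the x channel) controls how stacked segments are arranged.
        order=alt.Order("rank:Q", sort="ascending"),
    )
)
chart
```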
79,268,222 | 2024-12-10 | https://stackoverflow.com/questions/79268222/pyspark-subset-array-based-on-other-column-value | I use Pyspark in Azure Databricks to transform data before sending it to a sink. In this sink any array must at most have a length of 100. In my data I have an array that is always length 300 an a field specifying how many values of these are relevant (n_relevant). n_relevant values might be: below 100 -> then I want to keep all values between 100 and 300 -> then I want to subsample based on modulo above 300 -> then I want to subsample modulo 3 E.g.: array: [1,2,3,4,5,...300] n_relevant: 4 desired outcome: [1,2,3,4] array: [1,2,3,4,5,...300] n_relevant: 200 desired outcome: [1,3,5,...199] array: [1,2,3,4,5,...300] n_relevant: 300 desired outcome: [1,4,7,...298] array: [1,2,3,4,5,...300] n_relevant: 800 desired outcome: [1,4,7,...298] This little program reflects the desired behavior: from math import ceil def subsample(array:list,n_relevant:int)->list: if n_relevant<100: return [x for i,x in enumerate(array) if i<n_relevant] if 100<=n_relevant<300: mod=ceil(n_relevant/100) return [x for i,x in enumerate(array) if i%mod==0 and i<n_relevant] else: return [x for i,x in enumerate(array) if i%3==0] n_relevant=<choose n> t1=[i for i in range(300)] subsample(t1,n_relevant) What I have tried: transforms to set undesired values to 0 and remove those with array_remove could subset with a specific modulo BUT cannot adopt to n_relevant. Specifically you cannot hand a parameter to the lambda function and you cannot dynamically change the function. | You can filter by index as follows from pyspark.sql.types import StructField, StructType, IntegerType, ArrayType df = spark.createDataFrame( [[list(range(300)), 4], [list(range(300)), 200], [list(range(300)), 300], [list(range(300)), 800]], schema=StructType( [ StructField("array", ArrayType(IntegerType())), StructField("n_relevant", IntegerType()), ] ), ) df = df.withColumn( "result", F.when(F.col("n_relevant") <= 100, F.slice("array", 1, F.col("n_relevant"))) .when( F.col("n_relevant") <= 200, F.filter( F.slice("array", 1, F.col("n_relevant")), lambda _, index: index % 2 == 0 ), ) .otherwise( F.filter( F.slice("array", 1, F.col("n_relevant")), lambda elem, index: index % 3 == 0 ) ), ) display(df) | 1 | 1 |
79,267,898 | 2024-12-10 | https://stackoverflow.com/questions/79267898/camera-pose-estimation-using-opencvs-solvepnp | I have a grayscale camera for which I have already calculated intrinsic parameters with standard methods of calibrations. I have then position this camera in a particular stationary setup and put a plate with 8 marker points in front of the camera. I have calculated the camera pose with respect to the coordinate system of those markers using the formula: cameraPosition = -np.matrix(rotM).T * np.matrix(tvec), where rotM is the rotational matrix and tvec is the translation vector obtained with opencv's cv.solvePnP I wanted to check, how accurate the calculated camera pose is, but have stumbled into a problem, which is, that I do not know, to what physical point does the cameraPosition vector points to. I firstly thought that it points to the center of the image plane, but now I've read that it points to the center of projection point. Which is it? And if it's indeed the center of projection, how could I calculate, where that is located in my camera (using a certain camera settings)? Thanks | -np.matrix(rotM).T * np.matrix(tvec) is the reverse operation from converting a point (xw yw zw) in the world coordinates into camera coordinates, when that point is (0,0,0) in camera system. [R R R tx][xw] [R R R ty][yw] [R R R tz][zw] [0 0 0 1 ][1 ] is Xc = rotM@Xw + tvec (Xw=[xw,yw,zw], tvec=[tx,ty,tz], rotM=R) So reverse operation is Xw = RotM⁻¹@(Xc-tvec) aka Xw = RotM.T@(Xc-tvec), since, for a rotation matrix, inverse and transpose are the same. If Xc=[0,0,0], that is Xw=-rotM.T@tvec So, indeed, that points to the center of projection (origin of camera coordinates system), expressed in the world coordinates. It tells you where the camera is in your world coordinates (the one you used to position physical points in the 3D world) To end up in the center of the image, that takes a projection (canonical projection matrix, times intrisinc matrix), that would lead to [cx,cy], the position of the camera direction in the image (usually the middle of the image, but not necessarily). But the only point in the world that you cannot project happens to be the center of the projection. So to get that, you need first to add something (anything) to Zc So, I do not know how to you intend to use that "center of projection expressed in world coordinates" to check your projection. Maybe to verify that, in your world coordinates, that point indeed to where the physical camera is? (in reality, not exactly: that is the center of projection. It is a virtual point created by lens. But, well, it should be not far from a point "inside" the camera, at a focal distance from the lens. | 1 | 4 |
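A quick numerical check of the reverse operation described above, using an arbitrary, illustrative pose:

```python
import cv2
import numpy as np

rvec = np.array([[0.1], [-0.2], [0.3]])   # arbitrary rotation vector
tvec = np.array([[0.5], [1.0], [2.0]])    # arbitrary translation
rotM, _ = cv2.Rodrigues(rvec)

# Centre of projection expressed in world coordinates.
camera_position = -rotM.T @ tvec

# Mapping that world point back into camera coordinates gives the origin.
Xc = rotM @ camera_position + tvec
print(Xc.ravel())  # ~ [0. 0. 0.] up to floating-point error
```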
79,267,542 | 2024-12-10 | https://stackoverflow.com/questions/79267542/how-to-get-information-of-a-function-and-its-arguments-in-python | I ran below to get information about the list of arguments and default values of a function/method. import pandas as pd import inspect inspect.getfullargspec(pd.drop_duplicates) Results: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'pandas' has no attribute 'drop_duplicates' >>> inspect.getfullargspec(pd.drop_duplicates) What is the correct way to fetch such information from any function/method? | The issue is that the drop_duplicates method exists as a method on a pandas.DataFrame object, i.e., pandas.DataFrame.drop_duplicates not directly on the pandas module. With that being said you might also want to check out inspect.signature as an alternative to inspect.getfullargspec: >>> import inspect >>> import pandas as pd >>> method = pd.DataFrame.drop_duplicates >>> inspect.getfullargspec(method) FullArgSpec(args=['self', 'subset'], varargs=None, varkw=None, defaults=(None,), kwonlyargs=['keep', 'inplace', 'ignore_index'], kwonlydefaults={'keep': 'first', 'inplace': False, 'ignore_index': False}, annotations={'return': 'DataFrame | None', 'subset': 'Hashable | Sequence[Hashable] | None', 'keep': 'DropKeep', 'inplace': 'bool', 'ignore_index': 'bool'}) >>> inspect.signature(method) <Signature (self, subset: 'Hashable | Sequence[Hashable] | None' = None, *, keep: 'DropKeep' = 'first', inplace: 'bool' = False, ignore_index: 'bool' = False) -> 'DataFrame | None'> | 1 | 2 |
79,266,819 | 2024-12-10 | https://stackoverflow.com/questions/79266819/faster-glossary-generation | I am trying to make a table of contents for my queryset in Django like this: def get_toc(self): toc = {} qs = self.get_queryset() idx = set() for q in qs: idx.add(q.title[0]) idx = list(idx) idx.sort() for i in idx: toc[i] = [] for q in qs: if q.title[0] == i: toc[i].append(q) return toc But it has time complexity O(n^2). Is there a better way to do it? UPDATE I meant glossary, not table of contents. | This doesn't look like a table of contents, but a glossary, where you map the first character of a term to a list of terms. We can work with .groupby(…) [python-doc] here: from itertools import groupby result = { k: list(vs) for k, vs in groupby( self.get_queryset().order_by('title'), lambda x: x.title[0] ) } | 2 | 2 |
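To make the grouping behaviour concrete outside of Django, here is a small plain-Python version of the same pattern; the sample titles are invented for illustration. Note that itertools.groupby only merges adjacent items, which is exactly why the queryset is ordered by title before grouping.

from itertools import groupby

titles = sorted(["banana", "apple", "blueberry", "avocado", "cherry"])
glossary = {letter: list(group) for letter, group in groupby(titles, key=lambda t: t[0])}
print(glossary)
# {'a': ['apple', 'avocado'], 'b': ['banana', 'blueberry'], 'c': ['cherry']}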
79,265,773 | 2024-12-9 | https://stackoverflow.com/questions/79265773/python-opc-ua-authentification | I'm trying to set up authentication for my OPC-UA server. I don't want my clients to be able to connect to my server in ‘anonymous’ mode. So I used this configuration in my OPC-UA server: I'm testing with UaExpert in client mode and the password login works (I have to enter the right login + the right password). The certificate login is blocked, which is fair enough. However, I can still connect in ‘anonymous’ mode, which obviously doesn't work for me. The logs from UaExpert when connecting in "anonymous": I'd prefer to use the certificate authentication mode but I wanted to test the password login first. I've tested the certificate mode and I can also connect in anonymous mode! Does anyone have any ideas? | The python-opcua library is no longer supported. There was a fix for this issue, but the pip package never got updated. So either use the current master from GitHub, or switch to asyncua, which has a sync layer for easier porting; I would recommend using it via async, if possible. | 1 | 2 |
79,266,741 | 2024-12-10 | https://stackoverflow.com/questions/79266741/writing-multiple-polars-dataframes-to-separate-worksheets-of-excel-workbook | I am trying to get the following code to work: import polars as pl # Create sample DataFrames df1 = pl.DataFrame({ "Name": ["Alice", "Bob", "Charlie"], "Age": [25, 30, 35] }) df2 = pl.DataFrame({ "Product": ["Laptop", "Phone", "Tablet"], "Price": [1000, 500, 300] }) df3 = pl.DataFrame({ "City": ["New York", "San Francisco", "Chicago"], "Population": [8_400_000, 873_965, 2_746_388] }) def openwb(): return xlsxwriter.workbook("name.xlsx") def writewb(wb, df, sheetname): df.write_excel(workbook=wb, worksheet=sheetname) def main(): mywb = openwb() writewb(mywb, df1, "s1") writewb(mywb, df2, "s2") writewb(mywb, df3, "s3") Problem is that each worksheet deletes the previously written ones, leaving me with only worksheet s3 in my workbook. Of course, this is oversimplified code. In reality, the functions above do a lot more stuff, and writing the worksheets is just one of the actions. Since the calls are spread across function calls, I am not using the "with workbook as wb ..." approach, since I feel that won't leave the workbook open across the function calls. How do I solve this? It seems I can convert the dataframe to pandas, but I am hoping for a polars-native solution | The workbook object needs to be closed. This is usually best achieved with a with context manager like in this polars/xlsxwriter example. However, you can also call an explicit close() on the workbook. Like this: import polars as pl import xlsxwriter # Create sample DataFrames df1 = pl.DataFrame({ "Name": ["Alice", "Bob", "Charlie"], "Age": [25, 30, 35] }) df2 = pl.DataFrame({ "Product": ["Laptop", "Phone", "Tablet"], "Price": [1000, 500, 300] }) df3 = pl.DataFrame({ "City": ["New York", "San Francisco", "Chicago"], "Population": [8_400_000, 873_965, 2_746_388] }) def openwb(): return xlsxwriter.Workbook("name.xlsx") def writewb(wb, df, sheetname): df.write_excel(workbook=wb, worksheet=sheetname) def main(): mywb = openwb() writewb(mywb, df1, "s1") writewb(mywb, df2, "s2") writewb(mywb, df3, "s3") mywb.close() if __name__ == "__main__": main() Output: | 2 | 1 |
79,263,433 | 2024-12-8 | https://stackoverflow.com/questions/79263433/multiprocessing-and-sourcing-shell-file-for-every-subprocess | I am working on a code which aims to gather simulation commands and multiprocess them, sourcing a shell file for each subprocess before running a simulation command in the subprocess. For this I gather the commands in another function in a dictionary which is used by the functions below: def source_shell_script(self, script_path: str) -> dict: """ Sources a shell script and updates the environment for each subprocess. """ if not os.path.exists(script_path): raise FileNotFoundError(f"Script not found: {script_path}") # source the script and output environment variables command = f"bash -c 'source {script_path} && env'" try: # run the command and capture the output result = subprocess.run(command, shell=True, stdout=subprocess.PIPE, text=True, check=True) # parse the environment variables from the command output env_vars = dict( line.split("=", 1) for line in result.stdout.splitlines() if "=" in line ) return env_vars except subprocess.CalledProcessError as e: raise RuntimeError(f"Failed to source script: {script_path}. Error: {e}") def run_cmd(self, cmd_px: tuple) -> None: cmd, px_src_path = cmd_px try: # get environment variables env_vars = self.source_shell_script(px_src_path) print(f"Executing command: {cmd}") result = subprocess.run(cmd, shell=True, env=env_vars, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) print(f"Output: {result.stdout}") if result.returncode != 0: print(f"Error: {result.stderr}") except Exception as e: print(f"Failed to execute command: {cmd}. Error: {e}") def exec_sim(self) -> None: """ Execute all simulations in parallel using multiprocessing. """ # create a list of (command, px_src_path) tuples for each pixel configuration run_queue = [(cmd, self.sim_dict[px_key]["px_src_path"]) for px_key in self.sim_dict for cmd in self.sim_dict[px_key]["px_ddsim_cmds"]] num_workers = os.cpu_count() # number of processes with multiprocessing.Pool(num_workers) as pool: pool.map(self.run_cmd, run_queue) if __name__ == "__main__": # initialize program eic_object = HandleEIC() eic_object.init_path_var() pixel_sizes = eic_object.setup_json() eic_object.pixel_sizes = pixel_sizes os.chmod(eic_object.execution_path, 0o777) eic_object.setup_sim() print("Simulation dictionary:", eic_object.sim_dict) eic_object.exec_sim() # create backup for simulation eic_object.mk_sim_backup() Instead of running properly, my program gets stuck and the console prints the commands: Console output: Simulation dictionary: {'2.0_0.1': {'px_epic_path': '/data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic', 'px_compact_path': '/data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic/install/share/epic/compact', 'px_ip6_path': '/data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic/install/share/epic/epic_ip6_extended.xml', 'px_src_path': '/data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic/install/bin/thisepic.sh', 'px_out_path': '/data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px', 'px_ddsim_cmds': ['ddsim --inputFiles /data/user/Analysis_epic_new/simulations/genEvents/results/beamEffectsElectrons_20.hepmc --outputFile /data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/output_20edm4hep.root --compactFile /data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic/install/share/epic/epic_ip6_extended.xml -N 5']}, '0.1_0.1': {'px_epic_path': 
'/data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic', 'px_compact_path': '/data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic/install/share/epic/compact', 'px_ip6_path': '/data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic/install/share/epic/epic_ip6_extended.xml', 'px_src_path': '/data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic/install/bin/thisepic.sh', 'px_out_path': '/data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px', 'px_ddsim_cmds': ['ddsim --inputFiles /data/user/Analysis_epic_new/simulations/genEvents/results/beamEffectsElectrons_20.hepmc --outputFile /data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/output_20edm4hep.root --compactFile /data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic/install/share/epic/epic_ip6_extended.xml -N 5']}} Executing command: ddsim --inputFiles /data/user/Analysis_epic_new/simulations/genEvents/results/beamEffectsElectrons_20.hepmc --outputFile /data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/output_20edm4hep.root --compactFile /data/user/Analysis_epic_new/simulations/simEvents/0.1x0.1px/epic/install/share/epic/epic_ip6_extended.xml -N 5 Executing command: ddsim --inputFiles /data/user/Analysis_epic_new/simulations/genEvents/results/beamEffectsElectrons_20.hepmc --outputFile /data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/output_20edm4hep.root --compactFile /data/user/Analysis_epic_new/simulations/simEvents/2.0x0.1px/epic/install/share/epic/epic_ip6_extended.xml -N 5 Why is my code not running as expected? What am I doing wrong? | Kindly check now after these changes. Changes: 1.The source_shell_script method sources the shell script and returns environment variables. These variables are passed into the subprocess.run call via the env parameter. If the script is missing, a FileNotFoundError is raised. If sourcing fails, a RuntimeError is raised. 2.The run_cmd method now checks for the return code of the subprocess. If the command fails (result.returncode != 0), the error is printed using stderr. Otherwise, it prints the output using stdout. If any exception occurs during the above process, it's caught and printed. The exec_sim method collects the simulation commands and pixel source paths. It uses multiprocessing.Pool to execute the commands in parallel, with each worker running run_cmd. 4, The simulation dictionary holds the simulation details, including paths to the shell scripts and the commands to run. This (sim_dict) is assumed to be initialized in setup_json. execution_path should exist and should have correct permissions (0o777) Added more verbose logging (print) (to help trace command execution. Check the output and confirm if the paths and commands are coect. 7.Test the script with a simplified example first. Ensure that the paths to the shell scripts (px_src_path) and the simulation commands (px_ddsim_cmds) are correct. 
import os import subprocess import multiprocessing class SimulationHandler: def __init__(self): self.sim_dict = {} def get_path(self, *path_components): """Joins path components into a single file path.""" return os.path.join(*path_components) def source_shell_script(self, *path_components) -> dict: """Sources a shell script and returns the environment variables as a dictionary.""" script_path = self.get_path(*path_components) if not os.path.exists(script_path): raise FileNotFoundError(f'Script not found: {script_path}') command = ['bash', '-c', 'source "$1" && env', 'bash', script_path] try: result = subprocess.run(command, stdout=subprocess.PIPE, text=True, check=True) env_vars = dict( line.split('=', 1) for line in result.stdout.splitlines() if '=' in line ) return env_vars except subprocess.CalledProcessError as e: raise RuntimeError(f'Failed to source script: {script_path}. Error: {e}') def run_cmd(self, cmd_px: tuple) -> None: """Executes a command with environment variables sourced from a shell script.""" cmd, script_dir, script_name = cmd_px try: env_vars = self.source_shell_script(script_dir, script_name) print(f'Executing command: {cmd}') result = subprocess.run( cmd.split(), env=env_vars, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, ) print(f'Output: {result.stdout}') if result.returncode != 0: print(f'Error: {result.stderr}') except Exception as e: print(f'Failed to execute command: {cmd}. Error: {e}') def exec_sim(self) -> None: """Executes all simulations in parallel using multiprocessing.""" run_queue = [ (cmd, self.sim_dict[px_key]['script_dir'], self.sim_dict[px_key]['script_name']) for px_key in self.sim_dict for cmd in self.sim_dict[px_key]['commands'] ] num_workers = os.cpu_count() # Number of processes with multiprocessing.Pool(num_workers) as pool: pool.map(self.run_cmd, run_queue) if __name__ == '__main__': handler = SimulationHandler() # Example simulation dictionary for demonstration handler.sim_dict = { 'config_1': { 'script_dir': '/data/user/Analysis_epic_new/simulations/simEvents/config_1/epic/install/bin', 'script_name': 'thisepic.sh', 'commands': [ 'ddsim --inputFiles input1.hepmc --outputFile output1.root --compactFile epic_ip6_extended.xml -N 5' ], }, 'config_2': { 'script_dir': '/data/user/Analysis_epic_new/simulations/simEvents/config_2/epic/install/bin', 'script_name': 'thisepic.sh', 'commands': [ 'ddsim --inputFiles input2.hepmc --outputFile output2.root --compactFile epic_ip6_extended.xml -N 5' ], }, } # Execute simulations handler.exec_sim() Edit: Reasons for the unexpected behavior: For focus and hover behavior, rasio-button like widgets behave differently. For eg., When hovered, a Radiobutton might display a different border or background color by default, depending on the platform and theme. Your code hasn't given style for hover or focus effects, so the default behavior is being used. You're setting bg="#292d2e" for both Radiobutton and Label, but the Radiobutton itself might override some parts of the style when hovered (like its default border or highlight background). These overrides could result in visual artifacts. The default behavior of widgets like Radiobutton can vary slightly across platforms (Windows, macOS, Linux), leading to hover effects you might not expect: Some themes or window managers have hover effects enabled by default for interactive widgets like buttons and radiobuttons. | 1 | 2 |
79,266,557 | 2024-12-9 | https://stackoverflow.com/questions/79266557/python-heatmap-with-categorical-color-and-continuous-transparency | I want to make a heatmap in python (seaborn, matplotlib, etc) with two dimensions of information. I have a categorical value I want to assign to color, and a continuous variable (i.e. between 0-100 or 0-1) I want to assign to transparency, so each box has its own color and transparency (or intensity). for example: colors = pd.DataFrame([['b','g','r'],['black','orange','purple'],['r','yellow','white']]) transparency = pd.DataFrame([[0.1,0.2,0.3],[0.9,0.1,0.2],[0.1,0.6,0.3]]) how can I make a heatmap from this data such that the top left box is blue in color and 10% transparency (or 10% opaqueness, whichever), and so on? The best idea I have so far is to turn the colors into integer values, add those to the transparency values, and then make a custom colormap where each integer has a different color, ranging from white to the color in between the integer values. That sounds complicated to make and I'm hoping there's a built-in way to do this. Any ideas? | You could draw individual rectangles, giving each a specific color and transparency: import matplotlib.pyplot as plt from matplotlib.patches import Rectangle, Patch import pandas as pd colors = pd.DataFrame([['b', 'g', 'r'], ['black', 'orange', 'purple'], ['r', 'yellow', 'white']]) transparency = pd.DataFrame([[0.1, 0.2, 0.3], [0.9, 0.1, 0.2], [0.1, 0.6, 0.3]]) fig, ax = plt.subplots() for i, (color_col, transp_col) in enumerate(zip(colors.columns, transparency.columns)): for j, (color, transp) in enumerate(zip(colors[color_col], transparency[transp_col])): ax.add_patch(Rectangle((i - 0.5, j - 0.5), 1, 1, facecolor=color, alpha=transp, edgecolor='none', lw=0)) ax.invert_yaxis() # start at the top ax.autoscale(enable=True, tight=True) # recalculate axis limits ax.set_xticks(range(len(colors.columns)), colors.columns) ax.set_yticks(range(len(colors.index)), colors.index) plt.show() | 2 | 2 |
79,266,262 | 2024-12-9 | https://stackoverflow.com/questions/79266262/when-i-navigate-to-the-url-and-get-the-contents-of-the-table-tag-its-empty | I am trying to scrape data from this website https://data.anbima.com.br/debentures/AALM11/agenda?page=1&size=100& and when I look at the DevTools > Elements, it has a TABLE tag with the data inside TR and TD tags (dates, values, etc.), but when I try to parse the HTML with Selenium or bs4 the data disappear and instead I see a <div class="skeleton-container" aria-hidden="true">. What can I do to extract the information I need? My code deb = 'AALM11' link_agenda = 'https://data.anbima.com.br/debentures/' + deb + '/agenda?page=1&size=100' driver.get(link_agenda) html_source = driver.find_element(By.TAG_NAME, 'table').get_attribute('outerHTML') The result <table id="" class="anbima-ui-table anbima-ui-table-responsive anbima-ui-table-mobile"> <thead> <tr> <th><span style="width: 80px;"><div class="skeleton-container" aria-hidden="true" style="width: 80px; height: 18px; margin-top: 0px;"></div></span></th> <th><span style="width: 110px;"><div class="skeleton-container" aria-hidden="true" style="width: 100px; height: 18px; margin-top: 0px;"></div></span></th> <th><span style="width: 110px;"><div class="skeleton-container" aria-hidden="true" style="width: 45px; height: 18px; margin-top: 0px;"></div></span></th> <th><span style="width: 110px;"><div class="skeleton-container" aria-hidden="true" style="width: 90px; height: 18px; margin-top: 0px;"></div></span></th> <th><span style="width: 110px;"><div class="skeleton-container" aria-hidden="true" style="width: 55px; height: 18px; margin-top: 0px;"></div></span></th> <th><span style="width: 80px;"><div class="skeleton-container" aria-hidden="true" style="width: 45px; height: 18px; margin-top: 0px;"></div></span></th> </tr> </thead> <tbody> <tr> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 75px; height: 18px; margin-top: 0px;"></div></span></td> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 75px; height: 18px; margin-top: 0px;"></div></span></td> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 125px; height: 18px; margin-top: 0px;"></div></span></td> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 75px; height: 18px; margin-top: 0px;"></div></span></td> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 100px; height: 18px; margin-top: 0px;"></div></span></td> <td><span><div class="skeleton-container" aria-hidden="true" style="width: 100px; height: 18px; margin-top: 0px;"></div></span></td> </tr> ... 
I was expecting to see this instead <table id="" class="anbima-ui-table anbima-ui-table-responsive agenda-ativo-page__table--liquidado-1 agenda-ativo-page__table--liquidado-2 agenda-ativo-page__table--liquidado-3 agenda-ativo-page__table--liquidado-4 agenda-ativo-page__table--liquidado-5 agenda-ativo-page__table--liquidado-6 agenda-ativo-page__table--liquidado-7 agenda-ativo-page__table--liquidado-8 agenda-ativo-page__table--liquidado-9 agenda-ativo-page__table--liquidado-10 "> <thead> <tr> <th><span style="width: 80px;">Data do evento</span></th> <th><span style="width: 110px;">Data de liquidação</span></th> <th><span style="width: 110px;">Evento</span></th> <th><span style="width: 110px;">Percentual / Taxa</span></th> <th><span style="width: 110px;">Valor pago</span></th> <th><span style="width: 80px;">Status</span></th> </tr> </thead> <tbody> <tr> <td><span id="agenda-data-evento-0" class="normal-text">13/01/2022</span></td> <td><span id="agenda-data-liquidacao-0" class="normal-text">13/01/2022</span></td> <td><span id="agenda-evento-0" class="normal-text">Pagamento de juros</span></td> <td><span id="agenda-taxa-0" class="normal-text">4,3500 %</span></td> <td><span id="agenda-valor-0" class="normal-text">R$ 53,434259</span></td> <td><span id="agenda-status-0" class="anbima-ui-flag anbima-ui-flag--small anbima-ui-flag--small--green " style="max-width: 96px;"><label class="flag__children">Liquidado</label></span></td> </tr> ... | The problem is that the table data is dynamically loaded. When the browser is loading the page, it signals to Selenium that the page is done loading but the content of the page is still loading in the background. So your code is executed and it scrapes the partially loaded page. To fix this, we need to wait for something that indicates that the page is done loading. I chose to wait for the absence of all the <div class="skeleton-container" ...> elements. Once those are gone, the table data load is complete and the table data is available. Working code... from selenium import webdriver from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC driver = webdriver.Chrome() driver.maximize_window() deb = 'AALM11' link_agenda = 'https://data.anbima.com.br/debentures/' + deb + '/agenda?page=1&size=100' driver.get(link_agenda) wait = WebDriverWait(driver, 10) wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, "div.skeleton-container"))) table = driver.find_element(By.CSS_SELECTOR, "table") print(table.get_attribute('outerHTML')) | 1 | 1 |
79,265,874 | 2024-12-9 | https://stackoverflow.com/questions/79265874/generating-a-dataframe-of-combinations-not-permutations | Suppose I have a bag of items {a, b}. Then I can choose pairs out of it in a variety of ways. One way might be to pick all possible permutations: [a, a], [a, b], [b, a], [b, b]. But I might disallow repetition, in which case the possible permutations are: [a, b], [b, a]. I might go further and declare that [a, b] is the same as [b, a], i.e. I only care about the "combination" of choices, not their permutations. For more about the distinction between combination vs. permutation, see: https://en.wikipedia.org/wiki/Combination What are the best ways to produce a combination of choices (i.e. order of elements should not matter)? My current solutions looks like this: import polars as pl choices = pl.DataFrame( [ pl.Series("flavor", ["x"] * 2 + ["y"] * 3), pl.Series("choice", ["a", "b"] + ["1", "2", "3"]), ] ) # join to produce the choices choices.join(choices, on=["flavor"]).with_columns( # generate a 2-element list representing the choice sorted_choice_pair=pl.concat_list("choice", "choice_right").list.sort() ).filter(pl.col.choice.eq(pl.col.sorted_choice_pair.list.first())) shape: (9, 4) ┌────────┬────────┬──────────────┬────────────────────┐ │ flavor ┆ choice ┆ choice_right ┆ sorted_choice_pair │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ list[str] │ ╞════════╪════════╪══════════════╪════════════════════╡ │ x ┆ a ┆ a ┆ ["a", "a"] │ │ x ┆ a ┆ b ┆ ["a", "b"] │ │ x ┆ b ┆ b ┆ ["b", "b"] │ │ y ┆ 1 ┆ 1 ┆ ["1", "1"] │ │ y ┆ 1 ┆ 2 ┆ ["1", "2"] │ │ y ┆ 2 ┆ 2 ┆ ["2", "2"] │ │ y ┆ 1 ┆ 3 ┆ ["1", "3"] │ │ y ┆ 2 ┆ 3 ┆ ["2", "3"] │ │ y ┆ 3 ┆ 3 ┆ ["3", "3"] │ └────────┴────────┴──────────────┴────────────────────┘ So I generate all permutations, and then filter out those that where the "left element" does not match the first element of the list. | You can use .join_where() with a row index predicate to prevent "duplicates". (choices .with_row_index() .join_where(choices.with_row_index(), pl.col.flavor == pl.col.flavor_right, pl.col.index <= pl.col.index_right ) ) shape: (9, 5) ┌───────┬────────┬────────┬─────────────┬──────────────┐ │ index ┆ flavor ┆ choice ┆ index_right ┆ choice_right │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ str ┆ u32 ┆ str │ ╞═══════╪════════╪════════╪═════════════╪══════════════╡ │ 0 ┆ x ┆ a ┆ 0 ┆ a │ │ 0 ┆ x ┆ a ┆ 1 ┆ b │ │ 1 ┆ x ┆ b ┆ 1 ┆ b │ │ 2 ┆ y ┆ 1 ┆ 2 ┆ 1 │ │ 2 ┆ y ┆ 1 ┆ 3 ┆ 2 │ │ 3 ┆ y ┆ 2 ┆ 3 ┆ 2 │ │ 2 ┆ y ┆ 1 ┆ 4 ┆ 3 │ │ 3 ┆ y ┆ 2 ┆ 4 ┆ 3 │ │ 4 ┆ y ┆ 3 ┆ 4 ┆ 3 │ └───────┴────────┴────────┴─────────────┴──────────────┘ | 1 | 1 |
79,265,302 | 2024-12-9 | https://stackoverflow.com/questions/79265302/sum-up-column-values-by-special-logic | Say we have an array like: a = np.array([ [k11, k12, k13, k14, k15, k16, k17, k18], [k21, k22, k23, k24, k25, k26, k27, k28], [k31, k32, k33, k34, k35, k36, k37, k38], [k41, k42, k43, k44, k45, k46, k47, k48] ]) const = C I need to create a vector from this array like this (runge kutta 4): result = np.array([ const * (k11 + 2*k21 + 2*k31 + k41), const * (k12 + 2*k22 + 2*k32 + k42), const * (k13 + 2*k23 + 2*k33 + k43), .... const * (k18 + 2*k28 + 2*k38 + k48) ]) I am able to do this in cycle, but I am pretty sure numpy methods allow this in vectorised form. | np.einsum solution: result = const * np.einsum('ij,i->j', a, [1, 2, 2, 1]) ij,i are the dimensions of a and the coefficients. The result, j is missing i, which means that that dimension is multiplied and summed across the arrays. This solution is nice because it is very explicit about dimensions without requiring any reshaping or transposition. For larger matrices or longer multiplication chains, the order of operations will be optimized for speed. | 2 | 2 |
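As a quick check of the einsum expression in the answer above, the sketch below compares it against the explicit weighted sum from the question; the numeric values and the constant are placeholders, since the question only has symbolic k-entries.

import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 8))                 # stands in for the symbolic k-matrix
const = 0.5                            # stands in for C (e.g. h/6 in classic RK4)
weights = np.array([1, 2, 2, 1])

result_einsum = const * np.einsum('ij,i->j', a, weights)
result_manual = const * (a[0] + 2 * a[1] + 2 * a[2] + a[3])
result_matmul = const * (weights @ a)  # the same thing as a plain matrix-vector product

print(np.allclose(result_einsum, result_manual))  # True
print(np.allclose(result_einsum, result_matmul))  # True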
79,265,502 | 2024-12-9 | https://stackoverflow.com/questions/79265502/how-to-import-multiple-records-with-merge-function-to-an-oracle-db-via-python | I am trying to import data from a .csv file into an Oracle DB using Python. So far it works fine if the .csv file contains 10 records. If I increase the number of records in the .csv file to 1.000.000, the script takes far too long and does not end even after an hour. Can anyone tell me how I can optimise my source code? King Regards Jegor ... sql_insert = """Merge into TEST_TABLE a Using (Select :ID as ID, :COUNTRY as COUNTRY , :DATE as DATE From Dual) src on src.ID = a.ID when matched then update set a.COUNTRY = src.COUNTRY, a.DATE = src.DATE when not matched then Insert (a.ID, a.COUNTRY, a.DATE) Values (src.ID, src.COUNTRY, src.DATE)""" # Get S3-File obj = s3.Object(CDH_S3_Bucket, CDH_Path + '/' + s3_filename) body = obj.get()['Body'].read().decode('utf-8').splitlines() # ---------------------------------------------------------- csv_reader = csv.reader(body, delimiter=',') headings = next(csv_reader) for line in csv_reader: data.append(line) if data: cursor.executemany(sql_insert, data) connection.commit() cursor.close() ... | A merge is meant to modify one table based on the data in another table. It is not intended for single-row processing from the client like this. The proper design would be to use a normal bulk-bind insert to load a work table and then you can do a single merge execution to sync the target table with the work table. Also, when you do use a merge (appropriately), you don't want to use the where or delete subclauses within the merge when matched then update... clause. For merges, the where clause is different than in other SQL statements: it is a subprogram within the update program which means you already pay the penalty of CR block reads and other concurrency mechanisms, including redo, even if the where clause cancels the update of a row. You want to filter out unchanged rows within the using clause instead (which means pre-joining to the target table within the using clause and filter out unchanged rows there - you can then emit the target table's ROWID for optimal matching in the outer merge block; but the main benefit is filtering before the DML operation on the target). The delete subclause also fires after the update is processed and only on the updated row - if it's being updated, it's because it's coming in your new data and it won't be old by definition, unless you are getting old data in your files. To delete old records you have to use a totally separate archiving SQL that is not the same as your loading SQL. | 2 | 3 |
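To make the suggested design concrete, here is a hedged sketch of the staging-table approach with python-oracledb/cx_Oracle. The staging table name stage_test_table, the column name date_col and the positional bind style are assumptions for illustration; the staging table is assumed to already exist, connection is the already-open connection from the question's script, and data is the list of row values read from the CSV. The answer's recommendation to filter unchanged rows inside the USING clause is left out for brevity.

insert_sql = "INSERT INTO stage_test_table (id, country, date_col) VALUES (:1, :2, :3)"

merge_sql = """
    MERGE INTO test_table a
    USING (SELECT id, country, date_col FROM stage_test_table) src
       ON (a.id = src.id)
    WHEN MATCHED THEN UPDATE SET a.country = src.country, a.date_col = src.date_col
    WHEN NOT MATCHED THEN INSERT (a.id, a.country, a.date_col)
                          VALUES (src.id, src.country, src.date_col)
"""

with connection.cursor() as cursor:
    cursor.execute("TRUNCATE TABLE stage_test_table")
    cursor.executemany(insert_sql, data)  # one bulk round trip instead of one MERGE per row
    cursor.execute(merge_sql)
connection.commit()

For very large files it can also help to call executemany in chunks (say 50,000 rows at a time) to keep client memory bounded.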
79,264,751 | 2024-12-9 | https://stackoverflow.com/questions/79264751/pandas-outofboundsdatetime-out-of-scope-issue | Am getting the following issue "pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 3036-12-31 00:00:00, at position 45100" I dont want to do the following as this will coerce all errors to NaT (not a time) s = pd.to_datetime(s, errors='coerce') Is there no way to keep the dates and not convert them to NaT/nan/null? this is the function causing the error def remove_duplicates_based_on_keydate(df, id_col, date_col): df = df.copy() df.loc[:, date_col] = pd.to_datetime(df[date_col]) # Sort by the date column in descending order df_sorted = df.sort_values(by=date_col, ascending=False) # Drop duplicates, keeping the first occurrence (latest date) df_unique = df_sorted.drop_duplicates(subset=id_col, keep='first') # Sort again by id_col and reset index df_unique = df_unique.sort_values(by=id_col).reset_index(drop=True) return df_unique | You can't have dates above Timestamp('2262-04-11 23:47:16.854775807'): Assuming dates like: df = pd.DataFrame({'date': ['2036-12-31 00:00:00', '3036-12-31 00:00:00']}) You could convert to periods with PeriodIndex df['periods'] = pd.PeriodIndex(df['date'].str.extract(r'^(\d{4}-\d\d-\d\d)', expand=False), freq='D') Output: date periods 0 2036-12-31 00:00:00 2036-12-31 1 3036-12-31 00:00:00 3036-12-31 Or for a precision to the second: df = pd.DataFrame({'date': ['2036-12-31 00:00:00', '3036-12-31 01:00:00']}) df['periods'] = pd.PeriodIndex(df['date'], freq='s') Output: date periods 0 2036-12-31 00:00:00 2036-12-31 00:00:00 1 3036-12-31 01:00:00 3036-12-31 01:00:00 Alternatively, first convert to datetime with errors='coerce', then to object and fillna with the missing dates: pd.to_datetime(df['date'], errors='coerce').astype(object).fillna(df['date']) Note however that this will remain an object column with mixed strings and timestamps, which is not really useful in pandas. df['mixed'] = pd.to_datetime(df['date'], errors='coerce').astype(object).fillna(df['date']) print(df['mixed'].tolist()) # [Timestamp('2036-12-31 00:00:00'), '3036-12-31 00:00:00'] | 1 | 1 |
79,263,329 | 2024-12-8 | https://stackoverflow.com/questions/79263329/how-to-change-text-color-of-facet-category-in-plotly-charts-in-python | I have created few Plotly charts with facets on basis of category variable and would like to change the color of facet text in the chart. Have searched alot even on plotly website but couldn't figure out the property that can be used to change the color for facet text. Using below image as an example I would like to change the color of - No & Yes: import plotly.express as px fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker") fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) fig.show() Would really Appreciate any help !! | As for annotations, they are summarized in the layout attributes and can be done by making decisions based on the content of the text. In the following example, NO has been changed to red. import plotly.express as px fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker") fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) for t in fig.layout.annotations: if t.text.split("=")[-1] == 'No': t.font.color='red' fig.show() | 1 | 2 |
79,262,249 | 2024-12-8 | https://stackoverflow.com/questions/79262249/asizeof-appears-to-be-inaccurate | Take this MWE: from pympler import asizeof from random import randint, choice from string import printable from heapq import heappush ascii = printable[:-5] pq = [] for _ in range(10_000_000): heappush(pq, (randint(0, 31), randint(0, 31), randint(0, 31), ''.join(choice(ascii) for _ in range(16)))) print(asizeof.asizeof(pq)) I can see from running 'top' that this takes about 2.7GB of RAM. But asizeof reports 1,449,096,184 bytes which is a long way off. This is what 'top" shows: /usr/bin/time -v gives: Maximum resident set size (kbytes): 2858616 Using another way of measuring RAM: from resource import getrusage, RUSAGE_SELF print(getrusage(RUSAGE_SELF).ru_maxrss * 1024) This returns 2927054848 | asizeof rather accurately does what it's supposed to do: Measure the total size of the object structure. That's just not all the memory that Python uses. I get the exact same total 1,449,096,184 bytes with this minified test (Attempt This Online!): from sys import getsizeof def size(obj, align=8): return getsizeof(obj) // -align * -align a = [] for _ in range(10_000_000): a.append((0, 0, 0, ' ' * 16)) list_size = size(a) tuple_size = size(a[0]) str_size = size(a[0][3]) ints_size = sum(map(size, range(32))) print(f'{ list_size + len(a) * (tuple_size + str_size) + ints_size :,}') Using align=16 (also in asizeof) would be more realistic, that's likely the alignment your Python uses. I get 1,529,096,192 bytes then. | 1 | 3 |
79,256,095 | 2024-12-5 | https://stackoverflow.com/questions/79256095/problems-plotting-timestamps-on-the-x-axis-with-matplotlib | I am working on a Python script that loads several CSV files containing timestamps and ping data and then displays them on a plot. The X-axis is supposed to display the timestamps in HH:MM format, with the timestamps coming from multiple CSV files that record different ping values for different addresses. The challenge is that I only want to display a limited number of timestamps for the X axis, e.g. 10-12 timestamps, based on the number of data points in the CSV files. I also want to ensure that the X-axis is correctly labeled with the appropriate timestamps and associated ping values. Problem: The plot shows the data, but the timestamps on the X-axis are not correct and too few ticks appear. Only the first timestamp is displayed and only 8 ticks are generated on the X-axis. In addition, the X-axis ticks do not seem to match the timestamps from the data correctly, which affects the readability of the plot. Goal: The X-axis should correctly display timestamps in the format HH:MM:SS for all addresses from the CSV files. I would like to have a limited number of timestamps (approx. 10-12) on the X-axis based on the data points in the CSV files. It is important to mention that the information for the plot is stored in x_labels and x_positions. 11 subdivisions are also correctly created and saved for 99 data records, but these are still displayed incorrectly. Example: x_positions: [0.0, 2.55, 5.1, 7.65, 10.216666666666667, 12.766666666666667, 15.316666666666666, 17.866666666666667, 20.416666666666668, 22.983333333333334, 25.533333333333335] x_labels: ['17:24:43', '17:27:16', '17:29:49', '17:32:22', '17:34:56', '17:37:29', '17:40:02', '17:42:35', '17:45:08', '17:47:42', '17:50:15'] This is the picture I get, but it should have 11 dividing lines on the X axis and all of them should be labeled Here is some test Data, I store in the csv: Time,Ping (ms) 17:24:43,0.1 17:25:00,0.2 17:25:17,0.23 17:25:34,0.12 17:25:51,0.23 17:26:08,0.123 17:26:25,0.321 17:26:42,0.231 Here is My Code: import os import pandas as pd import matplotlib.pyplot as plt import numpy as np from datetime import datetime, timedelta # Funktion zum Laden der Daten aus den CSV-Dateien def load_data(folder): data = {} for root, dirs, files in os.walk(folder): for file in files: if file.endswith(".csv"): address = file.replace('_', '.').replace('.csv', '') file_path = os.path.join(root, file) df = pd.read_csv(file_path) df['Time'] = pd.to_datetime(df['Time'], format='%H:%M:%S') df['Ping (ms)'] = df['Ping (ms)'].apply(lambda x: 0 if x == 0 else x) data[address] = df return data # Funktion zum Erstellen des Plots def plot_data(data): plt.figure(figsize=(14, 8)) colors = generate_colors(len(data)) # Bestimme die Anzahl der Datenpunkte für eine einzelne Adresse df = next(iter(data.values())) # Wähle den ersten DataFrame aus total_data_points = len(df) # Berechne den dif-Wert dif = total_data_points // 10 if dif < 1: dif = 1 # Sammle alle Zeitstempel für die X-Achse x_labels = [] x_positions = [] for i in range(0, len(df), dif): time = df['Time'].iloc[i] x_labels.append(time.strftime('%H:%M:%S')) x_positions.append((time - min(df['Time'])).total_seconds() / 60) # Plotten der Ping-Daten für jede Adresse for idx, (address, df) in enumerate(data.items()): df['Time_diff'] = (df['Time'] - min(df['Time'])).dt.total_seconds() / 60 mask_timeout = df['Ping (ms)'] == 0 mask_normal = ~mask_timeout 
plt.plot(df['Time_diff'][mask_normal], df['Ping (ms)'][mask_normal], label=address, color=colors[idx % len(colors)]) plt.plot(df['Time_diff'][mask_timeout], df['Ping (ms)'][mask_timeout], color='r', lw=2) # Anpassen der X-Achse plt.xticks(x_positions, x_labels, rotation=45, ha='right') plt.xlabel('Time') plt.ylabel('Ping (ms)') plt.title('Ping Times for Different Addresses') plt.legend() plt.grid(True) plt.tight_layout() plt.show() def generate_colors(n): colors = [] for i in range(n): hue = i / n colors.append(plt.cm.hsv(hue)) return colors # Main-Funktion def main(): data_folder = input("Bitte geben Sie den Pfad zum Ordner mit den CSV-Dateien ein: ") if not os.path.exists(data_folder): print(f"Der Ordner {data_folder} existiert nicht.") return data = load_data(data_folder) plot_data(data) if __name__ == "__main__": main() | Every time you add a new plot, a new axis is added for both 'x' and 'y'. And I'm unsure if you can control which axis will be on top. so the workaround that I can think about is to set the ticks param for the 'x' axis every time you add a new plot: for idx, (address, df) in enumerate(data.items()): df['Time_diff'] = (df['Time'] - min(df['Time'])).dt.total_seconds() / 60 mask_timeout = df['Ping (ms)'] == 0 mask_normal = ~mask_timeout plt.plot(df['Time_diff'][mask_normal], df['Ping (ms)'][mask_normal], label=address, color=colors[idx % len(colors)]) plt.tick_params(axis='x', which='both', labelbottom=False) plt.plot(df['Time_diff'][mask_timeout], df['Ping (ms)'][mask_timeout], color='r', lw=2) plt.tick_params(axis='x', which='both', labelbottom=False) And set it back to true (in my example for the minor) right after you set your xticks: plt.xticks(x_positions, x_labels, rotation=45, ha='right') plt.tick_params(axis='x', which='minor', labelbottom=True) I believe it's true for your 'y' axis as well (your graph shows that Google pings better than your local devices). | 1 | 1 |
79,258,240 | 2024-12-6 | https://stackoverflow.com/questions/79258240/scrapy-script-does-not-start-spiders | I have created a new scrapy projects with a spider (multiple to be added). The spider works without any issues if started with scrapy crawl myspider However, when I try to run the scraper from a custom script, it does not start. I have broken down the script to a bare minimum that does not work: from scrapy.spiderloader import SpiderLoader from scrapy.crawler import CrawlerRunner from scrapy.utils.project import get_project_settings from scrapy.utils.log import configure_logging from twisted.internet import reactor settings = get_project_settings() configure_logging(settings) runner = CrawlerRunner(settings) spider_loader = SpiderLoader.from_settings(settings) for spider in spider_loader.list(): print(f"Adding Spider: {spider}") runner.crawl(spider_loader.load(spider)) d = runner.join() d.addBoth(lambda _: reactor.stop()) reactor.run() The output of the script is: $ python3 minimal.py Adding Spider: myspider 2024-12-06 14:52:02 [scrapy.addons] INFO: Enabled addons: [] The script than hangs, and no additional messages from the spider are printed. I confirmed that no network traffic related to crawling is observed. The code is very close to the documentation, so I am little bit clueless on what the problem could be and where to look. Edit: This is a minimal crawler that does not run: import scrapy class GoogleSpider(scrapy.Spider): name = "google" allowed_domains = ["google.com"] start_urls = ["https://www.google.com"] def parse(self, response): pass # Scrapy settings for fwscraper project # # For simplicity, this file contains only settings considered important or # commonly used. You can find more settings consulting the documentation: # # https://docs.scrapy.org/en/latest/topics/settings.html # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html # https://docs.scrapy.org/en/latest/topics/spider-middleware.html BOT_NAME = "fwscraper" SPIDER_MODULES = ["fwscraper.spiders"] NEWSPIDER_MODULE = "fwscraper.spiders" # Crawl responsibly by identifying yourself (and your website) on the user-agent #USER_AGENT = "fwscraper (+http://www.yourdomain.com)" # Obey robots.txt rules ROBOTSTXT_OBEY = True # Configure maximum concurrent requests performed by Scrapy (default: 16) #CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0) # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay # See also autothrottle settings and docs #DOWNLOAD_DELAY = 3 # The download delay setting will honor only one of: #CONCURRENT_REQUESTS_PER_DOMAIN = 16 #CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default) #COOKIES_ENABLED = False # Disable Telnet Console (enabled by default) #TELNETCONSOLE_ENABLED = False # Override the default request headers: #DEFAULT_REQUEST_HEADERS = { # "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", # "Accept-Language": "en", #} # Enable or disable spider middlewares # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html #SPIDER_MIDDLEWARES = { # "fwscraper.middlewares.FwscraperSpiderMiddleware": 543, #} # Enable or disable downloader middlewares # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html #DOWNLOADER_MIDDLEWARES = { # "fwscraper.middlewares.FwscraperDownloaderMiddleware": 543, #} # Enable or disable extensions # See https://docs.scrapy.org/en/latest/topics/extensions.html #EXTENSIONS = { # 
"scrapy.extensions.telnet.TelnetConsole": None, #} # Configure item pipelines # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html #ITEM_PIPELINES = { # "fwscraper.pipelines.FwscraperPipeline": 300, #} # Enable and configure the AutoThrottle extension (disabled by default) # See https://docs.scrapy.org/en/latest/topics/autothrottle.html #AUTOTHROTTLE_ENABLED = True # The initial download delay #AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies #AUTOTHROTTLE_MAX_DELAY = 60 # The average number of requests Scrapy should be sending in parallel to # each remote server #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 # Enable showing throttling stats for every response received: #AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default) # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings #HTTPCACHE_ENABLED = True #HTTPCACHE_EXPIRATION_SECS = 0 #HTTPCACHE_DIR = "httpcache" #HTTPCACHE_IGNORE_HTTP_CODES = [] #HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage" # Set settings whose default value is deprecated to a future-proof value TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor" FEED_EXPORT_ENCODING = "utf-8" | You requested a non-default reactor in your settings so you need to install it explicitly when using CrawlerRunner (this is mentioned in https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor). You also need to do it before importing twisted.internet.reactor as that installs the default reactor (this is mentioned in https://docs.scrapy.org/en/latest/topics/asyncio.html#handling-a-pre-installed-reactor). A correct code example is shown in https://docs.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script: import scrapy from scrapy.crawler import CrawlerRunner from scrapy.utils.reactor import install_reactor install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor") runner = CrawlerRunner() d = runner.crawl(MySpider) from twisted.internet import reactor d.addBoth(lambda _: reactor.stop()) reactor.run() | 1 | 1 |
79,262,465 | 2024-12-8 | https://stackoverflow.com/questions/79262465/how-to-invert-ohlc-data | I have a OHLC (Open, High, Low, Close financial data). An upward bar (bullish) is when the close price is higher than the open price. A downward bar (bearish) is when the close price is lower than the open price. I am trying to find a way to invert the dataset in order to have the following behavior: Original data: Inverted data: The first steps seams: • For an upward bar, swap the open and close prices to make it a downward bar. • For a downward bar, swap the open and close prices to make it an upward bar. Second step: Preserve the range of the candlestick - Maintain the difference of High and Low: # Function to invert the OHLC bars def invert_ohlc(row): if row['Close'] > row['Open']: # Bullish bar, invert to bearish row['Open'], row['Close'] = row['Close'], row['Open'] elif row['Close'] < row['Open']: # Bearish bar, invert to bullish row['Open'], row['Close'] = row['Close'], row['Open'] return row But I don't know how to continue: Reproducible dataset (same as image): import pandas as pd from io import StringIO data = """ Date,Time,Open,High,Low,Close 7/16/2024,09:00,1302000000,1303600000,1302000000,1303550000 7/16/2024,10:00,1303550000,1305650000,1301300000,1301800000 7/16/2024,11:00,1301800000,1305650000,1301150000,1302650000 7/16/2024,12:00,1302650000,1303700000,1300550000,1303600000 7/16/2024,13:00,1303600000,1304150000,1298400000,1299550000 7/16/2024,14:00,1299550000,1300900000,1297300000,1300000000 7/16/2024,15:00,1300000000,1302650000,1298700000,1301300000 7/16/2024,16:00,1301300000,1303800000,1299850000,1300500000 7/16/2024,17:00,1300550000,1301950000,1300000000,1301800000 7/16/2024,18:00,1301800000,1302800000,1301400000,1302450000 7/16/2024,19:00,1302500000,1303450000,1302300000,1303350000 7/17/2024,09:00,1299800000,1300500000,1298800000,1299650000 7/17/2024,10:00,1299650000,1301300000,1297900000,1299900000 7/17/2024,11:00,1299900000,1303600000,1296700000,1302050000 7/17/2024,12:00,1302050000,1305250000,1299000000,1303400000 7/17/2024,13:00,1303400000,1305950000,1302400000,1303750000 7/17/2024,14:00,1303800000,1304450000,1301350000,1303950000 7/17/2024,15:00,1304000000,1305800000,1302950000,1303300000 7/17/2024,16:00,1303300000,1305750000,1302950000,1305050000 7/17/2024,17:00,1305050000,1305250000,1303200000,1303350000 7/17/2024,18:00,1303350000,1304800000,1302950000,1304250000 7/17/2024,19:00,1304300000,1304750000,1302650000,1303150000 7/18/2024,09:00,1302250000,1303850000,1302250000,1303650000 7/18/2024,10:00,1303650000,1304650000,1299100000,1299600000 7/18/2024,11:00,1299600000,1301100000,1294850000,1295650000 7/18/2024,12:00,1295650000,1296850000,1291450000,1292500000 7/18/2024,13:00,1292550000,1293100000,1290400000,1291400000 7/18/2024,14:00,1291450000,1292050000,1288650000,1289250000 7/18/2024,15:00,1289250000,1289650000,1287350000,1288300000 7/18/2024,16:00,1288300000,1288300000,1284850000,1286100000 7/18/2024,17:00,1286100000,1286200000,1283800000,1285450000 7/18/2024,18:00,1285400000,1290950000,1284400000,1290400000 7/18/2024,19:00,1290400000,1292500000,1289650000,1292500000 7/19/2024,09:00,1290400000,1292050000,1289750000,1291200000 7/19/2024,10:00,1291250000,1293550000,1285300000,1287250000 7/19/2024,11:00,1287250000,1292800000,1286100000,1289950000 7/19/2024,12:00,1289900000,1292250000,1286250000,1288400000 7/19/2024,13:00,1288400000,1288950000,1284750000,1287350000 7/19/2024,14:00,1287300000,1287800000,1286150000,1287300000 
7/19/2024,15:00,1287300000,1288800000,1285750000,1286900000 7/19/2024,16:00,1286950000,1287050000,1282450000,1283350000 7/19/2024,17:00,1283350000,1284950000,1283000000,1284600000 7/19/2024,18:00,1284650000,1284700000,1283050000,1283400000 7/19/2024,19:00,1283350000,1283400000,1279000000,1279000000 """ # Use StringIO to simulate reading from a file df = pd.read_csv(StringIO(data), parse_dates=[['Date', 'Time']]) | You could take the negative values: import plotly.graph_objects as go fig = go.Figure(data=[go.Candlestick(x=df['Date_Time'], open=df['Open'], high=df['High'], low=df['Low'], close=df['Close'] )]) fig.show() fig = go.Figure(data=[go.Candlestick(x=df['Date_Time'], open=-df['Open'], high=-df['High'], low=-df['Low'], close=-df['Close'] )]) fig.show() | 1 | 2 |
79,258,896 | 2024-12-6 | https://stackoverflow.com/questions/79258896/how-to-do-an-advanced-grouping-in-pandas | The easiest way is to demonstrate my question with an example. Suppose I have the following long format data frame In [284]: import pandas as pd In [285]: data = pd.DataFrame({"day": [0,0,0,0,0,0,1,1,1,1,1,1], "cat1": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B", "B", "B"], "cat2":["1", "1", "2", "1", "2", "2", "1", "2", "1", "1", "2", "2"], "value": [10, 230, 32,12, 12, 65, 12, 34, 97, 0, 12,1]}) In [286]: data Out[286]: day cat1 cat2 value 0 0 A 1 10 1 0 A 1 230 2 0 A 2 32 3 0 B 1 12 4 0 B 2 12 5 0 B 2 65 6 1 A 1 12 7 1 A 2 34 8 1 B 1 97 9 1 B 1 0 10 1 B 2 12 11 1 B 2 1 Per day I have two categories. My goal is to aggregate the cat2 category in a specific way. For each tuple (date, cat1, cat2) I would like to perform the following: In [287]: data_day = data[data["day"]==0] In [288]: data_day_cat1 = data_day[data_day["cat1"]=="A"] In [289]: data_day_cat1_cat2 = data_day_cat1[data_day_cat1["cat2"]=="1"] In [290]: data_day_cat1_cat2["value"].pow(2).mean() Out[290]: np.float64(26500.0) In [291]: data_day_cat1_cat2 = data_day_cat1[data_day_cat1["cat2"]=="2"] In [292]: data_day_cat1_cat2["value"].pow(2).mean() Out[292]: np.float64(1024.0) That is on the first day, for cat1 being A, I want a single line for all occurrence of cat2, where the latter is like a "root mean square error". Currently I'm looping over all combination, but I was playing around with using groupby. However, something like: data.groupby(["day", "cat1", "cat2"])["value"].apply(lambda x: x**2).mean() Does work. What I would like to get is a DataFrame like this: day cat1 cat2 value 0 0 A 1 26500 1 0 A 2 1024 EDIT: Note, I want the complete DataFrame, was just too lazy to write down the whole data frame. Is this possible without looping over all day, cat1 and cat2? Could groupby be used? | You can create a new column with the square value and then do the groupby: data["value2"] = data["value"] * data["value"] gb = data.groupby(["day", "cat1", "cat2"])["value2"].mean() display(gb) | 1 | 1 |
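If you'd rather not create the temporary value2 column, the same numbers can be computed directly in the groupby; this is just a variation on the accepted answer, using the sample frame from the question.

import pandas as pd

data = pd.DataFrame({
    "day":   [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    "cat1":  ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B", "B", "B"],
    "cat2":  ["1", "1", "2", "1", "2", "2", "1", "2", "1", "1", "2", "2"],
    "value": [10, 230, 32, 12, 12, 65, 12, 34, 97, 0, 12, 1],
})

# Mean of the squared values per (day, cat1, cat2) group, no helper column needed.
result = (
    data.groupby(["day", "cat1", "cat2"])["value"]
        .agg(lambda s: s.pow(2).mean())
        .reset_index(name="value")
)
print(result.head(2))
#    day cat1 cat2    value
# 0    0    A    1  26500.0
# 1    0    A    2   1024.0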
79,261,474 | 2024-12-7 | https://stackoverflow.com/questions/79261474/python-generic-type-on-function-getting-lost-somewhere | Getting this typing error: error: Incompatible types in assignment (expression has type "object", variable has type "A | B") [assignment] With this code: from dataclasses import dataclass from typing import TypeVar, Mapping, reveal_type @dataclass class A: foo: str = "a" @dataclass class B: bar: str = "b" lookup_table: Mapping[str, type[A] | type[B]] = { "a": A, "b": B } reveal_type(lookup_table) # note: Revealed type is "typing.Mapping[builtins.str, Union[type[simple.A], type[simple.B]]]" T = TypeVar("T") def load(lookup_table: Mapping[str, type[T]], lookup_key:str) -> T: con: type[T] = lookup_table[lookup_key] instance: T = con() return instance example_a: A | B = load(lookup_table, "a") # error: Incompatible types in assignment (expression has type "object", variable has type "A | B") print(example_a) Edit: Logged a mypy bug here: https://github.com/python/mypy/issues/18265 | This is a mypy bug present in 1.13.0 and below (previously reported here and by OP here). pyright and basedmypy both accept the given snippet. mypy stores type[A | B] types as a union of types internally (type[A] | type[B]). This is usually convenient, but causes trouble when solving type[T] <: type[A] | type[B] for T, because most types aren't "distributive" (P[A, B] is not equivalent to P[A] | P[B]), and the type[...] special case isn't taken into account yet. The general solver produces meet(A, B) in such a case: solving type[T] <: type[A] gives T <: A, and solving type[T] <: type[B] gives T <: B, which together give T <: meet(A, B). When A and B have no explicit parent in common, the closest supertype is object. To understand the logic behind that, consider similar equations: x < 2 && x < 3 => x < min(2, 3); intersection/meet is for types what the min operation is for numbers (and union/join is similar to max). I submitted a PR to special-case this behaviour, so things may change in a future mypy release. | 4 | 2 |
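If you need the original snippet to pass type checking before a fixed mypy release lands, one low-tech workaround is an explicit cast at the call site. This is only a sketch that assumes the A, B, load and lookup_table definitions from the question are in scope, and it silences the symptom rather than fixing the solver behaviour.

from typing import cast

# At runtime cast() is a no-op; it only tells the type checker what we already know.
example_a = cast(A | B, load(lookup_table, "a"))
print(example_a)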
79,259,509 | 2024-12-6 | https://stackoverflow.com/questions/79259509/ffmpeg-piped-output-producing-incorrect-metadata-frame-count | The short version: Using piped output from ffmpeg produces a file with incorrect metadata. ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi to make an AVI file using the pipe output. ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi The output will show that the metadata does not match the actual frames contained in the video. Details below. Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great, however, the video files themselves have incorrect frame counts which can cause issues when I read from those videos in other code. Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count. Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe: from subprocess import Popen, PIPE video_path = 'test_mp4.mp4' ffmpeg_pipe = Popen(['ffmpeg', '-y', # Overwrite files '-i', f'{video_path}', # Input from file '-f', 'avi', # Output format '-c:v', 'libx264', # Codec '-'], # Output to pipe stdout=PIPE) new_path = "piped_video.avi" vid_file = open(new_path, "wb") vid_file.write(ffmpeg_pipe.stdout.read()) vid_file.close() I've tested several different videos. One small example video that I've tested can be found here. I've tried a few different codecs with avi format and tried libvpx with webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count read as -276701161105643264. Edit: This issue can also be reproduced with just ffmpeg in command prompt using the following command: ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds. import cv2 video_path = 'test_mp4.mp4' new_path = "piped_video.webm" cap = cv2.VideoCapture(video_path) print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}") cap.release() cap = cv2.VideoCapture(new_path) print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}") cap.release() The error can also be observed using ffprobe with the following command: ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi. Note that the frame rate and number of frames counted by ffprobe do not match with the duration from the metadata. 
For completeness, here is the ffmpeg output: ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers built with gcc 12.2.0 (Rev10, Built by MSYS2 project) configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint libavutil 58. 13.100 / 58. 13.100 libavcodec 60. 17.100 / 60. 17.100 libavformat 60. 6.100 / 60. 6.100 libavdevice 60. 2.100 / 60. 2.100 libavfilter 9. 8.101 / 9. 8.101 libswscale 7. 3.100 / 7. 3.100 libswresample 4. 11.100 / 4. 11.100 libpostproc 57. 2.100 / 57. 2.100 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2022-08-10T12:54:09.000000Z Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default) Metadata: creation_time : 2022-08-10T12:54:09.000000Z handler_name : Mainconcept MP4 Video Media Handler vendor_id : [0][0][0][0] encoder : AVC Coding Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264)) Press [q] to stop, [?] 
for help [libx264 @ 0000018c68c8b9c0] using SAR=1/1 [libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 [libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit Output #0, avi, to 'pipe:': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 ISFT : Lavf60.6.100 Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default) Metadata: creation_time : 2022-08-10T12:54:09.000000Z handler_name : Mainconcept MP4 Video Media Handler vendor_id : [0][0][0][0] encoder : Lavc60.17.100 libx264 Side data: cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A [out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060% frame= 200 fps=0.0 q=-1.0 Lsize= 85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x [libx264 @ 0000018c68c8b9c0] frame I:1 Avg QP:16.12 size: 3659 [libx264 @ 0000018c68c8b9c0] frame P:80 Avg QP:21.31 size: 647 [libx264 @ 0000018c68c8b9c0] frame B:119 Avg QP:26.74 size: 243 [libx264 @ 0000018c68c8b9c0] consecutive B-frames: 3.0% 53.0% 0.0% 44.0% [libx264 @ 0000018c68c8b9c0] mb I I16..4: 17.6% 70.6% 11.8% [libx264 @ 0000018c68c8b9c0] mb P I16..4: 0.8% 1.7% 0.6% P16..4: 17.6% 4.6% 3.3% 0.0% 0.0% skip:71.4% [libx264 @ 0000018c68c8b9c0] mb B I16..4: 0.1% 0.3% 0.2% B16..8: 11.7% 1.4% 0.4% direct: 0.6% skip:85.4% L0:32.0% L1:59.7% BI: 8.3% [libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4% [libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0% [libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17% [libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30% 3% 3% 4% 4% 4% 5% [libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16% 6% 8% 8% 8% 5% 6% [libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100% 0% 0% 0% [libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0000018c68c8b9c0] ref P L0: 76.2% 7.9% 11.2% 4.7% [libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9% 1.5% [libx264 @ 0000018c68c8b9c0] ref B L1: 97.7% 2.3% [libx264 @ 0000018c68c8b9c0] kb/s:101.19 So the question is: why does this happen, and how can one avoid it? | As I commented above, it is 100% due to outputting an AVI file to a pipe. Check out this part of the FFmpeg source code: https://github.com/FFmpeg/FFmpeg/blob/c893dcce312af152f21a54874f88576ad279e722/libavformat/avienc.c#L911 Specifically, the if block starting on Line 924 is skipped if you write to a pipe: if (pb->seekable & AVIO_SEEKABLE_NORMAL) { ... if (avi->riff_id == 1) { ... } else { ... avio_wl32(pb, nb_frames); ... } } This causes the piped output to miss some header attributes, including nb_frames, as the excerpt above indicates. In addition, some stream attributes are skipped as well (Lines 969-). So, what you are experiencing is indeed intentional, and it's highly unlikely the FFmpeg devs will consider this a bug. I was going to offer a Python script to fill in nb_frames manually in the retrieved bytearray, but other skipped fields may cause issues anyway. So, I suggest you just write the AVI file (MP4 is probably a better choice for PowerPoint nowadays, BTW) to a temp dir and read the output file back.
Something like this: from tempfile import TemporaryDirectory import subprocess as sp from os import path video_path = 'test_mp4.mp4' with TemporaryDirectory() as temp_dir: new_path = path.join(temp_dir, "piped_video.avi") sp.run(['ffmpeg', '-y', # Overwrite files '-i', f'{video_path}', # Input from file '-f', 'avi', # Output format '-c:v', 'libx264', # Codec new_path]) # Output to a file instead of a pipe with open(new_path, 'rb') as f: b = f.read() | 1 | 1 |
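Once the AVI is written to a real file, the header fields the answer describes get filled in, and that can be confirmed from Python with the same ffprobe check used in the question. A minimal sketch (it assumes ffprobe is on the PATH and reuses the new_path variable from the snippet above, so it would run inside the with block):

```python
import subprocess as sp

# Ask ffprobe for both the stored frame count (nb_frames, from the header)
# and the counted frame count (nb_read_frames, from decoding every frame).
probe = sp.run(
    ["ffprobe", "-v", "error", "-count_frames",
     "-show_entries", "stream=nb_frames,nb_read_frames,r_frame_rate,duration",
     "-of", "default=noprint_wrappers=1", new_path],
    capture_output=True, text=True,
)
print(probe.stdout)  # the two frame counts should now agree
```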
79,261,408 | 2024-12-7 | https://stackoverflow.com/questions/79261408/how-can-i-clean-a-year-column-with-messy-values | I have a project I'm working on for a data analysis course, where we pick a data set and go through the steps of cleaning and exploring the data with a question to answer in mind. I want to be able to see how many instances of the data occur in different years, but right now the Year column in the data set is set to a datatype object, with values spanning from whole years like 1998, just the last 2 digits like 87, ranges of presumed years ('early 1990's', '89 or 90', '2011- 2012', 'approx 2001'). I'm trying to determine the best way to convert all these various instances to the proper format or would it be better to drop the values that are not definitive? I worry that this would lead to too much data loss because the dataset is already pretty small (about 5000 rows total). I have looked into regex and it seems like that is the path I should go down to keep and alter the values, but I still don't understand it conceptually very well, and I worry about the efficiency of filtering for so many different value variations. I'm still very new to Python and pandas. | Assuming your Year columns are strings, I would write a normalize function like this: import re import pandas as pd data = [ {"year": "early 1990's"}, {"year": "89 or 90"}, {"year": "2011-2012"}, {"year": "approx 2001"}, ] def normalize(row): year = row["year"] # Count the number of digits count = len(re.findall("\\d", year)) if count == 4: # match YYYY if m := re.search("\\d\\d\\d\\d", year): return m.group(0) if count == 2: # match YY if m := re.search("\\d\\d", year): return "19" + m.group(0) df = pd.DataFrame(data) df["normalized"] = df.apply(normalize, axis=1) print(df) => year normalized 0 early 1990's 1990 1 89 or 90 None 2 2011-2012 None 3 approx 2001 2001 The function returns None for unmatched pattern. You can list them as follows: >>> print(df[df["normalized"].isnull()]) ... year normalized 1 89 or 90 None 2 2011-2012 None Review the output and modify the normalize function as you like. Repeat these steps until you get satisfied. | 1 | 1 |
79,261,490 | 2024-12-7 | https://stackoverflow.com/questions/79261490/how-to-adjust-the-size-of-one-subplot-independently-of-other-subplots-in-a-matpl | I want to have horizontally aligned 3D and 2D plots, where the y-axis of the 2D plot is the same height as the z-axis of the 3D plot. The following code produces a default output: import matplotlib.pyplot as plt fig = plt.figure() fig.set_size_inches(10, 4) fig.subplots_adjust(wspace=0.5) # 3D surface plot ax1 = fig.add_subplot(121, projection="3d") ax1.set(zlabel="Value") # 2D line plot ax2 = fig.add_subplot(122) ax2.set(ylabel="Value") plt.show() I want to shrink down the right-hand side 2D subplot so that the y-axis ("Value") is a similar height to (or preferably the same height as) the z-axis ("Value") on the 3D plot (ideally I want them to line up). I haven't been able to find a way to change subplot sizes independently like this in matplotlib (setting relative ratios and gridspec don't seem able to achieve this). Any help would be much appreciated. | You can use set_position() to change the dimensions of one of the subplots: plt.figure(1).axes[1].set_position([0.6,0.4,0.25,0.3]) # left, bottom, width, height It gives: | 2 | 1 |
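Applied to the question's own figure, the same call looks roughly like the snippet below. The four numbers are figure-fraction coordinates (left, bottom, width, height) and are illustrative guesses that would need hand-tuning to make the 2D y-axis line up with the 3D z-axis:

```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 4))
fig.subplots_adjust(wspace=0.5)

ax1 = fig.add_subplot(121, projection="3d")  # 3D surface plot
ax1.set(zlabel="Value")

ax2 = fig.add_subplot(122)                   # 2D line plot
ax2.set(ylabel="Value")

# Shrink and reposition only the 2D axes; the 3D axes keep their default slot.
ax2.set_position([0.60, 0.35, 0.30, 0.35])

plt.show()
```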
79,261,137 | 2024-12-7 | https://stackoverflow.com/questions/79261137/how-to-create-a-connected-2d-grid-graph | I have a 2-dimensional array that represents a grid. The (numpy) array is as follows: dx_grid =[[ "A", "B", "C"], [ "L", "M", "N"], [ "X", "Y", "Z"]] I want to convert that into the following: I know that grid_2d_graph can connect 4 adjacent nodes. For example, it would connect node M to B, L, N and Y BUT NOT to A, C, X, Z. How would I create such a graph using networkx in Python? | Something like this should work: import networkx as nx import matplotlib.pyplot as plt dx_grid =[[ "A", "B", "C"], [ "L", "M", "N"], [ "X", "Y", "Z"]] r, c = len(dx_grid), len(dx_grid[0]) g = nx.Graph() # add nodes for i in range(r): for j in range(c): g.add_node(dx_grid[i][j], pos=(j,r-i)) # add edges # for all nodes for i in range(r): for j in range(c): # connect all neighbors for k in range(-1,2): for l in range(-1,2): # check if neighbor node index is valid if i+k >= 0 and i+k < r and j+l >= 0 and j+l < c: if k == 0 and l == 0: continue # avoid self-loops g.add_edge(dx_grid[i][j], dx_grid[i+k][j+l]) # connect with neighbor node # get positions pos = nx.get_node_attributes(g,'pos') # draw network with nodes at given positions plt.figure(figsize=(6,6)) nx.draw(g, pos=pos, node_color='lightblue', with_labels=True, node_size=600) | 1 | 1 |
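A more compact variant of the same idea, in case it is useful: build the 8-neighbour ("king move") adjacency directly from index pairs. This is just a sketch — node positions for drawing would still need to be attached as in the answer above:

```python
import itertools
import networkx as nx

dx_grid = [["A", "B", "C"],
           ["L", "M", "N"],
           ["X", "Y", "Z"]]
r, c = len(dx_grid), len(dx_grid[0])

g = nx.Graph()
for (i, j), (k, l) in itertools.combinations(itertools.product(range(r), range(c)), 2):
    # two cells are neighbours if their row and column indices each differ by at most 1
    if abs(i - k) <= 1 and abs(j - l) <= 1:
        g.add_edge(dx_grid[i][j], dx_grid[k][l])

print(sorted(g.neighbors("M")))  # all eight surrounding letters
```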
79,259,256 | 2024-12-6 | https://stackoverflow.com/questions/79259256/how-to-create-an-altair-faceted-and-layered-chart-with-dual-axis | I'm trying to create an Altair faceted bar chart with lines that represent another measure and would make better use of a second y-axis on the right side. I don't know if it is possible using Altair. The code is as follows: import altair as alt import pandas as pd data = { 'Date': ['2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02'], 'Value': [50, 200, 300, 150, 250, 350, 200, 200, 10, 20, 15, 20, 20, 30, 20, 30], 'DESCR': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D', 'D'], 'Company': ['X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y', 'X', 'X', 'Y', 'Y'], } source = pd.DataFrame(data) bars = alt.Chart(source).mark_bar().transform_filter(f"(datum.DESCR=='A') | (datum.DESCR=='B')").encode( x='Date:N', y='Value:Q', color=alt.Color('DESCR:N', legend=alt.Legend(title=None)) ) lines = alt.Chart(source).mark_line().transform_filter(f"(datum.DESCR=='C') | (datum.DESCR=='D')").encode( x='Date:N', y='Value', stroke=alt.Stroke('DESCR', legend=alt.Legend(title=None), scale=alt.Scale(scheme='redblue')) ) chart = (bars+lines).facet(column='Company') chart I have tried .resolve_scale(y='independent') after .facet, but it doesn't show the second axis on the right side. Any help would be appreciated. | As per https://github.com/vega/vega-lite/issues/4373#issuecomment-617153232, you need some additional resolves: chart = ( (bars+lines) .resolve_scale(y='independent') # Create dual axis .facet(column='Company') .resolve_axis(y='independent') # Make sure dual axis works with facet (redraws the axis for each subplot) .resolve_scale(y='shared') # Set the y-max across facets to be the same (the default when not having dual axes) ) | 2 | 1 |
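If the right-hand axis also needs its own title or an explicit orientation, the line layer's y encoding can carry an axis definition of its own — a sketch of just that one change, with an assumed axis title:

```python
lines = alt.Chart(source).mark_line().transform_filter(
    "(datum.DESCR=='C') | (datum.DESCR=='D')"
).encode(
    x='Date:N',
    y=alt.Y('Value:Q', axis=alt.Axis(orient='right', title='Line value')),
    stroke=alt.Stroke('DESCR', legend=alt.Legend(title=None),
                      scale=alt.Scale(scheme='redblue')),
)
```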
79,261,118 | 2024-12-7 | https://stackoverflow.com/questions/79261118/turtle-snake-chain-project | Below is a Python code for my "Snake Project" (with Turtle and Tkinter). The purpose of this program is to create a chain of turtles, called followers, that follow each other, with the first turtle in the chain following a special turtle: the leader. The leader itself follows the movement of the mouse. It removes the last follower from the chain when the "Del" key is pressed. Since it is not possible to physically delete Followers from memory, when the last Follower in the chain is removed, it is added in the first position of another chain that contains all the "deleted" Followers. When a new Follower needs to be added to the chain, we first check if there are any in the deleted list and reuse one if available; otherwise, a new one must be created. However, with my Python code, only the last Turtle is deleted when I click on the "Del" key, even if I press it several times. (Normally, it should delete each time the last follower). I think the problem has to do with the method: remove_last_follower(self, event) of the "Leader" class. Do you have any idea how to resolve this problem? Thanks a lot for you support. My Python code: from turtle import Turtle, Screen, window_width, window_height # Importation du module Turtle D = 30 # Distance entre les tortues screen = Screen() class Follower(Turtle): def __init__(self, name): Turtle.__init__(self) self.shape("circle") self.color("lightblue") self.penup() self.Prev = None self.Next = None self.Name = "T" + str(name) self.coucou = False def move(self, x, y): angle = self.towards(x, y) self.setheading(angle) self.setposition(x,y) self.back(20) if self.Next: x, y = self.pos() self.Next.move(x,y) class Leader(Follower): def __init__(self): super().__init__('tkz') self.shape("turtle") self.color("lightgreen") self.Last = None self.is_moving = False self.freeze = False self.deleted_followers = [] def freeeze(self, event): self.freeze = not self.freeze def add_follower(self, name): if self.deleted_followers: new_follower = self.deleted_followers.pop() new_follower.showturtle() new_follower.goto(new_follower.Prev.pos()) new_follower.setheading(new_follower.Prev.heading()) else: new_follower = Follower(name) if self.Last == None: self.Last = new_follower self.Next = new_follower new_follower.Prev = self else: self.Last.Next = new_follower self.Last = new_follower new_follower.Prev = self.Last new_follower.Next = None def move(self, event): if self.freeze == False: x, y = event.x - window_width()/2, -event.y + window_height()/2 angle = self.towards(x, y) self.setheading(angle) self.goto(x, y) #self.forward(-20) if self.Next: self.Next.move(x, y) def on_move(self, event): if self.is_moving: return self.is_moving = True self.move(event) self.is_moving = False def remove_last_follower(self, event): if self.Last: last_follower = self.Last if last_follower.Prev: self.Last = last_follower.Prev self.Last.Next = None else: self.Last = None self.Next = None last_follower.hideturtle() self.deleted_followers.append(last_follower) screen.delay(0) try: le = Leader() except: le = Leader() screen.cv.bind("<Button-1>", le.add_follower) screen.cv.bind("<Button-3>", le.freeeze) screen.cv.bind("<Delete>", le.remove_last_follower) screen.cv.bind("<Motion>", le.on_move) screen.listen() screen.mainloop() | The linked list in add_follower isn't updated correctly. 
Change: self.Last.Next = new_follower self.Last = new_follower # self.Last updated new_follower.Prev = self.Last # should be the *old* version of self.Last To: self.Last.Next = new_follower temp = self.Last # save previous Last self.Last = new_follower # assign new Last new_follower.Prev = temp # assign previous correctly Or even: self.Last.Next = new_follower self.Last, new_follower.Prev = new_follower, self.Last | 2 | 1 |
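The order-of-assignment problem is easier to see outside the turtle program. A stand-alone sketch with a hypothetical Node class (not part of the question's code) shows why the tuple form works: the right-hand side is evaluated before anything is reassigned:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.Prev = None
        self.Next = None

last, new = Node("old_last"), Node("new")

# Buggy order: last is reassigned before it is used as Prev,
# so new.Prev would end up pointing at new itself.
#   last.Next = new; last = new; new.Prev = last

# Correct order via simultaneous assignment:
last.Next = new
last, new.Prev = new, last
print(new.Prev.name)  # -> old_last
```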