question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---
78,334,657 | 2024-4-16 | https://stackoverflow.com/questions/78334657/how-to-run-mypy-on-3rd-party-package-version-sensitive-code | I'm currently responsible to write library code that is both compatible with pydantic v1 and v2. Getting the code functional is more or less straightforward since you can make version sensitive choices in your code to satisfy your test suite, e.g. using patterns like this: import pydantic PYDANTIC_VERSION = packaging.version.parse(pydantic.__version__) if PYDANTIC_VERSION.major == 1: # do some v1 things else: # do some v2 things However, getting this code compatible with mypy type checks seems difficult since mypy will not resolve the PYDANTIC_VERSION variable and always run through both sides of the if blocks. This will constantly trigger mypy errors in both worlds inside our CI (where mypy is supposed to run against both versions). This makes total sense since mypy is not a runtime type checker, but I wonder if there any best practices to achieve such a goal. The issue isn't really unique to pydantic, that's just an example. I know that such version switches do work on the interpreter version (sys.version_info) but I cannot find a way to have mypy skip over an entire block of code if the version of pydantic is equal to either 1.x.x or 2.x.x I've searched issues on mypy and Stack Overflow, but given the thousands of issues and articles it's very hard to find existing (eventually solved or fixed) details on the topic. | --always-true and --always-false exist, so you can make a boolean flag named PYDANTIC_V1 and run Mypy twice on the same code: $ pip install "pydantic==1.*" $ mypy file.py --always-true=PYDANTIC_V1 $ pip install -U pydantic $ mypy file.py --always-false=PYDANTIC_V1 | 3 | 3 |
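The --always-true/--always-false trick above requires the library code to branch on a plain module-level boolean rather than comparing PYDANTIC_VERSION inline. A minimal sketch of what that flag could look like (the PYDANTIC_V1 name and the v1/v2 Config examples are illustrative, not a fixed API):

```python
import packaging.version
import pydantic
from pydantic import BaseModel

# Module-level flag that mypy is told about via
#   --always-true=PYDANTIC_V1   (when pydantic 1.x is installed)
#   --always-false=PYDANTIC_V1  (when pydantic 2.x is installed)
PYDANTIC_V1 = packaging.version.parse(pydantic.__version__).major == 1

if PYDANTIC_V1:
    class Item(BaseModel):
        name: str

        class Config:               # v1-only configuration style
            allow_mutation = False
else:
    from pydantic import ConfigDict

    class Item(BaseModel):
        model_config = ConfigDict(frozen=True)  # v2-only configuration style
        name: str
```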
78,335,103 | 2024-4-16 | https://stackoverflow.com/questions/78335103/overload-typing-for-variable-amount-of-arguments-args-or-kwargs | Example is below, need to make sure IDE type checker or reveal_type would identify k, j and i types correctly. Perhaps there is some way to suggest the typing that args is an empty tuple and kwargs empty dict and the return value then would be tuple[int]? from typing import Union, overload def test(*args: int, **kwargs: str) -> Union[int, str, tuple[int]]: if args: return 5 if kwargs: return "5" return (5,) # now all are Union[int, str, tuple[int]] k = test(1) j = test(i="1") i = test() reveal_type(k) # should be int reveal_type(j) # should be str reveal_type(i) # should be tuple[int] | Here's a possible solution that allows mypy to infer the correct types for k, j, and i. This approach does result in mypy complaining about the overload signatures overlapping. I'm not sure if it's possible to address that besides simply suppressing the errors. from typing import Union, overload @overload def test() -> tuple[int]: # type: ignore[overload-overlap] ... @overload def test(*args: int) -> int: # type: ignore[overload-overlap] ... @overload def test(**kwargs: str) -> str: ... def test(*args: int, **kwargs: str) -> Union[int, str, tuple[int]]: if args: return 5 if kwargs: return "5" return (5,) # now all are Union[int, str, tuple[int]] k = test(1) j = test(i="1") i = test() reveal_type(k) # should be int reveal_type(j) # should be str reveal_type(i) # should be tuple[int] Output from mypy --strict v1.9.0 (try it online!): main.py:29: note: Revealed type is "builtins.int" main.py:30: note: Revealed type is "builtins.str" main.py:31: note: Revealed type is "tuple[builtins.int]" Success: no issues found in 1 source file | 2 | 3 |
78,334,193 | 2024-4-16 | https://stackoverflow.com/questions/78334193/pandas-rolling-sum-with-a-maximum-number-of-valid-observations-in-a-window | I am looking for help to speed up a rolling calculation in pandas which would compute a rolling average with a predefined maximum number of most recent observations. Here is code to generate an example frame and the frame itself: import pandas as pd import numpy as np tmp = pd.DataFrame( [ [11.1]*3 + [12.1]*3 + [13.1]*3 + [14.1]*3 + [15.1]*3 + [16.1]*3 + [17.1]*3 + [18.1]*3, ['A', 'B', 'C']*8, [np.nan]*6 + [1, 1, 1] + [2, 2, 2] + [3, 3, 3] + [np.nan]*9 ], index=['Date', 'Name', 'Val'] ) tmp = tmp.T.pivot(index='Date', columns='Name', values='Val') Name A B C Date 11.1 NaN NaN NaN 12.1 NaN NaN NaN 13.1 1 1 1 14.1 2 2 2 15.1 3 3 3 16.1 NaN NaN NaN 17.1 NaN NaN NaN 18.1 NaN NaN NaN I would like to obtain this result: Name A B C Date 11.1 NaN NaN NaN 12.1 NaN NaN NaN 13.1 1.0 1.0 1.0 14.1 1.5 1.5 1.5 15.1 2.5 2.5 2.5 16.1 2.5 2.5 2.5 17.1 3.0 3.0 3.0 18.1 NaN NaN NaN Attempted Solution I tried the following code and it works, but its performance is very bad for data sets that I am stuck with in practice. tmp.rolling(window=3, min_periods=1).apply(lambda x: x[~np.isnan(x)][-2:].mean(), raw=True) Calculation above applied to a 3k x 50k frame takes about 20 minutes... Maybe there is a more elegant and faster way to obtain the same result? Maybe using a combination of multiple rolling computation results or something with groupby? Versions Python - 3.9.13, pandas - 2.0.3 and numpy - 1.25.2 | One idea is use numba for faster count output in Rolling.apply by parameter engine='numba': (tmp.rolling(window=3, min_periods=1) .apply(lambda x: x[~np.isnan(x)][-2:].mean(), engine='numba', raw=True)) Test performance: tmp = pd.concat([tmp] * 100000, ignore_index=True) In [88]: %timeit tmp.rolling(window=3, min_periods=1).apply(lambda x: x[~np.isnan(x)][-2:].mean(),engine='numba', raw=True) 901 ms Β± 6.76 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) In [89]: %timeit tmp.rolling(window=3, min_periods=1).apply(lambda x: x[~np.isnan(x)][-2:].mean(), raw=True) 13 s Β± 181 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Numpy approach: You can convert DataFrame to 3d array with append first NaNs values, then shift non NaNs and get means: #https://stackoverflow.com/a/44559180/2901002 def justify_nd(a, invalid_val, axis, side): """ Justify ndarray for the valid elements (that are not invalid_val). Parameters ---------- A : ndarray Input array to be justified invalid_val : scalar invalid value axis : int Axis along which justification is to be made side : str Direction of justification. Must be 'front' or 'end'. So, with 'front', valid elements are pushed to the front and with 'end' valid elements are pushed to the end along specified axis. 
""" pushax = lambda a: np.moveaxis(a, axis, -1) if invalid_val is np.nan: mask = ~np.isnan(a) else: mask = a!=invalid_val justified_mask = np.sort(mask,axis=axis) if side=='front': justified_mask = np.flip(justified_mask,axis=axis) out = np.full(a.shape, invalid_val) if (axis==-1) or (axis==a.ndim-1): out[justified_mask] = a[mask] else: pushax(out)[pushax(justified_mask)] = pushax(a)[pushax(mask)] return out from numpy.lib.stride_tricks import sliding_window_view as swv window_size = 3 N = 2 a = tmp.astype(float).to_numpy() arr = np.vstack([np.full((window_size-1,a.shape[1]), np.nan),a]) out = np.nanmean(justify_nd(swv(arr, window_size, axis=0), invalid_val=np.nan, axis=2, side='end')[:, :, -N:], axis=2) print (out) [[nan nan nan] [nan nan nan] [1. 1. 1. ] [1.5 1.5 1.5] [2.5 2.5 2.5] [2.5 2.5 2.5] [3. 3. 3. ] [nan nan nan]] df = pd.DataFrame(out, index=tmp.index, columns=tmp.columns) print (df) Name A B C Date 11.1 NaN NaN NaN 12.1 NaN NaN NaN 13.1 1.0 1.0 1.0 14.1 1.5 1.5 1.5 15.1 2.5 2.5 2.5 16.1 2.5 2.5 2.5 17.1 3.0 3.0 3.0 18.1 NaN NaN NaN Performance: tmp = pd.concat([tmp] * 100000, ignore_index=True) In [99]: %%timeit ...: a = tmp.astype(float).to_numpy() ...: arr = np.vstack([np.full((window_size-1,a.shape[1]), np.nan),a]) ...: ...: out = np.nanmean(justify_nd(swv(arr, window_size, axis=0), ...: invalid_val=np.nan, axis=2, side='end')[:, :, -N:], axis=2) ...: ...: df = pd.DataFrame(out, index=tmp.index, columns=tmp.columns) ...: 338 ms Β± 4.61 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) | 2 | 3 |
78,332,877 | 2024-4-16 | https://stackoverflow.com/questions/78332877/typeerror-cannot-unpack-non-iterable-multipoint-object | In my python app I am using Shapely. Invoking the function below: def get_t_start(t_line: geometry.LineString): print('get_t_start', t_line.boundary) p1, p2 = t_line.boundary t_start = p1 if p1.y < p2.y else p2 return t_start produces the following output: get_t_start MULTIPOINT (965 80, 1565 1074) Traceback (most recent call last): ... File "/sites/mpapp.py", line 13, in get_t_start p1, p2 = t_line.boundary TypeError: cannot unpack non-iterable MultiPoint object From the print of t_line.boundary I guess the object is ok. I am sure I have used this object (MULTIPOINT) like this in other apps to get the boundary points. I really can't figure out why now this is not working. | That's most likely because your apps are running different versions of shapely. Starting with Shapely 1.8, iteration over multi-part geometries (like a MultiPoint) is deprecated and was removed in Shapely 2.0 (read more). So, you just need to access the boundary's geoms : from shapely import LineString, Point def get_t_start(t_line: LineString) -> Point: print("get_t_start", t_line.boundary) p1, p2 = t_line.boundary.geoms # << here t_start = p1 if p1.y < p2.y else p2 return t_start Output : from shapely import from_wkt line = from_wkt("LINESTRING (965 80, 1565 1074)") >>> get_t_start(line) # get_t_start MULTIPOINT (965 80, 1565 1074) # + a display of the point | 2 | 3 |
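If the same function also has to keep working on environments that still run Shapely 1.x, one version-agnostic option (a sketch, assuming the line is not closed) is to skip the boundary entirely and read the endpoints from the coordinate sequence, which behaves the same way in 1.x and 2.x:

```python
from shapely.geometry import LineString, Point

def get_t_start(t_line: LineString) -> Point:
    # coords[0] and coords[-1] are the endpoints of an open LineString
    # in both Shapely 1.x and 2.x
    p1, p2 = Point(t_line.coords[0]), Point(t_line.coords[-1])
    return p1 if p1.y < p2.y else p2

print(get_t_start(LineString([(965, 80), (1565, 1074)])))  # POINT (965 80)
```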
78,332,814 | 2024-4-16 | https://stackoverflow.com/questions/78332814/find-all-differences-between-groups-in-polars-dataframe | I have one polars dataframe and I am trying to find the differences (fields that have their values changed) on multiple columns between groups on one key. There can be multiple groups in the dataframe and more than one column. The groups is essentially a datetime in int format (YYYYMMDD) How can I find the rows where there is a new value of (any) column on a date? Sample Data: raw_df = pl.DataFrame([ {'id': 'AAPL','update_time': 20241112,'status':'trading', 'underlying': 'y'}, {'id': 'MSFT','update_time': 20241113,'status': 'trading', 'underlying': 'x'}, {'id': 'NVDA','update_time': 20241112,'status': 'trading', 'underlying': 'z'}, {'id': 'MSFT','update_time': 20241112,'status': 'pending','underlying': 'x'}, {'id': 'AAPL','update_time': 20241113,'status': 'trading', 'underlying': 'y'}, {'id': 'NVDA','update_time': 20241113,'status': 'trading', 'underlying': 'z'}, {'id': 'TSLA','update_time': 20241112,'status': 'closed', 'underlying': 'v'}, ] ) expected_df = pl.DataFrame([ {'id': 'MSFT','update_time': 20241112,'status':'pending', 'underlying': 'x'}, {'id': 'MSFT','update_time': 20241113,'status': 'trading', 'underlying': 'x'}, ] ) Below is what the data input looks like. shape: (7, 4) ββββββββ¬ββββββββββββββ¬ββββββββββ¬βββββββββββββ β id β update_time β status β underlying β β --- β --- β --- β --- β β str β i64 β str β str β ββββββββͺββββββββββββββͺββββββββββͺβββββββββββββ‘ β AAPL β 20241112 β trading β y β β MSFT β 20241113 β trading β x β β NVDA β 20241112 β trading β z β β MSFT β 20241112 β pending β x β β AAPL β 20241113 β trading β y β β NVDA β 20241113 β trading β z β β TSLA β 20241112 β closed β v β ββββββββ΄ββββββββββββββ΄ββββββββββ΄βββββββββββββ And the expected result below, showing the id the update time and the field that has changed. If there are more than 1 field changed/updated, ideally it should go into a new row. I am trying to find the fields that have changed, grouped by 'update_time' on the key 'id'. One caveat is that there can be ids' that are present in a group but not in another group for example 'TSLA'. Hence, these ids that are not common or the intersection between groups can be ignored. Since only MSFT has the status changed, it should be filtered to those two rows where it has been updated. The field changes should only be done on all other columns except update_time which is what we are using to group by. shape: (2, 3) ββββββββ¬ββββββββββββββ¬ββββββββββ β id β update_time β status β β --- β --- β --- β β str β i64 β str β ββββββββͺββββββββββββββͺββββββββββ‘ β MSFT β 20241112 β pending β β MSFT β 20241113 β trading β ββββββββ΄ββββββββββββββ΄ββββββββββ΄ Could not figure out how to do this, but this is the closest i have, which could not work on the caveat mentioned earlier. 
def find_updated_field_differences(df): columns_to_check = [col for col in df.columns if col != 'id' and col != 'update_time'] sorted_df = df.sort('update_time') grouped_df = sorted_df.groupby(["update_time"]) result_data = [] for group_key, group_df in grouped_df: print(group_df) for col in columns_to_check: group_df = group_df.with_columns( (pl.col(col) != pl.col(col).shift()).alias(f"{col}_changed") ) differing_rows = group_df.filter( pl.any([pl.col(f"{col}_changed") for col in columns_to_check]) ) result_data.append(differing_rows) differing_df = pl.concat(result_data) differing_df = differing_df.sort("id") return differing_df | Your approach is slightly over-complicating things. What I suggest is that you first sort the data by id and update_time, then shift the data to prepare for comaparison. After that, you can identify the rows where id is the same but where there is a difference: import polars as pl def find_updated_field_differences(df): sorted_df = df.sort(['id', 'update_time']) shifted_df = sorted_df.shift(-1) mask = ( (sorted_df['id'] == shifted_df['id']) & ((sorted_df['status'] != shifted_df['status']) | (sorted_df['underlying'] != shifted_df['underlying'])) ) start_changes = sorted_df.filter(mask) end_changes = sorted_df.shift(-1).filter(mask) differing_df = pl.concat([start_changes, end_changes]).unique() return differing_df.sort(['id', 'update_time']) raw_df = pl.DataFrame([ {'id': 'AAPL', 'update_time': 20241112, 'status': 'trading', 'underlying': 'y'}, {'id': 'MSFT', 'update_time': 20241113, 'status': 'trading', 'underlying': 'x'}, {'id': 'NVDA', 'update_time': 20241112, 'status': 'trading', 'underlying': 'z'}, {'id': 'MSFT', 'update_time': 20241112, 'status': 'pending', 'underlying': 'x'}, {'id': 'AAPL', 'update_time': 20241113, 'status': 'trading', 'underlying': 'y'}, {'id': 'NVDA', 'update_time': 20241113, 'status': 'trading', 'underlying': 'z'}, {'id': 'TSLA', 'update_time': 20241112, 'status': 'closed', 'underlying': 'v'}, ]) result_df = find_updated_field_differences(raw_df) print(result_df) which gives you shape: (2, 4) ββββββββ¬ββββββββββββββ¬ββββββββββ¬βββββββββββββ β id β update_time β status β underlying β β --- β --- β --- β --- β β str β i64 β str β str β ββββββββͺββββββββββββββͺββββββββββͺβββββββββββββ‘ β MSFT β 20241112 β pending β x β β MSFT β 20241113 β trading β x β | 3 | 1 |
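A window-function variant of the same idea (a sketch; unlike the answer above it returns every row of an id whose status or underlying takes more than one value across update times, which coincides with the expected output here):

```python
import polars as pl

changed = (
    raw_df.filter(
        (pl.col("status").n_unique().over("id") > 1)
        | (pl.col("underlying").n_unique().over("id") > 1)
    )
    .sort(["id", "update_time"])
)
print(changed)  # the two MSFT rows
```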
78,327,110 | 2024-4-15 | https://stackoverflow.com/questions/78327110/install-gmp-library-on-mac-os-and-pycharm | I'm trying to run my Cython project. And one of the header is gmpxx.h. Even though I already installed the gmp library using brew install gmp. I could not run my cython file with python3 setup.py build_ext --inplace. fatal error: 'gmpxx.h' file not found #include <gmpxx.h> ^~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 So I use brew list gmp to check the location of the gmpxx.h header. So it is actually inside /opt/homebrew/Cellar/gmp/6.3.0/include/ folder. With Xcode, I can just add the location into the header search paths. But I'm trying to do the same thing with Pycharm. How do I add the location of my gmpxx.h header to pycharm? I need a little help. Please kindly give me your take. Thank you. | So I added the library location directly into setup.py file. from setuptools import setup, Extension, find_packages from Cython.Build import cythonize extension = Extension( "Class Name", sources=["Something" ], include_dirs=[ "/opt/homebrew/Cellar/gmp/6.3.0/include", # Include directory for GMP headers "/opt/homebrew/Cellar/gmp/6.3.0/include/gmp", # Additional include directory for GMP headers ], libraries=["gmpxx", "gmp"], library_dirs=[ "/opt/homebrew/Cellar/gmp/6.3.0/lib", # Directory containing libgmpxx.dylib ], extra_compile_args=["-std=c++17", "-O3"], language="c++" ) setup( name="Class Name", version="0.1", packages=find_packages(where="src"), package_dir={"": "src"}, package_data={"Class Name": ["*.pyi"]}, ext_modules=cythonize(extension, compiler_directives={"language_level":3}, annotate=True), zip_safe=False ) | 2 | 2 |
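Hard-coding /opt/homebrew/Cellar/gmp/6.3.0 in setup.py will break the next time Homebrew bumps the GMP version; a hedged sketch that asks Homebrew for the current prefix at build time instead (assumes brew is on PATH and keeps the placeholder names from the question):

```python
import subprocess
from setuptools import Extension

# "brew --prefix gmp" prints a stable path such as /opt/homebrew/opt/gmp
# that always points at the currently installed GMP keg.
gmp_prefix = subprocess.check_output(["brew", "--prefix", "gmp"], text=True).strip()

extension = Extension(
    "Class Name",            # placeholder module name from the question
    sources=["Something"],   # placeholder sources from the question
    include_dirs=[f"{gmp_prefix}/include"],
    library_dirs=[f"{gmp_prefix}/lib"],
    libraries=["gmpxx", "gmp"],
    extra_compile_args=["-std=c++17", "-O3"],
    language="c++",
)
```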
78,326,026 | 2024-4-15 | https://stackoverflow.com/questions/78326026/how-to-log-output-of-script | I want to log the output of my python file into a txt. I also want to still see it in the terminal too. I tried using sys.stdout, but it did not still have terminal output. To log it I open my log file with f = open("log.txt", "r+"), then set sys.stdout = f. When the code ended, I said f.close() to write changes. Here is some code: import sys f = open("files/log.txt", "r+") sys.stdout = f print("You will not see this in terminal") f.close() | You can set sys.stdout to a file-like object with a write method that writes to both the original sys.stdout and a given file: import sys class Tee: def __init__(self, file): self.file = file def write(self, text): self.orig_stdout.write(text) self.file.write(text) def start(self): self.orig_stdout = sys.stdout sys.stdout = self def flush(self): self.orig_stdout.flush() def close(self): sys.stdout = self.orig_stdout __enter__ = start def __exit__(self, exc_type, exc_val, exc_tb): self.close() so that: with open('test.txt', 'w') as file, Tee(file): print('both terminal and file') print('terminal only') would output to the terminal: both terminal and file terminal only and output to the file: both terminal and file If you mean to redirect the output of an entire program to both the terminal and a file, it would be safer to make sure that the file is properly closed upon the program's termination, normal or not, by registering the file closer as an exit handler with atexit.register. On the other hand, it isn't necessary to explicitly restore sys.stdout upon exit, since the OS, if not the interpreter, will always do that for you: import atexit file = open('test.txt', 'w') atexit.register(file.close) Tee(file).start() print('both terminal and file') | 2 | 1 |
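If the real goal is a program-wide log rather than redirecting print(), the standard logging module can write to the terminal and a file at the same time without touching sys.stdout; a minimal sketch (the files/log.txt path follows the question):

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    handlers=[
        logging.StreamHandler(),               # terminal
        logging.FileHandler("files/log.txt"),  # log file
    ],
)

logging.info("This line goes to both the terminal and files/log.txt")
```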
78,331,504 | 2024-4-16 | https://stackoverflow.com/questions/78331504/list-comprehension-to-return-list-if-value-is-non-existent | I'm aiming to use list comprehension to return values in a list. Specifically, if 'x' is in a list, I want to drop all other values. However, if 'x' is not in the list, I want to return the same values (not return an empty list). list1 = ['d','x','c'] list2 = ['d','b','c'] list1 = [s for s in list1 if s == 'x'] list2 = [s for s in list2 if s == 'x'] List2 would return []. Where I want it to be ['d','b','c'] list2 = [s for s in list2 if s == 'x' else list2] Returns: list2 = [s for s in list2 if s == 'x' else list2] ^^^^ SyntaxError: invalid syntax | This would keep all elements that aren't x list2 = [x for x in list2 if x != 'x'] However, if x is in the list, it'll still return all other elements. So, you'd need two passes to check whether x does exist since list comprehension alone cannot return that information def filter_x(lst): if 'x' in lst: return [x for x in lst if x == 'x'] else: return lst | 2 | 3 |
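Because an empty list is falsy, the same behaviour also fits the single-expression style the question was aiming for, with or as the fallback (a sketch):

```python
list1 = ['d', 'x', 'c']
list2 = ['d', 'b', 'c']

list1 = [s for s in list1 if s == 'x'] or list1   # -> ['x']
list2 = [s for s in list2 if s == 'x'] or list2   # -> ['d', 'b', 'c']
```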
78,329,495 | 2024-4-15 | https://stackoverflow.com/questions/78329495/what-is-the-equivalent-of-numpy-accumulate-ufunc-in-pytorch | In numpy, I can do the following: >>> x = np.array([[False, True, False], [True, False, True]]) >>> z0 = np.logical_and.accumulate(x, axis=0) >>> z1 = np.logical_and.accumulate(x, axis=1) This returns the following: >>> z0 array([[False, True, False], [False, False, False]]) >>> z1 array([[False, False, False], [ True, False, False]]) What is the equivalent of this ufunc operation in pytorch? | The logical and corresponds to a product in binary terms. You can use cumprod for that: >>> x.cumprod(dim=0).bool() tensor([[False, True, False], [False, False, False]]) >>> x.cumprod(dim=1).bool() tensor([[False, False, False], [ True, False, False]]) | 2 | 2 |
78,329,714 | 2024-4-15 | https://stackoverflow.com/questions/78329714/pandas-groupby-string-field-and-select-by-time-of-day-range | I have a dataset like this index Date_Time Pass_ID El 0 3/30/23 05:12:36.36 A 1 1 3/30/23 05:12:38.38 A 2 1 3/30/23 05:12:40.40 A 3 1 3/30/23 05:12:42.42 A 4 1 3/30/23 05:12:44.44 A 4 1 3/30/23 12:12:50.50 B 3 1 3/30/23 12:12:52.52 B 4 1 3/30/23 12:12:54.54 B 5 1 3/30/23 12:12:56.56 B 6 1 3/30/23 12:12:58.58 B 7 1 3/30/23 12:13:00.00 B 8 1 3/30/23 12:13:02.02 B 9 1 3/31/23 20:02:02.02 C 3 1 3/31/23 20:02:05.05 C 4 The Date_Time is pandas datetime object. I'd like to group the records by Pass_ID, and then select out only those unique Pass_IDs that occur between specific hours in the day: for instance, between 10:00 and 13:00 would return B. I don't know how to get groupby and 'between_time' to work in this case... which would seem to be the best way forward. I've also tried using a lambda function after setting the Date_Time as the index, but that didn't work. And using aggregate doesn't seem to allow me to pull out the dt.hour of the Date_Time field. Anyone know how to do this concisely? | Try: # to datetime if necessary # df["Date_Time"] = pd.to_datetime(df["Date_Time"]) out = df.set_index("Date_Time").between_time("10:00", "13:00")["Pass_ID"].unique() print(out) Prints: ['B'] OR: If you want to filter whole groups between time 10:00-13:00: out = ( df.groupby("Pass_ID") .filter( lambda x: len(x.set_index("Date_Time").between_time("10:00", "13:00")) == len(x) )["Pass_ID"] .unique() ) print(out) | 2 | 3 |
78,315,455 | 2024-4-12 | https://stackoverflow.com/questions/78315455/fastapi-error-when-using-annotated-in-class-dependencies | FastAPI added support for Annotated (and started recommending it) in version 0.95.0. Additionally, FastAPI has a very powerful but intuitive Dependency Injection system (documentation). Moreover, FastAPI support Classes as Dependencies. However, it seams like Annotated cannot be used in class dependencies, but on function dependencies. I use FastAPI version 0.110.1. from __future__ import annotations from typing import Annotated from fastapi import FastAPI, Depends, Query app = FastAPI() class ClassDependency: def __init__(self, name: Annotated[str, Query(description="foo")]): self.name = name async def function_dependency(name: Annotated[str, Query(description="foo")]) -> dict: return {"name": name} @app.get("/") async def search(c: Annotated[ClassDependency, Depends(function_dependency)]) -> dict: return {"c": c.name} The example above works without errors, but if I replace Depends(function_dependency) with Depends(ClassDependency) an exception is raised with the following message: pydantic.errors.PydanticUndefinedAnnotation: name 'Query' is not defined Then, if I remove Annotated from the ClassDependency, by replacing the name: Annotated[str, Query(description="foo")] with the name: str, the example works. My question: Can I use Class dependencies and put Annotated to the parameters set in the constructor? Because it seams this is not working. My need: I want to have a class hierarchy for the query params of my api endpoints and provide validation and documentation extras for each of the param. | Remove the following and it will work. from __future__ import annotations | 3 | 2 |
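Once the future import is removed, the Annotated query type can also be named once and shared between the class dependency and any function dependencies, which keeps the signatures short (a sketch; NameQuery is just an illustrative alias):

```python
from typing import Annotated

from fastapi import Depends, FastAPI, Query

NameQuery = Annotated[str, Query(description="foo")]

app = FastAPI()


class ClassDependency:
    def __init__(self, name: NameQuery):
        self.name = name


@app.get("/")
async def search(c: Annotated[ClassDependency, Depends(ClassDependency)]) -> dict:
    return {"c": c.name}
```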
78,328,020 | 2024-4-15 | https://stackoverflow.com/questions/78328020/derotation-algorithm-in-2d-space | I have an array of coordinates in x and y, relative to the movement of an object in a constantly rotating circular environment (10 rpm). How can I disentangle between the movement of the object from that of the environment? I tried polar coordinates, speed and movement vectors, and I get results, but I'd like to know if someone knows the right way to go. Mainly, I derotate every point according to the distance from the center of rotation, converting to polar coordinates, creating an array of derotated positions. Then I calculate the movement vector between each original_points[idx] and derotated_points[idx+1]. Then, I take the first original point as the starting point for a new array, and to that I add the first vector, storing the new location. Which will be the starting point for adding the second vector, and so on... Seems reasonable but I'd like to know if there are other methods. | This is a really nice area and a good question about matrix transformations. If you wish to know about this, do read this course: https://graphics.stanford.edu/courses/cs248-01/ Now, Let's start by creating some sample data: import numpy as np import matplotlib.pyplot as plt import pandas as pd rpm = 10 omega = rpm * 2 * np.pi / 60 time = np.linspace(0, 60, 60) speed = 0.1 x_linear = speed * time y_linear = np.zeros_like(time) x_rotated = x_linear * np.cos(omega * time) - y_linear * np.sin(omega * time) y_rotated = x_linear * np.sin(omega * time) + y_linear * np.cos(omega * time) df = pd.DataFrame({ 'time': time, 'x': x_rotated, 'y': y_rotated }) print(df) Here, I assume that rpm = 10, the angular velocity, in rad(s, is given as omega = rpm * 2 * np.pi / 60 and that the speed of the obejct is speed = 0.1 units/sec. This gives you time x y 0 0.000000 0.000000 0.000000e+00 1 1.016949 0.049276 8.895896e-02 2 2.033898 -0.107882 1.724206e-01 3 3.050847 -0.304652 -1.623727e-02 4 4.067797 -0.177888 -3.658219e-01 5 5.084746 0.292265 -4.160862e-01 6 6.101695 0.606713 6.485704e-02 7 7.118644 0.276790 6.558492e-01 8 8.135593 -0.502393 6.399064e-01 9 9.152542 -0.903602 -1.455835e-01 10 10.169492 -0.344989 -9.566443e-01 11 11.186441 0.736640 -8.418588e-01 12 12.203390 1.192763 2.579585e-01 13 13.220339 0.381661 1.265744e+00 ... 56 56.949153 -5.686844 3.030958e-01 57 57.966102 -3.074643 -4.913986e+00 58 58.983051 2.858029 -5.159620e+00 59 60.000000 6.000000 -5.732833e-14 Now, to derotate (transform the above matrix); you need to define the following function (for your reference, Gilbert Strand's Linear Algebra and Its Applications is a perfect book. 
def derotate_coordinates(df, omega): df['x_derotated'] = df['x']*np.cos(-omega*df['time']) - df['y']*np.sin(-omega*df['time']) df['y_derotated'] = df['x']*np.sin(-omega*df['time']) + df['y']*np.cos(-omega*df['time']) return df which applied df_derotated = derotate_coordinates(df.copy(), omega) print(df_derotated) will give you time x y x_derotated y_derotated 0 0.000000 0.000000 0.000000e+00 0.000000 0.000000e+00 1 1.016949 0.049276 8.895896e-02 0.101695 0.000000e+00 2 2.033898 -0.107882 1.724206e-01 0.203390 -1.387779e-17 3 3.050847 -0.304652 -1.623727e-02 0.305085 0.000000e+00 4 4.067797 -0.177888 -3.658219e-01 0.406780 0.000000e+00 5 5.084746 0.292265 -4.160862e-01 0.508475 0.000000e+00 6 6.101695 0.606713 6.485704e-02 0.610169 0.000000e+00 7 7.118644 0.276790 6.558492e-01 0.711864 0.000000e+00 8 8.135593 -0.502393 6.399064e-01 0.813559 0.000000e+00 9 9.152542 -0.903602 -1.455835e-01 0.915254 2.775558e-17 10 10.169492 -0.344989 -9.566443e-01 1.016949 -5.551115e-17 11 11.186441 0.736640 -8.418588e-01 1.118644 0.000000e+00 12 12.203390 1.192763 2.579585e-01 1.220339 0.000000e+00 13 13.220339 0.381661 1.265744e+00 1.322034 -5.551115e-17 ... 56 56.949153 -5.686844 3.030958e-01 5.694915 0.000000e+00 57 57.966102 -3.074643 -4.913986e+00 5.796610 0.000000e+00 58 58.983051 2.858029 -5.159620e+00 5.898305 -4.440892e-16 59 60.000000 6.000000 -5.732833e-14 6.000000 0.000000e+00 If you want to visualize this plt.figure(figsize=(12, 6)) plt.subplot(1, 2, 1) plt.plot(df['x'], df['y'], 'ro-') plt.title('Path in Rotating Frame') plt.xlabel('X') plt.ylabel('Y') plt.axis('equal') plt.subplot(1, 2, 2) plt.plot(df_derotated['x_derotated'], df_derotated['y_derotated'], 'bo-') plt.title('Path After Derotation') plt.xlabel('X') plt.ylabel('Y') plt.axis('equal') plt.show() which gives Update: In 3D, just because this is a fascinating topic In this case, let's define a rotating spiral and work in polar coordinates: import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D t = np.linspace(0, 4 * np.pi, 100) x = t * np.sin(t) y = t * np.cos(t) z = t original_points = np.vstack([x, y, z]) def rotation_matrix_y(theta): cos_theta, sin_theta = np.cos(theta), np.sin(theta) return np.array([ [cos_theta, 0, sin_theta], [0, 1, 0], [-sin_theta, 0, cos_theta] ]) def rotation_matrix_z(theta): cos_theta, sin_theta = np.cos(theta), np.sin(theta) return np.array([ [cos_theta, -sin_theta, 0], [sin_theta, cos_theta, 0], [0, 0, 1] ]) theta_z = np.radians(45) theta_y = np.radians(30) rot_matrix_z = rotation_matrix_z(theta_z) rot_matrix_y = rotation_matrix_y(theta_y) combined_rot_matrix = rot_matrix_y @ rot_matrix_z rotated_points = combined_rot_matrix @ original_points inverse_combined_rot_matrix = np.transpose(combined_rot_matrix) derotated_points = inverse_combined_rot_matrix @ rotated_points fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_subplot(131, projection='3d') ax1.plot(*original_points, 'r') ax1.set_title('Original Spiral') ax1.set_xlim([-20, 20]) ax1.set_ylim([-20, 20]) ax1.set_zlim([0, 40]) ax2 = fig.add_subplot(132, projection='3d') ax2.plot(*rotated_points, 'b') ax2.set_title('Rotated Spiral') ax2.set_xlim([-20, 20]) ax2.set_ylim([-20, 20]) ax2.set_zlim([0, 40]) ax3 = fig.add_subplot(133, projection='3d') ax3.plot(*derotated_points, 'g') ax3.set_title('Derotated Spiral') ax3.set_xlim([-20, 20]) ax3.set_ylim([-20, 20]) ax3.set_zlim([0, 40]) plt.show() which gives you | 2 | 1 |
78,314,966 | 2024-4-12 | https://stackoverflow.com/questions/78314966/drag-image-to-another-image | I write this code to move image to another place but when I moving the image each point create new box so I get like this view: Image box duplicate each moving step. But I cant find the solution for this. import sys from PyQt5.QtCore import QRect from PyQt5.QtGui import QPixmap, QPainter, QPen, QColor from PyQt5.QtWidgets import * from test import Ui_MainWindow from PyQt5.QtCore import Qt, QRect, QPoint class MainWindow(QMainWindow): def __init__(self,parent = None): QMainWindow.__init__(self) self.ui = Ui_MainWindow() self.ui.setupUi(self) self.boxes = [] # KlonlanmΔ±Ε box'larΔ±n listesi self.original_box = QPixmap("box.png").scaled(100, 150) # Orjinal box resmi self.original_palette = QPixmap("palette.png") # print( self.img.width(),self.img.height()) self.ui.image.setPixmap(self.original_box) self.ui.palet.setPixmap(self.original_palette) self.dragging = False # SΓΌrΓΌkleme durumu self.offset = None # SΓΌrΓΌkleme konumu self.ui.image.setFixedSize(self.original_box.width(), self.original_box.height()) self.ui.palet.setFixedSize(self.original_palette.width(), self.original_palette.height()) # self.ui.image.setScaledContents(True) self.ui.image.setContentsMargins(0, 0, 0, 0) self.ui.image.setStyleSheet("QLabel { padding: 0px; }") self.ui.palet.setContentsMargins(0, 0, 0, 0) self.ui.palet.setStyleSheet("QLabel { padding: 0px; }") print("After") self.ui.widget_2.setAcceptDrops(True) # BΔ±rakma iΕlemini kabul et self.last_offset = None def paintEvent(self, event): self.painterInstance = QPainter(self.original_palette) # set rectangle color and thickness self.penRectangle = QPen(Qt.red) self.penRectangle.setWidth(3) # draw rectangle on painter self.painterInstance.setPen(self.penRectangle) # SΓΌrΓΌklenen box'Δ± Γ§iz if self.dragging : # if self.last_offset != None: # self.painterInstance.setCompositionMode(QPainter.CompositionMode_Source) # self.painterInstance.fillRect(QRect(self.offset, self.original_box.size()), Qt.transparent) # self.painterInstance.fillRect(QRect(self.last_offset, self.original_box.size()), Qt.transparent) # self.painterInstance.eraseRect(QRect(int(self.last_offset.x() - self.original_box.width() / 2), # int(self.last_offset.y() - self.original_box.height() / 2), # self.original_box.width(), self.original_box.height())) # # self.painterInstance.setCompositionMode(QPainter.CompositionMode_Source) # self.painterInstance.fillRect(QRect(self.last_offset, self.original_box.size()), Qt.transparent) # print(self.dragging) # painter.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), # int(self.offset.y() - self.original_box.height() / 2), # self.original_box) # self.painterInstance.drawRect(500, 360, 50, 50) self.painterInstance.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box) self.last_offset = self.offset # set pixmap onto the label widget self.ui.palet.setPixmap(self.original_palette) # self.ui.image.setPixmap(self.original_box) # self.relase = False # self.dragging = False # KlonlanmΔ±Ε box'larΔ± Γ§iz # for box in self.boxes: # self.painterInstance.drawPixmap(box.topLeft(), self.original_box) def mousePressEvent(self, event): if event.button() == Qt.LeftButton: print(event.pos()) # print("MAUSE GLOBAL POS",self.ui.image.mapFromGlobal(event.globalPos())) # print(self.ui.image.rect().contains(self.ui.image.mapFromGlobal(event.globalPos()))) print(self.ui.widget_2.rect()) 
if(self.ui.image.rect().contains(self.ui.image.mapFromGlobal(event.globalPos()))): self.dragging = True self.offset = event.pos() print("Offset point", self.offset) def mouseMoveEvent(self, event): if self.dragging: # print("DRAGGING") # SΓΌrΓΌklenen box'Δ±n merkezini gΓΌncelle self.offset = event.pos() self.update() def mouseReleaseEvent(self, event): if event.button() == Qt.LeftButton and self.dragging: self.dragging = False print("RELEASE") self.relase = True # Yeni box'Δ± oluΕtur new_box = QRect(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box.width(), self.original_box.height()) # self.ui.palet.setPixmap(self.original_palette) # self.ui.palet.show() self.boxes.append(new_box) self.update() if __name__ == "__main__": app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec_()) I try this in paint method: def paintEvent(self, event): self.painterInstance = QPainter(self.original_palette) # set rectangle color and thickness self.penRectangle = QPen(Qt.red) self.penRectangle.setWidth(3) # draw rectangle on painter self.painterInstance.setPen(self.penRectangle) # SΓΌrΓΌklenen box'Δ± Γ§iz if self.dragging : if self.last_offset != None: self.painterInstance.eraseRect(QRect(int(self.last_offset.x() - self.original_box.width() / 2), int(self.last_offset.y() - self.original_box.height() / 2), self.original_box.width(), self.original_box.height())) # # self.painterInstance.setCompositionMode(QPainter.CompositionMode_Source) # self.painterInstance.fillRect(QRect(self.last_offset, self.original_box.size()), Qt.transparent) # print(self.dragging) # painter.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), # int(self.offset.y() - self.original_box.height() / 2), # self.original_box) # self.painterInstance.drawRect(500, 360, 50, 50) self.painterInstance.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box) self.last_offset = self.offset # set pixmap onto the label widget I get this result, erase the background. How can I solve this problem. I almost close the solution but I cant do the draw related place. EDIT - 2: By the way if I use the region as a mainwindow like this this code move the object with It creates, destroys and re-creates the object at each step. 
But I can't do this on a layer, actually I should be able to do it.: import sys from PyQt5.QtWidgets import QApplication, QWidget from PyQt5.QtGui import QPainter, QColor, QPixmap from PyQt5.QtCore import Qt, QRect, QPoint class Rectangle(QWidget): def __init__(self): super().__init__() self.setWindowTitle('Klonlanabilir Boxlar') self.setGeometry(100, 100, 600, 400) self.boxes = [] # KlonlanmΔ±Ε box'larΔ±n listesi self.original_box = QPixmap("box.png").scaled(300, 400) # Orjinal box resmi self.dragging = False # SΓΌrΓΌkleme durumu self.offset = None # SΓΌrΓΌkleme konumu def paintEvent(self, event): painter = QPainter(self) # Orjinal box'Δ± Γ§iz painter.drawPixmap(100, 100, self.original_box) # SΓΌrΓΌklenen box'Δ± Γ§iz if self.dragging: painter.drawPixmap(self.offset.x() - self.original_box.width() / 2, self.offset.y() - self.original_box.height() / 2, self.original_box) # KlonlanmΔ±Ε box'larΔ± Γ§iz for box in self.boxes: painter.drawPixmap(box.topLeft(), self.original_box) def mousePressEvent(self, event): if event.button() == Qt.LeftButton: if QRect(100, 150, self.original_box.width(), self.original_box.height()).contains(event.pos()): self.dragging = True self.offset = event.pos() print("Offset point",self.offset) def mouseMoveEvent(self, event): if self.dragging: # SΓΌrΓΌklenen box'Δ±n merkezini gΓΌncelle self.offset = event.pos() self.update() def mouseReleaseEvent(self, event): if event.button() == Qt.LeftButton and self.dragging: self.dragging = False # Yeni box'Δ± oluΕtur new_box = QRect(self.offset.x() - self.original_box.width() / 2, self.offset.y() - self.original_box.height() / 2, self.original_box.width(), self.original_box.height()) self.boxes.append(new_box) self.update() if __name__ == '__main__': app = QApplication(sys.argv) window = Rectangle() window.show() sys.exit(app.exec_()) | I solved my problem with this code: import sys from PyQt5.QtCore import QRect from PyQt5.QtGui import QPixmap, QPainter, QPen, QColor from PyQt5.QtWidgets import * from test import Ui_MainWindow from PyQt5.QtCore import Qt, QRect, QPoint class MainWindow(QMainWindow): def __init__(self,parent = None): QMainWindow.__init__(self) self.ui = Ui_MainWindow() self.ui.setupUi(self) self.boxes = [] # KlonlanmΔ±Ε box'larΔ±n listesi self.original_box = QPixmap("box.png").scaled(100, 150) # Orjinal box resmi self.original_palette = QPixmap("palette.png") # print( self.img.width(),self.img.height()) self.ui.image.setPixmap(self.original_box) self.ui.palet.setPixmap(self.original_palette) self.dragging = False # SΓΌrΓΌkleme durumu self.offset = None # SΓΌrΓΌkleme konumu self.ui.image.setFixedSize(self.original_box.width(), self.original_box.height()) self.ui.palet.setFixedSize(self.original_palette.width(), self.original_palette.height()) # self.ui.image.setScaledContents(True) self.ui.image.setContentsMargins(0, 0, 0, 0) self.ui.image.setStyleSheet("QLabel { padding: 0px; }") # self.ui.palet.setContentsMargins(0, 0, 0, 0) # self.ui.palet.setStyleSheet("QLabel { padding: 0px; }") print("After") self.ui.widget_2.setAcceptDrops(True) # BΔ±rakma iΕlemini kabul et self.firstMove = False self.last_offset = None def paintEvent(self, event): self.ui.palet.setPixmap(self.original_palette) painter = QPainter(self.ui.palet.pixmap()) if self.dragging : # self.ui.palet.setPixmap(self.original_palette) painter.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box) for box in self.boxes: painter.drawPixmap(box.topLeft(), self.original_box) 
def mousePressEvent(self, event): if event.button() == Qt.LeftButton: print(event.pos()) # print("MAUSE GLOBAL POS",self.ui.image.mapFromGlobal(event.globalPos())) # print(self.ui.image.rect().contains(self.ui.image.mapFromGlobal(event.globalPos()))) if(self.ui.image.rect().contains(self.ui.image.mapFromGlobal(event.globalPos()))): self.dragging = True self.offset = event.pos() self.firstMove = True print("Offset point", self.offset) def mouseMoveEvent(self, event): if self.dragging: # print("DRAGGING") print(self.ui.image.mapFromGlobal(event.globalPos())) # SΓΌrΓΌklenen box'Δ±n merkezini gΓΌncelle self.offset = event.pos() self.update() def mouseReleaseEvent(self, event): if event.button() == Qt.LeftButton and self.dragging: self.dragging = False print("RELEASE") self.relase = True # Yeni box'Δ± oluΕtur new_box = QRect(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box.width(), self.original_box.height()) # self.ui.palet.setPixmap(self.original_palette) # self.ui.palet.show() self.boxes.append(new_box) self.update() self.boxes[-1].moveCenter(self.offset) if __name__ == "__main__": app = QApplication(sys.argv) window = MainWindow() window.show() sys.exit(app.exec_()) This part solved my problem: def paintEvent(self, event): self.ui.palet.setPixmap(self.original_palette) painter = QPainter(self.ui.palet.pixmap()) if self.dragging : # self.ui.palet.setPixmap(self.original_palette) painter.drawPixmap(int(self.offset.x() - self.original_box.width() / 2), int(self.offset.y() - self.original_box.height() / 2), self.original_box) for box in self.boxes: painter.drawPixmap(box.topLeft(), self.original_box) I can drag and drop into this palet image: But I have little problem here, when I click the box it detects but this create on here: I think I have problem with event.pos() how can I solve this ? | 3 | 0 |
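About the leftover offset problem at the end of that answer: the mouse events deliver positions in the main window's coordinate system, while paintEvent draws onto the palet label's pixmap, which has its own origin. A hedged sketch of translating the cursor position into the label's coordinates (widget names follow the code above):

```python
def mouseMoveEvent(self, event):
    if self.dragging:
        # translate from global screen coordinates into the palet label's
        # coordinate system before using the point for drawing
        self.offset = self.ui.palet.mapFromGlobal(event.globalPos())
        self.update()
```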
78,324,913 | 2024-4-14 | https://stackoverflow.com/questions/78324913/problem-when-running-terminal-command-pip-install-anonympy | I'm on a MacBook Pro w/ M1 Pro Chip running macOS Ventura 13.0.1 and have Python 3.9.6 installed. When trying to run the following command(s): pip install anonympy pip3 install anonympy I get the following output: Collecting anonympy Using cached anonympy-0.3.7.tar.gz (5.8 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> [1 lines of output] error in anonympy setup command: 'python_requires' must be a string containing valid version specifiers; Invalid specifier: '>=3.6*' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I tried this on Colab and in a virtual environment running the same Python version locally, getting the same build wheel error both times. I have also tried running the following command to no avail. pip(3) install --upgrade setuptools Is there a problem with the package, or am I missing something? | It seems the PyPI release of the package ships setup metadata that recent setuptools rejects ('>=3.6*' is not a valid python_requires specifier). You can install it directly with pip+git: pip install git+https://github.com/ArtLabss/open-data-anonymizer.git Related GitHub issue: https://github.com/ArtLabss/open-data-anonymizer/issues/26 | 2 | 0 |
78,324,285 | 2024-4-14 | https://stackoverflow.com/questions/78324285/how-can-i-find-the-first-row-that-meets-conditions-of-a-mask-for-each-group | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': ['x', 'x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'y', 'y', 'y'], 'b': [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 2, 2], 'c': [9, 8, 11, 13, 14, 3, 104, 106, 11, 100, 70, 7] } ) Expected output: Creating column out: a b c out 0 x 1 9 NaN 1 x 1 8 NaN 2 x 1 11 NaN 3 x 2 13 found 4 x 2 14 NaN 5 y 1 3 NaN 6 y 1 104 found 7 y 1 106 NaN 8 y 2 11 NaN 9 y 2 100 NaN 10 y 2 70 NaN 11 y 2 7 NaN The mask is: mask = (df.c > 10) The process: Grouping is by column a: a) For each group, finding the first row that meets the conditions of the mask. b) For group x this condition only applies when b == 2. That is why row 3 is selected. And this is my attempt. It is getting close but it feels like this is not the way: def func(g): mask = (g.c > 10) g.loc[mask.cumsum().eq(1) & mask, 'out'] = 'found' return g df = df.groupby('a').apply(func) | One option with groupby.idxmax: mask = (df['c'] > 10) & (df['a'].ne('x') | df['b'].eq(2)) idx = mask.groupby(df['a']).idxmax() df.loc[idx[mask.loc[idx].values], 'out'] = 'found' Another with groupby.transform: mask = (df['c'] > 10) & (df['a'].ne('x') | df['b'].eq(2)) df.loc[mask & mask.groupby(df['a']) .transform(lambda m: (~m).shift(fill_value=True) .cummin()), 'out'] = 'found' Output, with an extra group z that has no match: a b c out 0 x 1 9 NaN 1 x 1 8 NaN 2 x 1 11 NaN 3 x 2 13 found 4 x 2 14 NaN 5 y 1 3 NaN 6 y 1 104 found 7 y 1 106 NaN 8 y 2 11 NaN 9 y 2 100 NaN 10 y 2 70 NaN 11 y 2 7 NaN 12 z 3 1 NaN 13 z 3 1 NaN last match to get the last match (instead of the first one), just inver the mask: Example: mask = (df['c'] > 10) & (df['a'].ne('x') | df['b'].eq(2)) mask = mask[::-1] idx = mask.groupby(df['a']).idxmax() df.loc[idx[mask.loc[idx].values], 'out'] = 'found' a b c out 0 x 1 9 NaN 1 x 1 8 NaN 2 x 1 11 NaN 3 x 2 13 NaN 4 x 2 14 found 5 y 1 3 NaN 6 y 1 104 NaN 7 y 1 106 NaN 8 y 2 11 NaN 9 y 2 100 NaN 10 y 2 70 found 11 y 2 7 NaN 12 z 3 1 NaN 13 z 3 1 NaN | 2 | 3 |
78,324,061 | 2024-4-14 | https://stackoverflow.com/questions/78324061/str-replace-method-for-pandas-series-does-not-work-as-expected | I have encountered this issue in one particular stage of my project. Replicate with the following: import pandas as pd # Recreated sample data data = { "FailCodes": ['4301,4090,5003(1)'], } df = pd.DataFrame(data) # Want to replace the '(1)' with 'p1q' print(df.FailCodes.str.replace('(1)','p1q'),'\n') # Not giving the expected result # compare to the string object method from standard Python, which gives the desired result print('4301,4090,5003(1)'.replace('(1)','p1q'),'\n') # Can get the wanted result with the following longer code, but would like an explanation of why the first approach gives the unexpected result. print(df.FailCodes.str.replace('(','p').str.replace(')','q' )) Unexpected result ends like: 430p1q,4090,5003(p1q) Expected result: 4301,4090,5003p1q | You are probably using an older version of Pandas, where the default argument is regex=True (so (1) is treated as a regex group and only the 1 is matched). Pass regex=False to .str.replace: print(df.FailCodes.str.replace("(1)", "p1q", regex=False)) Prints: 0 4301,4090,5003p1q Name: FailCodes, dtype: object | 2 | 1 |
78,322,637 | 2024-4-14 | https://stackoverflow.com/questions/78322637/langchain-how-to-view-the-context-my-retriever-used-when-invoke | I am trying to make a private llm with RAG capabilities. I successfully followed a few tutorials and made one. But I wish to view the context the MultiVectorRetriever retriever used when langchain invokes my query. This is my code: from langchain_core.output_parsers import StrOutputParser from langchain_core.prompts import ChatPromptTemplate from langchain.retrievers.multi_vector import MultiVectorRetriever from langchain.storage import InMemoryStore from langchain_community.chat_models import ChatOllama from langchain_community.embeddings import GPT4AllEmbeddings from langchain_community.vectorstores import Chroma from langchain_core.documents import Document from langchain_core.runnables import RunnablePassthrough from PIL import Image import io import os import uuid import json import base64 def convert_bytes_to_base64(image_bytes): encoded_string= base64.b64encode(image_bytes).decode("utf-8") return "data:image/jpeg;base64," + encoded_string #Load Retriever path="./vectorstore/pdf_test_file.pdf" #Load from JSON files texts = json.load(open(os.path.join(path, "json", "texts.json"))) text_summaries = json.load(open(os.path.join(path, "json", "text_summaries.json"))) tables = json.load(open(os.path.join(path, "json", "tables.json"))) table_summaries = json.load(open(os.path.join(path, "json", "table_summaries.json"))) img_summaries = json.load(open(os.path.join(path, "json", "img_summaries.json"))) #Load from figures images_base64_list = [] for image in (os.listdir(os.path.join(path, "figures"))): img = Image.open(os.path.join(path, "figures",image)) buffered = io.BytesIO() img.save(buffered,format="png") image_base64 = convert_bytes_to_base64(buffered.getvalue()) #Warning: this section of the code does not support external IDEs like spyder and will break. 
Run it loccally in the native terminal images_base64_list.append(image_base64) #Add to vectorstore # The vectorstore to use to index the child chunks vectorstore = Chroma( collection_name="summaries", embedding_function=GPT4AllEmbeddings() ) # The storage layer for the parent documents store = InMemoryStore() # <- Can we extend this to images id_key = "doc_id" # The retriever (empty to start) retriever = MultiVectorRetriever( vectorstore=vectorstore, docstore=store, id_key=id_key, ) # Add texts doc_ids = [str(uuid.uuid4()) for _ in texts] summary_texts = [ Document(page_content=s, metadata={id_key: doc_ids[i]}) for i, s in enumerate(text_summaries) ] retriever.vectorstore.add_documents(summary_texts) retriever.docstore.mset(list(zip(doc_ids, texts))) # Add tables table_ids = [str(uuid.uuid4()) for _ in tables] summary_tables = [ Document(page_content=s, metadata={id_key: table_ids[i]}) for i, s in enumerate(table_summaries) ] retriever.vectorstore.add_documents(summary_tables) retriever.docstore.mset(list(zip(table_ids, tables))) # Add images img_ids = [str(uuid.uuid4()) for _ in img_summaries] summary_img = [ Document(page_content=s, metadata={id_key: img_ids[i]}) for i, s in enumerate(img_summaries) ] retriever.vectorstore.add_documents(summary_img) retriever.docstore.mset( list(zip(img_ids, img_summaries)) ) # Store the image summary as the raw document img_summaries_ids_and_images_base64=[] count=0 for img in images_base64_list: new_summary = [img_ids[count],img] img_summaries_ids_and_images_base64.append(new_summary) count+=1 # Check Response # Question Example: "What is the issues plagueing the acres?" """ Testing Retrival print("\nTesting Retrival: \n") prompt = "Images / figures with playful and creative examples" responce = retriever.get_relevant_documents(prompt)[0] print(responce) """ """ retriever.vectorstore.similarity_search("What is the issues plagueing the acres? show any relevant tables",k=10) """ # Prompt template template = """Answer the question based only on the following context, which can include text, tables and images/figures: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) # Multi-modal LLM # model = LLaVA model = ChatOllama(model="custom-mistral") # RAG pipeline chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser() ) print("\n\n\nTesting Responce: \n") print(chain.invoke( "What is the issues plagueing the acres? show any relevant tables" )) The output will look something like this: Testing Responce: In the provided text, the main issue with acres is related to wildfires and their impact on various lands and properties. The text discusses the number of fires, acreage burned, and the level of destruction caused by wildfires in the United States from 2018 to 2022. It also highlights that most wildfires are human-caused (89% of the average number of wildfires from 2018 to 2022) and that fires caused by lightning tend to be slightly larger and burn more acreage than those caused by humans. 
Here's the table provided in the text, which shows the number of fires and acres burned on federal lands (by different organizations), other non-federal lands, and total: | Year | Number of Fires (thousands) | Acres Burned (millions) | |------|-----------------------------|--------------------------| | 2018 | 58.1 | 8.8 | | 2019 | 58.1 | 4.7 | | 2020 | 58.1 | 10.1 | | 2021 | 58.1 | 10.1 | | 2022 | 58.1 | 3.6 | The table also breaks down the acreage burned by federal lands (DOI and FS) and other non-federal lands, as well as showing the total acreage burned each year.<|im_end|> From the RAG pipline i wish to print out the the context used from the retriever which stores tons of vector embeddings. i wish to know which ones it uses for the query. something like : chain.invoke("What is the issues plagueing the acres? show any relevant tables").get_context_used() i know there are functions like retriever.get_relevant_documents(prompt) and retriever.vectorstore.similarity_search(prompt) which provides the most relevant context to the query but I'm unsure whether the invoke function pulls the same context with the other 2 functions. the Retriver Im using from Langchain is the MultiVectorRetriever | You can tap into langchains with a RunnableLambda and print the state passed from the retriever to the prompt from langchain_core.runnables import RunnableLambda def inspect(state): """Print the state passed between Runnables in a langchain and pass it on""" print(state) return state # RAG pipeline chain = ( {"context": retriever, "question": RunnablePassthrough()} | RunnableLambda(inspect) # Add the inspector here to print the intermediate results | prompt | model | StrOutputParser() ) | 2 | 4 |
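If the retrieved context should come back as part of the result rather than only being printed, the chain can be restructured so the documents are carried through to the output (a sketch following the "returning sources" pattern from the LangChain docs; prompt, model and retriever are the objects defined above):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

rag_chain_from_docs = prompt | model | StrOutputParser()

rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)

result = rag_chain_with_source.invoke(
    "What is the issues plagueing the acres? show any relevant tables"
)
print(result["context"])  # the documents the retriever actually used
print(result["answer"])   # the model's response
```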
78,302,031 | 2024-4-10 | https://stackoverflow.com/questions/78302031/stable-diffusion-attributeerror-module-jax-random-has-no-attribute-keyarray | When I run the stable diffusion on colab https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb with no modification, it fails on the line from diffusers import StableDiffusionPipeline The error log is AttributeError: module 'jax.random' has no attribute 'KeyArray' How can I fix this or any clue ? The import should work, the ipynb should run with no error. | jax.random.KeyArray was deprecated in JAX v0.4.16 and removed in JAX v0.4.24. Given this, it sounds like the HuggingFace stable diffusion code only works JAX v0.4.23 or earlier. You can install JAX v0.4.23 with GPU support like this: pip install "jax[cuda12_pip]==0.4.23" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html or, if you prefer targeting a local CUDA installation, like this: pip install "jax[cuda12_local]==0.4.23" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html For more information on GPU installation, see JAX Installation: NVIDIA GPU. From the colab tutorial, update the second segment into: !pip install "jax[cuda12_local]==0.4.23" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html !pip install diffusers==0.11.1 !pip install transformers scipy ftfy accelerate | 7 | 10 |
78,321,025 | 2024-4-13 | https://stackoverflow.com/questions/78321025/why-is-this-trivial-numba-function-with-a-list-input-so-slow | import numba from typing import List @numba.njit def test(a: List[int]) -> int: return 1 test([i for i in range(2_000_000)]) takes 2s and scales linearly with the size of the list. Wrapping the input argument with numba.typed.List takes even longer. (all the time is spent on the numba.typed.List call. The timings don't get better if the function is called multiple times (while only being defined once), i.e., this is not a matter of compilation time. Is there a way to tell numba to just use the list as is? In my actual application, the raw data comes from an external library that cannot return numpy arrays or numba lists directly, only Python lists. I'm using numba 0.59.1 and Python 3.12 on a 4 core Ubuntu22 laptop with 16GB RAM. | Numba only operates on typed variables. It needs not only to check the types of all the items but also convert the whole list into a typed list. This implicit conversion can be particularly expensive since CPython lists, a.k.a. reflected lists contain pointers on allocated objects and each objects is reference counted. Typed lists of Numba are homogeneous and they do not contain references but directly the value in it. This is far more efficient and similar to a Numpy array with additional features like resizing. Is there a way to tell numba to just use the list as is? AFAIK, no. Reflected lists are not supported any-more in Numba code. Operating on reflected lists is not only inefficient, but also break the type system. The best option is to create directly a typed list. Here is an example: import numba as nb # Quite fast part (less than 0.1 seconds) reflected_lst = [i for i in range(2_000_000)] # Slow part (3 seconds) typed_lst = nb.typed.typedlist.List(reflected_lst) # Very fast part (less than 2 Β΅s) since `lst` is already a typed-list test(typed_lst) As mentioned by @roganjosh, note the list generation is included in your benchmark, but this only takes a tiny fraction of the execution time (<5% on my machine). Note the conversion process is particularly expensive in Numba (as opposed to Numpy, see the below comments). There is an opened issue on this topic. To quote the developers: [...] we came to the conclusion that there is probably room for improvement. [...] this problem is all down to implementation details. As of now, the issue is still opened and there is a patch to improve a bit the performance of the conversion, but it is still rather slow with it. | 3 | 3 |
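If the external library can only return plain Python lists, another workaround (a sketch, not from the answer above) is to do one bulk conversion to a NumPy array and pass that to the jitted function; Numba accepts arrays directly, and np.asarray is typically much cheaper than building a numba.typed.List element by element:

```python
import numba
import numpy as np

@numba.njit
def test(a) -> int:
    return 1

raw = [i for i in range(2_000_000)]      # what the external library hands back
arr = np.asarray(raw, dtype=np.int64)    # single bulk conversion
test(arr)                                # no per-element typed-list conversion
```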
78,320,762 | 2024-4-13 | https://stackoverflow.com/questions/78320762/how-to-destructure-nested-structs-in-polars-python-api | I am unfortunately having to work with some nested data in a polars dataframe. (I know it is bad practice) Consider data: data = { "positions": [ { "company": { "companyName": "name1" }, }, { "company": { "companyName": "name2" }, }, { "company": { "companyName": "name3" }, } ] } positions is a column in the dataframe. I have explored the polars python api docs but cannot figure out how to extract out just the companyName fields into a separate list column. I want to achieve the same that this comprehension does: names = ( [ p["company"]["companyName"] for p in data["positions"] if p.get("company") and p.get("company").get("companyName") ] if data.get("positions") else None ) Note the null checks. I get a sense that I have to use the pl.list.eval function along with pl.element but I am a bit foggy on the api. Before: shape: (3, 1) βββββββββββββββ β positions β β --- β β struct[1] β βββββββββββββββ‘ β {{"name1"}} β β {{"name2"}} β β {{"name3"}} β βββββββββββββββ After: shape: (3, 1) βββββββββ β names β β --- β β str β βββββββββ‘ β name1 β β name2 β β name3 β βββββββββ | Structs You can use .struct.field() or .struct[] syntax to extract struct fields. https://docs.pola.rs/user-guide/expressions/structs/#extracting-individual-values-of-a-struct df = pl.DataFrame(data) df.with_columns( pl.col("positions").struct["company"].struct["companyName"] ) shape: (3, 2) βββββββββββββββ¬ββββββββββββββ β positions β companyName β β --- β --- β β struct[1] β str β βββββββββββββββͺββββββββββββββ‘ β {{"name1"}} β name1 β β {{"name2"}} β name2 β β {{"name3"}} β name3 β βββββββββββββββ΄ββββββββββββββ Alternatively, you can work at the frame-level and .unnest() the structs into columns. df.unnest("positions").unnest("company") shape: (3, 1) βββββββββββββββ β companyName β β --- β β str β βββββββββββββββ‘ β name1 β β name2 β β name3 β βββββββββββββββ List of structs If working with a list of structs you could use the .list.eval() API: df = pl.DataFrame([data]) df.with_columns( pl.col("positions").list.eval( pl.element().struct["company"].struct["companyName"] ) ) shape: (1, 1) βββββββββββββββββββββββββββββββ β positions β β --- β β list[str] β βββββββββββββββββββββββββββββββ‘ β ["name1", "name2", "name3"] β βββββββββββββββββββββββββββββββ Or at the frame-level using .explode() and .unnest() df.explode("positions").unnest("positions").unnest("company") shape: (3, 1) βββββββββββββββ β companyName β β --- β β str β βββββββββββββββ‘ β name1 β β name2 β β name3 β βββββββββββββββ | 4 | 4 |
78,319,421 | 2024-4-13 | https://stackoverflow.com/questions/78319421/min-from-columns-from-dict | I have a dict with item\column name and a df with columns from dict and other columns. How can I add column to df with min value for every item just from columns corresponding from dict? import pandas as pd my_dict={'Item1':['Col1','Col3'], 'Item2':['Col2','Col4'] } df=pd.DataFrame({ 'Col0':['Item1','Item2'], 'Col1':[20,25], 'Col2':[89,15], 'Col3':[26,30], 'Col4':[40,108], 'Col5':[55,2] }) df['min']=? I tried df['min']=df[df.columns[df.columns.isin(my_dict)]].min(axis=1), but it didn't work. | You can use apply with a function that reads the appropriate column names out of the dictionary (returning an empty list if there is no match) and then takes the minimum of the specified columns: my_dict = { 'Item1': ['Col1', 'Col3'], 'Item2': ['Col2', 'Col4'] } df['min'] = df.apply(lambda r:r[my_dict.get(r['Col0'], [])].min(), axis=1) Output: Col0 Col1 Col2 Col3 Col4 Col5 min 0 Item1 20 89 26 40 55 20 1 Item2 25 15 30 108 2 15 If it's possible my_dict may contain column names that don't exist in the dataframe, you can check for that in the function. For example: my_dict = { 'Item1': ['Col1', 'Col3'], 'Item2': ['Col4', 'Col6'] } df['min'] = df.apply( lambda r:r[[col for col in my_dict.get(r['Col0'], []) if col in r]].min(), axis=1 ) Output: Col0 Col1 Col2 Col3 Col4 Col5 min 0 Item1 20 89 26 40 55 20 1 Item2 25 15 30 108 2 108 You can even get the column names if you want: my_dict = { 'Item1': ['Col1', 'Col3'], 'Item2': ['Col2', 'Col4'] } df[['min', 'name']] = df.apply( lambda r:min((r[col], col) for col in my_dict.get(r['Col0'], []) if col in r), axis=1, result_type='expand' ) Output: Col0 Col1 Col2 Col3 Col4 Col5 min name 0 Item1 20 89 26 40 55 20 Col1 1 Item2 25 15 30 108 2 15 Col2 | 2 | 4 |
78,313,586 | 2024-4-12 | https://stackoverflow.com/questions/78313586/python-library-optionally-support-numpy-types-without-depending-on-numpy | Context We develop a Python library that contains a function expecting a numlike parameter. We specify this in our signature and make use of python type hints: def cool(value: float | int | List[float | int]) π³ Problem & Goal During runtime, we noticed it's fine to pass in numpy number types as well, e.g. np.float16(1.2345). So we thought: why not incorporate "numpy number types" into our signature as this would be beneficial for the community that will use our library. However, we don't want numpy as dependency in our project. We'd like to only signify in the method signature that we can take a float, int, a list of them OR any "numpy number type". If the user hasn't installed numpy on their system, they should still be able to use our library and just ignore that they could possibly pass in a "numpy number type" as well. We don't want to depend on numpy as we don't use it in our library (except for allowing their types in our method signature). So why include it in our dependency graph? There's no reason to do so. One dependency less is better. Additional requirements/context We search for an answer that is compatible with all Python versions >=3.8. (The answer should work with setuptools>=69.0.) The answer should be such that we get proper IntelliSense (Ctrl + Space) when typing cool( in an IDE, e.g. VSCode. This is what our pyproject.toml looks like. Efforts We've noticed the option [project.optional-dependencies] for the pyproject.toml file, see here. However, it remains unclear how this optional dependencies declaration helps us in providing optional numpy datatypes in our method signatures. numpy provides the numpy.typing type annotations. Is it somehow possible to only depend on this subpackage? We did search on search engines and found this SO question, however our question is more specific with regards to how we can only use types from another module. We also found this SO question, but despite having "optional" in its title, it's not about optional numpy types. | Defer evaluation of annotations, and only import numpy conditionally. from __future__ import annotations import typing as t if t.TYPE_CHECKING: import numpy as np def cool(value: int | np.floating | etc ...): ... Now the numpy dependency is only necessary when type-checking. See PEP 563 β Postponed Evaluation of Annotations Side-questions.. numpy provides the numpy.typing type annotations. Is it somehow possible to only depend on this subpackage? No, this is not possible. We've noticed the option [project.optional-dependencies] for the pyproject.toml file ... It doesn't really help you much here. It could still be useful if you wanted "extra" dependencies which the user can opt-in for, e.g.: pip install mypkg # install with required dependencies pip install mypkg[typing] # install with extra dependencies such as numpy Then you could use this to easily install the package along with the soft-deps in your CI, for example. | 7 | 2 |
78,318,586 | 2024-4-12 | https://stackoverflow.com/questions/78318586/propagating-true-entries-along-axis-in-an-array | I have to perform the operation below many times. Using numpy functions instead of loops I usually get a very good performance but I have not been able to replicate this for higher dimensional arrays. Any suggestion or alternative would be most welcome: I have a boolean array and I would like to propagate the true indeces to the next 2 positions for example: If this 1 dimensional array (A) is: import numpy as np # Number of positions to propagate the array propagation = 2 # Original array A = np.array([False, True, False, False, False, True, False, False, False, False, False, True, False]) I can create an "empty" array and then find the indices, propagate them, and then flatten argwhere and then flatten it: B = np.zeros(A.shape, dtype=bool) # Compute the indeces of the True values and make the two next to it True as well idcs_true = np.argwhere(A) + np.arange(propagation + 1) idcs_true = idcs_true.flatten() idcs_true = idcs_true[idcs_true < A.size] # in case the error propagation gives a great B[idcs_true] = True # Array print(f'Original array A = {A}') print(f'New array (2 true) B = {B}') which gives: Original array A = [False True False False False True False False False False False True False] New array (2 true) B = [False True True True False True True True False False False True True] However, this becomes much more complex and fails if for example: AA = np.array([[False, True, False, False, False, True, False, False, False, False, False, True, False], [False, True, False, False, False, True, False, False, False, False, False, True, False]]) Thanks for any advice. | I just leave here numba version so you can compare the speed against the proposed numpy solution: import numba import numpy as np @numba.njit(parallel=True) def propagate_true_numba(arr, n=2): out = np.zeros_like(arr, dtype="uint8") for i in numba.prange(arr.shape[0]): prop = 0 for j in range(arr.shape[1]): if arr[i, j] == 1: prop = n out[i, j] = 1 elif prop: prop -= 1 out[i, j] = 1 return out Benchmark: import numba import numpy as np import perfplot @numba.njit(parallel=True) def propagate_true_numba(arr, n=2): out = np.zeros_like(arr, dtype="uint8") for i in numba.prange(arr.shape[0]): prop = 0 for j in range(arr.shape[1]): if arr[i, j] == 1: prop = n out[i, j] = 1 elif prop: prop -= 1 out[i, j] = 1 return out def _prop_func(A, propagation): B = np.zeros(A.shape, dtype=bool) # Compute the indices of the True values and make the two next to it True as well idcs_true = np.argwhere(A) + np.arange(propagation + 1) idcs_true = idcs_true.flatten() idcs_true = idcs_true[ idcs_true < A.size ] # in case the error propagation gives a great B[idcs_true] = True return B def propagate_true_numpy(arr, n=2): return np.apply_along_axis(_prop_func, 1, arr, n) AA = np.array([[False, True, False, False, False, True, False, False, False, False, False, True, False], [False, True, False, False, False, True, False, False, False, False, False, True, False]]) x = propagate_true_numba(AA, 2) y = propagate_true_numpy(AA, 2) assert np.allclose(x, y) np.random.seed(0) perfplot.show( setup=lambda n: np.random.randint(0, 2, size=(n, n), dtype="uint8"), kernels=[ lambda arr: propagate_true_numpy(arr, 2), lambda arr: propagate_true_numba(arr, 2), ], labels=["numpy", "numba"], n_range=[10, 25, 50, 100, 250, 500, 1000, 2500, 5000], xlabel="N * N", logx=True, logy=True, equality_check=np.allclose, ) Creates on my AMD 5700x this graph: | 2 | 1 |
78,317,383 | 2024-4-12 | https://stackoverflow.com/questions/78317383/np-unique-after-np-round-unrounds-the-data | This code snippet describes a problem I have been having. For some reason rounded_data seems to be rounded, but once passed in np.unique and np.column_stack the result_array seems to be unrounded, meanwhile the rounded_data is still rounded. rounded_data = data_with_target_label.round(decimals=2) unique_values, counts = np.unique(rounded_data, return_counts=True) result_array = np.column_stack((unique_values, counts)) print(rounded_data) print(result_array) Result: 443392 0.01 443393 0.00 443394 0.00 443395 0.00 443396 0.11 ... 452237 0.04 452238 0.00 452239 0.00 452240 0.00 452241 0.00 Name: values, Length: 8850, dtype: float32 [[0.00000000e+00 4.80000000e+01] [9.99999978e-03 2.10000000e+01] [1.99999996e-02 1.10000000e+01] ... [3.29000015e+01 1.00000000e+00] [3.94099998e+01 1.00000000e+00] | this is because your dataframe is in float32 while default number format in numpy is float64. So the number that is rounded in float32 won't be visibly rounded in float64, because number representation is a bit different. Solution is to convert either input array to float64 or the result_array into float 32. Solution 1 Converting numpy array to float32: rounded_data = data_with_target_label.round(decimals=2) unique_values, counts = np.unique(rounded_data, return_counts=True) result_array = np.column_stack((unique_values, counts)) result_array = np.float32(result_array) Solution 2 Converting input data. For example input is pd.DataFrame (or pd.Series): df = pd.DataFrame({'vals': np.array([0.013242, 3.94099998, 9.99999978, 0.03234, 0.05532, 33.22, 33.44, 55.66])}, dtype = 'float32') rounded_data = df['vals'].astype('float64').round(decimals=2) unique_values, counts = np.unique(rounded_data, return_counts=True) result_array = np.column_stack((unique_values, counts)) | 2 | 1 |
78,316,919 | 2024-4-12 | https://stackoverflow.com/questions/78316919/polars-replace-parts-of-dataframe-with-other-parts-of-dataframe | I'm looking for an efficient way to copy / replace parts of a dataframe with other parts of the same dataframe in Polars. For instance, in the following minimal example dataframe pl.DataFrame({ "year": [2020,2021,2020,2021], "district_id": [1,2,1,2], "distribution_id": [1, 1, 2, 2], "var_1": [1,2,0.1,0.3], "var_N": [1,2,0.3,0.5], "unrelated_var": [0.2,0.5,0.3,0.7], }) I'd like to replace all column values of "var_1" & "var_N" where the "distribution_id" = 2 with the corresponding values where the "distribution_id" = 1. This is the desired result: pl.DataFrame({ "year": [2020,2021,2020,2021], "district_id": [1,2,1,2], "distribution_id": [1, 1, 2, 2], "var_1": [1,2,1,2], "var_N": [1,2,1,2], "unrelated_var": [0.2,0.5,0.3,0.7], }) I tried to use a "when" expression, but it fails with "polars.exceptions.ShapeError: shapes of self, mask and other are not suitable for zip_with operation" df = df.with_columns([ pl.when(pl.col("distribution_id") == 2).then(df.filter(pl.col("distribution_id") == 1).otherwise(pl.col(col)).alias(col) for col in columns_to_copy ] ) Here's what I used to do with SQLAlchemy: table_alias = table.alias("table_alias") stmt = table.update().\ where(table.c.year == table_alias.c.year).\ where(table.c.d_id == table_alias.c.d_id).\ where(table_alias.c.distribution_id == 1).\ where(table.c.distribution_id == 2).\ values(var_1=table_alias.c.var_1, var_n=table_alias.c.var_n) Thanks a lot for you help! | You could filter the 1 columns, change their id to 2 and discard the unneeded columns. df.filter(distribution_id = 1).select( "year", "district_id", "^var_.+$", distribution_id = pl.lit(2, pl.Int64) ) shape: (2, 5) ββββββββ¬ββββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββββββββ β year β district_id β var_1 β var_N β distribution_id β β --- β --- β --- β --- β --- β β i64 β i64 β f64 β f64 β i64 β ββββββββͺββββββββββββββͺββββββββͺββββββββͺββββββββββββββββββ‘ β 2020 β 1 β 1.0 β 1.0 β 2 β β 2021 β 2 β 2.0 β 2.0 β 2 β ββββββββ΄ββββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββββββββ (note: "^var_.+$" selects columns by regex, but selectors can be used if preferred.) With the data "aligned", you can pass it to .update() df.update( df.filter(distribution_id = 1) .select("year", "district_id", "^var_.+$", distribution_id = pl.lit(2, pl.Int64)), on=["year", "district_id", "distribution_id"] ) shape: (4, 6) ββββββββ¬ββββββββββββββ¬ββββββββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββββββββββ β year β district_id β distribution_id β var_1 β var_N β unrelated_var β β --- β --- β --- β --- β --- β --- β β i64 β i64 β i64 β f64 β f64 β f64 β ββββββββͺββββββββββββββͺββββββββββββββββββͺββββββββͺββββββββͺββββββββββββββββ‘ β 2020 β 1 β 1 β 1.0 β 1.0 β 0.2 β β 2021 β 2 β 1 β 2.0 β 2.0 β 0.5 β β 2020 β 1 β 2 β 1.0 β 1.0 β 0.3 β β 2021 β 2 β 2 β 2.0 β 2.0 β 0.7 β ββββββββ΄ββββββββββββββ΄ββββββββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββββββββββ | 3 | 4 |
78,316,845 | 2024-4-12 | https://stackoverflow.com/questions/78316845/counting-number-of-unique-values-in-groups | I have data where for multiple years, observations i are categorized in cat. An observation i can be in multiple categories in any year, but is unique across years. I am trying to count unique values for i by year, by cat, and by year and cat. I'm learning Python (v3.12) & Pandas (v2.2.1). I can make this work, but only by creating separate tables for the counts, and merging them back in with the main data. See the example below. I suspect there is a better way to do this. Is there, and, if so, how? import pandas as pd df = pd.DataFrame( {'year': [2020,2020,2020,2021,2021,2022,2023,2023,2023,2023], 'cat': [1,1,2,2,3,3,1,2,3,4], 'i': ['a','a','b','c','d','e','f','f','g','g'] }) df df_cat = df.groupby('cat')['i'].nunique() df_year = df.groupby('year')['i'].nunique() df_catyear = df.groupby(['cat', 'year'])['i'].nunique() df_merged = df.merge(df_cat, how='left', on='cat').rename(columns={'i_x': 'i', 'i_y': 'n_by_cat'}) df_merged = df_merged.merge(df_year, how='left', on='year').rename(columns={'i_x': 'i', 'i_y': 'n_by_year'}) df_merged = df_merged.merge(df_catyear, how='left', on=['cat', 'year']).rename(columns={'i_x': 'i', 'i_y': 'n_by_catyear'}) | You could use a simple loop and groupby.transform: groups = ['cat', 'year', ['cat', 'year']] for g in groups: df[f"n_by_{''.join(g)}"] = df.groupby(g)['i'].transform('nunique') Output: year cat i n_by_cat n_by_year n_by_catyear 0 2020 1 a 2 2 1 1 2020 1 a 2 2 1 2 2020 2 b 3 2 1 3 2021 2 c 3 2 1 4 2021 3 d 3 2 1 5 2022 3 e 3 1 1 6 2023 1 f 2 2 1 7 2023 2 f 3 2 1 8 2023 3 g 3 2 1 9 2023 4 g 1 2 1 | 2 | 2 |
78,316,200 | 2024-4-12 | https://stackoverflow.com/questions/78316200/how-to-connect-mariadb-running-inside-docker-compose-with-python-script-running | I wrote a docker compose file and used docker compose up -d command. Then I wrote a simple python script to connect with maria db but I get error everytime. mariadb version in my virtual environment is 1.0.11 pip install mariadb==1.0.11 version: '3.8' services: mariadb: image: mariadb:latest container_name: my_mariadb restart: always environment: MYSQL_ROOT_PASSWORD: myrpass MYSQL_DATABASE: db1 MYSQL_USER: user MYSQL_PASSWORD: mypass volumes: - mariadb-data:/var/lib/mysql ports: - "3307:3307" volumes: mariadb-data: {} My python code: import mariadb host = "mariadb" port = 3307 user = "user" password = "mypass" database = "db1" try: conn = mariadb.connect( host=host, port=port, user=user, password=password, database=database ) cursor = conn.cursor() cursor.execute("SELECT VERSION()") version = cursor.fetchone()[0] print(f"Connected to MariaDB server version: {version}") except mariadb.Error as e: print(f"Error connecting to database: {e}") else: print('Connection found attempting to close it now') if conn: conn.cursor().close() conn.close() finally: print('Code execution complete') Error when I use mariadb as host: Error connecting to database: Unknown MySQL server host 'mariadb' (-3) Code execution complete Error when I use localhost as host: Error connecting to database: Access denied for user 'user'@'localhost' (using password: YES) Code execution complete | version: '3.8' services: mysql: image: mysql:latest container_name: db_mysql restart: always environment: MYSQL_ROOT_PASSWORD: r_pass MYSQL_AUTHENTICATION_PLUGIN: 'mysql_native_password' # ... other environment variables volumes: - mysql-data:/var/lib/mysql ports: - "3306:3306" mariadb: image: mariadb:latest container_name: db_mariadb restart: always environment: MYSQL_ROOT_PASSWORD: r_pass MYSQL_AUTHENTICATION_PLUGIN: 'mysql_native_password' # ... other environment variables volumes: - mariadb-data:/var/lib/mysql ports: - "3307:3306" volumes: mysql-data: {} mariadb-data: {} Here is my docker-compose.yml file Create docker compose images docker compose up -d Stop docker compose docker compose down -v Enter inside docker containers for mysql docker exec -it db_mysql mysql -u root -p mariadb docker exec -it db_mariadb mariadb -u root -p password for mysql and mariadb r_pass Prerequisites if you are a ubuntu user sudo apt-get install libmariadb-dev pip3 install mariadb==1.0.11 Python code # Module Imports import mariadb import sys # Connect to MariaDB Platform try: conn = mariadb.connect( user="root", password="r_pass", host="<docker_ip>", # docker inspect db_mariadb port=3306, database="test" ) except mariadb.Error as e: print(f"Error connecting to MariaDB Platform: {e}") sys.exit(1) # Get Cursor cur = conn.cursor() | 4 | 1 |
78,312,866 | 2024-4-11 | https://stackoverflow.com/questions/78312866/remove-all-whitespaces-from-the-headers-of-a-polars-dataframe | I'm reading some csv files where the column headers are pretty annoying: they contain whitespaces, tabs, etc. A B C D E CD E 300 0 0 0 CD E 1071 0 0 0 K E 390 0 0 0 I want to read the file, then remove all whitespaces and/or tabs from the column names. Currently I do import polars as pl file_df = pl.read_csv(csv_file, comment_prefix='#', separator='\t') file_df = file_df.rename(lambda column_name: column_name.strip()) Is this the "polaric" way to do it? I'm not a big fan of lambdas, but if the only other solution is to write a function just for this, I guess I'll stick to lambdas. | The solution is to use a function as you have shown. However, in the case of .strip() without arguments it can be simplified slightly. Another way to write the strip is by using str.strip() >>> " A ".strip() # 'A' >>> str.strip(" A ") # 'A' str.strip and the lambda in this case do the same thing: one = lambda column: column.strip() two = str.strip >>> one(" A ") # 'A' >>> two(" A ") # 'A' df.rename() runs a function at the Python level, meaning we can pass str.strip directly. import polars as pl csv = b""" A \t B \t C 1\t2\t3 4\t5\t6 """ df = pl.read_csv(csv, separator="\t") >>> df.columns # [' A ', ' B ', ' C'] >>> df.rename(str.strip).columns # ['A', 'B', 'C'] >>> df.rename(str.lower).columns # [' a ', ' b ', ' c'] It's only useful if you're calling functions without additional arguments. For anything more complex, you'll need to use a lambda (or def). | 2 | 3 |
78,314,829 | 2024-4-12 | https://stackoverflow.com/questions/78314829/how-to-effectively-use-put-and-delete-http-methods-in-django-class-based-views | I'm setting up a CRUD system with Django, using Class-Based Views. Currently I'm trying to figure out how to handle HTTP PUT and DELETE requests in my application. Despite searching the Django documentation extensively, I'm having trouble finding concrete examples and clear explanations of how to submit these types of queries to a class-based view. I created a view class named CategoryView, extending from: django.views.View, in which I implemented the get and post methods successfully. And I want to build my urls like this: New Category: 127.0.0.1:8000/backendapp/categories/create List all Category: 127.0.0.1:8000/backendapp/categories/ Retrieve only one Category: 127.0.0.1:8000/backendapp/categories/1 Etc... However, when I try to implement the put and delete methods, I get stuck. For example : from django.views import View class CategoryView(View): template_name = 'backendapp/pages/category/categories.html' def get(self, request): categories = Category.objects.all() context = { 'categories': categories } return render(request, self.template_name, context) def post(self, request): return def delete(self, request, pk): return def put(self, request): return I read through the Django documentation and found that Class-Based Views support HTTP requests: ["get", "post", "put", "patch", "delete", "head", "options", "trace"]. link: https://docs.djangoproject.com/en/5.0/ref/class-based-views/base/#django.views.generic.base.View Despite this, I can't figure out how to do it. So I'm asking for your help to unblock me. I looked at the Django documentation and searched online for examples and tutorials on handling HTTP requests in class-based views. I also tried experimenting with adding the put and delete methods to my CategoryView view class, but without success. I expected to find resources that clearly explain how to integrate these queries into my Django application, as well as practical examples demonstrating their use. However, I haven't found a working solution and am now seeking help from the community to overcome this difficulty. | Beware that HTML does not support PUT, PATCH, DELETE "out of the box". You can use AJAX, but <form method="delete"> does not work, simply because the browser does not make a DELETE request. So you will need AJAX to make the request. Another problem is the routing: your view sits behind categories/, so that is where you then would make the PUT, PATCH, etc. requests. You will thus need to define multiple views, that each handle part of it, like: from django.urls import path urlpatterns = [ path('categories/', MyListlikeCategoryView.as_view()), path('categories/<int:pk>/', MyObjectlikeCategoryView.as_view()), path('categories/create/', MyCreateCategoryView.as_view()), ] and then in these separate views, implement the applicable methods: class MyListlikeCategoryView(View): # list of categories def get(self, request): # … pass class MyObjectlikeCategoryView(View): # put new object def put(self, request, pk): # … pass # update object def patch(self, request, pk): # … pass # delete object def delete(self, request, pk): # … pass class MyCreateCategoryView(View): # create object def post(self, request): # … pass But regardless, Django is not designed to make (CRUD) APIs. What you can use to make such APIs is a ModelViewSet (or ViewSet) from the Django REST framework [drf-doc]. This also provides serializers that normally make the mapping between the request data and a model object, and from a model object to response data easier, and works with a router that thus then can route the items properly. We thus can implement that then with: class MyCategoryViewSet(viewsets.ViewSet): # GET /categories/ def list(self, request): # … pass # POST /categories/ def create(self, request): # … pass # GET /categories/1/ def retrieve(self, request, pk=None): # … pass # PUT /categories/1/ def update(self, request, pk=None): # … pass # PATCH /categories/1/ def partial_update(self, request, pk=None): # … pass # DELETE /categories/1/ def destroy(self, request, pk=None): # … pass The paths are not determined by the ViewSet though, the comment at the top of each method is just how a SimpleRouter [drf-doc] would do this. You thus register the ViewSet with: from rest_framework import routers router = routers.SimpleRouter() router.register('categories', MyCategoryViewSet) urlpatterns = router.urls | 3 | 2 |
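For completeness, a minimal sketch of the ModelViewSet route that the answer above recommends could look like the following; the Category model and its "name" field are assumptions of mine, not taken from the original post:

```python
# Hedged sketch: assumes a Category model with a "name" field exists in models.py
from rest_framework import routers, serializers, viewsets

from .models import Category  # assumed model and app layout


class CategorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Category
        fields = ["id", "name"]


class CategoryViewSet(viewsets.ModelViewSet):
    # ModelViewSet supplies list/create/retrieve/update/partial_update/destroy for us
    queryset = Category.objects.all()
    serializer_class = CategorySerializer


router = routers.SimpleRouter()
router.register("categories", CategoryViewSet)
urlpatterns = router.urls
```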
78,314,168 | 2024-4-12 | https://stackoverflow.com/questions/78314168/how-to-filter-dataframe-column-names-containing-2-specified-substrings | I need the column names from the dataframe that contain both the term software and packages. I'm able to filter out columns containing one string.. for eg: software_cols = df.filter(regex='Software|software|SOFTWARE').columns How do I achieve the same by mentioning 'Packages/packages/PACKAGES' as well. Eligible column names should be like 'Local Software Packages', 'Software XYZ Packages', 'Software Package' | Keep things simple as you don't need a regex here, just use two boolean masks and a case independent comparison: # does the column name contain "software"? m1 = df.columns.str.contains('software', case=False) # does it contain "package"? m2 = df.columns.str.contains('package', case=False) # if both conditions are met, keep the column out = df.loc[:, m1&m2] Example input: df = pd.DataFrame(columns=['Local Software Packages', 'Software XYZ Packages', 'Software Package', 'Other', 'Software only'], index=[0]) Output: Local Software Packages Software XYZ Packages Software Package 0 NaN NaN NaN If you just want the names: df.columns[m1&m2] # Index(['Local Software Packages', 'Software XYZ Packages', 'Software Package'], dtype='object') | 3 | 5 |
78,313,700 | 2024-4-12 | https://stackoverflow.com/questions/78313700/create-a-dataframe-from-numpy-array-and-parameters | Running Elastic Net simulations by varying a couple parameters and looking to save output coefficients to a dataframe for potential review later. Ultimately looking to save off a dataframe with two parameter identifier columns (ie, 'alpha', 'l1_ratio') and a number of other columns for the resulting coefficients for the model fit with those parameters. 'alpha' is a float (varying from .1 to 1000 in increments) and 'l1_ratio' is a float from 0 to 1. 'coefs' is a numpy array that I'd like to expand into individual columns for each coefficient value (total number will stay fixed, say 5 for this simple case). For instance: alpha = .1 l1_ratio = .5 coefs = array([-1.30, -0.45, .04, .65, -0.88]) would result in a row record in the final dataframe of: alpha l1_ratio c1 c2 c3 c4 c5 0 .1 .5 -1.30 -0.45 .04 .65 -0.88 I'll ultimately loop over and place additional rows for each scenario. Would also prefer not to label coefficient columns manually as there can be dozens depending on the situation--leaving column header empty is fine. How would I do this? | I imagine you will generate all data points iteratively. However, DataFrames don't like to be grown this way. Performance of adding new rows repeatedly is terrible. Assuming logic the function that will produce the new data points, I would use a dictionary to collect them, and create the DataFrame once in the end: data = {} for alpha, l1_ratio, coeffs in logic(): data[(alpha, l1_ratio)] = coeffs df = (pd.DataFrame(data).T .rename(columns=lambda x: f'c{x+1}') .rename_axis(['alpha', 'l1_ratio']) .reset_index() # optional ) Variant to generate the dataframe: df = (pd.DataFrame(data.keys(), columns=['alpha', 'l1_ratio']) .join(pd.DataFrame(data.values()) .rename(columns=lambda x: f'c{x+1}')) ) Example output: alpha l1_ratio c1 c2 c3 c4 c5 0 0.1 0.5 -1.3 -0.45 0.04 0.65 -0.88 1 0.2 0.5 -1.3 -0.45 0.04 0.65 -0.88 | 2 | 1 |
78,312,849 | 2024-4-11 | https://stackoverflow.com/questions/78312849/how-to-enumerate-pandigital-prime-sets | Project Euler problem 118 reads, "Using all of the digits 1 through 9 and concatenating them freely to form decimal integers, different sets can be formed. Interestingly with the set {2,5,47,89,631} all of the elements belonging to it are prime. How many distinct sets containing each of the digits one through nine exactly once contain only prime elements." This is the only problem I've encountered so far that doesn't provide an answer to an easier version of the question. As a result, it is difficult for me to check the algorithm I've made. Do you see what could be the problem? I coded up a pretty straightforward dynamic programming approach. For that, I will answer the question, "How many distinct sets containing each of the digits in S exactly once contain only prime elements", for different values of S, and then combine their results to answer the original question. First, I generate a list of all the relevant primes. This is every prime with 9 or fewer digits, that has no duplicate digits, and that doesn't have a 0 as one of its digits. I then use this list to create a set count dictionary. The keys to this dictionary are frozensets that represent which digits each prime has, and the values are integers that count the number of relevant primes that reduce to that set: from itertools import chain, combinations from functools import cache from collections import defaultdict from pyprimesieve import primes primes = [ p for p in primes(10**9) if len(set(str(p))) == len(str(p)) and "0" not in str(p) ] set_counts = defaultdict(int) for p in primes: set_counts[frozenset(map(int, str(p)))] += 1 set_counts will then serve as the base case for our recursion. Next, we need a way to break down our problem into relevant subproblems. So I wrote a function which will generate all of the ways to split a set into two disjoint nonempty subsets: @cache def set_decomps(s): """ Generates all possible ways to partition a set s into two non-empty subsets """ l = list( map( lambda x: (frozenset(x), s - frozenset(x)), chain.from_iterable(combinations(s, r) for r in range(1, len(s))), ) ) return l[: len(l) // 2] Finally, we should be able to just use a simple memoized approach to use these together to solve the problem, and all of its subvariants. @cache def dp(s): """ Returns the number of ways to create a set containing prime numbers Such that the set of all the digits in the primes is equal to s With no duplicates or extra digits """ base = set_counts[s] if len(s) > 1: for a, b in set_decomps(s): base += dp(a) * dp(b) return base print(dp(frozenset(range(1,10)))) # prints 114566, which is wrong Obviously there is a flaw in my approach. | The flaw is this. Consider the set {2,5,47,89,631}. Different initial splits can lead to it. One is {1,2,3,6}, {4,5,7,8,9} and another is {1,3,6,8,9} {2,4,5,7}. There are many more. And therefore you are overcounting. To leave you the fun of the problem, I won't tell you how to fix this overcounting. I'll just tell you that if you are counting multiple ways to get to a particular set, then your number will be too high. | 2 | 4 |
78,312,648 | 2024-4-11 | https://stackoverflow.com/questions/78312648/python-inheritance-with-dataclasses-dataclass-and-annotations | I am very confused by the following code: import dataclasses @dataclasses.dataclass() class Base(): x: int = 100 @dataclasses.dataclass() class Derived(Base): x: int = 200 @dataclasses.dataclass() class DerivedRaw(Base): x = 300 base = Base() derived = Derived() derived_raw = DerivedRaw() print(base.x) print(derived.x) print(derived_raw.x) What it prints is: 100 200 100 I don't understand why the last number isn't 300. Why do the annotations matter? This seems to be an interaction with @dataclasses.dataclass(), as the code: class Base(): x: int = 100 class DerivedRaw(Base): x = 300 derived_raw = DerivedRaw() print(derived_raw.x) Does print 300. | From the dataclass documentation: The @dataclass decorator examines the class to find fields. A field is defined as a class variable that has a type annotation. ... So to have proper dataclass field must have type annotation. | 2 | 2 |
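A small check, consistent with the answer above, makes the mechanism visible: without an annotation, x = 300 is never registered as a dataclass field, so the generated __init__ still uses the inherited default.

```python
import dataclasses

@dataclasses.dataclass()
class Base:
    x: int = 100

@dataclasses.dataclass()
class DerivedRaw(Base):
    x = 300  # no annotation -> not a dataclass field, just a plain class attribute

# Only the inherited field with default 100 is visible to the dataclass machinery:
print([(f.name, f.default) for f in dataclasses.fields(DerivedRaw)])  # [('x', 100)]
print(DerivedRaw().x)  # 100 -- the generated __init__ assigns the inherited default
```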
78,311,305 | 2024-4-11 | https://stackoverflow.com/questions/78311305/how-can-i-get-the-program-to-print-each-line-on-a-new-paragraph | I wrote this shopping program myself and I'm trying to get it to print to the shell and to the text file with each shopping list item on a new line. Please can you help me with this? #By Simeon Beckford-Tongs BSc MSc Copyright © 2024. All rights reserved. items = input('What is the first item on your shopping list:\n') items += input('What is the second item on your shopping list:\n') items += input('What is the third item on your shopping list:\n') items += input('What is the fourth item on your shopping list:\n') items += input('What is the fith item on your shopping list:\n') items += input('What is the sixth item on your shopping list:\n') items += input('What is the seventh item on your shopping list:\n') items += input('What is the eighth item on your shopping list:\n') items += input('What is the ninth item on your shopping list:\n') items += input('What is the tenth item on your shopping list:\n') shopping_list = 'shopping list:' file = open('shopping_list.txt', 'w') file.write (items) file.close() file = open('shopping_list.txt', 'r') for line in file: print( line, end = '\n') file.close() I tried adding a \n character(s) to lines 3-12 and line 26. | So, you are appending each item to the items variable, but you're not adding any newline characters (\n) between them, which results in them being concatenated together into one long string. items = "" items += input('What is the first item on your shopping list:\n') + '\n' items += input('What is the second item on your shopping list:\n') + '\n' items += input('What is the third item on your shopping list:\n') + '\n' items += input('What is the fourth item on your shopping list:\n') + '\n' items += input('What is the fifth item on your shopping list:\n') + '\n' items += input('What is the sixth item on your shopping list:\n') + '\n' items += input('What is the seventh item on your shopping list:\n') + '\n' items += input('What is the eighth item on your shopping list:\n') + '\n' items += input('What is the ninth item on your shopping list:\n') + '\n' items += input('What is the tenth item on your shopping list:\n') + '\n' shopping_list = 'shopping list:' file = open('shopping_list.txt', 'w') file.write(items) file.close() file = open('shopping_list.txt', 'r') for line in file: print(line, end='\n') file.close() This should do what you asked for. | 2 | 1 |
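A more compact variant of the same idea (my own sketch, not part of the accepted answer) builds the identical file with a loop instead of ten separate input lines:

```python
ordinals = ["first", "second", "third", "fourth", "fifth",
            "sixth", "seventh", "eighth", "ninth", "tenth"]

# Each item gets its own line because '\n' is appended after every input.
items = "".join(
    input(f"What is the {word} item on your shopping list:\n") + "\n"
    for word in ordinals
)

with open("shopping_list.txt", "w") as file:
    file.write(items)

with open("shopping_list.txt") as file:
    for line in file:
        print(line, end="")  # lines already end with '\n'
```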
78,307,173 | 2024-4-10 | https://stackoverflow.com/questions/78307173/dealing-with-duplicates-cols-on-duckdb-with-gaps-nulls-and-filling-them-effici | I'm new to duckdb (v0.10.1) so this question comes from my lack of knowledge of the built-in functionality that duckdb has. I have a special use case that I haven't found a clever way to do with duckdb on timeseries data. Sometimes there are some rare occurrences where we get duplicate values for a timestamp in a table that we need to dedup. The problem here is that, sometimes, duplicate rows have different gaps for different columns that, when combined together, fill out most/all of the gaps if we merge the rows together. Here is an example to illustrate the situation. Table with duplicate values for the timestamp column: timestamp (varchar) A (int64) B (int64) 2022-01-01 1 NULL 2022-01-01 NULL 2 2022-01-02 3 6 2022-01-02 4 NULL There are duplicate timestamps that we want to dedup, but some columns have a value and some don't in different rows that can be used to fill the gaps. So, the desired output for these cases should be the following: timestamp (varchar) A (int64) B (int64) 2022-01-01 1 2 2022-01-02 3 6 Using pandas this can be efficiently done using the following logic: # Fix filling issues for rows with the same timestamp df = df.set_index("timestamp", drop=False) df_first = df[~df.index.duplicated(keep="first")] df_last = df[~df.index.duplicated(keep="last")] df = df_first.fillna(df_last) In here we fetch into separate dataframes the first duplicate row and the last one and then try to fill the gaps of the first dataframe with the second one. Using duckdb I wasn't able to find a clever enough solution to do this besides aggregating every possible column in a pair with the timestamp and stitch them all together in a query like the following: # Construct the SQL query subqueries = [] for column in list(df.columns).remove('timestamp'): subquery = f""" ( SELECT timestamp, {column} FROM ( SELECT timestamp, {column}, ROW_NUMBER() OVER(PARTITION BY timestamp ORDER BY (CASE WHEN {column} IS NULL THEN 1 ELSE 0 END)) as rn FROM df ) sub WHERE rn = 1 ) AS {column} """ subqueries.append(subquery) #query = "SELECT " + ", ".join([f"{column}.timestamp, {column}.{column}" for column in column_names]) query = "SELECT " + ", ".join([f"{column_names[0]}.timestamp"] + [f"{column}.{column}" for column in column_names]) query += " FROM " + subqueries[0] for i in range(1, len(subqueries)): query += f" JOIN {subqueries[i]} ON {column_names[0]}.timestamp = {column_names[i]}.timestamp" The output query of this code is the following SELECT A.timestamp, A.A, B.B FROM ( SELECT timestamp, A FROM ( SELECT timestamp, A, ROW_NUMBER() OVER(PARTITION BY timestamp ORDER BY (CASE WHEN A IS NULL THEN 1 ELSE 0 END)) as rn FROM df ) sub WHERE rn = 1 ) A JOIN ( SELECT timestamp, B FROM ( SELECT timestamp, B, ROW_NUMBER() OVER(PARTITION BY timestamp ORDER BY (CASE WHEN B IS NULL THEN 1 ELSE 0 END)) as rn FROM df ) sub WHERE rn = 1 ) B ON A.timestamp = B.timestamp This does get the right result but it balloons in memory usage and takes way longer compared to the solution using pandas. Also, the queries get quite large when having dataframes with many columns. The current workaround I'm using in order to cope with this is to just drop to pandas when needed in order to run the dedup logic and then get back to duckdb. I'm wondering if anyone has a better way to do this that can be as performant as the pandas implementation. Cheers! I have tried using only sql to solve the problem on duckdb, because the built-in functions didn't seem to have a fit for what I was looking for. | It looks like you want the first non-NULL per timestamp group? any_value(): Returns the first non-null value from arg. This function is affected by ordering. >>> df shape: (4, 3) ┌────────────┬──────┬──────┐ │ timestamp ┆ A ┆ B │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞════════════╪══════╪══════╡ │ 2022-01-01 ┆ 1 ┆ null │ │ 2022-01-01 ┆ null ┆ 2 │ │ 2022-01-02 ┆ 3 ┆ 6 │ │ 2022-01-02 ┆ 4 ┆ null │ └────────────┴──────┴──────┘ columns(*) allows us to easily run a function on each column. duckdb.sql(""" from (from df select row_number() over () row_number, *) select any_value(columns(*) order by row_number) group by timestamp order by row_number """) ┌────────────┬────────────┬───────┬───────┐ │ row_number │ timestamp │ A │ B │ │ int64 │ varchar │ int64 │ int64 │ ├────────────┼────────────┼───────┼───────┤ │ 1 │ 2022-01-01 │ 1 │ 2 │ │ 3 │ 2022-01-02 │ 3 │ 6 │ └────────────┴────────────┴───────┴───────┘ | 2 | 0 |
78,309,756 | 2024-4-11 | https://stackoverflow.com/questions/78309756/mistral-model-generates-the-same-embeddings-for-different-input-texts | I am using pre-trained LLM to generate a representative embedding for an input text. But it is wired that the output embeddings are all the same regardless of different input texts. The codes: from transformers import pipeline, AutoTokenizer, AutoModel import numpy as np PRETRAIN_MODEL = 'mistralai/Mistral-7B-Instruct-v0.2' tokenizer = AutoTokenizer.from_pretrained(PRETRAIN_MODEL) model = AutoModel.from_pretrained(PRETRAIN_MODEL) def generate_embedding(document): inputs = tokenizer(document, return_tensors='pt') print("Tokenized inputs:", inputs) with torch.no_grad(): outputs = model(**inputs) embedding = outputs.last_hidden_state[0, 0, :].numpy() print("Generated embedding:", embedding) return embedding text1 = "this is a test" text2 = "this is another test" text3 = "there are other tests" embedding1 = generate_embedding(text1) embedding2 = generate_embedding(text2) embedding3 = generate_embedding(text3) are_equal = np.array_equal(embedding1, embedding2) and np.array_equal(embedding2, embedding3) if are_equal: print("The embeddings are the same.") else: print("The embeddings are not the same.") The printed tokens are different, but the printed embeddings are the same. The outputs: Tokenized inputs: {'input_ids': tensor([[ 1, 456, 349, 264, 1369]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])} Generated embedding: [-1.7762679 1.9293272 -2.2413437 ... 2.6379988 -3.104867 4.806004 ] Tokenized inputs: {'input_ids': tensor([[ 1, 456, 349, 1698, 1369]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])} Generated embedding: [-1.7762679 1.9293272 -2.2413437 ... 2.6379988 -3.104867 4.806004 ] Tokenized inputs: {'input_ids': tensor([[ 1, 736, 460, 799, 8079]]), 'attention_mask': tensor([[1, 1, 1, 1, 1]])} Generated embedding: [-1.7762679 1.9293272 -2.2413437 ... 2.6379988 -3.104867 4.806004 ] The embeddings are the same. Does anyone know where the problem is? Many thanks! | You're not slicing it the dimensions right at outputs.last_hidden_state[0, 0, :].numpy() Q: What is the 0th token in all inputs? A: Beginning of sentence token (BOS) Q: So that's the "embeddings" I'm slicing is the BOS token? A: Try this: from transformers import pipeline, AutoTokenizer, AutoModel import numpy as np PRETRAIN_MODEL = 'mistralai/Mistral-7B-Instruct-v0.2' tokenizer = AutoTokenizer.from_pretrained(PRETRAIN_MODEL) model = AutoModel.from_pretrained(PRETRAIN_MODEL) model(**tokenizer("", return_tensors='pt')).last_hidden_state [out]: tensor([[[-1.7763, 1.9293, -2.2413, ..., 2.6380, -3.1049, 4.8060]]], grad_fn=<MulBackward0>) Q: Then, how do I get the embeddings from a decoder-only model? A: Can you really get an "embedding" from a decoder-only model? The model outputs a hidden state per token it "regress" through, so different texts get different tensor output size. Q: How do you make it into a single fixed size vector then? A: Most probably, some sort of pooling function? an open research question (as of now Apr 2024) but there's work on tools like https://arxiv.org/abs/2404.05961 | 3 | 3 |
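As the answer above notes, some pooling step is needed to turn the per-token hidden states into a single fixed-size vector; mean pooling over the attention mask is one common heuristic (my assumption here, the answer does not prescribe a specific pooling). The sketch reuses the tokenizer and model objects defined in the question:

```python
import torch

def generate_embedding(document):
    inputs = tokenizer(document, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.last_hidden_state              # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)   # (1, seq_len, 1)
    # Average only over real (non-padding) tokens.
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return pooled[0].numpy()
```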
78,309,847 | 2024-4-11 | https://stackoverflow.com/questions/78309847/how-to-stop-otherwise-in-python-polars-when-using-a-when-expression | This is my dataframe: ┌─────────────────────┬──────────┐ │ date ┆ price │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪══════════╡ │ 2023-12-20 14:10:00 ┆ 2039.105 │ │ 2023-12-21 14:45:00 ┆ 2045.795 │ │ 2023-12-22 15:10:00 ┆ 2069.708 │ │ 2023-12-26 06:45:00 ┆ 2064.885 │ │ 2023-12-27 18:00:00 ┆ 2083.865 │ │ 2023-12-28 03:05:00 ┆ 2088.224 │ │ 2023-12-28 15:00:00 ┆ 2080.245 │ │ 2023-12-29 07:10:00 ┆ 2074.485 │ └─────────────────────┴──────────┘ My main issue was to find prices close to each other and group them together in Polars, but I didn't find any useful code. So, I decided to do it outside of polars. Now I have this problem with Polars that I want to categorize price based on a nested list I have separately. I am using the below code: for i ,group in enumerate(resistance_groups): highs = highs.with_columns( pl.when(pl.col('price').is_in(group)) .then(i+1) .otherwise(None) .alias('groups') ) Which resistance_groups is like this: [[2064.885, 2069.708, 2074.485], [2080.245, 2083.865, 2088.224]] And highs is the above dataframe. The result of above code in first loop is : ┌─────────────────────┬──────────┬────────┐ │ date ┆ price ┆ groups │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ i32 │ ╞═════════════════════╪══════════╪════════╡ │ 2023-12-20 14:10:00 ┆ 2039.105 ┆ null │ │ 2023-12-21 14:45:00 ┆ 2045.795 ┆ null │ │ 2023-12-22 15:10:00 ┆ 2069.708 ┆ 1 │ │ 2023-12-26 06:45:00 ┆ 2064.885 ┆ 1 │ │ 2023-12-27 18:00:00 ┆ 2083.865 ┆ null │ │ 2023-12-28 03:05:00 ┆ 2088.224 ┆ null │ │ 2023-12-28 15:00:00 ┆ 2080.245 ┆ null │ │ 2023-12-29 07:10:00 ┆ 2074.485 ┆ 1 │ └─────────────────────┴──────────┴────────┘ And in second loop, it is: ┌─────────────────────┬──────────┬────────┐ │ date ┆ price ┆ groups │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ i32 │ ╞═════════════════════╪══════════╪════════╡ │ 2023-12-20 14:10:00 ┆ 2039.105 ┆ null │ │ 2023-12-21 14:45:00 ┆ 2045.795 ┆ null │ │ 2023-12-22 15:10:00 ┆ 2069.708 ┆ null │ │ 2023-12-26 06:45:00 ┆ 2064.885 ┆ null │ │ 2023-12-27 18:00:00 ┆ 2083.865 ┆ 2 │ │ 2023-12-28 03:05:00 ┆ 2088.224 ┆ 2 │ │ 2023-12-28 15:00:00 ┆ 2080.245 ┆ 2 │ │ 2023-12-29 07:10:00 ┆ 2074.485 ┆ null │ └─────────────────────┴──────────┴────────┘ As you see the first loop results are removed from df. Can anyone suggest a way to either stop the .otherwise() or any other way to categorize the price column? I tried to use multiple when-then expressions too and it didn't work too, using another column wasn't that good either. And just in case: removing .otherwise( ) means setting the values to null. | Alternatively to @DeanMacGregor's solution, you could use pl.coalesce instead. This leverages that pl.when().then() evaluates to None if the when case is not True. ( df .with_columns( pl.coalesce( pl.when(pl.col("price").is_in(group)).then(index+1) for index, group in enumerate(resistance_groups) ) .alias("groups") ) ) shape: (8, 3) ┌─────────────────────┬──────────┬────────┐ │ date ┆ price ┆ groups │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ i32 │ ╞═════════════════════╪══════════╪════════╡ │ 2023-12-20 14:10:00 ┆ 2039.105 ┆ null │ │ 2023-12-21 14:45:00 ┆ 2045.795 ┆ null │ │ 2023-12-22 15:10:00 ┆ 2069.708 ┆ 1 │ │ 2023-12-26 06:45:00 ┆ 2064.885 ┆ 1 │ │ 2023-12-27 18:00:00 ┆ 2083.865 ┆ 2 │ │ 2023-12-28 03:05:00 ┆ 2088.224 ┆ 2 │ │ 2023-12-28 15:00:00 ┆ 2080.245 ┆ 2 │ │ 2023-12-29 07:10:00 ┆ 2074.485 ┆ 1 │ └─────────────────────┴──────────┴────────┘ | 3 | 3 |
78,309,010 | 2024-4-11 | https://stackoverflow.com/questions/78309010/remove-everything-in-string-after-the-first-occurrence-of-a-word | I have a dataframe with a column consisting of strings. I want to trim the strings in the column such that everything is removed after the first appearance of a given word. The words are in this list: words_to_trim_after = ['test', 'hello', 'very good'] So if I have a dataframe such as the following df = pd.DataFrame({'a':['test this is a test bla bla', 'hello bla bla this is a test', 'very good qwerty this is nice']}) I want to end up with df_end = pd.DataFrame({'a':['test', 'hello', 'very good']}) | You can use split def trim_after_first_word(s, words): for word in words: parts = s.split(word, maxsplit=1) if len(parts) > 1: return word return None | 2 | 2 |
78,307,146 | 2024-4-10 | https://stackoverflow.com/questions/78307146/selenium-in-python-unable-to-locate-element-error-404 | I am trying to use selenium in python to click the load more button to show all the reviews in a specific webpage, however, I am encountering some issues when finding the button element in selenium which always return an ** Error: 404 - No Such Element: Unable to locate element** import requests from urllib.parse import urljoin import time from bs4 import BeautifulSoup as bs from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_argument("start-maximized") options.add_experimental_option("detach", True) driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=options) url = 'https://www.walgreens.com/store/c/allegra-adult-24hr-tablet-180-mg,-allergy-relief/ID=300409806-product' driver.get(url) time.sleep(5) while True: try: # loadMoreBtn1 = driver.find_element(By.CSS_SELECTOR , 'button.bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1.jSgrbb.gSrzYA') # loadMoreBtn2 = driver.find_element(By.XPATH,'//*[@class="bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA"]') loadMoreBtn3 = driver.find_element(By.CLASS_NAME,'bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA') loadMoreBtn3.click() time.sleep(2) except: break I already tried finding the element by css, xpath and class name with no luck in making it work. I checked the webpage and it seems the class name is fix as well <div class="bv-rnr__sc-17t5kn5-0 gcGJRh"> <button aria-label="Load More , This action will load more reviews on screen" class="bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA">Load More</button> </div> I also tried the CTRL+F to test the css and xpath in the page source and it returns zero match as well. | I'd simplify the code and use their Ajax API to get more reviews: import requests url = "https://api.bazaarvoice.com/data/reviews.json" params = { "resource": "reviews", "action": "REVIEWS_N_STATS", "filter": [ "productid:eq:300409806", # <--- change this to product ID "contentlocale:eq:en,en_AU,en_CA,en_GB,en_US,en_US", "isratingsonly:eq:false", ], "filter_reviews": "contentlocale:eq:en,en_AU,en_CA,en_GB,en_US,en_US", "include": "authors,products,comments", "filteredstats": "reviews", "Stats": "Reviews", "limit": "30", "offset": "8", "limit_comments": "3", "sort": "submissiontime:desc", "passkey": "tpcm2y0z48bicyt0z3et5n2xf", "apiversion": "5.5", "displaycode": "2001-en_us", } for page in range(3): # <-- change this for required number of pages params["offset"] = 30 * page data = requests.get(url, params=params).json() for r in data["Results"]: print(r["UserNickname"], " -> ", r["ReviewText"]) print() Prints: ... AllergyMan2 -> [This review was collected as part of a promotion.] This was the first time I used Allegra and I'm thoroughly impressed with this product. I've used other allergy pills in the past and this one takes the cake, it's fast acting for all your allergy needs. I recommend if your like me and you have used them all and just want one that works go get some Allegra 24 hour and you'll be in great hands. pinkdiemond1 -> [This review was collected as part of a promotion.] I got the chance to try Allegra 24Hr 60ct Tablets thru Home Tester. The pills were easy to swallow and fast acting. I did not get sleep after a few hours of ingestion. Yes I would recommend Allegra 24Hr 60ct Tablets to my friends and family. Rachael7 -> [This review was collected as part of a promotion.] I have seasonal allergies and my first pick is always Allegra. I love that it works within an hour of using and I feel great all day. A plus that it's non drowsy. The only suggestion I'd like to make is: have the pill me smaller. Other than that, it's great to take for those annoying allergy symptoms. ... | 2 | 1 |
78,306,681 | 2024-4-10 | https://stackoverflow.com/questions/78306681/pyspark-repeat-value-until-change-in-column | I have a dataframe with this structure Order Number Line Number Item Type 12345 1 1001 Parent 12345 2 1002 Child 12345 3 1003 Child 12345 4 1004 Child 12345 5 1005 Parent 12345 6 1006 Child I would like to add a column which shows the "Parent Item" for each item. The parent item is the first parent type that each child follows. There are no relationships or links to use. Line Number dictates the children for each parent. Line Number Item Type Parent Item 1 1001 Parent 1001 2 1002 Child 1001 3 1003 Child 1001 4 1004 Child 1001 5 1005 Parent 1005 6 1006 Child 1005 The parent item number must repeat until a new parent is found. I have tried adding a LAG column to do checks but couldn't quite nail down the logic. I felt like I needed more than one column but couldn't do it. I've also tried a window function to "group" them together by row number, partitioning by order number and type but that doesn't work as it separates the parents from the children. | Try this: from pyspark.sql import functions as F from pyspark.sql.window import Window df = df.withColumn( "Parent_Item", F.last(F.when(F.col("Type") == "Parent", F.col("Item")), ignorenulls=True).over( Window.partitionBy("Order Number").orderBy("Line Number") ), ) df.show() Output: +------------+-----------+----+------+-----------+ |Order Number|Line Number|Item| Type|Parent_Item| +------------+-----------+----+------+-----------+ | 12345| 1|1001|Parent| 1001| | 12345| 2|1002| Child| 1001| | 12345| 3|1003| Child| 1001| | 12345| 4|1004| Child| 1001| | 12345| 5|1005|Parent| 1005| | 12345| 6|1006| Child| 1005| +------------+-----------+----+------+-----------+ | 2 | 2 |
78,304,744 | 2024-4-10 | https://stackoverflow.com/questions/78304744/how-to-detect-usb-on-raspberry-pi-and-access-it-using-python | I want to detect USB on Raspberry Pi and access USB to copy some data. I used pyudev and get some info from USB but I can't access it. What should I do? This is my code: import pyudev context = pyudev.Context() for device in context.list_devices(subsystem='block', DEVTYPE='disk'): for props in device.properties: if device.get("ID_BUS") == "usb": print(props, device.get(props)) The path of the USB is /media/pi/CCCOMA_X64FRE_EN-GB_DV9 but I don't find it in list properties, when I print them out. How can I fix it? Thanks. | The reason that you can't find the mount point of the usb drive is that device.properties doesn't include the mount point. You could however get the disk name (/dev/sdx) from device.properties and then use subprocess.check_output(['findmnt', '/dev/sdx1', '-no', 'TARGET']) you can find your mount point in the return of this function. | 2 | 2 |
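Combining the question's pyudev loop with the answer's findmnt suggestion, a minimal sketch could look like the following; the use of partition-level devices and an already-mounted drive are assumptions on my part:

```python
import subprocess
import pyudev

context = pyudev.Context()
for device in context.list_devices(subsystem="block", DEVTYPE="partition"):
    if device.get("ID_BUS") == "usb":
        try:
            # findmnt prints the mount target of the partition, e.g. /media/pi/...
            mount_point = subprocess.check_output(
                ["findmnt", device.device_node, "-no", "TARGET"], text=True
            ).strip()
        except subprocess.CalledProcessError:
            continue  # partition not mounted (yet)
        print(device.device_node, "->", mount_point)
```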
78,302,788 | 2024-4-10 | https://stackoverflow.com/questions/78302788/how-can-i-make-fastest-calculation-speed-for-given-condition-for-numpy-array | I made category as below. 1~4 : 0 5~9 : 1 10~15 : 2 I have a numpy array as below. np.array([2, 5, 10, 13, 7, 9]) How can I make fastest way to change above numpy array based on given conditioin as below? np.array([0, 1, 2, 2, 1, 1]) Because I think 'for loop' will consume lots of time. Is there any way to make fastest time calculation? | You can also use np.searchsorted as below: a = np.array([2, 5, 10, 13, 7, 9]) np.searchsorted([4, 9, 15], a) array([0, 1, 2, 2, 1, 1], dtype=int64) labels = np.array([23, 45, 87]) labels[np.searchsorted([4, 9, 15], a)] array([23, 45, 87, 87, 45, 45]) | 2 | 4 |
78,302,095 | 2024-4-10 | https://stackoverflow.com/questions/78302095/trying-to-concatenate-a-row-to-a-matrix-in-tensorflow | I have time series data in 4 channels and am trying to generate a sequence of length N using my model. I am determining N from the input data supplied to my sequence generation function: def generate_sequence(self, input_data): predicted_sequence = tf.convert_to_tensor(input_data, dtype=tf.float32) data_shape = predicted_sequence.shape for i in range(len(predicted_sequence)): model_input = tf.reshape(predicted_sequence, shape=data_shape) result = self.model(model_input) predicted_sequence = tf.concat([predicted_sequence[:, 1:, :], result], 0) return predicted_sequence This causes the following error: ConcatOp : Dimension 1 in both shapes must be equal: shape[0] = [1,1439,4] vs. shape[1] = [1,1,4] [Op:ConcatV2] name: concat This seems to suggest that I am using the wrong method to generate my sequence (I naively wrote this function assuming that tensorflow tensors would behave like numpy arrays). In my loop I start with my input data: [[[a1, b1, c1, d1] [a2, b2, c2, d2] ... [aN, bN, cN, dN]] and I generate a prediction using my model: [[aP1, bP1, cP1, dP1]] My intention at this point is to remove the first entry in the input data, as it is the oldest row of data, and add the predicted data to the end: [[[a2, b2, c2, b2] [a3, b3, c3, d3] ... [aN, bN, cN, dN] [aP1, bP1, cP1, dP1]]] From here the loop is run until the entire sequence contains predictions for the next N rows of data. Is there another tensorflow method that is better suited to this or am I missing something in the tf.concat method? Any help would be greatly appreciated. | Everything is good except the axis of concatenation. It should be axis=1. def generate_sequence(self, input_data): predicted_sequence = tf.convert_to_tensor(input_data, dtype=tf.float32) data_shape = predicted_sequence.shape for i in range(len(predicted_sequence)): model_input = tf.reshape(predicted_sequence, shape=data_shape) result = self.model(model_input) predicted_sequence = tf.concat([predicted_sequence[:, 1:, :], result], axis=1) return predicted_sequence The rule of thumb is "all the tensors should possess the same shape in all the axes except the concatenating axis" | 2 | 1 |
78,285,959 | 2024-4-6 | https://stackoverflow.com/questions/78285959/how-do-you-select-fields-from-all-structs-in-a-list-in-polars | I'm working with a deeply nested DataFrame (not good practice, I know), and I'd like to express something like "select field X for all structs in list Y". An example of the data structure: import polars as pl data = { "a": [ [{ "x": [1, 2, 3], "y": [4, 5, 6] }, { "x": [2, 3, 4], "y": [3, 4, 5] } ] ], } df = pl.DataFrame(data) In this case, I'd like to select field "x" in both of the structs, and gather them into a df with two series, call them"x_1" and "x_2". In other words, the desired output is: βββββββββββββ¬ββββββββββββ β x_1 β x_2 β β --- β --- β β list[i64] β list[i64] β βββββββββββββͺββββββββββββ‘ β [1, 2, 3] β [2, 3, 4] β βββββββββββββ΄ββββββββββββ I don't know the length of the list ahead of time, and I'd like to do this dynamically (i.e. without hard-coding the field names). I'm not sure whether this is possible using Polars expressions? Thanks in advance! | Update: Perhaps a simpler approach using .unstack() (df.select(pl.col("a").flatten().struct.field("x")) .unstack(1) ) shape: (1, 2) βββββββββββββ¬ββββββββββββ β x_0 β x_1 β β --- β --- β β list[i64] β list[i64] β βββββββββββββͺββββββββββββ‘ β [1, 2, 3] β [2, 3, 4] β βββββββββββββ΄ββββββββββββ Original answer: df.select( pl.col("a").list.eval(pl.element().struct["x"]) .list.to_struct("max_width", lambda idx: f"x_{idx + 1}") ).unnest("a") shape: (1, 2) βββββββββββββ¬ββββββββββββ β x_1 β x_2 β β --- β --- β β list[i64] β list[i64] β βββββββββββββͺββββββββββββ‘ β [1, 2, 3] β [2, 3, 4] β βββββββββββββ΄ββββββββββββ Explanation .list.eval() to loop through each list element, we extract each struct field. df.select( pl.col("a").list.eval(pl.element().struct["x"]) ) # shape: (1, 1) # ββββββββββββββββββββββββββ # β a β # β --- β # β list[list[i64]] β # ββββββββββββββββββββββββββ‘ # β [[1, 2, 3], [2, 3, 4]] β # ββββββββββββββββββββββββββ .list.to_struct() to convert to a struct which will allow us to turn each inner list into its own column. df.select( pl.col("a").list.eval(pl.element().struct["x"]) .list.to_struct("max_width", lambda idx: f"x_{idx + 1}") ) # shape: (1, 1) # βββββββββββββββββββββββββ # β a β # β --- β # β struct[2] β # βββββββββββββββββββββββββ‘ # β {[1, 2, 3],[2, 3, 4]} β # βββββββββββββββββββββββββ .unnest() the struct to create individual columns. | 4 | 5 |
78,279,136 | 2024-4-5 | https://stackoverflow.com/questions/78279136/importerror-cannot-import-name-triu-from-scipy-linalg-when-importing-gens | I am trying to use Gensim, but running import gensim raises this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/gensim/__init__.py", line 11, in <module> from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401 File "/usr/local/lib/python3.10/dist-packages/gensim/corpora/__init__.py", line 6, in <module> from .indexedcorpus import IndexedCorpus # noqa:F401 must appear before the other classes File "/usr/local/lib/python3.10/dist-packages/gensim/corpora/indexedcorpus.py", line 14, in <module> from gensim import interfaces, utils File "/usr/local/lib/python3.10/dist-packages/gensim/interfaces.py", line 19, in <module> from gensim import utils, matutils File "/usr/local/lib/python3.10/dist-packages/gensim/matutils.py", line 20, in <module> from scipy.linalg import get_blas_funcs, triu ImportError: cannot import name 'triu' from 'scipy.linalg' (/usr/local/lib/python3.10/dist-packages/scipy/linalg/__init__.py) Why is this happening and how can I fix it? | I found the issue. The scipy.linalg functions tri, triu & tril are deprecated and will be removed in SciPy 1.13. β SciPy 1.11.0 Release Notes Β§ Deprecated features So, I installed SciPy v1.10.1 instead of the latest version and it was working well. pip install scipy==1.10.1 | 45 | 65 |
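If downgrading SciPy is not an option (for example because another dependency pins a newer version), a stopgap is to restore the removed name from NumPy before importing gensim. This is a monkey-patch sketch, not an official fix, and may break with future releases:

```python
import numpy as np
import scipy.linalg

# SciPy 1.13 removed triu from scipy.linalg; numpy.triu is equivalent for
# gensim's purposes, so expose it under the old name *before* importing gensim.
if not hasattr(scipy.linalg, "triu"):
    scipy.linalg.triu = np.triu

import gensim  # should now import without the ImportError
print(gensim.__version__)
```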
78,268,633 | 2024-4-3 | https://stackoverflow.com/questions/78268633/clean-way-to-check-if-variable-is-list-of-lists-using-pattern-matching | In my code, I need to distinguish a list of records from a list of lists of records. The existing code does it like so: if isinstance(input_var, list): if len(input_var) > 0: if all(isinstance(item, list) for item in input_var): return list_of_lists_processing(input_var) elif not any(isinstance(item, list) for item in input_var): return list_of_data_processing(input_var) else: raise ValueError(f"Unexpected input_var value {input_var}") else: return list() else: raise ValueError(f"Unexpected input_var value {input_var}") However, this seems ugly. I want to use Python 3.10's pattern matching to simplify the code. I came up with this version: match input_var: case [list(), *_]: return list_of_lists_processing(input_var) case list(): # also process empty list case return list_of_data_processing(input_var) case _: raise ValueError(f"Unexpected value {input_var=}") But there is a flaw here: case [list(), *_] only checks the first element of input_var, not all of them. In practice, this is enough for my purposes, but I want to ask anyway: is there a clean way to match only a list where every element is a list? I tried case [*list()]:, but this causes a SyntaxError. case list(list()): is syntactically correct, but doesn't work as expected (for example, it matches ["a"] - what is going on here?) | You can match the set of item types inside your list. class Constants: set_of_list = {list} match set(type(elem) for elem in input_var): case Constants.set_of_list: return list_of_lists_processing(input_var) case types if list in types: raise ValueError(f"Unexpected input_var value {input_var}") case _: return list_of_data_processing(input_var) Python 3.10 does not support matching values inside a set, so you still have to check if list is one of the types. The Constants class is used to trigger a value pattern. Raymond Hettinger made a great talk explaining this and other concepts related to pattern matching. | 5 | 2 |
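Another option that checks every element while staying inside the match statement is a class pattern with a guard. A sketch (the two return strings stand in for the question's processing functions):

```python
def classify(input_var):
    match input_var:
        case list() if all(isinstance(i, list) for i in input_var):
            # The empty list also lands here; add `case []:` above this one
            # if it needs separate handling.
            return "list of lists"
        case list() if not any(isinstance(i, list) for i in input_var):
            return "flat list"
        case _:
            raise ValueError(f"Unexpected value {input_var=}")

print(classify([[1, 2], [3]]))  # list of lists
print(classify([1, 2, 3]))      # flat list
```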
78,293,639 | 2024-4-8 | https://stackoverflow.com/questions/78293639/visualizing-time-series-data-with-heatmaps-and-3d-surface-plots | I need to visualize different indices as a heatmap or image. The goal is to plot hours on the y-axis, dates on the x-axis, and intensity values for each day. The attached figure illustrates the desired output: I aim to create a figure using pcolormesh, pcolor, imshow, or Seaborn's heatmap. For context, KP > 40 indicates a storm, and DST < -30 also indicates a storm. An important date in the data is 2020/09/01, which corresponds to an earthquake. Here's the approach I've tried so far: X, Y = np.meshgrid(sw_indices.index.day, sw_indices.index.hour) from scipy.interpolate import griddata Z = griddata((sw_indices.index.day, sw_indices.index.hour), sw_indices.kp, (X, Y), method="nearest") fig, ax = plt.subplots() ax.pcolormesh(X, Y, Z) Context The data being visualized in the described scenario is time series data concerning space weather indices, which track various environmental and astronomical measurements over time. The specific indices mentioned are: kp: A planetary geomagnetic activity index, which measures geomagnetic storms. KP values above 40 typically indicate significant geomagnetic activity or storms. dst: Another geomagnetic index, where values below -30 indicate geomagnetic storms. ssn: Sunspot number, which is an index of the number of sunspots and groups of sunspots present on the surface of the sun. f10.7: The solar radio flux at 10.7 cm (2800 MHz), which is a measurement of the solar emissions in the microwave range, and it's often used as a proxy for solar activity. The visualization aims to present space weather conditions by plotting KP index values across different hours and days. In this heatmap, the KP index is used as an indicator of geomagnetic activity, with higher values signifying more intense geomagnetic storms. By setting up the axes with dates on the x-axis and hours on the y-axis, and by mapping KP index values to color intensity, the visualization provides an insightful look into how geomagnetic conditions fluctuate over time. This approach allows for easy identification of critical events, such as storms, by observing changes in color intensity across the grid. The intended visualization method, such as using pcolormesh, pcolor, imshow, or Seaborn's heatmap, is apt for this kind of data as they can effectively show variations across two dimensions with color coding to represent the third dimension (intensity or magnitude of the indices). Other indices like dst, ssn, and f10.7 could potentially be visualized in a similar manner to track different types of environmental conditions or solar activity. 
Data Here's the link to the full data file: Google Drive Link Here are excerpts from the data: ,year,doy,hr,kp,ssn,dst,f10_7 2020-08-02 00:00:00,YEAR,DOY,HR,1,2,3,4.0 2020-08-02 01:00:00,2020,215,0,3,15,18,74.9 2020-08-02 02:00:00,2020,215,1,3,15,16,74.9 2020-08-02 03:00:00,2020,215,2,3,15,10,74.9 2020-08-02 04:00:00,2020,215,3,7,15,7,74.9 2020-08-02 05:00:00,2020,215,4,7,15,6,74.9 2020-08-02 06:00:00,2020,215,5,7,15,11,74.9 2020-08-02 07:00:00,2020,215,6,10,15,17,74.9 2020-08-02 08:00:00,2020,215,7,10,15,14,74.9 2020-08-02 09:00:00,2020,215,8,10,15,19,74.9 2020-08-02 10:00:00,2020,215,9,27,15,22,74.9 2020-08-02 11:00:00,2020,215,10,27,15,21,74.9 2020-08-02 12:00:00,2020,215,11,27,15,12,74.9 2020-08-02 13:00:00,2020,215,12,33,15,-2,74.9 2020-08-02 14:00:00,2020,215,13,33,15,-14,74.9 2020-08-02 15:00:00,2020,215,14,33,15,-12,74.9 2020-08-02 16:00:00,2020,215,15,30,15,-14,74.9 2020-08-02 17:00:00,2020,215,16,30,15,-14,74.9 2020-08-02 18:00:00,2020,215,17,30,15,-7,74.9 2020-08-02 19:00:00,2020,215,18,20,15,0,74.9 2020-08-02 20:00:00,2020,215,19,20,15,3,74.9 2020-08-02 21:00:00,2020,215,20,20,15,9,74.9 2020-08-02 22:00:00,2020,215,21,30,15,-3,74.9 2020-08-02 23:00:00,2020,215,22,30,15,-8,74.9 2020-08-03 00:00:00,2020,215,23,30,15,-5,74.9 2020-08-03 01:00:00,2020,216,0,33,11,-2,74.8 2020-08-03 02:00:00,2020,216,1,33,11,-1,74.8 2020-08-03 03:00:00,2020,216,2,33,11,-2,74.8 2020-08-03 04:00:00,2020,216,3,33,11,-19,74.8 2020-08-03 05:00:00,2020,216,4,33,11,-17,74.8 2020-08-03 06:00:00,2020,216,5,33,11,-16,74.8 2020-08-03 07:00:00,2020,216,6,30,11,-22,74.8 2020-08-03 08:00:00,2020,216,7,30,11,-20,74.8 2020-08-03 09:00:00,2020,216,8,30,11,-28,74.8 2020-08-03 10:00:00,2020,216,9,27,11,-30,74.8 2020-08-03 11:00:00,2020,216,10,27,11,-18,74.8 2020-08-03 12:00:00,2020,216,11,27,11,-12,74.8 2020-08-03 13:00:00,2020,216,12,23,11,-7,74.8 2020-08-03 14:00:00,2020,216,13,23,11,-7,74.8 2020-08-03 15:00:00,2020,216,14,23,11,-9,74.8 2020-08-03 16:00:00,2020,216,15,33,11,-8,74.8 2020-08-03 17:00:00,2020,216,16,33,11,-3,74.8 2020-08-03 18:00:00,2020,216,17,33,11,-7,74.8 2020-08-03 19:00:00,2020,216,18,30,11,-13,74.8 2020-08-03 20:00:00,2020,216,19,30,11,-16,74.8 2020-08-03 21:00:00,2020,216,20,30,11,-16,74.8 2020-08-03 22:00:00,2020,216,21,33,11,-15,74.8 2020-08-03 23:00:00,2020,216,22,33,11,-21,74.8 2020-08-04 00:00:00,2020,216,23,33,11,-18,74.8 2020-08-04 01:00:00,2020,217,0,20,11,-14,75.1 2020-08-04 02:00:00,2020,217,1,20,11,-13,75.1 2020-08-04 03:00:00,2020,217,2,20,11,-18,75.1 2020-08-04 04:00:00,2020,217,3,33,11,-21,75.1 2020-08-04 05:00:00,2020,217,4,33,11,-20,75.1 2020-08-04 06:00:00,2020,217,5,33,11,-15,75.1 2020-08-04 07:00:00,2020,217,6,13,11,-13,75.1 2020-08-04 08:00:00,2020,217,7,13,11,-12,75.1 2020-08-04 09:00:00,2020,217,8,13,11,-11,75.1 2020-08-04 10:00:00,2020,217,9,17,11,-10,75.1 2020-08-04 11:00:00,2020,217,10,17,11,-9,75.1 2020-08-04 12:00:00,2020,217,11,17,11,-5,75.1 2020-08-04 13:00:00,2020,217,12,13,11,-5,75.1 2020-08-04 14:00:00,2020,217,13,13,11,-5,75.1 2020-08-04 15:00:00,2020,217,14,13,11,-5,75.1 2020-08-04 16:00:00,2020,217,15,13,11,-2,75.1 2020-08-04 17:00:00,2020,217,16,13,11,-1,75.1 2020-08-04 18:00:00,2020,217,17,13,11,-3,75.1 2020-08-04 19:00:00,2020,217,18,13,11,-3,75.1 2020-08-04 20:00:00,2020,217,19,13,11,-6,75.1 2020-08-04 21:00:00,2020,217,20,13,11,-10,75.1 2020-08-04 22:00:00,2020,217,21,17,11,-8,75.1 2020-08-04 23:00:00,2020,217,22,17,11,-11,75.1 2020-08-05 00:00:00,2020,217,23,17,11,-13,75.1 2020-08-05 01:00:00,2020,218,0,17,12,-11,75.6 2020-08-05 
02:00:00,2020,218,1,17,12,-8,75.6 2020-08-05 03:00:00,2020,218,2,17,12,-11,75.6 2020-08-05 04:00:00,2020,218,3,13,12,-12,75.6 2020-08-05 05:00:00,2020,218,4,13,12,-11,75.6 2020-08-05 06:00:00,2020,218,5,13,12,-14,75.6 2020-08-05 07:00:00,2020,218,6,10,12,-14,75.6 2020-08-05 08:00:00,2020,218,7,10,12,-12,75.6 2020-08-05 09:00:00,2020,218,8,10,12,-11,75.6 2020-08-05 10:00:00,2020,218,9,7,12,-11,75.6 2020-08-05 11:00:00,2020,218,10,7,12,-11,75.6 2020-08-05 12:00:00,2020,218,11,7,12,-9,75.6 2020-08-05 13:00:00,2020,218,12,7,12,-7,75.6 2020-08-05 14:00:00,2020,218,13,7,12,-7,75.6 2020-08-05 15:00:00,2020,218,14,7,12,-4,75.6 2020-08-05 16:00:00,2020,218,15,10,12,-2,75.6 2020-08-05 17:00:00,2020,218,16,10,12,-3,75.6 2020-08-05 18:00:00,2020,218,17,10,12,-5,75.6 2020-08-05 19:00:00,2020,218,18,23,12,-4,75.6 2020-08-05 20:00:00,2020,218,19,23,12,-7,75.6 2020-08-05 21:00:00,2020,218,20,23,12,-5,75.6 2020-08-05 22:00:00,2020,218,21,13,12,-7,75.6 2020-08-05 23:00:00,2020,218,22,13,12,-12,75.6 2020-08-06 00:00:00,2020,218,23,13,12,-15,75.6 2020-08-06 01:00:00,2020,219,0,7,14,-16,75.1 2020-08-06 02:00:00,2020,219,1,7,14,-16,75.1 2020-08-06 03:00:00,2020,219,2,7,14,-13,75.1 2020-08-06 04:00:00,2020,219,3,13,14,-7,75.1 2020-08-06 05:00:00,2020,219,4,13,14,-2,75.1 2020-08-06 06:00:00,2020,219,5,13,14,1,75.1 2020-08-06 07:00:00,2020,219,6,3,14,1,75.1 2020-08-06 08:00:00,2020,219,7,3,14,0,75.1 2020-08-06 09:00:00,2020,219,8,3,14,-2,75.1 2020-08-06 10:00:00,2020,219,9,7,14,-5,75.1 2020-08-06 11:00:00,2020,219,10,7,14,-3,75.1 2020-08-06 12:00:00,2020,219,11,7,14,-1,75.1 2020-08-06 13:00:00,2020,219,12,17,14,-3,75.1 2020-08-06 14:00:00,2020,219,13,17,14,-8,75.1 2020-08-06 15:00:00,2020,219,14,17,14,-6,75.1 2020-08-06 16:00:00,2020,219,15,13,14,-6,75.1 2020-08-06 17:00:00,2020,219,16,13,14,-8,75.1 2020-08-06 18:00:00,2020,219,17,13,14,-9,75.1 2020-08-06 19:00:00,2020,219,18,7,14,-12,75.1 2020-08-06 20:00:00,2020,219,19,7,14,-10,75.1 2020-08-06 21:00:00,2020,219,20,7,14,-4,75.1 2020-08-06 22:00:00,2020,219,21,20,14,-7,75.1 2020-08-06 23:00:00,2020,219,22,20,14,-14,75.1 2020-08-07 00:00:00,2020,219,23,20,14,-16,75.1 2020-08-07 01:00:00,2020,220,0,17,17,-13,76.0 2020-08-07 02:00:00,2020,220,1,17,17,-14,76.0 2020-08-07 03:00:00,2020,220,2,17,17,-14,76.0 2020-08-07 04:00:00,2020,220,3,10,17,-13,76.0 2020-08-07 05:00:00,2020,220,4,10,17,-13,76.0 2020-08-07 06:00:00,2020,220,5,10,17,-11,76.0 2020-08-07 07:00:00,2020,220,6,0,17,-5,76.0 2020-08-07 08:00:00,2020,220,7,0,17,-3,76.0 2020-08-07 09:00:00,2020,220,8,0,17,-2,76.0 2020-08-07 10:00:00,2020,220,9,3,17,-2,76.0 2020-08-07 11:00:00,2020,220,10,3,17,-2,76.0 2020-08-07 12:00:00,2020,220,11,3,17,2,76.0 2020-08-07 13:00:00,2020,220,12,3,17,4,76.0 2020-08-07 14:00:00,2020,220,13,3,17,5,76.0 2020-08-07 15:00:00,2020,220,14,3,17,4,76.0 2020-08-07 16:00:00,2020,220,15,3,17,3,76.0 2020-08-07 17:00:00,2020,220,16,3,17,0,76.0 2020-08-07 18:00:00,2020,220,17,3,17,0,76.0 2020-08-07 19:00:00,2020,220,18,7,17,1,76.0 2020-08-07 20:00:00,2020,220,19,7,17,1,76.0 2020-08-07 21:00:00,2020,220,20,7,17,1,76.0 2020-08-07 22:00:00,2020,220,21,13,17,-2,76.0 2020-08-07 23:00:00,2020,220,22,13,17,-3,76.0 2020-08-08 00:00:00,2020,220,23,13,17,-1,76.0 2020-08-08 01:00:00,2020,221,0,20,12,-3,76.8 2020-08-08 02:00:00,2020,221,1,20,12,-6,76.8 2020-08-08 03:00:00,2020,221,2,20,12,-5,76.8 2020-08-08 04:00:00,2020,221,3,17,12,-4,76.8 2020-08-08 05:00:00,2020,221,4,17,12,-4,76.8 2020-08-08 06:00:00,2020,221,5,17,12,-5,76.8 2020-08-08 07:00:00,2020,221,6,13,12,-8,76.8 2020-08-08 
08:00:00,2020,221,7,13,12,-10,76.8 2020-08-08 09:00:00,2020,221,8,13,12,-11,76.8 2020-08-08 10:00:00,2020,221,9,10,12,-7,76.8 2020-08-08 11:00:00,2020,221,10,10,12,-5,76.8 2020-08-08 12:00:00,2020,221,11,10,12,-4,76.8 2020-08-08 13:00:00,2020,221,12,13,12,-7,76.8 2020-08-08 14:00:00,2020,221,13,13,12,-8,76.8 2020-08-08 15:00:00,2020,221,14,13,12,-8,76.8 2020-08-08 16:00:00,2020,221,15,10,12,-10,76.8 2020-08-08 17:00:00,2020,221,16,10,12,-12,76.8 2020-08-08 18:00:00,2020,221,17,10,12,-12,76.8 2020-08-08 19:00:00,2020,221,18,7,12,-12,76.8 2020-08-08 20:00:00,2020,221,19,7,12,-10,76.8 2020-08-08 21:00:00,2020,221,20,7,12,-7,76.8 2020-08-08 22:00:00,2020,221,21,3,12,-2,76.8 2020-08-08 23:00:00,2020,221,22,3,12,0,76.8 2020-08-09 00:00:00,2020,221,23,3,12,-2,76.8 2020-08-09 01:00:00,2020,222,0,0,13,-4,76.0 2020-08-09 02:00:00,2020,222,1,0,13,-7,76.0 2020-08-09 03:00:00,2020,222,2,0,13,-8,76.0 2020-08-09 04:00:00,2020,222,3,3,13,-7,76.0 2020-08-09 05:00:00,2020,222,4,3,13,-7,76.0 2020-08-09 06:00:00,2020,222,5,3,13,-5,76.0 2020-08-09 07:00:00,2020,222,6,3,13,-2,76.0 2020-08-09 08:00:00,2020,222,7,3,13,-1,76.0 2020-08-09 09:00:00,2020,222,8,3,13,-2,76.0 2020-08-09 10:00:00,2020,222,9,7,13,-7,76.0 2020-08-09 11:00:00,2020,222,10,7,13,-8,76.0 2020-08-09 12:00:00,2020,222,11,7,13,-8,76.0 2020-08-09 13:00:00,2020,222,12,3,13,-8,76.0 2020-08-09 14:00:00,2020,222,13,3,13,-8,76.0 2020-08-09 15:00:00,2020,222,14,3,13,-6,76.0 2020-08-09 16:00:00,2020,222,15,3,13,-3,76.0 2020-08-09 17:00:00,2020,222,16,3,13,-4,76.0 2020-08-09 18:00:00,2020,222,17,3,13,-5,76.0 2020-08-09 19:00:00,2020,222,18,3,13,-4,76.0 2020-08-09 20:00:00,2020,222,19,3,13,0,76.0 2020-08-09 21:00:00,2020,222,20,3,13,4,76.0 2020-08-09 22:00:00,2020,222,21,7,13,5,76.0 2020-08-09 23:00:00,2020,222,22,7,13,5,76.0 2020-08-10 00:00:00,2020,222,23,7,13,6,76.0 2020-08-10 01:00:00,2020,223,0,3,13,4,76.2 2020-08-10 02:00:00,2020,223,1,3,13,4,76.2 2020-08-10 03:00:00,2020,223,2,3,13,6,76.2 2020-08-10 04:00:00,2020,223,3,3,13,5,76.2 2020-08-10 05:00:00,2020,223,4,3,13,2,76.2 2020-08-10 06:00:00,2020,223,5,3,13,0,76.2 2020-08-10 07:00:00,2020,223,6,3,13,-3,76.2 2020-08-10 08:00:00,2020,223,7,3,13,-5,76.2 2020-08-10 09:00:00,2020,223,8,3,13,-6,76.2 2020-08-10 10:00:00,2020,223,9,3,13,-7,76.2 2020-08-10 11:00:00,2020,223,10,3,13,-5,76.2 2020-08-10 12:00:00,2020,223,11,3,13,-4,76.2 2020-08-10 13:00:00,2020,223,12,0,13,-5,76.2 2020-08-10 14:00:00,2020,223,13,0,13,-4,76.2 2020-08-10 15:00:00,2020,223,14,0,13,0,76.2 2020-08-10 16:00:00,2020,223,15,7,13,1,76.2 2020-08-10 17:00:00,2020,223,16,7,13,-1,76.2 2020-08-10 18:00:00,2020,223,17,7,13,-1,76.2 2020-08-10 19:00:00,2020,223,18,3,13,0,76.2 2020-08-10 20:00:00,2020,223,19,3,13,3,76.2 2020-08-10 21:00:00,2020,223,20,3,13,4,76.2 2020-08-10 22:00:00,2020,223,21,7,13,3,76.2 2020-08-10 23:00:00,2020,223,22,7,13,1,76.2 2020-08-11 00:00:00,2020,223,23,7,13,-1,76.2 2020-08-11 01:00:00,2020,224,0,10,12,2,75.4 2020-08-11 02:00:00,2020,224,1,10,12,1,75.4 2020-08-11 03:00:00,2020,224,2,10,12,-3,75.4 2020-08-11 04:00:00,2020,224,3,7,12,-6,75.4 2020-08-11 05:00:00,2020,224,4,7,12,-6,75.4 2020-08-11 06:00:00,2020,224,5,7,12,-5,75.4 2020-08-11 07:00:00,2020,224,6,3,12,-4,75.4 2020-08-11 08:00:00,2020,224,7,3,12,-3,75.4 2020-08-11 09:00:00,2020,224,8,3,12,-4,75.4 2020-08-11 10:00:00,2020,224,9,3,12,-3,75.4 2020-08-11 11:00:00,2020,224,10,3,12,-2,75.4 2020-08-11 12:00:00,2020,224,11,3,12,-3,75.4 2020-08-11 13:00:00,2020,224,12,3,12,-3,75.4 2020-08-11 14:00:00,2020,224,13,3,12,0,75.4 2020-08-11 15:00:00,2020,224,14,3,12,1,75.4 
2020-08-11 16:00:00,2020,224,15,0,12,2,75.4 2020-08-11 17:00:00,2020,224,16,0,12,1,75.4 2020-08-11 18:00:00,2020,224,17,0,12,1,75.4 2020-08-11 19:00:00,2020,224,18,0,12,5,75.4 2020-08-11 20:00:00,2020,224,19,0,12,6,75.4 2020-08-11 21:00:00,2020,224,20,0,12,5,75.4 2020-08-11 22:00:00,2020,224,21,3,12,4,75.4 2020-08-11 23:00:00,2020,224,22,3,12,7,75.4 2020-08-12 00:00:00,2020,224,23,3,12,10,75.4 2020-08-12 01:00:00,2020,225,0,3,17,10,75.1 2020-08-12 02:00:00,2020,225,1,3,17,10,75.1 2020-08-12 03:00:00,2020,225,2,3,17,12,75.1 2020-08-12 04:00:00,2020,225,3,13,17,9,75.1 2020-08-12 05:00:00,2020,225,4,13,17,5,75.1 2020-08-12 06:00:00,2020,225,5,13,17,3,75.1 2020-08-12 07:00:00,2020,225,6,10,17,2,75.1 2020-08-12 08:00:00,2020,225,7,10,17,2,75.1 2020-08-12 09:00:00,2020,225,8,10,17,0,75.1 2020-08-12 10:00:00,2020,225,9,7,17,1,75.1 2020-08-12 11:00:00,2020,225,10,7,17,2,75.1 2020-08-12 12:00:00,2020,225,11,7,17,3,75.1 2020-08-12 13:00:00,2020,225,12,10,17,3,75.1 2020-08-12 14:00:00,2020,225,13,10,17,6,75.1 2020-08-12 15:00:00,2020,225,14,10,17,7,75.1 2020-08-12 16:00:00,2020,225,15,7,17,8,75.1 2020-08-12 17:00:00,2020,225,16,7,17,8,75.1 2020-08-12 18:00:00,2020,225,17,7,17,7,75.1 2020-08-12 19:00:00,2020,225,18,7,17,5,75.1 2020-08-12 20:00:00,2020,225,19,7,17,4,75.1 2020-08-12 21:00:00,2020,225,20,7,17,6,75.1 2020-08-12 22:00:00,2020,225,21,3,17,7,75.1 2020-08-12 23:00:00,2020,225,22,3,17,5,75.1 2020-08-13 00:00:00,2020,225,23,3,17,2,75.1 2020-08-13 01:00:00,2020,226,0,17,10,1,74.2 2020-08-13 02:00:00,2020,226,1,17,10,0,74.2 2020-08-13 03:00:00,2020,226,2,17,10,0,74.2 2020-08-13 04:00:00,2020,226,3,7,10,-2,74.2 2020-08-13 05:00:00,2020,226,4,7,10,-1,74.2 2020-08-13 06:00:00,2020,226,5,7,10,2,74.2 2020-08-13 07:00:00,2020,226,6,3,10,4,74.2 2020-08-13 08:00:00,2020,226,7,3,10,7,74.2 2020-08-13 09:00:00,2020,226,8,3,10,6,74.2 2020-08-13 10:00:00,2020,226,9,3,10,5,74.2 2020-08-13 11:00:00,2020,226,10,3,10,4,74.2 2020-08-13 12:00:00,2020,226,11,3,10,2,74.2 2020-08-13 13:00:00,2020,226,12,13,10,2,74.2 2020-08-13 14:00:00,2020,226,13,13,10,2,74.2 2020-08-13 15:00:00,2020,226,14,13,10,3,74.2 2020-08-13 16:00:00,2020,226,15,3,10,2,74.2 2020-08-13 17:00:00,2020,226,16,3,10,3,74.2 2020-08-13 18:00:00,2020,226,17,3,10,1,74.2 2020-08-13 19:00:00,2020,226,18,3,10,-1,74.2 2020-08-13 20:00:00,2020,226,19,3,10,-1,74.2 2020-08-13 21:00:00,2020,226,20,3,10,2,74.2 2020-08-13 22:00:00,2020,226,21,13,10,-1,74.2 2020-08-13 23:00:00,2020,226,22,13,10,-5,74.2 2020-08-14 00:00:00,2020,226,23,13,10,-5,74.2 2020-08-14 01:00:00,2020,227,0,10,5,-2,72.6 2020-08-14 02:00:00,2020,227,1,10,5,1,72.6 2020-08-14 03:00:00,2020,227,2,10,5,6,72.6 2020-08-14 04:00:00,2020,227,3,17,5,4,72.6 2020-08-14 05:00:00,2020,227,4,17,5,1,72.6 2020-08-14 06:00:00,2020,227,5,17,5,3,72.6 2020-08-14 07:00:00,2020,227,6,3,5,3,72.6 2020-08-14 08:00:00,2020,227,7,3,5,2,72.6 2020-08-14 09:00:00,2020,227,8,3,5,3,72.6 2020-08-14 10:00:00,2020,227,9,3,5,6,72.6 2020-08-14 11:00:00,2020,227,10,3,5,8,72.6 2020-08-14 12:00:00,2020,227,11,3,5,7,72.6 2020-08-14 13:00:00,2020,227,12,7,5,5,72.6 2020-08-14 14:00:00,2020,227,13,7,5,6,72.6 2020-08-14 15:00:00,2020,227,14,7,5,6,72.6 2020-08-14 16:00:00,2020,227,15,3,5,7,72.6 2020-08-14 17:00:00,2020,227,16,3,5,6,72.6 2020-08-14 18:00:00,2020,227,17,3,5,3,72.6 2020-08-14 19:00:00,2020,227,18,7,5,-1,72.6 2020-08-14 20:00:00,2020,227,19,7,5,-5,72.6 2020-08-14 21:00:00,2020,227,20,7,5,-8,72.6 2020-08-14 22:00:00,2020,227,21,7,5,-9,72.6 2020-08-14 23:00:00,2020,227,22,7,5,-9,72.6 2020-08-15 00:00:00,2020,227,23,7,5,-11,72.6 
2020-08-15 01:00:00,2020,228,0,0,0,-10,72.4 2020-08-15 02:00:00,2020,228,1,0,0,-5,72.4 2020-09-03 22:00:00,2020,247,21,17,0,-15,71.2 2020-09-03 23:00:00,2020,247,22,17,0,-17,71.2 2020-09-04 00:00:00,2020,247,23,17,0,-21,71.2 2020-09-04 01:00:00,2020,248,0,27,0,-22,70.8 2020-09-04 02:00:00,2020,248,1,27,0,-23,70.8 2020-09-04 03:00:00,2020,248,2,27,0,-19,70.8 2020-09-04 04:00:00,2020,248,3,13,0,-16,70.8 2020-09-04 05:00:00,2020,248,4,13,0,-14,70.8 2020-09-04 06:00:00,2020,248,5,13,0,-15,70.8 2020-09-04 07:00:00,2020,248,6,13,0,-18,70.8 2020-09-04 08:00:00,2020,248,7,13,0,-18,70.8 2020-09-04 09:00:00,2020,248,8,13,0,-15,70.8 2020-09-04 10:00:00,2020,248,9,20,0,-11,70.8 2020-09-04 11:00:00,2020,248,10,20,0,-18,70.8 2020-09-04 12:00:00,2020,248,11,20,0,-19,70.8 2020-09-04 13:00:00,2020,248,12,20,0,-18,70.8 2020-09-04 14:00:00,2020,248,13,20,0,-22,70.8 2020-09-04 15:00:00,2020,248,14,20,0,-21,70.8 2020-09-04 16:00:00,2020,248,15,17,0,-26,70.8 2020-09-04 17:00:00,2020,248,16,17,0,-27,70.8 2020-09-04 18:00:00,2020,248,17,17,0,-25,70.8 2020-09-04 19:00:00,2020,248,18,23,0,-22,70.8 2020-09-04 20:00:00,2020,248,19,23,0,-20,70.8 2020-09-04 21:00:00,2020,248,20,23,0,-24,70.8 2020-09-04 22:00:00,2020,248,21,3,0,-22,70.8 2020-09-04 23:00:00,2020,248,22,3,0,-21,70.8 2020-09-05 00:00:00,2020,248,23,3,0,-21,70.8 2020-09-05 01:00:00,2020,249,0,17,0,-21,70.3 2020-09-05 02:00:00,2020,249,1,17,0,-22,70.3 2020-09-05 03:00:00,2020,249,2,17,0,-23,70.3 2020-09-05 04:00:00,2020,249,3,13,0,-22,70.3 2020-09-05 05:00:00,2020,249,4,13,0,-19,70.3 2020-09-05 06:00:00,2020,249,5,13,0,-16,70.3 2020-09-05 07:00:00,2020,249,6,17,0,-15,70.3 2020-09-05 08:00:00,2020,249,7,17,0,-15,70.3 2020-09-05 09:00:00,2020,249,8,17,0,-21,70.3 2020-09-05 10:00:00,2020,249,9,20,0,-23,70.3 2020-09-05 11:00:00,2020,249,10,20,0,-22,70.3 2020-09-05 12:00:00,2020,249,11,20,0,-11,70.3 2020-09-05 13:00:00,2020,249,12,17,0,-7,70.3 2020-09-05 14:00:00,2020,249,13,17,0,-7,70.3 2020-09-05 15:00:00,2020,249,14,17,0,-7,70.3 2020-09-05 16:00:00,2020,249,15,3,0,-8,70.3 2020-09-05 17:00:00,2020,249,16,3,0,-9,70.3 2020-09-05 18:00:00,2020,249,17,3,0,-7,70.3 2020-09-05 19:00:00,2020,249,18,10,0,-8,70.3 2020-09-05 20:00:00,2020,249,19,10,0,-6,70.3 2020-09-05 21:00:00,2020,249,20,10,0,-6,70.3 2020-09-05 22:00:00,2020,249,21,10,0,-8,70.3 2020-09-05 23:00:00,2020,249,22,10,0,-9,70.3 2020-09-06 00:00:00,2020,249,23,10,0,-14,70.3 2020-09-06 01:00:00,2020,250,0,20,0,-18,70.5 2020-09-06 02:00:00,2020,250,1,20,0,-21,70.5 2020-09-06 03:00:00,2020,250,2,20,0,-16,70.5 2020-09-06 04:00:00,2020,250,3,10,0,-12,70.5 2020-09-06 05:00:00,2020,250,4,10,0,-10,70.5 2020-09-06 06:00:00,2020,250,5,10,0,-11,70.5 2020-09-06 07:00:00,2020,250,6,3,0,-8,70.5 2020-09-06 08:00:00,2020,250,7,3,0,-5,70.5 2020-09-06 09:00:00,2020,250,8,3,0,-7,70.5 2020-09-06 10:00:00,2020,250,9,10,0,-6,70.5 2020-09-06 11:00:00,2020,250,10,10,0,-6,70.5 2020-09-06 12:00:00,2020,250,11,10,0,-9,70.5 2020-09-06 13:00:00,2020,250,12,10,0,-9,70.5 2020-09-06 14:00:00,2020,250,13,10,0,-7,70.5 2020-09-06 15:00:00,2020,250,14,10,0,-6,70.5 2020-09-06 16:00:00,2020,250,15,3,0,-7,70.5 2020-09-06 17:00:00,2020,250,16,3,0,-7,70.5 2020-09-06 18:00:00,2020,250,17,3,0,-6,70.5 2020-09-06 19:00:00,2020,250,18,7,0,-5,70.5 2020-09-06 20:00:00,2020,250,19,7,0,-6,70.5 2020-09-06 21:00:00,2020,250,20,7,0,-9,70.5 2020-09-06 22:00:00,2020,250,21,17,0,-15,70.5 2020-09-06 23:00:00,2020,250,22,17,0,-20,70.5 2020-09-07 00:00:00,2020,250,23,17,0,-21,70.5 2020-09-07 01:00:00,2020,251,0,10,0,-21,71.3 2020-09-07 
02:00:00,2020,251,1,10,0,-16,71.3 2020-09-07 03:00:00,2020,251,2,10,0,-13,71.3 2020-09-07 04:00:00,2020,251,3,13,0,-11,71.3 2020-09-07 05:00:00,2020,251,4,13,0,-14,71.3 2020-09-07 06:00:00,2020,251,5,13,0,-17,71.3 2020-09-07 07:00:00,2020,251,6,13,0,-14,71.3 2020-09-07 08:00:00,2020,251,7,13,0,-10,71.3 2020-09-07 09:00:00,2020,251,8,13,0,-11,71.3 2020-09-07 10:00:00,2020,251,9,7,0,-11,71.3 2020-09-07 11:00:00,2020,251,10,7,0,-13,71.3 2020-09-07 12:00:00,2020,251,11,7,0,-14,71.3 2020-09-07 13:00:00,2020,251,12,7,0,-15,71.3 2020-09-07 14:00:00,2020,251,13,7,0,-12,71.3 2020-09-07 15:00:00,2020,251,14,7,0,-12,71.3 2020-09-07 16:00:00,2020,251,15,10,0,-10,71.3 2020-09-07 17:00:00,2020,251,16,10,0,-13,71.3 2020-09-07 18:00:00,2020,251,17,10,0,-12,71.3 2020-09-07 19:00:00,2020,251,18,3,0,-9,71.3 2020-09-07 20:00:00,2020,251,19,3,0,-8,71.3 2020-09-07 21:00:00,2020,251,20,3,0,-10,71.3 2020-09-07 22:00:00,2020,251,21,0,0,-14,71.3 2020-09-07 23:00:00,2020,251,22,0,0,-16,71.3 2020-09-08 00:00:00,2020,251,23,0,0,-16,71.3 2020-09-08 01:00:00,2020,252,0,7,0,-12,70.9 2020-09-08 02:00:00,2020,252,1,7,0,-12,70.9 2020-09-08 03:00:00,2020,252,2,7,0,-13,70.9 2020-09-08 04:00:00,2020,252,3,3,0,-12,70.9 2020-09-08 05:00:00,2020,252,4,3,0,-12,70.9 2020-09-08 06:00:00,2020,252,5,3,0,-12,70.9 2020-09-08 07:00:00,2020,252,6,10,0,-9,70.9 2020-09-08 08:00:00,2020,252,7,10,0,-6,70.9 2020-09-08 09:00:00,2020,252,8,10,0,-6,70.9 2020-09-08 10:00:00,2020,252,9,10,0,-1,70.9 2020-09-08 11:00:00,2020,252,10,10,0,2,70.9 2020-09-08 12:00:00,2020,252,11,10,0,1,70.9 2020-09-08 13:00:00,2020,252,12,7,0,2,70.9 2020-09-08 14:00:00,2020,252,13,7,0,3,70.9 2020-09-08 15:00:00,2020,252,14,7,0,-1,70.9 2020-09-08 16:00:00,2020,252,15,13,0,-6,70.9 2020-09-08 17:00:00,2020,252,16,13,0,-7,70.9 2020-09-08 18:00:00,2020,252,17,13,0,-9,70.9 2020-09-08 19:00:00,2020,252,18,10,0,-7,70.9 2020-09-08 20:00:00,2020,252,19,10,0,-5,70.9 2020-09-08 21:00:00,2020,252,20,10,0,-3,70.9 2020-09-08 22:00:00,2020,252,21,0,0,-2,70.9 2020-09-08 23:00:00,2020,252,22,0,0,-2,70.9 2020-09-09 00:00:00,2020,252,23,0,0,-2,70.9 2020-09-09 01:00:00,2020,253,0,7,0,-3,70.7 2020-09-09 02:00:00,2020,253,1,7,0,-4,70.7 2020-09-09 03:00:00,2020,253,2,7,0,-4,70.7 2020-09-09 04:00:00,2020,253,3,0,0,-5,70.7 2020-09-09 05:00:00,2020,253,4,0,0,-5,70.7 2020-09-09 06:00:00,2020,253,5,0,0,-6,70.7 2020-09-09 07:00:00,2020,253,6,0,0,-2,70.7 2020-09-09 08:00:00,2020,253,7,0,0,0,70.7 2020-09-09 09:00:00,2020,253,8,0,0,0,70.7 2020-09-09 10:00:00,2020,253,9,0,0,1,70.7 2020-09-09 11:00:00,2020,253,10,0,0,2,70.7 2020-09-09 12:00:00,2020,253,11,0,0,1,70.7 2020-09-09 13:00:00,2020,253,12,0,0,1,70.7 2020-09-09 14:00:00,2020,253,13,0,0,0,70.7 2020-09-09 15:00:00,2020,253,14,0,0,-1,70.7 2020-09-09 16:00:00,2020,253,15,3,0,-2,70.7 2020-09-09 17:00:00,2020,253,16,3,0,-1,70.7 2020-09-09 18:00:00,2020,253,17,3,0,0,70.7 2020-09-09 19:00:00,2020,253,18,0,0,0,70.7 2020-09-09 20:00:00,2020,253,19,0,0,1,70.7 2020-09-09 21:00:00,2020,253,20,0,0,0,70.7 2020-09-09 22:00:00,2020,253,21,0,0,-1,70.7 2020-09-09 23:00:00,2020,253,22,0,0,1,70.7 2020-09-10 00:00:00,2020,253,23,0,0,1,70.7 2020-09-10 01:00:00,2020,254,0,0,0,0,70.2 2020-09-10 02:00:00,2020,254,1,0,0,0,70.2 2020-09-10 03:00:00,2020,254,2,0,0,0,70.2 2020-09-10 04:00:00,2020,254,3,0,0,-1,70.2 2020-09-10 05:00:00,2020,254,4,0,0,-1,70.2 2020-09-10 06:00:00,2020,254,5,0,0,-2,70.2 2020-09-10 07:00:00,2020,254,6,0,0,-3,70.2 2020-09-10 08:00:00,2020,254,7,0,0,-1,70.2 2020-09-10 09:00:00,2020,254,8,0,0,1,70.2 2020-09-10 10:00:00,2020,254,9,3,0,3,70.2 2020-09-10 
11:00:00,2020,254,10,3,0,-1,70.2 2020-09-10 12:00:00,2020,254,11,3,0,1,70.2 2020-09-10 13:00:00,2020,254,12,3,0,2,70.2 2020-09-10 14:00:00,2020,254,13,3,0,4,70.2 2020-09-10 15:00:00,2020,254,14,3,0,3,70.2 2020-09-10 16:00:00,2020,254,15,3,0,2,70.2 2020-09-10 17:00:00,2020,254,16,3,0,2,70.2 2020-09-10 18:00:00,2020,254,17,3,0,4,70.2 2020-09-10 19:00:00,2020,254,18,10,0,3,70.2 2020-09-10 20:00:00,2020,254,19,10,0,1,70.2 2020-09-10 21:00:00,2020,254,20,10,0,1,70.2 2020-09-10 22:00:00,2020,254,21,3,0,4,70.2 2020-09-10 23:00:00,2020,254,22,3,0,4,70.2 2020-09-11 00:00:00,2020,254,23,3,0,2,70.2 2020-09-11 01:00:00,2020,255,0,0,0,2,69.6 2020-09-11 02:00:00,2020,255,1,0,0,4,69.6 2020-09-11 03:00:00,2020,255,2,0,0,3,69.6 2020-09-11 04:00:00,2020,255,3,0,0,4,69.6 2020-09-11 05:00:00,2020,255,4,0,0,2,69.6 2020-09-11 06:00:00,2020,255,5,0,0,2,69.6 2020-09-11 07:00:00,2020,255,6,3,0,2,69.6 2020-09-11 08:00:00,2020,255,7,3,0,2,69.6 2020-09-11 09:00:00,2020,255,8,3,0,-1,69.6 2020-09-11 10:00:00,2020,255,9,3,0,-2,69.6 2020-09-11 11:00:00,2020,255,10,3,0,-1,69.6 2020-09-11 12:00:00,2020,255,11,3,0,-2,69.6 2020-09-11 13:00:00,2020,255,12,0,0,-2,69.6 2020-09-11 14:00:00,2020,255,13,0,0,-1,69.6 2020-09-11 15:00:00,2020,255,14,0,0,-1,69.6 2020-09-11 16:00:00,2020,255,15,7,0,-1,69.6 2020-09-11 17:00:00,2020,255,16,7,0,0,69.6 2020-09-11 18:00:00,2020,255,17,7,0,-2,69.6 2020-09-11 19:00:00,2020,255,18,10,0,0,69.6 2020-09-11 20:00:00,2020,255,19,10,0,-1,69.6 2020-09-11 21:00:00,2020,255,20,10,0,-2,69.6 2020-09-11 22:00:00,2020,255,21,10,0,-6,69.6 2020-09-11 23:00:00,2020,255,22,10,0,-10,69.6 2020-09-12 00:00:00,2020,255,23,10,0,-11,69.6 2020-09-12 01:00:00,2020,256,0,13,0,-10,70.2 2020-09-12 02:00:00,2020,256,1,13,0,-8,70.2 2020-09-12 03:00:00,2020,256,2,13,0,-7,70.2 2020-09-12 04:00:00,2020,256,3,13,0,-5,70.2 2020-09-12 05:00:00,2020,256,4,13,0,-5,70.2 2020-09-12 06:00:00,2020,256,5,13,0,-8,70.2 2020-09-12 07:00:00,2020,256,6,20,0,-8,70.2 2020-09-12 08:00:00,2020,256,7,20,0,-7,70.2 2020-09-12 09:00:00,2020,256,8,20,0,-8,70.2 2020-09-12 10:00:00,2020,256,9,13,0,-8,70.2 2020-09-12 11:00:00,2020,256,10,13,0,-2,70.2 2020-09-12 12:00:00,2020,256,11,13,0,1,70.2 2020-09-12 13:00:00,2020,256,12,3,0,1,70.2 2020-09-12 14:00:00,2020,256,13,3,0,1,70.2 2020-09-12 15:00:00,2020,256,14,3,0,-2,70.2 2020-09-12 16:00:00,2020,256,15,7,0,-5,70.2 2020-09-12 17:00:00,2020,256,16,7,0,-4,70.2 2020-09-12 18:00:00,2020,256,17,7,0,-2,70.2 2020-09-12 19:00:00,2020,256,18,13,0,0,70.2 2020-09-12 20:00:00,2020,256,19,13,0,-1,70.2 2020-09-12 21:00:00,2020,256,20,13,0,-3,70.2 2020-09-12 22:00:00,2020,256,21,10,0,-5,70.2 2020-09-12 23:00:00,2020,256,22,10,0,-8,70.2 2020-09-13 00:00:00,2020,256,23,10,0,-9,70.2 2020-09-13 01:00:00,2020,257,0,10,0,-7,70.6 2020-09-13 02:00:00,2020,257,1,10,0,-8,70.6 2020-09-13 03:00:00,2020,257,2,10,0,-11,70.6 2020-09-13 04:00:00,2020,257,3,13,0,-11,70.6 2020-09-13 05:00:00,2020,257,4,13,0,-6,70.6 2020-09-13 06:00:00,2020,257,5,13,0,-5,70.6 2020-09-13 07:00:00,2020,257,6,13,0,-6,70.6 2020-09-13 08:00:00,2020,257,7,13,0,-9,70.6 2020-09-13 09:00:00,2020,257,8,13,0,-8,70.6 2020-09-13 10:00:00,2020,257,9,3,0,-6,70.6 2020-09-13 11:00:00,2020,257,10,3,0,-3,70.6 2020-09-13 12:00:00,2020,257,11,3,0,-2,70.6 2020-09-13 13:00:00,2020,257,12,13,0,0,70.6 2020-09-13 14:00:00,2020,257,13,13,0,2,70.6 2020-09-13 15:00:00,2020,257,14,13,0,0,70.6 2020-09-13 16:00:00,2020,257,15,17,0,1,70.6 2020-09-13 17:00:00,2020,257,16,17,0,1,70.6 2020-09-13 18:00:00,2020,257,17,17,0,9,70.6 2020-09-13 19:00:00,2020,257,18,13,0,8,70.6 2020-09-13 
20:00:00,2020,257,19,13,0,8,70.6 2020-09-13 21:00:00,2020,257,20,13,0,12,70.6 2020-09-13 22:00:00,2020,257,21,30,0,8,70.6 2020-09-13 23:00:00,2020,257,22,30,0,8,70.6 2020-09-14 00:00:00,2020,257,23,30,0,9,70.6 2020-09-14 01:00:00,2020,258,0,33,2,4,69.7 2020-09-14 02:00:00,2020,258,1,33,2,-10,69.7 2020-09-14 03:00:00,2020,258,2,33,2,-19,69.7 2020-09-14 04:00:00,2020,258,3,33,2,-24,69.7 2020-09-14 05:00:00,2020,258,4,33,2,-35,69.7 2020-09-14 06:00:00,2020,258,5,33,2,-34,69.7 2020-09-14 07:00:00,2020,258,6,23,2,-29,69.7 2020-09-14 08:00:00,2020,258,7,23,2,-26,69.7 2020-09-14 09:00:00,2020,258,8,23,2,-29,69.7 2020-09-14 10:00:00,2020,258,9,27,2,-15,69.7 2020-09-14 11:00:00,2020,258,10,27,2,-10,69.7 2020-09-14 12:00:00,2020,258,11,27,2,-6,69.7 2020-09-14 13:00:00,2020,258,12,30,2,-12,69.7 2020-09-14 14:00:00,2020,258,13,30,2,-13,69.7 2020-09-14 15:00:00,2020,258,14,30,2,-10,69.7 2020-09-14 16:00:00,2020,258,15,10,2,-9,69.7 2020-09-14 17:00:00,2020,258,16,10,2,-8,69.7 2020-09-14 18:00:00,2020,258,17,10,2,-6,69.7 2020-09-14 19:00:00,2020,258,18,7,2,-7,69.7 2020-09-14 20:00:00,2020,258,19,7,2,-8,69.7 2020-09-14 21:00:00,2020,258,20,7,2,-10,69.7 2020-09-14 22:00:00,2020,258,21,20,2,-12,69.7 2020-09-14 23:00:00,2020,258,22,20,2,-14,69.7 2020-09-15 00:00:00,2020,258,23,20,2,-17,69.7 2020-09-15 01:00:00,2020,259,0,23,0,-17,69.6 2020-09-15 02:00:00,2020,259,1,23,0,-18,69.6 2020-09-15 03:00:00,2020,259,2,23,0,-16,69.6 2020-09-15 04:00:00,2020,259,3,20,0,-17,69.6 2020-09-17 00:00:00,2020,260,23,3,0,-15,70.2 | The following code is an adaptation of the excellent answer by @Muhammed Yunus. This updated version organizes the management of space weather data into a Class called SpaceWeather. A more detailed discussion of the changes is at my blog Visualizing Space Weather Data: From Procedural to Object-Oriented Approach. Key enhancements include: Refactoring: The SpaceWeather class bundles related data and methods. The load_and_clean_data method loads and cleans data, and plot_data manages plotting. Modularity: Separate methods are for different plot types (imshow, pcolormesh, contourf, plot_surface). Type Hints: All function parameters and return types employ type hints. Efficiency: plot_type == 'plot_surface' in this answer resulted in an unused figure, which is avoided in plot_surface. Code Comments and Docstrings: Functionality is described by detailed comments and docstrings. PEP8 Compliance: The code follows the PEP8 style guide. Tested with: python version: 3.12.0 pandas version: 2.2.1 numpy version: 1.26.4 matplotlib version: 3.8.1 import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates from typing import Dict, Union import matplotlib.figure import matplotlib.axes class SpaceWeather: def __init__(self, file_path: str): """ Initialize the SpaceWeather class. Parameters: file_path (str): The path to the CSV file containing the data. """ self.file_path = file_path self.df = None self.img_data = None self.labels = None self.dates = None def load_and_clean_data(self): """ Load and clean the space weather data. 
The code for which is from https://stackoverflow.com/a/78294905/7758804 """ # Load the necessary columns, parse dates, skip header, and rename columns self.df = pd.read_csv( self.file_path, parse_dates=[0], skiprows=1, usecols=[0, 4, 6], names=["date", "kp", "dst"], header=0, ) # Extract day of year and hour into new columns self.df["day_of_year"] = self.df["date"].dt.day_of_year # new columns self.df["hour"] = self.df["date"].dt.hour # Add event column self.df["event"] = "" self.df.loc[(self.df.kp > 40) | (self.df.dst < -30), "event"] = "S" self.df.loc[self.df.date == pd.Timestamp("2020/09/01 01:00"), "event"] = "E" # Create image data self.img_data = self.df.pivot_table( index="hour", columns="day_of_year", values="dst" ).values # Create labels to be used as annotations self.labels = self.df.pivot( index="hour", columns="day_of_year", values="event" ).values # Create date range for x-axis self.dates = pd.date_range( start=self.df.date.min(), end=self.df.date.max() + pd.offsets.Day(), freq="D", inclusive="both", ) def plot_data(self, plot_type: str = "imshow"): """ Plot the space weather data. The code for plotting is from https://stackoverflow.com/a/78294536/7758804 Parameters: plot_type (str): The type of plot to create. Options are 'imshow', 'pcolormesh', 'contourf', 'plot_surface'. """ common_params: Dict[str, Union[int, str]] = dict( vmin=-40, vmax=20, cmap="jet_r" ) # Create meshgrid for non-imshow plots if plot_type != "imshow": x, y = np.meshgrid( mdates.date2num(self.dates), range(self.img_data.shape[0]) ) # Plot data based on a plot type if plot_type == "imshow": self.plot_imshow(common_params) elif plot_type == "pcolormesh": self.plot_pcolormesh(x, y, common_params) elif plot_type == "contourf": self.plot_contourf(x, y, common_params) elif plot_type == "plot_surface": self.plot_surface(x, y, common_params) def plot_imshow(self, common_params: Dict[str, Union[int, str]]): """ Plot the space weather data using imshow. Parameters: common_params (Dict[str, Union[int, str]]): Common parameters for the plot. """ f, ax = plt.subplots(figsize=(11, 3)) im = ax.imshow( self.img_data, interpolation="none", aspect="auto", origin="lower", extent=( mdates.date2num(self.dates[0] - pd.offsets.Hour(12)), mdates.date2num(self.dates[-1] + pd.offsets.Hour(12)), float(self.df["hour"].min()), float(self.df["hour"].max()), ), **common_params, ) self.format_plot(f, im, ax, "imshow") def plot_pcolormesh(self, x: np.ndarray, y: np.ndarray, common_params: Dict[str, Union[int, str]]): """ Plot the space weather data using pcolormesh. Parameters: x (np.ndarray): The X coordinates of the meshgrid. y (np.ndarray): The Y coordinates of the meshgrid. common_params (Dict[str, Union[int, str]]): Common parameters for the plot. """ f, ax = plt.subplots(figsize=(11, 3)) im = ax.pcolormesh(x, y, self.img_data, **common_params) self.format_plot(f, im, ax, "pcolormesh") def plot_contourf(self, x: np.ndarray, y: np.ndarray, common_params: Dict[str, Union[int, str]]): """ Plot the space weather data using contourf. Parameters: x (np.ndarray): The X coordinates of the meshgrid. y (np.ndarray): The Y coordinates of the meshgrid. common_params (Dict[str, Union[int, str]]): Common parameters for the plot. """ f, ax = plt.subplots(figsize=(11, 3)) im = ax.contourf(x, y, self.img_data, levels=10, **common_params) self.format_plot(f, im, ax, "contourf") def plot_surface(self, x: np.ndarray, y: np.ndarray, common_params: Dict[str, Union[int, str]]): """ Plot the space weather data using plot_surface. 
Parameters: x (np.ndarray): The X coordinates of the meshgrid. y (np.ndarray): The Y coordinates of the meshgrid. common_params (Dict[str, Union[int, str]]): Common parameters for the plot. """ f = plt.figure(figsize=(11, 11)) ax = f.add_subplot(projection="3d", proj_type="persp", focal_length=0.2) ax.view_init(azim=79, elev=25) ax.set_box_aspect(aspect=(3, 2, 1.5), zoom=0.95) im = ax.plot_surface(x, y, self.img_data, **common_params) ax.contourf( x, y, self.img_data, levels=10, zdir="z", offset=-35, alpha=0.3, **common_params, ) ax.contour( x, y, self.img_data, levels=8, zdir="z", offset=24, alpha=0.5, linewidths=3, **common_params, ) ax.set_zlabel("Dst") ax.invert_xaxis() # Orders the dates from left to right ax.invert_yaxis() # Orders the hours from front to back self.format_plot(f, im, ax, "plot_surface") def format_plot(self, f: matplotlib.figure.Figure, im: matplotlib.image.AxesImage, ax: matplotlib.axes.Axes, plot_type: str): """ Format the plot. Parameters: f (matplotlib.figure.Figure): The figure. im (matplotlib.image.AxesImage): The image. ax (matplotlib.axes.Axes): The axes. plot_type (str): The type of plot. """ # Add labels for (row, col), label in np.ndenumerate(self.labels): if plot_type == "plot_surface": break # skip labels on 3d plot for simplicity if type(label) is not str: continue ax.text( self.dates[col] - pd.offsets.Hour(6), row, label, fontsize=9, fontweight="bold", ) # Format x-axis with dates ax.set_xticks(self.dates) ax.xaxis.set_major_formatter(mdates.DateFormatter(fmt="%b %d")) ax.xaxis.set_major_locator(mdates.DayLocator(interval=5)) # Tick every 5 days ax.tick_params(axis="x", rotation=90) # Format y axis ax.set_yticks([0, 6, 12, 18, 23]) ax.set_ylabel("UT") # Add colorbar aspect, fraction = (10, 0.15) if plot_type != "plot_surface" else (5, 0.05) f.colorbar(im, aspect=aspect, fraction=fraction, pad=0.01, label="nT") # Add title ax.set_title(f"Dst\n(plot type: {plot_type})", fontweight="bold") # Adjust layout plt.tight_layout() # Save the plot plt.savefig(f"{plot_type}_plot.png", dpi=300) # Show the plot plt.show() if __name__ == "__main__": sw = SpaceWeather("spaceWeather.csv") sw.load_and_clean_data() for plot_type in ["imshow", "pcolormesh", "contourf", "plot_surface"]: print(f"Plotting with plot type: {plot_type}") sw.plot_data(plot_type) | 2 | 2 |
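Stripped of the class structure, the core of both answers is a pivot from the long hourly table to an hour-by-day grid that pcolormesh can draw. A minimal sketch with synthetic Kp-like values standing in for the real file:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
idx = pd.date_range("2020-08-02", "2020-09-16 23:00", freq="h")
df = pd.DataFrame({"kp": rng.integers(0, 60, len(idx))}, index=idx)

# Rows = hour of day, columns = date, values = index of interest
grid = df.pivot_table(index=df.index.hour, columns=df.index.date, values="kp")

fig, ax = plt.subplots(figsize=(10, 3))
mesh = ax.pcolormesh(pd.to_datetime(grid.columns), grid.index, grid.values, cmap="jet")
fig.colorbar(mesh, ax=ax, label="Kp")
ax.set_xlabel("Date")
ax.set_ylabel("Hour (UT)")
plt.tight_layout()
plt.show()
```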
78,282,930 | 2024-4-6 | https://stackoverflow.com/questions/78282930/why-are-there-double-parentheses-around-my-python-virtual-environment-in-visual | After updating Visual Studio Code to version 1.88.0, I opened one of my Python projects and noticed that there are double parentheses in my virtual environment: ((env) ). I'm using the Python extension v2024.4.0 Previously, in the same and all other projects, I had only one pair of parentheses like (env). I have checked, but I didn't find any information about it. I have read that (venv1) (venv2) indicates a double virtual environment, but I don't know if this is the case. I have tried deleting the env (I have the requirement.txt) or closing/reopening VSCode, but the problem persists. Any suggestions on how to fix it? I have checked the files: .bashrc, .zshrc, .bash_profile, but everything seems fine. Also, by starting a new project from scratch, the problem persists. | It's an issue with the Python extension v2024.4.0. Reverting to the previous version, v2024.2.1, fixed this issue for me. | 5 | 2 |
78,284,486 | 2024-4-6 | https://stackoverflow.com/questions/78284486/pil-creates-gifs-with-less-images-than-the-input-despite-save-all-true | I was trying to make plots using matplotlib into GIFs for further analysis, when during the analysis I noticed that the output of the analysis consisted of less images than expected. I created and saved 3956 plots with axes turned off (this would make my analysis simpler) then proceeded to create GIFs in the easiest way I found, using PIL, I opened the images to a list, then sorted the list into 2 further lists of 1978 images, then used im.save with save_all=True and append_images=im[1:]. The code ran without an issue and the GIFs opened normally. It was during the further analysis I noticed something was up and I confirmed it later using GIMP that the two output GIFs had 1798 and 1784 images only, instead of the 1978 I used to create them. The opening of the images shouldn't be the problem as the opened image lists both have lengths of 1978, the problem has to happen with the code: imS[0].save( "filename.gif", save_all=True, append_images=imS[1:], duration=250, loop=0) I tried using multiple image formats, tried changing the duration, tried changing the "interlace" and "optimize" options in im.save(), deleting and reinstalling the PIL module, none of them changed the result. I thought that there might be an issue with consecutive images being empty (Each image is supposed to show data in a unit time and there are intervals with no data so that's where the empty images come from) and some compression is deleting the multiple empty images but this clearly isn't the case after eyeballing the GIFs. That's where I'm stuck | I do not have enough reputation to make this a comment, so I respond without solving the issue completely, sorry for that. I was running into the same issue until now. Does your list of frames contain duplicates or very similar frames? In this git issue it is outlined that it is intended behaviour from pillow to remove these duplicates. Here they explicitly state: Removing duplicate consecutive frames is intended to be a feature, reducing file size without affecting the visual output. Currently, I am also stuck with this and cannot find a solution as the workarounds from the git issue do not work anymore. My workaround is to create a list of pngs in python and then compose them using ImageMagick. | 2 | 1 |
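The ImageMagick route mentioned at the end of the answer can be driven from the same script. A sketch, assuming the ImageMagick CLI (`magick`, or `convert` on older installs) is on PATH and that `imS` is the list of PIL images from the question; `-delay` is in ticks of 1/100 s, so 25 corresponds to the 250 ms used above:

```python
import subprocess
from pathlib import Path

frames_dir = Path("frames")
frames_dir.mkdir(exist_ok=True)

# Write every frame out individually so nothing can be dropped as a "duplicate"
for i, im in enumerate(imS):
    im.save(frames_dir / f"frame_{i:04d}.png")

# ImageMagick expands the wildcard itself, which avoids very long command lines
subprocess.run(
    ["magick", "-delay", "25", "-loop", "0",
     str(frames_dir / "frame_*.png"), "out.gif"],
    check=True,
)
```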
78,292,857 | 2024-4-8 | https://stackoverflow.com/questions/78292857/gunicorn-workers-on-google-app-engine-randomly-sending-sigkill-leading-to-timeou | This happens randomly, sometimes when a new instance is spun up (example below), sometimes when an instance is already been running for some time. This error cycle of "Boot worker - SIGKILL - TIMEOUT" can last anywhere from 10s to more than an hour. This can also happen to any endpoint in my application, it has happened to POST requests, GET requests on different endpoints. Initially I thought that it was due to some bug in my code or some malformed data in the POST request but after investigation I realized that sending the exact same POST/GET request when the instance is not stuck in this error loop works perfectly fine. Below are the logs from an example where a POST request that led to a new instance being spun up was stuck in this loop for over an hour and suddenly resolves itself and handles the POST request normally with status 200. DEFAULT 2024-04-07T07:56:06.463491Z [2024-04-07 07:56:06 +0000] [11] [INFO] Starting gunicorn 21.2.0 DEFAULT 2024-04-07T07:56:06.464623Z [2024-04-07 07:56:06 +0000] [11] [DEBUG] Arbiter booted DEFAULT 2024-04-07T07:56:06.464637Z [2024-04-07 07:56:06 +0000] [11] [INFO] Listening at: http://0.0.0.0:8081 (11) DEFAULT 2024-04-07T07:56:06.464712Z [2024-04-07 07:56:06 +0000] [11] [INFO] Using worker: gthread INFO 2024-04-07T07:56:06.466397Z [pid1-nginx] Successfully connected to 127.0.0.1:8081 after 3.361200658s [session:RB4VHXB] INFO 2024-04-07T07:56:06.466735Z [pid1-nginx] Successfully connected to localhost:8081 after 3.361617817s [session:RB4VHXB] INFO 2024-04-07T07:56:06.469263Z [pid1-nginx] Creating config at /tmp/nginxconf-644813283/nginx.conf [session:RB4VHXB] INFO 2024-04-07T07:56:06.472325Z [pid1-nginx] Starting nginx (pid 17): /usr/sbin/nginx -c /tmp/nginxconf-644813283/nginx.conf [session:RB4VHXB] DEFAULT 2024-04-07T07:56:06.544837Z [2024-04-07 07:56:06 +0000] [16] [INFO] Booting worker with pid: 16 DEFAULT 2024-04-07T07:56:06.576102Z [2024-04-07 07:56:06 +0000] [16] [DEBUG] Closing connection. DEFAULT 2024-04-07T07:56:06.577868Z [2024-04-07 07:56:06 +0000] [16] [DEBUG] Closing connection. DEFAULT 2024-04-07T07:56:06.579116Z [2024-04-07 07:56:06 +0000] [16] [DEBUG] GET /_ah/start DEFAULT 2024-04-07T07:56:06.618933Z [2024-04-07 07:56:06 +0000] [19] [INFO] Booting worker with pid: 19 DEFAULT 2024-04-07T07:56:06.666217Z [2024-04-07 07:56:06 +0000] [11] [DEBUG] 2 workers DEFAULT 2024-04-07T07:56:06.739491Z [2024-04-07 07:56:06 +0000] [16] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T07:57:07.717148Z [2024-04-07 07:57:07 +0000] [11] [CRITICAL] WORKER TIMEOUT (pid:16) DEFAULT 2024-04-07T07:57:08.720797Z [2024-04-07 07:57:08 +0000] [11] [ERROR] Worker (pid:16) was sent SIGKILL! Perhaps out of memory? 
DEFAULT 2024-04-07T07:57:08.720910Z 2024/04/07 07:57:08 [error] 18#18: *6 upstream prematurely closed connection while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /processNewOrder HTTP/1.1", upstream: "http://127.0.0.1:8081/processNewOrder", host: "redacted.et.r.appspot.com" DEFAULT 2024-04-07T07:57:08.824712Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 61.15 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T07:57:08.898539Z [2024-04-07 07:57:08 +0000] [19] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T07:57:08.990455Z [2024-04-07 07:57:08 +0000] [27] [INFO] Booting worker with pid: 27 DEFAULT 2024-04-07T07:58:08.968963Z [2024-04-07 07:58:08 +0000] [11] [CRITICAL] WORKER TIMEOUT (pid:19) DEFAULT 2024-04-07T07:58:09.973588Z 2024/04/07 07:58:09 [error] 18#18: *7 upstream prematurely closed connection while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /processNewOrder HTTP/1.1", upstream: "http://127.0.0.1:8081/processNewOrder", host: "redacted.et.r.appspot.com" DEFAULT 2024-04-07T07:58:09.973611Z [2024-04-07 07:58:09 +0000] [11] [ERROR] Worker (pid:19) was sent SIGKILL! Perhaps out of memory? DEFAULT 2024-04-07T07:58:10.106688Z [2024-04-07 07:58:10 +0000] [33] [INFO] Booting worker with pid: 33 DEFAULT 2024-04-07T07:58:10.177760Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 61.976 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T07:58:10.196059Z [2024-04-07 07:58:10 +0000] [33] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T07:59:11.149239Z [2024-04-07 07:59:11 +0000] [11] [CRITICAL] WORKER TIMEOUT (pid:33) DEFAULT 2024-04-07T07:59:12.153215Z 2024/04/07 07:59:12 [error] 18#18: *9 upstream prematurely closed connection while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /processNewOrder HTTP/1.1", upstream: "http://127.0.0.1:8081/processNewOrder", host: "redacted.et.r.appspot.com" DEFAULT 2024-04-07T07:59:12.153281Z [2024-04-07 07:59:12 +0000] [11] [ERROR] Worker (pid:33) was sent SIGKILL! Perhaps out of memory? DEFAULT 2024-04-07T07:59:12.307443Z [2024-04-07 07:59:12 +0000] [39] [INFO] Booting worker with pid: 39 DEFAULT 2024-04-07T08:00:45.632391Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 61.725 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder ... Repeat of same, POST request recieved, worker boot, worker timeout then worker sent SIGKILL for the next 1 hour. ... 
DEFAULT 2024-04-07T09:01:47.742589Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 61.369 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T09:01:47.916781Z [2024-04-07 09:01:47 +0000] [387] [INFO] Booting worker with pid: 387 DEFAULT 2024-04-07T09:01:48.003333Z [2024-04-07 09:01:48 +0000] [387] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T09:02:13.317927Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 86.175 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T09:02:36.933886Z [2024-04-07 09:02:36 +0000] [11] [CRITICAL] WORKER TIMEOUT (pid:381) DEFAULT 2024-04-07T09:02:37.938484Z [2024-04-07 09:02:37 +0000] [11] [ERROR] Worker (pid:381) was sent SIGKILL! Perhaps out of memory? DEFAULT 2024-04-07T09:02:37.938619Z 2024/04/07 09:02:37 [error] 18#18: *140 upstream prematurely closed connection while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /processNewOrder HTTP/1.1", upstream: "http://127.0.0.1:8081/processNewOrder", host: "redacted.et.r.appspot.com" DEFAULT 2024-04-07T09:02:38.097720Z [2024-04-07 09:02:38 +0000] [393] [INFO] Booting worker with pid: 393 DEFAULT 2024-04-07T09:02:38.142051Z [protoPayload.method: POST] [protoPayload.status: 502] [protoPayload.responseSize: 272 B] [protoPayload.latency: 61.351 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T09:02:38.189106Z [2024-04-07 09:02:38 +0000] [393] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T09:02:38.196806Z [2024-04-07 09:02:38 +0000] [393] [DEBUG] POST /processNewOrder DEFAULT 2024-04-07T09:02:48.105780Z [2024-04-07 09:02:48 +0000] [11] [CRITICAL] WORKER TIMEOUT (pid:387) DEFAULT 2024-04-07T09:02:49.112205Z 2024/04/07 09:02:49 [error] 18#18: *142 upstream prematurely closed connection while reading response header from upstream, client: 169.254.1.1, server: _, request: "POST /processNewOrder HTTP/1.1", upstream: "http://127.0.0.1:8081/processNewOrder", host: "redacted.et.r.appspot.com" DEFAULT 2024-04-07T09:02:49.112209Z [2024-04-07 09:02:49 +0000] [11] [ERROR] Worker (pid:387) was sent SIGKILL! Perhaps out of memory? Finally it processes the POST request correctly with status 200. DEFAULT 2024-04-07T09:02:49.114051Z [protoPayload.method: POST] [protoPayload.status: 200] [protoPayload.responseSize: 135 B] [protoPayload.latency: 81.691 s] [protoPayload.userAgent: AppEngine-Google; (+http://code.google.com/appengine)] /processNewOrder DEFAULT 2024-04-07T09:02:49.367448Z [2024-04-07 09:02:49 +0000] [401] [INFO] Booting worker with pid: 401 DEFAULT 2024-04-07T09:02:49.464783Z [2024-04-07 09:02:49 +0000] [401] [DEBUG] POST /processNewOrder I did also check the memory usage but there does not seem to be an issue. I am using a B2 instance class which should have 768 MB of RAM and during the time period of the incident RAM usage was low at roughly ~210 MB. This issue also never ever occurs when I am running the Gunicorn server via App Engine locally. I have tried to increase the instance size to B4 (1536 MB RAM) thinking that maybe the Memory Usage is not being reported correctly, but the same problem occurs. 
I have tried to increase the timeout parameter in my Gunicorn config file to 20 minutes but the same problem still occurs, just that now each worker will take 20 minutes before timing out. I also tried to set preload_app=True in my Gunicorn config file as I thought maybe the issue is caused by the worker being spun up before the application code is ready, but the same problem still occurs. Given how randomly it occurs, I am not able to reliably reproduce the issue, and hence cannot find a solution to this problem. One possibility is that there is some kind of outage on Google Cloud's end, but before we go there, I wanted to see if anyone else has faced this issue before and can provide any advice on a potential solution. | For anyone else who faces this issue in the future, I eventually moved to running my application in a Docker container on Google Cloud Run, with the preload_app=True setting in my gunicorn config file, and have completely eliminated the issue. For some reason, any other configuration causes the TIMEOUT issue. For example, running on Cloud Run without a Dockerfile (meaning Google decides what is necessary in your container) will cause the issue, with and without the preload_app=True setting. My best guess is that Google Cloud has some configuration in the settings used for containerization (which is likely used for App Engine as well) that is somehow incompatible with my code. Moving to Cloud Run also helped to reduce my cost drastically, so that is helpful as well. | 2 | 0 |
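For reference, the combination described above can be written down as an explicit Gunicorn config. This is a sketch only: the thread count, the port handling, and the module path mentioned afterwards are assumptions about the project, not details from the post.

```python
# gunicorn.conf.py
import os

bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"  # Cloud Run injects PORT
worker_class = "gthread"
workers = 1
threads = 8
timeout = 0          # disable Gunicorn's worker timeout; Cloud Run enforces its own
preload_app = True   # import the app in the master process before forking workers
```

The container would then start with something like `exec gunicorn -c gunicorn.conf.py main:app` in its Dockerfile CMD, where `main:app` is a placeholder for the actual WSGI entry point.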
78,295,801 | 2024-4-9 | https://stackoverflow.com/questions/78295801/numpy-convolve-with-valid | I am working on convolving chunks of a signal with a moving average type smoothing operation but having issues with the padding errors which affects downstream calculations. If plotted this is the issue that is caused on the chunk boundaires. To fix this I am adding N samples from the previous chunk to the beginning of the array. The question is does the convolve operation below remove samples from the beginning or end of the original array? arr = np.array([0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 1.0, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, ]) print(f"length input array {len(arr)}") result = np.convolve(arr, [1]*3, 'valid')/3 print(f"length result array {len(result)}") print(result) plt.plot(result) I setup the above example to try to confirm adding samples to the beging of the array is the correct place but it did not help. The input length is 13, the output length is 11. With overlap the convolve looks like: result = np.convolve(np.concatenate(previous_samples[-2:], arr), [1]*3, 'valid')/3 | I assume (1) the reader has a basic understanding of what a convolution is and (2) a kernel with an odd number > 1 of elements is used. Short answer: Use np.convolve(β¦, mode="valid"). Before convolving, pad your chunks β¦ at their start with the (len(kernel) - 1) / 2 last elements from the preceding chunk, β¦ at their end with the (len(kernel) - 1) / 2 first elements from the following chunk, where kernel refers to your sliding kernel ([1]*3 in your question). Long answer: As convolution can be thought of as an operation with a sliding kernel, the question is, as you realized, what happens at the boundaries. The numpy.convolve() function has 3 modes to handle the boundaries, as explained there: mode="valid" ensures that the sliding kernel never leaves the signal. This implies, however, that the result will have fewer elements than the signal; namely len(kernel) - 1 fewer elements. So, in case you have a signal of 100 elements and a kernel of 5 elements, your result will hold 100-(5-1)=96 elements. The question of where the operation "removes" the missing elements might be a bit misleading; however, one way to think of the result is having its values been written at the centers of the sliding kernel (which makes sense for a symmetric kernel of an odd number of elements, as is the case in your question), so with this interpretation one would have a symmetric removal of elements: (len(kernel) - 1) / 2 elements at the start, (len(kernel) - 1) / 2 elements at the end of the signal. We will see that this is a meaningful interpretation in the code example below. mode="same" ensures that the resulting signal has the same number of values as the input signal. This implies, however, that we have to "make up" values to counter the effect of having fewer elements that we saw with mode="valid". As a consequence, in this mode, the signal is padded with zeros β (len(kernel) - 1) / 2 both at the beginning and at the end β before the convolution operation. mode="full", finally, ensures that all potential combinations of overlap between the signal and the sliding kernel are produced. This implies that (a) we have to "make up" even more values and (b) that the result will be bigger than the input signal. The "made up" values are again zeros here, and the result will hold len(signal) + len(kernel) - 1 elements, thus 100+5-1=104 elements for the previous example. 
If you process your signal in chunks, mode="valid" is the way to go: You can ensure that your resulting convolved chunks have the same lengths as the input chunks without the need to "make up" values (i.e. without the need for zero-padding), by padding with the trailing values from the preceding chunk and the leading values from the following chunk. This is true for all but the first and last chunk, in which cases you have to decide yourself how the boundary situation should be handled. In all other cases, pad with half of one less than the kernel length from the preceding and following chunks' values. The code below demonstrates that the described padding of chunks produces the exact same results as if not chunking the signal at all. Note that I shifted the resulting convolved signals in the plots by (len(kernel) - 1) / 2 values to account for the described missing values with mode="valid" at the very beginning of the signal. import matplotlib.pyplot as plt import numpy as np len_signal, len_kernel = 100, 5 num_chunks = 10 rand = np.random.default_rng(seed=42) signal = np.cumsum(rand.normal(size=len_signal)) kernel = np.ones(len_kernel) / len_kernel # Produce convolution result without chunking for reference signal_convolved = np.convolve(signal, kernel, mode="valid") # Split into chunks, then pad them: (1) start with `(len_kernel - 1) / 2` # last values from preceding chunk (except first chunk), (2) end with # `(len_kernel - 1) / 2` first values from following chunk (except last chunk) len_arm = (len_kernel - 1) // 2 # Length of a "kernel arm" chunks_convolved = [] for i, chunk in enumerate(chunks := np.split(signal, num_chunks)): p1 = [] if i == 0 else chunks[i - 1][-len_arm:] # Start padding p2 = [] if i == len(chunks) - 1 else chunks[i + 1][:len_arm] # End padding chunk_conv = np.convolve(np.r_[p1, chunk, p2], kernel, mode="valid") chunks_convolved = np.r_[chunks_convolved, chunk_conv] # Append assert np.allclose(signal_convolved, chunks_convolved) # Check: same result? # Show: convolved w/o chunking (left) vs chunked, padded, and convolved (right) x_c = np.arange(len(signal_convolved)) + len_arm # Right-shift conv. result plt.subplot(121) plt.plot(signal, "--"); plt.plot(x_c, signal_convolved); plt.title("no chunks") plt.subplot(122) plt.plot(signal, "--"); plt.plot(x_c, chunks_convolved); plt.title("chunks") plt.show() As you can see (and as we checked with assert), both results are the same: As a final side note: scipy's convolve1d() provides more options other than zero padding for handling the remaining boundary situation at the start and end of your total signal, and thus might be an alternative, depending on the use case. | 2 | 4 |
78,287,158 | 2024-4-7 | https://stackoverflow.com/questions/78287158/modulenotfounderror-no-module-named-scipy-special-cdflib-with-scipy-1-13-0 | Platform: Windows Server 2019, Python: 3.12.2, SciPy: 1.13.0. When I upgraded SciPy from 1.12 to 1.13 I got "No module named scipy.special._cdflib". This doesn't happen when I use 1.12.0.
Traceback (most recent call last):
  File "src\predict.py", line 14, in <module>
    from src.utils.calc_utils import *
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "src\utils\calc_utils.py", line 1, in <module>
    from scipy.interpolate import interp1d
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "scipy\interpolate\__init__.py", line 167, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "scipy\interpolate\_interpolate.py", line 10, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "scipy\special\__init__.py", line 777, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "scipy\\special\\_ufuncs.pyx", line 1, in init scipy.special._ufuncs
ModuleNotFoundError: No module named 'scipy.special._cdflib'
How can I solve this problem? Why does it occur? | Apparently this is a known issue in PyInstaller; check which version you have. The changelog states: "Update scipy.special._ufuncs hook for compatibility with SciPy 1.13.0 (add scipy.special._cdflib to hidden imports). (#8394)" So using version 6.6.0 of PyInstaller might work. | 5 | 3 |
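If upgrading PyInstaller is not possible in your environment, a possible workaround (an assumption based on the changelog note above, not verified against this exact setup) is to declare the missing module as a hidden import yourself:

pyinstaller --hidden-import scipy.special._cdflib src\predict.py

or, equivalently, add hiddenimports=["scipy.special._cdflib"] to the Analysis(...) call in your .spec file.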
78,280,741 | 2024-4-5 | https://stackoverflow.com/questions/78280741/multiple-colors-of-matplotlib-single-xtick-label | I am fairly new to Matplotlib and appreciate your help with my question: I am looking to generate a figure that each label has 2 colors. For example, a label named 'first_color\nsecond_color', first and second color are different. If I use label.set_color(colors[0]), the whole color of label will be changed. If I use label.set_color(colors[i % 2]), the color of neighboring label will be different. I just want different color in single internal label. labels = ['1first_color\nsecond_color', '2first_color\nsecond_color', '3first_color\nsecond_color', '4first_color\nsecond_color', '5first_color\nsecond_color'] colors = [sns.color_palette('tab10')[0], sns.color_palette('tab10')[1]] plt.plot(range(len(labels)), [0, 1, 2, 3, 4]) plt.xticks(range(len(labels)), labels) for i, label in enumerate(plt.gca().get_xticklabels()): label.set_color(colors[0]) #label.set_color(colors[i % 2]) label.set_color(colors[0]) label.set_color(colors[i % 2]) I am sure there is support for such a feature in Matplotlib. I did look through the docs/forums but could not find much help. Any suggestions will be much appreciated. I want to know how to set multiple colors at single label. | If the tick labels need a different color on different lines, you can explicit place text at the tick positions. With an x-axis transform, the x-position can be given in "data coordinates" (0 for the first label, 1 for the second, etc.) and the y-position in "axes coordinates" (0 at the bottom, 1 at the top of the "ax", negative values for positions below the "ax"). You may need to change the y-positions a bit, depending on the tick lengths and fonts used. import matplotlib.pyplot as plt import seaborn as sns labels = ['1first_color\nsecond_color', '2first_color\nsecond_color', '3first_color\nsecond_color', '4first_color\nsecond_color', '5first_color\nsecond_color'] colors = sns.color_palette('tab10')[:2] plt.plot(range(len(labels)), [0, 1, 2, 3, 4]) plt.xticks(range(len(labels)), []) # set tick positions without labels ax = plt.gca() # get the current subplot for i, label in enumerate(labels): ax.text(i, -0.03, label.split('\n')[0], color=colors[0], transform=ax.get_xaxis_transform(), ha='center', va='top') ax.text(i, -0.08, label.split('\n')[1], color=colors[1], transform=ax.get_xaxis_transform(), ha='center', va='top') plt.tight_layout() plt.show() | 2 | 1 |
78,300,493 | 2024-4-9 | https://stackoverflow.com/questions/78300493/osmnx-shortest-path-returns-none-for-valid-origin-and-destination-nodes | Description When calculating the shortest path between two locations with OSMnx, ox.shortest_path() failed to get any route and returns None origin_lat=42.482, origin_lon=-70.910, dest_lat=42.472, dest_lon=-70.957 The points I am querying are quite normal, i.e. they are not super far/close to each other, and there are clear road network between them. Here is the result when I use OSM webpage: Routing Results with OSM Webpage My Questions: What is the root cause for this issue? What can I do to prevent this issue? If this is unpreventable in some cases, what are the recommended backup / alternatives for me to keep my code running with reasonable routes/distances? Minimal reproducible example from shapely.geometry import Polygon import osmnx as ox region_bounds = [ [42.49897315546415, -70.97752844338558], [42.497310679689555, -70.89216747227316], [42.45989329011355, -70.90617955621047], [42.457041524105065, -70.97768950182164], ] region_bounds.append(region_bounds[-1]) region_polygon = Polygon([bounds[::-1] for bounds in region_bounds]) mode = "drive" G = ox.graph_from_polygon(polygon=region_polygon, network_type=mode) G = ox.add_edge_speeds(G) G = ox.add_edge_travel_times(G) origin_lat = 42.482 origin_lon = -70.910 dest_lat = 42.472 dest_lon = -70.957 origin_nodes = ox.distance.nearest_nodes(G, origin_lon, origin_lat) dest_nodes = ox.distance.nearest_nodes(G, dest_lon, dest_lat) routes = ox.shortest_path(G, origin_nodes, dest_nodes) print(origin_nodes, dest_nodes, routes) The outputs are 68758830 65236189 None which means that the ox.distance.nearest_nodes found valid origin and destination nodes, but the ox.shortest_path failed. Expected behavior When I changed the query a little bit to be origin_lat = 42.452 origin_lon = -70.910 dest_lat = 42.472 dest_lon = -70.957 The code above can find valid routes 68754328 65236189 [68754328, 68752028, 68757205, 68766524, 68769796, 68777219, 68761577, 68759405, 68766786, 68747897, 68755811, 68764727, 68765868, 68755029, 2041487395, 2041487385, 68758705, 68771074, 68751303, 68770735, 68747441, 65186124, 65232064, 65258971, 65258184, 65198797, 65243553, 2041154812, 65261211, 65218821, 65210373, 65208978, 65255290, 65231546, 65190866, 65226679, 65193542, 65239462, 65225225, 2041270157, 65257919, 65186045, 2041270160, 65262590, 2041270186, 65252676, 65232296, 65242158, 65261501, 65221801, 65251183, 65190759, 65218681, 65222417, 2043144587, 65250858, 2043144592, 65247406, 65224701, 65231219, 65202428, 65242218, 65235268, 65197313, 65240735, 65207550, 2045575158, 65227845, 65229809, 65190291, 65217006, 2045610191, 9966458026, 65195913, 65214016, 65241686, 65240704, 65202519, 65201239, 65242936, 65233288, 65186829, 65199167, 65239099, 65242030, 65237992, 65236189] | What is the root cause for this issue? The reason is this OSM way: https://www.openstreetmap.org/way/1243001416 It is digitized as a one-way street inbound into this community. There is no outbound way digitized. Therefore, you can solve routes inbound to the community, but you cannot solve routes outbound from it due to the one-way. What can I do to prevent this issue? Presumably, this is an incorrect digitization since there are no other routes into or out of this community. It's impossible for it to be only one-way and for there to be no egress from this community. 
If so, the best way to prevent this issue is to make a correction on OpenStreetMap itself. The alternative would be to edit your OSMnx model to add in a reciprocal one-way edge to allow bidirectional access. If this is unpreventable in some cases, what are the recommended backup / alternatives for me to keep my code running with reasonable routes/distances? It's not unpreventable. If the model exhibits an impossible situation like this, one of the solutions mentioned above will resolve it. Either fix the underlying digitization error on OpenStreetMap, or fix it in your OSMnx model (e.g., adding an edge). | 2 | 0 |
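If you choose the second option (patching the model rather than OpenStreetMap), a minimal sketch of what that could look like; the node IDs below are placeholders, not the real endpoints of way 1243001416:

u, v = 111111, 222222                               # placeholder node IDs of the one-way edge
attrs = list(G.get_edge_data(u, v).values())[0]     # copy the existing edge's attributes
G.add_edge(v, u, **{**attrs, "oneway": False})      # add the missing reciprocal edge

After that, ox.shortest_path should be able to solve routes out of the community as well.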
78,298,579 | 2024-4-9 | https://stackoverflow.com/questions/78298579/building-a-pypi-package-using-setuptools-pyproject-toml-with-a-custom-director | I have a custom directory structure which is not the traditional "src" or "flat" layout setuptools expects. This is a sample of the directory tree:
my_git_repo/
├── Dataset/
│   ├── __init__.py
│   ├── data/
│   │   └── some_csv_file.csv
│   ├── some_ds1_script.py
│   └── some_ds2_script.py
└── Model/
    ├── __init__.py
    ├── utils/
    │   ├── __init__.py
    │   ├── some_utils1_script.py
    │   └── some_utils2_script.py
    ├── some_model/
    │   ├── __init__.py
    │   └── some_model_script.py
    └── trained_models/
        ├── __init__.py
        └── model_weights.pkl
Let's say that my package name inside the pyproject.toml is "my_ai_package" and the current packages configuration is:
[tools.setuptools.packages.find]
include = ["*"]
namespaces = false
After building the package, what I currently get inside my site-packages directory is the Dataset & Model directories. What I want is a main directory called "my_ai_package" with the Dataset & Model directories inside it, so I can do "from my_ai_package import Dataset.some_ds1_script". I can't re-structure my directory tree to match the src/flat layouts, so I need some custom configuration in pyproject.toml. Thanks! | I ended up re-structuring my project in a "src" layout where "src" is replaced by my package name | 2 | 0 |
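For reference, a sketch of what the accepted approach can look like in pyproject.toml once the repository has a top-level my_ai_package/ directory (with its own __init__.py) holding Dataset and Model; the table name [tool.setuptools.packages.find] is standard setuptools configuration, but the include pattern is an assumption matching the package name above:

[tool.setuptools.packages.find]
where = ["."]
include = ["my_ai_package*"]
namespaces = false

With this, installation produces site-packages/my_ai_package/Dataset/... and imports such as from my_ai_package.Dataset import some_ds1_script work.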
78,300,949 | 2024-4-9 | https://stackoverflow.com/questions/78300949/how-to-unpack-a-string-into-multiple-columns-in-a-polars-dataframe-using-express | I have a Polars DataFrame containing a column with strings representing 'sparse' sector exposures, like this: df = pl.DataFrame( pl.Series("sector_exposure", [ "Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069", "Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400" ]) ) sector_exposure Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069 Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400 I want to "unpack" this string into new columns for each sector (e.g., Technology, Financials, Health Care) with associated values or a polars struct with sector names as fields and exposure values. I'm looking for a more efficient solution using polars expressions only, without resorting to Python loops (or python mapped functions). Can anyone provide guidance on how to accomplish this? This is what I have come up with so far - which works in producing the desired struct but is a little slow. ( df["sector_exposure"] .str .split(";") .map_elements(lambda x: {entry.split('=')[0]: float(entry.split('=')[1]) for entry in x}, skip_nulls=True, ) ) Output: shape: (2,) Series: 'sector_exposure' [struct[6]] [ {0.207,0.09,0.084,0.069,null,null} {null,0.25,0.2,null,0.15,0.4} ] Thanks! | There are potentially two ways to do it that I can think of. Regex extract df.with_columns(pl.col('sector_exposure').str.extract(x+r"=(\d+\.\d+)").cast(pl.Float64).alias(x) for x in ["Technology", "Financials", "Health Care", "Consumer Discretionary", "Consumer Staples","Industrials"]) shape: (2, 7) ββββββββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββββββββ¬βββββββββββββββββ¬βββββββββββ¬ββββββββββββββ β sector_exposur β Technology β Financials β Health Care β Consumer β Consumer β Industrials β β e β --- β --- β --- β Discretionary β Staples β --- β β --- β f64 β f64 β f64 β --- β --- β f64 β β str β β β β f64 β f64 β β ββββββββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββββββββͺβββββββββββββββββͺβββββββββββͺββββββββββββββ‘ β Technology=0.2 β 0.207 β 0.09 β 0.084 β 0.069 β null β null β β 07;Financials= β β β β β β β β 0.090;Health β β β β β β β β Care=0.084;Con β β β β β β β β sumer Discreti β β β β β β β β onary=0.069 β β β β β β β β Financials=0.2 β null β 0.25 β 0.2 β null β 0.15 β 0.4 β β 50;Health Care β β β β β β β β =0.200;Consume β β β β β β β β r Staples=0.15 β β β β β β β β 0;Industrials= β β β β β β β β 0.400 β β β β β β β ββββββββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββββββββ΄βββββββββββββββββ΄βββββββββββ΄ββββββββββββββ In this one we're counting on all the numbers being decimal (you could tweak the regex to get around this a bit) and all the sectors being prespecified in the generator within with_columns Split and pivot ( df .with_columns(str_split=pl.col('sector_exposure').str.split(';')) .explode('str_split') .with_columns( pl.col('str_split') .str.split('=') .list.to_struct(fields=['sector','value']) ) .unnest('str_split') .pivot(values='value',index='sector_exposure',columns='sector',aggregate_function='first') .with_columns(pl.exclude('sector_exposure').cast(pl.Float64)) ) shape: (2, 7) ββββββββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββββββββ¬βββββββββββββββββ¬βββββββββββ¬ββββββββββββββ β sector_exposur β Technology β Financials β Health Care β Consumer β Consumer β Industrials β β e β --- β --- β --- β Discretionary β Staples β --- β β --- β 
f64 β f64 β f64 β --- β --- β f64 β β str β β β β f64 β f64 β β ββββββββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββββββββͺβββββββββββββββββͺβββββββββββͺββββββββββββββ‘ β Technology=0.2 β 0.207 β 0.09 β 0.084 β 0.069 β null β null β β 07;Financials= β β β β β β β β 0.090;Health β β β β β β β β Care=0.084;Con β β β β β β β β sumer Discreti β β β β β β β β onary=0.069 β β β β β β β β Financials=0.2 β null β 0.25 β 0.2 β null β 0.15 β 0.4 β β 50;Health Care β β β β β β β β =0.200;Consume β β β β β β β β r Staples=0.15 β β β β β β β β 0;Industrials= β β β β β β β β 0.400 β β β β β β β ββββββββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββββββββ΄βββββββββββββββββ΄βββββββββββ΄ββββββββββββββ In this one you do a "round" of splitting at the semi colon and then explode. Then you split again on the equal but you turn that into a struct which you then unnest. From there you pivot the sectors up to columns. If the sectors existed in the same order then you could use str.extract_groups but with varying orders I don't think it works. | 5 | 6 |
78,299,167 | 2024-4-9 | https://stackoverflow.com/questions/78299167/is-there-a-way-to-extract-text-from-python-datacompy-comparison-result | I am using datacompy to compare all columns from two dataframe. My goal is to extract the column(s) name with unmatched values. In the below example, I used inventory_id as a join column to compare df1 and df2. One column shows unmatched value, which is 'indinv_vari_ware_uid'. This is a simple example, in real work situation, it's common to see more than one column with unmatched values. Is there a way to programmatically extract these unmatched column name from the result? The end goal is to print these column names in a text file or in the log instead of having users to read the compare report (there will be hundreds of them in each production run). import datacompy compare = datacompy.Compare(df1, df2, join_columns=['inventory_id'], df1_name='df1', df2_name='df2') print(compare.report()) | You can use compare.column_stats for this: a list with dictionaries that contain the relevant information per column. Sample setup: import pandas as pd import datacompy data = {'id': [1, 2], 'col1': [1, 2]} df1 = pd.DataFrame(data) data2 = {'id': [1, 2], 'col1': ['A', 'B']} df2 = pd.DataFrame(data2) compare = datacompy.Compare(df1, df2, join_columns=['id']) Print: print(compare.report()) # ... Columns with Unequal Values or Types ------------------------------------ Column df1 dtype df2 dtype # Unequal Max Diff # Null Diff 0 col1 int64 object 2 0.0 0 Sample Rows with Unequal Values ------------------------------- id col1 (df1) col1 (df2) 0 2 2 B 1 1 1 A Access column_stats: compare.column_stats [{'column': 'id', 'match_column': '', 'match_cnt': 2, 'unequal_cnt': 0, 'dtype1': 'int64', 'dtype2': 'int64', 'all_match': True, 'max_diff': 0.0, 'null_diff': 0}, {'column': 'col1', 'match_column': 'col1_match', 'match_cnt': 0, 'unequal_cnt': 2, 'dtype1': 'int64', 'dtype2': 'object', 'all_match': False, 'max_diff': 0.0, 'null_diff': 0}] Use a list comprehension to get all column names where unequal_cnt != 0: unmatched_columns = [stat['column'] for stat in compare.column_stats if stat['unequal_cnt'] != 0] unmatched_columns # ['col1'] It might also be convenient to create a df with pd.DataFrame, and filter as desired: column_stats = pd.DataFrame(compare.column_stats) column_stats column match_column match_cnt unequal_cnt dtype1 dtype2 all_match \ 0 id 2 0 int64 int64 True 1 col1 col1_match 0 2 int64 object False max_diff null_diff 0 0.0 0 1 0.0 0 # e.g. column_stats[column_stats['unequal_cnt'].ne(0)] | 2 | 2 |
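To cover the stated end goal of writing the unmatched column names to a log or a text file instead of the full report, a small sketch building on the list comprehension above; the file names and logging configuration are arbitrary choices:

import logging

logging.basicConfig(filename="compare.log", level=logging.INFO)

unmatched_columns = [stat["column"] for stat in compare.column_stats if stat["unequal_cnt"] != 0]
if unmatched_columns:
    logging.info("Columns with unmatched values: %s", ", ".join(unmatched_columns))
    with open("unmatched_columns.txt", "w") as fh:
        fh.write("\n".join(unmatched_columns))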
78,298,555 | 2024-4-9 | https://stackoverflow.com/questions/78298555/how-to-add-columns-to-a-pandas-dataframe-containing-max-of-each-row-and-corresp | This is a revisit to the question Add columns to pandas dataframe containing max of each row, AND corresponding column name where a solution was provided using the now deprecated method ix. How can you do the same thing using iloc or loc instead? I've tried both, but I'm getting: IndexError: boolean index did not match indexed array along dimension 0; dimension is 3 but corresponding boolean dimension is 5 Here's a sample DataFrame: a b c maxval 0 1 0 0 1 1 0 0 0 0 2 0 1 0 1 3 1 0 0 1 4 3 1 0 3 And here's the desired output: a b c maxval maxcol 0 1 0 0 1 a 1 0 0 0 0 a,b,c 2 0 1 0 1 b 3 1 0 0 1 a 4 3 1 0 3 a | Here ix is used to select the columns up to c, you can do the same with loc: df['maxcol'] = (df.loc[:, :'c'].eq(df['maxval'], axis=0) .apply(lambda x: ','.join(df.columns[:3][x==x.max()]),axis=1) ) Or, since [:3] is used later with iloc: df['maxcol'] = (df.iloc[:, :3].eq(df['maxval'], axis=0) .apply(lambda x: ','.join(df.columns[:3][x==x.max()]),axis=1) ) A variant with a dot product: tmp = df.loc[:, :'c'] df['maxcol'] = (tmp.eq(df['maxval'], axis=0) @ (tmp.columns+',')).str[:-1] Or melt+groupby.agg: df['maxcol'] = (df .melt('maxval', ignore_index=False) .query('maxval == value') .groupby(level=0)['variable'].agg(','.join) ) Output: a b c maxval maxcol 0 1 0 0 1 a 1 0 0 0 0 a,b,c 2 0 1 0 1 b 3 1 0 0 1 a 4 3 1 0 3 a | 3 | 3 |
78,288,162 | 2024-4-7 | https://stackoverflow.com/questions/78288162/how-to-turn-a-dynamically-allocated-c-array-into-a-numpy-array-and-return-it-to | So I have found similar problems in threads on here, but I haven't been able to find a solution that works for me. I am building a C extension module for Python in Visual Studio 2022 with Python 3.9. The module takes numpy arrays as inputs and returns numpy arrays. Right now, I just have it read the shape of the input array and use that to decide how big of an output array to create. If the input is shape (row, col), then the output will be shape (2, row, col). I dynamically allocate a C array with 2*row*col elements. The following is all within the function static PyObject* transform(PyObject* self, PyObject* args) {} import_array(); PyObject* data_obj; PyArrayObject* data_array; if (!PyArg_ParseTuple(args, "O", &data_obj)) { return NULL; } data_array = (PyArrayObject*)PyArray_FROM_OTF(data_obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); if (data_array == NULL) { PyErr_SetString(PyExc_TypeError, "The data input must be a NumPy Array (2D or 3D)"); return NULL; } int ndim = PyArray_NDIM(data_array); npy_intp* data_shape = PyArray_SHAPE(data_array); size_t rows, columns; rows = data_shape[0]; columns = data_shape[1]; double* r_arr = (double*)malloc((rows * columns * 2) * sizeof(double)); if (r_arr == NULL) { PyErr_SetString(PyExc_ValueError, "Failed to allocate memory to arrays."); Py_DECREF(data_array); } I then loop over rows and columns and make several calculations to generate two numbers at each row and column and store them as r_arr[rr * rows + cc] and r_arr[rr * rows + cc + rows * columns] for (size_t cc = 0; cc < columns; ++cc) { // some calculation for (size_t rr = 0; rr < rows; ++rr) { // some calculation r_arr[rr * rows + cc] = result1; r_arr[rr * rows + cc + rows * columns] = result2; PySys_WriteStdout("%f, %f\n", r_arr[rr * rows + cc], r_arr[rr * rows + cc + rows * columns]); } } Finally, I want to turn r_arr into a numpy array to return to Python. I've tried many things here, but this is the current state it's in based on reading other threads here. npy_intp dims[3] = { 2, rows, columns }; PyObject* r_obj = (PyArrayObject *)PyArray_SimpleNewFromData(3, dims, NPY_DOUBLE, r_arr); if (r_obj == NULL) { PyErr_Print("Failed to create internal arrays. Likely due to data input being incorrect shape."); return NULL; } Py_DECREF(data_array); free(x); free(y); //free(r_arr); PyArray_ENABLEFLAGS(r_obj, NPY_ARRAY_OWNDATA); PySys_WriteStdout("Returning result\n"); return Py_BuildValue("O", r_obj); In Python, when I create an array with shape (5, 10), and use it as an input, I get the following result. The nested loop prints out the proper values, but the array returned seems to be the kind of result I would get if I used np.empty((2, 5, 10)). I've also tried using PyArray_SimpleNewFromData() to create a 1D numpy array with 2*rows*columns elements, and that doesn't make a difference. 
output: 0.023581, 4.986665 0.021305, 3.986672 0.018796, 2.986680 0.016282, 1.986689 0.014207, 0.986699 -0.986851, 4.986664 -0.986798, 3.986672 -0.986746, 2.986680 -0.986700, 1.986689 -0.986667, 0.986699 -1.986746, 4.986662 -1.986718, 3.986670 -1.986691, 2.986679 -1.986666, 1.986688 -1.986648, 0.986699 -2.986705, 4.986660 -2.986685, 3.986668 -2.986665, 2.986677 -2.986646, 1.986687 -2.986632, 0.986698 -3.986679, 4.986656 -3.986662, 3.986665 -3.986645, 2.986675 -3.986629, 1.986685 -3.986616, 0.986697 -4.986658, 4.986651 -4.986642, 3.986661 -4.986626, 2.986672 -4.986611, 1.986683 -4.986599, 0.986696 -5.986637, 4.986645 -5.986622, 3.986657 -5.986607, 2.986669 -5.986592, 1.986681 -5.986579, 0.986695 -6.986616, 4.986638 -6.986601, 3.986651 -6.986586, 2.986664 -6.986571, 1.986678 -6.986558, 0.986694 -7.986593, 4.986630 -7.986578, 3.986645 -7.986563, 2.986660 -7.986547, 1.986675 -7.986533, 0.986692 -8.986567, 4.986621 -8.986552, 3.986637 -8.986536, 2.986654 -8.986520, 1.986672 -8.986505, 0.986690 Returning result array([[[ 1.14460020e-311, 1.14458244e-311, 1.14458542e-311, 1.14478895e-311, 1.14478881e-311, 1.14478881e-311, 1.14478893e-311, 1.14478894e-311, 1.14458542e-311, 1.14478894e-311], [-8.98656729e+000, 1.14478893e-311, 1.14458542e-311, 1.14478894e-311, 1.14478894e-311, -8.98655189e+000, 1.14458542e-311, 1.14478895e-311, 1.14478895e-311, 1.14478894e-311], [-8.98653594e+000, 1.14478884e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311, -8.98652003e+000, 1.14478884e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311], [-8.98650487e+000, 1.14478893e-311, 1.14478895e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311, 1.14478884e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311], [ 1.14478894e-311, 1.14478893e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311, 1.14478894e-311, 1.14478895e-311, 1.14478895e-311, 1.14478895e-311]], [[ 1.14478894e-311, 1.14458542e-311, 1.14478895e-311, 1.14478895e-311, 1.14478893e-311, 1.14478895e-311, 1.14478893e-311, 1.14458542e-311, 1.14478884e-311, 1.14478893e-311], [ 4.98662116e+000, 1.14478895e-311, 1.14478893e-311, 1.14478894e-311, 1.14478895e-311, 3.98663750e+000, 1.14478884e-311, 1.14458542e-311, 1.14478893e-311, 1.14458542e-311], [ 2.98665407e+000, 1.14478895e-311, 1.14478895e-311, 1.14478895e-311, 1.14478895e-311, 1.98667152e+000, 1.14458542e-311, 1.14458542e-311, 1.14478895e-311, 1.14478895e-311], [ 9.86690488e-001, 1.14478895e-311, 1.14478895e-311, 1.14458542e-311, 1.14478884e-311, 1.14458542e-311, 1.14478894e-311, 1.14478893e-311, 1.14458542e-311, 1.14478895e-311], [ 1.14478895e-311, 1.14478884e-311, 1.14478893e-311, 1.14478893e-311, 1.14458542e-311, 1.14478895e-311, 9.38662745e-097, 2.07712408e-308, 1.14478658e-311, 1.14458243e-311]]]) | Here's my solution with a few other fixes. The above code in the OP does not get the data from the array properly (the calculations did not depend on the actual values of the elements, and only the shape). It also does not calculate the index properly. However the crux of the solution is that I am instantiating the array as a PyArrayObject with PyArray_ZEROS() and then getting the data from it to manipulate. 
PyArrayObject* data_array_obj; if (!PyArg_ParseTuple(args, "O", &data_array_obj)) { return NULL; } if (data_array_obj == NULL) { PyErr_SetString(PyExc_TypeError, "The data input must be a NumPy Array"); return NULL; } // Determine dimensionality and shape int64_t ndim = PyArray_NDIM(data_array_obj); size_t* data_shape = PyArray_SHAPE(data_array_obj); if (ndim != 2) { PyErr_SetString(PyExc_TypeError, "The input NumPy Array must be 2D"); return NULL; } size_t rows, columns; rows = data_shape[0]; columns = data_shape[1]; // Get data from input array double* data = PyArray_DATA(data_array_obj); // Instantiate output array npy_intp dims[3] = { 2, rows, columns }; PyArrayObject* r_obj_array = PyArray_ZEROS(3, dims, NPY_DOUBLE, 0); double* r_data = PyArray_DATA(r_obj_array); for (size_t cc = 0; cc < columns; ++cc) { // some calculation for (size_t rr = 0; rr < rows; ++rr) { // some calculation r_data[rr * columns + cc] = result1; r_data[rr * columns + cc + rows * columns] = result2; } } Py_DECREF(data_array); return r_obj_array; | 3 | 0 |
78,295,126 | 2024-4-8 | https://stackoverflow.com/questions/78295126/polars-cumsum-on-column-if-value-changes | I'm stuck with a cum_sum problem where I only want tot cumulatively sum unique values over a column. Here's an example of what I want to achieve: βββββββ¬ββββββ¬ββββββ β a β b β d β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺββββββͺββββββ‘ β 1 β 1 β 1 β β 1 β 2 β 2 β β 1 β 3 β 3 β β 1 β 1 β 1 β β 2 β 1 β 4 β β 2 β 2 β 4 β β 2 β 2 β 5 β β 2 β 2 β 5 β βββββββ΄ββββββ΄ββββββ a and b are my input columns where a is the group and b is the unique id within he group. I want to generate d which is a unique id across all groups. I'm not able to figure out a way to do this. Here's what I've managed - I can get the max per group by using over but then I don't know how to do the cumsum to get the unique ids. import polars as pl df = pl.DataFrame({'a': [1,1,1,1,2,2,2,2], 'b': [1,2,3,1,1,2,2,2]}) df.with_columns(c = pl.max('b').over('a')).with_columns(pl.cum_sum("c").over("c").alias("d")) Out[60]: shape: (8, 4) βββββββ¬ββββββ¬ββββββ¬ββββββ β a β b β c β d β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β βββββββͺββββββͺββββββͺββββββ‘ β 1 β 1 β 3 β 3 β β 1 β 2 β 3 β 6 β β 1 β 3 β 3 β 9 β β 1 β 1 β 3 β 12 β β 2 β 1 β 2 β 2 β β 2 β 2 β 2 β 4 β β 2 β 2 β 2 β 6 β β 2 β 2 β 2 β 8 β βββββββ΄ββββββ΄ββββββ΄ββββββ I'm sure this must be pretty simple but I can't figure this out - It seems like I need a cumsum the unique values of c and then add to b to get the unique id but maybe I need some sort of conditional sum of c only if it's values changes? It seems like I should be doing something similar to this answer (https://stackoverflow.com/a/74985568/1506763) but I'm stuck. | You might be looking for pl.Expr.rank with method="dense". df.with_columns( pl.struct("a", "b").rank("dense").alias("id") ) shape: (8, 3) βββββββ¬ββββββ¬ββββββ β a β b β id β β --- β --- β --- β β i64 β i64 β u32 β βββββββͺββββββͺββββββ‘ β 1 β 1 β 1 β β 1 β 2 β 2 β β 1 β 3 β 3 β β 1 β 1 β 1 β β 2 β 1 β 4 β β 2 β 2 β 5 β β 2 β 2 β 5 β β 2 β 2 β 5 β βββββββ΄ββββββ΄ββββββ | 3 | 3 |
78,295,116 | 2024-4-8 | https://stackoverflow.com/questions/78295116/conditional-multiplication-of-dataframes-with-nan | I have two DataFrames A = 0 1 2 0 0.5 0 0.1 0 0.2 0.2 0 0 and B = 0 1.0 1.0 NaN I need to multiply each row of A by B element-wise, but I need the computation done so that the resulting dataframe shows a NaN only if the original element of A is 0. If I do A * B.transpose() I get 0 1 2 0 0.5 NaN 0.1 0 NaN 0.2 0 NaN but I need it to be 0 1 2 0 0.5 NaN 0.1 0 0.2 0.2 0 NaN import pandas as pd A = pd.DataFrame([[0, 0.5, 0], [0.1, 0, 0.2], [0.2, 0, 0]]) B = pd.DataFrame([1, 1, np.NaN]) | IIUC you can try: out = A * B.T.values out[out.isna() & ~A.eq(0)] = A print(out) Prints: 0 1 2 0 0.0 0.5 NaN 1 0.1 0.0 0.2 2 0.2 0.0 NaN | 2 | 1 |
78,295,104 | 2024-4-8 | https://stackoverflow.com/questions/78295104/pandas-change-multiple-level-column-name-to-one-level | I have a dataframe with two-level column names as: ID Value A B ---------------- 1 6 2 5 3 4 4 3 5 2 6 1 I want to change the column head with: column_mapping = { ('ID', 'A') : 'ID:A', ('Value', 'B'): 'Value:B' } I tried rename: df.rename(columns=column_mapping, inplace=True) which does not work. Any idea how? | Try: df.columns = df.columns.map(":".join) print(df) Prints: ID:A Value:B 0 1.0 6.0 1 2.0 5.0 2 3.0 4.0 3 4.0 3.0 4 5.0 2.0 5 6.0 1.0 | 2 | 1 |
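If you would rather keep using the explicit column_mapping dictionary from the question (for example because only some columns need a custom name), a small sketch assuming the same two-level columns; iterating a MultiIndex yields the full tuples, which is exactly what the mapping uses as keys:

df.columns = [column_mapping.get(col, ":".join(col)) for col in df.columns]

Columns present in column_mapping get their mapped name; everything else falls back to the joined form shown above.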
78,294,477 | 2024-4-8 | https://stackoverflow.com/questions/78294477/create-a-conditional-cumulative-sum-in-polars | Example dataframe: testDf = pl.DataFrame({ "Date1": ["2024-04-01", "2024-04-06", "2024-04-07", "2024-04-10", "2024-04-11"], "Date2": ["2024-04-04", "2024-04-07", "2024-04-09", "2024-04-10", "2024-04-15"], "Date3": ["2024-04-07", "2024-04-08", "2024-04-10", "2024-05-15", "2024-04-21"], 'Value': [10, 15, -20, 5, 30] }).with_columns(pl.col('Date1').cast(pl.Date), pl.col('Date2').cast(pl.Date), pl.col('Date3').cast(pl.Date) ) shape: (5, 4) ββββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββ β Date1 β Date2 β Date3 β Value β β --- β --- β --- β --- β β date β date β date β i64 β ββββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββ‘ β 2024-04-01 β 2024-04-04 β 2024-04-07 β 10 β β 2024-04-06 β 2024-04-07 β 2024-04-08 β 15 β β 2024-04-07 β 2024-04-09 β 2024-04-10 β -20 β β 2024-04-10 β 2024-04-10 β 2024-05-15 β 5 β β 2024-04-11 β 2024-04-15 β 2024-04-21 β 30 β ββββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββ What I would like to do is create a dataframe in which for each 'Date1' I would have a column of the cumulative sum of 'Value' where 'Date1' >= 'Date2' and 'Date1' <= 'Date3'. So when 'Date1' ='2024-04-10' the sum should read -15, since the first 2 rows 'Date3' <= '2024-04-10' and the last row has 'Date2' = '2024-04-15' >= '2024-04-10'. I tried this: testDf.group_by(pl.col('Date1'))\ .agg(pl.col('Value')\ .filter((pl.col('Date1') >= pl.col('Date2')) & (pl.col('Date1') <= pl.col('Date3')))\ .sum()) shape: (5, 2) ββββββββββββββ¬ββββββββ β Date1 β Value β β --- β --- β β date β i64 β ββββββββββββββͺββββββββ‘ β 2024-04-11 β 0 β β 2024-04-06 β 0 β β 2024-04-07 β 0 β β 2024-04-10 β 5 β β 2024-04-01 β 0 β ββββββββββββββ΄ββββββββ But my desired result is this: shape: (5, 2) ββββββββββββββ¬ββββββ β Date1 β Sum β β --- β --- β β date β i64 β ββββββββββββββͺββββββ‘ β 2024-04-01 β 0 β β 2024-04-06 β 10 β β 2024-04-07 β 25 β β 2024-04-10 β -15 β β 2024-04-11 β 5 β ββββββββββββββ΄ββββββ | I'll need to think about it a bit more to understand whether a solution relying purely on polars' native expression API is possible. However, here is a preliminary solution relying on the discouraged pl.Expr.map_elements. ( testDf .with_columns( pl.col("Date1") .map_elements( lambda x: \ ( testDf .filter( pl.col("Date2") <= x, pl.col("Date3") >= x, ) .get_column("Value") .sum() ), return_dtype=pl.Int64 ) .alias("Sum") ) ) shape: (5, 5) ββββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββ¬ββββββ β Date1 β Date2 β Date3 β Value β Sum β β --- β --- β --- β --- β --- β β date β date β date β i64 β i64 β ββββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββͺββββββ‘ β 2024-04-01 β 2024-04-04 β 2024-04-07 β 10 β 0 β β 2024-04-06 β 2024-04-07 β 2024-04-08 β 15 β 10 β β 2024-04-07 β 2024-04-09 β 2024-04-10 β -20 β 25 β β 2024-04-10 β 2024-04-10 β 2024-05-15 β 5 β -15 β β 2024-04-11 β 2024-04-15 β 2024-04-21 β 30 β 5 β ββββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββ΄ββββββ | 3 | 4 |
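For completeness, a sketch of one way to stay inside native expressions and avoid map_elements, using a cross join (this is an alternative of my own, not part of the accepted answer, and the cross join materialises len(df)**2 rows, so it only suits reasonably small frames):

agg = (
    testDf.join(testDf, how="cross")                      # right-hand columns get a "_right" suffix
    .filter(
        (pl.col("Date2_right") <= pl.col("Date1"))
        & (pl.col("Date3_right") >= pl.col("Date1"))
    )
    .group_by("Date1")
    .agg(pl.col("Value_right").sum().alias("Sum"))
)
result = (
    testDf.select("Date1")
    .join(agg, on="Date1", how="left")
    .with_columns(pl.col("Sum").fill_null(0))             # dates with no qualifying rows get 0
    .sort("Date1")
)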
78,290,363 | 2024-4-8 | https://stackoverflow.com/questions/78290363/question-about-python-asynchronous-programming-using-asyncio-and-await-async-def | I am currently studying the differences between synchronous programming in Python and asynchronous programming using asyncio by looking at example code. While doing so, I have a question. Since my current job is a machine learning engineer, I wrote the example code using analogies to a server logic that serves machine learning. Here's the scenario I want to describe: Firstly, there exists a function called preprocessing that must always be executed first. This is because it preprocesses the input data for the two functions that follow (predict and apply_business_logic). Secondly, the output from the preprocessing function will be used as input for both the predict and apply_business_logic functions. The predict function simply generates a prediction calculated from a machine learning model, while the apply_business_logic function applies an arbitrary business logic. The key point is that these two functions are independent of each other, making asynchronous execution much more efficient than synchronous execution. Lastly, the logic involves calculating the weighted average of the outputs from the predict and apply_business_logic functions to determine the final result, which will be returned. If we assume the logic is structured into three steps as described above, executing it synchronously is very straightforward. 1. synchronous running import time def preprocess(x: list[int]): """ dummy preprocess """ return x def predict(x: list[int]) -> float: time.sleep(3) pred = sum(x) / len(x) print(">> finish `predict` function!") return pred def apply_business_logic(x: list[int]) -> int: res = sum(x) ** 2 print(">> finish `apply_business_logic` function!") return res def main(x: list[int]) -> float: # sync function because it must be done before running `predict` and `apply_business_logic` function x = preprocess(x) # asynchronously running(..?) pred_result = predict(x) logic_result = apply_business_logic(x) # final response alpha = 0.7 response = (alpha * pred_result) + ((1 - alpha) * logic_result) return response if __name__ == "__main__": x = list(range(100)) res = main(x) print('>> final result:', res) 2.asynchronous running(1) I can't ensure that below code will be asynchronously running.. import asyncio def preprocess(x: list[int]): """ dummy preprocess """ return x async def predict(x: list[int]) -> float: await asyncio.sleep(delay=3) pred = sum(x) / len(x) print(">> finish `predict` function!") return pred async def apply_business_logic(x: list[int]) -> int: res = sum(x) ** 2 print(">> finish `apply_business_logic` function!") return res async def main(x: list[int]) -> float: # sync function because it must be done before running `predict` and `apply_business_logic` function x = preprocess(x) # asynchronously running(..?) pred_result = await predict(x) logic_result = await apply_business_logic(x) # final response alpha = 0.7 response = (alpha * pred_result) + ((1 - alpha) * logic_result) return response if __name__ == "__main__": x = list(range(100)) loop = asyncio.get_event_loop() res = loop.run_until_complete(main(x)) loop.close() print('>> final result:', res) 3.asynchronous running(2) this code must be running asynchronously and I checked it. 
import asyncio def preprocess(x: list[int]): """ dummy preprocess """ return x async def predict(x: list[int]) -> float: await asyncio.sleep(delay=3) pred = sum(x) / len(x) print(">> finish `predict` function!") return pred async def apply_business_logic(x: list[int]) -> int: res = sum(x) ** 2 print(">> finish `apply_business_logic` function!") return res async def main(x: list[int]) -> float: # sync function because it must be done before running `predict` and `apply_business_logic` function x = preprocess(x) tasks = await asyncio.gather( predict(x), apply_business_logic(x) ) pred_result, logic_result = tasks[0], tasks[1] # final response alpha = 0.7 response = (alpha * pred_result) + ((1-alpha) * logic_result) return response if __name__ == "__main__": x = list(range(100)) resp = asyncio.run(main(x)) print("resp >>", resp) I don't know the difference between two methods(1.synchronous running and 2.asynchronous running(1)). Especially the second method(2.asynchronous running(1)) is first printing print(">> finish predict function!") though using asynchronous grammar(await, async def keyword..). So, the second method is not asynchronous?? | To make my comments an answer: async doesn't make your synchronous Python functions automagically asynchronous; async really is a form of cooperative multitasking. Here's a simplified example with f doing asynchronous work (in the form of an asyncio.sleep) and f2 doing synchronous work (in the form of time.sleep). import asyncio import time async def f(name: str) -> None: for x in range(3): print(f"{name} will sleep") await asyncio.sleep(0.5) print(f"{name} woke up!") async def f2(name: str) -> None: for x in range(3): print(f"{name} will sleep (2)") time.sleep(0.5) print(f"{name} woke up (2)!") async def main() -> float: await asyncio.gather( f("Hernekeitto"), f("Viina"), f("Teline"), ) print("----") await asyncio.gather( f2("Johannes"), f2("Appelsiini"), f2("Kuutio"), ) if __name__ == "__main__": asyncio.run(main()) This prints out the following: Hernekeitto will sleep Viina will sleep Teline will sleep Hernekeitto woke up! Hernekeitto will sleep Viina woke up! Viina will sleep Teline woke up! Teline will sleep Hernekeitto woke up! Hernekeitto will sleep Viina woke up! Viina will sleep Teline woke up! Teline will sleep Hernekeitto woke up! Viina woke up! Teline woke up! ---- Johannes will sleep (2) Johannes woke up (2)! Johannes will sleep (2) Johannes woke up (2)! Johannes will sleep (2) Johannes woke up (2)! Appelsiini will sleep (2) Appelsiini woke up (2)! Appelsiini will sleep (2) Appelsiini woke up (2)! Appelsiini will sleep (2) Appelsiini woke up (2)! Kuutio will sleep (2) Kuutio woke up (2)! Kuutio will sleep (2) Kuutio woke up (2)! Kuutio will sleep (2) Kuutio woke up (2)! As you can see, in the first case, the three async function invocations are able to do their work in parallel (but there's always a pattern of woke up! will sleep, because there's no await between the two prints (as they're run in a loop). In the second case, the functions are run serially, since there is no await in f2 that would allow other async coroutines to run. As mentioned in the comments, you can use loop.run_in_executor() to run a synchronous function in a concurrent.futures.Executor (commonly a ThreadPoolExecutor), so it's run in a different thread (or a process), and the result can be awaited. | 2 | 2 |
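Tying the answer back to the original machine-learning example: since the real predict and apply_business_logic are plain synchronous functions, one option (a sketch, not the only one) is to push them onto worker threads with asyncio.to_thread (Python 3.9+) and gather the results; note this only overlaps work if the heavy parts release the GIL (I/O, NumPy, many ML runtimes):

import asyncio

async def main(x: list[int]) -> float:
    x = preprocess(x)                                   # still runs first, synchronously
    pred_result, logic_result = await asyncio.gather(
        asyncio.to_thread(predict, x),                  # sync function in a worker thread
        asyncio.to_thread(apply_business_logic, x),
    )
    alpha = 0.7
    return alpha * pred_result + (1 - alpha) * logic_result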
78,290,178 | 2024-4-8 | https://stackoverflow.com/questions/78290178/how-can-i-change-a-streaks-of-numbers-according-to-the-previous-streak | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [1, 1, 1, 2, 2, 2, 2, 2, -1, -1, 2, 2, 2], } ) Expected output: Changing column a: a 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 -1 9 -1 10 2 11 2 12 2 The process is as follows: a) Finding streaks of 1s and 2s b) If a streak of 2 comes after a streak of 1, that streak of 2 changes to 1. For example: From rows 3 to 7 there is a streak of 2s. The streak before that is streak of 1s. So I want tot change rows 3 to 7 to 1 which is the expected output of this df. These are my attemps: # attempt 1 df['streak'] = df.a.ne(df.a.shift(1)).cumsum() # attempt 2 df.loc[(df.a.eq(2)) & (df.a.shift(1).eq(1)), 'a'] = 1 | You can use shift and ffill to access the previous group, then boolean indexing: # previous value s = df['a'].shift() # check if previous group is 1 m1 = s.where(df['a'].ne(s)).ffill().eq(1) # is value 2? m2 = df['a'].eq(2) df.loc[m1&m2, 'a'] = 1 Output: a 0 1 1 1 2 1 3 1 4 1 5 1 6 1 7 1 8 -1 9 -1 10 2 11 2 12 2 Intermediates: a s where ffill m1 m2 0 1 NaN NaN NaN False False 1 1 1.0 NaN NaN False False 2 1 1.0 NaN NaN False False 3 2 1.0 1.0 1.0 True True 4 2 2.0 NaN 1.0 True True 5 2 2.0 NaN 1.0 True True 6 2 2.0 NaN 1.0 True True 7 2 2.0 NaN 1.0 True True 8 -1 2.0 2.0 2.0 False False 9 -1 -1.0 NaN 2.0 False False 10 2 -1.0 -1.0 -1.0 False True 11 2 2.0 NaN -1.0 False True 12 2 2.0 NaN -1.0 False True | 2 | 2 |
78,289,206 | 2024-4-7 | https://stackoverflow.com/questions/78289206/how-do-i-get-this-star-pattern-to-work-with-only-one-for-loop-in-python | Very new to python and coding in general! Trying to get a star pattern that looks like this: * ** *** **** ***** **** *** ** * to work using a for loop, but I'm only allowed to use a single for loop, and embed an if/else statement inside to get it to work. If I could use 2 for loops I'd know how to do it but not sure why my current solution isn't working, as it only outputs the first 5 lines of stars and then stops. Any help appreciated :) My current codebase: print("Pattern: ") # Stores the * as a variable, not entirely necessary but negates having to type out "*" star = "*" for i in range(9): if i in range(5): print(star * (i + 1)) elif i in range(4, 0, -1): print(star * i) | As others have mentioned in the comments, your range() structure is flawed. You can simplify the approach by using > and < in your if else like this: print("Pattern: ") star = "*" for i in range(9): if i < 5: print(star * (i + 1)) else: print(star * (9 - i)) | 2 | 1 |
78,288,578 | 2024-4-7 | https://stackoverflow.com/questions/78288578/problem-with-pip-install-filenotfounderror-errno-2-no-such-file-or-directory | I wanted to use a contributions graph in my Django project, so after some quick research I decided to use the 'contributions-django' library. But when I tried to install it, I got stuck with this error:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
I am no pip expert, but I can see that a requirements.txt file exists in this repository (although it's empty). How do I fix this? Can I install the library manually? How? Thanks for your help in advance. At the comment request, here is the command I used: first I activated the virtual environment: my_venv\Scripts\activate then I used this command to install the library: pip install contributions-django | See https://github.com/vsoch/contributions-django/issues/3 . The bug was reported in 2020 and is still open. The last commit was also 4 years ago. The project seems to be abandoned. This command works, though: pip install git+https://github.com/vsoch/contributions-django.git You need Git installed and the git command present in the %PATH%. | 2 | 2 |
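If the same dependency needs to live in a requirements file rather than being installed ad hoc, pip's standard direct-URL syntax should work (a sketch, not verified against this particular project):

contributions-django @ git+https://github.com/vsoch/contributions-django.git

Given that the upstream project looks unmaintained, pinning to a specific commit by appending @<commit-hash> to the URL is also worth considering.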
78,288,826 | 2024-4-7 | https://stackoverflow.com/questions/78288826/dataframe-that-is-a-partial-view | Is it possible to create a dataframe where one fragment is a view of another df and the remaining fragment is not a view? I am unable to create such a df, but I want to know if it is possible. If this is possible, can you give an example of such a dataframe? | I think you can make something like this when you construct the dataframe with copy=False parameter. Consider this: arr = np.array([[1, 2, 3], [4, 5, 6]]) df1 = pd.DataFrame(arr, columns=["a1", "b1", "c1"], copy=False) df2 = pd.DataFrame(arr, columns=["a2", "b2", "c2"], copy=False) df2["d"] = 999 print(df1) print(df2) This prints: a1 b1 c1 0 1 2 3 1 4 5 6 a2 b2 c2 d 0 1 2 3 999 1 4 5 6 999 Now when you do: df1.loc[0, :] = -1 print(df1) print(df2) This prints: a1 b1 c1 0 -1 -1 -1 1 4 5 6 a2 b2 c2 d 0 -1 -1 -1 999 1 4 5 6 999 | 2 | 2 |
78,288,206 | 2024-4-7 | https://stackoverflow.com/questions/78288206/django-queryset-how-to-aggregate-repeated-elements-and-add-quantity-field-to-it | I have a feeling the solution to this is very simple, but as someone new to Django I am not able to figure it out... Given the following QuerySet:
<QuerySet [
{'id': 2, 'prodclassQuery_id': 1, 'prodDescription': 'Hofbräu Kellerbier 500 ml', 'prodPrice': Decimal('6.50')},
{'id': 1, 'prodclassQuery_id': 1, 'prodDescription': 'Tonic Water 300 ml', 'prodPrice': Decimal('4.50')},
{'id': 3, 'prodclassQuery_id': 2, 'prodDescription': 'Coxinha 6 unidades', 'prodPrice': Decimal('8.00')},
{'id': 3, 'prodclassQuery_id': 2, 'prodDescription': 'Coxinha 6 unidades', 'prodPrice': Decimal('8.00')}]>
I want to aggregate the repeated elements (based on id) and produce the following QuerySet, adding the field poQty_ to represent the quantity of repeated elements (products in my case...):
<QuerySet [
{'id': 2, 'prodclassQuery_id': 1, 'prodDescription': 'Hofbräu Kellerbier 500 ml', 'prodPrice': Decimal('6.50'), 'poQty_': 1},
{'id': 1, 'prodclassQuery_id': 1, 'prodDescription': 'Tonic Water 300 ml', 'prodPrice': Decimal('4.50'), 'poQty_': 1},
{'id': 3, 'prodclassQuery_id': 2, 'prodDescription': 'Coxinha 6 unidades', 'prodPrice': Decimal('8.00'), 'poQty_': 2}]>
What I tried so far with annotate() in views.py is not working, and the result of orders_aggr is the same as the original QuerySet:
def display_orders(request):
    orders = Order.objects.all().order_by('id', 'orderTable', 'menuQuery')
    for j in orders:
        print(j.prodQuery.values(),)  # original QuerySet
    orders_aggr = Order.objects.annotate(poQty_=Count('prodQuery__id')).order_by('id', 'orderTable', 'menuQuery')
    for j in orders_aggr:
        print(j.prodQuery.values(),)
    context = {
        'orders': orders,
        'orders_aggr': orders_aggr
    }
    return render(request, 'orders.html', context)
Would anyone please give some help? Thanks !!
Further information: models.py class Product(models.Model): prodclassQuery = models.ForeignKey(ProductClass, on_delete=models.PROTECT, verbose_name='Product Class', default=1) prodDescription = models.CharField(max_length=255, verbose_name='Product') prodPrice = models.DecimalField(max_digits=6, decimal_places=2, verbose_name='Price') class Meta: ordering = ['prodclassQuery', 'prodDescription'] def __str__(self): return self.prodDescription class Menu(models.Model): menuActive = models.BooleanField(verbose_name='Active?', default=False) menuDescription = models.CharField(max_length=255, verbose_name='Menu') prodQuery = models.ManyToManyField(Product, verbose_name='Product') class Meta: ordering = ['menuDescription',] def __str__(self): return self.menuDescription class Order(models.Model): orderUser = models.ForeignKey(User, on_delete=models.SET_NULL, null=True, blank=True) orderDtOpen = models.DateTimeField(auto_now_add=True) orderDtClose = models.DateTimeField(auto_now=True) orderOpen = models.BooleanField(default=True, verbose_name='Open?') orderTable = models.CharField(max_length=25, verbose_name='Table') menuQuery = models.ForeignKey(Menu, on_delete=models.PROTECT, verbose_name='Menu', default=1) prodQuery = models.ManyToManyField(Product, through='ProductOrder') class Meta: models.UniqueConstraint(fields=['orderTable'], condition=models.Q(orderOpen=True), name='unique_open_order_table', violation_error_message='This table has already a open order') def save_model(self, request, obj, form, change): obj.orderuser = request.user super().save_model(request, obj, form, change) class ProductOrder(models.Model): poOrder = models.ForeignKey(Order, on_delete=models.CASCADE, verbose_name='Order', default=1) poStatus = models.BooleanField(default=True, verbose_name='Order Status') prodQuery = models.ForeignKey(Product, on_delete=models.CASCADE, verbose_name='Product', default=1) | You can also use a combination of values() and annotate(), so: from django.db.models import Count def display_orders(request): orders = Order.objects.values( 'id', 'orderTable', 'menuQuery' ).annotate( poQty_=Count('id') ).order_by( 'id', 'orderTable', 'menuQuery' ) context = { 'orders': orders, } return render(request, 'orders.html', context) This code will group the orders by the specified fields (id, orderTable, menuQuery), count the distinct occurrences of each group, and annotate the count onto each result. | 2 | 1 |
78,279,823 | 2024-4-5 | https://stackoverflow.com/questions/78279823/how-exactly-the-forward-and-backward-hooks-work-in-pytorch | I am trying to understand how exactly code-wise the hooks operate in PyTorch. I have a model and I would like to set a forward and backward hook in my code. I would like to set a hook in my model after a specific layer and I guess the easiest way is to set a hook to this specific module. This introductory video warns that the backward module contains a bug, but I am not sure if that is still the case. My code looks as follows: def __init__(self, model, attention_layer_name='desired_name_module',discard_ratio=0.9): self.model = model self.discard_ratio = discard_ratio for name, module in self.model.named_modules(): if attention_layer_name in name: module.register_forward_hook(self.get_attention) module.register_backward_hook(self.get_attention_gradient) self.attentions = [] self.attention_gradients = [] def get_attention(self, module, input, output): self.attentions.append(output.cpu()) def get_attention_gradient(self, module, grad_input, grad_output): self.attention_gradients.append(grad_input[0].cpu()) def __call__(self, input_tensor, category_index): self.model.zero_grad() output = self.model(input_tensor) loss = ... loss.backward() I am puzzled to understand how code-wise the following lines work: module.register_forward_hook(self.get_attention) module.register_backward_hook(self.get_attention_gradient) I am registering a hook to my desired module, however, then, I am calling a function in each case without any input. My question is Python-wise, how does this call work exactly? How the arguments of the register_forward_hook and register_backward_hook operate when the function it's called? | How does a hook work? A hook allows you to execute a specific function - referred to as a "callback" - when a particular action has been performed. In this case, you are expecting self.get_attention to be called once the forward function of module has been accessed. To give a minimal example of how a hook would look like. I define a simple class on which you can register new callbacks through register_hook, then when the instance is called (via __call__), all hooks will be called with the provided arguments: class Obj: def __init__(self): self.hooks = [] def register_hook(self, hook): self.hooks.append(hook) def __call__(self, x, y): print('instance called') for hook in self.hooks: hook(x, y) First, implement two hooks for demonstration purposes: def foo(x, y): print(f'foo called with {x} and {y}') def bar(x, _): print(f'bar called with {x}') And initialize an instance of Obj: obj = Obj() You can register a hook and call the instance: >>> obj.register_hook(foo) >>> obj('yes', 'no') instance called foo called with yes and no You can add hooks on top and call again to compare, here both hooks are triggered: >>> obj.register_hook(bar) >>> obj('yes', 'no') instance called foo called with yes and no bar called with yes Using hooks in PyTorch There are two primary hooks in PyTorch: forward and backward. You also have pre- and post-hooks. Additionally there exists hooks on other actions such as load_state_dict... To attach a hook on the forward process of a nn.Module, you should use register_forward_hook, the argument is a callback function that expects module, args, and output. This callback will be triggered on every forward execution. 
For backward hooks, you should use register_full_backward_hook, the registered hook expects three arguments: module, grad_input, and grad_output. As of recent PyTorch versions, register_backward_hook has been deprecated and should not be used. One side effect here is that you are registering the hook with self.get_attention and self.get_attention_gradient. The function passed to the register handler is not unbound to the class instance! In other words, on execution, these will be called without the self argument like: self.get_attention(module, input, output) self.get_attention_gradient(module, grad_input, grad_output) This will fail. A simple way to fix this is to wrap the hook with a lambda when you register it: module.register_forward_hook( lambda *args, **kwargs: Routine.get_attention(self, *args, **kwargs)) All in all, your class could look like this: class Routine: def __init__(self, model, attention_layer_name): self.model = model for name, module in self.model.named_modules(): if attention_layer_name in name: module.register_forward_hook( lambda *args, **kwargs: Routine.get_attention(self, *args, **kwargs)) module.register_full_backward_hook( lambda *args, **kwargs: Routine.get_attention_gradient(self, *args, **kwargs)) self.attentions = [] self.attention_gradients = [] def get_attention(self, module, input, output): self.attentions.append(output.cpu()) def get_attention_gradient(self, module, grad_input, grad_output): self.attention_gradients.append(grad_input[0].cpu()) def __call__(self, input_tensor): self.model.zero_grad() output = self.model(input_tensor) loss = output.mean() loss.backward() When initialized with a single linear layer model: routine = Routine(nn.Sequential(nn.Linear(10,10)), attention_layer_name='0') You can call the instance, this will first trigger the forward hook with (because of self.model(input_tensor), and then the backward hook (because of loss.backward()). >>> routine(torch.rand(1,10, requires_grad=True)) Following your implementation, your forward hook is caching the output of the "attention_layer_name" layer in self.attentions. >>> routine.attentions [tensor([[-0.3137, -0.2265, -0.2197, 0.2211, -0.6700, -0.5034, -0.1878, -1.1334, 0.2025, 0.8679]], grad_fn=<...>)] Similarly for the self.attention_gradients: >>> routine.attentions_gradients [tensor([[-0.0501, 0.0393, 0.0353, -0.0257, 0.0083, 0.0426, -0.0004, -0.0095, -0.0759, -0.0213]])] It is important to note that the cached outputs and gradients will remain in self.attentions and self.attentions_gradients and get appended on every execution of Routine.__call__. | 11 | 9 |
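One practical addition to the Routine sketch above (my own suggestion, not part of the answer): both register_forward_hook and register_full_backward_hook return a removable handle, so you can keep the handles around and detach the hooks when you are done, which stops self.attentions and self.attention_gradients from growing across unrelated passes:

handle = module.register_forward_hook(
    lambda *args, **kwargs: Routine.get_attention(self, *args, **kwargs))
# ... run the forward/backward passes you care about ...
handle.remove()   # detaches the hook; later passes no longer append to self.attentions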
78,287,528 | 2024-4-7 | https://stackoverflow.com/questions/78287528/python-gspread-formatting-set-vertical-alignment-to-middle-for-range-of-cells | I'm trying to format some cells of a google sheet. Since I'm using new line characters in certain cells of the first row, I noticed vertical alignment is automatically set to bottom, whereas I would like to have a centered vertical alignment for a range of cells in this first row. I've omitted quite a few columns from the following code block, as I have over 25: import gspread from google.oauth2.service_account import Credentials scopes = ["https://www.googleapis.com/auth/spreadsheets"] creds = Credentials.from_service_account_file("credentials.json", scopes=scopes) client = gspread.authorize(creds) sheet_id = "url" workbook = client.open_by_key(sheet_id) worksheet_list = map(lambda x: x.title, workbook.worksheets()) new_worksheet_name = "template" # check if new sheet exists already if new_worksheet_name in worksheet_list: sheet = workbook.worksheet(new_worksheet_name) else: sheet = workbook.add_worksheet(new_worksheet_name, rows=91, cols=30) values = [ ["Data\nYYYY/MM/DD", "Totale\ninizio giorno", "Totale\ndopo refresh", "Set giornaliero", "Serie\ngiornaliera"], ] sheet.clear() sheet.update(values, f"A1:Z{len(values)}") sheet.format("A1:Z1", {"textFormat": {"bold": True}}) I tried (hoping it would work the same way as bold) sheet.format("A1:Z1", {"verticalAlignment": {"MIDDLE": True}}) but I get the following traceback: File "C:path", line 41, in <module> sheet.format("A1:Z1", {"verticalAlignment": {"MIDDLE": True}}) File "C:path\.venv\Lib\site-packages\gspread\worksheet.py", line 1479, in format return self.batch_format(formats) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:path\.venv\Lib\site-packages\gspread\worksheet.py", line 1430, in batch_format return self.client.batch_update(self.spreadsheet_id, body) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:path\.venv\Lib\site-packages\gspread\http_client.py", line 134, in batch_update r = self.request("post", SPREADSHEET_BATCH_UPDATE_URL % id, json=body) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:path\.venv\Lib\site-packages\gspread\http_client.py", line 123, in request raise APIError(response) gspread.exceptions.APIError: {'code': 400, 'message': "Invalid value at 'requests[0].repeat_cell.cell.user_entered_format' (vertical_alignment), Starting an object on a scalar field", 'status': 'INVALID_ARGUMENT', 'details': [{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'requests[0].repeat_cell.cell.user_entered_format', 'description': "Invalid value at 'requests[0].repeat_cell.cell.user_entered_format' (vertical_alignment), Starting an object on a scalar field"}]}]} | For vertical alignment, you don't use a boolean. 
You should use a string specifying the alignment type: import gspread from google.oauth2.service_account import Credentials scopes = ["https://www.googleapis.com/auth/spreadsheets"] creds = Credentials.from_service_account_file('your_credentials_file.json', scopes=scopes) client = gspread.authorize(creds) sheet_id = "your_sheet_id" workbook = client.open_by_key(sheet_id) new_worksheet_name = "template" worksheet_list = [ws.title for ws in workbook.worksheets()] if new_worksheet_name in worksheet_list: sheet = workbook.worksheet(new_worksheet_name) else: sheet = workbook.add_worksheet(new_worksheet_name, rows=91, cols=30) values = [["Data\nYYYY/MM.DD", "Totale\ninizio giorno", "Set giornaliero", "Serie\ngiornaliera"],] sheet.clear() sheet.update("A1:Z1", values) sheet.format("A1:Z1", {"textFormat": {"bold": True}}) sheet.format("A1:Z1", {"verticalAlignment": "MIDDLE"}) | 2 | 2 |
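As a side note, gspread's format accepts a single CellFormat-style dictionary, so both properties from the answer can be applied in one call. A small sketch, assuming the `sheet` object created above:

```python
# apply bold text and middle vertical alignment in a single format call
sheet.format("A1:Z1", {
    "textFormat": {"bold": True},
    "verticalAlignment": "MIDDLE",
})
```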
78,287,484 | 2024-4-7 | https://stackoverflow.com/questions/78287484/stacked-bar-chart-from-dataframe | Program Here's a small Python program that gets tax data via the treasury.gov API: import pandas as pd import treasury_gov_pandas # ---------------------------------------------------------------------- df = treasury_gov_pandas.update_records( url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/dts/deposits_withdrawals_operating_cash') df['record_date'] = pd.to_datetime(df['record_date']) df['transaction_today_amt'] = pd.to_numeric(df['transaction_today_amt']) tmp = df[(df['transaction_type'] == 'Deposits') & ((df['transaction_catg'].str.contains('Tax')) | (df['transaction_catg'].str.contains('FTD'))) ] The program is using the following library to download the data: https://github.com/dharmatech/treasury-gov-pandas.py Dataframe Here's what the resulting data looks like: >>> tmp.tail(20).drop(columns=['table_nbr', 'table_nm', 'src_line_nbr', 'record_fiscal_year', 'record_fiscal_quarter', 'record_calendar_year', 'record_calendar_quarter', 'record_calendar_month', 'record_calendar_day', 'transaction_mtd_amt', 'transaction_fytd_amt', 'transaction_catg_desc', 'account_type', 'transaction_type']) record_date transaction_catg transaction_today_amt 371266 2024-04-03 DHS - Customs and Certain Excise Taxes 84 371288 2024-04-03 Taxes - Corporate Income 237 371289 2024-04-03 Taxes - Estate and Gift 66 371290 2024-04-03 Taxes - Federal Unemployment (FUTA) 10 371291 2024-04-03 Taxes - IRS Collected Estate, Gift, misc 23 371292 2024-04-03 Taxes - Miscellaneous Excise 41 371293 2024-04-03 Taxes - Non Withheld Ind/SECA Electronic 1786 371294 2024-04-03 Taxes - Non Withheld Ind/SECA Other 2315 371295 2024-04-03 Taxes - Railroad Retirement 3 371296 2024-04-03 Taxes - Withheld Individual/FICA 12499 371447 2024-04-04 DHS - Customs and Certain Excise Taxes 82 371469 2024-04-04 Taxes - Corporate Income 288 371470 2024-04-04 Taxes - Estate and Gift 59 371471 2024-04-04 Taxes - Federal Unemployment (FUTA) 8 371472 2024-04-04 Taxes - IRS Collected Estate, Gift, misc 127 371473 2024-04-04 Taxes - Miscellaneous Excise 17 371474 2024-04-04 Taxes - Non Withheld Ind/SECA Electronic 1905 371475 2024-04-04 Taxes - Non Withheld Ind/SECA Other 1092 371476 2024-04-04 Taxes - Railroad Retirement 1 371477 2024-04-04 Taxes - Withheld Individual/FICA 2871 The dataframe has data that goes back to 2005: >>> tmp.drop(columns=['table_nbr', 'table_nm', 'src_line_nbr', 'record_fiscal_year', 'record_fiscal_quarter', 'record_calendar_year', 'record_calendar_quarter', 'record_calendar_month', 'record_calendar_day', 'transaction_mtd_amt', 'transaction_fytd_amt', 'transaction_catg_desc', 'account_type', 'transaction_type']) record_date transaction_catg transaction_today_amt 2 2005-10-03 Customs and Certain Excise Taxes 127 7 2005-10-03 Estate and Gift Taxes 74 10 2005-10-03 FTD's Received (Table IV) 2515 12 2005-10-03 Individual Income and Employment Taxes, Not Wi... 353 21 2005-10-03 FTD's Received (Table IV) 15708 ... ... ... ... 371473 2024-04-04 Taxes - Miscellaneous Excise 17 371474 2024-04-04 Taxes - Non Withheld Ind/SECA Electronic 1905 371475 2024-04-04 Taxes - Non Withheld Ind/SECA Other 1092 371476 2024-04-04 Taxes - Railroad Retirement 1 371477 2024-04-04 Taxes - Withheld Individual/FICA 2871 Question I'd like to plot this data as a stacked bar chart. x-axis should be 'record_date'. y-axis should be the 'transaction_today_amt'. 
The 'transaction_catg' values should be used for the stacked items. I'm open to any plotting library. I.e. matplotlib, bokeh, plotly, etc. What's a good way to implement this? | I have created a dummy df and tested that it works. This code creates a DataFrame with random transaction data grouped by date and category. It then pivots the data to display a stacked bar chart where each bar represents a date, and the stack segments represent transaction amounts for different categories. I hope that solution helps your project. The final code is shown below. import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.dates import DayLocator data = { 'record_date': pd.to_datetime(['2023-10-01', '2023-10-02', '2023-10-03','2023-10-04', '2023-10-05', '2023-10-06','2023-10-07', '2023-10-08', '2023-10-09'] * 5), 'transaction_catg': ['A', 'B', 'C', 'D', 'E'] * 9, 'transaction_today_amt': np.random.randint(100, 1000, 45) } tmp = pd.DataFrame(data) tmp_agg = tmp.groupby(['record_date', 'transaction_catg'])['transaction_today_amt'].sum().reset_index() tmp_agg['record_date'] = tmp_agg['record_date'].dt.date pivot_df = tmp_agg.pivot(index='record_date', columns='transaction_catg', values='transaction_today_amt').fillna(0) ax = pivot_df.plot(kind='bar', stacked=True, figsize=(10, 6)) ax.xaxis.set_major_locator(DayLocator(interval=3)) plt.xticks(rotation=45) plt.xlabel('Record Date') plt.ylabel('Transaction Today Amount') plt.title('Stacked Bar Chart of Transaction Amounts by Category and Date') plt.show() Output is like: | 2 | 1 |
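Applied to the question's own `tmp` frame (columns record_date, transaction_catg, transaction_today_amt), the same pivot-then-plot pattern would look roughly like this sketch (assuming `tmp` is in scope and record_date is already a datetime):

```python
import matplotlib.pyplot as plt

# aggregate duplicate (date, category) pairs, then pivot categories into columns
pivot_df = (
    tmp.groupby([tmp["record_date"].dt.date, "transaction_catg"])["transaction_today_amt"]
    .sum()
    .unstack(fill_value=0)
)

# each bar is a date; the stacked segments are the categories
ax = pivot_df.plot(kind="bar", stacked=True, figsize=(12, 6))
ax.set_xlabel("record_date")
ax.set_ylabel("transaction_today_amt")
plt.tight_layout()
plt.show()
```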
78,285,182 | 2024-4-6 | https://stackoverflow.com/questions/78285182/why-is-numpys-vectorized-evaluation-slower-when-storing-vectors-as-class-attrib | I am writing a helper class to evaluate a parametrized function over a grid. Since the grid does not change with the parameter, I chose to create it once and for all as a class attribute. However, I realized having the grid as a class attribute causes a significant performance drop as compared to having it as a global variable. Even more intriguing, the performance discrepancy noted for a 2D grid seems to disappear with a 1D one. Here is a minimum "working" example demonstrating the issue: import numpy as np import time grid_1d_size = 25000000 grid_2d_size = 5000 x_min, x_max = 0, np.pi / 2 y_min, y_max = -np.pi / 4, np.pi / 4 # Grid evaluation (2D) with grid as class attribute class GridEvaluation2DWithAttribute: def __init__(self): self.x_2d_values = np.linspace(x_min, x_max, grid_2d_size, dtype=np.float128) self.y_2d_values = np.linspace(y_min, y_max, grid_2d_size, dtype=np.float128) def grid_evaluate(self): cost_values = np.cos(self.x_2d_values[:, None] * self.y_2d_values[None, :]) return cost_values grid_eval_2d_attribute = GridEvaluation2DWithAttribute() initial_time = time.process_time() grid_eval_2d_attribute.grid_evaluate() final_time = time.process_time() print(f"2d grid, with grid as class attribute: {final_time - initial_time} seconds") # Grid evaluation (1D) with grid as global variable x_2d_values = np.linspace(x_min, x_max, grid_2d_size) y_2d_values = np.linspace(y_min, y_max, grid_2d_size) class GridEvaluation2DWithGlobal: def __init__(self): pass def grid_evaluate(self): cost_values = np.cos(x_2d_values[:, None] * y_2d_values[None, :]) return cost_values grid_eval_2d_global = GridEvaluation2DWithGlobal() initial_time = time.process_time() grid_eval_2d_global.grid_evaluate() final_time = time.process_time() print(f"2d grid, with grid as global variable: {final_time - initial_time} seconds") # Grid evaluation (1D) with grid as class attribute class GridEvaluation1DWithAttribute: def __init__(self): self.x_1d_values = np.linspace(x_min, x_max, grid_1d_size, dtype=np.float128) def grid_evaluate(self): cost_values = np.cos(self.x_1d_values) return cost_values grid_eval_1d_attribute = GridEvaluation1DWithAttribute() initial_time = time.process_time() grid_eval_1d_attribute.grid_evaluate() final_time = time.process_time() print(f"1d grid, with grid as class attribute: {final_time - initial_time} seconds") # Grid evaluation (1D) with grid as global variable x_1d_values = np.linspace(x_min, x_max, grid_1d_size, dtype=np.float128) class GridEvaluation1DWithGlobal: def __init__(self): pass def grid_evaluate(self): cost_values = np.cos(x_1d_values) return cost_values grid_eval_1d_global = GridEvaluation1DWithGlobal() initial_time = time.process_time() grid_eval_1d_global.grid_evaluate() final_time = time.process_time() print(f"1d grid, with grid as global variable: {final_time - initial_time} seconds") And this is the output I get: 2d grid, with grid as class attribute: 0.8012442529999999 seconds 2d grid, with grid as global variable: 0.20206781899999982 seconds 1d grid, with grid as class attribute: 2.0631387639999996 seconds 1d grid, with grid as global variable: 2.136266148 seconds How to explain this performance discrepancy? I moved the grid from a class attribute to a global variable. I expected this change to be neutral to performance. However, a significant change of performance resulted. 
| You aren't consistent with setting the datatype of your grid. Compare self.x_2d_values = np.linspace(x_min, x_max, grid_2d_size, dtype=np.float128) self.y_2d_values = np.linspace(y_min, y_max, grid_2d_size, dtype=np.float128) With: x_2d_values = np.linspace(x_min, x_max, grid_2d_size) y_2d_values = np.linspace(y_min, y_max, grid_2d_size) When not specified, the default type is np.float64, which is half the width and at least twice as fast as np.float128, which matches the timings you measured. | 2 | 2 |
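A quick way to see the dtype effect is a micro-benchmark; this is just a sketch (exact numbers vary by machine, and np.float128 is not available on every platform):

```python
import numpy as np
import time

n = 5000
x64 = np.linspace(0, np.pi / 2, n)                      # default float64
x128 = np.linspace(0, np.pi / 2, n, dtype=np.float128)  # extended precision

for x in (x64, x128):
    t0 = time.process_time()
    np.cos(x[:, None] * x[None, :])
    print(x.dtype, time.process_time() - t0)
```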
78,285,601 | 2024-4-6 | https://stackoverflow.com/questions/78285601/rolling-standard-deviation-of-all-columns-ignoring-nans | I have the following dataframe: data = {'a': {1: None, 2: 1, 3: 7, 4: 2, 5: 4}, 'b': {1: None, 2: 2, 3: 2, 4: 9, 5: 6}, 'c': {1: None, 2: 2.0, 3: None, 4: 7.0, 5: 4.0}} df = pd.DataFrame(data).rename_axis('day') a b c day 1 NaN NaN NaN 2 1.0 2.0 2.0 3 7.0 2.0 NaN 4 2.0 9.0 7.0 5 4.0 6.0 4.0 I want to get a new column ("std") with the rolling standard deviation of all column values. NaNs should be ignored. Let's say the number of rows to be included in the rolling window is 3 and min_periods (meaning the number of rows with at least one non-null value) is 2. This is the expected output: a b c std day 1 NaN NaN NaN NaN 2 1.0 2.0 2.0 NaN 3 7.0 2.0 NaN 2.387467 4 2.0 9.0 7.0 3.116775 5 4.0 6.0 4.0 2.531939 The first std value (2.387467) is equal to np.std ([1,2,2,7,2], ddof=1). I tried both solutions proposed here but they don't work properly with my dataframe, probably because of NaNs. | You can use numpy.nanstd for working with missing values: #source https://stackoverflow.com/a/77704074/2901002 from numpy.lib.stride_tricks import sliding_window_view as swv N = 3 df.loc[df.index[N-1:], 'std'] = np.nanstd(swv(df.to_numpy(), N, axis=0), (1,2), ddof=1) print (df) a b c std day 1 NaN NaN NaN NaN 2 1.0 2.0 2.0 NaN 3 7.0 2.0 NaN 2.387467 4 2.0 9.0 7.0 3.116775 5 4.0 6.0 4.0 2.531939 | 2 | 4 |
78,284,936 | 2024-4-6 | https://stackoverflow.com/questions/78284936/using-group-by-in-pandas-but-with-condition | I have dataframe data = {'time': ['10:00', '10:01', '10:02', '10:02', '10:03','10:04', '10:06', '10:10', '10:15'], 'price': [100, 101, 101, 103, 101,101, 105, 106, 107], 'volume': [50, 60, 30, 80, 20,50, 10, 40, 40]} I need to group by this df by every 5 minutes and price, sum up the volume df.groupby([df['time'].dt.floor('5T'), 'price']).agg({'volume' : 'sum'}).reset_index() Then i need to find time when pandas groups them where after sum new volume i will get value more than 100. In this df i find 10:03 and after sum, value will be 60 + 30 + 20 = 110. In 10:04 sum will be 60 + 30 + 20 + 50 = 160 How can i do this using pandas? | It looks like you want the cumulated sum of the volume with groupby.cumsum: df['cum_volume'] = (df.groupby([df['time'].dt.floor('5min'), 'price']) ['volume'].cumsum() ) Updated df: time price volume cum_volume 0 2024-04-06 10:00:00 100 50 50 1 2024-04-06 10:01:00 101 60 60 2 2024-04-06 10:02:00 101 30 90 3 2024-04-06 10:02:00 103 80 80 4 2024-04-06 10:03:00 101 20 110 5 2024-04-06 10:04:00 101 50 160 6 2024-04-06 10:06:00 105 10 10 7 2024-04-06 10:10:00 106 40 40 8 2024-04-06 10:15:00 107 40 40 You can then filter based on the value: out = df.query('cum_volume > 100') Output: time price volume cum_volume 4 2024-04-06 10:03:00 101 20 110 5 2024-04-06 10:04:00 101 50 160 | 2 | 3 |
78,284,642 | 2024-4-6 | https://stackoverflow.com/questions/78284642/driver-find-element-cant-find-an-element-by-class-name | I tried to use driver.find_element by "class_name" to find a button and click on it to expand rooms on - https://www.qantas.com/hotels/properties/18482?adults=2&checkIn=2024-04-16&checkOut=2024-04-17&children=0&infants=0#view-rooms , but received this error message: raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".css-v84xw-NakedButton eml2css7"} (Session info: chrome=123.0.6312.106); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception HTML code with button - <button data-testid="expand-offer-summary" aria-label="Expand offer details" type="button" class="css-v84xw-NakedButton eml2css7"><svg class="en7chz91 css-1osu69f-StyledSvg-Icon en7chz90" viewBox="0 0 24 24" title="expandMore" fill="currentcolor"><path d="M16.59 8.59L12 13.17 7.41 8.59 6 10l6 6 6-6z"></path></svg></button> Python code (I tried with an explicit wait as well): from selenium import webdriver from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC def rooms(): driver = webdriver.Chrome() driver.get("https://www.qantas.com/hotels/properties/18482?adults=2&checkIn=2024-04-16&checkOut=2024-04-17&children=0&infants=0#view-rooms") driver.implicitly_wait(5) content = driver.page_source soup = BeautifulSoup(content,'html.parser') Button = driver.find_element('class name',"css-v84xw-NakedButton eml2css7") Button.click() rooms() | It should be like this. If there is only one class, you can use find_element(By.CLASS_NAME, "the_class_name"). However, the button has two different classes, so you need to use CSS_SELECTOR instead of CLASS_NAME: a CLASS_NAME locator value cannot contain spaces or multiple class names. Since both classes are on the same element, chain them in a compound selector without a space: Button = driver.find_element(By.CSS_SELECTOR, ".css-v84xw-NakedButton.eml2css7") For more details you can take a look at the Selenium docs. | 2 | 1 |
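Since the question already imports expected_conditions, an explicit wait for the button to become clickable is usually more robust than relying on implicitly_wait alone. A sketch, assuming the `driver` from the question and the compound selector above:

```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# wait up to 10 seconds for the expand button to be clickable, then click it
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, ".css-v84xw-NakedButton.eml2css7"))
)
button.click()
```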
78,284,506 | 2024-4-6 | https://stackoverflow.com/questions/78284506/how-to-update-python-to-the-latest-version-3-12-2-in-wsl2 | My Python version in my WSL Ubuntu is 3.10.12 and it's not upgrading through these commands even though 3.12.2 is released now. (My WSL Ubuntu version is 22.04) sudo apt update sudo apt install python3 python3-pip Will a particular distribution have control over how much you can upgrade a package? and is it recommended not to do it? is there a command I can run to check how much upgrade my particular distribution will allow for a package? say for python. | Linux distros normally have a Python version tied to the distro and used for various admin scripts. You should not expect that to follow the latest releases of Python. And don't try to force change it because you could break your OS. If you need newer Python versions for your work, install it in user space. | 6 | 1 |
78,284,077 | 2024-4-6 | https://stackoverflow.com/questions/78284077/iterate-over-values-of-nested-dictionary | Nested_Dict = {'candela_samples_generic': {'drc_dtcs': {'domain_name': 'TEMPLATE-DOMAIN', 'dtc_all':{ '0x930001': {'identification': {'udsDtcValue': '0x9300', 'FaultType': '0x11', 'description': 'GNSS antenna short to ground'}, 'functional_conditions': {'failure_name': 'short_to_ground', 'mnemonic': 'DTC_GNSS_Antenna_Short_to_ground'}}, '0x212021': {'identification': {'udsDtcValue': '0x2120', 'FaultType': '0x21', 'description': 'ECU internal Failure'}, 'functional_conditions': {'failure_name': 'short_to_ground', 'mnemonic': 'DTC_GNSS_Antenna_Short_to_ground'}}}}}} Header = { 'dtc_all': { 'DiagnosticTroubleCodeUds': {'udsDtcValue': None, 'FaultType': None}, 'dtcProps': {'description': None}, 'DiagnosticTroubleCodeObd': {'failure_name': None} } } SubkeyList = ['0x930001','0x212021'] Expected Output: New_Dict= {'dtc_all': {'0x930001': {'DiagnosticTroubleCodeUds': {'udsDtcValue': '0x9300', 'FaultType': '0x11'}, 'dtcProps':{'description': 'GNSS antenna short to ground'}, 'DiagnosticTroubleCodeObd': {'failure_name':short_to_ground}}}, {'0x212021': {'DiagnosticTroubleCodeUds': {'udsDtcValue': '0x2120', 'FaultType': '0x21'}, 'dtcProps':{'description': 'ECU internal Failure'}, 'DiagnosticTroubleCodeObd': {'failure_name':short_to_ground}}}} Reference question: Aggregating Inputs into a Consolidated Dictionary Here Want to iterate inside the header dictionary over the values of the header dictionary, but with my code it is iterating over keys of dict not the values of dict. Take an element from the SubkeyList and one header keys at a time from the dictionary (there can be multiple header keys like dtc_all). Iterate inside the header dictionary over the values of the dictionary, such as 'udsDtcValue'. For example: Main_Key = dtc_all Sub_Key = 0x212021 Element = udsDtcValue Pass these parameters to the function get_value_nested_dict(nested_dict, Main_Key, Sub_Key, Element). This function will return the element value. get_value_nested_dict func which is working as expected for Element value retriaval I've posted for the reference. At the same time, create a new dictionary and update the element value at the right place, such as 'udsDtcValue': '0x9300'. Also, ensure that the sequence of keys remains the same as in the header. Similarly, iterate inside the header dictionary over all the values of the dictionary, such as FaultType, description, until failure_name. Repeat these iterations for each element in the SubkeyList and append the results in the new_dict in the same sequence. Any suggestions on how to proceed? def create_new_dict(Nested_Dict, Header, SubkeyList): new_dict = {} for sub_key in SubkeyList: sub_dict = {} for element, value in Header['dtc_all'].items(): value = get_value_nested_dict(Nested_Dict, 'dtc_all', sub_key, element) if value: sub_dict[element] = value[0] new_dict[sub_key] = sub_dict return new_dict | Much, much better π€ You'll need to make sure that for each part of the Header dictionary structure, we're not only iterating over the keys but also get into their nested structure to retrieve udsDtcValue, FaultType, description and failure_name from Nested_Dict. 
def get_value_from_nested_dict(nested_dict, path): for key in path: nested_dict = nested_dict.get(key, {}) if not nested_dict: return None return nested_dict def create_new_dict(nested_dict, header, subkey_list): new_dict = {'dtc_all': {}} path_mappings = {'DiagnosticTroubleCodeUds': ['identification'], 'dtcProps': ['identification'], 'DiagnosticTroubleCodeObd': ['functional_conditions']} for sub_key in subkey_list: sub_dict_structure = {} for header_key, inner_keys in header['dtc_all'].items(): header_sub_dict = {} for inner_key in inner_keys.keys(): base_path = ['candela_samples_generic', 'drc_dtcs', 'dtc_all', sub_key] specific_path = path_mappings.get(header_key, []) value_path = base_path + specific_path + [inner_key] value = get_value_from_nested_dict(nested_dict, value_path) if value is not None: header_sub_dict[inner_key] = value if header_sub_dict: sub_dict_structure[header_key] = header_sub_dict if sub_dict_structure: new_dict['dtc_all'][sub_key] = sub_dict_structure return new_dict | 2 | 4 |
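For reference, a small usage sketch, assuming the question's Nested_Dict, Header and SubkeyList are in scope alongside the functions above:

```python
import json

New_Dict = create_new_dict(Nested_Dict, Header, SubkeyList)
print(json.dumps(New_Dict, indent=2))

# e.g. New_Dict["dtc_all"]["0x930001"]["DiagnosticTroubleCodeUds"]["udsDtcValue"] == "0x9300"
```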
78,283,909 | 2024-4-6 | https://stackoverflow.com/questions/78283909/pandas-percentage-from-total-in-pivot-table | I used to tackle this kind of thing reasonably quickly within DAX, but being new to pandas, I have been stuck for a while on this: I am trying to output a pivot table showing the % of visa sales per month (columns) and per city (rows). Here is the output I am looking for: Jan Feb London 50.055991 56.435644 Paris 15.119760 67.170191 I've tried various pivot tables and group-by functions, which got me so close and yet so far from what I need. I'm just used to creating "measures" in Excel that I can add to the pivot table like a regular dimension or fact. Reproducible input: data = {'Month': {0: 'Jan', 1: 'Jan', 2: 'Jan', 3: 'Jan', 4: 'Feb', 5: 'Feb', 6: 'Feb', 7: 'Feb', 8: 'Feb'}, 'City': {0: 'Paris', 1: 'Paris', 2: 'London', 3: 'London', 4: 'Paris', 5: 'Paris', 6: 'London', 7: 'London', 8: 'Paris'}, 'Card': {0: 'Visa', 1: 'MasterCard', 2: 'Visa', 3: 'MasterCard', 4: 'Visa', 5: 'MasterCard', 6: 'Visa', 7: 'MasterCard', 8: 'Visa'}, ' Amount ': {0: ' $101 ', 1: ' $567 ', 2: ' $447 ', 3: ' $446 ', 4: ' $926 ', 5: ' $652 ', 6: ' $114 ', 7: ' $88 ', 8: ' $408 '}} df = pd.DataFrame.from_dict(data) df | Using a pivot_table, pipe to compute the ratio, and unstack to reshape: df['Amount'] = pd.to_numeric(df['Amount'].str.strip(' $')) out = (df .pivot_table(index=['Month', 'City'], columns='Card', values='Amount', aggfunc='sum') .pipe(lambda x: x['Visa']/x.sum(axis=1)*100) .unstack('Month') ) Output: Month Feb Jan City London 56.435644 50.055991 Paris 67.170191 15.119760 To sort the months: from calendar import month_abbr months = {m:i for i, m in enumerate(month_abbr)} df['Amount'] = pd.to_numeric(df['Amount'].str.strip(' $')) out = (df .pivot_table(index=['Month', 'City'], columns='Card', values='Amount', aggfunc='sum') .pipe(lambda x: x['Visa']/x.sum(axis=1)*100) .unstack('Month').sort_index(axis=1, key=lambda x: x.map(months)) ) Output: Month Jan Feb City London 50.055991 56.435644 Paris 15.119760 67.170191 | 4 | 2 |
78,283,840 | 2024-4-6 | https://stackoverflow.com/questions/78283840/sqlalchemy-like-all-orm-analog | I need to find documents that satisfy the entire list of passed parameters. I did it using raw query, but for my project specs, raw query can't be used, and I should use ORM. Raw query is: SELECT * FROM outbox_document WHERE document_summary like all(array['%par1%', '%par2%', '%par3%']); It's works well, but I can't find an ORM analog for LIKE ALL. Please help! | You can use sqlalchemy.all_. OutBoxDocument.document_summary.like(all_(["%par1%", "%par2%", "%par3%"])) This generates the following query SELECT outbox_document.id, outbox_document.document_summary FROM outbox_document WHERE outbox_document.document_summary LIKE ALL (%(param_1)s) Complete code from sqlalchemy import create_engine, select, all_ from sqlalchemy.orm import Mapped, DeclarativeBase, mapped_column, Session engine = create_engine("postgresql+psycopg://some_connection_string") class Base(DeclarativeBase): pass class OutBoxDocument(Base): __tablename__ = "outbox_document" id: Mapped[int] = mapped_column(primary_key=True) document_summary: Mapped[str] Base.metadata.create_all(engine) with Session(engine) as session: session.add(OutBoxDocument(document_summary="par1 par2")) session.add(OutBoxDocument(document_summary="par1 par2 par3")) session.add(OutBoxDocument(document_summary="par1 par3")) session.commit() with Session(engine) as session: statement = select(OutBoxDocument).where(OutBoxDocument.document_summary.like(all_(["%par1%", "%par2%", "%par3%"]))) result = session.scalars(statement).all() print(result) The flask sqlalchemy version would be db.session.scalars(statement).all() | 2 | 2 |
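Since all_() relies on PostgreSQL's ALL operator, a more portable sketch is to AND several LIKE clauses together; it gives the same result and works on most backends (reusing OutBoxDocument and the session from the answer):

```python
from sqlalchemy import and_, select

patterns = ["%par1%", "%par2%", "%par3%"]
statement = select(OutBoxDocument).where(
    and_(*[OutBoxDocument.document_summary.like(p) for p in patterns])
)
result = session.scalars(statement).all()
```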
78,275,255 | 2024-4-4 | https://stackoverflow.com/questions/78275255/how-can-i-make-it-so-that-when-i-click-an-icon-a-window-with-information-appear | There is an icon in the program; by clicking on it, a window with information should appear on top of the program. How can this be implemented? import flet as ft def main(page: ft.page): page.title = "Тренировка интуиции" page.window_width = 400.00 page.window_height = 500.00 page.window_resizable = False def info(e): # info text pass page.add(ft.Row([ft.IconButton(ft.icons.HELP, on_click=info)], alignment=ft.alignment.center)) ft.app(target=main) Please tell me, I've been studying recently | import flet as ft def main(page: ft.page): page.title = "Тренировка интуиции" page.window_width = 400.00 page.window_height = 500.00 page.window_resizable = False Build the info dialog using AlertDialog. Add an action that closes the dialog; content holds the main content of the pop-up. def info(e): diaolog = ft.AlertDialog(title=ft.Text("Information"), content=ft.Text("Test Text"), actions=[ft.TextButton(text="Close", on_click=close_diaolog)], open=True, ) page.dialog = diaolog page.update() Build close_diaolog to close the dialog after clicking Close. def close_diaolog(e): page.dialog.open = False page.update() page.add(ft.Row([ft.IconButton(ft.icons.HELP, on_click=info)], alignment=ft.alignment.center)) ft.app(target=main) | 2 | 1 |
78,281,668 | 2024-4-5 | https://stackoverflow.com/questions/78281668/nonlocal-variable-not-updated-when-return-value-from-recursive-function-is-not-b | Came across some pattern similar to this for a leetcode problem. Basically, both functions sums a list a recursively using a nonlocal value. The unassigned value only updates res once it seems. def assigned_sum(l: list[int]): res = 0 def recurse(i: int): nonlocal res if i >= len(l): return 0 assigned = recurse(i+1) res += assigned return l[i] recurse(-1) return res def rvalue_sum(l: list[int]): res = 0 def recurse(i: int): nonlocal res if i >= len(l): return 0 res += recurse(i+1) return l[i] recurse(-1) return res test = [1,2,3,4,5] f"expected={sum(test)}, lvalue={assigned_sum(test)}, rvalue={rvalue_sum(test)}" When thrown into colab, I get 'expected=15, lvalue=15, rvalue=1' | The difference can be seen more clearly between these two variants: (a): res = res + recurse(i+1) and (b): res = recurse(i+1) + res For your test run, (a) will return 1, while (b) will return the intended 15. The difference is caused by the moment when the value of res is taken: before the recursive call or after. If it is taken before the recursive call, it will always be 0. This is because the assignment to res only happens when unwinding out of recursion, not when entering it. So by consequence, all the reading of res that happens at each recursion level, happens before any assignment to res. Then when all the assignments happen to res, they each overwrite the earlier results: 0 + 5, then 0 + 4, then 0 + 3, ... until 0 + 1 which is the final value that is assigned to res. In the correct version, the value of res is read when unwinding out of recursion, so that means we read the value of res after it has been updated by the recursive call, and so we assign to res: 1 + 0, 2 + 1, 3 + 3, 4 + 6, and finally 5 + 10. | 2 | 5 |
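One way to convince yourself of the evaluation order described above is to instrument the broken variant with prints. A sketch (traced_rvalue_sum is a hypothetical name for the rewritten function):

```python
def traced_rvalue_sum(l):
    res = 0

    def recurse(i):
        nonlocal res
        if i >= len(l):
            return 0
        before = res                    # value of res read *before* recursing
        res = before + recurse(i + 1)   # the assignment happens while unwinding
        print(f"i={i}: read res={before}, wrote res={res}")
        return l[i]

    recurse(-1)
    return res

# prints one line per level in unwind order; every level reads res=0,
# so the last write wins and the final result is 1
print(traced_rvalue_sum([1, 2, 3, 4, 5]))
```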
78,276,174 | 2024-4-4 | https://stackoverflow.com/questions/78276174/post-a-pandas-dataframe-from-jupyter-notebooks-into-a-stack-overflow-problem | What are the steps to post a Pandas dataframe in a Stack Overflow question? I found: How to make good reproducible pandas examples. I followed the instructions and used pd.read_clipboard, but I still had to spend a significant amount of time formatting the table to make it look correct. I also found: How to display a pandas dataframe on a Stack Overflow question body. I tried to copy the dataframe from Jupyter and paste it into a Blockquote. As mentioned, I also ran pd.read_clipboard('\s\s+') in Jupyter to copy it to the clipboard and then pasted it into a Blockquote. I also tried creating a table and pasting the values in the table. All of these methods required that I tweak the formatting to make it look properly formatted. An example dataframe: df = pd.DataFrame( [['Captain', 'Crunch', 72], ['Trix', 'Rabbit', 36], ['Count', 'Chocula', 41], ['Tony', 'Tiger', 54], ['Buzz', 'Bee', 28], ['Toucan', 'Sam', 38]], columns=['first_name', 'last_name', 'age']) | .to_markdown() The easiest method I found was to use print(df.to_markdown()). This will convert the data into mkd format which can be interpreted by SO. For example with your dataframe, the output is: first_name last_name age 0 Captain Crunch 72 1 Trix 36 Rabbit 2 Count Chocula 41 3 Tony 54 Tiger 4 Buzz 28 Bee 5 Toucan Sam 38 Note you might need to install tabulate module. .to_dict() Another option is to use df.head().to_dict('list'), but it might not be the best one for large datasets (will work for minimum reproducible examples though) {'first_name': ['Captain', 'Trix', 'Count', 'Tony', 'Buzz'], 'last_name': ['Crunch', 36, 'Chocula', 54, 28], 'age': [72, 'Rabbit', 41, 'Tiger', 'Bee']} Anyone can use this by passing it through pd.DataFrame() | 4 | 5 |
78,280,954 | 2024-4-5 | https://stackoverflow.com/questions/78280954/add-timezone-based-on-column-value | I have a polars Dataframe with two columns: a string column containing datetimes and an integer column containing UTC offsets (for example -4 for EDT). Essentially the Dataframe looks like this: >>> data shape: (2, 2) βββββββββββββββββββββββ¬βββββββββββ β Datetime β Timezone β β --- β --- β β str β i64 β βββββββββββββββββββββββͺβββββββββββ‘ β 2022-01-01 12:52:23 β -4 β β 2023-03-31 04:22:59 β -5 β βββββββββββββββββββββββ΄βββββββββββ Now I want to convert this column either to UTC or a timezone-aware datetime column. I looked into the pl.Expr.str.to_datetime function which accepts the time_zone argument. Unfortunately this argument can only be passed as a string and not as a pl.Expr. In other words, I can convert all columns to the same specified timezone, but I cannot dynamically use timezone based on the value of another column. What I would like in the end is something like the following (note that the Datetime column is now of datetime type and the timezone offset has been added dynamically (4 hours for the first row and 5 for the second). >>> data shape: (2, 2) βββββββββββββββββββββββ¬βββββββββββ β Datetime β Timezone β β --- β --- β β datetime β i64 β βββββββββββββββββββββββͺβββββββββββ‘ β 2022-01-01 16:52:23 β -4 β β 2023-03-31 09:22:59 β -5 β βββββββββββββββββββββββ΄βββββββββββ Is there a way to do this without going to map_elements or iter_rows based approaches? | If you're ok with keeping things in UTC, you can use df = pl.DataFrame( {"Datetime": ["2022-01-01 12:52:23", "2023-03-31 04:22:59"], "Timezone": [-4, -5]} ) df.with_columns( pl.col("Datetime") .str.to_datetime("%Y-%m-%d %H:%M:%S", time_zone="UTC") .dt.offset_by(pl.format("{}h", -pl.col("Timezone"))) .alias("dt_conv") ) shape: (2, 3) βββββββββββββββββββββββ¬βββββββββββ¬ββββββββββββββββββββββββββ β Datetime β Timezone β dt_conv β β --- β --- β --- β β str β i64 β datetime[ΞΌs, UTC] β βββββββββββββββββββββββͺβββββββββββͺββββββββββββββββββββββββββ‘ β 2022-01-01 12:52:23 β -4 β 2022-01-01 16:52:23 UTC β β 2023-03-31 04:22:59 β -5 β 2023-03-31 09:22:59 UTC β βββββββββββββββββββββββ΄βββββββββββ΄ββββββββββββββββββββββββββ | 2 | 2 |
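If the offsets were not whole hours, or if you prefer duration arithmetic over a string built with pl.format, a similar sketch subtracts a per-row duration instead of using offset_by (this assumes pl.duration accepts expressions in the installed Polars version):

```python
import polars as pl

df = pl.DataFrame(
    {"Datetime": ["2022-01-01 12:52:23", "2023-03-31 04:22:59"], "Timezone": [-4, -5]}
)

# UTC = local time minus the UTC offset
out = df.with_columns(
    (
        pl.col("Datetime").str.to_datetime("%Y-%m-%d %H:%M:%S", time_zone="UTC")
        - pl.duration(hours=pl.col("Timezone"))
    ).alias("dt_conv")
)
print(out)
```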
78,278,272 | 2024-4-5 | https://stackoverflow.com/questions/78278272/error-installing-dlib-in-python-on-ubuntu | I want to use OpenCV for a python project and for that reason want to install dlib library. I ran the command pip install dlib and it gave me the following error: Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: dlib Building wheel for dlib (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for dlib (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [6 lines of output] running bdist_wheel running build running build_ext ERROR: CMake must be installed to build dlib [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for dlib Failed to build dlib ERROR: Could not build wheels for dlib, which is required to install pyproject.toml-based projects Since the error was that CMake must be installed, I installed CMake using pip install cmake but the error still persisted. So I also installed cmake as mentioned here but the error still persists. | Since you are using Anaconda, you have probably messed up your runtime environment. Your error doesn't give much insight into what has actually happened, but a workaround (although it is discouraged) can fix it. After installing cmake as mentioned in the linked answer, install the packages system-wide using sudo pip install cmake and sudo pip install dlib. Then you can import the modules installed there using the sys module as follows: import sys sys.path.insert(0, '/usr/path/to/modules/dlib') | 2 | 1 |
78,278,746 | 2024-4-5 | https://stackoverflow.com/questions/78278746/plot-for-every-subgroup-of-a-groupby | data = {0: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1}, 1: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1}, 2: {'VAR1': 'A', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1}, 3: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2}, 4: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2}, 5: {'VAR1': 'A', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2}, 6: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3}, 7: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3}, 8: {'VAR1': 'A', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3}, 9: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1}, 10: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1}, 11: {'VAR1': 'B', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1}, 12: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2}, 13: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2}, 14: {'VAR1': 'B', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2}, 15: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3}, 16: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3}, 17: {'VAR1': 'B', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3}, 18: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 3, 'VAL2': 1}, 19: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 4, 'VAL2': 1}, 20: {'VAR1': 'C', 'VAR2': 'X', 'VAL1': 5, 'VAL2': 1}, 21: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 3, 'VAL2': 2}, 22: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 4, 'VAL2': 2}, 23: {'VAR1': 'C', 'VAR2': 'Y', 'VAL1': 5, 'VAL2': 2}, 24: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 3, 'VAL2': 3}, 25: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 4, 'VAL2': 3}, 26: {'VAR1': 'C', 'VAR2': 'Z', 'VAL1': 5, 'VAL2': 3}} df = pd.DataFrame.from_dict(dictio, orient='index') I like to achieve: new axes for every unique element in VAR1 new scatter plot of VAL1(x-value) and VAL2(y-value) for elements in VAR2 for every axes from VAR1 Example for axes of VAR1=A I could not figure out how to do it with the groupby. My approach is not very good/correct: group_var1 = df.groupby('VAR1') for name_var1, grouped_var1 in group_var1: i = 0 fig, axes = plt.subplots(nrows=3, ncols=1,figsize=(20, 8), tight_layout=True) group_var2 = grouped_var1.groupby('VAR2') for name_var2, grouped_var2 in group_var2: grouped_var2.plot(kind='scatter', ax=axes[i], x='VAL1', y='VAL2') i+=1 EDIT: This works, but i highly dislike this approach group_var1 = df.groupby('VAR1') fig, axes = plt.subplots(nrows=3, ncols=1,figsize=(20, 8), tight_layout=True) i = 0 for name_var1, grouped_var1 in group_var1: group_var2 = grouped_var1.groupby('VAR2') for name_var2, grouped_var2 in group_var2: grouped_var2.plot(kind='scatter', ax=axes[i], x='VAL2', y='VAL1', c=['red','green','yellow']) i+=1 | I would use seaborn.relplot, which would work as a one-liner: import seaborn as sns sns.relplot(df, col='VAR1', hue='VAR2', x='VAL1', y='VAL2') Output: | 3 | 4 |
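To reproduce the 3×1 vertical layout from the original subplots attempt, the facets can be stacked along rows instead of columns. A small sketch, assuming the same df:

```python
import seaborn as sns

# one facet per VAR1 value, stacked vertically, colored by VAR2
g = sns.relplot(df, row='VAR1', hue='VAR2', x='VAL1', y='VAL2', height=2.5, aspect=3)
```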
78,278,680 | 2024-4-5 | https://stackoverflow.com/questions/78278680/why-is-the-scraped-html-different-from-browser-inspected-element | I am currently working on a web scraping project and encountered an issue while scraping data from https://foundersfund.com/portfolio. I managed to retrieve all the links to each company's page successfully. However, upon testing some of these links, I noticed that the output HTML differs from what is shown in the inspect element tool. Consequently, I am unable to retrieve any information. import requests from bs4 import BeautifulSoup headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" } response = requests.get("https://foundersfund.com/company/figma/", headers=headers) soup = BeautifulSoup(response.content, "lxml") soup The output is returning this: Google Colab I expected to retrieve information about Figma, but instead, I obtained information about SpaceX. Interestingly, when attempting to view pages for other companies, such as https://foundersfund.com/company/spotify/ or https://foundersfund.com/company/airbnb/, I encountered the same issue with SpaceX appearing instead. I have been troubleshooting this for several days and suspect that there may be an issue with the page itself. It seems that when I load the company's page, it briefly displays the SpaceX page before showing the requested company's page. Could someone please explain what might be happening here? | Content is loaded / rendered dynamically by JavaScript - Sometimes you can also see it on the page by refreshing it, then spacex is displayed for a short time, because the resource is first loaded and rendered. Only fractions of a second later the actual company information is rendered. So try to use the api instead for a single company: https://foundersfund.com/wp-json/wp/v2/company?slug=figma or for the top 100: https://foundersfund.com/wp-json/wp/v2/company?per_page=100 Check your browsers dev tools for network traffic, to get an idea of how to find such information. Example import requests headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36" } response = requests.get('https://foundersfund.com/wp-json/wp/v2/company?slug=figma', headers=headers).json() response | 2 | 2 |
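The JSON response is a list of standard WordPress REST objects, so the interesting fields can be pulled out directly. A hedged sketch; field names beyond the standard id/slug/link are assumptions and may differ for this site:

```python
for company in response:
    print(company.get("id"), company.get("slug"), company.get("link"))
    # company-specific details often live in extra fields such as
    # company.get("acf") or company.get("content", {}).get("rendered"),
    # depending on how the site exposes them
```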
78,278,506 | 2024-4-5 | https://stackoverflow.com/questions/78278506/how-to-select-first-n-number-of-groups-based-on-values-of-a-column-conditionally | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 10, 22], 'b': [1, 1, 1, -1, -1, -1, -1, 2, 2, 2, 2, -1, -1, -1, -1], 'c': [25, 25, 25, 45, 45, 45, 45, 65, 65, 65, 65, 40, 40, 30, 30] } ) The expected output: Grouping df by c and a condition: a b c 0 10 1 25 1 15 1 25 2 20 1 25 3 25 -1 45 4 30 -1 45 5 35 -1 45 6 40 -1 45 11 65 -1 40 12 70 -1 40 The process is as follows: a) Selecting the group that all of the b values is 1. In my data and this df there is only one group with this condition. b) Selecting first two groups (from top of df) that all of their b values are -1. For example: a) Group 25 is selected. b) There are three groups with this condition. First two groups are: Group 45 and 40. Note that there is a possibility in my data that there are no groups that has a or b condition. If that is the case, returning whatever matches the criteria is fine. For example the output could be only one group or no groups at all. The groups that I want are shown below: These are my attempts that got very close: df1 = df.groupby('c').filter(lambda g: g.b.eq(1).all()) gb = df.groupby('c') new_gb = pd.concat([gb.get_group(group) for i, group in enumerate(gb.groups) if i < 2]) | You can use custom masks for boolean indexing: # identify groups with all 1 m1 = df['b'].eq(1).groupby(df['c']).transform('all') # identify groups with all -1 m2 = df['b'].eq(-1).groupby(df['c']).transform('all') # keep rows of first 2 groups with all -1 m3 = df['c'].isin(df.loc[m2, 'c'].unique()[:2]) # select m1 OR m3 out = df[m1 | m3] Or, for a variant without groupby, using set operations: # identify rows with 1/-1 m1 = df['b'].eq(1) m2 = df['b'].eq(-1) # drop c that have values other that 1/-1: {65} # drop -1 groups after 2nd occurrence: {30} drop = set(df.loc[~(m1|m2), 'c']) | set(df.loc[m2, 'c'].unique()[2:]) out = df[~df['c'].isin(drop)] Output: a b c 0 10 1 25 1 15 1 25 2 20 1 25 3 25 -1 45 4 30 -1 45 5 35 -1 45 6 40 -1 45 11 65 -1 40 12 70 -1 40 Intermediates (first approach): a b c m1 m2 m3 0 10 1 25 True False False 1 15 1 25 True False False 2 20 1 25 True False False 3 25 -1 45 False True True 4 30 -1 45 False True True 5 35 -1 45 False True True 6 40 -1 45 False True True 7 45 2 65 False False False 8 50 2 65 False False False 9 55 2 65 False False False 10 60 2 65 False False False 11 65 -1 40 False True True 12 70 -1 40 False True True 13 10 -1 30 False True False 14 22 -1 30 False True False Intermediates (second approach): a b c m1 m2 ~isin(drop) 0 10 1 25 True False True 1 15 1 25 True False True 2 20 1 25 True False True 3 25 -1 45 False True True 4 30 -1 45 False True True 5 35 -1 45 False True True 6 40 -1 45 False True True 7 45 2 65 False False False 8 50 2 65 False False False 9 55 2 65 False False False 10 60 2 65 False False False 11 65 -1 40 False True True 12 70 -1 40 False True True 13 10 -1 30 False True False 14 22 -1 30 False True False | 4 | 4 |
78,274,221 | 2024-4-4 | https://stackoverflow.com/questions/78274221/use-caplog-in-autouse-fixture-in-pytest | I'd like to wrap all my tests with a fixture where the logs logged with loguru are checked for error messages. I tried this: @pytest.fixture(autouse=True) def assert_no_log_error(caplog): yield assert "ERROR" not in caplog.text But caplog.text is always empty. I assume, caplog is cleared after the test and before the fixture actually checks the logs. How can I make this work? EDIT 1: I found an example about how to use loguru logs with pytest fixtures in the documentation of loguru: https://loguru.readthedocs.io/en/stable/resources/migration.html#replacing-caplog-fixture-from-pytest-library. However, this doesn't work either. It's really about that I want to check the logs in the fixture after the actual test, but I guess the captured logs are cleared after the test and before the end of the fixture. | I found the answer with the help of this post. First, since pytest uses the standard python logging module under the hood, the logs from loguru need to be captured properly. This can be done using the pytest-loguru module, according to the loguru docs. Install pytest-loguru with: pip install pytest-loguru Then the fixture can be written as: import logging from _pytest.logging import LogCaptureFixture @pytest.fixture(autouse=True) def assert_no_log_error(caplog: LogCaptureFixture): yield for record in caplog.get_records('call'): assert record.levelno < logging.ERROR | 2 | 0 |
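A quick way to see the fixture in action is a sketch of two tests; with pytest-loguru installed and the autouse fixture above in conftest, the first should pass and the second should be flagged by the fixture's assertion:

```python
from loguru import logger

def test_only_warnings_pass():
    logger.warning("this is fine")

def test_error_is_caught():
    logger.error("something broke")  # the autouse fixture fails on this ERROR record
```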
78,277,628 | 2024-4-5 | https://stackoverflow.com/questions/78277628/create-a-dataframe-from-a-series-and-specifically-how-to-re-name-a-column-in-it | New to Python. I come from a SQL world where I'm used to running queries and applying them. It's handy to take a list of things, get their count and then use a subset of that count (like the top 5) and apply it to other data. With Python/Pandas, I still have not quite grokked the process. By way of example: A simple dataset: `import pandas as pd dataset = ( [1,2,3,4,5,6], [1,None,3,4,5,6], [1,None,3,4,5,6], [1,2,None,4,5,6], [1,None,3,None,5,6], [1,2,None,4,5,6], [1,None,3,None,5,6], [1,2,3,4,5,None], [1,2,3,4,5,None] ) df = pd.DataFrame(dataset, columns=['A','B','C','D','E','F'])` Then make a dataframe to find the NaNs: nan_df = df.isna() Then count the instances of each row: grouped_nan = nan_df.groupby(['A','B', 'C', 'D','E', 'F'], sort=True).value_counts() The original set I was working was had ~200 rows. This simplified example yields this: A B C D E F False False False False False False 1 True 2 True False False False 2 True False False False False 2 True False False 2 Name: count, dtype: int64 This is where I run into trouble. The things I want to do are best done in a Dataframe (the above is a Series). The following makes it a DataFrame: grouped_nan_df = grouped_nan.to_frame() But it doesn't bring along the last column (the count) in a way I can manage. I can see it, but I can't do anything with it. If I try to reference the column with the counts, it does not recognize it. If I try to rename that last column it doesn't work: `grouped_nan_df.rename(columns={grouped_nan_df.columns[5]:"new_count"}, inplace=True)` gives the error "index 5 is out of bounds for axis 0 with size 1". What I want at the end is a DataFrame that includes the counts. Is there a way to get there? Any help appreciated! Andy | IIUC, you can just use groupby with as_index=False and then take the group size: out = nan_df.groupby(['A','B', 'C', 'D','E', 'F'], as_index=False).size() Output: A B C D E F size 0 False False False False False False 1 1 False False False False False True 2 2 False False True False False False 2 3 False True False False False False 2 4 False True False True False False 2 | 2 | 1 |
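If the count column should carry a custom name (the "new_count" from the question), it can be renamed right after the groupby, or produced directly with DataFrame.value_counts (pandas ≥ 1.1). A sketch, assuming nan_df from the question:

```python
out = (
    nan_df.groupby(['A', 'B', 'C', 'D', 'E', 'F'], as_index=False)
    .size()
    .rename(columns={'size': 'new_count'})
)

# equivalent one-liner: count identical boolean rows directly
out2 = nan_df.value_counts().reset_index(name='new_count')
```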
78,275,253 | 2024-4-4 | https://stackoverflow.com/questions/78275253/best-way-to-create-serializable-data-model | somewhat inexperienced in python. I am coming from C# world, and I am trying to figure out what is the best practice to create a data structure in python that: can have empty fields (None) can have default values assigned to some fields can have aliases assigned to fields that would be used during serialization to clarify: for example in C# I can do something like this: using Newtonsoft.Json; public class MyDataClass { [JsonProperty(PropertyName = "Data Label")] public string label { get; set; } [JsonProperty(PropertyName = "Data Value")] public string value { get; set; } [JsonProperty(PropertyName = "Data Description")] public MyDataDefinition definiton { get; set; } public MyDataClass { this.label = "Default Label"; } } with this class, i can create instance with only one field pre-populated, and populate the rest of the data structure at will, and then serialize it to JSON with aliased field names as decorated. In python, i experimented with several packages, but every time i end up with super complex implementation that doesn't hit all of the requirements. I MUST be missing something very fundamental, because it seems like such a simple and common use case. how would you implement something like this in most "pythonic" way? | Hard to say what is "best practice", I personally would say that just working with dictionaries is very common unless you have a good reason to define a class instead (limitation: no default values, no aliases). If that works for you depends on how you intend to use the data. If you have a dictionary, serializing it is just import json record = { "Data Label": "Default Label", "Data Value": None, "Data Description": { "f1": 1, "f2":"2" } } with open("my_record.json", "w") as f: json.dump(record, f) And reading it would be import json with open("my_record.json", "r") as f: record = json.load(f) If you're coming from a language that follows OO principles to a T it might look weird to handle data without a class to serve as an interface. But often enough it's just fine. If it turns out that it isn't, and you really want to have some kind of schema that tells you what your data looks like / helps your IDE to figure out auto-completion, you can add a TypedDict definition to the existing code (new limitation: only valid python variable names can be keys): from typing import TypedDict, cast import json class MyDataContainer(TypedDict): label: str value: str | None definiton: "MyDataDefinition" class MyDataDefinition(TypedDict): f1: int f2: str with open("my_record.json", "r") as f: record = cast(MyDataContainer, json.load(f)) record[ # at this point your IDE should hint "label", "value", or "definiton" Note: The cast doesn't do anything, it just asserts the type to tooling like your IDE. If you want actual run-time checks against the data you're loading, you need to install third-party libraries like typeguard. First, ask yourself though - is there actually value to this, or are you performing this merely to "do things the right way"? If you can't work with the limitations of dictionaries that I outlined, I'd recommend to go with pydantic. 
It supports serializing to / deseralizing from json, aliases, defaults, and many many more things: import pydantic class MyDataContainer(pydantic.BaseModel): label: str = pydantic.Field("Default Label", alias="Data Label") value: str | None = pydantic.Field(alias="Data Value") definiton: "MyDataDefinition" = pydantic.Field(alias="Data Description") class MyDataDefinition(pydantic.BaseModel): f1: int f2: str with open("my_record.json", "r") as f: record = MyDataContainer(**json.load(f)) # pydantic would have complained if the json didn't comply print(record.label) # prints: "Default Label" print(record.model_dump_json(indent=2)) # prints: # { # "Data Label": "Default Label", # "Data Value": null, # "Data Description": { # "f1": 1, # "f2": "2" # } # } | 2 | 3 |
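One caveat worth noting: with pydantic v2, model_dump_json serializes using field names by default, so depending on your version you may need by_alias=True to reproduce the alias-keyed output shown above:

```python
print(record.model_dump_json(indent=2, by_alias=True))
```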
78,276,558 | 2024-4-4 | https://stackoverflow.com/questions/78276558/pandas-dataframe-fillna-with-booleans | I have 2 dataframes, one that contains data and one that contains exlusions that need to be merged onto the data and marked as included(True or False). I have been doing this as follows for a couple of years by simply adding a new column to the exclusions data frame and setting everything to True, then merging that onto the main dataframe which results in the additional column containing either True or NaN. Finally I run a pd.fillna to replace all the NaN values with False and I'm good to go. import pandas as pd MainData = {'name': ['apple', 'pear', 'orange', 'watermelon'], 'other': ['blah' , 'blah', 'blah' , 'blah']} dfMainData = pd.DataFrame(MainData) Exclusions = {'name': ['pear' , 'watermelon'], 'reason': ['pears suck', 'too messy!']} dfExclusions = pd.DataFrame(Exclusions) dfExclusions['excluded'] = True dfMainData = pd.merge(dfMainData, dfExclusions, how='left', on='name') dfMainData['excluded'] = dfMainData['excluded'].fillna(False) I was previously running pands 1.2.4 but am making code updates and am migrating to 2.2.1 as part of this, and am now receiving the following warning: dfMainData['excluded'] = dfMainData['excluded'].fillna(False) :1: FutureWarning: Downcasting object dtype arrays on .fillna, .ffill, .bfill is deprecated and will change in a future version. Call result.infer_objects(copy=False) instead. To opt-in to the future behavior, set pd.set_option('future.no_silent_downcasting', True) It still technically works but it appears that this is NOT the pandas'esque way of doing things, so I am curious how I should be going about this now to avoid compatibility issues in the future? | You can use (for pandas 2.2.1 etc) : dfMainData['excluded'] = dfMainData['excluded'].fillna(0).astype('bool') which gives name other reason excluded 0 apple blah NaN False 1 pear blah pears suck True 2 orange blah NaN False 3 watermelon blah too messy! True | 4 | 4 |
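An alternative that avoids fillna on the boolean column entirely is to let merge flag the matches via indicator=True; the `dfExclusions['excluded'] = True` step then becomes unnecessary. A sketch based on the question's frames:

```python
# '_merge' is 'both' for rows that matched an exclusion, 'left_only' otherwise
dfMainData = pd.merge(dfMainData, dfExclusions, how='left', on='name', indicator=True)
dfMainData['excluded'] = dfMainData.pop('_merge').eq('both')
```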
78,276,184 | 2024-4-4 | https://stackoverflow.com/questions/78276184/filling-an-empty-data-frame-or-array-with-values-from-the-column-of-another-data | I need to creat an empty data frame that stores values from a column of another data frame base on some conditions being met in two columns of the same second data frame. I have a data frame test_mob_df = pd.DataFrame( {"geoid_o": [10002, 18039, 18039, 18182, 10006, 18111, 18005, 17001], "geoid_d": [10005, 18039, 18111, 18182, 18005, 17004, 18050, 15001], "pop_flows": [20,10,9,15,2,1,6,30]}) and a list of ids that I am interested in as state_county_fip = [18182, 18111, 18005, 18039, 18050, 18001]. I now need to create a new $nxn$ data frame (or an array) with row and column names as sorted state_county_fips, that stores the values in the pop_flows column of $test_mob_df$ whenever values in the same row of columns geoid_o and geoid_d match or not. In essence, the resulting data frame should look like this: 18005 18039 18005 18050 18111 18182 18005 0 0 0 0 0 0 18039 0 10 0 0 9 0 18005 0 0 0 6 0 0 18050 0 0 0 0 0 0 18111 0 0 0 0 0 0 18182 0 0 0 0 0 15 That is, I need to create a a dataframe (or matrix) of population flows from geoid_o to geoid_d, and when we do not have population flow from geoid_o to geoid_d, then we asign zero to the corresponding cell. For instance, 10 individuals moved from geoid_o 18005 to geoid_d 18050. I can't seem to figure out how to do this beyond using query to create a data frame (from test_mob_df) that has the geoids of interest: data_counties_of_interest = test_mob_df.query("18001<=geoid_o<18200 and 18001<=geoid_d<18200"). I will deeply appreciate any help you can offer. | You can do: test_mob_df = pd.DataFrame( { "geoid_o": [10002, 18039, 18039, 18182, 10006, 18111, 18005, 17001], "geoid_d": [10005, 18039, 18111, 18182, 18005, 17004, 18050, 15001], "pop_flows": [20, 10, 9, 15, 2, 1, 6, 30], } ) state_county_fip = [18182, 18111, 18005, 18039, 18050, 18001] out = pd.crosstab( test_mob_df.loc[test_mob_df["geoid_o"].isin(state_county_fip), "geoid_o"], test_mob_df.loc[test_mob_df["geoid_d"].isin(state_county_fip), "geoid_d"], values=test_mob_df["pop_flows"], aggfunc="first", ) out = ( out.reindex(index=state_county_fip, columns=state_county_fip) .fillna(0) .sort_index(axis=1) .sort_index() .astype(int) ) print(out) Prints: geoid_d 18001 18005 18039 18050 18111 18182 geoid_o 18001 0 0 0 0 0 0 18005 0 0 0 6 0 0 18039 0 0 10 0 9 0 18050 0 0 0 0 0 0 18111 0 0 0 0 0 0 18182 0 0 0 0 0 15 | 3 | 2 |
78,274,097 | 2024-4-4 | https://stackoverflow.com/questions/78274097/group-cluster-polars-dataframe-by-substring-in-string-or-string-in-substring | Given this Polars DataFrame: df = pl.DataFrame( { "id": [1, 2, 3, 4, 5], "values": ["A", "B", "A--B", "C--A", "D"], } ) 1, How can I group/cluster it so that 1,2 and 3 ends up in the same group? 2. Can I even achieve having 4 in the same group/cluster? | Assuming you want to merge groups based on the substrings (separated by --), this is unfortunately not straightforward. You can't vectorize this since a member of a group can link to another group that links to another, etc. One option is to use graph theory to identify the connected components. You can do this with networkx and connected_components: import networkx as nx G = nx.from_pandas_edgelist(df.with_columns(pl.col('values').str.split('--')) .explode('values'), source='id', target='values') S = set(df['id']) mapper = {n: i for i, c in enumerate(nx.connected_components(G)) for n in c&S} # {1: 0, 2: 0, 3: 0, 4: 0, 5: 1} out = df.group_by(pl.col('id').replace(mapper).alias('group')).agg(pl.all()) Output: shape: (2, 3) βββββββββ¬ββββββββββββββ¬βββββββββββββββββββββββ β group β id β values β β --- β --- β --- β β i64 β list[i64] β list[str] β βββββββββͺββββββββββββββͺβββββββββββββββββββββββ‘ β 0 β [1, 2, β¦ 4] β ["A", "B", β¦ "C--A"] β β 1 β [5] β ["D"] β βββββββββ΄ββββββββββββββ΄βββββββββββββββββββββββ Graph: | 2 | 1 |
78,272,574 | 2024-4-4 | https://stackoverflow.com/questions/78272574/what-is-the-best-way-to-slice-a-dataframe-up-to-the-first-instance-of-a-mask | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': [10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70], 'b': [1, 1, 1, -1, -1, -2, -1, 2, 2, -2, -2, 1, -2], } ) The mask is: mask = ( (df.b == -2) & (df.b.shift(1) > 0) ) Expected output: slicing df up to the first instance of the mask: a b 0 10 1 1 15 1 2 20 1 3 25 -1 4 30 -1 5 35 -2 6 40 -1 7 45 2 8 50 2 The first instance of the mask is at row 9. So I want to slice the df up to this index. This is what I have tried. It works but I am not sure if it is the best way: idx = df.loc[mask.cumsum().eq(1) & mask].index[0] result = df.iloc[:idx] | You can filter by inverted mask with Series.cummax: out = df[~mask.cummax()] print (out) a b 0 10 1 1 15 1 2 20 1 3 25 -1 4 30 -1 5 35 -2 6 40 -1 7 45 2 8 50 2 How it working: print (df.assign(mask=mask, cumax=mask.cummax(), inv_cummax=~mask.cummax())) a b mask cumax inv_cummax 0 10 1 False False True 1 15 1 False False True 2 20 1 False False True 3 25 -1 False False True 4 30 -1 False False True 5 35 -2 False False True 6 40 -1 False False True 7 45 2 False False True 8 50 2 False False True 9 55 -2 True True False 10 60 -2 False True False 11 65 1 False True False 12 70 -2 True True False | 2 | 5 |
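An equivalent sketch that slices positionally instead of filtering; it relies on the default RangeIndex from the question, so labels equal positions:

```python
# position of the first True in the mask; keep everything if there is none
cut = mask.idxmax() if mask.any() else len(df)
result = df.iloc[:cut]
```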
78,271,090 | 2024-4-4 | https://stackoverflow.com/questions/78271090/why-do-i-get-valueerror-unrecognized-data-type-x-of-type-class-list | I tried to run the code below, taken from CS50's AI course: import csv import tensorflow as tf from sklearn.model_selection import train_test_split # Read data in from file with open("banknotes.csv") as f: reader = csv.reader(f) next(reader) data = [] for row in reader: data.append( { "evidence": [float(cell) for cell in row[:4]], "label": 1 if row[4] == "0" else 0, } ) # Separate data into training and testing groups evidence = [row["evidence"] for row in data] labels = [row["label"] for row in data] X_training, X_testing, y_training, y_testing = train_test_split( evidence, labels, test_size=0.4 ) # Create a neural network model = tf.keras.models.Sequential() # Add a hidden layer with 8 units, with ReLU activation model.add(tf.keras.layers.Dense(8, input_shape=(4,), activation="relu")) # Add output layer with 1 unit, with sigmoid activation model.add(tf.keras.layers.Dense(1, activation="sigmoid")) # Train neural network model.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) model.fit(X_training, y_training, epochs=20) # Evaluate how well model performs model.evaluate(X_testing, y_testing, verbose=2) However, I get the following error: Traceback (most recent call last): File "C:\Users\Eric\Desktop\coding\cs50\ai\lectures\lecture5\banknotes\banknotes.py", line 41, in <module> model.fit(X_training, y_training, epochs=20) File "C:\Users\Eric\Desktop\coding\cs50\ai\.venv\Lib\site-packages\keras\src\utils\traceback_utils.py", line 122, in error_handler raise e.with_traceback(filtered_tb) from None File "C:\Users\Eric\Desktop\coding\cs50\ai\.venv\Lib\site-packages\keras\src\trainers\data_adapters\__init__.py", line 113, in get_data_adapter raise ValueError(f"Unrecognized data type: x={x} (of type {type(x)})") ValueError: Unrecognized data type: x=[...] (of type <class 'list'>) where "..." is the training data. Any idea what went wrong? I'm using Python version 3.11.8 and TensorFlow version 2.16.1 on a Windows computer. I tried running the same code in a Google Colab notebook, and it works: the problem only occurs on my local machine. 
This is the output I'm expecting: Epoch 1/20 26/26 [==============================] - 1s 2ms/step - loss: 1.1008 - accuracy: 0.5055 Epoch 2/20 26/26 [==============================] - 0s 2ms/step - loss: 0.8588 - accuracy: 0.5334 Epoch 3/20 26/26 [==============================] - 0s 2ms/step - loss: 0.6946 - accuracy: 0.5917 Epoch 4/20 26/26 [==============================] - 0s 2ms/step - loss: 0.5970 - accuracy: 0.6683 Epoch 5/20 26/26 [==============================] - 0s 2ms/step - loss: 0.5265 - accuracy: 0.7120 Epoch 6/20 26/26 [==============================] - 0s 2ms/step - loss: 0.4717 - accuracy: 0.7655 Epoch 7/20 26/26 [==============================] - 0s 2ms/step - loss: 0.4258 - accuracy: 0.8177 Epoch 8/20 26/26 [==============================] - 0s 2ms/step - loss: 0.3861 - accuracy: 0.8433 Epoch 9/20 26/26 [==============================] - 0s 2ms/step - loss: 0.3521 - accuracy: 0.8615 Epoch 10/20 26/26 [==============================] - 0s 2ms/step - loss: 0.3226 - accuracy: 0.8870 Epoch 11/20 26/26 [==============================] - 0s 2ms/step - loss: 0.2960 - accuracy: 0.9028 Epoch 12/20 26/26 [==============================] - 0s 2ms/step - loss: 0.2722 - accuracy: 0.9125 Epoch 13/20 26/26 [==============================] - 0s 2ms/step - loss: 0.2506 - accuracy: 0.9283 Epoch 14/20 26/26 [==============================] - 0s 2ms/step - loss: 0.2306 - accuracy: 0.9514 Epoch 15/20 26/26 [==============================] - 0s 3ms/step - loss: 0.2124 - accuracy: 0.9660 Epoch 16/20 26/26 [==============================] - 0s 2ms/step - loss: 0.1961 - accuracy: 0.9769 Epoch 17/20 26/26 [==============================] - 0s 2ms/step - loss: 0.1813 - accuracy: 0.9781 Epoch 18/20 26/26 [==============================] - 0s 2ms/step - loss: 0.1681 - accuracy: 0.9793 Epoch 19/20 26/26 [==============================] - 0s 2ms/step - loss: 0.1562 - accuracy: 0.9793 Epoch 20/20 26/26 [==============================] - 0s 2ms/step - loss: 0.1452 - accuracy: 0.9830 18/18 - 0s - loss: 0.1407 - accuracy: 0.9891 - 187ms/epoch - 10ms/step [0.14066053926944733, 0.9890710115432739] | https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit It appears you're giving Model.fit([X], [y]) the wrong type. What I almost always do before handing off data to train_test_split is converting my features and labels to np arrays. So you can either convert them before handing them off to train_test_split or do it before the model.fit(...) NOTE: Don't forget to add import numpy as np So in your case you'd do: X_training_np = np.array(X_training) y_training_np = np.array(y_training) model.fit(X_training_np, y_training_np, epochs=...) | 5 | 10 |
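A minimal sketch of the fix applied to the script from the question (assuming everything else stays as posted); the test split needs the same conversion before evaluate:

```python
import numpy as np

# The Keras data adapters in TF 2.16 reject plain Python lists,
# so convert both splits to NumPy arrays before fit/evaluate
X_training, y_training = np.array(X_training), np.array(y_training)
X_testing, y_testing = np.array(X_testing), np.array(y_testing)

model.fit(X_training, y_training, epochs=20)
model.evaluate(X_testing, y_testing, verbose=2)
```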
78,269,252 | 2024-4-3 | https://stackoverflow.com/questions/78269252/concatenating-pandas-dataframe-with-multi-index-in-different-order | I have two data frames, which should be concatenated. Both are multi index data frames with identical indexes, but in a different order. So, the index of the first data frame (df) looks like: MultiIndex([(11, 1, 1), (11, 1, 2), (11, 1, 3), ... (11, 24, 5), (11, 24, 6), (11, 24, 7)], names=['id_a', 'id_b', 'id_c'], length=168) The second looks like: MultiIndex([(11, 1, 1), (11, 2, 1), (11, 3, 1), (11, 3, 2), ... (11, 5, 23), (11, 6, 23), (11, 7, 23), (11, 7, 24)], names=['id_a', 'id_c', 'id_b'], length=168) As you can see, indexes are in a different order. Now by running pd.concat([df, df2]).index.names I get the following results: FrozenList(['id_a', None, None]) How to reproduce import pandas as pd # create first data frame idx = pd.MultiIndex.from_product( [['A1', 'A2', 'A3'], ['B1', 'B2', 'B3'], ['C1', 'C2', 'C3']], names=['a', 'b', 'c']) cols = ['2010', '2020'] df = pd.DataFrame(1, idx, cols) # Create second data frame with varying order idx = pd.MultiIndex.from_product( [['A1', 'A2', 'A3'], ['C1', 'C2', 'C3'], ['B1', 'B2', 'B3']], names=['a', 'c', 'b']) df2 = pd.DataFrame(2, idx, cols) result = pd.concat([df, df2]) Output > df 2010 2020 a b c A1 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 A2 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 A3 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 > df2 2010 2020 a c b A1 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 A2 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 A3 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 > result 2010 2020 a A1 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 A2 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 A3 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 B3 C1 1 1 C2 1 1 C3 1 1 A1 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 A2 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 A3 C1 B1 2 2 B2 2 2 B3 2 2 C2 B1 2 2 B2 2 2 B3 2 2 C3 B1 2 2 B2 2 2 B3 2 2 > result.index.names FrozenList(['a', None, None]) Indexes 'b' and 'c' are gone. | If your levels are in a different order, you first need to reorder them with reorder_levels: result = pd.concat([df, df2.reorder_levels(df.index.names)]) If you have more than 2 DataFrames to concatenate: dfs = [df, df2, df3, df4] levels = dfs[0].index.names result = pd.concat([d.reorder_levels(levels) for d in dfs]) Output: 2010 2020 a b c A1 B1 C1 1 1 C2 1 1 C3 1 1 B2 C1 1 1 C2 1 1 C3 1 1 ... A1 B1 C1 2 2 B2 C1 2 2 B3 C1 2 2 B1 C2 2 2 ... | 2 | 5 |
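One follow-up worth noting (my addition, not part of the answer): after the concat, the rows coming from df2 still sit in a separate block below the rows from df; if the combined frame should instead be ordered by its index, a sort_index() call on the result does that:

```python
# continuing from the question's df and df2
result = pd.concat([df, df2.reorder_levels(df.index.names)]).sort_index()
```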
78,271,102 | 2024-4-4 | https://stackoverflow.com/questions/78271102/improve-dataframe-performance-for-large-datasets | I have a large dataset and need to filter out root packages as below: sort the data (package column) by string length. Start from the beginning and scan the following data; if it starts with the current data, mark it as False. Repeat step 2 till the end. To improve performance, I added a flag column to keep track of whether it's processed or not. It sounds like the big-O is now n instead of n squared. Is there anything we can improve? The obvious one is the second for loop, which now uses continue to skip previous items; maybe there's a better way to do that? import re import pandas as pd import tabulate def dumpdf(df): if len(df) == 0: return df = df.reset_index(drop=True) tab = tabulate.tabulate(df, headers='keys', tablefmt='psql',showindex=True) print(tab) return def main(): data = [ ['A','com.example'], ['A','com.example.a'], ['A','com.example.b.c'], ['A','com.fun'], ['B','com.demo'], ['B','com.demo.b.c'], ['B','com.fun'], ['B','com.fun.e'], ['B','com.fun.f.g'] ] df = pd.DataFrame(data,columns=['name','package']) df ['flag'] = None df = df.sort_values(by="package", key=lambda x: x.str.len()).reset_index(drop=True) for idx,row in df.iterrows(): if row['flag'] == None: df.loc[idx,'flag'] = True for jdx, jrow in df.iterrows(): if jdx <= idx: continue if row['name'] == jrow['name']: if jrow['package'].startswith(row['package']): df.loc[jdx,'flag'] = False # df = df[df['flag']] df = df.groupby('name',as_index=False).agg({'package':'\n'.join}) dumpdf(df) return main() | IIUC you can do: from functools import cmp_to_key from itertools import groupby def fn(g): out = [] for _, k in groupby(g.sort_values(), cmp_to_key(lambda a, b: not b.startswith(a))): out.append(next(k)) return pd.Series(out) out = df.groupby("name")["package"].apply(fn).droplevel(1).reset_index() print(out) Prints: name package 0 A com.example 1 A com.fun 2 B com.demo 3 B com.fun | 3 | 4 |
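An alternative sketch of the same idea without cmp_to_key (my own formulation, not the accepted answer): within each name, walk the sorted packages and keep one only if it does not extend the last kept root. The '.' suffix in the prefix check is a deliberate tweak so that, say, com.funny would not be treated as a child of com.fun, which a plain startswith would do:

```python
import pandas as pd

data = [
    ['A', 'com.example'], ['A', 'com.example.a'], ['A', 'com.example.b.c'],
    ['A', 'com.fun'], ['B', 'com.demo'], ['B', 'com.demo.b.c'],
    ['B', 'com.fun'], ['B', 'com.fun.e'], ['B', 'com.fun.f.g'],
]
df = pd.DataFrame(data, columns=['name', 'package'])

def roots(packages):
    kept = []
    for pkg in sorted(packages):
        # skip packages nested under the most recently kept root
        if kept and pkg.startswith(kept[-1] + "."):
            continue
        kept.append(pkg)
    return kept

out = (
    df.groupby("name")["package"]
    .apply(lambda s: pd.Series(roots(s)))
    .droplevel(1)
    .reset_index()
)
print(out)  # same four root packages as the accepted answer
```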
78,268,258 | 2024-4-3 | https://stackoverflow.com/questions/78268258/attributeerror-sentencetransformer-object-has-no-attribute-embed-documents | I'm trying to build a RAG using the Chroma database, but when I try to create it I have the following error : AttributeError: 'SentenceTransformer' object has no attribute 'embed_documents'. I saw that you can somehow fix it by modifying the Chroma library directly, but I don't have the rights for it on my environment. If someone has a piece of an advice, be pleased. The ultimate goal is to use the index as a query engine for a chatbot. This is what I tried Code: #We load the chunks of texts and declare which column is to be embedded chunks = DataFrameLoader(final_df_for_chroma_injection, page_content_column='TEXT').load() #create the open-source embedding function embedding_model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2') #-Load the persist directory on which are stored the previous embeddings #-And add the new ones from chunks/embeddings index = Chroma.from_documents(chunks, embedding_model, persist_directory="./chroma_db") This is the error I get: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[47], line 3 1 #-Load the persist directory on which are stored the previous embeddings 2 #-And add the new ones from chunks/embeddings ----> 3 index = Chroma.from_documents(chunks, 4 embedding_model, 5 persist_directory="./chroma_db") File /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:778, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs) 776 texts = [doc.page_content for doc in documents] 777 metadatas = [doc.metadata for doc in documents] --> 778 return cls.from_texts( 779 texts=texts, 780 embedding=embedding, 781 metadatas=metadatas, 782 ids=ids, 783 collection_name=collection_name, 784 persist_directory=persist_directory, 785 client_settings=client_settings, 786 client=client, 787 collection_metadata=collection_metadata, 788 **kwargs, 789 ) File /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:736, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs) 728 from chromadb.utils.batch_utils import create_batches 730 for batch in create_batches( 731 api=chroma_collection._client, 732 ids=ids, 733 metadatas=metadatas, 734 documents=texts, 735 ): --> 736 chroma_collection.add_texts( 737 texts=batch[3] if batch[3] else [], 738 metadatas=batch[2] if batch[2] else None, 739 ids=batch[0], 740 ) 741 else: 742 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids) File /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:275, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs) 273 texts = list(texts) 274 if self._embedding_function is not None: --> 275 embeddings = self._embedding_function.embed_documents(texts) 276 if metadatas: 277 # fill metadatas with empty dicts if somebody 278 # did not specify metadata for all texts 279 length_diff = len(texts) - len(metadatas) File /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1688, in Module.__getattr__(self, name) 1686 if name in modules: 1687 return 
modules[name] -> 1688 raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'SentenceTransformer' object has no attribute 'embed_documents'``` | Use SentenceTransformerEmbeddings instead of SentenceTransformer, or simply HuggingFaceEmbeddings. Reference: https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers | 4 | 2 |
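A sketch of the suggested fix applied to the question's snippet (assuming the same chunks variable and the langchain_community install shown in the traceback; the model name is the one from the question):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

# The LangChain wrapper exposes embed_documents(), which Chroma expects;
# a bare SentenceTransformer does not
embedding_model = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L12-v2"
)

index = Chroma.from_documents(
    chunks,
    embedding_model,
    persist_directory="./chroma_db",
)
```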
78,270,330 | 2024-4-3 | https://stackoverflow.com/questions/78270330/how-to-find-linear-dependence-mod-2-in-python | I have an n+1 by n matrix of integers. I want to find a linear combination of the rows that reduces to zero mod 2. How would I do this in Python? I could write Gaussian elimination on my own, but I feel there ought to be a way to do this using NumPy or another library without writing it from scratch. Example: I have the matrix [1, 3, 0] [1, 1, 0] [1, 0, 1] [0, 1, 5] The function should return [1, 0, 1, 1], because that linear combination yields [2,4,6] = [0,0,0] mod 2. | You can use galois: from galois import GF2 import numpy as np A = [[1, 3, 0], [1, 1, 0], [1, 0, 1], [0, 1, 5]] A = GF2(np.array(A).T % 2) print(A.null_space()) It gives: [[1 0 1 1] [0 1 1 1]] The rows are a basis of the null space of the matrix over the field F_2. | 2 | 5 |
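A quick sanity check (my own addition) that both basis rows returned above really are dependencies mod 2; left-multiplying the matrix by each 0/1 vector sums the selected rows:

```python
import numpy as np

A = np.array([[1, 3, 0], [1, 1, 0], [1, 0, 1], [0, 1, 5]])

for combo in ([1, 0, 1, 1], [0, 1, 1, 1]):
    # row-vector times matrix picks out and sums the flagged rows
    print(combo, "->", (np.array(combo) @ A) % 2)  # both give [0 0 0]
```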