Columns: question_id (int64, 59.5M to 79.4M), creation_date (string, length 8 to 10), link (string, length 60 to 163), question (string, length 53 to 28.9k), accepted_answer (string, length 26 to 29.3k), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482)
78,620,611
2024-6-14
https://stackoverflow.com/questions/78620611/join-merge-multiple-pandas-dataframes-with-blending-of-values-on-one-column
I have two pandas dataframes width unique column names except: year student_id student_name I would like to merge on these 3 columns (year, student_id, student_name), however sometimes the student_name is misspelled (or has a different spelling). Essentially I would like to merge on year & column, while preserving the student_name column. wrt varying student_name, I am indifferent as to which student_name is chosen (although it would be nice if it chose the more frequently occurring student_name, but I don't want to ask for too much). I would prefer the merged/final dataframe to use only ONE version of the student_name (per student_id), but am willing to settle otherwise :) Example: >>> df_A year student_id student_name exam_A 0 2023 12345 Chris P. Bacon 80 1 2024 12345 Chris Bacon 90 2 2024 33333 Noah Buddy 90 3 2021 55555 Faye Kipperson 99 4 2024 11111 Beau Gusman 75 >>> df_B year student_id student_name exam_B exam_C 0 2024 12345 Chris P. Bacon 90 75 1 2024 33333 Noah Buddy 88 77 2 2020 88888 Saul Goodman 86 88 3 2023 88888 Saul Goodman 99 79 4 2024 55555 Fay Kipperson 82 75 5 2024 11111 Beau Gusman 80 99 What I want is this => year student_id student_name exam_A exam_B exam_C 0 2020 88888 Saul Goodman NaN 86.0 88.0 1 2021 55555 Faye Kipperson 99.0 NaN NaN 2 2023 12345 Chris P. Bacon 80.0 NaN NaN 3 2023 88888 Saul Goodman NaN 99.0 79.0 4 2024 11111 Beau Gusman 75.0 80.0 99.0 5 2024 12345 Chris Bacon 90.0 90.0 75.0 6 2024 33333 Noah Buddy 90.0 88.0 77.0 7 2024 55555 Faye Kipperson NaN 82.0 75.0 I DON'T WANT multiple student_name columns. Currently, I am merging one dataframe at a time & then going back through and filling in the null student_name cells. Question #2: I actually have about 33 dataframes to merge (all with unique column names except year, student_id, student_name), is there a way for me to merge them in this same fashion all at once (as opposed to merging each one in individually)? Thank you very much!!! I have tried: >>> pd.merge(df_A, df_B, on=[ 'year', 'student_id', ], how='outer') year student_id student_name_x exam_A student_name_y exam_B exam_C 0 2020 88888 NaN NaN Saul Goodman 86.0 88.0 1 2021 55555 Faye Kipperson 99.0 NaN NaN NaN 2 2023 12345 Chris P. Bacon 80.0 NaN NaN NaN 3 2023 88888 NaN NaN Saul Goodman 99.0 79.0 4 2024 11111 Beau Gusman 75.0 Beau Gusman 80.0 99.0 5 2024 12345 Chris Bacon 90.0 Chris P. Bacon 90.0 75.0 6 2024 33333 Noah Buddy 90.0 Noah Buddy 88.0 77.0 7 2024 55555 NaN NaN Fay Kipperson 82.0 75.0 >>> pd.merge(df_A, df_B.drop(columns=[ 'student_name' ]), on=[ 'year', 'student_id', ], how='outer') year student_id student_name exam_A exam_B exam_C 0 2020 88888 NaN NaN 86.0 88.0 1 2021 55555 Faye Kipperson 99.0 NaN NaN 2 2023 12345 Chris P. Bacon 80.0 NaN NaN 3 2023 88888 NaN NaN 99.0 79.0 4 2024 11111 Beau Gusman 75.0 80.0 99.0 5 2024 12345 Chris Bacon 90.0 90.0 75.0 6 2024 33333 Noah Buddy 90.0 88.0 77.0 7 2024 55555 NaN NaN 82.0 75.0
To get the most common name, concatenate all the student ids and names, group on 'student_id' and aggregate with mode. Do an outer merge of left and right dataframes excluding column 'student_name'. To streamline this, you can create a function that takes a list of dataframes, and returns the resulting dataframe. As an example, including also a df_C: year student_id student_name exam_X exam_Z 0 2024 12345 Chris P. Bacon 90 75 1 2024 33333 Noah Buddy 88 77 2 2020 88888 Saul Goodman 86 88 3 2023 88888 Saul Goodman 99 79 4 2024 55555 Fay Kipperson 82 75 5 2024 11111 Beau Gusman 80 99 You can use something like this: def merge_grades(dfs: list[pd.DataFrame]): df_students = ( pd.concat([df[["student_id", "student_name"]] for df in dfs]) .groupby("student_id", as_index=False) .agg(lambda x: x.mode()[0]) ) df_left = dfs[0].drop(columns="student_name") for df_right in dfs[1:]: df_left = pd.merge( df_left, df_right.drop(columns="student_name"), how="outer", ) return df_left.merge(df_students) df = merge_grades([df_A, df_B, df_C]) year student_id exam_A exam_B exam_C exam_X exam_Z student_name 0 2020 88888 NaN 86.0 88.0 86.0 88.0 Saul Goodman 1 2021 55555 99.0 NaN NaN NaN NaN Fay Kipperson 2 2023 12345 80.0 NaN NaN NaN NaN Chris P. Bacon 3 2023 88888 NaN 99.0 79.0 99.0 79.0 Saul Goodman 4 2024 11111 75.0 80.0 99.0 80.0 99.0 Beau Gusman 5 2024 12345 90.0 90.0 75.0 90.0 75.0 Chris P. Bacon 6 2024 33333 90.0 88.0 77.0 88.0 77.0 Noah Buddy 7 2024 55555 NaN 82.0 75.0 82.0 75.0 Fay Kipperson
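If you prefer to avoid the explicit loop over dfs[1:], the same helper can be written with functools.reduce. This is only a restatement of the answer above, under the same assumption that every frame shares the year, student_id and student_name columns:

from functools import reduce
import pandas as pd

def merge_grades(dfs: list[pd.DataFrame]) -> pd.DataFrame:
    # most frequent spelling of each student's name across all frames
    names = (
        pd.concat([df[["student_id", "student_name"]] for df in dfs])
        .groupby("student_id", as_index=False)
        .agg(lambda s: s.mode()[0])
    )
    # outer-merge all frames on their shared key columns, name column excluded
    merged = reduce(
        lambda left, right: left.merge(right, how="outer"),
        (df.drop(columns="student_name") for df in dfs),
    )
    return merged.merge(names, on="student_id")

df = merge_grades([df_A, df_B, df_C])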
2
1
78,617,536
2024-6-13
https://stackoverflow.com/questions/78617536/a-reliable-way-to-check-if-a-method-has-been-wrapped-with-a-decorator-from-withi
In python 3.7 or higher. I have the following class: def dec(method): @functools.wraps(method) def wrapper(*args, **kwargs): print("wrapped") return method(*args, **kwargs) return wrapper class A: @dec() def f1(self): print(is_wrapped()) def f2(self): print(is_wrapped()) I want A().f1() to print True and A().f2() to print False. I created the following code for is_wrapped: def is_wrapped(): frame = inspect.currentframe().f_back for v in frame.f_back.f_locals.values(): if hasattr(v, '__code__') and v.__code__ is frame.f_code: return True return False While this seems to work it can fail if the caller of f2 has a local variable that contains f2 but does not decorate it. For my specific case you can assume the following. The is_wrapped function is called directly from the function in a way that inspect.currentframe().f_back is the method that called it (i.e, f1 or f2) The is_wrapped function cannot get any arguments. the decorator can be implemented in any way as long as it uses functool.wraps for the wrapped function. Is there any more reliable way to achieve this?
I have revisited this question since my last comment, and here's a probably better way: instead of looking for a local variable with same __code__ as frame's code, let's look for something that has __wrapped__ satisfying that condition. Why is this better? Since you say that functools.wraps is a requirement for decorator, we can be sure that a decorator must have some local with __wrapped__ - that's the function built inside. So now we're looking for the following: "traversing from current frame, can the grandparent be a function definition that's called now?" So, if current frame is the one inside is_wrapped, we traverse one back to the original method (f1). This assumes that decorator does actually call the decorated function, which makes sense given functools.wraps requirement. If there are no previous frame, we're at some wrong place, probably is_wrapped shouldn't even be called from that. Otherwise, go a step further: we're now in function built inside the decorator. So the sketch of a modified version may look like this: import gc def is_wrapped(): frame = inspect.currentframe().f_back if frame is None: raise RuntimeError("No call frame found") orig_code = frame.f_code # Looking for this original definition if frame.f_back is None: return False deco_code = frame.f_back.f_code for o in gc.get_referrers(frame.f_back.f_code): if getattr(o, '__code__', None) is deco_code and hasattr(o, '__wrapped__'): return getattr(o.__wrapped__, '__code__', None) is orig_code return False It passes all "testcases" from your question and comments, works for "deep" decorators that accept parameters, but is still breakable - for example, if a decorator-created function does not call the original code, you're out of luck. But it's something less trivial than simply referencing the method in enclosing scope, isn't it? Credits for gc approach of retrieving the actual function go to @MikeHordecki. Use at your own risk and test thoroughly. Also note that Warning: Care must be taken when using objects returned by get_referrers() because some of them could still be under construction and hence in a temporarily invalid state. Avoid using get_referrers() for any purpose other than debugging.
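A small harness to exercise it, assuming is_wrapped is defined as above and using a plain @dec (the question's @dec() would fail, since dec takes the method directly); the class and method names are just for illustration:

import functools
import gc
import inspect

def dec(method):
    @functools.wraps(method)      # sets wrapper.__wrapped__ = method
    def wrapper(*args, **kwargs):
        return method(*args, **kwargs)
    return wrapper

class A:
    @dec
    def f1(self):
        return is_wrapped()

    def f2(self):
        return is_wrapped()

print(A().f1())  # True  - reached through the functools.wraps wrapper
print(A().f2())  # False - called directly, no wrapper frame above it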
4
2
78,619,953
2024-6-13
https://stackoverflow.com/questions/78619953/why-is-format-throwing-valueerror-unknown-format-code-f-for-object-of-type
I am using Python 2.7. (Switching to Python 3 for this particular code is not an option, please don't suggest it.) I am writing unit tests for some code. Here is the relevant piece of code: class SingleLineGrooveTable: VEFMT = '.3f' @classmethod def formatve(cls, value, error=None): er = 0 if error is not None: er = error v = value elif len(value) > 1: v, er = value else: v = value return format(v, cls.VEFMT), format(er, cls.VEFMT) and my test is: import unittest class TestSingleLineGrooveTable(unittest.TestCase): def test_formatve_no_error(self): e_v = '3.142' e_er = '0.000' r_v, r_er = SingleLineGrooveTable.formatve([3.1423]) self.assertEqual(e_v, r_v) self.assertEqual(e_er, r_er) (Yes, I know it's funny I'm getting an error on the test with "no_error" in the name...) When I run the test, it throws ValueError: Unknown format code 'f' for object of type 'str' on the return statement for the function. But I can't figure out where it's getting a str from. Possibly relevant, this code and the code I have that uses it were copied pretty much wholesale from someone else's code (who I can no longer contact), so maybe I'm calling it in the wrong way, but still, that's a list, not a string! What is going on here? How do I fix this?
On Python 2, object.__format__ effectively delegates to format(str(self), format_spec). You can see the implementation here. Since list inherits object.__format__, your first format call is effectively calling format(str([3.1423]), '.3f'). That's why you get the error message you do. This would still produce an error on Python 3. It'd just be a different error.
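A possible fix, assuming the intent of the else branch is that a one-element sequence means "value with no error": unpack the element instead of formatting the list itself. This is only a sketch of that assumption, not necessarily the original author's intent:

class SingleLineGrooveTable:
    VEFMT = '.3f'

    @classmethod
    def formatve(cls, value, error=None):
        er = 0
        if error is not None:
            er = error
            v = value
        elif len(value) > 1:
            v, er = value
        else:
            v, = value   # unpack the single element; v is now a float, not a list
        return format(v, cls.VEFMT), format(er, cls.VEFMT)

print(SingleLineGrooveTable.formatve([3.1423]))  # ('3.142', '0.000')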
3
2
78,619,364
2024-6-13
https://stackoverflow.com/questions/78619364/python-nbtlib-cant-append-a-compound-to-a-list
I am trying to add a Compound to a List to make a Compound List: import nbtlib from nbtlib.tag import * CpdList = List(Compound()) Cpd = Compound() Cpd['name'] = String("Name") Cpd['test'] = Byte(0) CpdList.append(Cpd) This returns the following error: >>> nbtlib.tag.IncompatibleItemType: Compound({'name': String('Name'), 'test': Byte(0)}) should be a End tag I don't understand what an End tag is, and how to solve this issue. I tried adding an End tag to the Compound, but without success: Cpd['End'] = End
You need to parameterize nbtlib.tag.List with the Compound type and then call it: import nbtlib from nbtlib.tag import Compound, List, String, Byte CpdList = List[Compound]() Cpd = Compound() Cpd['name'] = String("Name") Cpd['test'] = Byte(0) CpdList.append(Cpd) print(CpdList) Note that nbtlib.tag.List expects all of its elements to be of the same tag type. List(Compound()) creates an empty, untyped list whose expected item type defaults to End (the terminator tag for Compound tags), which is why the error says the appended Compound "should be a End tag". In case you need a dynamic type list, you can use Python's native list: import nbtlib from nbtlib.tag import Compound, String, Byte CpdList = list[Compound]() Cpd = Compound() Cpd['name'] = String("Name") Cpd['test'] = Byte(0) CpdList += [Cpd, 0, 'alice', {1: 2}] print(CpdList) Prints [Compound({'name': String('Name'), 'test': Byte(0)}), 0, 'alice', {1: 2}] which is the same as: import nbtlib from nbtlib.tag import Compound, String, Byte CpdList = list(Compound()) Cpd = Compound() Cpd['name'] = String("Name") Cpd['test'] = Byte(0) CpdList += [Cpd, 0, 'alice', {1: 2}] print(CpdList)
2
2
78,619,084
2024-6-13
https://stackoverflow.com/questions/78619084/unable-to-convert-the-column-data-and-store-it-in-one-new-column-in-dataframe-pa
I have a dataframe like below. A B C 0 3 329430734 998 1 3 329430742 258 2 3 329430776 126 3 3 329430778 998 4 3 329430784 33 5 3 329430851 21 6 3 329430897 998 7 3 329430917 998 8 3 329430943 998 9 3 329430945 998 and I am trying to transform the data in below way to store it in a new column. df["x"]= np.where((df['C'] == 998) | (df['C'] == 999), max(0, df['A']-1), df['C'] ) However i am getting below while trying transform the column values. ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). I tried to use apply function also using the way below but getting the same error. def maximum(x, y): if ((y == 998) | (y == 999)): return (max(0, x-1)) else: return (y) Unable to find out the issue behind it. Can anyone help me to resolve this issue?
You can use np.maximum: df["x"] = np.where(df["C"].isin([998, 999]), np.maximum(0, df["A"] - 1), df["C"]) A B C x 0 3 329430734 998 2 1 3 329430742 258 258 2 3 329430776 126 126 3 3 329430778 998 2 4 3 329430784 33 33 5 3 329430851 21 21 6 3 329430897 998 2 7 3 329430917 998 2 8 3 329430943 998 2 9 3 329430945 998 2
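For context, the original error comes from the built-in max(0, df['A'] - 1): Python's max needs a single truth value from the comparison, which a whole Series cannot provide, whereas np.maximum works element-wise. An equivalent sketch using Series.clip instead of np.maximum:

df["x"] = np.where(df["C"].isin([998, 999]), (df["A"] - 1).clip(lower=0), df["C"])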
2
4
78,617,576
2024-6-13
https://stackoverflow.com/questions/78617576/in-confluence-how-to-replicate-manual-search-with-api-search
I am following the Confluence API search documentation to implement text search with CQL (confluence query language) in the company Confluence pages. Here is my code for a search query: import requests from requests.auth import HTTPBasicAuth import urllib.parse # Replace with your Confluence credentials and base URL base_url = 'https://your-domain.atlassian.net/wiki' username = '[email protected]' api_token = CONFLUENCE_API_TOKEN search_term = 'What is the per diem costs allowed to be reimbursed for business trips to Paris?' encoded_search_term = urllib.parse.quote(search_term) # Construct the search URL with encoded search term search_url = f'{base_url}/rest/api/content/search?cql=text~"{encoded_search_term}"&limit=10' # Send the request response = requests.get(search_url, auth=HTTPBasicAuth(username, api_token)) # Check for successful response if response.status_code == 200: search_results = response.json() print(search_results) else: print(f'Error: {response.status_code}, {response.text}') Here is the result returned: {'results': [], 'start': 0, 'limit': 10, 'size': 0, '_links': {'base': 'https://your-domain.atlassian.net/wiki', 'context': '/wiki', 'self': 'https://your-domain.atlassian.net/wiki/rest/api/content/search?cql=text~%22What%20is%20the%20per%20diem%20costs%20allowed%20to%20be%20reimbursed%20for%20business%20trips%20to%20Paris%3F%22'}} So zero documents returned during the search. Whereas if I do the search manually in the Confluence page, here is the result: There are loads of documents retrieved with manual search. When I tried shorter search queries with the API, I do get results. Like, for "What is the per diem costs", I get 14 confluence documents retrieved; for "What is the per diem costs allowed to be reimbursed", I get 7 results; and 4 results for "What is the per diem costs allowed to be reimbursed for business trips". But zero results for "What is the per diem costs allowed to be reimbursed for business trips to Paris?" But these are all nowhere close to what I get with manual search (thousands of documents retrieved). So, how do I replicate this manual search with the API? What is the search algorithm used in the "simple" manual search? Here is the same search with the Atlassian API. The result is the same, zero documents returned:
I used the Network tab of developer console to see what exactly Confluence's "Simple" search is doing. Turns out it constructs a cql as follows: siteSearch ~ "SEARCH TEXT HERE" AND type in ("space","user","com.atlassian.confluence.extra.team-calendars:calendar-content-type","attachment","page","com.atlassian.confluence.extra.team-calendars:space-calendars-view-content-type","blogpost") So try changing text to siteSearch: cql_query = f'siteSearch ~ "{search_term}"' confluence.cql(cql_query, limit=1000) # default limit is 25 Note: Using this method I get a maximum of 500 results. Confluence API documentation does not mention this limit; but the documentation of the Confluence class of atlassian-python-api mentions that fixed system limits may be imposed, which I presume is what's happening here. If you need more than 500 results, you can do repeated calls using the start parameter of cql()
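A rough pagination sketch with the atlassian-python-api client that the snippet above assumes, under the assumption that cql() returns the parsed JSON with a results list (as the REST response in the question does); base_url, username, api_token and search_term are the values from the question:

from atlassian import Confluence

confluence = Confluence(url=base_url, username=username, password=api_token, cloud=True)

cql_query = f'siteSearch ~ "{search_term}"'
results, start, page_size = [], 0, 100
while True:
    page = confluence.cql(cql_query, start=start, limit=page_size)
    batch = page.get("results", [])
    results.extend(batch)
    if len(batch) < page_size:   # last page reached (or the ~500-result cap hit)
        break
    start += page_size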
2
2
78,618,381
2024-6-13
https://stackoverflow.com/questions/78618381/why-numpy-data-type-kind-returns-void-when-it-was-created-as-float64
I have this code: >>> d = np.dtype([("pos", np.float64, 3)]) >>> d[0].kind 'V' Why does it return 'V' instead of 'f'? In the full code, I need to know if the field corresponds to an integer, float, string...
d[0].kind does not return 'f' because it is not a floating-point dtype: it is a structured dtype with a floating point base. There are several other attributes of structured dtypes that you might be able to inspect depending on your particular goal. For example: >>> d[0].base dtype('float64') >>> d[0].subdtype[0].kind 'f'
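If the goal is to classify every field of a structured dtype, a small sketch that falls back to .base for sub-array fields; the second field here is a hypothetical addition for contrast:

import numpy as np

d = np.dtype([("pos", np.float64, 3), ("id", np.int64)])

for name in d.names:
    fld = d[name]
    # sub-array fields such as "pos" report kind 'V'; their element kind is on .base
    kind = fld.base.kind if fld.subdtype is not None else fld.kind
    print(name, kind)   # pos f / id i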
4
5
78,618,287
2024-6-13
https://stackoverflow.com/questions/78618287/image-not-found-in-my-django-project-only-in-one-of-the-2-pages
I'm trying to display a home image at the top of the page, but it doesn't work. I have the same image on another page and there it works. Also, if I hover over the image source link in the IDE, it tells me that the path is correct. Here is the line of code: <p><img src="../media/images/home.png" width="30px" alt="home image">HOME</p>
Relative paths are probably the culprit: if you visit a page whose URL has more slashes, then .. points to a different directory. Use absolute paths, so /media/… instead of ../media/….
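Applied to the line from the question, that would be:

<p><img src="/media/images/home.png" width="30px" alt="home image">HOME</p>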
2
3
78,617,300
2024-6-13
https://stackoverflow.com/questions/78617300/what-is-the-most-efficient-way-to-fillna-multiple-columns-with-values-from-other
This is my DataFrame: import pandas as pd import numpy as np df = pd.DataFrame( { 'x': [1, np.nan, 3, np.nan, 5], 'y': [np.nan, 7, 8, 9, np.nan], 'x_a': [1, 2, 3, 4, 5], 'y_a': [6, 7, 8, 9, 10] } ) Expected output is fill_na columns x and y: x y x_a y_a 0 1.0 6.0 1 6 1 2.0 7.0 2 7 2 3.0 8.0 3 8 3 4.0 9.0 4 9 4 5.0 10.0 5 10 Basically I want to fillna x with x_a and y with y_a. In other words each column should be paired with another column that has the suffix _a and the column name. I can get this output by using this code: for col in ['x', 'y']: df[col] = df[col].fillna(df[f'{col}_a']) But I wonder if it is the best/most efficient way? Suppose I got hundreds of columns like these
What about using an Index to select all columns at once and set_axis to realign the DataFrame: cols = pd.Index(['x', 'y']) df[cols] = df[cols].fillna(df[cols+'_a'].set_axis(cols, axis=1)) NB. this is assuming all columns in cols and all '_a' columns exist. If you're not sure you could be safe and use intersection and reindex: cols = pd.Index(['x', 'y']).intersection(df.columns) df[cols] = df[cols].fillna(df.reindex(columns=cols+'_a').set_axis(cols, axis=1)) Or for an approach that is fully independent of explicitly passing input columns and just relying on the suffix (_a): suffix = '_a' # find columns "xyz" that have a "xyz_a" counterpart c1 = df.columns.intersection(df.columns+suffix) c2 = c1.str.removesuffix(suffix) # select, fillna, update df[c2] = df[c2].fillna(df[c1].set_axis(c2, axis=1)) Output: x y x_a y_a 0 1.0 6.0 1 6 1 2.0 7.0 2 7 2 3.0 8.0 3 8 3 4.0 9.0 4 9 4 5.0 10.0 5 10 Example for which the second approach would be needed: df = pd.DataFrame( { 'x': [1, np.nan, 3, np.nan, 5], 'z': [np.nan, 7, 8, 9, np.nan], 'p_a': [1, 2, 3, 4, 5], 'y_a': [6, 7, 8, 9, 10] } )
11
10
78,614,904
2024-6-12
https://stackoverflow.com/questions/78614904/plotting-a-baseball-field-animation
I have a dataframe that includes position data from all 9 players on a baseball field including the hitter throughout a given play as well as the ball trajectory. I need some help with figuring out possibly why my animation is not working. The code below plots an instance of the plot, but it doesn't show a continuous animation. In other words, it should show dots moving continuously. Here is my code: import pandas as pd from sportypy.surfaces import MiLBField import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation import numpy as np # The dimensions are not exactly like this but this is an example if you need something to go off of num_rows = 50 data = { 'game_str': ['game_01'] * num_rows, 'play_id': [10] * num_rows, 'timestamp': np.random.randint(180000, 181000, size=num_rows), 'player_position': np.random.randint(1, 11, size=num_rows), 'field_x': np.random.uniform(-150, 150, size=num_rows), 'field_y': np.random.uniform(-150, 150, size=num_rows), 'ball_position_x': np.random.uniform(0.0, 2.0, size=num_rows), 'ball_position_y': np.random.uniform(0.0, 300.0, size=num_rows), 'ball_position_z': np.random.uniform(0.0, 10.0, size=num_rows) } df = pd.DataFrame(data).sort_values(by='timestamp') field = MiLBField() def update(frame): frame_data = df[df['timestamp'] <= frame] players = frame_data[['field_x', 'field_y']] balls = frame_data[['ball_position_x', 'ball_position_y']] plt.clf() field.draw(display_range='full') p = field.scatter(players['field_x'], players['field_y']) b = field.scatter(balls['ball_position_x'], balls['ball_position_y']) return p, b fig = plt.figure() ani = FuncAnimation(fig, update, frames=np.linspace(df['timestamp'].min(), df['timestamp'].max(), num=100), blit=True) plt.show() I would like it to output a baseball field with the scatter points moving as time increases.
There are few problems in code: Using field.draw() inside update() it tries to create many plots which slows down all program. But in matplotlib you can create it only once - outside update() It creates two plots - one with fields, and second with plot and data. And even example on homepage sportypy (in section Adding Analyses and Plotting Data) uses fig, ax = plt.subplots(1, 1) phf.draw(ax=ax) which creates fig and ax before draw() so it can uses this ax in draw() But for me it creates white background with green triangle instead of green background with green triangle - and solution was gcf() (get current figure) which gets fig created (automatically) by field.draw() field.draw(display_range='full') fig = plt.gcf() The last problem is plt.clf() which removes all from plot - so it removes field and it draws normal scatter with white background and axis. I removed it and now it shows data on field. Full working code: I added ani.save('animation.gif', writer='imagemagick', fps=2) to write it in animated GIF. It needs external program imagemagick I added np.random.seed(0) so everyone will test code on the same values. I added colors to players using player_position, and red to all balls. For test I used num=10 instead of num=100 in FuncAnimation() to run it faster. import pandas as pd from sportypy.surfaces import MiLBField import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation import numpy as np # The dimensions are not exactly like this but this is an example if you need something to go off of num_rows = 50 np.random.seed(0) # it will always generate the same data - so it is simpler to compare them data = { 'game_str': ['game_01'] * num_rows, 'play_id': [10] * num_rows, 'timestamp': np.random.randint(180000, 181000, size=num_rows), 'player_position': np.random.randint(1, 11, size=num_rows), 'field_x': np.random.uniform(-150, 150, size=num_rows), 'field_y': np.random.uniform(-150, 150, size=num_rows), 'ball_position_x': np.random.uniform(0.0, 2.0, size=num_rows), 'ball_position_y': np.random.uniform(0.0, 300.0, size=num_rows), 'ball_position_z': np.random.uniform(0.0, 10.0, size=num_rows) } df = pd.DataFrame(data).sort_values(by='timestamp') field = MiLBField() # it shows white background with green triangle #fig, ax = plt.subplots(1, 1) # get figure before drawing #field.draw(display_range='full', ax=ax) # it shows green background with green triangle field.draw(display_range='full') # without ax= fig = plt.gcf() # get figure after drawing def update(frame): print(f'frame: {frame:.2f}') frame_data = df[ df['timestamp'] <= frame ] #frame_data = df[ df['timestamp'] <= frame ].drop_duplicates(subset=['player_position'], keep='last') print('len(frame_data):', len(frame_data)) players = frame_data # no need [['field_x', 'field_y']] balls = frame_data # no need [['ball_position_x', 'ball_position_y']] #players = frame_data.drop_duplicates(subset=['player_position'], keep='last') print('len(players), len(balls):', len(players), len(balls)) players_colors = players['player_position'] balls_colors = ['red'] * len(balls) p = field.scatter(players['field_x'], players['field_y'], c=players_colors) b = field.scatter(balls['ball_position_x'], balls['ball_position_y'], c=balls_colors) return p, b ani = FuncAnimation(fig, update, frames=np.linspace(df['timestamp'].min(), df['timestamp'].max(), num=10), blit=True) ani.save('animation.gif', writer='imagemagick', fps=2) plt.show() But this has another "problem" - df['timestamp'] <= frame - it shows all positions from the beginnig. 
Maybe it would need to use previous_frame <= df['timestamp'] <= frame to show only last positions. But this removes all objects when there is no data between previous_frame, frame. previous_frame = None def update(frame): global previous_frame print(f'frame: {frame:.2f}') frame_data = df[ df['timestamp'] <= frame ] if previous_frame is None or previous_frame > frame: mask1 = (df['timestamp'] <= frame) frame_data = df[ mask1 ] else: mask1 = (df['timestamp'] <= frame) mask2 = (df['timestamp'] > previous_frame) frame_data = df[ mask1 & mask2 ] previous_frame = frame players = frame_data # no need [['field_x', 'field_y']] balls = frame_data # no need [['ball_position_x', 'ball_position_y']] p = field.scatter(players['field_x'], players['field_y']) b = field.scatter(balls['ball_position_x'], balls['ball_position_y']) return p, b Maybe it would need to filter data by play_id or player_position and keep only last position for every play_id or player_position. Something like: frame_data = df[ df['timestamp'] <= frame ].drop_duplicates(subset=['player_position'], keep='last') Or maybe filter only players but keep all balls frame_data = df[ df['timestamp'] <= frame ] players = frame_data.drop_duplicates(subset=['player_position'], keep='last') # no need [['field_x', 'field_y']] balls = frame_data # no need [['ball_position_x', 'ball_position_y']]
2
4
78,615,700
2024-6-13
https://stackoverflow.com/questions/78615700/aggregate-columns-that-fall-within-range
I have two dataframes called df and ranges: data = { 'group': ['A', 'B', 'A', 'C', 'B'], 'start': [10, 20, 15, 30, 25], 'end': [50, 40, 60, 70, 45], 'val1': [5, 10, 11, 12, 6], 'val2': [5, 2, 1, 1, 0], } df = pd.DataFrame(data) data = { 'group': ['A', 'B', 'C'], 'start': [0, 5, 25], 'end': [50, 7, 35], } ranges = pd.DataFrame(data) My goal is to aggregate the rows in df together based on whether they fall within the same range defined in ranges. I would like to aggregate them together such that for each val1, val2 column I get the min, max, mean, sum of that column within the context of the aggregation group. The catch here is that I need to do this for something like 5000 ranges in ranges and 500,000 rows in df. So I'd like a fast but memory efficient (relatively) solution. I'm open to solutions using similar frameworks such as vaex. Expected output where range_id is just a way to identify groups assuming they're not unique: range_id val1 val2 min max mean sum min max mean sum 0 0 5 5 5.0 5 5 5 5.0 5
IIUC, I would pre-filter the dataframe with map and boolean indexing, then perform a classical groupby.agg. This should keep the masks and intermediate (filtered) DataFrame minimal for memory efficiency, and minimize the size of the input for groupby. # columns to aggregate cols = ['val1', 'val2'] # ensure data is numeric df[cols] = df[cols].astype(int) # optional, just to avoid having to `set_index` twice tmp = ranges.set_index('group') # pre-filter the rows for memory efficiency # then perform a groupby.agg out = (df[( df['start'].ge(df['group'].map(tmp['start'])) &df['end'].le(df['group'].map(tmp['end'])))] .groupby('group', as_index=False)[cols].agg(['min', 'max', 'mean', 'sum']) ) Output: group val1 val2 min max mean sum min max mean sum 0 A 5 5 5.0 5 5 5 5.0 5 Intermediate before the groupby: group start end val1 val2 0 A 10 50 5 5 variant @sammywemmy proposed an variation of my solution. Instead of computing all aggregations simultaneously in groupby.agg, you could compute them individually and combine them with concat. This is faster and potentially a bit more efficient memory-wise. from itertools import product cols = ['val1', 'val2'] tmp = ranges.set_index('group') grouped = (df[( df['start'].ge(df['group'].map(tmp['start'])) &df['end'].le(df['group'].map(tmp['end'])))] ).groupby('group') aggs = ['min','mean','max','sum'] bunch = product(cols, aggs) contents = [] for col, _agg in bunch: outcome = grouped[col].agg(_agg) outcome.name = (col,_agg) contents.append(outcome) out = pd.concat(contents,axis=1) timings example generating function def init(N): df = pd.DataFrame({'group': np.random.randint(0, 20, N), 'start': np.random.randint(0, 100, N), 'end': np.random.randint(0, 100, N), 'val1': np.random.randint(0, 100, N), 'val2': np.random.randint(0, 100, N), }) # ensure start <= end df[['start', 'end']] = np.sort(df[['start', 'end']], axis=1) group = df['group'].unique() ranges = pd.DataFrame({'group': group, 'start': np.random.randint(0, 110, len(group)), 'end': np.random.randint(0, 110, len(group)), }) ranges[['start', 'end']] = np.sort(ranges[['start', 'end']], axis=1) return df, ranges memory_usage on 10M rows # mozway_pre_filter Line # Mem usage Increment Occurrences Line Contents ============================================================= 49 711.2 MiB 711.2 MiB 1 def mozway_pre_filter(df, ranges): 50 # columns to aggregate 51 711.2 MiB 0.0 MiB 1 cols = ['val1', 'val2'] 52 53 # ensure data is numeric 54 863.9 MiB 152.6 MiB 1 df[cols] = df[cols].astype(int) 55 56 # optional, just to avoid having to `set_index` twice 57 863.9 MiB 0.0 MiB 1 tmp = ranges.set_index('group') 58 59 # pre-filter the rows for memory efficiency 60 # then perform a groupby.agg 61 950.9 MiB 11.2 MiB 4 return (df[( df['start'].ge(df['group'].map(tmp['start'])) 62 881.6 MiB 9.5 MiB 1 &df['end'].le(df['group'].map(tmp['end'])))] 63 950.7 MiB -66.6 MiB 2 .groupby('group', as_index=False)[cols].agg(['min', 'max', 'mean', 'sum']) # donkey_merge Line # Mem usage Increment Occurrences Line Contents ============================================================= 66 884.4 MiB 884.4 MiB 1 def donkey_merge(df, ranges): 67 884.4 MiB 0.0 MiB 1 ranges = ranges.assign(range_id=ranges.index) 68 1484.4 MiB 600.1 MiB 1 df_merged = pd.merge(df, ranges, on='group') 69 1602.8 MiB 109.0 MiB 2 df_filtered = df_merged[(df_merged['start_x'] >= df_merged['start_y']) 70 1494.0 MiB 9.4 MiB 1 & (df_merged['end_x'] <= df_merged['end_y'])] 71 1602.8 MiB 0.0 MiB 2 aggregation_dict = {"val1": ['min', 'max', 'mean', 'sum'], 72 1602.8 MiB 0.0 
MiB 1 "val2": ['min', 'max', 'mean', 'sum']} 73 1585.3 MiB -17.6 MiB 1 return df_filtered.groupby('range_id').agg(aggregation_dict).reset_index() # Nayem_aggregate_range Line # Mem usage Increment Occurrences Line Contents ============================================================= 19 905.9 MiB 905.9 MiB 1 def Nayem_aggregate_range(df, ranges): 20 905.9 MiB 0.0 MiB 1 results = [] 21 961.5 MiB 0.0 MiB 21 for idx, row in ranges.iterrows(): 22 961.5 MiB 0.0 MiB 20 mask = ( 23 961.5 MiB 55.6 MiB 60 (df['group'] == row['group']) & 24 961.5 MiB 0.0 MiB 20 (df['start'] >= row['start']) & 25 961.5 MiB 0.0 MiB 20 (df['end'] <= row['end']) 26 ) 27 961.5 MiB 0.0 MiB 20 filtered_df = df[mask] 28 961.5 MiB 0.0 MiB 20 if not filtered_df.empty: 29 961.5 MiB 0.0 MiB 20 agg_dict = { 30 961.5 MiB 0.0 MiB 20 'val1_min': filtered_df['val1'].min(), 31 961.5 MiB 0.0 MiB 20 'val1_max': filtered_df['val1'].max(), 32 961.5 MiB 0.0 MiB 20 'val1_mean': filtered_df['val1'].mean(), 33 961.5 MiB 0.0 MiB 20 'val1_sum': filtered_df['val1'].sum(), 34 961.5 MiB 0.0 MiB 20 'val2_min': filtered_df['val2'].min(), 35 961.5 MiB 0.0 MiB 20 'val2_max': filtered_df['val2'].max(), 36 961.5 MiB 0.0 MiB 20 'val2_mean': filtered_df['val2'].mean(), 37 961.5 MiB 0.0 MiB 20 'val2_sum': filtered_df['val2'].sum(), 38 } 39 961.5 MiB 0.0 MiB 20 agg_dict['range_id'] = idx 40 961.5 MiB 0.0 MiB 20 results.append(agg_dict) 41 961.5 MiB 0.0 MiB 1 aggregated_df = pd.DataFrame(results) 42 961.5 MiB 0.0 MiB 1 aggregated_df = aggregated_df.set_index('range_id') 43 961.5 MiB 0.0 MiB 2 aggregated_df.columns = pd.MultiIndex.from_tuples( 44 961.5 MiB 0.0 MiB 1 [('val1', 'min'), ('val1', 'max'), ('val1', 'mean'), ('val1', 'sum'), 45 ('val2', 'min'), ('val2', 'max'), ('val2', 'mean'), ('val2', 'sum')] 46 ) 47 961.5 MiB 0.0 MiB 1 return aggregated_df # user24714692_merge_agg_ Line # Mem usage Increment Occurrences Line Contents ============================================================= 3 879.9 MiB 879.9 MiB 1 def user24714692_merge_agg_(df, ranges): 4 1429.1 MiB 549.2 MiB 1 mdf = pd.merge(df, ranges, on='group', suffixes=('', '_range')) 5 6 1527.7 MiB 70.3 MiB 2 fdf = mdf[ 7 1457.4 MiB 19.1 MiB 2 (mdf['start'] >= mdf['start_range']) & 8 1438.3 MiB 9.2 MiB 1 (mdf['end'] <= mdf['end_range']) 9 ] 10 11 1527.9 MiB 0.3 MiB 3 res = fdf.groupby(['group', 'start_range', 'end_range']).agg({ 12 1527.7 MiB 0.0 MiB 1 'val1': ['min', 'max', 'mean', 'sum'], 13 1527.7 MiB 0.0 MiB 1 'val2': ['min', 'max', 'mean', 'sum'] 14 1527.9 MiB 0.0 MiB 1 }).reset_index() 15 16 1527.9 MiB 0.0 MiB 14 res.columns = ['_'.join(col).strip() if col[1] else col[0] for col in res.columns.values] 17 1527.9 MiB 0.0 MiB 1 return res Maximum memory usage
4
4
78,614,773
2024-6-12
https://stackoverflow.com/questions/78614773/with-polars-how-to-concatenate-list-of-string-expression-column-to-string
Here's a naive solution of what I want to do using map_elements. How can I do this with only Polars functions? import polars as pl # Create a DataFrame with a column containing lists of strings df = pl.DataFrame({ "list_of_strings": [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]] }) # Define a function to concatenate lists of strings into a single string def concatenate_list_of_strings(lst): return "".join(lst) # Apply the function to the DataFrame df = df.with_column( pl.col("list_of_strings").map_elements(concatenate_list_of_strings, return_dtype=pl.String).alias("concatenated_string") ) print(df)
As already mentioned in the comments, there is pl.Expr.list.join in polars' native expression API to join all string items in a sublist with a separator between them. df.with_columns( pl.col("list_of_strings").list.join("") ) shape: (3, 1) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ list_of_strings β”‚ β”‚ --- β”‚ β”‚ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ β”‚ abc β”‚ β”‚ def β”‚ β”‚ ghi β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
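To keep the original list column and add the joined string under a new name, as in the question's map_elements version, the same expression can simply be aliased:

df.with_columns(
    pl.col("list_of_strings").list.join("").alias("concatenated_string")
)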
2
2
78,613,926
2024-6-12
https://stackoverflow.com/questions/78613926/how-can-i-merge-two-dataframes-based-on-last-date-of-each-group
These are my DataFrames: import pandas as pd df1 = pd.DataFrame( { 'close': [100, 150, 200, 55, 69, 221, 2210, 111, 120, 140, 150, 170], 'date': [ '2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04', '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08', '2024-01-09', '2024-01-10', '2024-01-11', '2024-01-12', ] } ) df2 = pd.DataFrame( { 'group': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], 'close': [100, 105, 112, 117, 55, 65, 221, 211], 'date': [ '2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04', '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08' ], 'extend': [ '2024-01-09', '2024-01-09', '2024-01-09', '2024-01-09', '2024-01-11', '2024-01-11', '2024-01-11', '2024-01-11' ], } ) And this is the expected output. I want to extend df2 for each group in group column: group close date extend 0 a 100 2024-01-01 2024-01-09 1 a 105 2024-01-02 2024-01-09 2 a 112 2024-01-03 2024-01-09 3 a 117 2024-01-04 2024-01-09 4 a 69 2024-01-05 2024-01-09 5 a 221 2024-01-06 2024-01-09 6 a 2210 2024-01-07 2024-01-09 7 a 111 2024-01-08 2024-01-09 8 a 120 2024-01-09 2024-01-09 7 b 55 2024-01-05 2024-01-11 8 b 65 2024-01-06 2024-01-11 9 b 221 2024-01-07 2024-01-11 10 b 211 2024-01-08 2024-01-11 11 b 120 2024-01-09 2024-01-11 12 b 140 2024-01-10 2024-01-11 13 b 150 2024-01-11 2024-01-11 The logic is: Each group in df2 has a fixed extend date. This is the basically the date that each group should be extended using df1. For example for group a, The data should be extended from 2024-01-04 to 2024-01-09. The start point of extending is basically df2.date.iloc[-1] for each group and the end is the extend. This is my attempt that didn't work: import janitor def func(df2, df1): df2['extend_start'] = df2.date.iloc[-1] df2['extend_start'] = pd.to_datetime(df2.extend_start) df3 = df2.conditional_join( df1, ('extend_start', 'date', '<'), ('extend', 'date', '>') ) return df3 df1['date'] = pd.to_datetime(df1.date) df2['extend'] = pd.to_datetime(df2.extend) out = df2.groupby('group').apply(func, df1=df1)
You could add the missing dates with groupby.apply, then map the unknown dates: # ensure datetime df1['date'] = pd.to_datetime(df1['date']) df2[['date', 'extend']] = df2[['date', 'extend']].apply(pd.to_datetime) # fill missing dates # map missing one with values of df1 out = (df2.set_index('date').groupby('group') .apply(lambda x: x.reindex(pd.date_range(min(x.index.min(), x['extend'].min()), max(x.index.max(), x['extend'].max()), )).rename_axis('date') .drop(columns=['group']) .assign(extend=lambda x: x['extend'].ffill(),) ) .reset_index() .assign(close=lambda x: x['close'].fillna(x['date'].map(df1.set_index('date')['close']))) ) Output: group date close extend 0 a 2024-01-01 100.0 2024-01-09 1 a 2024-01-02 105.0 2024-01-09 2 a 2024-01-03 112.0 2024-01-09 3 a 2024-01-04 117.0 2024-01-09 4 a 2024-01-05 69.0 2024-01-09 5 a 2024-01-06 221.0 2024-01-09 6 a 2024-01-07 2210.0 2024-01-09 7 a 2024-01-08 111.0 2024-01-09 8 a 2024-01-09 120.0 2024-01-09 9 b 2024-01-05 55.0 2024-01-11 10 b 2024-01-06 65.0 2024-01-11 11 b 2024-01-07 221.0 2024-01-11 12 b 2024-01-08 211.0 2024-01-11 13 b 2024-01-09 120.0 2024-01-11 14 b 2024-01-10 140.0 2024-01-11 15 b 2024-01-11 150.0 2024-01-11 Intermediate before the map step: group date close extend 0 a 2024-01-01 100.0 2024-01-09 1 a 2024-01-02 105.0 2024-01-09 2 a 2024-01-03 112.0 2024-01-09 3 a 2024-01-04 117.0 2024-01-09 4 a 2024-01-05 NaN 2024-01-09 5 a 2024-01-06 NaN 2024-01-09 6 a 2024-01-07 NaN 2024-01-09 7 a 2024-01-08 NaN 2024-01-09 8 a 2024-01-09 NaN 2024-01-09 9 b 2024-01-05 55.0 2024-01-11 10 b 2024-01-06 65.0 2024-01-11 11 b 2024-01-07 221.0 2024-01-11 12 b 2024-01-08 211.0 2024-01-11 13 b 2024-01-09 NaN 2024-01-11 14 b 2024-01-10 NaN 2024-01-11 15 b 2024-01-11 NaN 2024-01-11
4
3
78,613,142
2024-6-12
https://stackoverflow.com/questions/78613142/polars-convert-duration-to-integer-number-of-hours-minutes-seconds
I want to convert a duration to integer number of hour (or minute or seconds). I though the .dt namespace would work the same as for the datetimes, but I get an error instead. This example from datetime import datetime import polars as pl pl.__version__ dx = pl.DataFrame({'dt1': datetime(2024, 1, 12, 13, 45)}) dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.hour()) gives the error: --------------------------------------------------------------------------- InvalidOperationError Traceback (most recent call last) Cell In[5], line 2 1 dx = pl.DataFrame({'dt1': datetime(2024, 1, 12, 13, 45)}) ----> 2 dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.hour()) File ~/src/poste/sda-poste-logistics/venv/lib/python3.11/site-packages/polars/dataframe/frame.py:8461, in DataFrame.select(self, *exprs, **named_exprs) 8361 def select( 8362 self, *exprs: IntoExpr | Iterable[IntoExpr], **named_exprs: IntoExpr 8363 ) -> DataFrame: 8364 """ 8365 Select columns from this DataFrame. 8366 (...) 8459 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ 8460 """ -> 8461 return self.lazy().select(*exprs, **named_exprs).collect(_eager=True) File ~/src/poste/sda-poste-logistics/venv/lib/python3.11/site-packages/polars/lazyframe/frame.py:1967, in LazyFrame.collect(self, type_coercion, predicate_pushdown, projection_pushdown, simplify_expression, slice_pushdown, comm_subplan_elim, comm_subexpr_elim, cluster_with_columns, no_optimization, streaming, background, _eager, **_kwargs) 1964 # Only for testing purposes atm. 1965 callback = _kwargs.get("post_opt_callback") -> 1967 return wrap_df(ldf.collect(callback)) InvalidOperationError: `hour` operation not supported for dtype `duration[ΞΌs]` on both polars 0.20.31 and 1.0.0-a1. Is this a bug or am I doing something wrong?
After the subtraction, you don't have a date anymore but a timedelta, you should use dt.total_seconds: dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.total_seconds()) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”‚ dt1 β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•β•β•‘ β”‚ 49500 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”˜ Or, as total_hours / fractional hours: dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.total_hours()) β”Œβ”€β”€β”€β”€β”€β” β”‚ dt1 β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•‘ β”‚ 13 β”‚ β””β”€β”€β”€β”€β”€β”˜ dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.total_seconds()/3600) β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”‚ dt1 β”‚ β”‚ --- β”‚ β”‚ f64 β”‚ β•žβ•β•β•β•β•β•β•β•‘ β”‚ 13.75 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”˜
2
3
78,612,486
2024-6-12
https://stackoverflow.com/questions/78612486/rolling-aggregation-in-polars-and-also-get-the-original-column-back-without-join
Using polars .rolling and .agg, how do I get the original column back, without having to join back with the original column, or without having to use .over? Example: import polars as pl dates = [ "2020-01-01 13:45:48", "2020-01-01 16:42:13", "2020-01-01 16:45:09", "2020-01-02 18:12:48", "2020-01-03 19:45:32", "2020-01-08 23:16:43", ] df = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1]}).with_columns( pl.col("dt").str.to_datetime().set_sorted() ) Provides me with a small polars dataframe: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ dt ┆ a β”‚ β”‚ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════║ β”‚ 2020-01-01 13:45:48 ┆ 3 β”‚ β”‚ 2020-01-01 16:42:13 ┆ 7 β”‚ β”‚ 2020-01-01 16:45:09 ┆ 5 β”‚ β”‚ 2020-01-02 18:12:48 ┆ 9 β”‚ β”‚ 2020-01-03 19:45:32 ┆ 2 β”‚ β”‚ 2020-01-08 23:16:43 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ When I apply a rolling aggregations, I get the new columns back, but not the original columns: out = df.rolling(index_column="dt", period="2d").agg( pl.sum("a").alias("sum_a"), pl.min("a").alias("min_a"), pl.max("a").alias("max_a"), pl.col("a") ) which gives: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ dt ┆ sum_a ┆ min_a ┆ max_a ┆ a β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 ┆ i64 ┆ i64 ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ══════════════║ β”‚ 2020-01-01 13:45:48 ┆ 3 ┆ 3 ┆ 3 ┆ [3] β”‚ β”‚ 2020-01-01 16:42:13 ┆ 10 ┆ 3 ┆ 7 ┆ [3, 7] β”‚ β”‚ 2020-01-01 16:45:09 ┆ 15 ┆ 3 ┆ 7 ┆ [3, 7, 5] β”‚ β”‚ 2020-01-02 18:12:48 ┆ 24 ┆ 3 ┆ 9 ┆ [3, 7, 5, 9] β”‚ β”‚ 2020-01-03 19:45:32 ┆ 11 ┆ 2 ┆ 9 ┆ [9, 2] β”‚ β”‚ 2020-01-08 23:16:43 ┆ 1 ┆ 1 ┆ 1 ┆ [1] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ How can I get the original a column. I don't want to join and I don't want to use .over as I need the group_by of the rolling later on and .over does not work with .rolling Edit. I am also not keen on using the following. out = df.rolling(index_column="dt", period="2d").agg( pl.sum("a").alias("sum_a"), pl.min("a").alias("min_a"), pl.max("a").alias("max_a"), pl.col("a").last() ) Edit 2. 
Why Expr.rolling() is not feasible and why I need the group_by: Given a more elaborate example: dates = [ "2020-01-01 13:45:48", "2020-01-01 16:42:13", "2020-01-01 16:45:09", "2020-01-02 18:12:48", "2020-01-03 19:45:32", "2020-01-08 23:16:43", ] df_a = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1],"cat":["one"]*6}).with_columns( pl.col("dt").str.to_datetime() ) df_b = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1],"cat":["two"]*6}).with_columns( pl.col("dt").str.to_datetime() ) df = pl.concat([df_a,df_b]) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ dt ┆ a ┆ cat β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 2020-01-01 13:45:48 ┆ 3 ┆ one β”‚ β”‚ 2020-01-01 16:42:13 ┆ 7 ┆ one β”‚ β”‚ 2020-01-01 16:45:09 ┆ 5 ┆ one β”‚ β”‚ 2020-01-02 18:12:48 ┆ 9 ┆ one β”‚ β”‚ 2020-01-03 19:45:32 ┆ 2 ┆ one β”‚ β”‚ 2020-01-08 23:16:43 ┆ 1 ┆ one β”‚ β”‚ 2020-01-01 13:45:48 ┆ 3 ┆ two β”‚ β”‚ 2020-01-01 16:42:13 ┆ 7 ┆ two β”‚ β”‚ 2020-01-01 16:45:09 ┆ 5 ┆ two β”‚ β”‚ 2020-01-02 18:12:48 ┆ 9 ┆ two β”‚ β”‚ 2020-01-03 19:45:32 ┆ 2 ┆ two β”‚ β”‚ 2020-01-08 23:16:43 ┆ 1 ┆ two β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ and the code: out = df.rolling(index_column="dt", period="2d",group_by="cat").agg( pl.sum("a").alias("sum_a"), pl.min("a").alias("min_a"), pl.max("a").alias("max_a"), pl.col("a") ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ cat ┆ dt ┆ sum_a ┆ min_a ┆ max_a ┆ a β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ datetime[ΞΌs] ┆ i64 ┆ i64 ┆ i64 ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════β•ͺ═══════β•ͺ═══════β•ͺ══════════════║ β”‚ one ┆ 2020-01-01 13:45:48 ┆ 3 ┆ 3 ┆ 3 ┆ [3] β”‚ β”‚ one ┆ 2020-01-01 16:42:13 ┆ 10 ┆ 3 ┆ 7 ┆ [3, 7] β”‚ β”‚ one ┆ 2020-01-01 16:45:09 ┆ 15 ┆ 3 ┆ 7 ┆ [3, 7, 5] β”‚ β”‚ one ┆ 2020-01-02 18:12:48 ┆ 24 ┆ 3 ┆ 9 ┆ [3, 7, 5, 9] β”‚ β”‚ one ┆ 2020-01-03 19:45:32 ┆ 11 ┆ 2 ┆ 9 ┆ [9, 2] β”‚ β”‚ one ┆ 2020-01-08 23:16:43 ┆ 1 ┆ 1 ┆ 1 ┆ [1] β”‚ β”‚ two ┆ 2020-01-01 13:45:48 ┆ 3 ┆ 3 ┆ 3 ┆ [3] β”‚ β”‚ two ┆ 2020-01-01 16:42:13 ┆ 10 ┆ 3 ┆ 7 ┆ [3, 7] β”‚ β”‚ two ┆ 2020-01-01 16:45:09 ┆ 15 ┆ 3 ┆ 7 ┆ [3, 7, 5] β”‚ β”‚ two ┆ 2020-01-02 18:12:48 ┆ 24 ┆ 3 ┆ 9 ┆ [3, 7, 5, 9] β”‚ β”‚ two ┆ 2020-01-03 19:45:32 ┆ 11 ┆ 2 ┆ 9 ┆ [9, 2] β”‚ β”‚ two ┆ 2020-01-08 23:16:43 ┆ 1 ┆ 1 ┆ 1 ┆ [1] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This does not work: df.sort("dt").with_columns(sum=pl.sum("a").rolling(index_column="dt", period="2d").over("cat")) Gives: # InvalidOperationError: rolling expression not allowed in aggregation
There are dedicated rolling_*_by expressions which can be used with .over() df.with_columns( pl.col("a").rolling_sum_by("dt", "2d").over("cat").name.prefix("sum_"), pl.col("a").rolling_min_by("dt", "2d").over("cat").name.prefix("min_"), pl.col("a").rolling_max_by("dt", "2d").over("cat").name.prefix("max_") ) shape: (12, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ dt ┆ a ┆ cat ┆ sum_a ┆ min_a ┆ max_a β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 ┆ str ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═══════β•ͺ═══════β•ͺ═══════║ β”‚ 2020-01-01 13:45:48 ┆ 3 ┆ one ┆ 3 ┆ 3 ┆ 3 β”‚ β”‚ 2020-01-01 16:42:13 ┆ 7 ┆ one ┆ 10 ┆ 3 ┆ 7 β”‚ β”‚ 2020-01-01 16:45:09 ┆ 5 ┆ one ┆ 15 ┆ 3 ┆ 7 β”‚ β”‚ 2020-01-02 18:12:48 ┆ 9 ┆ one ┆ 24 ┆ 3 ┆ 9 β”‚ β”‚ 2020-01-03 19:45:32 ┆ 2 ┆ one ┆ 11 ┆ 2 ┆ 9 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 2020-01-01 16:42:13 ┆ 7 ┆ two ┆ 10 ┆ 3 ┆ 7 β”‚ β”‚ 2020-01-01 16:45:09 ┆ 5 ┆ two ┆ 15 ┆ 3 ┆ 7 β”‚ β”‚ 2020-01-02 18:12:48 ┆ 9 ┆ two ┆ 24 ┆ 3 ┆ 9 β”‚ β”‚ 2020-01-03 19:45:32 ┆ 2 ┆ two ┆ 11 ┆ 2 ┆ 9 β”‚ β”‚ 2020-01-08 23:16:43 ┆ 1 ┆ two ┆ 1 ┆ 1 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
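If there are many such aggregations, the three expressions can also be built programmatically rather than spelled out one by one; a small sketch using the same column names as above (it assumes a rolling_<stat>_by method exists for each chosen statistic):

stats = ["sum", "min", "max"]
df.with_columns(
    getattr(pl.col("a"), f"rolling_{stat}_by")("dt", "2d").over("cat").alias(f"{stat}_a")
    for stat in stats
)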
4
4
78,605,727
2024-6-11
https://stackoverflow.com/questions/78605727/exception-stack-trace-not-clickable-in-pycharm
PyCharm suddenly changed the way it shows the stack trace on the run tab and does not let me click on the exception (or anywhere else) and go to the specific point of the error file anymore. How can I fix this? Running on OSX Sonoma, PyCharm 2022.2.3 (Community Edition)
The short answer: Eliminate all (direct or indirect) imports of the pretty_errors package. The medium answer If you use the torchmetrics package in your project, which tries to use pretty_errors, one of the following should work: (1) just uninstall the pretty_errors package from your Python environment, (2) update torchmetrics to a version that no longer uses pretty_errors (update: such as v1.5). If you don't use torchmetrics, you might want to identify any other package that uses the pretty_errors package and eliminate the corresponding imports of pretty_errors. The long answer I had the same problem with exactly the same look and behavior of console outputs. In my case, I could trace it down to using the package torchmetrics in a project, which, in turn, imported the package pretty_errors. The pretty_errors package changes stack traces in the way that you experienced (compare screenshots on the project's site). And here is the corresponding code in torchmetrics/__init__.py where pretty_errors was imported (lines 17–18 in the source code): if package_available("pretty_errors"): import pretty_errors # noqa: F401 Once I commented out those lines locally, stack traces looked and behaved as usual again. Above I wrote "was imported", because the current source code of torchmetrics does not use pretty_errors any longer – exactly for the reason that it makes stack traces arguably unusable – see corresponding pull request and commit on GitHub. So, at some point, probably the best solution is updating torchmetrics in your project. Update: By now, this should be possible with torchmetrics v1.5. For older versions, just uninstalling pretty_errors from your environment should do the trick. In particular, it won't break torchmetrics, since the import of pretty_errors is conditional there, anyway (see code snippet above). If you don't use torchmetrics in your project, of course, then it must be another package that relies on pretty_errors, or you directly imported pretty_errors yourself. Identifying the culprit(s) shouldn't be that hard, and the corresponding solution should always be eliminating the use of pretty_errors.
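To confirm whether pretty_errors is importable from the interpreter PyCharm uses (a minimal check, independent of which package pulled it in):

import importlib.util

print(importlib.util.find_spec("pretty_errors") is not None)  # True -> installed, candidate for removal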
6
4
78,590,453
2024-6-7
https://stackoverflow.com/questions/78590453/type-hinting-a-hypothesis-composite-strategy
I am using the hypothesis library and I would like to annotate my code with type hints. The docs are mentioning the hypothesis.strategies.SearchStrategy as the type for all search strategies. Take this example: @composite def int_strategy(draw: DrawFn) -> hypothesis.strategies.SearchStrategy[int]: ... # some computation here resulting in ``x`` being an ``int`` return x Running mypy will (rightly so) result in an error along those lines: error: Returning Any from function declared to return "SearchStrategy[Any]" [no-any-return] I mean, I am actually returning an int, not a SearchStrategy. How am I supposed to type annotate my hypothesis strategies?
Functions decorated with @composite should be type hinted as normal: @composite def int_strategy(draw: DrawFn) -> int: ... @composite will then automatically transform this to something like: # As if it doesn't have the `draw` parameter and that it returns a `SearchStrategy` def int_strategy() -> SearchStrategy[int]: ... Don't believe me? Ask Mypy: # At call site reveal_type(int_strategy) # () -> SearchStrategy[int] reveal_type(int_strategy()) # SearchStrategy[int] This is the same with other decorators: The eventual type of a function is determined by its original type hints and all of its @decorators'. In composite()'s case, this is how it is defined (at least at type-checking time): # Takes a function whose first argument is of type `DrawFn` # and returns a function without that argument, # returning a `SearchStrategy` that will output # values of the same type as the original's return type. def composite( f: Callable[Concatenate[DrawFn, P], Ex] ) -> Callable[P, SearchStrategy[Ex]]: ... In fact, using SearchStrategy[] as the return type is such a common mistake that the maintainers made it so that you would get a runtime warning: @composite def int_strategy() -> SearchStrategy[int]: ... tests/test_foo.py:6 /project/tests/test_foo.py:6: HypothesisWarning: Return-type annotation is `st.SearchStrategy[int]`, but the decorated function should return a value (not a strategy)
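For completeness, a small end-to-end sketch of the pattern; the concrete strategy body and test are illustrative, not taken from the question:

from hypothesis import given
from hypothesis.strategies import DrawFn, composite, integers

@composite
def int_strategy(draw: DrawFn) -> int:
    # draw() unwraps a SearchStrategy[int] into a plain int inside the composite
    return draw(integers(min_value=0, max_value=10))

@given(int_strategy())
def test_small_int(x: int) -> None:
    assert 0 <= x <= 10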
5
6
78,591,577
2024-6-7
https://stackoverflow.com/questions/78591577/mypy-error-source-file-found-twice-under-different-module-names-when-using-edi
mypy throws an error when I have an editable installation (pip install -e .) of my library. It works fine with the non-editable installation (pip install .). I was able to reproduce it with a toy example, so here are the files: . β”œβ”€β”€ src β”‚ └── my_ns β”‚ └── mylib β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ main.py β”‚ β”œβ”€β”€ py.typed β”‚ └── second.py β”œβ”€β”€ mypy.ini └── pyproject.toml main.py def something() -> None: print("I am something") second.py from my_ns.mylib.main import something def something_else() -> None: something() print("I am something else") pyproject.toml [build-system] requires = ["setuptools", "setuptools-scm"] build-backend = "setuptools.build_meta" [project] name = "mylib" requires-python = ">=3.10" version = "0.1.0" [tool.setuptools.packages.find] where = ["src"] [tool.setuptools.package-data] "*" = ["*py.typed"] mypy.ini [mypy] namespace_packages = True explicit_package_bases = True exclude = (?x)( ^tests/ # ignore everything in tests directory | ^test/ # ignore everything in test directory | ^setup\.py$ # ignore root's setup.py ) my_ns is a namespace package, so it does by intention not include a __init__.py (and must remain a namespace). This is the result when running mypy 1.10.0: $ mypy --config-file mypy.ini . src/my_ns/mylib/main.py: error: Source file found twice under different module names: "src.my_ns.mylib.main" and "my_ns.mylib.main" Found 1 error in 1 file (errors prevented further checking) How can I make mypy work with an editable install and support namespace packages?
Add the following to your mypy.ini: mypy_path = src (Credit goes to Mario Ishac, who got this from a GitHub issue comment.)
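With the question's configuration, the top of mypy.ini would then read:

[mypy]
mypy_path = src
namespace_packages = True
explicit_package_bases = True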
3
4
78,607,542
2024-6-11
https://stackoverflow.com/questions/78607542/simulation-of-a-spring-loaded-pendulum-on-a-spinning-disk
I want to write a simulation in Python similar to the one described in Simulation of a Pendulum hanging on a spinning Disk. But I want the system to be spring loaded. So instead of the mass hanging from a thread, I want the mass to hang from a spring, that rotates. I have tried putting the ODE together but it takes forever to calculate: import sympy as sp from IPython.display import display R = sp.symbols('R') omega = sp.symbols('omega') t = sp.symbols('t') phi = sp.Function('phi')(t) theta = sp.Function('theta')(t) s = sp.Function('s')(t) L = sp.symbols('L') m = sp.symbols('m') k = sp.symbols('k') g = sp.symbols('g') x = R*sp.cos(omega*t)+(L+s)*(sp.sin(theta)*sp.cos(phi)) y = R*sp.sin(omega*t)+(L+s)*(sp.sin(theta)*sp.sin(phi)) z = -(L+s)*sp.cos(theta) xs = sp.diff(x,t) ys = sp.diff(y,t) zs = sp.diff(z,t) v = xs**2 + ys**2 + zs**2 Ekin = 0.5*m*v Epot = g*(L+s)*sp.cos(theta)+0.5*k*s**2 L = Ekin + Epot #display(L) ELTheta = sp.diff(sp.diff(L,sp.Derivative(theta,t)), t) + sp.diff(L,theta) ELPhi = sp.diff(sp.diff(L,sp.Derivative(phi,t)), t) + sp.diff(L,phi) ELs = sp.diff(sp.diff(L,sp.Derivative(s,t)), t) + sp.diff(L,s) Eq1 = sp.Eq(ELTheta,0) Eq2 = sp.Eq(ELPhi,0) Eq3 = sp.Eq(ELs,0) LGS = sp.solve((Eq1,Eq2,Eq3),(sp.Derivative(theta,t,2),sp.Derivative(phi,t,2),sp.Derivative(s,t,2))) thetadd = sp.simplify(LGS[sp.Derivative(theta,t,2)]) phidd = sp.simplify(LGS[sp.Derivative(phi,t,2)]) sdd = sp.simplify(LGS[sp.Derivative(s,t,2)]) I don't know whether I chose the right original condition. Is there a simpler and faster way to compute this or a different formula to the problem, that would simplify it?
Once you remove the constraint of a fixed-length pendulum it is probably easier to solve this in Cartesian coordinates using Newton's 2nd Law (F=ma), rather than via the Lagrangian equations of motion. Consider a spring of natural length L and stiffness k, one end attached to a point on the edge of a disk of radius R rotating with angular velocity ω, the other holding a bob of mass m. The tether point on the disk has time-dependent coordinates q(t) = (R cos ωt, R sin ωt, 0). The bob has coordinates r = (x, y, z). The bob is subject to two forces: the elastic force from the spring of magnitude k(|q - r| - L) and its weight m g. The equation of motion is just m d²r/dt² = k(|q - r| - L) n + m g, or d²r/dt² = (k/m)(|q - r| - L) n + g, where n is a unit vector toward the tether and g is the acceleration due to gravity. Note that there is no energy-dissipating mechanism, so make sure that you avoid resonance (i.e. the natural circular frequency of the spring, sqrt(k/m), should not be close to the circular frequency, omega, of the rotating point). For a stable, rather than very "bouncy", orbit, see below the main code. Code: import math import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from scipy.integrate import solve_ivp from mpl_toolkits.mplot3d import Axes3D g = 9.81 # acceleration due to gravity (m/s^2) gravity = np.array( [ 0.0, 0.0, -g ] ) def plot_animation( t, qx, qy, qz, x, y, z ): plotInterval = 1 fig = plt.figure( figsize=(4,4) ) ax = fig.add_subplot(111, projection='3d' ) ax.view_init(30, 45) ax.set_xlim( -2.0, 2.0 ); ax.set_ylim( -2.0, 2.0 ); ax.set_aspect('equal') ax.plot( qx, qy, qz, 'k' ) # tether point a = ax.plot( [qx[0],x[0]], [qy[0],y[0]], [qz[0],z[0]], 'g' ) # pendulum string b = ax.plot( [x[0]], [y[0]], [z[0]], 'ro' ) # pendulum bob def animate( i ): # update anything that has changed a[0].set_data_3d( [qx[i],x[i]], [qy[i],y[i]], [qz[i],z[i]] ) b[0].set_data_3d( [x[i]], [y[i]], [z[i]] ) ani = animation.FuncAnimation( fig, animate, interval=4, frames=len( t ) ) plt.show() # ani.save( "demo.gif", fps=50 ) def deriv( t, Y, R, omega, L, k, m ): x, v = Y[0:3], Y[3:6] # bob position and velocity (vectors) q = np.array( [ R*math.cos(omega*t), R*math.sin(omega*t), 0.0 ] ) # position vector of tether spring = q - x # vector from bob to tether length = np.linalg.norm( spring ) # stretched length of spring unitVector = spring / length # direction of elastic force extension = length - L # extension of spring a = ( k / m ) * extension * unitVector + gravity # F combines elastic force and gravity return np.hstack( ( v, a ) ) # return velocity and acceleration R = 0.5 # radius of disk (m) omega = 2.0 # angular velocity of disk (rad/s) L = 1.0 # natural length of spring (m) k = 20.0 # stiffness of spring (N/m) # warning: avoid resonance! m = 2.0 # mass (kg) Y0 = np.array( [ R, 0.0, -L, 0.0, 0.0, 0.0 ] ) # initial position and velocity period = 2 * np.pi / omega tmax = 5 * period solution = solve_ivp( deriv, [0, tmax], Y0, args=(R,omega,L,k,m), rtol=1.0e-6, dense_output=True ) t = np.linspace( 0, tmax, 1000 ) Y = solution.sol( t ) # Position of bob x = Y[0,:] y = Y[1,:] z = Y[2,:] # Position of tether on disk qx = R * np.cos( omega * t ) qy = R * np.sin( omega * t ) qz = np.zeros_like( qx ) plot_animation( t, qx, qy, qz, x, y, z ) It is possible to give the pendulum bob an initial position and velocity that will put it in a stable orbit (steady motion around a circle, in synchronisation with the tether on the disk).
If you set the centripetal force equal to the horizontal component of the elastic force and the weight equal to the vertical component of the elastic force then you can solve a couple of simultaneous equations for the tangent of the angle to the vertical and the spring extension and thereby provide the relevant initial conditions for Y0. Try replacing the initialisation of Y0 with the following lines. There are also probably synchronous solutions with the bob lagging the tether point, but I haven't investigated them. # "Equilibrium motion" - moves in a circle, synchronised with the tether t, e = 0, 0 # tan(theta) and extension w2g = omega ** 2 / g mgk = m * g / k for _ in range( 100 ): c = 1 / math.sqrt( 1 + t*t ); s = t * c # cosine and sine t = w2g * ( R + (L+e) * s ) e = mgk / c # print( t, e ) x = R + (L+e) * s z = - (L+e) * c Y0 = np.array( [ x, 0.0, z, 0.0, x * omega, 0.0 ] )
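For reference, the balance conditions this little fixed-point loop encodes (with e the spring extension and θ the angle of the spring from the vertical) are k e cos θ = m g for the vertical direction and k e sin θ = m ω² (R + (L + e) sin θ) for the horizontal direction, the right-hand side being the centripetal force for a circle of radius R + (L + e) sin θ. Dividing the two gives tan θ = (ω²/g)(R + (L + e) sin θ), and the vertical equation gives e = m g / (k cos θ) — exactly the two updates iterated in the loop above.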
2
2
78,609,131
2024-6-11
https://stackoverflow.com/questions/78609131/vs-code-not-jumping-to-the-top-stack-frame
I'm trying to debug some Django library code with the default VS Code Python and Django debugging settings (and "justMyCode" = False). I've set a breakpoint in one of the library functions: I call this from some user code, eg. formset.save(). When I debug and hit the breakpoint, VS Code jumps to this user code instead of the library code as I'd have expected: Pressing "Step Into" seems to progress the library code, but keeps jumping back to the user code which is calling it. The call stack seems to know about the library code, although it is greyed out: If I click on save then the currently executing statement is highlighted: I can go through the tedious process of stepping through, clicking the top stack frame, and then doing usual things like inspecting locals. The issue just seems to be about where VS Code is jumping to every time the debugger breaks (or maybe where pdb is telling VS Code to jump -- I'm not sure). I want the library code which is currently being executed to be jumped to, and that's what I would have expected to happen.
I see the same thing. Looks like a recent bug in VSCode. I rolled back to VSCode version 1.89 and it worked as expected. This was fixed in version 1.90.1 (list of addressed issues).
2
2
78,605,948
2024-6-11
https://stackoverflow.com/questions/78605948/django-simple-history-how-display-related-fields-in-the-admin-panel
I use django-simple-history. I keep history tables for the product and the price; the price is associated with the product, and a price inline is added to the admin panel. In the product's history view in the admin panel I want to also display the history records of the related model (price), and to show which fields changed. How can I do this? my model class Product(models.Model): article = models.PositiveIntegerField() history = HistoricalRecords() class Price(models.Model): prod = models.OneToOneField( Product,) price_supplier = models.FloatField() history = HistoricalRecords() my admin class PriceInline(admin.TabularInline): model = Price class ProductAdmin(SimpleHistoryAdmin): inlines = [ PriceInline,] admin.site.register(Product, ProductAdmin) I tried to set it up via history_view() and get_history_queryset() to receive the objects of the other model, and I did receive them, but I do not understand how to embed them in the render so that both changes to the product model and changes to the price model are reflected, with the changed fields attributed to their respective models. Or is there another method to achieve this result?
I found a solution, it's a bit of a kludge. Since we will not display the app price separately in the admin panel (it only participates in the inline of the product), we can change the history_view() function from django-simple-history. I will add an image model to the example to show how to use this approach with both one-to-one and one-to-many fields. At the same time, I disabled the ability to return the old value from history and for the template I changed what I displayed in the first column as a link because the links need to be configured separately. Now in the first column I just have the names of the field type. Now my code looks like this: model.py class Product(models.Model): article = models.PositiveIntegerField() history = HistoricalRecords() class Price(models.Model): prod = models.OneToOneField(Product,) price_supplier = models.FloatField() history = HistoricalRecords() class ProductImage(models.Model): product = models.ForeignKey(Product,) photo = models.ImageField("Π˜Π·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠ΅", upload_to=get_file_path_add) history = HistoricalRecords() admin.py class PriceInline(admin.TabularInline): model = Price class ProductImageInline(admin.TabularInline): model = ProductImage class PriceAdmin(SimpleHistoryAdmin): model = Price def history_view(self, request, object_id, extra_context=None): """The 'history' admin view for this model.""" model = self.model opts = model._meta pk_name = opts.pk.attname history = getattr(model, model._meta.simple_history_manager_attribute) historical_records = PriceAdmin.get_history_queryset( PriceAdmin, request, history, pk_name, object_id ) history_list_display = PriceAdmin.get_history_list_display(PriceAdmin,request) # Set attribute on each historical record from admin methods for history_list_entry in history_list_display: value_for_entry = getattr(self, history_list_entry, None) if value_for_entry and callable(value_for_entry): for record in historical_records: setattr(record, history_list_entry, value_for_entry(record)) PriceAdmin.set_history_delta_changes(PriceAdmin, request, historical_records) return historical_records def set_history_delta_changes( self, request, historical_records, foreign_keys_are_objs=True, ): previous = None for current in historical_records: if previous is None: previous = current continue # Related objects should have been prefetched in `get_history_queryset()` delta = previous.diff_against( current, foreign_keys_are_objs=foreign_keys_are_objs ) helper = PriceAdmin.get_historical_record_context_helper( PriceAdmin, request, previous ) previous.history_delta_changes = helper.context_for_delta_changes(delta) previous = current class ProductImageAdmin(SimpleHistoryAdmin): model = ProductImage def history_view(self, request, object_id, extra_context=None): """The 'history' admin view for this model.""" model = self.model opts = model._meta pk_name = opts.pk.attname history = getattr(model, model._meta.simple_history_manager_attribute) historical_records = ProductImageAdmin.get_history_queryset( ProductImageAdmin, request, history, pk_name, object_id ) history_list_display = ProductImageAdmin.get_history_list_display( ProductImageAdmin, request ) # Set attribute on each historical record from admin methods for history_list_entry in history_list_display: value_for_entry = getattr(self, history_list_entry, None) if value_for_entry and callable(value_for_entry): for record in historical_records: setattr(record, history_list_entry, value_for_entry(record)) ProductImageAdmin.set_history_delta_changes( ProductImageAdmin, 
request, historical_records ) return historical_records def set_history_delta_changes( self, request, historical_records, foreign_keys_are_objs=True, ): previous = None for current in historical_records: if previous is None: previous = current continue # Related objects should have been prefetched in `get_history_queryset()` delta = previous.diff_against( current, foreign_keys_are_objs=foreign_keys_are_objs ) helper = ProductImageAdmin.get_historical_record_context_helper( ProductImageAdmin, request, previous ) previous.history_delta_changes = helper.context_for_delta_changes(delta) previous = current class ProductAdmin(SimpleHistoryAdmin): inlines = [ PriceInline, ProductImageInline, ] def history_view(self, request, object_id, extra_context=None): """The 'history' admin view for this model.""" request.current_app = self.admin_site.name model = self.model opts = model._meta app_label = opts.app_label pk_name = opts.pk.attname history = getattr(model, model._meta.simple_history_manager_attribute) object_id = unquote(object_id) price_id = Price.objects.get(prod=object_id) image_id = ProductImage.objects.filter(product=object_id) historical_records = self.get_history_queryset( request, history, pk_name, object_id ) #**here we get historical_records in image and price** historical_records_image = [] for item in image_id: item_list = ProductImageAdmin.history_view( ProductImageAdmin, request, item.id, extra_context=None ) if historical_records_image == []: historical_records_image = item_list else: historical_records_image = list( chain( historical_records_image, item_list, ) ) historical_records_price = PriceAdmin.history_view( PriceAdmin, request, price_id.id, extra_context=None ) history_list_display = self.get_history_list_display(request) # If no history was found, see whether this object even exists. 
try: obj = self.get_queryset(request).get(**{pk_name: object_id}) except model.DoesNotExist: try: obj = historical_records.latest("history_date").instance except historical_records.model.DoesNotExist: raise http.Http404 if not self.has_view_history_or_change_history_permission(request, obj): raise PermissionDenied # Set attribute on each historical record from admin methods for history_list_entry in history_list_display: value_for_entry = getattr(self, history_list_entry, None) if value_for_entry and callable(value_for_entry): for record in historical_records: setattr(record, history_list_entry, value_for_entry(record)) self.set_history_delta_changes(request, historical_records) # HERE WE COLLECT A GENERAL LIST OF ALL RECORDS result_list = list( chain( historical_records, historical_records_price, historical_records_image, ) ) # HERE WE SORT THEM ALL BY TIME def get_date(element): return element.history_date result_list_sorted = result_list.sort(key=get_date, reverse=True) content_type = self.content_type_model_cls.objects.get_for_model( get_user_model() ) admin_user_view = "admin:{}_{}_change".format( content_type.app_label, content_type.model, ) context = { "title": self.history_view_title(request, obj), "object_history_list_template": self.object_history_list_template, "historical_records": result_list, "module_name": capfirst(force_str(opts.verbose_name_plural)), "object": obj, "root_path": getattr(self.admin_site, "root_path", None), "app_label": app_label, "opts": opts, "admin_user_view": admin_user_view, "history_list_display": history_list_display, "revert_disabled": self.revert_disabled(request, obj), } context.update(self.admin_site.each_context(request)) context.update(extra_context or {}) extra_kwargs = {} return self.render_history_view( request, self.object_history_template, context, **extra_kwargs ) admin.site.register(Product, ProductAdmin)
3
0
78,607,642
2024-6-11
https://stackoverflow.com/questions/78607642/how-to-remove-motion-blur-from-a-given-image-in-frequency-domain-deconvolution
I have read that if we have the PSF of a given image, using deconvolution we can reverse the distortion and get the original image. I tried to do the same thing, however, I'm getting an image that looks like complete noise: def add_motion_blur(img, size=31, angle=11): kernel = np.zeros((size,size), dtype=np.float32) # set the middle row of the kernel array to be all ones. kernel[size//2,:] = np.ones((size,),dtype=np.float32) # now rotate the kernel m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0) kernel = cv2.warpAffine(kernel, m, dsize=(size,size)) kernel /= np.sum(kernel) return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel img = cv2.imread('./img/peppers.jpg') # adding motion blur is akin to creating the PSF because # it's a distorting operator, and we are distorting the image using it! img_blurred, kernel = add_motion_blur(img, size=31, angle=60) # now we should be able to deconvolve the image with the PSF and get the deblurred version # for this we can use the fourier transform and divide our blurred image from the psf fft = np.fft.fftn(img_blurred[...,0]) # pad the kernel so its the same shape as our image so the division goes on without an issue fft_psf = np.fft.fftn(kernel, s=fft.shape) fft_result = fft/fft_psf ifft_result = np.fft.ifft2(fft_result) deblurred_image = np.abs(ifft_result).astype(np.uint8) cv2.imshow('deblurred image',deblurred_image) cv2.waitKey(0) cv2.destroyAllWindows() This results in pure noise it seems: I also tried Wiener deconvolution to no avail: # Wiener deconvolution fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 10) fft_result = fft / fft_psf_mag It results in the same result it seems: What am I missing here? I also tried fft/(fft_psf+1e-3) as a way of regularizing the kernel's small values, but this does not result in any improvement either. Side note These are the previous outputs belonging to the original image, the blurred version and the kernel used: Update: Here is the updated version which: pads the kernel properly on all sides the imaginary values are close to 0 However the problem still persists and I can not get anything meaningful! The outputs are given at the end: def add_motion_blur(img, size=31, angle=11): kernel = np.zeros((size,size), dtype=np.float32) # set the middle row of the kernel array to be all ones. kernel[size//2,:] = np.ones((size,),dtype=np.float32) # now rotate the kernel m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0) kernel = cv2.warpAffine(kernel, m, dsize=(size,size)) kernel /= np.sum(kernel) return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel img = cv2.imread('./img/peppers.jpg') # adding motion blur is akin to creating the PSF because # it's a distorting operator, and we are distorting the image using it! img_blurred, kernel = add_motion_blur(img, size=31, angle=60) # now we should be able to deconvolve the image with the PSF and get the deblurred version # for this we can use the fourier transform and divide our blurred image from the psf fft = np.fft.fftn(img_blurred[...,0]) # pad the kernel equally on all sides, so that it sits at the center! pad_height = img.shape[0] - kernel.shape[0] pad_width = img.shape[1] - kernel.shape[1] pad_top = pad_height // 2 pad_bottom = pad_height - pad_top pad_left = pad_width // 2 pad_right = pad_width - pad_left kernel_padded = np.pad(kernel, [(pad_top, pad_bottom), (pad_left, pad_right)], 'constant') fft_psf = np.fft.fft2(kernel_padded) # the normal way doesnt work! 
# fft_result = fft/fft_psf # wiener deconvolution (0.8 is the best I could find to get something that could be visualized somewhat properly) fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 0.8) fft_result = fft * fft_psf_mag # get back the image ifft_result = np.fft.ifft2(fft_result) # Check if the imaginary part is close to zero assert np.all(np.isclose(np.imag(ifft), 0, atol=1e-8)), 'imaginary values must be close to 0 otherwise something is wrong!' # grab the final image img_abs = np.abs(ifft_result).astype(np.uint8) img_real = ifft_result.real.astype(np.uint8) cv2.imshow('padded kernel',cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX)) cv2.imshow('deblurred(img.real)', img_real) cv2.imshow('deblurred(np.abs(img))', img_abs) cv2.waitKey(0) cv2.destroyAllWindows() Here are the outputs(note using .real values for image visualization is inferior to the magnitude based version: Update 2: Here is the updated version which: pads the kernel properly according to this answer images are clipped properly using the real values and not their magnitudes based on the explanations here used a much lower epsilon and got a much better output instead of selecting a single channel from the BGR image, I used the grayscale version which resulted in a much more clearer output.(different channels have different brightness, so they lead to different outcomes, using grayscale version removes this discrepancy. However, playing with epsilon, while results in a better output compared to the previous attempts, it introduces some artifacts/patterns into the final deblurred result which is not indented or wanted. Is this the best I can hope for, or is there still space for improvements? The outputs are given at the end: def add_motion_blur(img, size=31, angle=11): kernel = np.zeros((size,size), dtype=np.float32) # Set the middle row of the kernel array to be all ones. kernel[size//2,:] = np.ones((size,),dtype=np.float32) # Now rotate the kernel m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0) kernel = cv2.warpAffine(kernel, m, dsize=(size,size)) kernel /= np.sum(kernel) return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel img = cv2.imread('./img/peppers.jpg') img_blurred, kernel = add_motion_blur(img, size=31, angle=60) # Use the grayscale image instead img_blurred_gray = cv2.cvtColor(img_blurred,cv2.COLOR_BGR2GRAY) fft = np.fft.fftn(img_blurred_gray) # Pad the kernel properly, and then shift it pad_height = img.shape[0] - kernel.shape[0] pad_width = img.shape[1] - kernel.shape[1] pad_top = pad_height // 2 pad_bottom = pad_height - pad_top pad_left = pad_width // 2 pad_right = pad_width - pad_left kernel_padded = np.pad(kernel, [(pad_top, pad_bottom), (pad_left, pad_right)], 'constant') # shift the kernel kernel_padded = np.fft.ifftshift(kernel_padded) fft_psf = np.fft.fft2(kernel_padded) # The normal way doesnt work very well, but after correcting the kernel, it works much better than the previous attempt! 2 as the regularizer/epsilon seems to give somewhat good result compared to other values. # fft_result = fft/(fft_psf+2) # Wiener deconvolution works much better, 0.01 seems like a good eps, but it introduces weird artifacts in the image. fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 0.01) fft_result = fft * fft_psf_mag # Get back the image-no need to shift-back the result! 
ifft_result = np.fft.ifft2(fft_result) # Check if the imaginary part is close to zero assert np.all(np.isclose(np.imag(ifft), 0, atol=1e-8)), 'imaginary values must be close to 0 otherwise something is wrong!' # Grab the final image # Clip the values for more accurate visualization img_real = ifft_result.real.clip(0,255).astype(np.uint8) cv2.imshow('padded kernel',cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX)) cv2.imshow('deblurred(img.real)', img_real) cv2.waitKey(0) cv2.destroyAllWindows() Update 3: Here is the updated version which: fixes the artifacts caused by deblurring by expanding the input image followed by cropping the final result to the original image dimension according to the Cris Luengo's explanations here individual channels are operated on and later aggregated to allow for color image deblurring (Cris Luengo's explanations here). After these changes, the result look very good. The outputs are given at the end: def add_motion_blur(img, size=31, angle=11): kernel = np.zeros((size,size), dtype=np.float32) # Set the middle row of the kernel array to be all ones. kernel[size//2,:] = np.ones((size,),dtype=np.float32) # Now rotate the kernel m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0) kernel = cv2.warpAffine(kernel, m, dsize=(size,size)) kernel /= np.sum(kernel) return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel img = cv2.imread('./img/peppers.jpg') # so lets add 50px padding to each side of our image pad=50 img = cv2.copyMakeBorder(img, pad,pad,pad,pad,borderType=cv2.BORDER_REFLECT) img_blurred, kernel = add_motion_blur(img,size=31,angle=60) pad_height = img.shape[0] - kernel.shape[0] pad_width = img.shape[1] - kernel.shape[1] pad_top = (pad_height+1)//2 pad_bottom = pad_height//2 pad_left = (pad_width+1)//2 pad_right = pad_width//2 kernel_padded = np.pad(kernel, [(pad_top, pad_bottom),(pad_left, pad_right)],mode='constant') kernel_padded = np.fft.fftshift(kernel_padded) fft_psf = np.fft.fft2(kernel_padded) # now lets take the fft of our image ffts = np.fft.fftn(img_blurred) # weiner deconvolution eps = 0.01 fft_psf_mag = np.conj(fft_psf)/(np.abs(fft_psf)**2 + eps) # individually multiply each channel and then aggregate them fft_result = np.array([ffts[...,i] * fft_psf_mag for i in range(3)]).transpose(1,2,0) # now lets get back the image. iffts = np.fft.ifftn(fft_result) # before we continue, lets makesure the imaginary components are close to zero assert np.all(np.isclose(iffts.imag, 0, atol=1e-8)), 'imaginary values must be close to zero! or something is worng' # take the image img_deblurred = iffts.real.clip(0,255).astype(np.uint8) # crop the final image img_deblurred_cropped = img_deblurred[pad:img.shape[0]-pad, pad:img.shape[1]-pad] cv2.imshow('img',img) cv2.imshow('kenel_padded', cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX)) cv2.imshow('img-blurred', img_blurred) cv2.imshow('img-deblurred-uncropped', img_deblurred) cv2.imshow('img-deblurred-cropped', img_deblurred_cropped) cv2.waitKey(0) cv2.destroyAllWindows() result:
side note: Thanks to @Cris Luengo, the issues were identified and corrected. Here is a summary of all of the points addressed in the comment section which lead to the final solution: Long Answer: There are several issues at play here, which are as follows: The Wiener deconvolution algorithm is implemented incorrectly, the correct form needs the multiplication not division: # Wiener deconvolution fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 10) fft_result = fft * fft_psf_mag the +10 component, i.e. epsilon value is too much. it represents 1/snr. when the correction is applied, using a smaller value of epsilon yields a sharper result, the larger the value, the blurrier the outcome. values in the range of 0.01, 0.001 should be good starting points. (remember you can always try using trackbars and watching the changes realtime to get the best results!) The way the kernel is being padded is wrong. fft_psf = np.fft.fftn(kernel, s=fft.shape) will put the kernel at the top left corner. This doesn't cause any noticeable change in the output though, but the issue is: "@Cris Luengo: If you pad only to the right and bottom, then the kernel is shifted by some small amount from the true origin. The larger the kernel, the larger this shift is. You might not have noticed the small shift in the output, but it’s there, and if you overlay the result and original images you will see it." note that we're dealing with the spatial domain there (i.e. it’s about the position of the kernel in the image before applying FFT). to cut a long story short, the proper way would be to pad the kernel so the kernel sits at the center of the image and then shift it so the kernel sits at the corners of the image, i.e. do : pad_height = img.shape[0] - kernel.shape[0] pad_width = img.shape[1] - kernel.shape[1] pad_top = (pad_height+1)//2 pad_bottom = pad_height//2 pad_left = (pad_width+1)//2 pad_right = pad_width//2 kernel_padded = np.pad(kernel, [(pad_top, pad_bottom),(pad_left, pad_right)],mode='constant') kernel_padded = np.fft.fftshift(kernel_padded) fft_psf = np.fft.fft2(kernel_padded) Here's how the output looks when the wrong vs the right kernel is used, note the output seems identical at first glance, but when overlayed on the original image, the wrong one reveals its shifted nature!(we can also see artifacts in this case, especially towards the bottom and bottom right corner): Using fft_psf = np.fft.fftn(kernel, s=fft.shape): Using the correct padding/shifting : 4.To get rid of the artifacts in the final image, one can expand the image dimensions by padding all 4 sides, running the operation and finally cropping the final deblurred image to the original image size. This way, by avoiding sharp high frequency drop-offs, we avoid the artifacts:(edgetaper can be used to deal with ring artifacts as well. Here's another implementation in C++. The Python implementation is provided below) pad = 50 img = cv2.copyMakeBorder(img, pad, pad, pad, pad, borderType=cv2.BORDER_REFLECT) ... 
# crop the final image img_deblurred_cropped = img_deblurred[pad:img.shape[0]-pad, pad:img.shape[1]-pad] edgetaper in Python: def edgetaper(img, gamma=5, beta=0.2): width,height = img.shape[:2] dx = 2 * np.pi / width dy = 2 * np.pi / height # subtract dx and dy to match original function's range x = np.linspace(-np.pi, np.pi-dx, width) y = np.linspace(-np.pi, np.pi-dy, height) w1 = 0.5 * (np.tanh((x + gamma / 2) / beta) - np.tanh((x - gamma / 2) / beta)) w2 = 0.5 * (np.tanh((y + gamma / 2) / beta) - np.tanh((y - gamma / 2) / beta)) w = np.dot(w2.reshape(-1, 1), w1.reshape(1, -1)) if img.ndim>2: w = w[:, :, np.newaxis].repeat(img.shape[2], axis=2) return cv2.multiply(img.astype(np.float32), w.astype(np.float32)).clip(0,255).astype(np.uint8) Side Notes: Use the real components instead of their magnitudes, remember to do proper clipping (i.e. clip(0, 255)) before casting to uint8. from Cris Luengo Taking the absolute value, though common, is incorrect. For example, you might want to apply a filter to an image that contains negative values, or apply a filter that produces negative values. Taking the absolute value here would create artefacts. If the output of the inverse FFT contains imaginary values significantly different from zero, then there is an error in the way that the filtering kernel was padded. To deblur the color image, simply apply the Wiener's deconvolution on each channel separately, then aggregate them. that is : ffts = np.fft.fftn(img_blurred) ... fft_psf_mag = np.conj(fft_psf)/(np.abs(fft_psf)**2 + 1/snr) # individually multiply each channel and then aggregate them fft_result = np.array([ffts[...,i] * fft_psf_mag for i in range(3)]).transpose(1,2,0) # now lets get back the image. iffts = np.fft.ifftn(fft_result) # clip and crop to get the final image Always make sure the imaginary components of your final ifft are close to zero, otherwise this means something is wrong! assert np.all(np.isclose(iffts.imag, 0, atol=1e-8)), 'imaginary values must be close to zero! or something is worng' Here is the complete code for deblurring color images using Weiner's deconvolution: def add_motion_blur(img, size=31, angle=11): kernel = np.zeros((size,size), dtype=np.float32) # Set the middle row of the kernel array to be all ones. kernel[size//2,:] = np.ones((size,),dtype=np.float32) # Now rotate the kernel m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0) kernel = cv2.warpAffine(kernel, m, dsize=(size,size)) kernel /= np.sum(kernel) return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel img_org = cv2.imread('./img/peppers.jpg') # so lets add 50px padding to each side of our image pad=50 img = cv2.copyMakeBorder(img_org, pad,pad,pad,pad,borderType=cv2.BORDER_REFLECT) img_blurred, kernel = add_motion_blur(img, size=31, angle=60) pad_height = img.shape[0] - kernel.shape[0] pad_width = img.shape[1] - kernel.shape[1] pad_top = (pad_height+1)//2 pad_bottom = pad_height//2 pad_left = (pad_width+1)//2 pad_right = pad_width//2 kernel_padded = np.pad(kernel, [(pad_top, pad_bottom),(pad_left, pad_right)],mode='constant') kernel_padded = np.fft.fftshift(kernel_padded) fft_psf = np.fft.fft2(kernel_padded) # now lets take the fft of our image ffts = np.fft.fftn(img_blurred) # Weiner deconvolution # instead of eps, use snr snr = 100 fft_psf_mag = np.conj(fft_psf)/(np.abs(fft_psf)**2 + 1/snr) # individually multiply each channel and then aggregate them fft_result = np.array([ffts[...,i] * fft_psf_mag for i in range(3)]).transpose(1,2,0) # now lets get back the image. 
iffts = np.fft.ifftn(fft_result) # make sure the imaginary components are close to zero assert np.all(np.isclose(iffts.imag, 0, atol=1e-8)), 'imaginary values must be close to zero! or something is worng' # take the image img_deblurred = iffts.real.clip(0,255).astype(np.uint8) # crop the final image img_deblurred_cropped = img_deblurred[pad:img.shape[0]-pad, pad:img.shape[1]-pad] # When the right kernel with proper padding and shifting is used, this looks identical to original image, # otherwise, you can see one image is shifted, or worse the whole image is ugly/corrupted looking because one uses the wrong kernel! overlay = cv2.addWeighted(img_org, 0.5, img_deblurred_cropped, 0.5, 0) cv2.imshow('img',img) cv2.imshow('kenel_padded', cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX)) cv2.imshow('img-blurred', img_blurred) cv2.imshow('img-deblurred-uncropped', img_deblurred) cv2.imshow('img-deblurred-cropped', img_deblurred_cropped) cv2.imshow('overlay',overlay) cv2.waitKey(0) cv2.destroyAllWindows() The outputs:
3
2
78,603,670
2024-6-10
https://stackoverflow.com/questions/78603670/how-to-extract-n-elements-every-m-elements-from-an-array
Suppose I have a numpy array [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16], How do I take 4 elements every 8 elements). Here is the expected result: a -> [1,2,3,4, 9,10,11,12] b -> [5,6,7,8, 13,14,15,16] My array has hundreds of elements. I went through the numpy array documentation but I never succeeded to perform this computation other then a loop which is very slow. EDIT: The array can have up to 3 interleave sub-array of 4 elements 4 elt sample0, 4 elt sample 1, 4 elt sample2, 4 elt sample0, 4 elt sample 1, 4 elt sample2, 4 elt sample0, 4 elt sample 1, 4 elt sample2 ... My array has 499875840 elements !
For a generic and pure numpy approach, you could argsort then split: N = 4 # number of consecutive elements M = 2 # number of output arrays idx = np.argsort(np.arange(len(arr))%(N*M)//N, kind='stable') # array([ 0, 1, 2, 3, 8, 9, 10, 11, 4, 5, 6, 7, 12, 13, 14, 15]) a, b = np.split(arr[idx], M) As a one liner: out = np.split(arr[np.argsort(np.arange(len(arr))%(N*M)//N, kind='stable')], M) Output: # a / out[0] array([ 1, 2, 3, 4, 9, 10, 11, 12]) # b / out[1] array([ 5, 6, 7, 8, 13, 14, 15, 16]) Output with arr = np.arange(32) as input: # a array([ 0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, 24, 25, 26, 27]) # b array([ 4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, 28, 29, 30, 31]) Output with arr = np.arange(32), N = 4, M = 4: (array([ 0, 1, 2, 3, 16, 17, 18, 19]), array([ 4, 5, 6, 7, 20, 21, 22, 23]), array([ 8, 9, 10, 11, 24, 25, 26, 27]), array([12, 13, 14, 15, 28, 29, 30, 31])) timings Paul's approach is faster than mine (but limited to 2 arrays as output). generalization A reshaping approach, as proposed by @hpaulj, can be generalized using: N = 4 # number of consecutive elements M = 3 # number of samples/output arrays out = arr.reshape(-1, M, N).transpose(1, 0, 2).reshape(-1, arr.size//M) # or to map to individual variables a,b,c = arr.reshape(-1, M, N).transpose(1, 0, 2).reshape(-1, arr.size//M) @U13-Forward's approach only works when M = 2; it can, however, be generalized using a list comprehension: N = 4 # number of consecutive elements M = 3 # number of samples/output arrays reshaped = arr.reshape(-1, N*M) out = [reshaped[:, n*N:n*N+N].ravel() for n in range(M)]
6
6
78,607,102
2024-6-11
https://stackoverflow.com/questions/78607102/how-to-load-a-quantized-fine-tuned-llama-3-8b-model-in-vllm-for-faster-inference
I am working on deploying a quantized fine-tuned LLaMA 3-8B model and I aim to use vLLM to achieve faster inference. I am currently using the following Python code to load the model: import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import bitsandbytes as bnb import accelerate # model_id = "meta-llama/Meta-Llama-3-8B" #"mistralai/Mistral-7B-Instruct-v0.1" model_id = "meta-llama/Meta-Llama-3-8B-Instruct" quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16) base_model = AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, quantization_config=quantization_config, #load_in_8bit=True,# device_map='auto', token=MYTOKEN ) peft_model = "BojanaBas/Meta-Llama-3-8B-Instruct-pqa-10" model = PeftModel.from_pretrained(base_model, peft_model) The code successfully loads the model, but I am not sure how to integrate this with vLLM to optimize for faster inference. I read that it is not possible to load a model using PEFT in vLLM; instead, the PEFT model needs to be merged and loaded on Hugging Face. I have merged and loaded the model on Hugging Face as described in the article, after that, I am trying to use the model pushed to Hugging Face to load it on vLLM with the following code: from vllm import LLM merged_peft_model_name="lcass00/Meta-Llama-3-8B-Instruct-pqa-10-merged-peft" model_id = "meta-llama/Meta-Llama-3-8B-Instruct" llm = LLM(model=merged_peft_model_name, tokenizer=model_id) but when I try to load the model on vLLM I get the following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-14-c306a36d9c21> in <cell line: 3>() 1 from vllm import LLM 2 ----> 3 llm = LLM(model=merged_peft_model_name, tokenizer=model_id) 4 frames /usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py in __init__(self, model, tokenizer, tokenizer_mode, skip_tokenizer_init, trust_remote_code, tensor_parallel_size, dtype, quantization, revision, tokenizer_revision, seed, gpu_memory_utilization, swap_space, enforce_eager, max_context_len_to_capture, max_seq_len_to_capture, disable_custom_all_reduce, **kwargs) 142 **kwargs, 143 ) --> 144 self.llm_engine = LLMEngine.from_engine_args( 145 engine_args, usage_context=UsageContext.LLM_CLASS) 146 self.request_counter = Counter() /usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py in from_engine_args(cls, engine_args, usage_context) 333 """Creates an LLM engine from the engine arguments.""" 334 # Create the engine configs. 
--> 335 engine_config = engine_args.create_engine_config() 336 distributed_executor_backend = ( 337 engine_config.parallel_config.distributed_executor_backend) /usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py in create_engine_config(self) 557 def create_engine_config(self, ) -> EngineConfig: 558 device_config = DeviceConfig(self.device) --> 559 model_config = ModelConfig( 560 self.model, self.tokenizer, self.tokenizer_mode, 561 self.trust_remote_code, self.dtype, self.seed, self.revision, /usr/local/lib/python3.10/dist-packages/vllm/config.py in __init__(self, model, tokenizer, tokenizer_mode, trust_remote_code, dtype, seed, revision, code_revision, rope_scaling, tokenizer_revision, max_model_len, quantization, quantization_param_path, enforce_eager, max_context_len_to_capture, max_seq_len_to_capture, max_logprobs, disable_sliding_window, skip_tokenizer_init, served_model_name) 141 self._verify_tokenizer_mode() 142 self._verify_embedding_mode() --> 143 self._verify_quantization() 144 self._verify_cuda_graph() 145 /usr/local/lib/python3.10/dist-packages/vllm/config.py in _verify_quantization(self) 201 if self.quantization is not None: 202 if self.quantization not in supported_quantization: --> 203 raise ValueError( 204 f"Unknown quantization method: {self.quantization}. Must " 205 f"be one of {supported_quantization}.") ValueError: Unknown quantization method: bitsandbytes. Must be one of ['aqlm', 'awq', 'deepspeedfp', 'fp8', 'marlin', 'gptq_marlin_24', 'gptq_marlin', 'gptq', 'squeezellm', 'sparseml']. How can I load a quantized finetuned model on vLLM?
Unfortunately vLLM does not support the bitsandbytes quantization technique yet. You may want to use Mixtral-8x7B-Instruct-v0.1-GPTQ though, as the GPTQ and AWQ quantization techniques are already supported.
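A rough sketch of that route, assuming the merged model has first been quantized with GPTQ/AWQ tooling (e.g. AutoGPTQ or AutoAWQ) and pushed to the Hub — the repository id below is only a placeholder, not a real checkpoint:

from vllm import LLM, SamplingParams

# vLLM can usually infer the method from the checkpoint's quantization config,
# but it can also be stated explicitly via the `quantization` argument.
llm = LLM(
    model="your-org/Meta-Llama-3-8B-Instruct-pqa-GPTQ",  # placeholder repo id
    quantization="gptq",   # or "awq" for an AWQ checkpoint
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What is the capital of France?"], params)
print(outputs[0].outputs[0].text)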
2
0
78,607,139
2024-6-11
https://stackoverflow.com/questions/78607139/how-to-calculate-and-plot-prediction-and-confidence-intervals-for-linear-regress
I need to plot prediction and confidence intervals and need to use python and only the following packages. How do I plot both intervals in the same model. I managed to plot the prediction intervals with the help of ChatGPT. this is the code including the data set. import statsmodels.api as sm import statsmodels.formula.api as smf import numpy as np import pandas as pd import matplotlib.pyplot as plt #dataframe data = { 'X': [55641, 55681, 55637, 55825, 55772, 55890, 56068, 56299, 56825, 57205, 57562, 57850, 57975, 57992, 58240, 58414, 58561, 59066, 58596, 58631, 58758, 59037], 'Y': [21886, 21934, 21699, 21901, 21812, 21714, 21932, 22086, 22265, 22551, 22736, 22301, 22518, 22580, 22618, 22890, 23112, 23315, 22865, 22788, 22949, 23149] } df = pd.DataFrame(data) #OLS model = smf.ols(formula='Y ~ X', data=df) results = model.fit() print(results.summary()) #calculating prediction intevals predictions = results.get_prediction(df) prediction_summary_frame = predictions.summary_frame(alpha=0.05) #data points plt.scatter(df['X'], df['Y'], color='black', label='Data') #regression line plt.plot(df['X'], results.fittedvalues, color='#58C9F4', label='Regression Line') #prediction invterval plt.fill_between(df['X'], prediction_summary_frame['obs_ci_lower'], prediction_summary_frame['obs_ci_upper'], color='grey', alpha=0.2, label='95% Prediction Interval') plt.xlabel('X') plt.ylabel('Y') plt.title('Linear Regression with Prediction Intervals') plt.legend() plt.show() How should I continue to be able to plot both intervals?
Explanation: The summary_frame method provides both the prediction intervals (obs_ci_lower, obs_ci_upper) and the confidence intervals (mean_ci_lower, mean_ci_upper). The fill_between function is used twice to plot both intervals: once for the prediction interval and once for the confidence interval. import statsmodels.api as sm import statsmodels.formula.api as smf import numpy as np import pandas as pd import matplotlib.pyplot as plt # Dataframe data = { 'X': [55641, 55681, 55637, 55825, 55772, 55890, 56068, 56299, 56825, 57205, 57562, 57850, 57975, 57992, 58240, 58414, 58561, 59066, 58596, 58631, 58758, 59037], 'Y': [21886, 21934, 21699, 21901, 21812, 21714, 21932, 22086, 22265, 22551, 22736, 22301, 22518, 22580, 22618, 22890, 23112, 23315, 22865, 22788, 22949, 23149] } df = pd.DataFrame(data) # OLS model = smf.ols(formula='Y ~ X', data=df) results = model.fit() print(results.summary()) # Calculating prediction intervals predictions = results.get_prediction(df) prediction_summary_frame = predictions.summary_frame(alpha=0.05) # Calculating confidence intervals confidence_intervals = results.conf_int(alpha=0.05) # Data points ax = df.plot(kind='scatter', x='X', y='Y', color='black', label='Data') # Regression line ax.plot(df['X'], results.fittedvalues, color='#58C9F4', label='Regression Line') # Prediction interval ax.fill_between(df['X'], prediction_summary_frame['obs_ci_lower'], prediction_summary_frame['obs_ci_upper'], color='grey', alpha=0.2, label='95% Prediction Interval') # Confidence interval ax.fill_between(df['X'], prediction_summary_frame['mean_ci_lower'], prediction_summary_frame['mean_ci_upper'], color='blue', alpha=0.2, label='95% Confidence Interval') ax.set(xlabel='X', ylabel='Y', title='Linear Regression with Prediction and Confidence Intervals') ax.legend() plt.show() OLS Regression Results ============================================================================== Dep. Variable: Y R-squared: 0.919 Model: OLS Adj. R-squared: 0.915 Method: Least Squares F-statistic: 227.5 Date: Tue, 11 Jun 2024 Prob (F-statistic): 2.17e-12 Time: 06:30:58 Log-Likelihood: -140.06 No. Observations: 22 AIC: 284.1 Df Residuals: 20 BIC: 286.3 Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 559.4600 1450.698 0.386 0.704 -2466.642 3585.562 X 0.3815 0.025 15.084 0.000 0.329 0.434 ============================================================================== Omnibus: 0.314 Durbin-Watson: 1.479 Prob(Omnibus): 0.855 Jarque-Bera (JB): 0.390 Skew: -0.242 Prob(JB): 0.823 Kurtosis: 2.562 Cond. No. 2.64e+06 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 2.64e+06. This might indicate that there are strong multicollinearity or other numerical problems. In the original code, the intercept at x = 0 is correct, but to visually represent it, the x-axis limits need to be expanded. Additionally, the slope appears distorted due to an unequal aspect ratio between the X and Y axes, causing a misrepresentation of the linear relationship. 
# OLS model = smf.ols(formula='Y ~ X', data=df) results = model.fit() # Extract coefficients intercept = results.params['Intercept'] slope = results.params['X'] # Calculate regression line values x_values = np.linspace(0, 59037, 100) y_values = intercept + slope * x_values # Calculating prediction intervals predictions = results.get_prediction(df) prediction_summary_frame = predictions.summary_frame(alpha=0.05) # Calculating confidence intervals confidence_intervals = results.conf_int(alpha=0.05) # Data points ax = df.plot(kind='scatter', x='X', y='Y', color='black', label='Data', s=1, figsize=(12, 10)) # Regression line ax.plot(x_values, y_values, color='#58C9F4', label='Regression Line') # Adding the point where the regression line intersects the y-axis ax.scatter(0, intercept, color='red', label='Y-intercept (x=0)', s=2) ax.annotate(f'(0, {intercept:.2f})', xy=(0, intercept), xytext=(2000, intercept + 1000), arrowprops=dict(facecolor='black', shrink=0.05)) # Annotating the regression line with the slope ax.text(x_values[len(x_values)//2], y_values[len(y_values)//2], f'Slope: {slope:.2f}', fontsize=12, verticalalignment='bottom') # Prediction interval ax.fill_between(df['X'], prediction_summary_frame['obs_ci_lower'], prediction_summary_frame['obs_ci_upper'], color='grey', alpha=0.2, label='95% Prediction Interval') # Confidence interval ax.fill_between(df['X'], prediction_summary_frame['mean_ci_lower'], prediction_summary_frame['mean_ci_upper'], color='blue', alpha=0.2, label='95% Confidence Interval') # Setting equal aspect ratio ax.set_aspect('equal', 'box') ax.set(xlabel='X', ylabel='Y', title='Linear Regression with Prediction and Confidence Intervals') ax.legend(bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) plt.show()
2
4
78,599,664
2024-6-9
https://stackoverflow.com/questions/78599664/after-pushing-log-out-button-the-app-is-forwarding-me-to-empty-page
After pushing log out button the app is forwarding me to https://it-company-task-manager-uzrc.onrender.com/accounts/login/ But there is an empty page and after log out I'm still logged in and can do things (create task etc) I tried to solve it in different ways, but nothing changes. You can check it by yourself: https://it-company-task-manager-uzrc.onrender.com sidebar.html: <li class=""> <a href="{% url 'login' %}" class="icon"> <i class="fa-solid fa-right-from-bracket"></i> <span class="ml-4">Log out</span> </a> </li> login.html: {% extends "base.html" %} {% load static %} {% block login %} <section class="login-content"> <div class="container"> <div class="row align-items-center justify-content-center height-self-center"> <div class="col-lg-8"> <div class="card auth-card"> <div class="card-body p-0"> <div class="d-flex align-items-center auth-content"> <div class="col-lg-6 bg-primary content-left"> <div class="p-3"> <h2 class="mb-2 text-white">Sign In</h2> <p>Login to stay connected.</p> {% if form.non_field_errors %} {{ form.non_field_errors }} {% endif %} <form action="{% url 'login' %}" method="post"> {% csrf_token %} <div class="row"> <div class="col-lg-12"> <div class="floating-label form-group"> {% if form.username.errors %} {{ form.username.errors }} {% endif %} <input class="floating-input form-control" type="text" name="username" placeholder="Username"> </div> </div> <div class="col-lg-12"> <div class="floating-label form-group"> {% if form.password.errors %} {{ form.password.errors }} {% endif %} <input class="floating-input form-control" type="password" name="password" placeholder="Password"> </div> </div> </div> <input type="hidden" name="next" value="{{next}}" /> <button type="submit" class="btn btn-white">Sign In</button> </form> </div> </div> <div class="col-lg-6 content-right"> <img src="{% static 'images/login/01.png' %}" class="img-fluid image-right" alt=""> </div> </div> </div> </div> </div> </div> </div> </section> {% endblock %} settings: LOGIN_REDIRECT_URL = "/" LOGOUT_REDIRECT_URL = "/accounts/login/" urls: from django.contrib import admin from django.urls import path, include from django.conf import settings from django.conf.urls.static import static from django.views.generic import RedirectView urlpatterns = [ path("admin/", admin.site.urls), path("accounts/", include("django.contrib.auth.urls")), path("", RedirectView.as_view(url="tasks/", permanent=True)), path("tasks/", include("task.urls", namespace="task")), path("employees/", include("employee.urls", namespace="employee")) ] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
If you're using Django 5, it's possible you're running into the same issue that is addressed in these questions: Django built in Logout view `Method Not Allowed (GET): /users/logout/` Problem with Django class-based LogoutView in Django 5.0 Essentially, you can no longer access the built-in logout view with a GET request; you have to do it with a POST request. So, like willeM_ Van Onsem suggests in the first link above, you need to wrap your logout button in a small form: <form method="post" action="{% url 'logout' %}"> {% csrf_token %} <button type="submit">logout</button> </form> In my Django app, a POST request to the Django-standard '/logout' endpoint will log the user out and redirect to a 'logged_out.html' template that you might already have with the rest of your registration templates (or you might be using the one that already comes with Django).
2
1
78,610,026
2024-6-11
https://stackoverflow.com/questions/78610026/dirichlet-boundary-conditions-using-odeint
I am trying to edit the Gray-Scott 1D equation example (last example on the page) in the odeint documentation. I have the code below and it works for the Neumann boundary conditions but I want a Dirichlet boundary condition on the left at x=0 and retain Neumann on the right at x=L. I tried to reduce the dimension of y0 by 2 as that should remove 2 ODEs. But I was just repeating the same problem with 1 less interior point and getting the same answer, which makes sense since I didn't have any place where I fixed the left boundary values. import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import time from scipy.integrate import odeint def G(u, v, f, k): return f * (1 - u) - u*v**2 def H(u, v, f, k): return -(f + k) * v + u*v**2 def grayscott1d(y, t, f, k, Du, Dv, dx): """ Differential equations for the 1-D Gray-Scott equations. The ODEs are derived using the method of lines. """ # The vectors u and v are interleaved in y. We define # views of u and v by slicing y. u = y[::2] v = y[1::2] # dydt is the return value of this function. dydt = np.empty_like(y) # Just like u and v are views of the interleaved vectors # in y, dudt and dvdt are views of the interleaved output # vectors in dydt. dudt = dydt[::2] dvdt = dydt[1::2] # Compute du/dt and dv/dt. The end points and the interior points # are handled separately. dudt[0] = G(u[0], v[0], f, k) + Du * (-2.0*u[0] + 2.0*u[1]) / dx**2 dudt[1:-1] = G(u[1:-1], v[1:-1], f, k) + Du * np.diff(u,2) / dx**2 dudt[-1] = G(u[-1], v[-1], f, k) + Du * (- 2.0*u[-1] + 2.0*u[-2]) / dx**2 dvdt[0] = H(u[0], v[0], f, k) + Dv * (-2.0*v[0] + 2.0*v[1]) / dx**2 dvdt[1:-1] = H(u[1:-1], v[1:-1], f, k) + Dv * np.diff(v,2) / dx**2 dvdt[-1] = H(u[-1], v[-1], f, k) + Dv * (-2.0*v[-1] + 2.0*v[-2]) / dx**2 return dydt rng = np.random.default_rng(0) num_discretize_x = 100 #2500 gives their solution y0 = rng.standard_normal(2*num_discretize_x) y0 = np.ones(2*num_discretize_x)*3 t = np.linspace(0, 4, 100) f = 0.024 k = 0.055 Du = 0.01 Dv = 0.005 dx = 0.01 x = np.linspace(0, 1, num_discretize_x) t0 = time.time() #sola = odeint(grayscott1d, y0, t, args=(f, k, Du, Dv, dx)) #t1 = time.time() solb = odeint(grayscott1d, y0, t, args=(f, k, Du, Dv, dx), ml=2, mu=2) print('solb shape: ', solb.shape) t2 = time.time() #print(f'No banding takes {t1-t0} s, while banding takes {t2-t1} s.') u = solb[:,::2] v = solb[:,1::2] print(u.T) t_grid, x_grid = np.meshgrid(t, x) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(t_grid, x_grid, u.T) plt.xlabel('$t$') plt.ylabel('$x$') plt.show() plt.close('all') plt.plot(t, u[:,-1], label='$u(x=L)$') plt.plot(t, v[:,-1], label='$v(x=L)$') plt.xlabel('$t$') plt.ylabel('$u$ or $v$') plt.legend(loc=0) plt.show()
You can fix the value of u and v at x = 0 and then keep the Neumann conditions at x = L: Code: import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import time from scipy.integrate import odeint def G(u, v, f, k): return f * (1 - u) - u * v**2 def H(u, v, f, k): return -(f + k) * v + u * v**2 def grayscott1d(y, t, f, k, Du, Dv, dx, u0_fixed=1.0, v0_fixed=0.5): u, v = y[::2], y[1::2] dydt = np.empty_like(y) dudt, dvdt = dydt[::2], dydt[1::2] dudt[0], dvdt[0] = 0, 0 u[0], v[0] = u0_fixed, v0_fixed dudt[1:-1] = G(u[1:-1], v[1:-1], f, k) + Du * np.diff(u, 2) / dx**2 dudt[-1] = G(u[-1], v[-1], f, k) + Du * (-2.0 * u[-1] + 2.0 * u[-2]) / dx**2 dvdt[1:-1] = H(u[1:-1], v[1:-1], f, k) + Dv * np.diff(v, 2) / dx**2 dvdt[-1] = H(u[-1], v[-1], f, k) + Dv * (-2.0 * v[-1] + 2.0 * v[-2]) / dx**2 return dydt rng = np.random.default_rng(0) num_discretize_x = 100 y0 = rng.standard_normal(2 * num_discretize_x) y0 = np.ones(2 * num_discretize_x) * 3 t = np.linspace(0, 4, 100) f = 0.024 k = 0.055 Du = 0.01 Dv = 0.0005 dx = 0.01 x = np.linspace(0, 1, num_discretize_x) t0 = time.time() solb = odeint(grayscott1d, y0, t, args=(f, k, Du, Dv, dx), ml=2, mu=2) print('solb shape: ', solb.shape) t2 = time.time() u = solb[:, ::2] v = solb[:, 1::2] print(u.T) t_grid, x_grid = np.meshgrid(t, x) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(t_grid, x_grid, u.T) plt.xlabel('$t$') plt.ylabel('$x$') plt.show() plt.close('all') plt.plot(t, u[:, -1], label='$u(x=L)$') plt.plot(t, v[:, -1], label='$v(x=L)$') plt.xlabel('$t$') plt.ylabel('$u$ or $v$') plt.legend(loc=0) plt.show() Prints solb shape: (100, 200) [[3.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00 1.00000000e+00 1.00000000e+00] [3.00000000e+00 1.17379393e+00 8.52324657e-01 ... 3.41682696e-01 3.41131018e-01 3.40594796e-01] [3.00000000e+00 1.34513346e+00 7.43775028e-01 ... 9.71922350e-02 9.68656847e-02 9.65485574e-02] ... [3.00000000e+00 1.77119093e+00 6.99074218e-01 ... 1.18100847e-03 1.18800490e-03 1.19504096e-03] [3.00000000e+00 1.77119093e+00 6.99074218e-01 ... 1.18100847e-03 1.18800490e-03 1.19504096e-03] [3.00000000e+00 1.77119093e+00 6.99074218e-01 ... 1.18100847e-03 1.18800490e-03 1.19504096e-03]] Comment: @Warren Weckesser: Instead of using the parameters u0_fixed and v0_fixed and setting u[0] and v[0] in each call of grayscott1d(), you can set the corresponding initial conditions to those values (i.e. y0[:2] = [u0_fixed, v0_fixed]. Then, since dudt[0] = 0 and dvdt[0] = 0, the solution will remain constant at those values. 
Code: import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import time from scipy.integrate import odeint def G(u, v, f, k): return f * (1 - u) - u * v**2 def H(u, v, f, k): return -(f + k) * v + u * v**2 def grayscott1d(y, t, f, k, Du, Dv, dx): u, v = y[::2], y[1::2] dydt = np.empty_like(y) dudt, dvdt = dydt[::2], dydt[1::2] dudt[0], dvdt[0] = 0, 0 dudt[1:-1] = G(u[1:-1], v[1:-1], f, k) + Du * np.diff(u, 2) / dx**2 dudt[-1] = G(u[-1], v[-1], f, k) + Du * (-2.0 * u[-1] + 2.0 * u[-2]) / dx**2 dvdt[1:-1] = H(u[1:-1], v[1:-1], f, k) + Dv * np.diff(v, 2) / dx**2 dvdt[-1] = H(u[-1], v[-1], f, k) + Dv * (-2.0 * v[-1] + 2.0 * v[-2]) / dx**2 return dydt rng = np.random.default_rng(0) num_discretize_x = 100 y0 = rng.standard_normal(2 * num_discretize_x) y0 = np.ones(2 * num_discretize_x) * 3 y0[0], y0[1] = 1.0, 0.5 t = np.linspace(0, 4, 100) f = 0.024 k = 0.055 Du = 0.01 Dv = 0.005 dx = 0.01 x = np.linspace(0, 1, num_discretize_x) solb = odeint(grayscott1d, y0, t, args=(f, k, Du, Dv, dx), ml=2, mu=2) print('solb shape: ', solb.shape) u = solb[:, ::2] v = solb[:, 1::2] print(u.T) t_grid, x_grid = np.meshgrid(t, x) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(t_grid, x_grid, u.T) plt.xlabel('$t$') plt.ylabel('$x$') plt.show() plt.close('all') plt.plot(t, u[:, -1], label='$u(x=L)$') plt.plot(t, v[:, -1], label='$v(x=L)$') plt.xlabel('$t$') plt.ylabel('$u$ or $v$') plt.legend(loc=0) plt.show()
2
2
78,609,465
2024-6-11
https://stackoverflow.com/questions/78609465/animating-yearly-data-from-pandas-in-geopandas-with-matplotlib-funcanimation
Using this dataset of % change by state, I have merged it with a cartographic boundary map of US states from the Census department: https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip df.head() Year 2017 2018 2019 2020 2021 2022 2023 State Alabama 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Arizona 0.24 0.00 0.03 -0.15 0.56 -0.36 0.21 Arkansas 0.35 -0.06 -0.03 0.03 -0.00 -0.13 -0.02 California 0.13 0.07 -0.03 0.04 0.21 -0.10 0.03 Colorado 0.81 -0.18 -0.01 -0.05 0.10 -0.03 -0.51 I would like to cycle through the columns (years) in a FuncAnimation after the boundaries have been plotted, and I am not quite sure how to go about it. The lifecycle of a plot in official reference manual cites relevant examples, but all deal with built-in figures, and not shape files. Here is a related answer that seems exactly like what I'm missing, but deals with only (x, y) line graph: How to keep shifting the X axis and show the more recent data using matplotlib.animation in Python? How do I extrapolate column outside of calling shape.plot()? code: shape = gpd.read_file(shapefile) years = dfc.columns # dfc = % change df tspan = len(dfc.columns) """ merge map with dataframe on state name column """ shape = pd.merge( left=shape, right=dfc, left_on='NAME', right_on='State', how='right' ) """ init pyplot 'OO method' """ fig, ax = plt.subplots(figsize=(10, 5)) """ draw shape boundary """ ax = shape.boundary.plot( ax=ax, edgecolor='black', linewidth=0.3, ) """ plot shape """ ax = shape.plot( ax=ax, column=year, # what I need access to legend=True, cmap='RdBu_r', legend_kwds={'shrink': 0.3, 'orientation': 'horizontal', 'format': '%.0f'}) """ cycle through columns -- not operable yet """ def animate(year): ax.clear() ax.shape.column(year) animation = FuncAnimation(states, animate, frames=(dfc.columns[0], dfc.columns[tspan] + 1, 1), repeat=True, interval=1000) I really haven't found anything online dealing with these cartographic boundary maps specifically I have tried the most obvious things I could think of: Putting the entire shape.plot() method into animate() I tried a for loop cycling the years, which resulted in 7 distinct maps. Each iteration lost the attributes I set in shape.boundary.plot() Edit: Since I've converted the original procedural example into the OO format, I am starting to have new questions about what might be done. If ax = shape.plot(ax=ax), is there some kind of getter/setter, for previously defined attributes? e.g. ax.set_attr = column=year (will scour manual immediately after I finish this) Is there a way to define the map's boundary lines, shown here with shape.plot() and shape.boundary.plot(), using the fig, instead of ax (ax = shape.plot())? Barring that, could we have shape.plot() and shape.boundary.plot() persist to the first subplot axs[0] and have columns of data shown using subsequent overlapping subplots axs[n == year]? Any iterative process I've seen so far has lost the boundary attributes, so that's been a big sticking point for me.
In the following animation, only states in data are plotted since how='right' is used for pd.merge. Tested in python v3.12.3, geopandas v0.14.4, matplotlib v3.8.4. import geopandas as gpd import pandas as pd import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation, PillowWriter # Sample data data = { 'State': ['Alabama', 'Arizona', 'Arkansas', 'California', 'Colorado'], '2017': [0.00, 0.24, 0.35, 0.13, 0.81], '2018': [0.00, 0.00, -0.06, 0.07, -0.18], '2019': [0.00, 0.03, -0.03, -0.03, -0.01], '2020': [0.00, -0.15, 0.03, 0.04, -0.05], '2021': [0.00, 0.56, -0.00, 0.21, 0.10], '2022': [0.00, -0.36, -0.13, -0.10, -0.03], '2023': [0.00, 0.21, -0.02, 0.03, -0.51], } df = pd.DataFrame(data) # Load the shapefile shape = gpd.read_file('cb_2018_us_state_500k.shp') # Merge the shape data with the dataframe shape = pd.merge( left=shape, right=df, left_on='NAME', right_on='State', how='right' ) # Initialize the plot fig, ax = plt.subplots(figsize=(10, 5)) # Set fixed axis limits xlim = (shape.total_bounds[0], shape.total_bounds[2]) ylim = (shape.total_bounds[1], shape.total_bounds[3]) ax.set_xlim(xlim) ax.set_ylim(ylim) # Plot initial boundaries boundary = shape.boundary.plot(ax=ax, edgecolor='black', linewidth=0.3) # Initialize the colorbar variable with a fixed normalization norm = plt.Normalize(vmin=df.iloc[:, 1:].min().min(), vmax=df.iloc[:, 1:].max().max()) sm = plt.cm.ScalarMappable(cmap='RdBu_r', norm=norm) sm.set_array([]) # Only needed for adding the colorbar colorbar = fig.colorbar(sm, ax=ax, orientation='horizontal', shrink=0.5, format='%.2f') # Function to update the plot for each year def animate(year): ax.clear() # Set the fixed axis limits ax.set_xlim(xlim) ax.set_ylim(ylim) # Plot initial boundaries boundary = shape.boundary.plot(ax=ax, edgecolor='black', linewidth=0.3) # Plot the data for the current year shape.plot( ax=ax, column=year, legend=False, cmap='RdBu_r', norm=norm ) # Add year annotation at the top ax.annotate(f'Year: {year}', xy=(0.5, 1.05), xycoords='axes fraction', fontsize=12, ha='center') # Create the animation years = df.columns[1:] # Skip the 'State' column animation = FuncAnimation(fig, animate, frames=years, repeat=False, interval=1000) # Save the animation as a GIF writer = PillowWriter(fps=1) animation.save('us_states_animation.gif', writer=writer) # Show the plot plt.show() Note: Segmentation of the colorbar is an artifact of the .gif format and is not present when running the animation. Save the file as a .mp4, which doesn't display segmentation in the colorbar. Download FFmped from FFmpeg download page, extract the archive, and add the bin folder path to the Path variable in 'System Variables'. from matplotlib.animation import FuncAnimation, FFMpegWriter import matplotlib as mpl # Set the path to the ffmpeg executable mpl.rcParams['animation.ffmpeg_path'] = r'C:\FFmpeg\bin\ffmpeg.exe' # Replace this with the correct path to your ffmpeg executable ... # Save the animation as an MP4 writer = FFMpegWriter(fps=1, metadata=dict(artist='Me'), bitrate=1800) animation.save('us_states_animation.mp4', writer=writer) # Show the plot plt.show()
4
4
78,608,557
2024-6-11
https://stackoverflow.com/questions/78608557/color-coded-time-series-plot-based-on-value-sign
I have a dataframe containing positive,negative and zero values. import pandas as pd import numpy as np import matplotlib.pyplot as plt df1 = pd.DataFrame({'A': [-1, 3, 9, 5, 0, -1, -1], 'B': [4, -5, 5, 7, 9, 8, 6], 'C': [7, 5, -3,-1, 5, 9, 3], 'D': [-3, 6, 7, 4, 0, -2, -1], 'date':['01-02-2020', '01-06-2020', '01-03-2021', '01-05-2021', '01-10-2021', '01-03-2022', '01-08-2022']}) # make sure the time column is actually time format df1['date']=pd.to_datetime(df1['date']) # set time as the index df1.set_index('date',inplace=True) df1 I want to show the month and year on the X-axis, there should be 4 scatterplots and I want the markers (i.e. scatter circles) to be 'red' for the negative values in the corresponding month/year,'green' for positive and yellow for zero, which I have tried to do with a list comprehension statement below. # to condition by colour kcolors = ['green' if s>0 else 'red' if s<0 else 'yellow' for s in df1] # this gives a TypeError: '>' not supported between instances of 'str' and 'int' # I have also tried # kcolors = ['#00ff41' if s>0 else '#65000B' if s<0 else '#F5D300' for s in df1] # plot fig, ax = plt.subplots() df1.plot(kind='scatter', marker='o', color=kcolors, ax=ax) ax.figure.autofmt_xdate(rotation=45, ha='center') plt.legend(loc='best') plt.show() How can I achieve this?
The original post requested the use of a line plot with markers but was changed because a comment suggested that wasn't possible. The following answer color-codes the markers, and the line from its marker. Tested in python v3.12.3, pandas v2.2.2, matplotlib v3.8.4. import pandas as pd import matplotlib.pyplot as plt # Create the DataFrame df1 = pd.DataFrame({ 'A': [-1, 3, 9, 5, 0, -1, -1], 'B': [4, -5, 5, 7, 9, 8, 6], 'C': [7, 5, -3, -1, 5, 9, 3], 'D': [-3, 6, 7, 4, 0, -2, -1], 'date': ['01-02-2020', '01-06-2020', '01-03-2021', '01-05-2021', '01-10-2021', '01-03-2022', '01-08-2022'] }) # Convert the date column to datetime format df1['date'] = pd.to_datetime(df1['date']) # Set the date column as the index df1.set_index('date', inplace=True) # Create a 2x2 subplot grid fig, axs = plt.subplots(2, 2, figsize=(14, 10)) axs = axs.flatten() # Flatten the 2x2 array of axes for easy iteration # Loop through each column and its corresponding subplot axis for ax, col in zip(axs, df1.columns): # Loop through each pair of consecutive points for i in range(len(df1)-1): x = df1.index[i:i+2] # Get the two consecutive dates y = df1[col].iloc[i:i+2] # Get the two consecutive values for the current column # Determine the color based on the value if y.iloc[0] > 0: color = 'green' elif y.iloc[0] < 0: color = 'red' else: color = 'yellow' # Plot the segment with the determined color ax.plot(x, y, marker='o', color=color) # Set the title of the subplot to the column name ax.set_title(f'{col}') # Rotate and format the x-axis dates ax.figure.autofmt_xdate(rotation=45, ha='center') # Adjust layout to prevent overlap plt.tight_layout() # Display the plot plt.show() Similar Questions: color line by "value" How to plot one line in different colors Different color for line depending on corresponding values
2
2
78,608,434
2024-6-11
https://stackoverflow.com/questions/78608434/cartesian-coordinates-to-label
Using coordinates for labelling? I was asked it was possible and to see for myself, I am trying to program it and read up more on it. I do not know what it is called or where to look but the general idea is as follows: labels are converted into an N-dimensional space and trajectories are calculated along the N-dimensional space. Based on the direction, a label is assigned with a confidence interval. The data basic_data = [ {"label":"First-Person RPG", "Tags":["Open-world", "Fantasy", "Adventure", "Single-player", "Exploration", "Dragons", "Crafting", "Magic", "Story-rich", "Moddable"]}, {"label":"Action RPG", "Tags":["Open-world", "Fantasy", "Story-rich", "Adventure", "Single-player", "Monsters", "Crafting", "Horse-riding", "Magic", "Narrative"]}, {"label":"Adventure", "Tags":["Difficult", "Dark Fantasy", "Action", "Single-player", "Exploration", "Lore-rich", "Combat", "Permadeath", "Monsters", "Atmospheric"]}, {"label":"Party Game", "Tags":["Multiplayer", "Social Deduction", "Indie", "Strategy", "Casual", "Space", "Deception", "Survival", "Teams", "Interactive"]} ] code for the first part below mlb = MultiLabelBinarizer() for idx, data in enumerate(basic_data): basic_data[idx]["tag_str"] = ",".join(data["Tags"]) pd_basic_data: pd.DataFrame = pd.DataFrame(basic_data) tags: List = [str(pd_basic_data.loc[i,'tag_str']).split(',') for i in range(len(pd_basic_data))] mlb_result = mlb.fit_transform(tags) df_final: pd.DataFrame = pd.concat([pd_basic_data['label'],pd.DataFrame(mlb_result,columns=list(mlb.classes_))],axis=1) a simple one word answer telling the theory works as well for an answer. I just need to know where to look.
You are most probably referring to something called Embedding and uses dimensionality reduction techniques, such as PCA. Those are often used in ML for tasks such as classification and clustering. If I were you, I would investigate Word Embeddings first: Word2Vec from the modeule gensim.models is a really good candidate. Basically, you converts words into continuous vector space and preserve contextual relationships. Here is an example of how to do this: import pandas as pd from gensim.models import Word2Vec from sklearn.decomposition import PCA import matplotlib.pyplot as plt basic_data = [ {"label": "First-Person RPG", "Tags": ["Open-world", "Fantasy", "Adventure", "Single-player", "Exploration", "Dragons", "Crafting", "Magic", "Story-rich", "Moddable"]}, {"label": "Action RPG", "Tags": ["Open-world", "Fantasy", "Story-rich", "Adventure", "Single-player", "Monsters", "Crafting", "Horse-riding", "Magic", "Narrative"]}, {"label": "Adventure", "Tags": ["Difficult", "Dark Fantasy", "Action", "Single-player", "Exploration", "Lore-rich", "Combat", "Permadeath", "Monsters", "Atmospheric"]}, {"label": "Party Game", "Tags": ["Multiplayer", "Social Deduction", "Indie", "Strategy", "Casual", "Space", "Deception", "Survival", "Teams", "Interactive"]} ] sentences = [data["Tags"] for data in basic_data] model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1) tag_embeddings = {tag: model.wv[tag] for tag in model.wv.index_to_key} tag_vectors = [tag_embeddings[tag] for tag in model.wv.index_to_key] tag_labels = list(tag_embeddings.keys()) pca = PCA(n_components=2) pca_result = pca.fit_transform(tag_vectors) plt.figure(figsize=(12, 8)) plt.scatter(pca_result[:, 0], pca_result[:, 1]) for i, tag in enumerate(tag_labels): plt.annotate(tag, (pca_result[i, 0], pca_result[i, 1])) plt.xlabel('PCA Component 1') plt.ylabel('PCA Component 2') plt.title('Tag Embeddings Visualized with PCA') plt.show() which gives you (here you have your coorinates) Note that this involves a PCA. It makes sense since you need to reduce dimensionality and keep contextual relationships between words. 
Another nother alterantive is FastText, also from gensim.models: import pandas as pd import numpy as np from gensim.models import FastText from sklearn.decomposition import PCA import matplotlib.pyplot as plt basic_data = [ {"label": "First-Person RPG", "Tags": ["Open-world", "Fantasy", "Adventure", "Single-player", "Exploration", "Dragons", "Crafting", "Magic", "Story-rich", "Moddable"]}, {"label": "Action RPG", "Tags": ["Open-world", "Fantasy", "Story-rich", "Adventure", "Single-player", "Monsters", "Crafting", "Horse-riding", "Magic", "Narrative"]}, {"label": "Adventure", "Tags": ["Difficult", "Dark Fantasy", "Action", "Single-player", "Exploration", "Lore-rich", "Combat", "Permadeath", "Monsters", "Atmospheric"]}, {"label": "Party Game", "Tags": ["Multiplayer", "Social Deduction", "Indie", "Strategy", "Casual", "Space", "Deception", "Survival", "Teams", "Interactive"]} ] sentences = [data["Tags"] for data in basic_data] model = FastText(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=10) tag_embeddings = {tag: model.wv[tag] for tag in model.wv.key_to_index} tag_vectors = np.array([tag_embeddings[tag] for tag in tag_embeddings]) tag_labels = list(tag_embeddings.keys()) pca = PCA(n_components=2) pca_result = pca.fit_transform(tag_vectors) plt.figure(figsize=(12, 8)) plt.scatter(pca_result[:, 0], pca_result[:, 1]) for i, tag in enumerate(tag_labels): plt.annotate(tag, (pca_result[i, 0], pca_result[i, 1])) plt.xlabel('PCA Component 1') plt.ylabel('PCA Component 2') plt.title('Tag Embeddings Visualized with PCA') plt.show() with tag_vector given by array([[ 3.5692262e-03, 1.5384286e-03, 1.7154109e-03, ..., 5.1359989e-04, 1.0005912e-03, -1.4637399e-03], [-3.5827586e-03, 2.6330323e-04, 2.4984824e-04, ..., 2.1814678e-03, -7.5217336e-06, 2.8979264e-03], [-1.4136693e-03, -1.3430609e-03, -1.2442525e-03, ..., 2.1025788e-03, 3.1783513e-04, -1.0448305e-05], ..., [-3.3974617e-03, 4.9481675e-04, -2.5317934e-04, ..., -1.1619454e-03, 1.1570274e-03, -2.4804280e-03], [ 1.7241882e-03, 9.6893904e-04, -2.9550551e-04, ..., -1.6130345e-04, -1.8300014e-03, -8.8712422e-04], [ 3.8428712e-04, -6.7049061e-04, -2.3678755e-03, ..., 1.6739646e-03, -2.6099158e-03, 2.2148804e-03]], dtype=float32) There are of course other methods, and my tip to you would be to look for references on Word-embedding and understand dimensionality reduction techniques, such as principal component analysis. Note also that depending on the technique you choose, you'll get different results. Look into what these technique actually do.
2
2
78,598,087
2024-6-9
https://stackoverflow.com/questions/78598087/creating-blended-transform-with-identical-data-dependent-scaling
I am trying to create a circle that displays a circle regardless of axis scaling, but placed in data coordinates and whose radius is dependent on the scaling of the y-axis. Based on the transforms tutorial, and more specifically the bit about plotting in physical coordinates, I need a pipeline that looks like this: from matplotlib import pyplot as plt, patches as mpatch, transforms as mtrans fig, ax = plt.subplots() x, y = 5, 10 r = 3 transform = fig.dpi_scale_trans + fig_to_data_scaler + mtrans.ScaledTranslation(x, y, ax.transData) ax.add_patch(mpatch.Circle((0, 0), r, edgecolor='k', linewidth=2, facecolor='w', transform=t)) The goal is to create a circle that's scaled correctly at the figure level, scale it to the correct height, and then move it in data coordinates. fig.dpi_scale_trans and mtrans.ScaledTranslation(x, y, ax.transData) work as expected. However, I am unable to come up with an adequate definition for fig_to_data_scaler. It is pretty clear that I need a blended transformation that takes the y-scale from ax.transData combined with fig.dpi_scale_trans (inverted?) and then uses the same values for x, regardless of data transforms. How do I do that? Another reference that I looked at: https://stackoverflow.com/a/56079290/2988730. Here's a transform graph I've attempted to construct unsuccessfully: vertical_scale_transform = mtrans.blended_transform_factory(mtrans.IdentityTransform(), fig.dpi_scale_trans.inverted() + mtrans.AffineDeltaTransform(ax.transData)) reflection = mtrans.Affine2D.from_values(0, 1, 1, 0, 0, 0) fig_to_data_scaler = vertical_scale_transform + reflection + vertical_scale_transform # + reflection, though it's optional It looks like the previous attempt was a bit over-complicated. It does not matter what the figure aspect ratio is. The axes data transform literally handles all of that out-of-the box. The following attempt almost works. The only thing it does not handle is pixel aspect ratio: vertical_scale_transform = mtrans.AffineDeltaTransform(ax.transData) reflection = mtrans.Affine2D.from_values(0, 1, 1, 0, 0, 0) uniform_scale_transform = mtrans.blended_transform_factory(reflection + vertical_scale_transform + reflection, vertical_scale_transform) t = uniform_scale_transform + mtrans.ScaledTranslation(x, y, ax.transData) ax.add_patch(mpatch.Circle((0, 0), r, edgecolor='k', linewidth=2, facecolor='w', transform=t)) This places perfect circles at the correct locations. Panning works as expected. The only issue is that the size of the circles does not update when I zoom. Given mtrans.AffineDeltaTransform(ax.transData) on the y-axis, I find that to be surprising. I guess the updated question is then, why is the scaling part of the transform graph not updating fully when I zoom the axes?
It appears that the approach I proposed in the question is supposed to work. To create a transform that has data scaling in the y-direction and the same scaling regardless of data in the x-direction, we can do the following:
1. Create a transform that scales vertically with ax.transData
2. Create a simple reflection transform using Affine2D
3. By reflecting, applying the transform from step 1, and reflecting back, make a transform that scales the x-axis the same as y
4. Finally, add a ScaledTranslation to place the object at the correct data location
Here is the full solution:
from matplotlib import pyplot as plt, patches as mpatch, transforms as mtrans

fig, ax = plt.subplots()
x, y = 5, 10
r = 3

# AffineDeltaTransform returns just the scaling portion
vertical_scale_transform = mtrans.AffineDeltaTransform(ax.transData)
reflection = mtrans.Affine2D.from_values(0, 1, 1, 0, 0, 0)
# The first argument relies on the fact that `reflection` is its own inverse
uniform_scale_transform = mtrans.blended_transform_factory(reflection + vertical_scale_transform + reflection, vertical_scale_transform)
t = uniform_scale_transform + mtrans.ScaledTranslation(x, y, ax.transData)
# Create a circle at origin, and move it with the transform
ax.add_patch(mpatch.Circle((0, 0), r, edgecolor='k', linewidth=2, facecolor='w', transform=t))
This answer is encapsulated in a proposed gallery example: https://github.com/matplotlib/matplotlib/pull/28364
The issue with this solution at the time of writing is that AffineDeltaTransform does not update correctly when axes are zoomed or resized. That bug has been filed in matplotlib#28372 and resolved in matplotlib#28375; future versions of matplotlib will be able to run the code above interactively.
2
1
78,607,218
2024-6-11
https://stackoverflow.com/questions/78607218/cant-fix-the-same-fontsize-for-both-axis-ticks-in-a-log-plot
I'm trying to do a simple log plot using the matplotlib library, but I can't seem to get the x-axis ticks to have the same fontsize. My example code is: import numpy as np import matplotlib.pyplot as plt fontsize = 8 x = np.linspace(5.6e-5,6.5e-5, 120) y = np.logspace(-10, 10, 120) fig = plt.figure(figsize=(3.5, 2.65), constrained_layout=True) ax = fig.gca() ax.plot(x,y) plt.xlabel('X label', fontsize=fontsize) plt.ylabel('Y label', fontsize=fontsize) ax.tick_params(axis='both', labelsize=fontsize) plt.gca().set_yscale('log') plt.gca().set_xscale('log') plt.show() And the resulting plot:
The crowded labels on the x-axis belong to minor ticks, and tick_params does not affect those by default (it only acts on major ticks). If you also provide 'both' for the which parameter, as in plt.tick_params(axis='both', which='both', labelsize=fontsize), it should work.
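Applied to the script from the question, the only change needed is the which='both' argument:
import numpy as np
import matplotlib.pyplot as plt

fontsize = 8
x = np.linspace(5.6e-5, 6.5e-5, 120)
y = np.logspace(-10, 10, 120)

fig = plt.figure(figsize=(3.5, 2.65), constrained_layout=True)
ax = fig.gca()
ax.plot(x, y)
plt.xlabel('X label', fontsize=fontsize)
plt.ylabel('Y label', fontsize=fontsize)
# which='both' applies the label size to major and minor ticks alike
ax.tick_params(axis='both', which='both', labelsize=fontsize)
ax.set_yscale('log')
ax.set_xscale('log')
plt.show()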
4
3
78,604,018
2024-6-10
https://stackoverflow.com/questions/78604018/importerror-cannot-import-name-packaging-from-pkg-resources-when-trying-to
I was trying to install "causal_conv1d" using: pip install --no-cache-dir -t /scratch/ahmed/lib causal_conv1d==1.0.0 The error I got is: Collecting causal_conv1d==1.0.0 Downloading causal_conv1d-1.0.0.tar.gz (6.4 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─> [9 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-9i0wsv2k/causal-conv1d_fc0a21267f664102adca1aa336c93106/setup.py", line 19, in <module> from torch.utils.cpp_extension import ( File "/scratch/ahmed/lib/torch/utils/cpp_extension.py", line 28, in <module> from pkg_resources import packaging # type: ignore[attr-defined] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ImportError: cannot import name 'packaging' from 'pkg_resources' (/scratch/ahmed/lib/pkg_resources/__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details.
I don't know the exact cause, but the error showed up when I was using two directories for Python libraries: the default Anaconda lib plus the separate one I was installing into with -t. The problem disappeared once I used only the default Anaconda lib, and it works fine now.
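If you want to check whether two library locations are shadowing each other before cleaning up, a small diagnostic sketch like this (my illustration, not part of the original fix) can help:
import sys
import pkg_resources
import setuptools

# Show every directory Python searches; stale -t targets show up here
for p in sys.path:
    print(p)

# Show which copies actually got imported
print("pkg_resources from:", pkg_resources.__file__)
print("setuptools from:", setuptools.__file__, "version:", setuptools.__version__)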
8
0
78,606,466
2024-6-11
https://stackoverflow.com/questions/78606466/regex-pattern-to-allow-alphanumeric-and-square-brackets-with-text-insde-it
I am using regex to allow alphanumeric, underscore, hyphen and square brackets in a text box. regex i am using to validate input is r'^[a-zA-Z0-9_\-\[\] ]*$. I need to modify the regex such that if empty brackets are given it should return false. Sample cases "Your message here" - Valid "Your [text] message here" - Valid "your_text_message [text]" - valid "Your [] message here" - Invalid "[] message" - Invalid
You can add a negative lookahead that rejects any string containing empty (or whitespace-only) square brackets:
^(?!.*\[\s*\].*)[a-zA-Z0-9_\-\[\] ]*$
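A quick way to check that pattern against the sample cases from the question:
import re

pattern = re.compile(r'^(?!.*\[\s*\].*)[a-zA-Z0-9_\-\[\] ]*$')

samples = [
    "Your message here",          # valid
    "Your [text] message here",   # valid
    "your_text_message [text]",   # valid
    "Your [] message here",       # invalid: empty brackets
    "[] message",                 # invalid: empty brackets
]

for s in samples:
    print(s, "->", bool(pattern.match(s)))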
2
1
78,599,924
2024-6-9
https://stackoverflow.com/questions/78599924/how-to-diagnose-an-28x-slowdown-in-containerized-vs-host-pythonnumpy-execution
I'm doing some number-crunching in a docker container. On an 8 cpu machine the dockerized execution is around 28x slower than the host? I've examined: warm-up costs: I've tried running the test on the second execution in the same process (below the warmup costs appear negligble anyway) numpy optimization options: import numpy ; numpy.show_config() shows identical result in the container and the host cpus: os.cpu_count reports the same in the container as the host. file IO: on the second run, there is no file IO. (The first run loads data, the second had the data cached in memory) blas vs openblas installed python version (both use python 3.10) numpy version (1.22.4 for both) fastdtw version (both same) The program uses fastdtw which uses numpy internally. Here is a minimal driver: #fdtw.py # Runs in 1s on host # Runs in 28s in Docker import numpy as np from fastdtw import fastdtw import time a = np.sin(np.arange(1000)) b = np.cos(np.arange(3000)) t = time.time() for i in range(100): fastdtw(a,b) print(time.time()-t) The numpy setup and test execution in Docker. A lot of the expensive calls are implement in cython (https://github.com/slaypni/fastdtw/tree/master/fastdtw). Some not being compiled and optimized in the Docker case? How? FROM python:3.10-slim RUN apt update RUN apt-get -y install nano build-essential software-properties-common libpng-dev RUN apt-get -y install libopenblas-dev libopenblas64-0-openmp RUN apt-get -y install gfortran liblapack3 liblapack-dev # libatlas-base-dev libatlas-base-dev # libblas3 libblas-dev RUN pip3 install numpy==1.22.4 fastdtw COPY /server / RUN python -c 'import numpy ; print(numpy.__version__)' RUN python -c 'import numpy ; numpy.show_config()' RUN python -m cProfile -s cumtime /server/fdtw.py > log.txt RUN cat log.txt | head -500 RUN exit 1 The docker profile output: #12 [ 8/36] RUN python -c 'import numpy ; print(numpy.__version__)' #12 0.427 1.22.4 #12 DONE 0.5s #13 [ 9/36] RUN python -c 'import numpy ; numpy.show_config()' #13 0.611 openblas64__info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 blas_ilp64_opt_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 openblas64__lapack_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 lapack_ilp64_opt_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 Supported SIMD extensions in this NumPy install: #13 0.611 baseline = SSE,SSE2,SSE3 #13 0.611 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 #13 0.611 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL 
#13 DONE 0.7s #14 [10/36] RUN python -m cProfile -s cumtime /server/fdtw.py > log.txt #14 DONE 28.5s #15 [11/36] RUN cat log.txt | head -500 #15 0.320 27.874674558639526 #15 0.320 46359783 function calls (46356877 primitive calls) in 28.046 seconds #15 0.320 #15 0.320 Ordered by: cumulative time #15 0.320 #15 0.320 ncalls tottime percall cumtime percall filename:lineno(function) #15 0.320 112/1 0.000 0.000 28.046 28.046 {built-in method builtins.exec} #15 0.320 1 0.010 0.010 28.046 28.046 fdtw.py:1(<module>) #15 0.320 100 0.063 0.001 27.865 0.279 fastdtw.py:15(fastdtw) #15 0.320 1000/100 0.899 0.001 27.800 0.278 fastdtw.py:64(__fastdtw) #15 0.320 1000 9.697 0.010 18.775 0.019 fastdtw.py:133(__dtw) #15 0.320 900 5.837 0.006 7.933 0.009 fastdtw.py:157(__expand_window) #15 0.320 4349147 4.383 0.000 5.852 0.000 {built-in method builtins.min} #15 0.320 4348100 1.212 0.000 1.711 0.000 fastdtw.py:56(__difference) #15 0.320 13044300 1.469 0.000 1.469 0.000 fastdtw.py:143(<lambda>) #15 0.320 4349100 1.176 0.000 1.176 0.000 fastdtw.py:137(<genexpr>) #15 0.320 7089966 0.923 0.000 0.923 0.000 {method 'add' of 'set' objects} #15 0.320 2991000 0.849 0.000 0.849 0.000 fastdtw.py:160(<genexpr>) #15 0.320 4348196 0.499 0.000 0.499 0.000 {built-in method builtins.abs} #15 0.320 14 0.001 0.000 0.373 0.027 __init__.py:1(<module>) #15 0.320 4949989 0.372 0.000 0.372 0.000 {method 'append' of 'list' objects} #15 0.320 798400 0.290 0.000 0.290 0.000 fastdtw.py:138(<lambda>) #15 0.320 1800 0.003 0.000 0.191 0.000 fastdtw.py:153(__reduce_by_half) #15 0.320 1800 0.188 0.000 0.188 0.000 fastdtw.py:154(<listcomp>) ... minor costs follow ... The same tests on the host: $ python -c 'import numpy ; print(numpy.__version__)' 1.22.4 $ python -c 'import numpy ; numpy.show_config()' openblas64__info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] blas_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] openblas64__lapack_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL $ python -m cProfile -s cumtime fdtw.py > log.txt $ cat log.txt | head -500 0.1275956630706787 90773 function calls (88792 primitive calls) in 0.286 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 14 0.001 0.000 0.336 0.024 __init__.py:1(<module>) 111/1 0.000 0.000 0.286 0.286 {built-in method builtins.exec} 1 0.006 0.006 0.286 0.286 fdtw.py:1(<module>) 151/2 0.001 0.000 0.159 0.079 <frozen importlib._bootstrap>:1022(_find_and_load) 151/2 0.001 
0.000 0.159 0.079 <frozen importlib._bootstrap>:987(_find_and_load_unlocked) 139/2 0.001 0.000 0.158 0.079 <frozen importlib._bootstrap>:664(_load_unlocked) 110/2 0.000 0.000 0.158 0.079 <frozen importlib._bootstrap_external>:877(exec_module) 217/2 0.000 0.000 0.158 0.079 <frozen importlib._bootstrap>:233(_call_with_frames_removed) 180/16 0.001 0.000 0.154 0.010 <frozen importlib._bootstrap>:1053(_handle_fromlist) 362/8 0.001 0.000 0.153 0.019 {built-in method builtins.__import__} 100 0.117 0.001 0.122 0.001 {fastdtw._fastdtw.fastdtw} 110 0.001 0.000 0.041 0.000 <frozen importlib._bootstrap_external>:950(get_code) 1 0.000 0.000 0.029 0.029 multiarray.py:1(<module>) 1 0.000 0.000 0.028 0.028 numeric.py:1(<module>) 1 0.000 0.000 0.028 0.028 overrides.py:1(<module>) 316 0.002 0.000 0.022 0.000 overrides.py:170(decorator) 110 0.000 0.000 0.021 0.000 <frozen importlib._bootstrap_external>:670(_compile_bytecode) 110 0.020 0.000 0.020 0.000 {built-in method marshal.loads} 139/136 0.000 0.000 0.019 0.000 <frozen importlib._bootstrap>:564(module_from_spec) 148 0.001 0.000 0.018 0.000 <frozen importlib._bootstrap>:921(_find_spec) 2 0.000 0.000 0.017 0.008 shape_base.py:1(<module>) 286 0.001 0.000 0.017 0.000 overrides.py:88(verify_matching_signatures) 1 0.000 0.000 0.016 0.016 py3k.py:1(<module>) 110 0.010 0.000 0.016 0.000 <frozen importlib._bootstrap_external>:1070(get_data) 136 0.000 0.000 0.015 0.000 <frozen importlib._bootstrap_external>:1431(find_spec) 136 0.001 0.000 0.015 0.000 <frozen importlib._bootstrap_external>:1399(_get_spec) 1 0.000 0.000 0.015 0.015 fromnumeric.py:1(<module>) 622 0.001 0.000 0.015 0.000 _inspect.py:96(getargspec) 17 0.000 0.000 0.014 0.001 <frozen importlib._bootstrap_external>:1174(create_module) 17 0.011 0.001 0.014 0.001 {built-in method _imp.create_dynamic} 289 0.003 0.000 0.013 0.000 <frozen importlib._bootstrap_external>:1536(find_spec) 151 0.000 0.000 0.012 0.000 <frozen importlib._bootstrap>:169(__enter__) 454 0.001 0.000 0.012 0.000 <frozen importlib._bootstrap>:179(_get_module_lock) 622 0.011 0.000 0.011 0.000 _inspect.py:26(isfunction) 150 0.010 0.000 0.010 0.000 <frozen importlib._bootstrap>:71(__init__) 1 0.000 0.000 0.010 0.010 _add_newdocs_scalars.py:1(<module>) 1 0.000 0.000 0.010 0.010 _pickle.py:1(<module>) 150 0.000 0.000 0.009 0.000 re.py:288(_compile) 1 0.000 0.000 0.009 0.009 platform.py:1(<module>) 17/12 0.000 0.000 0.009 0.001 <frozen importlib._bootstrap_external>:1182(exec_module) 17/12 0.003 0.000 0.009 0.001 {built-in method _imp.exec_dynamic} 1 0.000 0.000 0.008 0.008 pathlib.py:1(<module>) 28 0.000 0.000 0.008 0.000 sre_compile.py:783(compile) 25 0.000 0.000 0.008 0.000 re.py:249(compile) 219/218 0.004 0.000 0.007 0.000 {built-in method builtins.__build_class__} 1 0.000 0.000 0.006 0.006 index_tricks.py:1(<module>) 1 0.000 0.000 0.005 0.005 _add_newdocs.py:1(<module>) 313 0.001 0.000 0.005 0.000 function_base.py:475(add_newdoc) 28 0.000 0.000 0.005 0.000 sre_parse.py:944(parse) 1501 0.002 0.000 0.004 0.000 <frozen importlib._bootstrap_external>:126(_path_join) 1 0.000 0.000 0.004 0.004 secrets.py:1(<module>) 139 0.001 0.000 0.004 0.000 <frozen importlib._bootstrap>:492(_init_module_attrs) 77/28 0.000 0.000 0.004 0.000 sre_parse.py:436(_parse_sub) 1 0.000 0.000 0.004 0.004 numerictypes.py:1(<module>) 82/30 0.002 0.000 0.004 0.000 sre_parse.py:494(_parse) 1 0.000 0.000 0.004 0.004 pickle.py:1(<module>) 1 0.000 0.000 0.004 0.004 subprocess.py:1(<module>) 596 0.000 0.000 0.004 0.000 <frozen importlib._bootstrap_external>:140(_path_stat) 2000 
0.002 0.000 0.004 0.000 numerictypes.py:356(issubdtype) 1 0.000 0.000 0.003 0.003 ntpath.py:1(<module>) 596 0.003 0.000 0.003 0.000 {built-in method posix.stat} 28 0.000 0.000 0.003 0.000 sre_compile.py:622(_code) 1 0.000 0.000 0.003 0.003 version.py:1(<module>) 326 0.002 0.000 0.003 0.000 functools.py:35(update_wrapper) 220 0.001 0.000 0.003 0.000 <frozen importlib._bootstrap_external>:380(cache_from_source) 1 0.000 0.000 0.003 0.003 defmatrix.py:1(<module>) 1 0.000 0.000 0.003 0.003 defchararray.py:1(<module>) 303 0.001 0.000 0.003 0.000 <frozen importlib._bootstrap>:216(_lock_unlock_module) 2 0.000 0.000 0.003 0.001 _version.py:1(<module>) 1 0.000 0.000 0.003 0.003 hmac.py:1(<module>) ... minor costs follow
FastDTW contains both a fast path implemented in Cython, and a slow path implemented in Python. It tries the fast path first, and if it gets an ImportError, it falls back to the slow path. You are probably hitting the fast path in the host, and the slow path in the container. Some evidence for this idea: The functions called in both are completely different. In the fast one, about a quarter of the functions in the trace are NumPy related. In the slow one, NumPy doesn't appear at all. builtins.min() appears in the slow profile. While both the fast path and slow path use min(), Cython compiles this into C code which makes no use of the Python min(). Therefore, the slower profile must be using the slow path in FastDTW. Source code where this feature is implemented: See here for the code which implements the fallback. See here for the fast path. See here for the slow path. You can use the following code to check this: import inspect import fastdtw fastdtw.dtw inspect.getfile(fastdtw.dtw) Output, slow path: Python 3.10.10 (main, Jun 25 2023, 11:16:46) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import inspect >>> import fastdtw >>> fastdtw.dtw <function dtw at 0x10f9441f0> >>> inspect.getfile(fastdtw.dtw) '/Users/nick/.pyenv/versions/3.10.10/lib/python3.10/site-packages/fastdtw/fastdtw.py' Output, fast path: Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import inspect >>> import fastdtw >>> fastdtw.dtw <built-in function dtw> >>> inspect.getfile(fastdtw.dtw) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.10/inspect.py", line 797, in getfile raise TypeError('module, class, method, function, traceback, frame, or ' TypeError: module, class, method, function, traceback, frame, or code object was expected, got builtin_function_or_method Timing of slow path: 16.39020 seconds Timing of fast path: 0.09339 seconds (Note: slow path and fast path benchmark are run on different hardware.)
3
2
78,605,817
2024-6-11
https://stackoverflow.com/questions/78605817/how-to-replace-an-individual-level-in-a-multi-level-column-index-in-pandas
Consider the following multi-level column index dataframe: import numpy as np import pandas as pd arrays = [ ["A", "A", "B", "B"], ["one", "two", "one", "two"], ["1", "2", "1", "pd.NA"], ] idx = pd.MultiIndex.from_arrays(arrays, names=["level_0", "level_1", "level_2"]) data = np.random.randn(3, 4) df = pd.DataFrame(data, columns=idx) print(df) level_0 A B level_1 one two one two level_2 1 2 1 pd.NA 0 -1.249285 0.314225 0.011139 0.675274 1 -0.654808 -0.492350 0.596338 -0.087334 2 0.113570 0.566687 -0.361334 0.085368 level_2 holds values of type object (str really). df.columns.get_level_values(2) Index(['1', '2', '1', 'pd.NA'], dtype='object', name='level_2') I need to parse it to the correct data type and change this particular column level. new_level_2 = [ pd.NA if x == "pd.NA" else int(x) for x in df.columns.get_level_values(2) ] I am looking for a pythonic way to replace the old level_2 with new_level_2.
You could convert the MultiIndex.to_frame then back to MultiIndex.from_frame, change the values with replace and as_type: df.columns = pd.MultiIndex.from_frame(df.columns.to_frame() .replace({'level_2': {'pd.NA': pd.NA}}) .astype({'level_2': 'Int64'})) Output: level_0 A B level_1 one two one two level_2 1 2 3 <NA> 0 0.144044 1.454274 0.761038 0.121675 1 0.443863 0.333674 1.494079 -0.205158 2 0.313068 -0.854096 -2.552990 0.653619
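If you have already built new_level_2 as in the question, another option is to rebuild the columns from the existing level values plus the parsed one - a sketch assuming the df from the question:
import pandas as pd

# assumes df from the question
new_level_2 = [
    pd.NA if x == "pd.NA" else int(x)
    for x in df.columns.get_level_values(2)
]

df.columns = pd.MultiIndex.from_arrays(
    [
        df.columns.get_level_values(0),
        df.columns.get_level_values(1),
        pd.array(new_level_2, dtype="Int64"),  # nullable integer level
    ],
    names=df.columns.names,
)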
4
2
78,601,319
2024-6-10
https://stackoverflow.com/questions/78601319/why-is-my-basic-gekko-ode-solver-much-slower-than-scipy
In this minimal example I want to solve the basic integrator ODE dT/dt = k[F(t) - T(t)] where F(t) is a forcing vector, which is selected to be a square wave. I have implemented two procedures to solve the ODE: Using Scipy's solve_ivp and using Gekko. The former takes 6 milliseconds to run, the latter takes 8 seconds to run. Am I doing something wrong with Gekko? Are there some additional parameters that I can tune to improve performance? I have omitted the testing part of the code, but I have compared both solutions to the analytic solution, and they are both accurate within a 3-4 significant digits. import numpy as np from scipy.integrate import solve_ivp from gekko import GEKKO def step_problem_forcing(time_arr, param): T_low = param['T_low'] T_high = param['T_high'] t_break_l = param['t_break_l'] t_break_r = param['t_break_r'] forcing = np.full(len(time_arr), T_low) idxs_pulse = np.logical_and(time_arr >= t_break_l, time_arr <= t_break_r) forcing[idxs_pulse] = T_high return forcing def step_problem_generate(tmin, tmax): tAvg = (tmin+tmax) / 2 tDur = tmax - tmin return { 'k' : 10 ** np.random.uniform(-3, 3), 'T_low' : np.random.uniform(20, 50), 'T_high' : np.random.uniform(20, 50), 'T0' : np.random.uniform(20, 50), 't_break_l' : np.random.uniform(tmin, tAvg - 0.1*tDur), 't_break_r' : np.random.uniform(tAvg + 0.1*tDur, tmax) } def ode_scipy_step_solver(time_arr, param: dict): f_this = lambda t: np.interp(t, time_arr, param['forcing']) def _dxdt(t, x, param: tuple): # if (t < tmin) or (t > tmax): # raise ValueError(f"{t} is out of bounds [{tmin}, {tmax}]") k, = param return k*(f_this(t) - x) k = param['k'] sol = solve_ivp(_dxdt, (time_arr[0], time_arr[-1]), [param['T0'], ], t_eval=time_arr, args=([k, ],), rtol=1.0E-6, method='RK45') return sol['y'].T def ode_gekko_step_solver(time_arr, param: dict) -> np.ndarray: m = GEKKO() # create GEKKO model # Define variables m.time = time_arr T = m.Var(value=param['T0']) F = m.Param(value=param['forcing']) k = m.Const(value=param['k']) # t = m.Param(value=m.time) # equations m.Equation(T.dt() == k * (F - T)) m.options.IMODE = 7 # dynamic simulation m.solve(disp=False) # solve locally (remote=False) return T.value tmin = 0 tmax = 10 time_arr = np.linspace(tmin, tmax, 1000) param = step_problem_generate(tmin, tmax) param['forcing'] = step_problem_forcing(time_arr, param) # Takes 6.8ms to run rezScipy = ode_scipy_step_solver(time_arr, param) # Takes 8.5s to run rezGekko = ode_gekko_step_solver(time_arr, param)
Switch to remote=False to solve locally and avoid the overhead due to network communication to the public server. Switching to IMODE=4 solves all 1000 time steps simultaneously instead of sequentially and can improve solve time. Here are 10 trials with the 1 original and 9 modified versions. I'm on Ubuntu Linux. Timing will likely be different based on the computer. # IMODE=7, Original Script ODE Scipy Time: 0.014239072799682617 Gekko Time: 8.92684006690979 # IMODE=4, Remote=False ODE Scipy Time: 0.2553427219390869 Gekko Time: 0.12177252769470215 ODE Scipy Time: 0.0017511844635009766 Gekko Time: 0.11760997772216797 ODE Scipy Time: 0.0032286643981933594 Gekko Time: 0.11528849601745605 ODE Scipy Time: 0.06327152252197266 Gekko Time: 0.12140154838562012 ODE Scipy Time: 0.01037144660949707 Gekko Time: 0.12077593803405762 ODE Scipy Time: 0.0036330223083496094 Gekko Time: 0.11592268943786621 ODE Scipy Time: 0.1100163459777832 Gekko Time: 0.11649894714355469 ODE Scipy Time: 0.0032815933227539062 Gekko Time: 0.1154778003692627 ODE Scipy Time: 0.0015087127685546875 Gekko Time: 0.1158149242401123 Here's the modified script: import numpy as np from scipy.integrate import solve_ivp from gekko import GEKKO import time def step_problem_forcing(time_arr, param): T_low = param['T_low'] T_high = param['T_high'] t_break_l = param['t_break_l'] t_break_r = param['t_break_r'] forcing = np.full(len(time_arr), T_low) idxs_pulse = np.logical_and(time_arr >= t_break_l, time_arr <= t_break_r) forcing[idxs_pulse] = T_high return forcing def step_problem_generate(tmin, tmax): tAvg = (tmin+tmax) / 2 tDur = tmax - tmin return { 'k' : 10 ** np.random.uniform(-3, 3), 'T_low' : np.random.uniform(20, 50), 'T_high' : np.random.uniform(20, 50), 'T0' : np.random.uniform(20, 50), 't_break_l' : np.random.uniform(tmin, tAvg - 0.1*tDur), 't_break_r' : np.random.uniform(tAvg + 0.1*tDur, tmax) } def ode_scipy_step_solver(time_arr, param: dict): f_this = lambda t: np.interp(t, time_arr, param['forcing']) def _dxdt(t, x, param: tuple): # if (t < tmin) or (t > tmax): # raise ValueError(f"{t} is out of bounds [{tmin}, {tmax}]") k, = param return k*(f_this(t) - x) k = param['k'] sol = solve_ivp(_dxdt, (time_arr[0], time_arr[-1]), [param['T0'], ], t_eval=time_arr, args=([k, ],), rtol=1.0E-6, method='RK45') return sol['y'].T def ode_gekko_step_solver(time_arr, param: dict) -> np.ndarray: m = GEKKO(remote=False) # create GEKKO model # Define variables m.time = time_arr T = m.Var(value=param['T0']) F = m.Param(value=param['forcing']) k = m.Const(value=param['k']) # t = m.Param(value=m.time) # equations m.Equation(T.dt() == k * (F - T)) m.options.IMODE = 4 # dynamic simulation m.solve(disp=False) # solve locally (remote=False) return T.value tmin = 0 tmax = 10 time_arr = np.linspace(tmin, tmax, 1000) param = step_problem_generate(tmin, tmax) param['forcing'] = step_problem_forcing(time_arr, param) st = time.time() rezScipy = ode_scipy_step_solver(time_arr, param) print(f'ODE Scipy Time: {time.time()-st}') st = time.time() rezGekko = ode_gekko_step_solver(time_arr, param) print(f'Gekko Time: {time.time()-st}') Scipy solve_ivp is a specialized ODE solver while Gekko solves higher index DAEs, mixed integer optimization, regression, and machine learning models. It uses a simultaneous method and only has basic adaptive step lengths to control error. I recommend solve_ivp if it is an ODE problem.
2
1
78,604,895
2024-6-10
https://stackoverflow.com/questions/78604895/python-typeerror-not-supported-between-instances-of-str-and-int
import random elements = { "normal": {"strong_against": ["None"], "weak_against": ["None"]}, "fire": {"strong_against": ["earth", "ice"], "weak_against": ["water", "ice"]}, "water": {"strong_against": ["fire", "poison"], "weak_against": ["earth", "electric"]}, "earth": {"strong_against": ["water", "poison"], "weak_against": ["fire", "nature"]}, "holy": {"strong_against": ["dark"], "weak_against": ["dark"]}, "dark": {"strong_against": ["holy"], "weak_against": ["holy"]}, "ice": {"strong_against": ["fire"], "weak_against": ["fire"]}, "electric": {"strong_against": ["water"], "weak_against": ["earth"]}, "poison": {"strong_against": ["nature"], "weak_against": ["earth", "water"]}, "nature": {"strong_against": ["earth"], "weak_against": ["poison", "fire"]} } class armor: def __init__(self, name, defence, element=None): self.name = name self.defence = defence self.element = element class weapon: def __init__(self, name, damage, element=None): self.name = name self.damage = damage self.element = element class restore_spell: def __init__(self, name, restore_power,cost): self.name = name self.restore_power = restore_power self.cost = cost class attack_spell: def __init__(self, name, attack_spell_power, cost, element=None): self.name = name self.attack_spell_power = attack_spell_power self.cost = cost self.element = element class status_effects: pass class Person: def __init__(self,name,armor=None,weapon=None,attack_spell=None,restore_spell=None,debuff=None,buff=None,max_health = 100,health=None,max_mp=48,mp=None,strength=18,defe=12,dex=10,magic=20,magic_defe=12,luck=16,evasion=2,exp=0,lvl=1,skill_points=0,attack_accuracy=100,max_lvl=100, base_d=0, base_d_m=0): self.name = name self.max_health = max_health self.health = health if health is not None else max_health self.weapon = weapon self.attack_spell = attack_spell self.restore_spell = restore_spell self.buff = buff self.debuff = debuff self.max_mp = max_mp self.mp = mp if mp is not None else max_mp self.strength = strength self.defe = defe self.armor = armor self.dex = dex self.magic = magic self.magic_defe = magic_defe self.luck = luck self.evasion = evasion self.exp = exp self.lvl = lvl self.skill_points = skill_points self.attack_accuracy = attack_accuracy self.evasion = evasion self.max_lvl = max_lvl self.base_d_m = base_d_m def see_stats(self): print("max health: ", self.max_health,"/ health: ", self.health) print("max mp: ", self.max_mp,"/ mp: ", self.mp) print("strength: ", self.strength) print("defense: ", self.defe) print("magic defense: ", self.magic_defe) print("dexterity: ", self.dex ) print("luck: ", self.luck) print("exp: ", self.exp) print("lvl: ", self.lvl) def see_gear(self): print("weapon: ", self.weapon.name if self.weapon else "None") print("armor: ", self.armor.name if self.armor else "None") def restore_health(self): if self.health < self.max_health and self.restore_spell and self.mp >= self.restore_spell.cost: self.health = min(self.health + self.restore_spell.restore_power, self.max_health) self.mp -= self.restore_spell.cost print(f"{self.name} used {self.restore_spell.name} and restored health to {self.health}") else: print(f"{self.name} cannot use the restore spell.") def calculate_base_d(self): return (self.strength + (self.weapon.damage if self.weapon else 0)) * random.uniform(0.9375, 1.0625) def calculate_base_d_m(self): return (self.magic + (self.attack_spell.attack_spell_power if self.attack_spell else 0)) * random.uniform(0.9375, 1.0625) def attack(self): attack_type = input("What kind of attack do you want to do 
(magic/physical): ").lower() if attack_type == "magic" or "m": if self.attack_spell and self.mp >= self.attack_spell.cost: self.mp -= self.attack_spell.cost return self.calculate_base_d_m() else: print("Not enough MP to cast a spell or no spell equipped.") return 0 elif attack_type == "physical" or "p": return self.calculate_base_d() else: print("Please enter a valid attack type.") return 0 def exp_to_next_lvl(self): if 1 <= self.lvl <= 9: return self.lvl * 100 elif 10 <= self.lvl <= 15: return self.lvl * 200 else: return self.lvl * 300 def gain_exp(self, exp): self.exp += exp while self.exp >= self.exp_to_next_lvl(): self.level_up() def level_up(self): self.exp -= self.exp_to_next_lvl self.lvl += 1 self.skill_points += 5 if 0 <= self.lvl <= 10: self.max_health += 75 self.mp += 10 elif 11 <= self.lvl <= 50: self.max_health += 150 self.mp += 15 elif 51 <= self.lvl <= 99: self.max_health +=250 self.mp += 25 else: print("you are at max lvl") print(f"Congratulations! You've reached level {self.lvl}") def s_skill_p(self): if self.skill_points >0: x = input("which skill you want to incrase:\nstrength\ndefense\nmagic\nmagic_defense\ndexterity\nluck ").lower() if x == 'strength': self.strength += 1 self.skill_points -= 1 elif x == 'defense': self.defe += 1 self.skill_points -= 1 elif x == 'magic': self.magic += 1 self.skill_points -= 1 elif x == 'magic_defense': self.magic_defe += 1 self.skill_points -= 1 elif x == 'dexterity': self.dex += 1 self.skill_points -= 1 elif x == 'luck': self.luck += 1 self.skill_points -= 1 else: print("Please enter a valid skill") else: print("you dont have skill points ") def is_dead(self): return self.health <= 0 class Barbarian(Person): pass class Wizard(Person): def __init__(self, name, max_health=None, health=None, max_mp=48, mp=None, magic=20, magic_defe=12, attack_spells=None): super().__init__(name, max_health, health, max_mp, mp, magic=magic, magic_defe=magic_defe) self.attack_spells = attack_spells if attack_spells is not None else [] class Monster(Person): def __init__(self, name, max_health=100, health=None, max_mp=48, mp=None, magic=20, magic_defe=12, element=None, attack_spells=None): super().__init__(name, max_health, health, max_mp, mp, magic, magic_defe, element, attack_spells) self.element = element self.attack_spells = attack_spells if attack_spells is not None else [] def attack_m(self): if self.health > self.max_health * 0.5: return self.calculate_base_d() else: return self.calculate_base_d_m() def battle(player, monster): while not player.is_dead() and not monster.is_dead(): print(f"{player.name}'s turn:") player_turn = True while player_turn: action = input("What do you want to do? 
(attack/restore/stats/gear): ").lower() if action == "attack" or action == "a": damage = player.attack() if player.weapon and player.weapon.element or player.attack_spell and player.attack_spell.element: attack_element = player.weapon.element if player.weapon else player.attack_spell.element if monster.element and attack_element in elements[monster.element]['weak_against']: damage *= 1.5 elif monster.element and attack_element in elements[monster.element]['strong_against']: damage *= 0.5 monster.health -= damage print(f"{player.name} attacks {monster.name} for {damage:.2f} damage!") if monster.is_dead(): player.gain_exp(monster.exp) break player_turn = False elif action == "stats" or action == "s": player.see_stats() elif action == "gear" or action == "g": player.see_gear() elif action == "restore" or action == "r": player.restore_health() player_turn = False else: print("Please enter a valid action.") if monster.is_dead(): break print(f"{monster.name}'s turn:") m_damage = monster.attack_m() player.health -= m_damage print(f"{monster.name} attacks {player.name} for {m_damage:.2f} damage!") if player.is_dead(): break cure = restore_spell("cure", 50, 10) fireball = attack_spell("fireball", 30, 15, "fire") thunder = attack_spell("thunder", 25, 12, "electric") player_weapon = weapon("Sword", 10, "normal") player = Person(name="Hero", weapon=player_weapon, attack_spell=fireball, restore_spell=cure) monster = Monster(name="Goblin", element="fire", max_health=250, health=250, max_mp=20, mp=20, magic=10, attack_spells=None) monster.weapon = weapon("Claws", 5, "fire") battle(player, monster) I am trying to make something like a rpg game in python for understanding classes well. out of nowhere is_dead function start to cause this error('<=' not supported between instances of 'str' and 'int'). How can i improve my code and solve the issue can you guys please help, i am new to python. How can i solve this issue
The error message says '<=' not supported between instances of 'str' and 'int'. This means you used the <= operator trying to compare a string and an integer. Your function is_dead() has only one line, return self.health <= 0. We know that 0 is an integer. Therefore, at some point you are assigning a string to a Person's health attribute. As Paul M already mentioned, the issue is in the call to super() in your Monster class. super().__init__(name, max_health, health, max_mp, mp, magic, magic_defe, element, attack_spells) You are missing the keyword specifiers in the function call. It should look like this: super().__init__(name, max_health=max_health, health=health, max_mp=max_mp, mp=mp, magic=magic, magic_defe=magic_defe) Python doesn't check the names of the variables in the caller scope and match them to the function signature, you have to provide the names. By omitting these, the parent class constructor is receiving the arguments in order. Meaning when Monster is initialized it assigns armor=max_health, weapon=health, attack_spell=max_mp, etc. Also you are passing element and attack_spells to super()--but a Person does not have those attributes. Same issue exists for other subclasses of Person. Another thing--for your elements dict, I would strongly recommend using strong_against: [] rather than strong_against: ["None"]
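Putting those points together, a corrected Monster initializer could look like this sketch (it assumes the Person class from the question; only the constructor is shown):
class Monster(Person):
    def __init__(self, name, max_health=100, health=None, max_mp=48, mp=None,
                 magic=20, magic_defe=12, element=None, attack_spells=None):
        # pass only the parameters Person knows about, and pass them by keyword
        super().__init__(name, max_health=max_health, health=health,
                         max_mp=max_mp, mp=mp, magic=magic, magic_defe=magic_defe)
        # element and attack_spells belong to Monster, not Person
        self.element = element
        self.attack_spells = attack_spells if attack_spells is not None else []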
2
2
78,601,579
2024-6-10
https://stackoverflow.com/questions/78601579/asyncio-create-task-executed-even-without-any-awaits-in-the-program
This is a followup question to What does asyncio.create_task() do? There, as well in other places, asyncio.create_task(c) is described as "immediately" running the coroutine c, when compared to simply calling the coroutine, which is then only executed when it is awaited. It makes sense if you interpret "immediately" as "without having to await it", but in fact a created task is not executed until we run some await (possibly for other coroutines) (in the original question, slow_coro started being executed only when we await fast_coro). However, if we do not run any await at all, the tasks are still executed (only one step, not to completion) at the end of the program: import asyncio async def counter_loop(x, n): for i in range(1, n + 1): print(f"Counter {x}: {i}") await asyncio.sleep(0.5) return f"Finished {x} in {n}" async def main(): slow_task = asyncio.create_task(counter_loop("Slow", 4)) fast_coro = asyncio.create_task(counter_loop("Fast", 2)) print("Created tasks") for _ in range(1000): pass print("main ended") asyncio.run(main()) print("program ended") the output is Created tasks main ended Counter Slow: 1 Counter Fast: 1 program ended I am curious: why are the two created tasks executed at all if there was no await being run anywhere?
Let's try to put it in these words: the event loop enters execution with one task to be performed, main(). When main() is complete, there are two other tasks ready to be processed - so before returning to main()'s caller, those are executed up to the next suspension point inside each one (an await, an async for, or an async with). On the next loop iteration, since the main task is over, the loop cancels the remaining tasks and shuts down. This happens because what signals the loop that the "main" task is over is a callback that loop.run_until_complete (called by asyncio.run) attaches to the main task, and which fires when that task is done: it is this callback that tells the loop to stop. But the callback itself is only executed on the next loop iteration, after the main coroutine has finished. And that loop iteration, even though it carries the mark that the loop should stop, will only actually shut down after running once over all pending tasks - which means advancing each created task to its next await point. This is done by throwing asyncio.CancelledError into the task's code at the await statement, so if you have a try/except/finally clause encompassing the await, you can still clean up your task before the loop ends. None of this is documented in plain English - it is simply the current behavior of the asyncio implementation; one has to follow the code in asyncio/base_events.py to understand it.
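As an illustration of that last point (my sketch, reusing the counter_loop from the question), wrapping the await in try/except/finally makes the cancellation visible:
import asyncio

async def counter_loop(x, n):
    try:
        for i in range(1, n + 1):
            print(f"Counter {x}: {i}")
            await asyncio.sleep(0.5)
        return f"Finished {x} in {n}"
    except asyncio.CancelledError:
        # delivered at the await point when the loop shuts down
        print(f"Counter {x} was cancelled before finishing")
        raise
    finally:
        print(f"Counter {x} cleanup ran")

async def main():
    # keep references so the tasks are not garbage collected
    slow_task = asyncio.create_task(counter_loop("Slow", 4))
    fast_task = asyncio.create_task(counter_loop("Fast", 2))
    print("main ended")

asyncio.run(main())
print("program ended")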
3
3
78,604,299
2024-6-10
https://stackoverflow.com/questions/78604299/how-to-change-a-pydantic-list-of-objects-into-list-of-strings
I'm using SQLModel for an API. I want the object to be like this: class Post(SQLModel): id: int categories: list[str] # category.name instead of nested objects: class Post(SQLModel): id: int categories: list[Category] Do I have to change the serialization function or is there a way to do this automatically?
As far as I understand, Post is your db model. I would highly recommend separating your db model (which should reflect the database structure) from a DTO (which is used to move data comfortably between application components):
class PostDTO(pydantic.BaseModel):
    id: int
    categories: list[str]

    @classmethod
    def from_db_model(cls, model: Post) -> "PostDTO":
        """Build Post DTO based on db model."""
        return cls(
            id=model.id,
            categories=[category.name for category in model.categories]
        )

post = Post(...)
post_dto = PostDTO.from_db_model(model=post)
Don't be afraid to create an extra object if it makes your code more scalable, readable and logical!
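If you would rather have the conversion happen automatically during validation instead of through a classmethod, here is a minimal pydantic-v2 style sketch (the Category and Post classes below are simple stand-ins for the models in the question, not the real SQLModel classes):
from dataclasses import dataclass
import pydantic

@dataclass
class Category:            # stand-in for the ORM Category model
    name: str

@dataclass
class Post:                # stand-in for the SQLModel Post
    id: int
    categories: list[Category]

class PostDTO(pydantic.BaseModel):
    id: int
    categories: list[str]

    @pydantic.field_validator("categories", mode="before")
    @classmethod
    def _category_names(cls, value):
        # accept either plain strings or objects exposing a .name attribute
        return [c if isinstance(c, str) else c.name for c in value]

post = Post(id=1, categories=[Category("news"), Category("python")])
print(PostDTO.model_validate(post, from_attributes=True))
# id=1 categories=['news', 'python']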
2
3
78,603,690
2024-6-10
https://stackoverflow.com/questions/78603690/information-schema-invalid-identifier-from-snowflake-when-trying-to-access-query
I am currently working with Snowflake to view the query history. However, when I attempt to execute the same command in Python, following its syntax rules, I encounter an error stating β€˜INVALID IDENTIFIER’ for β€˜information_schema’. select * from table(information_schema.query_history())
Usually these things can be resolved by consulting the documentation and reviewing the query history/profile from within Snowflake. In this case, try the following query, which qualifies the table function with the snowflake database instead of targeting information_schema directly. Also make sure your user has the required access.
select * from table(snowflake.information_schema.query_history()) order by start_time
If the issue persists, please add a comment showing what the query looks like from Snowflake's point of view, since what you run in Python might not be what is actually executed, depending on your library.
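If you are issuing the query from Python with the snowflake-connector-python package, a sketch along these lines (the connection parameters are placeholders, not from the original post) shows the full round trip:
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>",
)
try:
    cur = conn.cursor()
    cur.execute(
        "select * from table(snowflake.information_schema.query_history()) "
        "order by start_time desc limit 10"
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()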
2
2
78,600,352
2024-6-10
https://stackoverflow.com/questions/78600352/cannot-read-parquet-file-of-multi-level-complex-index-data-frame
I can create a sample data frame with the following code and save it as parquet. When I try to read it throws "TypeError: unhashable type: 'numpy.ndarray'". Is it possible to save an index comprised of tuples or do I have to reset the index before saving to parquet? Thanks import pandas as pd # Creating sample data data = { 'A': [1, 2, 3], 'B': [6, 7, 8], 'C': [11, 12, 13], } # Creating multi-index index = pd.MultiIndex.from_tuples( [ ((10, 30), (0.75, 1.0)), ((10, 30), (0.75, 1.25)), ((10, 30), (1.0, 1.25)) ], names=['level_0', 'level_1'] ) # Creating DataFrame with multi-index df = pd.DataFrame(data, index=index) print(df) df.to_parquet(path="test.parquet") pd.read_parquet("test.parquet")
You must specify the levels: import pandas as pd data = { 'A': [1, 2, 3], 'B': [6, 7, 8], 'C': [11, 12, 13], } index = pd.MultiIndex.from_tuples( [ (str((10, 30)), str((0.75, 1.0))), (str((10, 30)), str((0.75, 1.25))), (str((10, 30)), str((1.0, 1.25))) ], names=['level_0', 'level_1'] ) df = pd.DataFrame(data, index=index) print("Original DataFrame:") print(df) df_reset = df.reset_index() df_reset.to_parquet(path="test.parquet") df_read = pd.read_parquet("test.parquet") df_read.set_index(['level_0', 'level_1'], inplace=True) print("DataFrame read from Parquet:") print(df_read) which returns Original DataFrame: A B C level_0 level_1 (10, 30) (0.75, 1.0) 1 6 11 (0.75, 1.25) 2 7 12 (1.0, 1.25) 3 8 13 DataFrame read from Parquet: A B C level_0 level_1 (10, 30) (0.75, 1.0) 1 6 11 (0.75, 1.25) 2 7 12 (1.0, 1.25) 3 8 13
2
0
78,600,172
2024-6-10
https://stackoverflow.com/questions/78600172/skip-downloading-files-if-the-docker-build-process-is-interrupted
I'm new to Docker and want to build a Dockerfile with the following snippet: RUN \ cur=`pwd` && \ wget http://www.coppeliarobotics.com/files/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz && \ tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz && \ export COPPELIASIM_ROOT="$cur/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04" && \ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT:$COPPELIASIM_ROOT/platforms && \ export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT && \ git clone https://github.com/stepjam/PyRep.git && \ cd PyRep && \ pip3 install -r requirements.txt && \ pip3 install setuptools && \ pip3 install . Because of the network issue, the build process is unstable and interrupted on the git clone... line. However, when I try to build again, CoppeliaSim_Edu_V4_1_0_Ubuntu20_04 which has already been saved is downloaded again, which consumes a lot of time. Is there any method to skip the download process if the .tar file has already been saved during the last build process?
Put the download of the .tar file in a separate RUN command. RUN wget http://www.coppeliarobotics.com/files/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz && \ tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz RUN # rest of your code Each instruction in the Dockerfile translates to a layer in your final image. You can think of image layers as a stack, with each layer adding more content on top of the layers that came before it: whenever a layer changes, that layer will need to be rebuilt, and all other layers that come after it are also affected. source: docker docs What it means is: after the first build, Docker stores each layer in its cache; in future builds, if the Dockerfile content doesn't change, Docker uses the cached layers instead of rebuilding them. To your question: "Is there any method to skip the download process if the .tar file has already been saved during the last build process?" Put the .tar download in a separate RUN command that precedes your other commands to take advantage of Docker caching.
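Applied to the Dockerfile from the question, the split could look roughly like this (a sketch only; COPPELIASIM_ROOT assumes the build runs in /, so adjust it to your WORKDIR):

# layer 1: download and unpack CoppeliaSim - cached once it succeeds
RUN wget http://www.coppeliarobotics.com/files/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz && \
    tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz

# exports do not survive across RUN instructions, so use ENV instead
ENV COPPELIASIM_ROOT=/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT:$COPPELIASIM_ROOT/platforms
ENV QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT

# layer 2: the flaky network step is isolated, so a failure here
# does not invalidate the cached download above
RUN git clone https://github.com/stepjam/PyRep.git && \
    cd PyRep && \
    pip3 install -r requirements.txt && \
    pip3 install setuptools && \
    pip3 install .

This keeps the expensive download in its own cached layer, while the unstable git clone / pip install step can be retried cheaply.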
2
0
78,587,497
2024-6-6
https://stackoverflow.com/questions/78587497/bezier-surface-matrix-form
I have a problem with constructing a Bezier surface following an example from a book, using mathematical formulas in matrix form. Especially when multiplying matrices. I'm trying to use this formula I have a matrix of control points B = np.array([ [[-15, 0, 15], [-15, 5, 5], [-15, 5, -5], [-15, 0, -15]], [[-5, 5, 15], [-5, 5, 5], [-5, 5, -5], [-5, 5, -15]], [[5, 5, 15], [5, 5, 5], [5, 5, -5], [5, 5, -15]], [[15, 0, 15], [15, 5, 5], [15, 5, -5], [15, 0, -15]] ]) And we have to multiply it by matrices and get [N][B][N]^t And I tried to multiply the matrix by these two, but I get completely different values ​​for the final matrix, I understand that most likely the problem is in the code " B = np.array([ [[-15, 0, 15], [-5, 5, 15], [5, 5, 15], [15, 0, 15]], [[-15, 5, 5], [-5, 5, 5], [5, 5, 5], [15, 5, 5]], [[-15, 5, -5], [-5, 5, -5], [5, 5, -5], [15, 5, -5]], [[-15, 0, -15], [-5, 5, -15], [5, 5, -15], [15, 0, -15]] ]) N = np.array([[-1, 3, -3, 1], [3, -6, 3, 0], [-3, 3, 0, 0], [1, 0, 0, 0] ]) Nt = np.array([[-1, 3, -3, 1], [3, -6, 3, 0], [-3, 3, 0, 0], [1, 0, 0, 0]]) B_transformed = np.zeros_like(B) for i in range(B.shape[0]): for j in range(B.shape[1]): for k in range(3): B_transformed[i, j, k] = B[i, j, k] * N[j, k] * Nt[j, k] " [[[ -15 0 135] [ -45 180 135] [ 45 45 0] [ 15 0 0]] [[ -15 45 45] [ -45 180 45] [ 45 45 0] [ 15 0 0]] [[ -15 45 -45] [ -45 180 -45] [ 45 45 0] [ 15 0 0]] [[ -15 0 -135] [ -45 180 -135] [ 45 45 0] [ 15 0 0]]] Correct answer from book is NBNt = np.array([ [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, -45, 0], [0, 45, 0], [0, -15, 0]], [[0, 0, 0], [0, 45, 0], [0, -45, 0], [30, 15, 0]], [[0, 0, 0], [0, -15, 0], [0, 15, -30], [-15, 0, 15]] ]) Next, matrix multiplication will also be performed, so it’s important for me to understand what I’m doing wrong Q(0.5, 0.5) = [0.125 0.25 0.5 1. ] * [N][B][N]^t * [[0.125] [0.25 ] [0.5 ] [1. ]] This is the calculation of a point on a surface at w = 0.5 and u = 0.5 And the answer should be [0, 4.6875, 0] I use Jupyter Notebook
Generally, Bezier surface are plotted this way (as the question is posted in matplotlib). import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from scipy.special import comb def bernstein_poly(i, n, t): return comb(n, i) * (t**i) * ((1 - t)**(n - i)) def bernstein_matrix(n, t): return np.array([bernstein_poly(i, n, t) for i in range(n + 1)]) P = np.array([ [[-15, 0, 15], [-15, 5, 5], [-15, 5, -5], [-15, 0, -15]], [[-5, 5, 15], [-5, 5, 5], [-5, 5, -5], [-5, 5, -15]], [[5, 5, 15], [5, 5, 5], [5, 5, -5], [5, 5, -15]], [[15, 0, 15], [15, 5, 5], [15, 5, -5], [15, 0, -15]] ]) n, m = P.shape[0] - 1, P.shape[1] - 1 u = np.linspace(0, 1, 50) v = np.linspace(0, 1, 50) U, V = np.meshgrid(u, v) surface_points = np.zeros((U.shape[0], U.shape[1], 3)) for i in range(U.shape[0]): for j in range(U.shape[1]): Bu = bernstein_matrix(n, U[i, j]) Bv = bernstein_matrix(m, V[i, j]) surface_points[i, j] = np.tensordot(np.tensordot(Bu, P, axes=(0, 0)), Bv, axes=(0, 0)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(surface_points[:,:,0], surface_points[:,:,1], surface_points[:,:,2], rstride=1, cstride=1, color='b', alpha=0.6, edgecolor='w') ax.scatter(P[:,:,0], P[:,:,1], P[:,:,2], color='r', s=50) plt.show() which return Now, for your particular problem, you can do this: import numpy as np B = np.array([ [[-15, 0, 15], [-15, 5, 5], [-15, 5, -5], [-15, 0, -15]], [[-5, 5, 15], [-5, 5, 5], [-5, 5, -5], [-5, 5, -15]], [[5, 5, 15], [5, 5, 5], [5, 5, -5], [5, 5, -15]], [[15, 0, 15], [15, 5, 5], [15, 5, -5], [15, 0, -15]] ]) N = np.array([[-1, 3, -3, 1], [3, -6, 3, 0], [-3, 3, 0, 0], [1, 0, 0, 0]]) Nt = N.T B_transformed = np.zeros((4, 4, 3)) for i in range(3): B_transformed[:, :, i] = N @ B[:, :, i] @ Nt print("Transformed control points matrix B_transformed:") print(B_transformed) u = 0.5 w = 0.5 U = np.array([u**3, u**2, u, 1]) W = np.array([w**3, w**2, w, 1]) Q = np.array([U @ B_transformed[:, :, i] @ W for i in range(3)]) print("Point on the BΓ©zier surface Q(0.5, 0.5):") print(Q) which gives you Transformed control points matrix B_transformed: [[[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] [[ 0. 0. 0.] [ 0. -45. 0.] [ 0. 45. 0.] [ 0. -15. 0.]] [[ 0. 0. 0.] [ 0. 45. 0.] [ 0. -45. 0.] [ 30. 15. 0.]] [[ 0. 0. 0.] [ 0. -15. 0.] [ 0. 15. -30.] [-15. 0. 15.]]] Point on the BΓ©zier surface Q(0.5, 0.5): [0. 4.6875 0. 
] and if you also want to plot it, you can adapt my top code to this: import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from scipy.special import comb def bernstein_poly(i, n, t): return comb(n, i) * (t**i) * ((1 - t)**(n - i)) def bernstein_matrix(n, t): return np.array([bernstein_poly(i, n, t) for i in range(n + 1)]) B = np.array([ [[-15, 0, 15], [-15, 5, 5], [-15, 5, -5], [-15, 0, -15]], [[-5, 5, 15], [-5, 5, 5], [-5, 5, -5], [-5, 5, -15]], [[5, 5, 15], [5, 5, 5], [5, 5, -5], [5, 5, -15]], [[15, 0, 15], [15, 5, 5], [15, 5, -5], [15, 0, -15]] ]) N = np.array([[-1, 3, -3, 1], [3, -6, 3, 0], [-3, 3, 0, 0], [1, 0, 0, 0]]) Nt = N.T B_transformed = np.zeros((4, 4, 3)) for i in range(3): B_transformed[:, :, i] = N @ B[:, :, i] @ Nt print("Transformed control points matrix B_transformed:") print(B_transformed) u = np.linspace(0, 1, 50) w = np.linspace(0, 1, 50) U, W = np.meshgrid(u, w) surface_points = np.zeros((U.shape[0], U.shape[1], 3)) for i in range(U.shape[0]): for j in range(U.shape[1]): U_vec = np.array([U[i, j]**3, U[i, j]**2, U[i, j], 1]) W_vec = np.array([W[i, j]**3, W[i, j]**2, W[i, j], 1]) surface_points[i, j] = np.array([U_vec @ B_transformed[:, :, k] @ W_vec for k in range(3)]) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(surface_points[:,:,0], surface_points[:,:,1], surface_points[:,:,2], rstride=1, cstride=1, color='b', alpha=0.6, edgecolor='w') ax.scatter(B[:,:,0], B[:,:,1], B[:,:,2], color='r', s=50) plt.show() giving you again
2
2
78,596,486
2024-6-8
https://stackoverflow.com/questions/78596486/mypy-displays-an-error-when-inheriting-from-str-and-adding-metaclass
Here's a simple example of code: class meta(type): pass class Test(str, metaclass = meta): pass When I run mypy on it (just from the CLI, without any flags or other additional setup) I see the following output: test.py:2: error: Metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases [misc] I know that a similar error appears at runtime when the metaclass of a derived class is not a subtype of all the parents' metaclasses. But type(str) is type, and my class meta is derived from type, so I cannot understand what's incorrect here. The provided example only fails with str as the parent class: mypy says everything is correct when inheriting from int or float. So, how do I get rid of this error message? Or is it just a mypy bug?
str is statically-typed as a subclass of typing.Protocol, whose metaclass is special-cased by mypy as typing._ProtocolMeta (typing.Protocol is actually an instance of typing._ProtocolMeta at runtime, but this isn't reflected in the typing stubs, so normally mypy wouldn't know about this). Therefore, to accurately reflect both static typing and runtime in your example, you can do this (mypy Playground, Pyright Playground): import typing as t if t.TYPE_CHECKING: from typing import _ProtocolMeta else: _ProtocolMeta = type(str) class meta(_ProtocolMeta): pass class Test(str, metaclass=meta): pass Since typing._ProtocolMeta has a bare minimum typing interface, this leaves you relatively free to implement meta without too much maintenance if type-checkers and/or the Python runtime decides to change the metaclass of str in the future. Note that keyword arguments in a class declaration do not currently work properly in mypy for user-defined metaclasses, so you'll run into some type-safety issues for those. Instead of using metaclasses, consider using a base class defining __init_subclass__ instead.
2
1
78,590,309
2024-6-7
https://stackoverflow.com/questions/78590309/why-does-int-class-give-type-in-python
From what I've read, the built-in types such as int, float, dict, etc have been implemented in C, and they used to be different from user-defined classes before Python 2.2 However, the built-in types and user defined classes have long been unified now, and the following happens >>> int.__class__ <class 'type'> Why does this happen? I'm aware that type is a metaclass in Python, and all the user defined classes (by default) are (meta) instances of the same, and it gives me the impression that any object that is an instance of type has been instantiated using the type metaclass. But built-in classes, such as int, have been implemented in C, and haven't been created using the type metaclass, right? Then, why does int.__class__, for example, give <class 'type'>? Is it that, "manually", the __class__ attribute of built-in types such as int, has been set to type, to enable the unification?
Classes created in C are still classes and share the same core structure in memory as classes created in pure Python code. The same mechanisms used from Python to create a class can be used to create a class from C code. The only thing is that they don't "have to": a class in C can have all its inner fields hard-coded, and its class members manually assigned: unlike Python code, there are no guards for assigning the __class__ field of the class object. As I said: regardless of that, these built-in classes are still "real" instances of type, as the memory layout of the special slots, both public and private, is the layout used by type. And - on top of all that - even for Python-defined classes, it is possible to change the __class__ attribute of a class (but with restrictions). Yes, as simple as that: just assign a new value to the __class__ attribute of an existing instance. This will effectively "transform" an object into another, carrying its instance namespace. It is not the same thing as a cast does in statically typed languages: with a cast in Java/C++ an object is unchanged, and the compiler just tries to apply the next operation as if the cast instance were from the cast-target class. Assigning to __class__ is a permanent change (until a reassignment, of course) to the underlying instance. See this in action: In [51]: class A: ...: def a(self): ...: ... ...: In [52]: class B: ...: def b(self): ...: print("at method B") ...: In [53]: a = A() In [54]: a.__class__ = B In [55]: a.b() at method B It won't work if both classes have differing "memory layouts" - that is, data structures defined in native code, such as lists or tuples, or using the __slots__ mechanism. But for "ordinary" classes which keep the instance namespace in the __dict__ attribute, it just works.
2
2
78,591,534
2024-6-7
https://stackoverflow.com/questions/78591534/matplotlib-add-image-in-top-left-corner-of-figure
I tried various methods found online but can't add an image at the correct location in the top left corner of my figure. Here's one on my attempts, where I try to set exact the size of the generated figure in pixels, and then add a 48x48 px image in top left corner. Issues: The figures are not created with the chosen pixel size On the X axis, where the origin is left, I need to offset by 34 pixels to get the image near to left border, which makes no sense to me. On the Y axis, the image is also badly placed, mostly because the size is not as expected. Why is the code below failing? How can I achieve what I want? import matplotlib.pylab as plt # Plot from PIL import Image # Add background image from matplotlib.offsetbox import AnnotationBbox, OffsetImage # Add background image def plot_image(w, h): px = 1/plt.rcParams['figure.dpi'] fig,ax = plt.subplots(figsize=(w*px, h*px)) image_path = '......../The Simpsons 48x48.png' image_data = Image.open(image_path) image_box = OffsetImage(image_data, zoom=1) anno_box = AnnotationBbox(image_box, xy=(34,h-48), xycoords='figure pixels', frameon=False) ax.add_artist(anno_box) plt.text(0.5, 0.5, f'Requested: {w}x{h}') plt.text(0.5, 0.4, 'Obtained: 579x455' if w==640 else 'Obtained: 759x561' if w==800 else 'Obtained: 960x711' ) plt.show() plot_image( 640, 480) plot_image( 800, 600) plot_image(1024, 768) Results: Image used:
Are you using a notebook? When you show in a notebook the figure is automatically resized to fit around the artists on it. This is equivalent to passing bbox_inches='tight' when you call savefig. It is possible to turn that off in a notebook, but I just tried replacing plt.show() in your example with plt.savefig(f'simpson_test{w}x{h}.png') and they all came out the right size. I have not understood why the positioning is strange with AnnotationBbox, but a simpler way is figimage: import matplotlib.pyplot as plt # Plot from PIL import Image # Add background image def plot_image(w, h): px = 1/plt.rcParams['figure.dpi'] fig,ax = plt.subplots(figsize=(w*px, h*px)) image_path = 'simpson.png' image_data = Image.open(image_path) fig.figimage(image_data, yo=h-48) plt.text(0.5, 0.5, f'Requested: {w}x{h}') plt.savefig(f'simpson_test{w}x{h}.png') plot_image( 640, 480) plot_image( 800, 600) plot_image(1024, 768)
2
1
78,593,047
2024-6-7
https://stackoverflow.com/questions/78593047/tcl-issue-while-running-a-python-script-from-another-python-app-converted-to-ex
I made an app in tkinter which helps creating tkinter app/scripts, (when I export the file it is stored as a .py) I want to run this .py script from my app so that we can preview it immediately after export. I used subprocess.run method and it works perfectly within the python app. But when I converted the app to exe with pyinstaller then the preview thing doesn't work because of a tcl version error. init.tcl: version conflict for package "Tcl": have 8.6.10, need exactly 8.6.9 version conflict for package "Tcl": have 8.6.10, need exactly 8.6.9 while executing "package require -exact Tcl 8.6.9" (file "C:/----/_MEI170162/tcl/init.tcl" line 19) invoked from within "source {C:/-----/_MEI170162/tcl/init.tcl}" ("uplevel" body line 1) invoked from within "uplevel #0 [list source $tclfile]" This probably means that Tcl wasn't installed properly. I tried subprocess.run, subprocess.startfile, os.system, and even webbroswer.open methods to run the exported .py tkinter app, but the same error is showing. Note that I have compiled the app with an older version of python, so the tcl version is also different. In my main system the python version is set to latest one, so subprocess is using that tcl version, but I don't want to have any connection with the main app's tcl version. As the user may install different versions of python, so there will be different tcl versions too, and this error will be shown again in that case. I tried the exec(open(file).read()) method, although it works but there are issues with some exports like pyimages missing errors. Is there any other way to run a python script which is more standalone?
Finally got an answer, we can delete the tcl environment variables before running the subprocess import os import subprocess file_path = "C:/----/test.py" def run_script(): env = os.environ.copy() if 'TCL_LIBRARY' in env: del env['TCL_LIBRARY'] if 'TK_LIBRARY' in env: del env['TK_LIBRARY'] subprocess.run(["python",file_path], shell=True, env=env)
2
1
78,593,700
2024-6-7
https://stackoverflow.com/questions/78593700/langchain-community-langchain-packages-giving-error-missing-1-required-keywor
All of sudden langchain_community & langchain packages started throwing error: TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard' The error getting generated somewhere in pydantic I strongly suspect it is version mismatch. So I tried upgrading packages langchain, langchain_community, pydantic, langsmith etc. But no luck. My current installed versions shows as under: Python 3.12.4 langchain: 0.2.3 langchain_community: 0.2.4 langsmith: 0.1.75 pydantic: 2.7.3 typing_extensions: 4.11.0 Pip check also not showing any conflict. Here is complete trace of error. Any help would be really appreciated. TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard' File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script exec(code, module.__dict__) File "C:\MyProject\MyScript.py", line 20, in <module> from langchain_community.vectorstores import Chroma File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_community\vectorstores\__init__.py", line 509, in __getattr__ module = importlib.import_module(_module_lookup[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1264.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_community\vectorstores\chroma.py", line 20, in <module> from langchain_core.documents import Document File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\documents\__init__.py", line 6, in <module> from langchain_core.documents.compressor import BaseDocumentCompressor File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\documents\compressor.py", line 6, in <module> from langchain_core.callbacks import Callbacks File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\callbacks\__init__.py", line 22, in <module> from langchain_core.callbacks.manager import ( File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\callbacks\manager.py", line 29, in <module> from langsmith.run_helpers import get_run_tree_context File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\run_helpers.py", line 40, in <module> from langsmith import client as ls_client File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\client.py", line 52, in <module> from langsmith import env as ls_env File 
"C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\env\__init__.py", line 3, in <module> from langsmith.env._runtime_env import ( File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\env\_runtime_env.py", line 10, in <module> from langsmith.utils import get_docker_compose_command File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\utils.py", line 31, in <module> from langsmith import schemas as ls_schemas File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\schemas.py", line 69, in <module> class Example(ExampleBase): File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\main.py", line 286, in __new__ cls.__try_update_forward_refs__() File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\main.py", line 807, in __try_update_forward_refs__ update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,)) File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py", line 554, in update_model_forward_refs update_field_forward_refs(f, globalns=globalns, localns=localns) File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py", line 520, in update_field_forward_refs field.type_ = evaluate_forwardref(field.type_, globalns, localns or None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py", line 66, in evaluate_forwardref return cast(Any, type_)._evaluate(globalns, localns, set()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I am having the same issue. The stack is different, but the error comes from the same line, pydantic\v1\typing.py, line 66. This refers to the Python typing module (v3.12.4), which now has an additional mandatory parameter 'recursive_guard'. There are other areas of the pydantic code where this has been fixed (recursive_guard=set()). Check this out --> https://github.com/pydantic/pydantic-core/issues/1292 Within this thread, they mention that using Python v3.12.3 could temporarily solve the issue in 1292, probably because of this additional parameter in v3.12.4 (I am guessing here). This is not an option for me, as my google alpha functions local deploy does not recognize --runtime=python311 and always takes the latest runtime (v3.12.4). I hope that they fix this too.
10
11
78,594,171
2024-6-7
https://stackoverflow.com/questions/78594171/cost-function-increases-then-stops-growing
I understand the zig-zag nature of the cost function when applying gradient descent, but what bothers me is that the cost started out at a low 300 only to increase to 1600 in the end. The cost function would oscillate between 300 and 4000 to end up at 1600. I thought I should get a number that is 300 or lower. I have tried changing the learning rate and all it does is still take me to 1600. I should get a cost around 300, not one that grows it. Data: square_feet = [1661.0, 871.0, 1108.0, 1453.0, 1506.0, 1100.0, 1266.0, 1514.0, 948.0, 1878.0, 1522.0, 931.0, 1475.0, 1177.0, 1844.0, 1469.0, 2155.0, 967.0, 1092.0] prices = [1350.0, 489.0, 589.0, 539.0, 775.0, 575.0, 749.0, 795.0, 644.9, 590.0, 575.0, 699.0, 999.0, 775.0, 599.0, 599.0, 895.0, 550.0, 849.0] Both of those lists are Pandas series in the original code, but have converted them to lists here for clarity. Main: # Add starting weight and bias w_init = 5e-1 # Increase in price for every 1 square feet b_init = 200 # Starting price for the cheapest houses # Iterations and learning rate for the gradient descent algorithm iterations = 10000 alpha = 1.0e-6 # Delicate and causes a divergence if it's set too large w_final, b_final, J_hist, p_hist = gradient_descent( square_feet, prices, w_init, b_init, alpha, iterations) print(f'w: {w_final}, b: {b_final}, Costs: {J_hist}, Weight and Bias: {p_hist}') Functions: # Cost Function to determine cumulative error between real and predicted values def cost_function(x, y, w, b): # 1) Number of training examples m = x.size cost = 0 # 2) Index the training examples and account for cost per instance for i in range(m): y_hat = w * x[i] + b cost += (y_hat-y[i])**2 cost /= 2 * m # 3) Return total cost return cost # Compute the gradient, i.e., the scalar that improves accuracy def gradient_function(x, y, w, b): m = x.size # Partial derivatives of the cost function with respect to weight and bias dj_dw = 0 dj_db = 0 for i in range(m): y_hat = w * x[i] + b dj_dw_i = (y_hat - y[i]) * x[i] dj_db_i = (y_hat - y[i]) dj_db += dj_db_i dj_dw += dj_dw_i dj_dw /= m dj_db /= m return dj_dw, dj_db def gradient_descent(x, y, w_init, b_init, learning_rate, num_iters): # Used for graphing J_history = [] p_history = [] b = b_init w = w_init for i in range(num_iters): dj_dw, dj_db = gradient_function(x, y, w, b) # Gradient # Update weight, bias w -= learning_rate * dj_dw b -= learning_rate * dj_db # Prevents resource exhaustion; unnecessary to store similar costs # Past 100000 iterations if i < 100000: J_history.append(cost_function(x, y, w, b)) p_history.append([w,b]) return w, b, J_history, p_history I'm stumped over this issue.
In your cost_function, the division that computes the average (cost /= 2 * m) is inside the loop, so the running total gets divided by 2*m on every iteration and the final cost comes out far smaller than it should be, preventing the model from converging as expected. You should sum up all the squared errors first and divide only once, after the loop. def cost_function(x, y, w, b): # 1) Number of training examples m = x.size cost = 0 # 2) Index the training examples and account for cost per instance for i in range(m): y_hat = w * x[i] + b cost += (y_hat-y[i])**2 cost /= 2 * m # 3) Return total cost return cost
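Spelled out with proper indentation (a sketch of what the fix looks like once the division is moved out of the loop; variable names are those from the question):

def cost_function(x, y, w, b):
    m = x.size
    cost = 0
    for i in range(m):
        y_hat = w * x[i] + b
        cost += (y_hat - y[i]) ** 2   # only accumulate squared errors here
    cost /= 2 * m                     # average once, after the loop
    return cost

The same check is worth doing in gradient_function, but there the division already happens after the loop, so only the cost computation needs the change.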
2
2
78,586,028
2024-6-6
https://stackoverflow.com/questions/78586028/vs-code-closes-after-opening
I just installed ubuntu 24. I'm a django developer and I've been working fine with Visual Studio Code, but today when I tried to open my vscode, while it was opening it closed without any error or message! :( I restarted my computer and reinstalled Vscode. But nothing that nothing:( note: I installed vscode from app center on ubuntu
I've faced the same issue. I think you've installed it from the snap store. You need to install it with the deb package instead. Uninstall the current code (remove it), go to the official website of Visual Studio Code, download the latest version and install it with the command: sudo dpkg -i code_1.90.0-1717531825_amd64.deb In general, the command is: sudo dpkg -i <deb_file_name>.deb
2
5
78,592,406
2024-6-7
https://stackoverflow.com/questions/78592406/group-pandas-dataframe-on-criteria-from-another-dataframe-to-multi-index
I have the following two DataFrames: df 100 101 102 103 104 105 106 107 108 109 0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 3 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 4 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 5 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 6 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 7 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 8 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 9 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 df2 crit1 crit2 110 a A 109 a B 108 a A 107 b B 106 b A 105 a A 104 a B 103 a A 102 b B 101 b A 100 b A 99 b A df contains data for ten entities 100-109 and df2 describes two criteria categorizing the ten entities 100-109 (and others, in a different order). I'd like to group df on a two-level column index (crit1,crit2) with one value per combination of (crit1,crit2), being the sum of all columns with this combination. For example, the new column with the index ('a','A') would contain the sum of columns [108,105,103]. expected result: crit1 a b crit2 A B A B 0 3.0 2.0 3.0 2.0 1 19.0 15.0 10.0 11.0 2 3.0 2.0 3.0 2.0 3 3.0 2.0 3.0 2.0 4 3.0 2.0 3.0 2.0 5 3.0 2.0 3.0 2.0 6 3.0 2.0 3.0 2.0 7 3.0 2.0 3.0 2.0 8 3.0 2.0 3.0 2.0 9 3.0 2.0 3.0 2.0 To reproduce the DataFrames: import pandas as pd import numpy as np df = pd.DataFrame(np.ones((10,10)), index=np.arange(10), columns=np.arange(100,110)) df2 = pd.DataFrame(np.array([['a','A'],['a','B'],['a','A'],['b','B'],['b','A'],['a','A'],['a','B'],['a','A'],['b','B'],['b','A'],['b','A'],['b','A']]), index=np.arange(110,98,-1), columns=['crit1','crit2']) df.iloc[1] = np.arange(1,11)
You can reindex, set_axis with MultiIndex.from_frame then groupby.sum: out = (df.reindex(columns=df2.index) .set_axis(pd.MultiIndex.from_frame(df2), axis=1) .groupby(axis=1, level=[0, 1]).sum() ) For the latest version of pandas, using the transpose as intermediate: out = (df.T .reindex(df2.index) .set_axis(pd.MultiIndex.from_frame(df2)) .groupby(level=[0, 1]).sum().T ) Output: crit1 a b crit2 A B A B 0 4.0 2.0 2.0 2.0 1 4.0 2.0 2.0 2.0 2 4.0 2.0 2.0 2.0 3 4.0 2.0 2.0 2.0 4 4.0 2.0 2.0 2.0 5 4.0 2.0 2.0 2.0 6 4.0 2.0 2.0 2.0 7 4.0 2.0 2.0 2.0 8 4.0 2.0 2.0 2.0 9 4.0 2.0 2.0 2.0 Alternatively, pre-aggregating the indices, then building the output with concat: groups = df2.reset_index().groupby(list(df2))['index'].agg(list) out = pd.concat({k: df.reindex(columns=lst).sum(axis=1) for k, lst in groups.items()}, axis=1) Output: a b A B A B 0 4.0 2.0 2.0 2.0 1 4.0 2.0 2.0 2.0 2 4.0 2.0 2.0 2.0 3 4.0 2.0 2.0 2.0 4 4.0 2.0 2.0 2.0 5 4.0 2.0 2.0 2.0 6 4.0 2.0 2.0 2.0 7 4.0 2.0 2.0 2.0 8 4.0 2.0 2.0 2.0 9 4.0 2.0 2.0 2.0 Intermediate groups: crit1 crit2 a A [100, 102, 105, 107] B [101, 106] b A [104, 109] B [103, 108] Name: index, dtype: object
2
3
78,584,013
2024-6-6
https://stackoverflow.com/questions/78584013/how-to-chain-multiple-with-columns-in-polars
I'm using Polars to transform my DataFrame, and I want to chain multiple with_columns transformations. However, I encounter an issue when trying to perform operations on a newly created column within the same with_columns context. I end up needing to save the DataFrame after each transformation and then reapply with_columns for subsequent transformations. Is there a cleaner way to achieve this? Here is an example of my current approach: import polars as pl # Sample data exampledata = { 'A': [1, 2, 3], 'B': [4, 5, 6] } df = pl.DataFrame(exampledata) # First transformation df = df.with_columns( (pl.col("A") + pl.col("B")).alias("C") ) # Second transformation df = df.with_columns( (pl.col("C") * pl.col("B")).alias("D") ) print(df) In this example, I create a new column C from columns A and B. Then, I need to save the DataFrame before I can create column D from C and B. Is there a more efficient or idiomatic way to chain these transformations in Polars?
For performance reasons, all expressions in a single pl.DataFrame.with_columns context are evaluated in parallel. Especially, it is not possible to use a column resulting from an expression in a different expression in the same context.1 1 This does not mean that polars performs duplicate work for expressions with common sub-expressions since under the hood a Common-Subplan-Elimination mechanism is used. Still, since with_columns returns the resulting dataframe, you do not need to assign each result back to df, but can chain the with_columns calls directly as follows. df = ( df .with_columns( (pl.col("A") + pl.col("B")).alias("C") ) .with_columns( (pl.col("C") * pl.col("B")).alias("D") ) ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ A ┆ B ┆ C ┆ D β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ 5 ┆ 20 β”‚ β”‚ 2 ┆ 5 ┆ 7 ┆ 35 β”‚ β”‚ 3 ┆ 6 ┆ 9 ┆ 54 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
3
3
78,589,001
2024-6-6
https://stackoverflow.com/questions/78589001/trying-to-get-input-from-dataframe-and-using-values-cell-by-cell-to-output-into
I have an excel sheet with data in following format Bus num Bus name POI bus 20000 J874 0 20001 J976 0 10000 J1000 333333 I want to divide dataframe into two dataframes for value within POI bus column, hence one dataframe will have rows 1 and 2, and other will have row 3. After this, I am trying to get data from both the dataframes using iloc function for all the rows, and specific columns. The code is working of dataframe 1 but not working for dataframe 2, and python is throwing " raise KeyError(key) from err". Code I am using: from openpyxl import load_workbook import pandas as pd input_file = '..\Bench\Gen_addition_total.xlsx' wb = load_workbook(input_file) ws = wb.active output_file1 = open('..\Bench\Gen_sub.inch','w') output_file2 = open('..\Bench\Gen_tap.inch','w') df0 = pd.read_excel(input_file) df1 = df0[(df0['POI 2'] == 0)] df2 = df0[(df0['POI 2'] != 0)] def gen_sub(): for i in range(len(df1)): bus_num = (df1.iloc[:,0]) bus_name = (df1.iloc[:,1]) n1 = bus_num[i] n2 = bus_name[i] output_file1.write(str(n1).strip("()"))+ output_file1.write('\n') + output_file1.write(str(n2).strip("()")) + output_file1.write('\n') output_file1.close() def gen_tap(): for j in range(len(df2)): bus_num = (df2.iloc[:,0]) bus_name = (df2.iloc[:,1]) n4 = bus_num[j] n5 = POI_bus2[j] output_file2.write(str(n4).strip("()"))+ output_file2.write('\n') + output_file2.write(str(n5).strip("()")) + output_file2.write('\n') output_file2.close() gen_sub() gen_tap() When I run code for just gen_sub function, code is running perfectly, and I am getting following output: 20000 J874 20001 J976 But when I am trying to run code for both the functions I am getting error: "raise KeyError(key) from err" When I run code for just gen_sub function, code is running perfectly, and I am getting following output: 20000 J874 20001 J976 But when I am trying to run code for both the functions I am getting error: in gen_tap n4 = bus_num[j] File ~\AppData\Local\Programs\Spyder\pkgs\pandas\core\series.py:1007 in __getitem__ return self._get_value(key) File ~\AppData\Local\Programs\Spyder\pkgs\pandas\core\series.py:1116 in _get_value loc = self.index.get_loc(label) File ~\AppData\Local\Programs\Spyder\pkgs\pandas\core\indexes\base.py:3655 in get_loc raise KeyError(key) from err KeyError: 0
One approach is to do this: import pandas as pd data = { 'Bus num': [20000, 20001, 10000], 'Bus name': ['J874', 'J976', 'J1000'], 'POI bus': [0, 0, 333333] } df = pd.DataFrame(data) df1 = df[df['POI bus'] == 0] df2 = df[df['POI bus'] != 0] def gen_sub(output_file1, df1): with open(output_file1, 'w') as file: for i in range(len(df1)): bus_num = df1.iloc[i, 0] bus_name = df1.iloc[i, 1] file.write(f"{bus_num}\n") file.write(f"{bus_name}\n") def gen_tap(output_file2, df2): with open(output_file2, 'w') as file: for j in range(len(df2)): bus_num = df2.iloc[j, 0] bus_name = df2.iloc[j, 1] poi_bus = df2.iloc[j, 2] file.write(f"{bus_num}\n") file.write(f"{bus_name}\n") file.write(f"{poi_bus}\n") output_file1 = 'Gen_sub.inch' output_file2 = 'Gen_tap.inch' gen_sub(output_file1, df1) gen_tap(output_file2, df2) which gives you (in the .inch files) Gen_sub 20000 J874 20001 J976 and Gen_tap 10000 J1000 333333
2
1
78,587,580
2024-6-6
https://stackoverflow.com/questions/78587580/how-do-i-reduce-repetitive-imports-from-subfolders
I am working on a large project in VSCode. I have various subfolders with plenty of .py files, and a main.py in the project root. I want to import various functions from various files that exist in different subfolders. I find from x import y a very redundant process. What is the efficient way to do it? How does a professional python developer do it? my_project/ β”œβ”€β”€ FolderA/ β”‚ β”œβ”€β”€ __init__.py β”‚ └── module_a.py β”œβ”€β”€ FolderB/ β”‚ β”œβ”€β”€ __init__.py β”‚ └── module_b.py β”œβ”€β”€ FolderC/ β”‚ β”œβ”€β”€ __init__.py β”‚ └── module_c.py β”œβ”€β”€ __init__.py └── main.py Let's say the above is my structure and I want to call functions in main.py from module_a.py, module_b.py and module_c.py. Thanks. I tried the following, but I can't keep doing it for 10/20 functions: from FolderA.module_a import functionA from FolderB.module_b import functionB from FolderC.module_c import functionC Also tried the following, it didn't work: sys.path.append("D:\Python_workspace\software")
From a practical standpoint (if the folder representation you gave is accurate to your project), if you find yourself with lots of folders with one file each in them, you may want to reassess whether the categories/groupings you've chosen are realistic - too granular and you get a lot of the repetitive, snakey imports that you've shown. Unfortunately, at that point, with one file per folder there's not much you can do to solve your import problem short of restructuring the project tree. However, if your project is already in the following format (the difference being multiple files in each subfolder): my_project/ β”œβ”€β”€ FolderA/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ module_a1.py β”‚ └── module_a2.py β”œβ”€β”€ FolderB/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ module_b1.py β”‚ └── module_b2.py β”œβ”€β”€ FolderC/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ module_c1.py β”‚ └── module_c2.py β”œβ”€β”€ __init__.py └── main.py You can then set up imports in __init__.py (assume you're in FolderA/__init__.py; note the relative imports, which are required inside a Python 3 package): from .module_a1 import fun1 from .module_a2 import fun2 Then in any other file you can directly reference the directory/package to grab what you've now indirectly imported: from FolderA import fun1, fun2
4
2
78,588,656
2024-6-6
https://stackoverflow.com/questions/78588656/django-rest-framework-custom-filter-with-default-search-and-ordering-filter
I have a project in Django Rest Framework where I need have endpoint where I'm able to search Document objects by title and text and possibility to search object is active and inactive, using to this URL address. I am using to achieve this Django_filters package. Example: https://localhost:8000/?is_active=True. This is my model class Document(models.Model): title = models.CharField(max_length=100) text = models.TextField() date = models.DateField() account = models.ForeignKey( to=Account, null=True, blank=True, on_delete=models.CASCADE, related_name='documents', verbose_name='account', help_text='account', ) is_active = models.BooleanField(default=True) def __str__(self): return self.title Serializer class DocumentSerializer(serializers.ModelSerializer): class Meta: model = Document fields = ['id', 'title', 'text', 'date', 'account', 'is_active'] Custom FilterSet class DocumentBackendFilter(dfilters.FilterSet): is_active = dfilters.BooleanFilter(field_name='is_active', lookup_expr='exact') class Meta: model = Document fields = ['is_active'] View class DocumentsListView(viewsets.ModelViewSet): queryset = Document.objects.all() serializer_class = DocumentSerializer filterset_class = DocumentBackendFilter filterset_fields = ('is_active',) filter_backends = [filters.SearchFilter, filters.OrderingFilter] search_fields = ['title', 'text'] ordering_fields = ['title', 'text'] def get_queryset(self): qs = super().get_queryset() qs = qs.filter(account__users=self.request.user) return qs Problem: Under url: http://127.0.0.1:8000/?is_active=True I have elements where is_active is set on True and False. Default search is working OK. How can I get search, ordering and filter objects based on is_active?
Since you want to use the DocumentBackendFilter, you must add DjangoFilterBackend from django_filters.rest_framework to the filter_backends list in your view class. Besides, if you set filterset_class, the view class will ignore the filterset_fields list, so you should provide only one of the two. class DocumentsListView(viewsets.ModelViewSet): # some code filterset_class = DocumentBackendFilter # filterset_fields = ('is_active',) -> remove this line filter_backends = [filters.SearchFilter, filters.OrderingFilter, DjangoFilterBackend]
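Put together, the view would look like this (the import path for DjangoFilterBackend is the standard one from django_filters; everything else mirrors the code from the question):

from django_filters.rest_framework import DjangoFilterBackend
from rest_framework import filters, viewsets


class DocumentsListView(viewsets.ModelViewSet):
    queryset = Document.objects.all()
    serializer_class = DocumentSerializer
    filterset_class = DocumentBackendFilter
    filter_backends = [filters.SearchFilter, filters.OrderingFilter, DjangoFilterBackend]
    search_fields = ['title', 'text']
    ordering_fields = ['title', 'text']

    def get_queryset(self):
        qs = super().get_queryset()
        return qs.filter(account__users=self.request.user)

With this in place, requests like ?is_active=True, ?search=report and ?ordering=title can be combined on the same endpoint, since DRF applies each backend in filter_backends in turn.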
2
1
78,588,269
2024-6-6
https://stackoverflow.com/questions/78588269/can-a-python-program-find-orphan-processes-that-it-previously-created
I'm using popen() to create potentially long-running processes in Python. If the parent program dies and then restarts, is there a way to retrieve the previously created processes that are still running? I'd probably have to use start_new_session=True, which I'm not doing now. Essentially, I want to get a re-constructed instance of a popen object that points to the child process. I suspect it's not possible, since serializing a popen() object doesn't seem to be possible.
First of all, if one of the processes connected with a pipe dies, any write to that pipe will result in SIGPIPE -- which by default terminates the process (reads will simply see end-of-file). So your orphan process is very likely to die promptly, too, if the parent crashes. As per Can I share a file descriptor to another process on linux or are they local to the process?, it's only possible to share a file descriptor between unrelated processes via a UNIX domain socket, and that requires active participation from both processes. As others have pointed out, you also won't be able to wait for that process. Depending on what you're trying to achieve (e.g. it's a worker process and you don't want to lose its progress), a better way may be to e.g. save the progress at some points (whether by the parent or by the child) and add an ability to restart from a saved state.
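If all you need after a restart is to find out whether a child you started earlier is still alive (rather than recover the Popen object itself), one common pattern - sketched below, and not something the answer above relies on - is to record the child's PID in a file and probe it later; note this is POSIX-specific and does not guard against PID reuse:

import os
import subprocess

PID_FILE = "worker.pid"  # hypothetical path

# at launch time, in the parent
proc = subprocess.Popen(["python", "worker.py"], start_new_session=True)  # hypothetical worker script
with open(PID_FILE, "w") as f:
    f.write(str(proc.pid))

# after the parent restarts
with open(PID_FILE) as f:
    pid = int(f.read())
try:
    os.kill(pid, 0)   # signal 0 checks for existence without sending anything
    print(f"process {pid} still appears to be running")
except ProcessLookupError:
    print(f"process {pid} is gone")

You still cannot wait() on that PID from the restarted parent, since it is no longer its child, but you can decide whether to leave it alone, signal it, or start a fresh worker.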
3
1
78,586,783
2024-6-6
https://stackoverflow.com/questions/78586783/how-to-use-pycountry-db-country-objects-as-a-pd-dataframe-index
I am creating a dataset collecting data for a given set of countries. To avoid any ambiguity, I would like to use a pycountry.db.Country object to represent each country. However, when setting the country as the index of my pd.DataFrame, I can't select (.loc[]) a record by passing a country, I'm getting this type of error β€” despite the record existing: raise KeyError(f"None of [{key}] are in the [{axis_name}]") How to select a record in my pd.DataFrame, given a pycountry.db.Country object? Here is a working example: import pandas as pd import pycountry aruba: pycountry.db.Country = pycountry.countries.get(alpha_3="ABW") belgium: pycountry.db.Country = pycountry.countries.get(alpha_3="BEL") canada: pycountry.db.Country = pycountry.countries.get(alpha_3="CAN") data: list[dict] = [ {"country": aruba, "population": 106_203}, {"country": belgium, "population": 11_429_336}, {"country": canada, "population": 37_058_856}, ] df: pd.DataFrame = pd.DataFrame(data) df.set_index("country", inplace=True) # df.index = df.index.astype(dtype="category") # optional: doesn't change the outcome assert df.index[1] == belgium assert df.index[1] is belgium belgium_data = df.loc[belgium] # <-- fails with "None of [Index([('alpha_2', 'BE'),\n('alpha_3', 'BEL'),\n('flag', 'πŸ‡§πŸ‡ͺ'),\n('name', 'Belgium'),\n('numeric', '056'),\n('official_name', 'Kingdom of Belgium')],\ndtype='object', name='country')] are in the [index]"
Explanation Pandas treats your object as a list-like object, which is why you cannot use it as a key for loc, since it will try to iterate over the objects in the list. >>> from pandas.core.dtypes.common import is_list_like, is_scalar >>> is_scalar(belgium) False >>> is_list_like(belgium) True See What datatype is considered 'list-like' in Python? for more about is_list_like Workaround Interestingly, this works: >>> df.loc[[belgium]].iloc[0] population 11429336 Name: Country(alpha_2='BE', alpha_3='BEL', flag='πŸ‡§πŸ‡ͺ', name='Belgium', numeric='056', official_name='Kingdom of Belgium'), dtype: int64 so if you really really want to use the object as an index, you can work around it with this. Or, getting even more ridiculous, making the object not iterable: >>> belgium.__iter__ = None >>> df.loc[belgium] population 11429336 Name: Country(__iter__=None, alpha_2='BE', alpha_3='BEL', flag='πŸ‡§πŸ‡ͺ', name='Belgium', numeric='056', official_name='Kingdom of Belgium'), dtype: int64 But I'm sure this would break some other functionality on your code, since __iter__ seems to be implemented on Country to make it possible to cast it to dict easily. Recommendation Objects as an index is not maybe the best of practices if your dataset is large. What I would recommend would be to use for example alpha_3 as the index instead, and keep the object in a separate column. You would still avoid ambiguity, but would not get in trouble with overly complex index types. data: list[dict] = [ {"index": aruba.alpha_3, "country": aruba, "population": 106_203}, {"index": belgium.alpha_3, "country": belgium, "population": 11_429_336}, {"index": canada.alpha_3, "country": canada, "population": 37_058_856}, ] df: pd.DataFrame = pd.DataFrame(data) df.set_index("index", inplace=True) assert df.loc[belgium.alpha_3]["country"] == belgium
2
2
78,587,067
2024-6-6
https://stackoverflow.com/questions/78587067/how-to-refactor-similar-functions-in-python
I have defined multiple functions where I search for a specific pydantic model in a list of pydantic models based on some attribute value. SocketIOUserSessionID = str RoomWithIndex = tuple[Room, RoomIndex] RSStateWithIndex = tuple[RSState, int] RSPacketSavedRecordWithIndex = tuple[RSPacketSavedRecordsContainer, int] def find_room_by_id(self, id: UUID | str, where: list[Room]) -> RoomWithIndex | None: room = next(filter(lambda room: room.id == id, where), None) if room is None: return None index = where.index(room) return room, index def find_room_by_session(self, session: SocketIOUserSessionID, where: list[Room]) -> RoomWithIndex | None: room = next(filter(lambda room: session in room.sessions, where), None) if room is None: return None index = where.index(room) return room, index def find_rs_state_by_room_id(self, room_id: str, where: list[RSState]) -> RSStateWithIndex | None: rs_state = next(filter(lambda rs_state: rs_state.room_id == room_id, where), None) if rs_state is None: return None index = where.index(rs_state) return rs_state, index def find_saved_record_by_room_id(self, room_id: str, where: list[RSPacketSavedRecordsContainer]) -> RSPacketSavedRecordWithIndex | None: saved_record = next(filter(lambda saved_records: saved_records.room_id == room_id, where), None) if saved_record is None: return None index = where.index(saved_record) return saved_record, index How to write a generic function (with typing) to refactor such code? I heard of functools.singledispatch decorator but I am not sure that this is the right case to use it. def find_value_by_attr(self): ?
I tried to generalize the four functions as much as possible - here's where I ended up: def find_model( self, iid: UUID | str, where: list[Any], filter_attr: str, ) -> tuple[Any, int] | None: if ( model := next( filter(lambda model: iid == getattr(model, filter_attr), where), None ) ): return model, where.index(model) The find_model function takes an iid (item ID) in place of id, session, or room_id (NOTE: naming this iid avoids shadowing the builtin id) a where list just as before a filter_attr which is the string name of the attribute you want to filter, e.g. 'id', 'sessions', or 'room_id' Note the None default passed to next(), which avoids a StopIteration when nothing matches and lets the function fall through to an implicit None, matching the original behaviour. The only thing this doesn't quite cover is the filtering case of find_room_by_session which is using in instead of == in the filter lambda. If anyone more clever than I would care to weigh in, I'm open to it! If I may take a moment to editorialize: I like the idea of using @singledispatch for this, but it doesn't get you away from writing multiple function prototypes anyway...if the goal is 'less code', then @singledispatch doesn't help much in that regard.
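For the typing half of the question, a generic version of the same idea (just a sketch - it uses a TypeVar instead of Any so callers get back the concrete model type, while the attribute name is still passed as a string):

from typing import TypeVar
from uuid import UUID

T = TypeVar("T")

def find_model(
    self,
    iid: UUID | str,
    where: list[T],
    filter_attr: str,
) -> tuple[T, int] | None:
    model = next(
        (m for m in where if getattr(m, filter_attr) == iid),
        None,
    )
    if model is None:
        return None
    return model, where.index(model)

Called as, e.g., find_model(self, room_id, where=some_list_of_RSState, filter_attr="room_id") (names here are illustrative), a type checker will infer the result as tuple[RSState, int] | None.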
3
1
78,586,128
2024-6-6
https://stackoverflow.com/questions/78586128/surprising-behaviour-for-numpy-float16-when-testing-equality
I'm passing various bits of data to a function that computes variance along the first dimension. Sometimes the variance is zero, which is fine, but then the following strange thing happens: >> sigma = data.var(axis=0) + 1e-7 # data has zero variance so all entries should equal 1e-7 >> sigma array([1.e-07, 1.e-07, 1.e-07, ..., 1.e-07, 1.e-07, 1.e-07], dtype=float16) >> (sigma==1e-7).all() True >> sigma[0]==1e-7 False On its own, the fourth line would be explained by the 16-bit precision, and indeed >> np.float16(1e-7)==1e-7 False But it seems to contradict the third line, which says they are equal. This was causing a bug in my code. I can redesign around it, but I want to understand why numpy is doing this so I'm not caught out again in the future.
This comes from the fact that numpy type promotion treats scalars and arrays differently. You can see this with np.result_type: >>> np.result_type(sigma, 1E-7) dtype('float16') >>> np.result_type(sigma[0], 1E-7) dtype('float64') Essentially, when an array value is compared to a scalar value (the first case), the dtype of the array value takes precedence. When comparing two scalars or two arrays (the second case), the highest precision takes precedence. What this means is that when you evaluate (sigma == 1E-7), both sides are first cast to float16 before comparison, whereas when you evaluate sigma[0] == 1E-7, both sides are first cast to float64 before comparison. Because float16 cannot perfectly represent the value 1E-7, this causes a discrepancy in the scalar-comparison case, where both values are cast to float64: >>> np.float16(1E-7).astype(np.float64) 1.1920928955078125e-07 >>> np.float64(1E-7) 1e-07 Finally, please note that these scalar-specific type casting rules are being changed in NumPy 2.0 (see NEP 50: Promotion Rules for Scalars), so if you run your code with NumPy 2.0, both cases will promote to float16 and return True.
4
8
78,585,754
2024-6-6
https://stackoverflow.com/questions/78585754/tight-subplot-axes-without-their-plot-to-the-figure
I have made a subplot in matplotlib and managed to to put the different cmap I have on the same column. For a minimal working example (with dummy cmaps): import matplotlib.pyplot as plt import numpy as np # Generate sample data data1 = np.random.rand(10, 10) data2 = np.random.rand(10, 10) data3 = np.random.rand(10, 10) fig_bandwidth = plt.figure(figsize=(12, 6)) ax1 = plt.subplot(3, 2, 6) ax2 = plt.subplot(3, 2, 4) ax3 = plt.subplot(3, 2, 2) ax_bandwidth = plt.subplot(1, 3, 1) axes = [ax1, ax2, ax3] # Plot data and add color bars for ax, data in zip(axes, [data1, data2, data3]): cax = ax_bandwidth.imshow(data, aspect='auto', cmap='viridis') plt.colorbar(cax, ax=ax) ax.axis('off') plt.tight_layout() plt.show() What I am trying to do is have a tight subplot with the figure on the left and the 3 color bars on the right in the same column, but it seems the plotting boxes are still there, preventing me from placing these axes next to the figure. Maybe using subplots isn't the best solution, any suggestion? Then how could I place an ax title spanning across the three color bars since they represent the same thing (bandwidth in MHz for context).
Using add_axes works much better than subplots. It gives me much more freedom on the placement. Here is a minimal working example with dummy colorbars: import numpy as np import matplotlib.pyplot as plt fig_bandwidth = plt.figure(figsize=(12, 6)) # Creating three axes: add_axes([xmin,ymin,dx,dy]) ax1 = fig_bandwidth.add_axes((0.75, 0.05, 0.1, 0.3)) ax2 = fig_bandwidth.add_axes((0.75, 0.36, 0.1, 0.3)) ax3 = fig_bandwidth.add_axes((0.75, 0.67, 0.1, 0.3)) ax_bandwidth = fig_bandwidth.add_axes((0.1, 0.05, 0.7, 0.92)) axes = [ax1, ax2, ax3] # Generate sample data data1 = np.random.rand(10, 10) data2 = np.random.rand(10, 10) data3 = np.random.rand(10, 10) for ax, data in zip(axes, [data1, data2, data3]): cax = ax_bandwidth.imshow(data, aspect='auto', cmap='viridis') plt.colorbar(cax, ax=ax) ax.axis('off') plt.show()
2
0
78,581,797
2024-6-5
https://stackoverflow.com/questions/78581797/how-can-i-create-an-in-memory-file-object-that-has-a-file-descriptor-in-python
I plan to use subprocess.Popen (in Python 3.11.2) to implement git mktree < foo.txt. Now I face the question. In order to reproduce my situation, here is the script that creates the environment. #!/bin/bash export GIT_AUTHOR_NAME=foo export [email protected] export GIT_AUTHOR_DATE="Tue Jun 4 21:40:15 2024 +0800" export GIT_COMMITTER_NAME=foo export [email protected] export GIT_COMMITTER_DATE="Tue Jun 4 21:40:15 2024 +0800" rm -rf foo git init foo mkdir -p foo/hello echo hello > foo/hello/hello.txt echo hello > foo/hello.txt echo world > foo/world.txt git -C foo add . git -C foo commit -m 'hello world' git -C foo log --no-decorate git -C foo ls-tree HEAD hello.txt git -C foo ls-tree HEAD world.txt It's expected to print the commit and 2 blob entries. commit d2b25fd15c1435f515dd6379eca8d691dde6abeb Author: foo <[email protected]> Date: Tue Jun 4 21:40:15 2024 +0800 hello world 100644 blob ce013625030ba8dba906f756967f9e9ca394464a hello.txt 100644 blob cc628ccd10742baea8241c5924df992b5c019f71 world.txt I want to create a commit from the tree that has only hello.txt and world.txt. So first I need to create the new tree object. (Update: after getting the right solution, I find that the code has a fatal bug. It creates a tree object with wrong content due to str and bytes.) import subprocess # get the blob entry of hello.txt o, e = subprocess.Popen( ['git', 'ls-tree', 'HEAD', 'hello.txt'], stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() line1 = o.decode() # get the blob entry of world.txt o, e = subprocess.Popen( ['git', 'ls-tree', 'HEAD', 'world.txt'], stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() line2 = o.decode() # write the 2 lines to foo.txt with open('foo.txt', 'w') as f: f.write(line1) f.write(line2) # create a new tree object from foo.txt with open('foo.txt') as f: o, e = subprocess.Popen( ['git', 'mktree'], stdin=f, stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() tree = o.decode() print(f'created tree object {tree}') I wonder if I can use an in-memory file object so that I don't have to create and remove foo.txt. As io.StringIO is recommended in many answers, I try the code. import subprocess import io line1 = '100644 blob ce013625030ba8dba906f756967f9e9ca394464a\thello.txt\n' line2 = '100644 blob cc628ccd10742baea8241c5924df992b5c019f71\tworld.txt\n' with io.StringIO(line1 + line2) as f: o, e = subprocess.Popen( ['git', 'mktree'], stdin=f, stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() tree = o.decode() print(f'created tree object {tree}') It raises an exception. Traceback (most recent call last): File "<stdin>", line 2, in <module> File "C:\Python311\Lib\subprocess.py", line 892, in __init__ errread, errwrite) = self._get_handles(stdin, stdout, stderr) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\subprocess.py", line 1339, in _get_handles p2cread = msvcrt.get_osfhandle(stdin.fileno()) ^^^^^^^^^^^^^^ io.UnsupportedOperation: fileno According to the io doc, it seems io.StringIO does not have a file descriptor. fileno() Return the underlying file descriptor (an integer) of the stream if it exists. An OSError is raised if the IO object does not use a file descriptor. Is there any in-memory file object that has a file descriptor? Or is there any method to bypass the exception with io.StringIO in subprocess.Popen? Update: With the help of @Ture PΓ₯lsson 's answer, I get the expected solution. 
import subprocess # get the blob entry of hello.txt line1, _ = subprocess.Popen( ['git', 'ls-tree', 'HEAD', 'hello.txt'], stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() # get the blob entry of world.txt line2, _ = subprocess.Popen( ['git', 'ls-tree', 'HEAD', 'world.txt'], stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate() # create a tree object from the lines # although Popen's text=True allows communicate's input to be str, # here it should be bytes. # str input creates a tree object with wrong content. o, e = subprocess.Popen( ['git', 'mktree'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, env={'GIT_DIR': 'foo/.git'}, ).communicate(input=line1+line2) tree = o.decode().strip() print(f'created tree object {tree}')
Unless I am missing something, you can set stdin=PIPE when building the Popen object, and pass the input as the argument to communicate. Here’s what I used as a "smoke test": import subprocess with subprocess.Popen(['cat'], encoding='utf-8', stdin=subprocess.PIPE, stdout=subprocess.PIPE) as p: o, e = p.communicate('foo') print(o, e)
2
3
78,584,847
2024-6-6
https://stackoverflow.com/questions/78584847/convert-count-row-to-one-hot-encoding-efficiently
I have a table with rows in this format where the integers are a count: A B C D E 0 a 2 0 3 x 1 b 1 2 0 y I'd like to convert it into a format where each count is a one hot encoded row: A B C D E 0 a 1 0 0 x 1 a 1 0 0 x 2 a 0 0 1 x 3 a 0 0 1 x 4 a 0 0 1 x 5 b 1 0 0 y 6 b 0 1 0 y 7 b 0 1 0 y I wrote inefficient code which achieves this # Sample DataFrame data = { 'A': ['a', 'b'], 'B': [2, 1], 'C': [0, 2], 'D': [3, 0], 'E': ['x', 'y'] } df = pd.DataFrame(data) new_df = pd.DataFrame(columns=df.columns) for index, row in df.iterrows(): first_val = row.iloc[0] last_val = row.iloc[-1] middle_vals = row.iloc[1:-1].astype(int) for i in range(len(middle_vals)): new_data = [first_val] + [1 if i == j else 0 for j in range(len(middle_vals))] + [last_val] new_rows = pd.DataFrame([new_data] * middle_vals.iloc[i], columns=df.columns) new_df = pd.concat([new_df, new_rows], ignore_index=True) Any tips for vectorizing this operation which is incredibly slow? I realize a concat operation per iteration is a big issue, so I did try a batching solution where I collect chunks of new_rows and then concat. This remains slow.
Here is a full numpy solution, I would expect this to be faster than reshaping: import numpy as np num_cols = ['B', 'C', 'D'] # convert to numpy array a = df[num_cols].to_numpy() # build indices to repeat idx = np.repeat(np.arange(a.shape[0]), a.sum(1)) # array([0, 0, 0, 0, 0, 1, 1, 1]) # build column indices to repeat cols = np.repeat(np.tile(np.arange(a.shape[1]), a.shape[0]), a.flat) # array([0, 0, 2, 2, 2, 0, 1, 1]) # assign 1s b = np.zeros((len(idx), len(num_cols)), dtype=int) b[np.arange(len(idx)), cols] = 1 # repeat and update DataFrame out = df.iloc[idx] out.loc[:, num_cols] = b Output: A B C D E 0 a 1 0 0 x 0 a 1 0 0 x 0 a 0 0 1 x 0 a 0 0 1 x 0 a 0 0 1 x 1 b 1 0 0 y 1 b 0 1 0 y 1 b 0 1 0 y timings # setup N = 10_000 df = (pd.DataFrame(np.clip(np.random.randint(-10, 15, (N, 10)), 0, 15)).assign(A=np.arange(N), E=np.arange(N)) [['A', 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'E']] ) # numpy approach (this one) 45.2 ms ± 6.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) # list approach (@EmiOB) 249 ms ± 36.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # pure pandas melt+repeat+pivot approach 3.68 s ± 146 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) comparison for up to ~2M rows (10 cols of numbers)
3
6
78,583,009
2024-6-5
https://stackoverflow.com/questions/78583009/typeerror-unhashable-type-arrayimpl-when-trying-to-use-equinox-module-with-j
I'm new to Equinox and JAX but wanted to use them to simulate a dynamical system. But when I pass my system model as an Equinox module to jax.lax.scan I get the unhashable type error in the title. I understand that jax expects the function argument to be a pure function but I thought an Equinox Module would emulate that. Here is a test script to reproduce the error import equinox as eqx import jax import jax.numpy as jnp class EqxModel(eqx.Module): A: jax.Array B: jax.Array C: jax.Array D: jax.Array def __call__(self, states, inputs): x = states.reshape(-1, 1) u = inputs.reshape(-1, 1) x_next = self.A @ x + self.B @ u y = self.C @ x + self.D @ u return x_next.reshape(-1), y.reshape(-1) def simulate(model, inputs, x0): xk = x0 outputs = [] for uk in inputs: xk, yk = model(xk, uk) outputs.append(yk) outputs = jnp.stack(outputs) return xk, outputs A = jnp.array([[0.7, 1.0], [0.0, 1.0]]) B = jnp.array([[0.0], [1.0]]) C = jnp.array([[0.3, 0.0]]) D = jnp.array([[0.0]]) model = EqxModel(A, B, C, D) # Test simulation inputs = jnp.array([[0.0], [1.0], [1.0], [1.0]]) x0 = jnp.zeros(2) xk, outputs = simulate(model, inputs, x0) assert jnp.allclose(xk, jnp.array([2.7, 3.0])) assert jnp.allclose(outputs, jnp.array([[0.0], [0.0], [0.0], [0.3]])) # This raises TypeError xk, outputs = jax.lax.scan(model, x0, inputs) What is unhashable type: 'ArrayImpl' referring to? Is it the arrays A, B, C, and D? In this model, these matrices are parameters and therefore should be static for the duration of the simulation. I just found this issue thread that might be related: lax.scan for equinox Modules
Owen Lockwood (lockwo) has provided an explanation and answer in this issue thread, which I will re-iterate below. I believe your issue is happening because jax tries to hash the function you are scanning over, but it can't hash the arrays that are in the module. There are probably a number of things that you could do to solve this, the simplest being to just curry the model, e.g. xk, outputs = jax.lax.scan(lambda carry, y: model(carry, y), x0, inputs) works fine Or, re-written in terms of the variable names I am using: xk, outputs = jax.lax.scan(lambda xk, uk: model(xk, uk), x0, inputs)
3
1
78,582,450
2024-6-5
https://stackoverflow.com/questions/78582450/check-if-a-string-contains-all-words-in-a-phrase-from-a-list-in-python
I have a list of phrases and I need to be able to identify whether each row in a dataset contains all the words from any of the phrases in my list. Take my example problem below. I have a dataset where the column "Search" contains some browser searches. I also have a list called "phrases" that contains the phrases I'm trying to find within the Search column. import pandas as pd import numpy as np text = [('how to screenshot on mac', 0), ('how to take screenshot?', 0), ('how to take screenshot on windows', 0), ('when is christmas', 0), ('how many days until christmas', 0), ('how many weeks until christmas', 0), ('how much is the new google pixel 8', 0), ('which google pixel versions are available', 0), ('how do I do google search on my pixel phone 7a', 0)] labels = ['Search','Random_Column'] df = pd.DataFrame.from_records(text, columns=labels) phrases = ['mac screenshot', 'days until christmas', 'google pixel 7a'] I don't care about the order of the words within "phrases" and there can be other before before, within, and after the phrase, but I need to make sure that only the df rows that contain all the words within any of the phrases are identified. Therefore, the expected output would be like this: Search Random_Column Match 0 how to screenshot on mac 0 True 1 how to take screenshot? 0 False 2 how to take screenshot on windows 0 False 3 when is christmas 0 False 4 how many days until christmas 0 True 5 how many weeks until christmas 0 False 6 how much is the new google pixel 8 0 False 7 which google pixel versions are available 0 False 8 how do I do google search on my pixel phone 7a 0 True I have found a lot of solutions for instances where the "phrases" list is made up of single words (e.g. here, here, and here) but I'm struggling to find a solution where I need to match full phrases. I also tried to implement this solution but could not get it to work for a dataset.
You have to loop over all phrase until you find a match. An efficient option would be to use sets (set.issubset) combined with any: # convert the phrases to set sets = [set(s.split()) for s in phrases] # [{'screenshot', 'mac'}, {'until', 'christmas', 'days'}, # {'google', 'pixel', '7a'}] # for each string, check if one of the sets is a subset # if a match is found, return True immediately df['Match'] = [any(S.issubset(lst) for S in sets) for lst in map(str.split, df['Search'])] Output: Search Random_Column Match 0 how to screenshot on mac 0 True 1 how to take screenshot? 0 False 2 how to take screenshot on windows 0 False 3 when is christmas 0 False 4 how many days until christmas 0 True 5 how many weeks until christmas 0 False 6 how much is the new google pixel 8 0 False 7 which google pixel versions are available 0 False 8 how do I do google search on my pixel phone 7a 0 True
2
2
78,580,737
2024-6-5
https://stackoverflow.com/questions/78580737/understanding-the-details-of-equality-in-python
When trying to construct an example where a == b is not the same as b == a, it seems that I have accidentally constructed an example where a == b is not the same as a.__eq__(b): class A: def __eq__(self, other: object) -> bool: return type(other) is A class B(A): pass if __name__ == '__main__': a = A() b = B() assert not a.__eq__(b) # as expected assert not (a == b) # Why does this fail? Can somebody explain to me why the last assertion fails? I expected it to be the same as the second one.
The relevant quote explaining what happens is located in the documentation: If the operands are of different types, and the right operand's type is a direct or indirect subclass of the left operand's type, the reflected method of the right operand has priority, otherwise the left operand's method has priority. Virtual subclassing is not considered.
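To make the quoted rule concrete, here is a small sketch (the class names mirror the question; the print call is only there to show which operand handles the comparison). Because B is a subclass of A, a == b first tries the reflected b.__eq__(a), which B inherits from A; that call returns type(a) is A, i.e. True, so a == b is True even though a.__eq__(b) is False.

class A:
    def __eq__(self, other):
        # show which operand is bound to `self` for this call
        print(f"A.__eq__ with self of type {type(self).__name__}")
        return type(other) is A

class B(A):
    pass

a, b = A(), B()
print(a == b)       # prints "A.__eq__ with self of type B", then True  (reflected b.__eq__(a))
print(a.__eq__(b))  # prints "A.__eq__ with self of type A", then False (direct call)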
3
5
78,579,128
2024-6-5
https://stackoverflow.com/questions/78579128/how-can-i-randomly-replace-the-values-remaining-at-least-one-value-using-python
I tried to replace the some of major values in the tensor with the unique value while maintaining at least one value. For example, given edge_index, I want to change it as below. edge_index = torch.as_tensor([[0, 0, 1, 2, 3, 4, 6, 7, 7, 8], [1, 2, 2, 4, 4, 5, 0, 1, 3, 7]]) result = torch.as_tensor([[5, 0, 1, 2, 3, 8, 6, 7, 7, 8], [1, 2, 2, 4, 4, 5, 0, 1, 3, 6]]) In detail, some values appeared more than 2 times in edge_index (i.e., 0, 1, 2, 4, 7). To avoid unique values, I need to change some of them with exceptional values (i.e., 5, 6, 8). I tried to do it as below, but my code couldn't ensure the condition; all values need to appear at least 2 times. # Counts values n_id, n_counts = torch.unique(edge_index, return_counts=True) unique_nodes = n_id[n_counts==1] #tensor([5, 6, 8]) major_nodes = n_id[n_counts>2] #tensor([0, 1, 2, 4, 7]) # Find the index where the major_nodes located major_s_idx = (edge_index[..., None] == major_nodes).any(-1)[0].nonzero()[:, 0] #tensor([0, 1, 2, 3, 5, 7, 8]) major_t_idx = (edge_index[..., None] == major_nodes).any(-1)[1].nonzero()[:, 0] #tensor([0, 1, 2, 3, 4, 6, 7, 9]) result = edge_index.clone() result[0][major_s_idx[torch.randperm(len(major_s_idx))[:len(unique_nodes)]]] = unique_nodes result[1][major_t_idx[torch.randperm(len(major_t_idx))[:len(unique_nodes)]]] = unique_nodes result # tensor([[0, 0, 6, 2, 3, 4, 6, 5, 8, 8], # [5, 2, 2, 4, 6, 5, 0, 8, 3, 7]]) # 1 is disappeared and 7 remained only one time Note that the correct result does not need to be the same as the result in the first code block. Just reference.
Your issue arises because you're replacing some of the major nodes with unique nodes randomly, without checking if the major node still appears elsewhere. Create a list of nodes that appear twice, then for each node find all its occurrences. Finally, randomly select one occurrence to keep and replace the rest with unique nodes: import torch edge_index = torch.tensor([[0, 0, 1, 2, 3, 4, 6, 7, 7, 8], [1, 2, 2, 4, 4, 5, 0, 1, 3, 7]]) n_id, n_counts = torch.unique(edge_index, return_counts=True) unique_nodes = n_id[n_counts==1] major_nodes = n_id[n_counts>2] result = edge_index.clone() for major_node in major_nodes: occurrences = torch.nonzero(result == major_node, as_tuple=True) keep_idx = torch.randint(len(occurrences[0]), (1,))[0] for i in range(len(occurrences[0])): if i != keep_idx and len(unique_nodes) > 0: result[occurrences[0][i], occurrences[1][i]] = unique_nodes[0] unique_nodes = unique_nodes[1:] print(result)
2
2
78,574,125
2024-6-4
https://stackoverflow.com/questions/78574125/how-to-making-async-calls-to-amazon-bedrock
We were trying to make calls in parallel to LLMs hosted in Bedrock, from a lambda layer (in python) only to discover that boto3 does not support async. Is there any workaround? I am looking into aiobotocore / aioboto3, but I do not find any example with Bedrock. Any hint appreciated and thank you very much! This is a minimal sample of the code I intended to use, but runs in sequence instead of parallel: nest_asyncio.apply() # async summaries async def _into_comment(segments: list[str]): bedrock = boto3.client( service_name="bedrock-runtime", aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key, aws_session_token=aws_session_token, region_name=aws_region ) async def sum_up(segment: str): body = json.dumps({ "max_tokens": 256, "messages": [{"role": "user", "content": f"Sumarize this: {segment}"}], "anthropic_version": "bedrock-2023-05-31" }) return bedrock.invoke_model(body=body, modelId=model_id) summaries = await asyncio.gather(*[sum_up(segment) for segment in segments]) return summaries summaries = asyncio.run(_into_comment(segments))
If you are using Anthropic, you can use the AsyncAnthropicBedrock API. from anthropic import AsyncAnthropicBedrock model_id = "my_model_id" user_message = "Hello Claude!" client = AsyncAnthropicBedrock() message = await client.messages.create( model=model_id, max_tokens=1024, messages=[ {"role": "user", "content": user_message} ] )
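Since the original goal was running the requests in parallel, the async client composes naturally with asyncio.gather. A minimal, untested sketch assuming the asker's segments list and a placeholder Bedrock model id (both are illustrative, not confirmed values):

import asyncio
from anthropic import AsyncAnthropicBedrock

model_id = "anthropic.claude-3-sonnet-20240229-v1:0"   # placeholder model id
segments = ["first chunk of text...", "second chunk of text..."]  # hypothetical input

client = AsyncAnthropicBedrock()  # picks up AWS credentials/region from the environment

async def summarize(segment: str) -> str:
    message = await client.messages.create(
        model=model_id,
        max_tokens=256,
        messages=[{"role": "user", "content": f"Summarize this: {segment}"}],
    )
    return message.content[0].text

async def main():
    # gather schedules all requests at once, so they run concurrently
    return await asyncio.gather(*(summarize(s) for s in segments))

summaries = asyncio.run(main())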
4
1
78,562,640
2024-6-1
https://stackoverflow.com/questions/78562640/how-to-filter-polars-dataframe-by-first-maximum-value-while-using-over
I am trying to filter a dataframe to find the first occurrence of a maximum value over a category column. In my data there is no guarantee that there is a single unique maximum value, there could be multiple values, but I only need the first occurrence. Yet I can't seem to find a way to limit the max part of the filter; currently I am adding a further filter on another column (generally a time-based one) and taking the minimum value. df = pl.DataFrame( { "cat": [1, 1, 1, 2, 2, 2, 2, 3, 3, 3], "max_col": [12, 24, 36, 15, 50, 50, 45, 20, 40, 60], "other_col": [25, 50, 75, 125, 150, 175, 200, 225, 250, 275], } ) df = df.filter(pl.col("max_col") == pl.col("max_col").max().over("cat")).filter( pl.col("other_col") == pl.col("other_col").min().over("cat") ) shape: (3, 3) ┌─────┬─────────┬───────────┐ │ cat ┆ max_col ┆ other_col │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════════╪═══════════╡ │ 1 ┆ 36 ┆ 75 │ │ 2 ┆ 50 ┆ 150 │ │ 3 ┆ 60 ┆ 275 │ └─────┴─────────┴───────────┘ However, I'd prefer to simplify the above to only require passing in references to the max and category columns. Am I missing something obvious here? EDIT: Added example dataframe and output.
You can add .is_first_distinct() to the filter to keep only the first max. df.filter( pl.all_horizontal( pl.col("max_col") == pl.col("max_col").max(), pl.col("max_col").is_first_distinct() ) .over("cat") ) shape: (3, 3) ┌─────┬─────────┬───────────┐ │ cat ┆ max_col ┆ other_col │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════════╪═══════════╡ │ 1 ┆ 36 ┆ 75 │ │ 2 ┆ 50 ┆ 150 │ │ 3 ┆ 60 ┆ 275 │ └─────┴─────────┴───────────┘
4
2
78,569,761
2024-6-3
https://stackoverflow.com/questions/78569761/type-hint-for-an-object-that-can-be-used-as-a-type-hint-itself
I have following code from typing import TypeVar, Type, overload T = TypeVar('T') @overload def foo(bar: Type[T]) -> T: ... @overload def foo(bar: Type[T] | None) -> T | None: ... def foo(bar: Type[T] | None) -> T | None: # implementation goes here ... class Bar: ... bar = foo(Bar) bar2 = foo(Bar | None) # No overload variant of "foo" matches argument type "UnionType" How to properly type hint case for bar2? I tried some others: Type[T | None], mypy says Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader removing 2nd overload (resulting in only Type[T] allowed), mypy says No overload variant of "foo" matches argument type "UnionType" (meaning 2nd overload is incorrect for that case anyways)
Only concrete classes are assignable to type[T]: [...] the actual argument passed in at runtime must [...] be a concrete class object [...] β€” Special types in annotations Β§ type[] | Python typing spec (Admittedly, the spec isn't completely clear about this. However, the following paragraph holds true.) It is thus understood that an object of this type can be invoked to retrieve an instance of type T. type[T] | None means such a thing, or None; thus, bar might or might not be invocable. def foo(bar: type[T] | None) -> None: if bar is not None: reveal_type(bar) # type[T] instance = bar() # fine However, Bar | None returns an instance of UnionType at runtime, and objects of this kind cannot be invoked. foo(Bar | None) # (Bar | None) is not None # bar() => error What you want is the proposed TypeForm of PEP 747 (a draft PEP): def foo[T](bar: TypeForm[T]) -> T: ... reveal_type(foo(Bar | None)) # Bar | None
1
4
78,576,727
2024-6-4
https://stackoverflow.com/questions/78576727/how-can-i-make-the-x-axis-of-my-2d-histogram-use-dates-while-avoiding-overflow-e
I am working with a set of monthly averaged time-series data that spans 20+ years and have put the data into a pandas dataframe. The index of the dataframe is composed of the datetime objects that span the time range of the dataset. I have successfully created a 2D histogram subplot of both time and another parameter, proton speed. The x-axis of the histogram was created by what seems like a default action, but I'm not sure how to interpret it. I have been trying to format the x-axis using matplotlib commands, primarily the date locator/formatter functions, but they keep throwing a massive overflow error that ends with: "OverflowError: int too big to convert." I have not been successful in finding a good solution with other questions or through the documentation. These are the imports I have used so far: import numpy as np import pandas as pd import matplotlib.pyplot as plt from datetime import datetime, date, time import matplotlib.dates as mdates The following is the pandas dataframe that I have been using. I apologize if the formatting is weird. I wasn't sure how to share the table, so I copied the dataframe directly from my notebook. The columns should be tab delimited here. Datetime proton_density proton_temp He4toprotons proton_speed x_dot_RTN Proton Mass Flux ---------------------------------------------------------------------------------------- 1998-01-23 11.625 58930.0 0.0224 380.90 379.91 7.406307e-19 1998-02-19 9.569 64302.0 0.0294 380.99 380.23 6.097867e-19 1998-03-18 8.767 66770.0 0.0348 384.00 383.19 5.630929e-19 1998-04-14 7.410 121090.0 0.0352 448.44 446.58 5.558023e-19 1998-05-11 7.881 102230.0 0.0271 421.21 419.87 5.552362e-19 ... ... ... ... ... ... ... 2021-09-19 8.244 55183.0 0.0356 384.52 383.22 5.302183e-19 2021-10-16 9.664 70601.0 0.0115 418.50 416.21 6.764725e-19 2021-11-12 6.137 93617.0 0.0256 450.47 449.30 4.624021e-19 2021-12-09 4.889 96768.0 0.0177 426.52 424.99 3.487845e-19 2022-01-05 7.280 85944.0 0.0310 434.17 433.01 5.286752e-19 Here is the code I have used to make my histogram: ax_example = plt.subplot2grid((3, 6), (2, 1), colspan = 2) H,xedges,yedges = np.histogram2d(SWEPAM_dataframe.index, SWEPAM_dataframe.proton_speed, bins=[50,50]) ax_example.pcolor(xedges, yedges, H.T) ax_example.set_xlabel("Year") ax_example.set_ylabel("Proton Speed (km/s)") The result was this: As you can see, the x-axis is not in datetime by default, it seems. I'm not actually sure how to interpret the default x-axis values, but that's not as important here. I have found that I should be using some combination of ax2.xaxis.set_major_locator(loc) and ax2.xaxis.set_major_formatter(fmt). However, anytime I try to use these commands I get the aforementioned overflow error and am prevented from turning the x-axis of my histogram into the desired dates.
I could reproduce your issue. Why xedges returns such high numbers (in the 10^17) has to see with how matplotlib reads datetime objects, in what unit of time since epoch. Indeed prior reformating such as in Format of datetime in pyplot axis seems a valid option but there may be better ones, e.g. using date2num (Convert datetime objects to Matplotlib dates), as discussed in Pandas vs matplotlib datetime. I have been trying to make it function reliably to provide a full answer. Also this overflow error was already reported in Set xaxis data to datetime in matplotlib without receiving a convincing answer. Alternatively, seaborn is better than matplotlib at handling the datetime dtype in pandas dataframes without requiring further manipulations on the axes: import seaborn as sns # with input: (without setting `"Datetime"` as index) df = pd.DataFrame(columns = ['Datetime','proton_density','proton_temp','He4toprotons','proton_speed','x_dot_RTN','Proton_Mass_Flux'], data = [['1998-01-23',11.625,58930.0,0.0224,380.90,379.91,7.406307e-19], ['1998-02-19', 9.569,64302.0,0.0294,380.99,380.23,6.097867e-19], ['1998-03-18', 8.767,66770.0,0.0348,384.00,383.19,5.630929e-19], ['1998-04-14',7.410,121090.0,0.0352,448.44,446.58,5.558023e-19], ['1998-05-11',7.881,102230.0,0.0271,421.21,419.87,5.552362e-19], ['2021-09-19', 8.244,55183.0,0.0356,384.52,383.22,5.302183e-19], ['2021-10-16', 9.664,70601.0,0.0115,418.50,416.21,6.764725e-19], ['2021-11-12', 6.137,93617.0,0.0256,450.47,449.30,4.624021e-19], ['2021-12-09', 4.889,96768.0,0.0177,426.52,424.99,3.487845e-19], ['2022-01-05', 7.280,85944.0,0.0310,434.17,433.01,5.286752e-19]]) df['Datetime'] = pd.to_datetime(df['Datetime']) This will then produce the expected 2D histogramm and axes labels: sns.histplot(df, x="Datetime", y="proton_speed")
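For completeness, here is a rough, untested sketch of the date2num route mentioned above, staying in matplotlib and assuming the asker's SWEPAM_dataframe with its DatetimeIndex (the locator interval is only illustrative). Converting the index to matplotlib's float date numbers before histogram2d keeps the bin edges in a range the date locators can handle, avoiding the overflow:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# convert the DatetimeIndex to matplotlib date numbers (floats, days since the epoch)
dnum = mdates.date2num(SWEPAM_dataframe.index)

H, xedges, yedges = np.histogram2d(dnum, SWEPAM_dataframe.proton_speed, bins=[50, 50])

fig, ax = plt.subplots()
ax.pcolormesh(xedges, yedges, H.T)

# interpret the x values as dates, then apply the usual locator/formatter
ax.xaxis_date()
ax.xaxis.set_major_locator(mdates.YearLocator(5))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y"))
ax.set_xlabel("Year")
ax.set_ylabel("Proton Speed (km/s)")
plt.show()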
1
2
78,574,357
2024-6-4
https://stackoverflow.com/questions/78574357/given-argument-of-listint-liststr-cant-i-be-sure-that-the-list-is-listi
I have a python script and trying to add type hints to the code, Following is the sample code (without type hints the code works) using mypy. values_int: list[int] = [1, 2, 3, 4] values_str: list[str] = ["1", "2", "3", "4"] def bar(*, x: list[int]) -> bool: # processing return True def baz(*, y: list[str]) -> bool: # processing return True def foo(*, values: list[int] | list[str]) -> bool: status: bool = False if isinstance(values[0], int): x: list[int] = values status = bar(x=x) elif isinstance(values[0], str): # case-1 # status = baz(y=values) # case-2 y: list[str] = values status = baz(y=y) return status foo(values=values_str) Errors for: # case-1 # error: Argument "y" to "baz" has incompatible type "list[int] | list[str]"; expected "list[str]" # case-2 # error: Incompatible types in assignment (expression has type "list[int] | list[str]", variable has type "list[str]")
isinstance(a[b], ...) is not (yet) supported as a type narrowing construct. For what it's worth, Pyright also doesn't support it. Perhaps you want a custom TypeIs: (playgrounds: Mypy, Pyright) from typing_extensions import TypeIs def is_list_of_ints(v: list[Any]) -> TypeIs[list[int]]: return isinstance(v[0], int) def is_list_of_strs(v: list[Any]) -> TypeIs[list[str]]: return isinstance(v[0], str) def foo(*, values: list[int] | list[str]) -> None: reveal_type(values) # list[int] | list[str] if is_list_of_ints(values): reveal_type(values) # list[int] bar(x=values) # fine elif is_list_of_strs(values): reveal_type(values) # list[str] baz(y=values) # fine Note that Mypy 1.10.0 has a bug with TypeIs: it incorrectly determines the second conditional branch to be unreachable: def foo(*, values: list[int] | list[str]) -> None: reveal_type(values) # list[int] | list[str] if is_list_of_ints(values): reveal_type(values) # list[int] bar(x=values) # fine elif is_list_of_strs(values): reveal_type(values) # error: Statement is unreachable baz(y=values) A workaround is to use TypeGuard instead: (playground) from typing import TypeGuard def is_list_of_ints(v: list[Any]) -> TypeGuard[list[int]]: return isinstance(v[0], int) def is_list_of_strs(v: list[Any]) -> TypeGuard[list[str]]: return isinstance(v[0], str) The difference between TypeIs and TypeGuard is explained in PEP 742.
2
2
78,546,844
2024-5-29
https://stackoverflow.com/questions/78546844/why-is-it-vscode-python-could-not-find-debugpy-path-when-running-without-debuggi
I'm relatively new to platforms like vscode, and I seem to run into problems whenever I try to run a python program without debugging (Ctrl + F5). A window pops up saying "Could not find debugpy path" and gives me the option of either opening launch.json or cancelling. Running from the "run file" button on the right works, so I'm just curious what is causing this problem. I opened launch.json and it's mostly empty: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [] }
Turns out I needed to download a Python debugger in the VSCode extensions. I used Python Debugger from Microsoft.
3
12
78,572,914
2024-6-4
https://stackoverflow.com/questions/78572914/on-this-conditional-expression-what-the-syntax-error-about
I get: return r.group() if r := re.match(rx,l) else None ^ SyntaxError: invalid syntax whereas return r.group() if (r := re.match(rx,l)) else None is accepted. What is invalid about the first form's syntax? And what other interpretation of it is there, besides the second one, that would make the unparenthesized form ambiguous?
Limiting the valid places in which the walrus operator ("assignment expression" for those who have killed their inner child) can be used was a mostly elective thing. They didn't want it replacing = for assignment, or getting used constantly to condense multiple lines of code down to one for the fun of it, so they put limitations on its syntax that make it awkward outside of the cases it was really designed for. The biggest limitation was that they require parentheses in cases they didn't strictly need to (e.g. the most obvious case is that the language could trivially have implemented it such that x := 1 by itself worked, but chose not to do so). The language's grammar spec describes the precise rules (emphasis added): Assignment expressions must be surrounded by parentheses when used as expression statements and when used as sub-expressions in slicing, conditional, lambda, keyword-argument, and comprehension-if expressions and in assert, with, and assignment statements. In all other places where they can be used, parentheses are not required, including in if and while statements. Basically, unparenthesized walruses are definitionally incompatible with conditional expressions, because the Python language designers said so.
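As a quick illustration of the grammar rule quoted above (reusing the question's re.match pattern): the same assignment expression is fine bare in an if statement, but needs parentheses inside a conditional expression.

import re

line = "42abc"

# no parentheses needed in an if statement
if m := re.match(r"\d+", line):
    print(m.group())  # 42

# parentheses required when used inside a conditional expression
value = m.group() if (m := re.match(r"\d+", line)) else None   # OK
# value = m.group() if m := re.match(r"\d+", line) else None   # SyntaxError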
2
5
78,577,960
2024-6-4
https://stackoverflow.com/questions/78577960/handle-logarithmic-and-exponential-objective-function-in-gurobi
I am working with a very complex optimization problem which basically uses medical parameters to calculate the risk of a certain person developing a cardiovascular disease. I want to, of course, minimize such probability, which is described by the following objective function: where: and beta will always be a float. The x vector is also a float and will have an initial value, which represents the person's current medical conditions. Now my question is, how can I load my objective function correctly into my Gurobi Model, given the presence of logarithms and exponentials? Below is my current code: def calculate_beta(x2, x3, i): beta = np.zeros((2, 2, 15)) beta[0, 0, :] = [-29.799, 4.884, 13.540, -3.114, -13.578, 3.149, 2.019, 0.0, 1.957, 0.0, 7.574, -1.665, 0.661, -29.18, 0.9665] beta[0, 1, :] = [17.114, 0.0, 0.940, 0.0, -18.920, 4.475, 29.291, -6.432, 27.820, -6.087, 0.691, 0.0, 0.874, 86.61, 0.9533] beta[1, 0, :] = [12.344, 0.0, 11.853, -2.664, -7.990, 1.769, 1.797, 0.0, 1.764, 0.0, 7.837, -1.795, 0.658, 61.18, 0.9144] beta[1, 1, :] = [2.469, 0.0, 0.302, 0.0, -0.307, 0.0, 1.916, 0.0, 1.809, 0.0, 0.549, 0.0, 0.645, 19.54, 0.8954] return beta[x2, x3, (i-1)] def calculate_xi(x1, x2, x3, x5, x6, x8, x9, x10): xi = ((np.log(x1) * calculate_beta(x2, x3, 1)) + ((np.log(x1)**2) * calculate_beta(x2, x3, 2)) + (np.log(x9) * calculate_beta(x2, x3, 3)) + (np.log(x1) * np.log(x9) * calculate_beta(x2, x3, 4)) + (np.log(x8) * calculate_beta(x2, x3, 5)) + (np.log(x1) * np.log(x8) * calculate_beta(x2, x3, 6)) + (np.log(x10) * calculate_beta(x2, x3, 7)) + (np.log(x1) * np.log(x10) * calculate_beta(x2, x3, 8)) + (0.0 * calculate_beta(x2, x3, 9)) + (0.0 * calculate_beta(x2, x3, 10)) + (x6 * calculate_beta(x2, x3, 11)) + (np.log(x1) * x6 * calculate_beta(x2, x3, 12)) + (x5 * calculate_beta(x2, x3, 13))) return xi def ascvd_risk(x1, x2, x3, x5, x6, x8, x9, x10): probability = 1 - calculate_beta(x2, x3, 15) ** (np.exp(calculate_xi(x1, x2, x3, x5, x6, x8, x9, x10) - calculate_beta(x2, x3, 14))) return probability model = Model("ascvd_risk") x1 = model.addVar(vtype=GRB.INTEGER, name="x_1_age") x2 = model.addVar(vtype=GRB.BINARY, name="x_2_gender") ... x23 = model.addVar(vtype=GRB.BINARY, name="x_23_daily_alcohol") x24 = model.addVar(vtype=GRB.CONTINUOUS, name="x_24_bmd") model.setObjective(ascvd_risk(x1, x2, x3, x5, x6, x8, x9, x10), GRB.MINIMIZE) Now, when I call the last line of code, I get the following error: IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices which traces back to ----> 9 return beta[x2, x3, (i-1)] From what I've seen in the Gurobi forums, I have to approximate it to a piece-wise linear function, but I don't know how exactly I can do it. Thanks in advance!
here are some snippets that I would recommend to use in the model: Split variable X3 into multiple binary variables. This is need to later get variable specific constants from the beta matrix: from gurobipy import Model, GRB, Var, quicksum # race variables... # Map: gurobi_var -> var_index(int) x3 = {} for i in range(20): var = model.addVar(vtype=GRB.BINARY, name=f"x_3_race_{i}") model.update() x3[var] = i # With SOS constraint you will only have single non-zero X3 model.addSOS(GRB.SOS_TYPE1, [x for x in x3.keys()]) model.update() Non-linearity must be implemented using Gurobi non-linear constraints: from gurobipy import Model, GRB, Var, quicksum def var_log(x_var: Var): y = model.addVar(vtype=GRB.CONTINUOUS, name=f"y_nonlinear") gc = model.addGenConstrLog(x_var, y) model.update() # inform Gurobi that constraint shouldn't be treated as piecewise-linear approximations gc.FuncNonlinear = 1 return y tt = var_log(x2) This is an example how to boradcast beta constants into gurobi expressions: # --> generate random data # you can skip this part beta = np.zeros((2, 2, 15)) beta[0, 0, :] = [-29.799, 4.884, 13.540, -3.114, -13.578, 3.149, 2.019, 0.0, 1.957, 0.0, 7.574, -1.665, 0.661, -29.18, 0.9665] beta[0, 1, :] = [17.114, 0.0, 0.940, 0.0, -18.920, 4.475, 29.291, -6.432, 27.820, -6.087, 0.691, 0.0, 0.874, 86.61, 0.9533] beta[1, 0, :] = [12.344, 0.0, 11.853, -2.664, -7.990, 1.769, 1.797, 0.0, 1.764, 0.0, 7.837, -1.795, 0.658, 61.18, 0.9144] beta[1, 1, :] = [2.469, 0.0, 0.302, 0.0, -0.307, 0.0, 1.916, 0.0, 1.809, 0.0, 0.549, 0.0, 0.645, 19.54, 0.8954] beta = np.repeat(beta, 20, axis=1) # <-- end of random data generation def calculate_beta(x_gender: Var, x_3: dict[Var, int], j) -> Var: global beta beta_male = quicksum(x_var * beta[0,index, j] for x_var, index in x_3.items()) beta_female = quicksum(x_var * beta[1,index, j] for x_var, index in x_3.items()) beta_full = x_gender * beta_male + (1- x_gender) * beta_female # return gurobi variable class instead of Gurobi expression y = model.addVar(vtype=GRB.CONTINUOUS) gc = model.addConstr(y == beta_full) model.update() return y tt = calculate_beta(x2, x3, 5) Implement ASVCD using Gurobi non-linear constraints. 
Link redirects to documentation on such constraints: https://www.gurobi.com/documentation/current/refman/general_constraints.html#subsubsection:GenConstrFunction # ---> Dummy function instead of Xi # it is needed to check that ASVCD constraints will compile # Replace with your implementation calculate_xi = lambda x1, x2, x3, x5, x6, x8, x9, x10: x2 # <--- end of dummy function def ascvd_risk(x1, x2, x3, x5, x6, x8, x9, x10): """ prob = 1 - exp(np.ln(beta) * (exp(xi) - beta)) """ def get_var_exp(x_var: Var) -> Var: """ Creates new var that is equal to `exp(x_var)` (same as `2.7 ** x_var`) """ y = model.addVar(vtype=GRB.CONTINUOUS, name=f"xi_var") gc = model.addGenConstrExp(x_var, y) model.update() gc.FuncNonlinear = 1 return y def get_var_log(x_var: Var) -> Var: """ Creates new var that is equal to `log(x_var)` """ y = model.addVar(vtype=GRB.CONTINUOUS) gc = model.addGenConstrLog(x_var, y) model.update() gc.FuncNonlinear = 1 return y var_beta_log = get_var_log(calculate_beta(x2, x3, 14)) var_xi = calculate_xi(x1, x2, x3, x5, x6, x8, x9, x10) var_power = model.addVar(vtype=GRB.CONTINUOUS) model.addConstr(var_power == var_beta_log * (calculate_beta(var_xi) - get_var_beta(x2, x3, 13))) probability = 1 - get_var_exp(var_power) return probability prob = ascvd_risk(x1, x2, x3, x5, x6, x8, x9, x10) Don't forget to change model type to nonlinear: model.params.FuncNonlinear = 1 model.update() Off topic and other thoughts. As an alternative to Gurobi (if you won't be able to pass all your constraints to the model) I would suggest to have a closer look on differential evolution algos like scipy.optimize.differential_evolution. They work very well with non-linear systems that have less than 40-50 variables.
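To illustrate the scipy alternative mentioned in the last paragraph: a rough, untested sketch that reuses the asker's original numpy ascvd_risk (the version written before the Gurobi rewrite), holds the binary inputs fixed, and searches only over the continuous ones. The variable ordering and bounds here are made-up placeholders, not clinical ranges:

import numpy as np
from scipy.optimize import differential_evolution

def objective(v):
    x1, x8, x9, x10 = v                              # the four continuous inputs
    return ascvd_risk(x1, 1, 0, 0, 0, x8, x9, x10)   # x2, x3, x5, x6 held fixed for the sketch

bounds = [(40, 79), (90, 200), (130, 320), (20, 100)]  # placeholder bounds
result = differential_evolution(objective, bounds, seed=0)
print(result.x, result.fun)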
2
1
78,574,898
2024-6-4
https://stackoverflow.com/questions/78574898/how-to-find-base-line-of-curved-text
Attached is a picture with curved lines of text; how can you find the baseline of the text? The goal is to get lines like the ones I drew by hand in the following picture: I tried the following code, but letters like g p q y and similar break the line. import cv2 as cv import numpy as np src = cv.imread("boston_cooking_a.jpg", cv.IMREAD_GRAYSCALE) src = cv.adaptiveThreshold(src=src, maxValue=255, blockSize=55, C=11, thresholdType=cv.THRESH_BINARY, adaptiveMethod=cv.ADAPTIVE_THRESH_MEAN_C) src = cv.dilate(src, cv.getStructuringElement(ksize=(3, 3), shape=cv.MORPH_RECT)) src = cv.erode(src, cv.getStructuringElement(ksize=(50, 3), shape=cv.MORPH_RECT)) src = cv.Sobel(src, ddepth=0, dx=0, dy=1, ksize=5) cv.imwrite("test.jpg", src) cv.imshow("src", src) cv.waitKey(0) EDIT: Attached is another image to test your answer on, so we can make sure the answer doesn't suffer from "overfitting" to a single image.
I found an approach which is a possibility to find your lines in β€žpureβ€œ opencv. The suggested solution is not perfect, but demonstrates a first direction. Maybe you should use pytesseract to follow up your overall goal ? In general the suggested solution below is quite sensitive to the parameters of the first filter A. The basics pseudo code steps are: A) apply filters to merge letters to words B) select contours of words (filter by: ratio heights vs widths , area size) C) get random points from word-contours using gaussian distribution and the center point centroid of contour D) use linear regression to find middle line of word-contours E) merge all word-contours which are neighbors to line-contours (outer middle line points are close together) F) do polynomial regression 2nd order to estimate middle line of line-contours G) write the found merged lines from our estimaded group line The main output for example 2 shows robust output but still has some artifacts from step 1 merge all letter to words. import cv2 import math import uuid import numpy as np from scipy import stats def resizeImageByPercentage(img,scalePercent = 60): width = int(img.shape[1] * scalePercent / 100) height = int(img.shape[0] * scalePercent / 100) dim = (width, height) # resize image return cv2.resize(img, dim, interpolation = cv2.INTER_AREA) def calcMedianContourWithAndHeigh(contourList): hs = list() ws = list() for cnt in contourList: (x, y, w, h) = cv2.boundingRect(cnt) ws.append(w) hs.append(h) return np.median(ws),np.median(hs) def calcCentroid(contour): houghMoments = cv2.moments(contour) # calculate x,y coordinate of centroid if houghMoments["m00"] != 0: #case no contour could be calculated cX = int(houghMoments["m10"] / houghMoments["m00"]) cY = int(houghMoments["m01"] / houghMoments["m00"]) else: # set values as what you need in the situation cX, cY = -1, -1 return cX,cY def applyDilateImgFilter(img,kernelSize= 3,iterations=1): img_bin = 255 - img #invert kernel = np.ones((kernelSize,kernelSize),np.uint8) img_dilated = cv2.dilate(img_bin, kernel, iterations = iterations) return (255- img_dilated) #invert back def randomColor(): return tuple(np.random.randint(0, 255, 3).tolist()) def drawGaussianValuesInsideRange(start, end, center, stdDev, amountValues): values = [] if center < 0: return values if start > end: return values while len(values) < amountValues: valueListPotencial = np.random.normal(center, stdDev, amountValues) valueListFiltered = [value for value in valueListPotencial if start <= value <= end] values.extend(valueListFiltered) return values[:amountValues] def drawRandomPointsInPolygon(amountPoints, cntFactObj): pointList = list() if not isinstance(cntFactObj, ContourFacts): return pointList #we calc basic parameter from random point selection horizontalStart = cntFactObj.x horizontalEnd = cntFactObj.x + cntFactObj.w verticalStart = cntFactObj.y verticalEnd = cntFactObj.y + cntFactObj.h #calc std deviation connected to length and ratio horitonalStdDeviation = 1 / cntFactObj.ratioHeightoWidth * (horizontalEnd-horizontalStart) verticalStdDeviation = 1 / cntFactObj.ratioHeightoWidth * (verticalEnd-verticalStart) while len(pointList)<amountPoints: if cntFactObj.centoird[0] < 0 or cntFactObj.centoird[1] < 0: return pointList drawXValues = drawGaussianValuesInsideRange(horizontalStart, horizontalEnd, cntFactObj.centoird[0], horitonalStdDeviation, amountPoints) drawYValues = drawGaussianValuesInsideRange(verticalStart, verticalEnd, cntFactObj.centoird[1], verticalStdDeviation, amountPoints) #we 
create the points and check if they are inside the polygon for i in range(0,len(drawXValues)): #create points point = (drawXValues[i],drawYValues[i]) # check if the point is inside the polygon if cv2.pointPolygonTest(cntFactObj.contour, point, False) > 0: pointList.append(point) return pointList[:amountPoints] def drawCountourOn(img,contours,color=None): imgContour = img.copy() for i in range(len(contours)): if color is None: color = randomColor() cv2.drawContours(imgContour, contours, i, color, 2) return imgContour DEBUGMODE = True fileIn = "bZzzEeCU.jpg"#"269aSnEM.jpg" img = cv2.imread(fileIn) ## A) apply filters to merge letters to words # prepare img load imgGrey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #gaussian filter imgGaussianBlur = cv2.GaussianBlur(imgGrey,(3,3),1) #make binary img, black and white via filter _, imgBinThres = cv2.threshold(imgGaussianBlur, 140, 230, cv2.THRESH_BINARY) if DEBUGMODE: cv2.imwrite("img01bw.jpg",resizeImageByPercentage(imgBinThres,30)) ## 3 steps merged by helper class ContourFacts ## B) select contours of words (filter by: ratio heights vs widths , area size) ## C) get random points from wordcontours with gaussian distribution and center point centroid of contour ## D) use linear regression to find middle line of wordcontours #apply dilate filter to merge letter to words imgDilated = applyDilateImgFilter(imgBinThres,5,3) if DEBUGMODE: cv2.imwrite("img02dilated.jpg",resizeImageByPercentage(imgDilated,30)) # detect contours contourList, _ = cv2.findContours(imgDilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) if DEBUGMODE: imgContour = drawCountourOn(img,contourList) cv2.imwrite("img03contourAll.jpg",resizeImageByPercentage(imgContour,30)) #do a selection of contours by rule #A) ratio h vs w #B) area size mediaWordWidth, medianWordHigh = calcMedianContourWithAndHeigh(contourList) print("median word width: ", mediaWordWidth) print("median word high: ", medianWordHigh) contourSelectedByRatio=list() #we calc for every contour ratio h vs w ratioThresholdHeightToWidth = 1.1 #thresold ratio should be a least be 1 to 1 # e.g word to --> 10 pixel / 13 pixel #helper class for contour atrributess class ContourFacts: def __init__(self,contour): if contour is None: return self.uid = uuid.uuid4() (self.x, self.y, self.w, self.h) = cv2.boundingRect(contour) self.minRect = cv2.minAreaRect(contour) self.angle = self.minRect[-1] _, (rectWidth, rectHeight), _ = self.minRect self.minRectArea = rectWidth * rectHeight self.ratioHeightoWidth = self.h / self.w self.contour = contour self.centoird = calcCentroid(contour) self.randomPoinsInCnt = self.DrawRandomPoints() if len(self.randomPoinsInCnt) > 0: (self.bottomSlope, self.bottomIntercept) = self.EstimateCenterLineViaLinearReg() self.bottomMinX = min([x for x,y in self.randomPoinsInCnt]) self.bottomMaxX = max([x for x,y in self.randomPoinsInCnt]) def EstimateCenterLineViaLinearReg(self): if self.contour is None: return (0,0) slope = 0 intercept = 0 #model = slope (x) + intercept xValues = [x for x,y in self.randomPoinsInCnt] yValues = [y for x,y in self.randomPoinsInCnt] if len(xValues) < 2: return (0,0) elif len(xValues) ==2: #we calc a line with 2 points # y = m*x + b deltaX = xValues[1]-xValues[0] if deltaX == 0: return (0,0) slope = (yValues[1]-yValues[0])/(deltaX) intercept = yValues[0] - (slope*xValues[0]) else: #normal linear regression above 2 points slope, intercept, r, p, std_err = stats.linregress(xValues, yValues) #TODO check std_err return slope, intercept def DrawRandomPoints(self,pointFactor=2): pointList = 
list() #calc area to amount point relation -> bigger area more points amountPointsNeeded = int(self.minRectArea/pointFactor) pointList = drawRandomPointsInPolygon(amountPointsNeeded,self) return pointList def GetCenterLineLeftCorner(self): if self.contour is None or len(self.randomPoinsInCnt) == 0: return (0,0) # calc via y = m*x + b with min return (int(self.bottomMinX), int(self.bottomSlope*self.bottomMinX + self.bottomIntercept)) def GetCenterLineRightCorner(self): if self.contour is None or len(self.randomPoinsInCnt) == 0: return (0,0) # calc via via y = m*x + b with max return (int(self.bottomMaxX), int(self.bottomSlope*self.bottomMaxX + self.bottomIntercept)) def __eq__(self, other): if isinstance(other, ContourFacts): return self.uid == other.uid return False def __hash__(self): return hash(self.uid) #calc mean area size from area size vectorOfAreaSize = np.array([cv2.contourArea(cnt) for cnt in contourList]) meanAreaSize = np.mean(vectorOfAreaSize) print("mean area size: ", meanAreaSize) stdDevAreaSize = np.std(vectorOfAreaSize) print("std dev area size: ", stdDevAreaSize) thresoldDiffAreaSize = stdDevAreaSize/4 #we iterate all contours and select by ratio and size for cnt in contourList: #construct helper class instance contourFactObj = ContourFacts(cnt) #calc abs diff to mean area size diffArea = abs(cv2.contourArea(cnt) - meanAreaSize) if contourFactObj.ratioHeightoWidth < ratioThresholdHeightToWidth and diffArea < (thresoldDiffAreaSize): contourSelectedByRatio.append(contourFactObj) #debug print if DEBUGMODE: #we print words imgContourSelection = img.copy() for cnt in contourSelectedByRatio: contourColor = randomColor() imgContourSelection = drawCountourOn(imgContourSelection,[cnt.contour],contourColor) #we print centroid cv2.circle(imgContourSelection, cnt.centoird, 5, (0, 0, 255), -1) p1 = cnt.GetCenterLineLeftCorner() p2 = cnt.GetCenterLineRightCorner() if p1 != (0,0) or p2 != (0,0): cv2.circle(imgContourSelection, p1, 5, (0, 0, 255), -1) cv2.circle(imgContourSelection, p2, 5, (0, 0, 255), -1) cv2.line(imgContourSelection, p1, p2, (0, 255, 0), 2) cv2.imwrite("img04contourSelection.jpg",resizeImageByPercentage(imgContourSelection,30)) ## E) merge all wordcontours which are neighbours to linecontours (outer middle line points are close together) #define distance function, differences in height is negativ weighted def euclidianDistanceWithNegativHeightWeight(cnt1,cnt2,negativeHeightWeight=2.0): if cnt1 is None or cnt2 is None: return 1000000 if not isinstance(cnt1, ContourFacts) or not isinstance(cnt2, ContourFacts): return 1000000 p1 = cnt1.GetCenterLineRightCorner() p2 = cnt2.GetCenterLineLeftCorner() return math.sqrt((p2[0] - p1[0])**2 + (negativeHeightWeight*(p2[1] - p1[1]))**2) # helper class to group contours class ContourGroup: def __init__(self): self.uuid = uuid.uuid4() self.contourList = list() def GetLastElement(self): if len(self.contourList) == 0: return None return self.contourList[-1] def Add(self,cnt): self.contourList.append(cnt) def __eq__(self, other): if isinstance(other, ContourGroup): return self.uuid == other.uuid return False groupMap = dict() lineGroupList = list() ## we grouping the contours to lines maxDistanceThresholNextWord= medianWordHigh *0.9 #TODO get better estimate #recursive function to get nearest neighbors def getNearestNeighbors(cnt1,depthCounter,contourSelectedByRatio,maxDistanceThresholNextWord): maxDepth = 10 #var for max recursion depth nearestCnt = None nearestDist = maxDistanceThresholNextWord for j in 
range(0,len(contourSelectedByRatio)): cnt2 = contourSelectedByRatio[j] if cnt1 == cnt2:#skip same continue dist = euclidianDistanceWithNegativHeightWeight(cnt1,cnt2) if dist < nearestDist: nearestDist = dist nearestCnt = cnt2 if nearestCnt is not None:#call recursive nearaestListWeHave = [nearestCnt] #new list depthCounter += 1 if depthCounter < maxDepth:# all to call nearListWeGet =getNearestNeighbors(nearestCnt,depthCounter,contourSelectedByRatio,maxDistanceThresholNextWord) if nearListWeGet is None: return nearaestListWeHave else: nearListWeGet.extend(nearaestListWeHave) return nearListWeGet else:#limit reached of recursion skip return nearaestListWeHave else: return None ## E) merge all wordcontours which are neighbours to linecontours (outer middle line points are close together) #we group all contours for i in range(0,len(contourSelectedByRatio)): cnt1 = contourSelectedByRatio[i] if cnt1 in groupMap: continue lineGroup = ContourGroup() lineGroup.Add(cnt1) groupMap[cnt1] = lineGroup depthCounter = 0 nearaestList = getNearestNeighbors(cnt1,depthCounter, contourSelectedByRatio,maxDistanceThresholNextWord) if nearaestList is None: lineGroupList.append(lineGroup) #no neighbor found continue for cnt in nearaestList: groupMap[cnt] = lineGroup lineGroup.Add(cnt) lineGroupList.append(lineGroup) if DEBUGMODE: imgContourGroup = img.copy() for group in lineGroupList: #print(f"group({group.uuid} size: {len(group.contourList)}") #we print all corner points for cnt in group.contourList: leftCorner = cnt.GetCenterLineLeftCorner() rigthCorner = cnt.GetCenterLineRightCorner() cv2.circle(imgContourGroup, leftCorner, 5, (0, 0, 255), -1) cv2.circle(imgContourGroup, rigthCorner, 5, (140, 0, 0), -1) #we print estimated underlines for cnt in group.contourList: leftCorner = cnt.GetCenterLineLeftCorner() rigthCorner = cnt.GetCenterLineRightCorner() cv2.line(imgContourGroup, leftCorner, rigthCorner, (0, 255, 0), 2) # we print all contours groupColor = randomColor() cntList = [cnt.contour for cnt in group.contourList] imgContourGroup = drawCountourOn(imgContourGroup,cntList,groupColor) cv2.imwrite("img05contourGroup.jpg",resizeImageByPercentage(imgContourGroup,30)) ## F) do polynomial regression 2nd order to estimate middle line of linecontours # calc line from stable group points minAmountRegressionElements = 12 movingWindowSize = 3 letterCenterOffset = medianWordHigh * 0.5 lineListCollection = list() for group in lineGroupList: stablePoints = list() for cnt in group.contourList: stablePoints.extend(cnt.randomPoinsInCnt) if len(stablePoints) >= minAmountRegressionElements : xValues = [x for x,y in stablePoints] yValues = [y for x,y in stablePoints] # perform polynomial regression of degree 2 coefffientValues = np.polyfit(np.array(xValues), np.array(yValues), 2) # create a polynomial function with the coefficients polynomial = np.poly1d(coefffientValues) #we filter to build something like a line xValuesNewLineFilter = list() xMin =int( min(xValues)) xMax = int(max(xValues)) for xNew in range(xMin,xMax,movingWindowSize): xValuesNewLineFilter.append(xNew) #we predict new points with all old x values yValuesNew = polynomial(xValuesNewLineFilter) yValuesNewHighCorrect =np.array(yValuesNew) + letterCenterOffset lineList = list() #we create a list of points for i in range(0,len(xValuesNewLineFilter)): pointInt = (int(xValuesNewLineFilter[i]),int(yValuesNewHighCorrect[i])) lineList.append(pointInt) lineListCollection.append(lineList) ## G) write the lines imgLines = img.copy() for lineList in lineListCollection: p1 = 
lineList[0] for j in range(1,len(lineList)): p2 = lineList[j] #cv2.circle(imgLines, p2Int, 5, (0, 0, 255), -1) cv2.line(imgLines, p1, p2, (0, 255, 0), 2) p1 = p2 cv2.imwrite("img06Lines.jpg",resizeImageByPercentage(imgLines,30)) if DEBUGMODE: cv2.waitKey(0) more debug output is: The picture below shows word contours with green middle lines and red outer points for neighborhood analysis.
19
10
78,560,728
2024-5-31
https://stackoverflow.com/questions/78560728/evaluation-metric-for-parameter-tuning-for-outlier-detection-unsupervised-learn
I'm working on implementing parameter tuning for outlier detection in time-series data using the DBSCAN algorithm. To maximize the Silhouette score (as evaluation), I'm leveraging optuna for tuning. However, after parameter tuning, the model's performance seems to be underperformed. Below is the complete code, which encompasses data generation, preprocessing, decomposition, parameter tuning, and applying. I utilized isolated forest, LOF, and OneSVM algorithms and the result was similar. I utilized metrics including davies_bouldin_score and calinski_harabasz_score, but did not achieve better results. How can I improve the outlier detection parameter tuning? import numpy as np import matplotlib.pyplot as plt from statsmodels.tsa.seasonal import seasonal_decompose from sklearn.preprocessing import MinMaxScaler from sklearn.cluster import DBSCAN from sklearn.metrics import silhouette_score import optuna # Function to generate time series data def generate_time_series(n_samples=300, n_outliers=30): np.random.seed(np.random.randint(10000)) t = np.linspace(0, 50, n_samples) y = np.cumsum(np.random.randn(n_samples)) + np.sin(t) # Adding trend and noise outlier_indices = np.random.choice(n_samples, n_outliers, replace=False) y[outlier_indices] += 15 * np.random.randn(n_outliers) # Injecting outliers return y.reshape(-1, 1), t # Generate the time series data y, t = generate_time_series() # Plot the time series data plt.figure(figsize=(10, 5)) plt.plot(t, y, label='Time series', color='blue') plt.xlabel('Time') plt.ylabel('Value') plt.title('Generated Time Series Data') plt.legend() plt.show() # Decompose the time series result = seasonal_decompose(y, period=30, model='additive', two_sided=True) residual = result.resid # Handle NaN values in residuals (if any) non_nan_indices = ~np.isnan(residual).flatten() residual = residual[non_nan_indices].reshape(-1, 1) t_residual = t[non_nan_indices] # Plot the seasonal decomposition plt.figure(figsize=(10, 5)) plt.subplot(411) plt.plot(t, y, label='Original', color='blue') plt.legend(loc='best') plt.subplot(412) plt.plot(t, result.trend, label='Trend', color='orange') plt.legend(loc='best') plt.subplot(413) plt.plot(t, result.seasonal, label='Seasonal', color='green') plt.legend(loc='best') plt.subplot(414) plt.plot(t_residual, residual, label='Residual', color='red') plt.legend(loc='best') plt.tight_layout() plt.show() # Scale the residual data scaler = MinMaxScaler() residual_scaled = scaler.fit_transform(residual) # Define the objective function for DBSCAN def dbscan_objective(trial): eps = trial.suggest_float('eps', 0.01, 0.5, log=True) min_samples = trial.suggest_int('min_samples', 2, 20) dbscan = DBSCAN(eps=eps, min_samples=min_samples) clusters = dbscan.fit_predict(residual_scaled) # Ignore cases where all points are considered noise if len(set(clusters)) <= 1: return -1.0 score = silhouette_score(residual_scaled, clusters) return score # Optimize DBSCAN using Optuna optuna.logging.set_verbosity(optuna.logging.WARNING) dbscan_study = optuna.create_study(direction='maximize') dbscan_study.optimize(dbscan_objective, n_trials=100, show_progress_bar=True) best_dbscan_params = dbscan_study.best_params print(f"Best DBSCAN parameters: {best_dbscan_params}") # Apply DBSCAN with the best parameters dbscan = DBSCAN(**best_dbscan_params) dbscan_clusters = dbscan.fit_predict(residual_scaled) dbscan_outliers = (dbscan_clusters == -1) # Plot the detected outliers in the residuals plt.figure(figsize=(10, 5)) plt.plot(t_residual, residual, label='Residual', 
color='blue') plt.scatter(t_residual[dbscan_outliers], residual[dbscan_outliers], color='red', label='Outliers') plt.xlabel('Time') plt.ylabel('Value') plt.title('DBSCAN Outlier Detection on Residuals') plt.legend() plt.show() # Plot the detected outliers in the original time series plt.figure(figsize=(10, 5)) plt.plot(t, y, label='Time series', color='blue') plt.scatter(t_residual[dbscan_outliers], y[non_nan_indices][dbscan_outliers], color='red', label='Outliers') plt.xlabel('Time') plt.ylabel('Value') plt.title('DBSCAN Outlier Detection on Original Time Series') plt.legend() plt.show() # Print the number of outliers detected by DBSCAN print(f"Number of outliers detected by DBSCAN: {np.sum(dbscan_outliers)}")
DBSCAN relies on distance measurements to find clusters, thus it is sensitive to the scale and distribution of the data. Even, in your case, you have just one feature vector, I don't think you need to scale it for outlier detection. Just use residual variable in hyper-parameter and final prediction. You may also need to increase eps may be up to 2. So final code would look like this: import numpy as np import matplotlib.pyplot as plt from statsmodels.tsa.seasonal import seasonal_decompose from sklearn.cluster import DBSCAN from sklearn.metrics import silhouette_score import optuna # Function to generate time series data def generate_time_series(n_samples=300, n_outliers=30): np.random.seed(np.random.randint(10000)) t = np.linspace(0, 50, n_samples) y = np.cumsum(np.random.randn(n_samples)) + np.sin(t) # Adding trend and noise outlier_indices = np.random.choice(n_samples, n_outliers, replace=False) y[outlier_indices] += 15 * np.random.randn(n_outliers) # Injecting outliers return y.reshape(-1, 1), t # Generate the time series data y, t = generate_time_series() # Plot the time series data plt.figure(figsize=(10, 5)) plt.plot(t, y, label='Time series', color='blue') plt.xlabel('Time') plt.ylabel('Value') plt.title('Generated Time Series Data') plt.legend() plt.show() # Decompose the time series result = seasonal_decompose(y, period=30, model='additive', two_sided=True) residual = result.resid # Handle NaN values in residuals (if any) non_nan_indices = ~np.isnan(residual).flatten() residual = residual[non_nan_indices].reshape(-1, 1) t_residual = t[non_nan_indices] # Plot the seasonal decomposition plt.figure(figsize=(10, 5)) plt.subplot(411) plt.plot(t, y, label='Original', color='blue') plt.legend(loc='best') plt.subplot(412) plt.plot(t, result.trend, label='Trend', color='orange') plt.legend(loc='best') plt.subplot(413) plt.plot(t, result.seasonal, label='Seasonal', color='green') plt.legend(loc='best') plt.subplot(414) plt.plot(t_residual, residual, label='Residual', color='red') plt.legend(loc='best') plt.tight_layout() plt.show() # Define the objective function for DBSCAN def dbscan_objective(trial): eps = trial.suggest_float('eps', 0.01, 2, log=True) min_samples = trial.suggest_int('min_samples', 2, 20) dbscan = DBSCAN(eps=eps, min_samples=min_samples) clusters = dbscan.fit_predict(residual) # Ignore cases where all points are considered noise if len(set(clusters)) <= 1: return -1.0 score = silhouette_score(residual, clusters) return score # Optimize DBSCAN using Optuna optuna.logging.set_verbosity(optuna.logging.WARNING) dbscan_study = optuna.create_study(direction='maximize') dbscan_study.optimize(dbscan_objective, n_trials=100, show_progress_bar=True) best_dbscan_params = dbscan_study.best_params print(f"Best DBSCAN parameters: {best_dbscan_params}") # Apply DBSCAN with the best parameters dbscan = DBSCAN(**best_dbscan_params) dbscan_clusters = dbscan.fit_predict(residual) dbscan_outliers = (dbscan_clusters == -1) # Plot the detected outliers in the residuals plt.figure(figsize=(10, 5)) plt.plot(t_residual, residual, label='Residual', color='blue') plt.scatter(t_residual[dbscan_outliers], residual[dbscan_outliers], color='red', label='Outliers') plt.xlabel('Time') plt.ylabel('Value') plt.title('DBSCAN Outlier Detection on Residuals') plt.legend() plt.show() # Plot the detected outliers in the original time series plt.figure(figsize=(10, 5)) plt.plot(t, y, label='Time series', color='blue') plt.scatter(t_residual[dbscan_outliers], y[non_nan_indices][dbscan_outliers], color='red', 
label='Outliers') plt.xlabel('Time') plt.ylabel('Value') plt.title('DBSCAN Outlier Detection on Original Time Series') plt.legend() plt.show() # Print the number of outliers detected by DBSCAN print(f"Number of outliers detected by DBSCAN: {np.sum(dbscan_outliers)}") And you will get something like this:
3
1
78,562,406
2024-5-31
https://stackoverflow.com/questions/78562406/multiplying-chains-of-matrices-in-jax
Suppose I have a vector of parameters p which parameterizes a set of matrices A_1(p), A_2(p),...,A_N(p). I have a computation in which for some list of indices q of length M, I have to compute A_{q_M} * ... * A_{q_2} * A_{q_1} * v for several different q s. Each q has a different length, but crucially doesn't change! What changes, and what I wish to take gradients against is p. I'm trying to figure out how to convert this to performant JAX. One way to do it is to have some large matrix Q which contains all the different qs on each row, padded out with identity matrices such that each multiplication chain is the same length, and then scan over a function that switch es between N different functions doing matrix-vector multiplications by A_n(p). However -- I don't particularly like the idea of this padding. Also, since Q here is fixed, is there potentially a smarter way to do this? The distribution of lengths of q s has a very long tail, so Q will be dominated by padding. EDIT: Here's a (edit 2: functional) minimal example sigma0 = jnp.eye(2) sigmax = jnp.array([[0, 1], [1, 0]]) sigmay = jnp.array([[0, -1j], [1j, 0]]) sigmaz = jnp.array([[1, 0], [0, -1]]) sigma = jnp.array([sigmax, sigmay, sigmaz]) def gates_func(params): theta = params["theta"] epsilon = params["epsilon"] n = jnp.array([jnp.cos(theta), 0, jnp.sin(theta)]) omega = jnp.pi / 2 * (1 + epsilon) X90 = expm(-1j * omega * jnp.einsum("i,ijk->jk", n, sigma) / 2) return { "Z90": expm(-1j * jnp.pi / 2 * sigmaz / 2), "X90": X90 } def multiply_out(params): gate_lists = [["X90", "X90"], ["X90","Z90"], ["Z90", "X90"], ["X90","Z90","X90"]] gates = gates_func(params) out = jnp.zeros(len(gate_lists)) for i, gate_list in enumerate(gate_lists): init = jnp.array([1.0,0.0], dtype=jnp.complex128) for g in gate_list: init = gates[g] @ init out = out.at[i].set(jnp.abs(init[0])) return out params = dict(theta=-0.0, epsilon=0.001) multiply_out(params)
The main issue here is that JAX does not support string inputs. But you can use NumPy to manipulate string arrays and turn them into integer categorical arrays that can then be used by jax.jit and jax.vmap. The solution might look something like this: import numpy as np def gates_func_int(params, gate_list_vals): g = gates_func(params) identity = jnp.eye(*list(g.values())[0].shape) return jnp.stack([g.get(val, identity) for val in gate_list_vals]) @jax.jit def multiply_out_2(params): # compile-time pre-processing gate_lists = [["X90", "X90"], ["X90","Z90"], ["Z90", "X90"], ["X90","Z90","X90"]] max_size = max(map(len, gate_lists)) gate_array = np.array([gates + [''] * (max_size - len(gates)) for gates in gate_lists]) gate_list_vals, gate_list_ints = np.unique(gate_array, return_inverse=True) gate_list_ints = gate_list_ints.reshape(gate_array.shape) # runtime computation gates = gates_func_int(params, gate_list_vals)[gate_list_ints] initial = jnp.array([[1.0],[0.0]], dtype=jnp.complex128) return jax.vmap(lambda g: jnp.abs(jnp.linalg.multi_dot([*g, initial]))[0])(gates).ravel() multiply_out_2(params)
2
1
78,576,233
2024-6-4
https://stackoverflow.com/questions/78576233/how-do-you-get-additional-combinations-when-adding-items-to-a-list-that-i-have
Using Python I would like to calculate all combinations of 3 from a list. For example, list = [a,b,c,d] and combinations would be - [a,b,c], [a,b,d], [a,c,d], [b,c,d]. And then I would like to add some items to the original list and get only the additional combinations of 3. For example, adding items [e,f] would generate new combinations - [a,b,e], [a,b,f], [a,c,e], [a,c,f], [a,d,e], [a,d,f], [a,e,f], [b,c,e], [b,c,f],... The lists will be large so we need to avoid generating the combinations twice and then filtering in order to get the 'additional combinations'. Background: I use itertools.combinations to get all combinations (of 3) for a list of currently about 100 items. That generates a lot of combinations, and I'm doing a bunch of calculations and whatnot based on those combinations, looking for patterns and matches and stuff. I get through all that processing and if I don't have a 'successful' combination of 3 then I generate more candidates for the list (which in itself takes a long time). When I add the additional candidates to the list (usually 10 or so), I then restart the analysis on the combinations, which seems wasteful, so I would like to only be checking the 'additional' combinations.
Add the new items one by one: from itertools import combinations old = ['a', 'b', 'c', 'd'] new = ['e', 'f'] for item in new: for comb in combinations(old, 2): print(*comb, item) old.append(item) Output (Attempt This Online!): a b e a c e a d e b c e b d e c d e a b f a c f a d f a e f b c f b d f b e f c d f c e f d e f Or if order within each combination doesn't matter (inspired by a comment): from math import comb from itertools import combinations, islice old = ['a', 'b', 'c', 'd'] new = ['e', 'f'] new_combs = comb(len(old) + len(new), 3) - comb(len(old), 3) for comb in islice(combinations(new + old, 3), new_combs): print(*comb) Output (Attempt This Online!): e f a e f b e f c e f d e a b e a c e a d e b c e b d e c d f a b f a c f a d f b c f b d f c d
4
4
78,569,685
2024-6-3
https://stackoverflow.com/questions/78569685/why-does-count-method-return-the-wrong-number-of-items
I'm using pySpark and using count() on a dataFrame I seem to get the incorrect results; I made a csv, and I want to filter the rows with an incorrect type. Everything works (I use .show() to check), however when i call count(), the results I get are incorrect. I found out columnPruning is part of the problem, however disabling it still returns the incorrect result. In order to find the correct results, I need to call dataFrame.cache().count(). My questions are: Why do I get the wrong result? What happens under the hood? Is it an intended behaviour, or a bug? How should I handle it? Using .cache() works, but is expensive and I don't exactly understand why it works. Here's the code: from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() #spark.conf.set('spark.sql.csv.parser.columnPruning.enabled', False) myDf = spark.read.csv( 'C:/example/myCsv.csv', header=True, inferSchema=False, schema='c1 INTEGER, c2 STRING, bad STRING', columnNameOfCorruptRecord='bad', ) myDf.show() print(myDf.count()) myCleanDf = myDf.filter(myDf.bad.isNull()).drop('bad') myBadsDf = myDf.filter(myDf.bad.isNotNull()).select('bad') myCleanDf.show() myBadsDf.show() #Wrong results print(myCleanDf.count()) print(myBadsDf.count()) #Correct results print(myCleanDf.cache().count()) print(myBadsDf.cache().count()) (The csv contains sample data. Putting a row with an incorrect type will trigger the behaviour I describe) (Both with and without pruning the results I get are incorrect)
This issue was first raised here - SPARK-21610 which was subsequently 'fixed' by disallowing filtering the dataframe when only internal corrupt record column was used in the filter (via this PR 19199). Consequently, the error message "Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column" was added in Spark 2.3. Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named _corrupt_record by default). For example, spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count() and spark.read.schema(schema).json(file).select("_corrupt_record").show(). Instead, you can cache or save the parsed results and then send the same query. For example, val df = spark.read.schema(schema).json(file).cache() and then df.filter($"_corrupt_record".isNotNull).count(). Reference: Migration Guide The same bug was raised here SPARK-22580 too. Apparently, this behaviour was changed by this PR 35844 in 2022 with the release of Spark 3.3. While, querying the 'corrupt record column' no longer raises the exception, I believe this is still a bug, which has remained in the codebase. You will get correct results if you add a cache() to your dataframe read. from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() # spark.conf.set('spark.sql.csv.parser.columnPruning.enabled', False) myDf = spark.read.csv( "/workspaces/cubie/cubie/tests/test_data/temp.csv", header=True, inferSchema=False, schema="c1 INTEGER, c2 STRING, bad STRING", columnNameOfCorruptRecord="bad", ).cache() myCleanDf = myDf.filter(myDf.bad.isNull()).drop("bad") myBadsDf = myDf.filter(myDf.bad.isNotNull()).select("bad") myCleanDf.show() myBadsDf.show() # Correct results with cache() print(myCleanDf.count()) print(myBadsDf.count()) Remember, cache() is also lazily evaluated. show() or count() is what triggers the cache to come into play.
2
1
78,575,305
2024-6-4
https://stackoverflow.com/questions/78575305/attributeerror-trainingarguments-object-has-no-attribute-model-init-kwargs
While finetuning Gemma2B model using QLoRA i'm getting error as AttributeError: 'TrainingArguments' object has no attribute 'model_init_kwargs' Code: Loading the libraries from enum import Enum from functools import partial import pandas as pd import torch from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, set_seed, BitsAndBytesConfig from datasets import load_dataset from trl import SFTTrainer from peft import get_peft_model, LoraConfig, TaskType seed = 42 set_seed(seed) Loading the dataset and preprocess it. model_name = "gg-hf/gemma-2b-it" dataset_name = "FinGPT/fingpt-fiqa_qa" tokenizer = AutoTokenizer.from_pretrained(model_name) template = """{% for message in messages %}\n{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% if loop.last and add_generation_prompt %}{{'<|im_start|>assistant\n' }}{% endif %}{% endfor %}""" tokenizer.chat_template = template def preprocess(samples): batch = [] for system_prompt, input, output in zip(samples["instruction"], samples["input"], samples["output"]): conversation = [{"content": system_prompt, "role": "system"}, {"content": input, "role": "user"}, {"content": output, "role": "assistant"}] batch.append(tokenizer.apply_chat_template(conversation, tokenize=False)) return {"content": batch} dataset = load_dataset(dataset_name) dataset = dataset.map( preprocess, batched=True, remove_columns=dataset["train"].column_names ) dataset = dataset["train"].train_test_split(0.1) print(dataset) print(dataset["train"][0]) Create PEFT configurations peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1, target_modules=["gate_proj","q_proj","lm_head","o_proj","k_proj","embed_tokens","down_proj","up_proj","v_proj"], task_type=TaskType.CAUSAL_LM) Create Quantization configurations bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, ) Load the model and tokenizer class ChatmlSpecialTokens(str, Enum): user = "<|im_start|>user" assistant = "<|im_start|>assistant" system = "<|im_start|>system" eos_token = "<|im_end|>" bos_token = "<s>" pad_token = "<pad>" @classmethod def list(cls): return [c.value for c in cls] tokenizer = AutoTokenizer.from_pretrained( model_name, pad_token=ChatmlSpecialTokens.pad_token.value, bos_token=ChatmlSpecialTokens.bos_token.value, eos_token=ChatmlSpecialTokens.eos_token.value, additional_special_tokens=ChatmlSpecialTokens.list(), trust_remote_code=True ) tokenizer.chat_template = template model = AutoModelForCausalLM.from_pretrained(model_name) model.resize_token_embeddings(len(tokenizer)) model = get_peft_model(model, peft_config) model.print_trainable_parameters() # cast non-trainable params in fp16 for p in model.parameters(): if not p.requires_grad: p.data = p.to(torch.float16) Training Configurations output_dir = "Gemma2B_finetune_QLoRA" per_device_train_batch_size = 1 per_device_eval_batch_size = 1 gradient_accumulation_steps = 8 logging_steps = 5 learning_rate = 5e-4 max_grad_norm = 1.0 max_steps = 250 num_train_epochs=10 warmup_ratio = 0.1 lr_scheduler_type = "cosine" max_seq_length = 2048 training_arguments = TrainingArguments( output_dir=output_dir, per_device_train_batch_size=per_device_train_batch_size, per_device_eval_batch_size=per_device_eval_batch_size, gradient_accumulation_steps=gradient_accumulation_steps, save_strategy="no", evaluation_strategy="epoch", logging_steps=logging_steps, learning_rate=learning_rate, max_grad_norm=max_grad_norm, 
weight_decay=0.1, warmup_ratio=warmup_ratio, lr_scheduler_type=lr_scheduler_type, fp16=True, report_to=["tensorboard", "wandb"], hub_private_repo=True, push_to_hub=True, num_train_epochs=num_train_epochs, gradient_checkpointing=True, gradient_checkpointing_kwargs={"use_reentrant": False} ) Create trainer trainer = SFTTrainer( model=model, args=training_arguments, train_dataset=dataset["train"], eval_dataset=dataset["test"], tokenizer=tokenizer, packing=True, dataset_text_field="content", max_seq_length=max_seq_length, peft_config=peft_config, dataset_kwargs={ "append_concat_token": False, "add_special_tokens": False, }, ) The error I'm getting is like :- --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[10], line 1 ----> 1 trainer = SFTTrainer( 2 model=model, 3 args=training_arguments, 4 train_dataset=dataset["train"], 5 eval_dataset=dataset["test"], 6 tokenizer=tokenizer, 7 packing=True, 8 dataset_text_field="content", 9 max_seq_length=max_seq_length, 10 peft_config=peft_config, 11 dataset_kwargs={ 12 "append_concat_token": False, 13 "add_special_tokens": False, 14 }, 15 ) File /usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:101, in _deprecate_arguments.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 99 message += "\n\n" + custom_message 100 warnings.warn(message, FutureWarning) --> 101 return f(*args, **kwargs) File /usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:154, in SFTTrainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics, peft_config, dataset_text_field, packing, formatting_func, max_seq_length, infinite, num_of_sequences, chars_per_token, dataset_num_proc, dataset_batch_size, neftune_noise_alpha, model_init_kwargs, dataset_kwargs, eval_packing) 150 warnings.warn( 151 "You passed `model_init_kwargs` to the SFTTrainer, the value you passed will override the one in the `SFTConfig`." 152 ) 153 args.model_init_kwargs = model_init_kwargs --> 154 if args.model_init_kwargs is None: 155 model_init_kwargs = {} 156 elif not isinstance(model, str): AttributeError: 'TrainingArguments' object has no attribute 'model_init_kwargs' Do let me know if there's any solution for this? Thanks.
Just replace your TrainingArguments constructor with SFTConfig constructor, and pass this to SFTTrainer. from trl import SFTConfig training_arguments = SFTConfig(your training args ...) trainer = SFTTrainer(args=training_arguments, rest of the args...)
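As a concrete, hedged sketch — assuming a recent trl release where SFTConfig subclasses TrainingArguments and also carries the SFT-specific options — the trainer-level kwargs from the question move into the config, while the remaining SFTTrainer arguments (model, datasets, tokenizer, peft_config) stay as they were:
from trl import SFTConfig, SFTTrainer

training_arguments = SFTConfig(
    # regular TrainingArguments fields, reused from the question
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
    # SFT-specific fields that used to be passed to SFTTrainer directly
    max_seq_length=max_seq_length,
    packing=True,
    dataset_text_field="content",
    dataset_kwargs={"append_concat_token": False, "add_special_tokens": False},
)

trainer = SFTTrainer(
    model=model,
    args=training_arguments,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)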
3
3
78,573,180
2024-6-4
https://stackoverflow.com/questions/78573180/how-can-i-create-a-qlineedit-in-pyqt6-that-displays-truncated-text-similar-to-ho
I created a custom QLineEdit with a fixed size so that when the user types in text larger than the QLineEdit, the text gets truncated. When the user clicks out of the QLineEdit, the cursor jumps back to the beginning of the QLineEdit. Below is my code: from PyQt6.QtWidgets import QLineEdit class CustomLineEdit(QLineEdit): def focusOutEvent(self, event): super().focusOutEvent(event) self.setCursorPosition(0) # Move cursor to the beginning of the text If the text is larger than the QLineEdit, I want to make it so that when the user clicks on the QLineEdit again, the text will overflow and display its full text, similar to how in Excel, when you click on a cell with a large chunk of text, the full text is displayed without affecting the cell size. Is there a way to do this without changing QLineEdit to QTextEdit? To clarify this is what I have: This is what I want: I know I can set a sizePolicy so that the QLineEdit expands to show its content, but this is not what I want. I have other widgets next to the QLineEdit that need to remain in their respective positions.
The only way to achieve this is by manually changing the geometry of the line edit. Unfortunately, it's not that simple, especially when layout managers are used (which, by the way, should be mandatory). Most importantly, we need to ensure that the geometry change is: applied when the "something else" is trying to resize the widget (for instance, the parent resizing) and the widget is still focused; restored back when focus is left; We also need to ensure that the manual resizing doesn't go beyond the parent geometry, meaning that we also need to check that the widget does have a parent. While this may seem trivial and unnecessary, it's actually quite important. All this can be achieved with a basic boolean flag that only alters the geometry when required, and eventually tries to restore it back when needed. A possible implementation will then override the following methods: resizeEvent(), by possibly resizing the widget a further time (but also avoiding recursion, which is the reason for the boolean flag above); focusInEvent() to change the geometry so that all required (and available, depending on the parent) horizontal space will be used; focusOutEvent() to eventually restore the size previously externally set (usually, from the layout manager); Here is a possible implementation of the above: class OverflowLineEdit(QLineEdit): _recursionGuard = False _layoutGeo = QRect() def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.textChanged.connect(self._updateGeometry) self.setCursorPosition(0) def _setGeometry(self, geo): if geo.isValid(): self._recursionGuard = True super().setGeometry(geo) self._recursionGuard = False def _updateGeometry(self): if self.isWindow(): return self.ensurePolished() style = self.style() opt = QStyleOptionFrame() self.initStyleOption(opt) fm = QFontMetrics(self.font()) cm = self.contentsMargins() minWidth = ( fm.horizontalAdvance(self.text()) + cm.left() + cm.right() # horizontal margin is hardcoded to 2 + 4 # add the cursor width to avoid scrolling if possible + style.pixelMetric(style.PixelMetrics.PM_TextCursorWidth, opt, self) ) # adjust the minWidth to what the style wants minWidth = style.sizeFromContents( style.ContentsType.CT_LineEdit, opt, QSize(minWidth, fm.height()), self ).width() if self._layoutGeo.isValid(): refWidth = self._layoutGeo.width() else: refWidth = self.width() if minWidth > refWidth: parent = self.parentWidget() # remove the margins of the parent widget and its layout; # note that this doesn't consider nested layouts, and if we just # want to extend to the maximum available width (which is that # of the parent), "available" should just be "parent.rect()" margins = parent.contentsMargins() if parent.layout() is not None: margins += parent.layout().contentsMargins() available = parent.rect().marginsRemoved(margins) if self._layoutGeo.isValid(): geo = QRect(self._layoutGeo) else: geo = self.geometry() if available.width() < minWidth: geo.setWidth(available.width()) else: geo.setWidth(minWidth) if geo.x() < available.x(): geo.moveLeft(available.x()) if geo.right() > available.right(): geo.moveRight(available.right()) self._setGeometry(geo) self.raise_() elif ( self._layoutGeo.isValid() and minWidth < refWidth ): # restore the default size (probably set by the layout) self._setGeometry(self._layoutGeo) def focusInEvent(self, event): super().focusInEvent(event) self._layoutGeo = self.geometry() self._updateGeometry() def focusOutEvent(self, event): super().focusOutEvent(event) self.setCursorPosition(0) 
self._setGeometry(self._layoutGeo) def resizeEvent(self, event): super().resizeEvent(event) if not self._recursionGuard: self._layoutGeo = self.geometry() if self.hasFocus() and self.isVisible(): self._updateGeometry()
2
1
78,577,834
2024-6-4
https://stackoverflow.com/questions/78577834/numpy-get-a-matrix-of-distances-between-values
If I have a set of points: points = np.random.randint(0, 11, size=10) print(points) Output: [ 5 4 9 7 4 1 2 10 4 2] And if I want to get a matrix representing the distance from each point to each other point, I can do so like this: def get_diff_array(values): dX = values - values[0] tmp = [dX] for i in range(1, len(dX)): tmp.append((dX - dX[i])) return np.array(tmp) print(get_diff_array(points)) Output: [[ 0 -1 4 2 -1 -4 -3 5 -1 -3] [ 1 0 5 3 0 -3 -2 6 0 -2] [-4 -5 0 -2 -5 -8 -7 1 -5 -7] [-2 -3 2 0 -3 -6 -5 3 -3 -5] [ 1 0 5 3 0 -3 -2 6 0 -2] [ 4 3 8 6 3 0 1 9 3 1] [ 3 2 7 5 2 -1 0 8 2 0] [-5 -6 -1 -3 -6 -9 -8 0 -6 -8] [ 1 0 5 3 0 -3 -2 6 0 -2] [ 3 2 7 5 2 -1 0 8 2 0]] Is there a faster NumPy-specific way to calculate this? Currently it takes approximately 0.44 seconds for 10k points, which seems slow. This is really just a learning question to try to understand NumPy better.
You can use values[:, np.newaxis] - values[np.newaxis, :]: import numpy as np def diff(A): return A[:, np.newaxis] - A[np.newaxis, :] print(diff(np.random.randint(0, 11, size=10))) Prints [[ 0 -7 -8 -3 -6 -6 -4 -5 -10 -3] [ 7 0 -1 4 1 1 3 2 -3 4] [ 8 1 0 5 2 2 4 3 -2 5] [ 3 -4 -5 0 -3 -3 -1 -2 -7 0] [ 6 -1 -2 3 0 0 2 1 -4 3] [ 6 -1 -2 3 0 0 2 1 -4 3] [ 4 -3 -4 1 -2 -2 0 -1 -6 1] [ 5 -2 -3 2 -1 -1 1 0 -5 2] [ 10 3 2 7 4 4 6 5 0 7] [ 3 -4 -5 0 -3 -3 -1 -2 -7 0]]
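An equivalent (hedged) alternative with the same broadcasting cost is NumPy's outer ufunc form:
import numpy as np

A = np.random.randint(0, 11, size=10)
# np.subtract.outer(A, A)[i, j] == A[i] - A[j], the same matrix as diff(A) above
print(np.subtract.outer(A, A))
Either way the result is an n x n array, so memory grows quadratically (10k points give 100 million entries).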
2
4
78,577,950
2024-6-4
https://stackoverflow.com/questions/78577950/pandas-dataframe-check-relation-of-variable-and-rolling-mean
I have a DataFrame df with one time series variable, call it X. I can find the rolling mean over the last (say) n observations with df['rolling_mean'] = df.X.rolling(window=n).mean() Now I want a new column with a boolean value that applies the following logic to each row and returns true if either: X > rolling_mean and sometime within the next k observations, X attains a value less than rolling_mean or X < rolling_mean and sometime within the next k observations, X attains a value greater than rolling_mean. My current workaround looks something like: new_column = [] for i, row in df.iterrows(): next_k = df[i:i + k] if row.X < row.rolling_mean and any(next_k.X > row.rolling_mean): new_column.append(1) elif row.X > row.rolling_mean and any(next_k.X < row.rolling_mean): new_column.append(1) else: new_column.append(0) df['new_column'] = new_column But obviously this is iterative and not fast enough for a large dataset. Is there a fast/vectorized way of doing this?
A nice solution I came up with: df['rolling_max'] = df.X.rolling(k, min_periods=0).max().shift(-k) df['rolling_min'] = df.X.rolling(k, min_periods=0).min().shift(-k) df['will_drop_below_mean'] = (df.X > df.rolling_mean) & (df.rolling_min < df.rolling_mean) df['will_rise_above_mean'] = (df.X < df.rolling_mean) & (df.rolling_max > df.rolling_mean) df['new_column'] = ((df.will_drop_below_mean) | (df.will_rise_above_mean)).astype(int) Basically using rolling().max() and rolling().min() with .shift() to create two boolean masks.
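A self-contained sketch of the approach, with made-up data and arbitrary window sizes n and k, in case it helps to see it end to end:
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({'X': np.random.randn(50).cumsum()})
n, k = 5, 3  # rolling-mean window and look-ahead horizon (arbitrary choices)

df['rolling_mean'] = df.X.rolling(window=n).mean()
df['rolling_max'] = df.X.rolling(k, min_periods=0).max().shift(-k)
df['rolling_min'] = df.X.rolling(k, min_periods=0).min().shift(-k)

will_drop = (df.X > df.rolling_mean) & (df.rolling_min < df.rolling_mean)
will_rise = (df.X < df.rolling_mean) & (df.rolling_max > df.rolling_mean)
df['new_column'] = (will_drop | will_rise).astype(int)
print(df.head(10))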
2
0
78,577,752
2024-6-4
https://stackoverflow.com/questions/78577752/copy-specific-values-from-a-cells-and-then-add-them-above
I have the following dataframe with large amounts of data. C1 C2 C3 C4 t # 0 e 10001 252207 CAT t # 1 e 100018 219559 DOG t # 2 e 100068 251102 CAT t # 3 e 100089 107320 LION t # 4 e 100110 250975 TIGER t # 5 e 100111 28540 TIGER t # 6 e 100112 252253 COW t # 7 e 100157 17883 COW t # 8 e 100158 106226 DOG t # 9 e 100189 32004 CAT e 100189 250979 DOG e 100189 107997 CAT t # 10 e 100190 251325 LION e 100190 251325 LION e 100190 250999 LION t # 11 e 100194 250979 COW e 100194 65072 COW . ... ... ... t # 10000 e 200194 550979 COW e 200194 565072 COW If there is t # {i} in C2, then the other columns in this row are empty What I would like to achieve is to copy the existing numeric values from C2 and C3 - from one t # {i} to the next t # {i+1} and when C1 has a value of e - and insert them sequentially into C2 above in such a way that they do not repeat and that in C1 for these values there is always the letter v, while in C3 there can be anything different, the most readable would be to assign them, values from 1 upwards. C4 always remains empty. Below the output I would like to achieve. C1 C2 C3 C4 t # 0 v 10001 1 v 252207 2 e 10001 252207 CAT t # 1 v 100018 1 v 219559 2 e 100018 219559 DOG t # 2 v 100068 1 v 251102 2 e 100068 251102 CAT t # 3 v 100089 1 v 107320 2 e 100089 107320 LION t # 4 v 100110 1 v 250975 2 e 100110 250975 TIGER t # 5 v 100111 1 v 28540 2 e 100111 28540 TIGER t # 6 v 100112 1 v 252253 2 e 100112 252253 COW t # 7 v 100068 1 v 251102 2 e 100157 17883 COW t # 8 v 100158 1 v 106226 2 e 100158 106226 DOG t # 9 v 100189 1 v 32004 2 v 250979 3 v 107997 4 e 100189 32004 CAT e 100189 250979 DOG e 100189 107997 CAT t # 10 v 100190 1 v 251325 2 v 251325 3 v 250999 4 e 100190 251325 LION e 100190 251325 LION e 100190 250999 LION t # 11 v 100194 1 v 250979 2 v 65072 3 e 100194 250979 COW e 100194 65072 COW t # 10000 v 200194 1 v 550979 2 v 565072 3 e 200194 550979 COW e 200194 565072 COW I absolutely do not know how I can do this, For any help thanks. 
EDIT: I use txt file in df with open("results123.txt", "r") as f: text = [line.split() for line in f] df = pandas.DataFrame( text, columns=['C1','C2','C3','C4'] ) and my df looks like this: import json d = df.to_dict(orient='records') print(d) [{'C1': 't', 'C2': '#', 'C3': '0', 'C4': None}, {'C1': 'e', 'C2': '10001', 'C3': '252207', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '1', 'C4': None}, {'C1': 'e', 'C2': '100018', 'C3': '219559', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '2', 'C4': None}, {'C1': 'e', 'C2': '100068', 'C3': '251102', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '3', 'C4': None}, {'C1': 'e', 'C2': '100089', 'C3': '107320', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '4', 'C4': None}, {'C1': 'e', 'C2': '100110', 'C3': '250975', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '5', 'C4': None}, {'C1': 'e', 'C2': '100111', 'C3': '28540', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '6', 'C4': None}, {'C1': 'e', 'C2': '100112', 'C3': '252253', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '7', 'C4': None}, {'C1': 'e', 'C2': '100157', 'C3': '17883', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '8', 'C4': None}, {'C1': 'e', 'C2': '100158', 'C3': '106226', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '9', 'C4': None}, {'C1': 'e', 'C2': '100189', 'C3': '32004', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100189', 'C3': '250979', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100189', 'C3': '107997', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '10', 'C4': None}, {'C1': 'e', 'C2': '100190', 'C3': '251325', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100190', 'C3': '251325', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100190', 'C3': '250999', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '11', 'C4': None}, {'C1': 'e', 'C2': '100194', 'C3': '250979', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100194', 'C3': '65072', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '12', 'C4': None}, {'C1': 'e', 'C2': '100203', 'C3': '250979', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '13', 'C4': None}, {'C1': 'e', 'C2': '100224', 'C3': '234727', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '14', 'C4': None}, {'C1': 'e', 'C2': '100229', 'C3': '253351', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '15', 'C4': None}, {'C1': 'e', 'C2': '100230', 'C3': '228074', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100230', 'C3': '182318', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '16', 'C4': None}, {'C1': 'e', 'C2': '100231', 'C3': '252665', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '17', 'C4': None}, {'C1': 'e', 'C2': '100269', 'C3': '41716', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '18', 'C4': None}, {'C1': 'e', 'C2': '100277', 'C3': '251094', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '19', 'C4': None}, {'C1': 'e', 'C2': '100281', 'C3': '253887', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '20', 'C4': None}, {'C1': 'e', 'C2': '100283', 'C3': '20766', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '21', 'C4': None}, {'C1': 'e', 'C2': '100288', 'C3': '251001', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '22', 'C4': None}, {'C1': 'e', 'C2': '10029', 'C3': '250979', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '23', 'C4': None}, {'C1': 'e', 'C2': '100290', 'C3': '197427', 'C4': 'BOVINI'}, {'C1': 'e', 'C2': '100290', 'C3': '41716', 'C4': 'BOVINI'}, {'C1': 't', 'C2': '#', 'C3': '24', 'C4': None}, {'C1': 'e', 'C2': 'ID_NODO_OR', 'C3': 'ID_NODO_DEST', 'C4': 'SPECIE'}, {'C1': 'e', 'C2': 'ID_NODO_OR', 'C3': 'ID_NODO_DEST', 'C4': 'SPECIE'}]
You can iterate through the rows and use startswith() on row C2, in addition to the other condition: import pandas as pd def get_rows(df): Vs, res = 1, [] for _, r in df.iterrows(): if r['C2'].startswith('t #'): Vs = 1 elif r['C1'] == 'e': res.append(['v', r['C2'], str(Vs), None]) Vs += 1 res.append([r['C1'], r['C2'], r['C3'], r['C4']]) return pd.DataFrame(res, columns=df.columns) D = [ {"C1": "t", "C2": "#", "C3": "0", "C4": None}, {"C1": "e", "C2": "10001", "C3": "252207", "C4": "BOVINI"}, {"C1": "t", "C2": "#", "C3": "1", "C4": None}, {"C1": "e", "C2": "100018", "C3": "219559", "C4": "BOVINI"}, {"C1": "t", "C2": "#", "C3": "2", "C4": None}, {"C1": "e", "C2": "100068", "C3": "251102", "C4": "BOVINI"} ] print(get_rows(pd.DataFrame(D))) Prints C1 C2 C3 C4 0 t # 0 None 1 v 10001 1 None 2 e 10001 252207 BOVINI 3 t # 1 None 4 v 100018 2 None 5 e 100018 219559 BOVINI 6 t # 2 None 7 v 100068 3 None 8 e 100068 251102 BOVINI Write: you can simply add these lines at the end of function: res = pd.DataFrame(res, columns=df.columns) res.to_csv(filename, sep='\t', index=False, header=False)
2
2
78,575,922
2024-6-4
https://stackoverflow.com/questions/78575922/replace-3d-array-elements-with-numpy
I have a 3D array filled with RGB values and I need to replace some "pixels" with another value as fast as possible. The array looks like this: [[[ 78 77 75] [ 72 70 67] [ 72 70 67] ... [ 73 74 73] [ 71 72 71] [ 66 67 67]]] I used numpy select to replace some values but this is not what I need as it doesn't check for the whole pixel RGB. np.select([array > 77, array < 77, [0, 0], array) I tried to compare the whole RGB array with another one but it didn't work. np.select([array != [72, 70, 77]], [[0, 0, 0]], array) Creating a mask didn't help either. I wasn't able to replace the mask with an RGB array. color = [30, 30, 57] mask = np.any(array != color, axis=2)
You can define your two conditions, combine it using logical_or(), and then use where(): import numpy as np array = np.array([[[78, 77, 75], [72, 70, 67], [72, 70, 67]], [[78, 77, 75], [72, 70, 67], [72, 70, 67]], [[73, 74, 73], [71, 72, 71], [66, 67, 67]]]) color = [30, 30, 57] cond_a = array[:, :, 0] > 77 cond_b = array[:, :, 1] < 77 conds = np.logical_or(cond_a, cond_b) res = np.where(conds, color, array) print(res) Prints [[[30 30 57] [30 30 57] [30 30 57]] [[30 30 57] [30 30 57] [30 30 57]] [[30 30 57] [30 30 57] [30 30 57]]]
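If the goal is instead to match a whole RGB pixel against one exact color (as attempted in the question), a hedged sketch using a 2-D boolean mask and broadcasting, reusing the array defined above:
import numpy as np

target = np.array([72, 70, 67])  # example pixel value to replace, taken from the sample data
color = np.array([30, 30, 57])   # replacement color from the question

mask = np.all(array == target, axis=-1)           # shape (H, W): True where the whole pixel matches
result = np.where(mask[..., None], color, array)  # broadcast the mask back over the RGB axis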
2
2
78,577,205
2024-6-4
https://stackoverflow.com/questions/78577205/multi-column-and-row-matching-and-removal
I am trying to remove rows based on whether the previous row was "added" and the current row is "removed". My goal is to identify the rows that are only "added". import pandas as pd df = pd.DataFrame({'Event_Time': ['2024-05-28 12:37:00', '2024-05-28 12:41:00', '2024-05-28 16:12:00','2024-05-08 09:40:00'], 'Name': ['Kenzie Doe', 'Kenzie Doe', 'Kenzie Doe', 'Abby Stone'], 'Action': ['Added', 'Removed', 'Added', 'Added']}) I converted the datetime so it can sort properly. The names are also sorted so all the actions for one user are together and in proper date-time order. I am not sure if multi-row and column actions can be taken in Python. I would like my final result to look like this: Event_time Name Action 2024-05-28 16:12:00 Kenzie Doe Added 2024-05-08 09:40:00 Abby Stone Added
You can use a mask: ~(same & (rem | rem_add)): import pandas as pd df = pd.DataFrame({'Event_Time': ['2024-05-28 12:37:00', '2024-05-28 12:41:00', '2024-05-28 16:12:00', '2024-05-08 09:40:00'], 'Name': ['Kenzie Doe', 'Kenzie Doe', 'Kenzie Doe', 'Abby Stone'], 'Action': ['Added', 'Removed', 'Added', 'Added']}) df['Event_Time'] = pd.to_datetime(df['Event_Time']) df = df.sort_values(by=['Name', 'Event_Time']) same = df['Name'] == df['Name'].shift(-1) rem = (df['Action'] == 'Added') & (df['Action'].shift(-1) == 'Removed') rem_add = (df['Action'] == 'Removed') & (df['Action'].shift(-1) == 'Added') print(df[~(same & (rem | rem_add))]) Prints Event_Time Name Action 3 2024-05-08 09:40:00 Abby Stone Added 2 2024-05-28 16:12:00 Kenzie Doe Added Mask: bitwise ~: inverse of the mask. same: checks if the current row and the next row have the same name. rem: checks if the current row has the action "Added" and the next row has the action "Removed". rem_add: checks if the current row has the action "Removed" and the next row has the action "Added". bitwise |: OR. bitwise &: AND.
2
2
78,560,578
2024-5-31
https://stackoverflow.com/questions/78560578/aws-textract-asynchronous-operations-within-multiprocessing
I am working in a Lambda function within AWS. I have two functions which asynchronously call on Textract to return the extracted text from an image. By switching to this asynchronous operation from a singular call one at a time (which must wait for the result to complete before submitting a new request), given the volume of images I need processed by Textract, I was able to reduce processing time for Textract from 8 minutes to about 3 minutes--a vast improvement. But, I am looking into using multiprocessing to see if I can reduce the time down even further. However, it appears that multiprocessing.map and multiprocessing.starmap do not seem to work very well in AWS Lambda. I saw some recommendations for using multiprocessing.Process or multiprocessing.Pipe, but it isn't clear if that will actually make a big impact. Based on my code below, will leveraging multiprocessing.Process or multiprocessing.Pipe make noticeable improvements in processing time or is it not worth the effort? If it is worth it, can anyone make any suggestions on how to actually implement this given my code? I am brand new to multiprocessing and there's a lot to wrap my head around, further complicated by trying to also implement in AWS. def extract_text_async(img, loc): img_obj = Image.fromarray(img).convert('RGB') out_img_obj = io.BytesIO() img_obj.save(out_img_obj, format="png") out_img_obj.seek(0) file_name = key_id + "_" + loc + ".png" s3.Bucket(bucket_name).put_object(Key=file_name, Body=out_img_obj, ContentType="image/png") response = textract_client.start_document_text_detection(DocumentLocation={'S3Object':{'Bucket': bucket_name,'Name': file_name}},JobTag=key_id + loc, NotificationChannel={'SNSTopicArn': snsarn,'RoleArn': rolearn},OutputConfig={'S3Bucket': output_bucket,'S3Prefix': str(datetime.now()).replace(" ", "_") + key_id + "_" + loc + "_textract_output"}) return response['JobId'] def fetch_textract_async(jobid): response = textract_client.get_document_text_detection(JobId=jobid,MaxResults=1000) status = response['JobStatus'] text_len = {} for y in range(len(response['Blocks'])): if 'Text' in response['Blocks'][y]: text_len[y] = len(response['Blocks'][y]['Text']) else: pass if bool(text_len): extracted_text = response['Blocks'][max(text_len, key=text_len.get)]['Text'] if extracted_text == '-': extracted_text = '' else: pass else: extracted_text = '' return extracted_text # example function calls s1_1 = extract_text_async(cropped_images['Section 1']['1'],"s1_1") s1_2 = extract_text_async(cropped_images['Section 1']['2'],"s1_2") s1_3 = extract_text_async(cropped_images['Section 1']['3'],"s1_3") s1_1_result = fetch_textract_async(s1_1) s1_2_result = fetch_textract_async(s1_2) s1_3_result = fetch_textract_async(s1_3)
In a well-architected, scalable setup for running Amazon Textract, the callback itself should be event-driven through SNS (which it looks from your snippet like you're already using?)... So your Lambdas will just be 1) kicking off jobs and 2) reading the results. If you're considering spiky workloads with very high concurrency (e.g. dump a large number of documents at once and process them as fast as possible), it's worth checking your applied quotas for e.g. StartDocumentTextDetection and GetDocumentTextDetection TPS, and max concurrent jobs. As mentioned on this page, smoothing out the workload is one good way to improve overall throughput. On the (1) job kick-off side: It looks like your extract_text_async takes an image as a pixel array. I hope you're not passing that over network anywhere? (e.g. between Lambdas, or from local to cloud) Compressed image formats like PNG, JPEG, etc are much smaller, which can accelerate data transfer if you're e.g. collecting the image from a local camera and uploading it through your Lambda or something. If you upload direct as PNG bytes you could also skip the Pillow dependency & conversion compute You could potentially avoid streaming the image through Lambda at all, by having your client just upload the image direct to S3 and using the S3 event notifications Lambda integration to kick off a Lambda that requests the Textract job for the newly-created S3 URI. The client doesn't receive a notification of the Textract job ID in this case, but you could e.g. have your Lambda store the association from S3 URI to job ID in a DynamoDB table, so it's easily queryable as soon as the job has been started. What actually triggers your job creation Lambda, and what size of batches of new documents it receives, will affect the scaling profile and whether multiprocessing is relevant: If the Lambda is invoked for each individual image with no batching, is Lambda scaling out fast enough? Might provisioned concurrency help? If the Lambda is invoked just once with very large batch sizes, maybe parallelizing the Textract API requests with multithreading could be helpful... But splitting the batches into separate invocations might help parallelize easier? If the jobs are being kicked off very efficiently, maybe you'll hit your Textract TPS or MaxConcurrentJobs quotas and see throttling? Could consider tuning the boto3 retry settings to optimize performance, or exploring quota increase requests On the (2) job retrieval side: The default SNS-to-Lambda integration is probably already doing some batching and invoking your function as concurrently as it has new data coming available: To understand how much multiprocessing might help, it's worth measuring what your typical batch size is for incoming SNS messages. Remember there's also the quota limit on GetDocumentTextDetection TPS, which if you have a very large number of documents all complete around the same time might cause contention. If this seems to be an issue, then tuning the boto3 retry settings and connecting via SQS for concurrency control may help accelerate the workload. You should probably be checking for NextToken and paginating your get_document_text_detection requests, in case you're going to process long documents... But this will only increase latency as your current solution seems to just be fetching the first page whether it's complete or not. 
In terms of staying under throttling quotas, AWS typically suggests retries and backoff rather than explicit quota management, because the former scales to distributed systems while the latter requires a central monitor of current consumption, with all the associated possible risks of deadlock and so on. In summary, focussing on multiprocessing might be a bit premature because it only addresses scaling within one running instance. It might be better to check whether the overall distributed architecture is well-optimized, including how those Lambdas get invoked and requests get batched. For more examples, I'd suggest checking out: The Textract IDP CDK constructs blog post and sample code The old Textract Serverless Large-Scale Document Processing sample (which isn't maintained anymore, but still an accessible starting point if you find CDK confusing)
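To make the retry-tuning suggestion concrete, a hedged sketch of creating the Textract client with botocore's adaptive retry mode (the max_attempts value is an arbitrary illustration):
import boto3
from botocore.config import Config

# standard/adaptive retry modes back off automatically on throttling errors
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
textract_client = boto3.client("textract", config=retry_config)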
3
1
78,574,363
2024-6-4
https://stackoverflow.com/questions/78574363/why-fastapi-isnt-validating-post-body
I'm working on a webserver built with FastAPI version 0.111.0 and SQLModel version 0.0.18. When I'm calling the POST endpoint with partial data or with keys I don't specify in my class, it doesn't trigger the unprocessable entity or bad request error. I'm using the method POST only for inserting data and not updating it. For update an entity, I use a separate PUT request. Here's my model: from sqlmodel import SQLModel, Field class Job(SQLModel, table=True): job_id: int = Field(primary_key=True) title: str start_time: str end_time: str | None = None pipeline_id: int = Field(foreign_key="pipeline.pipeline_id") prj_path: str branch_tag: str user: str status: str log: str = "" result: str = "" and here's the controller: from fastapi import APIRouter from models.job import Job job_router = APIRouter(prefix="/job", tags=["Jobs"]) @job_router.post("/", status_code=201, response_model=Job) async def add_job(job: Job): job = await c.insert_job(job) return job The c.insert_job(job) is where I'm saving the object in the database and the job_router is importend in the main file with the FastAPI app: from fastapi import FastAPI from routers.job import job_router app = FastAPI() app.include_router(job_router) Even if the auto-created documentation tells me that there are some required fields as shown here: job class in swagger if I'm sending a request with a body like { "job_id": 0 } or even { "job_id": 0, "test": "test" }, it passes without triggering anything like shown in the next image (here I return the partial object I get instead of adding it to the database because it would throw an error on fields without the default value): returned object
Use this model for response validation: from pydantic import BaseModel, Field from typing import Optional class JobRead(BaseModel): job_id: int title: str start_time: str end_time: Optional[str] = Field(default=None) pipeline_id: int prj_path: str branch_tag: str user: str status: str log: Optional[str] = Field(default="") result: Optional[str] = Field(default="") You can also lean on plain Pydantic validation (see https://docs.pydantic.dev/latest/#why-use-pydantic for why), or use Depends() with a Pydantic model: from fastapi import APIRouter, Depends from models.job import Job job_router = APIRouter(prefix="/job", tags=["Jobs"]) @job_router.post("/", status_code=201, response_model=Job) async def add_job(job: Job = Depends()): job = await c.insert_job(job) return job See https://fastapi.tiangolo.com/tutorial/dependencies/ for more about Depends.
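Another common option (a hedged sketch, not part of the original answer) is SQLModel's "multiple models" pattern: declare a separate non-table model for the request body so FastAPI/Pydantic validation applies, and convert it to the table model before inserting:
from sqlmodel import SQLModel

class JobCreate(SQLModel):  # no table=True, so the incoming body is validated
    job_id: int
    title: str
    start_time: str
    end_time: str | None = None
    pipeline_id: int
    prj_path: str
    branch_tag: str
    user: str
    status: str
    log: str = ""
    result: str = ""

@job_router.post("/", status_code=201, response_model=Job)
async def add_job(job_in: JobCreate):
    job = Job.model_validate(job_in)  # or Job(**job_in.model_dump()) on older versions
    return await c.insert_job(job)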
2
1
78,574,047
2024-6-4
https://stackoverflow.com/questions/78574047/python-setattr-and-getattr-best-practice-with-flexible-mutable-object-variable
I have a class that holds an unspecified number of dicts as variables. I want to provide methods that allow the user to easily append to any of the dicts while naming the corresponding variable, without having to check whether the variable was already created. Example for my goal: >>> c = Container() >>> c.add_value(variable="first_dict", key="foo", value=1) >>> c.add_value(variable="first_dict", key="bar", value=2) >>> c.add_value(variable="second_dict", key="foo", value=2) >>> print(c.first_dict) {"foo":1, "bar":2) >>> print(c.second_dict) {"foo":2) Currently, this is my solution: class Container(): def __init__(self): pass def add_value(self, variable: str, key: Any, value: Any): x = getattr(self, variable, {}) x[key] = value setattr(self, variable, x) My concern is that accessing the attribute via getattr, then mutating it and setting it back via setattr introduces overhead that is not necessary. Is there a better way to write the Container.add_value()-method? Should I approach the whole problem differently?
You can initialize and access each sub-dict with the setdefault method of the object's attribute dict: class Container: def add_value(self, variable, key, value): vars(self).setdefault(variable, {})[key] = value
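A quick check that this reproduces the behaviour asked for in the question:
c = Container()
c.add_value(variable="first_dict", key="foo", value=1)
c.add_value(variable="first_dict", key="bar", value=2)
c.add_value(variable="second_dict", key="foo", value=2)
print(c.first_dict)   # {'foo': 1, 'bar': 2}
print(c.second_dict)  # {'foo': 2}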
2
2
78,573,638
2024-6-4
https://stackoverflow.com/questions/78573638/reindex-to-expand-and-fill-value-only-across-one-level-of-multi-index
I have a dataframe with an index of (month, A, B): foo N month A B 1983-03-01 3 9 0 1 1983-06-01 3 9 0 1 1983-09-01 3 9 0 1 1983-11-01 4 5 0 1 1984-05-01 4 5 0 1 1984-06-01 3 9 0 1 1984-09-01 3 9 0 2 I would like to fill all missing dates, provided that a certain (A, B) combination exists in the index. What I do not want to do is to fill in the index for all (A, B) combinations. That is, I would like to have for (A=3, B=9) and for (A=4, B=5) month-indices running from 1983-03-01 to 1984-09-01 and 0s for filling. But I don't want there to be any records of (A=3, B=5) or (A=4, B=9). If this was a single index, I could simply idx = pd.date_range(df['month'].min(), df['month'].max(), freq='M') df = df.set_index('month') df.index = df.reindex(idx, fill_value=0) How would I approach it in this situation? Worth noting that this solution should scale with a large number of unique values for A, B.
Assuming: df = pd.DataFrame({'month': pd.to_datetime(['1983-03-01', '1983-06-01', '1983-09-01', '1983-11-01', '1984-05-01', '1984-06-01', '1984-09-01']), 'A': [3, 3, 3, 4, 4, 3, 3], 'B': [9, 9, 9, 5, 5, 9, 9], 'foo': [0, 0, 0, 0, 0, 0, 0], 'N': [1, 1, 1, 1, 1, 1, 2]} ) You could use a groupby.apply: cols = df.columns.difference(['month', 'A', 'B']) out = (df.set_index('month').groupby(['A', 'B'])[cols] .apply(lambda x: x.reindex(pd.date_range(x.index.get_level_values('month').min(), x.index.get_level_values('month').max(), freq='MS').rename('month'), fill_value=0)) .reset_index()[df.columns] ) Output: month A B foo N 0 1983-03-01 3 9 0 1 1 1983-04-01 3 9 0 0 2 1983-05-01 3 9 0 0 3 1983-06-01 3 9 0 1 4 1983-07-01 3 9 0 0 5 1983-08-01 3 9 0 0 6 1983-09-01 3 9 0 1 7 1983-10-01 3 9 0 0 8 1983-11-01 3 9 0 0 9 1983-12-01 3 9 0 0 10 1984-01-01 3 9 0 0 11 1984-02-01 3 9 0 0 12 1984-03-01 3 9 0 0 13 1984-04-01 3 9 0 0 14 1984-05-01 3 9 0 0 15 1984-06-01 3 9 0 1 16 1984-07-01 3 9 0 0 17 1984-08-01 3 9 0 0 18 1984-09-01 3 9 0 2 19 1983-11-01 4 5 0 1 20 1983-12-01 4 5 0 0 21 1984-01-01 4 5 0 0 22 1984-02-01 4 5 0 0 23 1984-03-01 4 5 0 0 24 1984-04-01 4 5 0 0 25 1984-05-01 4 5 0 1
2
2
78,573,321
2024-6-4
https://stackoverflow.com/questions/78573321/mutual-exclusivity-pairs-using-dataframe
How to check are the attributes mutually exclusive? That is, can a word have more than one non-zero entry in the sentiment attributes? Would like to check if each attribute is mutually exclusive of one another and to enumerate through all pairs to compare each. N | P | U | L | S | W 0 1 ... 0 2 ... 1 0 ... 0 0 ... N P, N U, N L, N S, N W P U, P L, P S, P W U L, U S, U W L S, S W Stumped using iloc here and if I can set my for loops to something like: a=range(0,7) b=range(0,7) for i in a: for j in b: sum(df.iloc[:,i]*df.iloc[:,j]!=0) However I know this produces an error, how to then dynamically update the index within iloc to iterate through each column? If the sum is 0, then col 0 and 1 are mutually exclusive. Now to iterate over all columns and enumerate through nested loops.
Check all pairs. A pair is mutually exclusive if (df[ii] * df[jj] != 0).sum() == 0: import pandas as pd def get_mut_ex(D): df = pd.DataFrame(D) cols = df.columns res, n = [], len(cols) for i in range(n): for j in range(i + 1, n): ii, jj = cols[i], cols[j] if (df[ii] * df[jj] != 0).sum() == 0: res += [(ii, jj)] return res D = { 'N': [0, 0, 1, 0], 'P': [1, 2, 0, 0], 'U': [0, 0, 0, 0], 'L': [0, 0, 0, 0], 'S': [0, 0, 0, 0], 'W': [0, 0, 0, 0] } print(get_mut_ex(D)) Prints [('N', 'P'), ('N', 'U'), ('N', 'L'), ('N', 'S'), ('N', 'W'), ('P', 'U'), ('P', 'L'), ('P', 'S'), ('P', 'W'), ('U', 'L'), ('U', 'S'), ('U', 'W'), ('L', 'S'), ('L', 'W'), ('S', 'W')]
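For larger frames, a hedged vectorized alternative is to build a co-occurrence matrix of non-zero entries with a single matrix product and read off the zero cells (reusing D from above):
import pandas as pd

def get_mut_ex_vectorized(df):
    # co[i, j] counts rows where both columns are non-zero;
    # a zero entry means the pair never co-occurs, i.e. it is mutually exclusive
    nz = (df != 0).astype(int)
    co = nz.T @ nz
    cols = df.columns
    return [(cols[i], cols[j])
            for i in range(len(cols))
            for j in range(i + 1, len(cols))
            if co.iloc[i, j] == 0]

print(get_mut_ex_vectorized(pd.DataFrame(D)))  # same 15 pairs as above for this D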
2
0
78,571,618
2024-6-3
https://stackoverflow.com/questions/78571618/selenium-how-to-select-a-value-form-a-select-box
I try to select the option "Psychatrie" on a website using the following code: from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By print(f"Checking Browser driver...") options = Options() options.add_argument("start-maximized") options.add_argument('--log-level=3') options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1}) options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv=Service() link = "https://asu.kvs-sachsen.de/arztsuche/pages/search.jsf" driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) driver.get (link) waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id="searchForm:specialismDetail:selectWindow"]'))).send_keys("Psychatrie") But i only get this error: (selenium) C:\DEV\Fiverr2024\ORDER\schlosswaechter>python temp1.py Checking Browser driver... Traceback (most recent call last): File "C:\DEV\Fiverr2024\ORDER\schlosswaechter\temp1.py", line 23, in <module> waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id="searchForm:specialismDetail:selectWindow"]'))).send_keys("Psychatrie") File "C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 231, in send_keys self._execute( File "C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webelement.py", line 395, in _execute return self._parent.execute(command, params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 347, in execute self.error_handler.check_response(response) File "C:\DEV\.venv\selenium\Lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable (Session info: chrome=124.0.6367.202) Stacktrace: GetHandleVerifier [0x00007FF7E2331522+60802] (No symbol) [0x00007FF7E22AAC22] (No symbol) [0x00007FF7E2167B13] (No symbol) [0x00007FF7E21B09F7] (No symbol) [0x00007FF7E21AEB1A] (No symbol) [0x00007FF7E21DAB7A] (No symbol) [0x00007FF7E21AA7C6] (No symbol) [0x00007FF7E21DAD90] (No symbol) [0x00007FF7E21FA224] (No symbol) [0x00007FF7E21DA923] (No symbol) [0x00007FF7E21A8FEC] (No symbol) [0x00007FF7E21A9C21] GetHandleVerifier [0x00007FF7E26341BD+3217949] GetHandleVerifier [0x00007FF7E2676157+3488183] GetHandleVerifier [0x00007FF7E266F0DF+3459391] GetHandleVerifier [0x00007FF7E23EB8E6+823622] (No symbol) [0x00007FF7E22B5FBF] (No symbol) [0x00007FF7E22B0EE4] (No symbol) [0x00007FF7E22B1072] (No symbol) [0x00007FF7E22A18C4] BaseThreadInitThunk [0x00007FF8F4D0257D+29] RtlUserThreadStart [0x00007FF8F676AA48+40] How can i select this value from the select-box on the page?
You have made a typo in the word "Psychatrie". It should have been "Psychiatrie". In addition, I've modified the code so that you first click on the drop-down menu and then search for the exact string from the options. Finally, by executing javascript, you can click on the item as the click() method throws an error. from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By print(f"Checking Browser driver...") options = Options() options.add_argument("start-maximized") options.add_argument('--log-level=3') options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1}) options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option('excludeSwitches', ['enable-logging']) options.add_experimental_option('useAutomationExtension', False) options.add_argument('--disable-blink-features=AutomationControlled') srv = Service() link = "https://asu.kvs-sachsen.de/arztsuche/pages/search.jsf" driver = webdriver.Chrome(service=srv, options=options) waitWD = WebDriverWait(driver, 10) driver.get(link) # First click on the drop-down menu waitWD.until(EC.presence_of_element_located((By.XPATH, '//div[@id="searchForm:specialismDetail:selectWindow"]'))).click() # Then search for the exact string element = waitWD.until(EC.presence_of_element_located((By.XPATH, "//span[text()='Psychiatrie']"))) driver.execute_script("arguments[0].click();", element)
2
1
78,571,907
2024-6-3
https://stackoverflow.com/questions/78571907/creating-a-new-column-by-multiplying-the-value-of-row-above-with-the-value-of-th
I am trying to create a new column of percentages which is the product of the value in the row above and the value in the same row of another column. I have tried using shift and loc with no luck. I have tried using: df.at[0,'new_col'] = df.at[0,'other_col'] This first part works well and then df['new_col'] = df['new_col'].shift(1)*df['other_col'] this however does not work. My data example is as follows: time val adj_val 0 1 1 1 0.5 0.5 2 0.6 0.3 3 0.7 0.21 4 0.9 0.189 I have been trying to work around this for a while, such as using df.loc, but with no luck. The values in adj_val are calculated as follows: 1 = 1 as per the first line of code - this works then 0.5 = 1 * 0.5 - the 1 is the first value in adj_val and 0.5 is in the val column 0.21 = 0.3*0.7 0.189 = 0.9*0.21
You cannot achieve the desired result with shift. shift will give you access to the previous row before any computation is performed, but you need the new value to become the next reference. What you want is a cumulative product, there is already a method for that in pandas: cumprod: df['adj_val'] = df['val'].cumprod() It is also achievable with numpy.multiply.accumulate: import numpy as np df['adj_val'] = np.multiply.accumulate(df['val']) Output: time val adj_val 0 0 1.0 1.000 1 1 0.5 0.500 2 2 0.6 0.300 3 3 0.7 0.210 4 4 0.9 0.189
3
4
78,561,921
2024-5-31
https://stackoverflow.com/questions/78561921/how-would-i-implement-the-hausdorff-distance-using-gekko
If the following function took in arrays A and B that contain arrays of Gekko variables instead of floats, how would I find the Hausdorff distance, aka, how would I define np.inf and modify the rest of the code? def hausdorff_distance(A, B): """ Compute the Hausdorff distance between two sets of points A and B. """ dist = 0 for a in A: min_dist = np.inf for b in B: dist = np.linalg.norm(a - b) if dist < min_dist: min_dist = dist if min_dist > dist: dist = min_dist return dist
Use the m.min2() function to calculate the minimum of two values. Here is an example implementation of the minimum distance function with Gekko operators. from gekko import GEKKO import numpy as np def hausdorff_distance(A, B): """ Compute the Hausdorff distance between two sets of points A and B, where A and B contain arrays of Gekko variables. """ j=0 min_dist = 1e9 for a in A: for b in B: diff = [a[i] - b[i] for i in range(2)] norm = m.Intermediate(m.sqrt(sum([diff[i]**2 for i in range(2)]))) min_dist = m.Intermediate(m.min2(norm,min_dist)) return min_dist m = GEKKO(remote=False) A = m.Array(m.Var, (3,2)) B = m.Array(m.Param, (3,2)) # Assign values to A and B for demonstration np.random.seed(10) for i in range(3): for j in range(2): A[i,j].value = 5*np.random.random() B[i,j].value = 10*np.random.random() min_distance = hausdorff_distance(A, B) m.Equation(min_distance>=1) m.Minimize(min_distance) m.options.SOLVER = 'APOPT' m.solve(disp=True) print(f'Hausdorff Distance: {min_distance.value[0]}') There is a constraint that min_distance>=1 and the objective is to minimize min_distance. For these random numbers, it converges with the following solver output: ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 9 Constants : 0 Variables : 40 Intermediates: 18 Connections : 27 Equations : 38 Residuals : 20 Number of state variables: 52 Number of total equations: - 37 Number of slack variables: - 1 --------------------------------------- Degrees of freedom : 14 ---------------------------------------------- Steady State Optimization with APOPT Solver ---------------------------------------------- Iter Objective Convergence 0 9.90552E+18 1.00000E+09 1 2.38606E+11 1.00000E+09 2 1.04285E+09 2.80978E+01 3 1.03024E+09 2.43451E+01 4 2.56078E+13 2.00741E+01 5 1.01917E+09 2.00741E+01 6 2.94752E+13 1.51037E+01 7 1.00988E+09 1.51037E+01 8 2.26979E+13 2.67173E+01 9 1.76442E+11 2.67173E+01 Iter Objective Convergence 10 9.99849E+08 1.51037E+01 11 9.98865E+08 3.91030E+06 12 4.99434E+08 8.53990E+09 13 9.12931E+06 1.24481E+12 14 1.39486E+09 1.54845E+11 15 6.87536E+07 1.51693E+11 16 2.16440E+08 3.71750E+10 17 1.64817E+21 3.63778E+10 18 6.94013E+20 1.40348E+10 19 2.19541E+21 5.90979E+09 Iter Objective Convergence 20 9.46806E+11 1.86947E+09 21 1.00000E+00 8.06243E-01 22 1.00000E+00 5.72972E-03 23 1.00000E+00 3.86073E-04 24 1.00000E+00 1.23727E-06 25 1.00000E+00 4.54747E-13 26 1.00000E+00 4.54747E-13 Successful solution --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 1.619999999820720E-002 sec Objective : 1.00000000000000 Successful solution --------------------------------------------------- Hausdorff Distance: 1.0 Gekko calculates all of the distance norms simultaneously and uses MPCC function m.min2() for continuous derivatives that are compatible with the gradient-based optimizers. Another function is the m.min3() that uses binary variables, but then the problem becomes Mixed Integer Nonlinear Programming (MINLP) and may be more challenging to solve with the APOPT solver.
2
1
78,570,861
2024-6-3
https://stackoverflow.com/questions/78570861/smarter-way-to-create-diff-between-two-pandas-dataframes
I have two pandas dataframes which represent a directory structure with file hashes, like

import pandas as pd

dir_old = pd.DataFrame([
    {"Filepath": "dir1/file1", "Hash": "hash1"},
    {"Filepath": "dir1/file2", "Hash": "hash2"},
    {"Filepath": "dir2/file3", "Hash": "hash3"},
])
dir_new = pd.DataFrame([
    # {"Filepath": "dir1/file1", "Hash": "hash1"},  # deleted file
    {"Filepath": "dir1/file2", "Hash": "hash2"},
    {"Filepath": "dir2/file3", "Hash": "hash5"},  # changed file
    {"Filepath": "dir1/file4", "Hash": "hash4"},  # new file
])

The dir_new shows the content of the directory structure after some changes. To compare these two dataframes I use

df_merged = pd.merge(dir_new, dir_old, how='outer', indicator=True)
print(df_merged)

This will return

     Filepath   Hash      _merge
0  dir1/file1  hash1  right_only
1  dir1/file2  hash2        both
2  dir1/file4  hash4   left_only
3  dir2/file3  hash3  right_only
4  dir2/file3  hash5   left_only

It is easy to identify the right_only rows as deleted, both as unchanged and left_only as new files. However, what to do about the modified file dir2/file3, which appears twice, as right_only and left_only?

I did the following:

# The indicator column _merge has categorical values.
# We need to convert it to string to be able to add a new value `modified` later
df_merged["State"] = df_merged["_merge"].astype(str)
df_merged = df_merged.drop(columns=["_merge"])

# Identify the rows with duplicated filepath and only keep the new (left_only) ones
modified = df_merged[df_merged.duplicated(subset=["Filepath"], keep=False)]
keep = modified[modified["State"] == "left_only"]
drop = modified[modified["State"] == "right_only"]

# Rename the state of the new modified files to `changed` and drop the old duplicated row
df_merged.iloc[keep.index, df_merged.columns.get_loc("State")] = "changed"
df_dropped = df_merged.drop(drop.index)

# Finally rename the State for all the remaining rows
df_final = df_dropped.replace(to_replace=["right_only", "left_only", "both"], value=["deleted", "created", "equal"]).reset_index(drop=True)
print(df_final)

The output is

     Filepath   Hash    State
0  dir1/file1  hash1  deleted
1  dir1/file2  hash2    equal
2  dir1/file4  hash4  created
3  dir2/file3  hash5  changed

So it works, but it strikes me as a very complicated solution. Is there maybe a smarter way to create a diff between these two dataframes, and especially to identify the modified rows between dir_old and dir_new?
I'd do it first by merging only by Filepath and then compare Hash_x/Hash_y and indicator accordingly (seems straightforward to me): df = dir_new.merge(dir_old, on="Filepath", how="outer", indicator=True) hash_changed = df["Hash_x"] != df["Hash_y"] deleted = df["_merge"] == "right_only" created = df["_merge"] == "left_only" changed = (df["_merge"] == "both") & hash_changed unchanged = (df["_merge"] == "both") & ~hash_changed df.loc[deleted, "Status"] = "deleted" df.loc[created, "Status"] = "created" df.loc[changed, "Status"] = "changed" df.loc[unchanged, "Status"] = "unchanged" df["Hash"] = df[["Hash_x", "Hash_y"]].bfill(axis=1)["Hash_x"] print(df[["Filepath", "Hash", "Status"]]) Prints: Filepath Hash Status 0 dir1/file1 hash1 deleted 1 dir1/file2 hash2 unchanged 2 dir1/file4 hash4 created 3 dir2/file3 hash5 changed EDIT: Example of dynamic column names: df = dir_new.merge(dir_old, on="Filepath", how="outer", indicator=True) column_names = dir_new.columns.difference(["Filepath"]) hash_changed = df["Hash_x"] != df["Hash_y"] deleted = df["_merge"] == "right_only" created = df["_merge"] == "left_only" changed = (df["_merge"] == "both") & hash_changed unchanged = (df["_merge"] == "both") & ~hash_changed df.loc[deleted, "Status"] = "deleted" df.loc[created, "Status"] = "created" df.loc[changed, "Status"] = "changed" df.loc[unchanged, "Status"] = "unchanged" for c in column_names: df[c] = df[[f"{c}_x", f"{c}_y"]].bfill(axis=1)[f"{c}_x"] print(df[["Filepath", *column_names, "Status"]])
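A small variation on the same merge, shown only as a sketch: np.select can collapse the four .loc assignments into a single call, and fillna can stand in for the bfill trick when picking the surviving hash. All column names are the ones from the question.

import numpy as np

df = dir_new.merge(dir_old, on="Filepath", how="outer", indicator=True)
conditions = [
    df["_merge"] == "right_only",   # only in dir_old -> deleted
    df["_merge"] == "left_only",    # only in dir_new -> created
    df["Hash_x"] != df["Hash_y"],   # present in both but hash differs -> changed
]
df["Status"] = np.select(conditions, ["deleted", "created", "changed"], default="unchanged")
df["Hash"] = df["Hash_x"].fillna(df["Hash_y"])
print(df[["Filepath", "Hash", "Status"]])

np.select evaluates the conditions in order and takes the first match, so the deleted/created cases win before the hash comparison is consulted.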
4
1
78,571,193
2024-6-3
https://stackoverflow.com/questions/78571193/avoiding-iteration-in-pandas-when-i-want-to-update-the-value-in-a-column-x-when
I have the following pandas dataframe: key1 key2 col_name bool col_1 col_2 col_3 a1 a2 col_1 0 5 10 20 b1 b2 col_3 1 10 10 5 c1 c2 col_1 1 5 15 5 Where bool==1, I would like to update the value in the column given by the col_name column to be 100. Expected output: key1 key2 col_name bool col_1 col_2 col_3 a1 a2 col_1 0 5 10 20 b1 b2 col_3 1 10 10 100 c1 c2 col_1 1 100 15 5 I can do this by iterating through the table, but from what I've read this is never best practice. What would be the most efficient way of doing this?
Build a boolean mask with numpy and update: # identify cells for which the col_name matches the column name # only keep those that have a bool of 1 in the row m = ((df['col_name'].to_numpy()[:, None] == df.columns.to_numpy()) & df['bool'].eq(1).to_numpy()[:, None] ) df[m] = 100 Output: key1 key2 col_name bool col_1 col_2 col_3 0 a1 a2 col_1 0 5 10 20 1 b1 b2 col_3 1 10 10 100 2 c1 c2 col_1 1 100 15 5 Intermediate m: array([[False, False, False, False, False, False, False], [False, False, False, False, False, False, True], [False, False, False, False, True, False, False]])
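If building the full row-by-column mask feels heavy for a very wide frame, a loop over the distinct target columns is a possible alternative (a sketch using the same df and column names as the question); it performs one vectorized assignment per distinct col_name value, so it stays cheap as long as only a few columns are ever targeted.

# one boolean-indexed assignment per distinct target column
for col in df["col_name"].unique():
    df.loc[df["bool"].eq(1) & df["col_name"].eq(col), col] = 100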
4
3
78,569,897
2024-6-3
https://stackoverflow.com/questions/78569897/survey-data-many-periods-transformation-to-current-and-previous-period-wide-to
I have a data frame (survey data) called df that looks like this (this is sample data): respondent_id r1age r2age r3age r4age r1smoke r2smoke r3smoke r4smoke r1income r2income r3income r4income 16178 35 38 41 44 1 1 1 1 60 62 68 70 161719 65 68 71 74 0 0 0 1 50 52 54 56 161720 47 50 53 56 0 1 0 1 80 82 85 87 The number after the "r" or "h" represents the wave or period of each interview. For this particular example, there are only four interviews for each respondent, and data for 3 different variables (age, whether the respondent smokes, and his/her gross annual income in $10,000). I'm interested in transforming this to get the following instead: respondent_id t_1_period t_age t_1_age t_smoke t_1_smoke t_income t_1_income 16178 1 38 35 1 1 62 60 16178 2 41 38 1 1 68 62 16178 3 44 41 1 1 70 68 161719 1 68 65 0 0 52 50 161719 2 71 68 0 0 54 52 161719 3 74 71 1 0 56 54 161720 1 50 47 1 0 82 80 161720 2 53 50 0 1 85 82 161720 3 56 53 1 0 87 85 I'm interested in repeating the respondents such that the number of observations for each respondent are the number of interviews/waves - 1 (that is, the unique transitions), and for each variable there must be t (current period) and t_1 (previous period) columns, again, for each transition. Additionally, I add a t_1_period column representing the number of the previous period for that observation. I have tried the following: df = pd.melt(df, id_vars=["respondent_id"]) variable_names = ["age", "smoke", "income"] new_rows = [] for respondent_id in df["respondent_id"].unique(): df_temp = df[df["respondent_id"] == respondent_id] for i in range(2, 5): new_row = {"respondent_id": respondent_id, "t_1_period": i-1} for var in variable_names: if var not in ["income"]: current_var = f"r{i}{var}" previous_var = f"r{i-1}{var}" new_row[f"t_{var}"] = df_temp[df_temp["variable"] == current_var]["value"].values[0] new_row[f"t_1_{var}"] = df_temp[df_temp["variable"] == previous_var]["value"].values[0] elif var == "income": current_var = f"h{i}{var}" previous_var = f"h{i-1}{var}" new_row[f"t_h{var}"] = df_temp[df_temp["variable"] == current_var]["value"].values[0] new_row[f"t_1_h{var}"] = df_temp[df_temp["variable"] == previous_var]["value"].values[0] new_rows.append(new_row) df_periods = pd.DataFrame(new_rows) In my real data, I have much more than 3 variables: I sometimes have up to 100. Additionally, all variables are always present for all periods, however some of them can have NaNs, but the columns are there. In terms of respondents, I can also have a lot: as much as 50,000 for example. Note that some variables start with "h" instead of "r", and others with "s" (not present in this example). My question: is there a faster way of transforming this? Every time I want to transform the data in this t vs. t-1 version for all variables I decide to include in variable_names I have to wait a lot. I believe there must be a better way of doing this. I appreciate your help, thank you.
There are many ways to approach this. wide_to_long is an option, but you would need to pre-process the column names (it expects the stubnames as prefixes, not suffixes). I'd suggest using a MultiIndex and stack; here is an example that doesn't require knowing the stubnames:

# set aside respondent_id and create a MultiIndex
tmp = df.set_index('respondent_id')
tmp.columns = pd.MultiIndex.from_frame(tmp.columns.str.extract(r'[rh](\d+)(\D+)'),
                                       names=['t_1_period', None])

# reshape
tmp = tmp.stack(0, future_stack=True)

# concatenate the long format with a shifted version of itself
out = (pd.concat([tmp.groupby(level=0).shift(-1), tmp],
                 keys=['t', 't_1'], axis=1)
         .sort_index(axis=1, level=1, sort_remaining=False)
       )

# flatten MultiIndex
out.columns = out.columns.map('_'.join)
out.reset_index(inplace=True)

# remove the last value per group
out = out[out['respondent_id'].duplicated(keep='last')].convert_dtypes()

Output:

    respondent_id  t_1_period  t_age  t_1_age  t_income  t_1_income  t_smoke  t_1_smoke
0           16178           1     38       35        62          60        1          1
1           16178           2     41       38        68          62        1          1
2           16178           3     44       41        70          68        1          1
4          161719           1     68       65        52          50        0          0
5          161719           2     71       68        54          52        0          0
6          161719           3     74       71        56          54        1          0
8          161720           1     50       47        82          80        1          0
9          161720           2     53       50        85          82        0          1
10         161720           3     56       53        87          85        1          0
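For completeness, here is a rough sketch of the wide_to_long route mentioned above. The renaming step and its regex are assumptions about the column pattern (a leading r/h/s, the wave number, then the variable name), and the shift/concat step from the code above would still have to be applied to the reshaped result.

import re

# move the wave number to the end so the stub comes first, e.g. "r2age" -> "age2"
renamed = df.rename(columns=lambda c: re.sub(r'^[rhs](\d+)(\D+)$', r'\2\1', c))
stubs = sorted({re.sub(r'\d+$', '', c) for c in renamed.columns if c != 'respondent_id'})

long_df = pd.wide_to_long(renamed, stubnames=stubs,
                          i='respondent_id', j='wave', suffix=r'\d+')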
4
3
78,570,583
2024-6-3
https://stackoverflow.com/questions/78570583/store-methods-in-class-in-order-they-are-written
This might be a two part question. First is the context if better alternatives can be suggested. Second part is the (probably) XY problem for my solution. xy - Have a method in the class which returns a list of all functions in the class in order they were written/typed. My projects typically involve long sequences of queries/data transformations. I want to be able to wrap each step as a function so that the docstrings can be discoverable by those tools. But don't want to have to retype every function call at the end of each notebook. Undesirable way: def print_something_else(): """ Some docstring here """ print("something else") def print_some_other_thing(): """ Some other docstring here """ print("something other thing") def print_something(): print("something") print_something_else() print_some_other_thing() print_something() I would like to just define the functions and have some wrapper so that they are run in order. Attempt: class RegisteredCommandSequence: def __init__(self): def __include(method): only_include_if = [ type(getattr(self, method)).__name__ == 'method', not(method.startswith('__')), method not in ('run_all', 'run_from_to')] return all(only_include_if) self._public_nonbuiltin_callable_methods = [ method for method in dir(self) if __include(method)] def run_all(self): for func in self._public_nonbuiltin_callable_methods: getattr(self, func)() class QueryBundle(RegisteredCommandSequence): def __init__(self): super().__init__() def print_something_else(self): """ Some docstring here """ print("something else") def print_some_other_thing(self): """ Some other docstring here """ print("something other thing") def print_something(self): print("something") test = QueryBundle() test.run_all() something other thing something something else The problem is the dir() doesn't preserve the order in which the functions were written, which is vital. If in the class itself I have a list of all the in-scope methods, it'd be very convenient to be able to run subsequences like test.run_from_to('print_something_else', 'print_some_other_thing'). Things I've considered Decorators which dictate the order of execution of each. Using the classes __dict__, which seems to preserve the order, but is empty when defined how I'm trying to use it. example: class ShowEmptyDict: def __init__(self): self.print_methods_in_class() def print_methods_in_class(self): return(self.__dict__) demo = ShowEmptyDict() if demo.print_methods_in_class(): print('Not empty') else: print('empty')
I would advise against source code order dictating execution order, but if you really need to, you can use inspect.getsourcelines() to get the line number at which an object's source starts.

import inspect

class Funcs:
    def func_8(self):
        pass

    def func_5(self):
        pass

    def func_4(self):
        pass

instance = Funcs()

funcs = [getattr(instance, func) for func in dir(instance) if inspect.ismethod(getattr(instance, func))]
funcs.sort(key=lambda func: inspect.getsourcelines(func)[1])
print(funcs)

which prints out

[
    <bound method Funcs.func_8 of <__main__.Funcs object at 0x1175b0740>>,
    <bound method Funcs.func_5 of <__main__.Funcs object at 0x1175b0740>>,
    <bound method Funcs.func_4 of <__main__.Funcs object at 0x1175b0740>>,
]

This generalizes to any namespace where you have functions, of course, not just a class instance.
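In the spirit of the advice above (and the decorator idea the question already considered), the execution order can also be stated explicitly rather than recovered from line numbers. A minimal sketch, with all names invented for illustration:

class Pipeline:
    _steps = []

    @classmethod
    def step(cls, func):
        # registration order is decoration order, i.e. the order written in the class body
        cls._steps.append(func.__name__)
        return func

    def run_all(self):
        for name in self._steps:
            getattr(self, name)()


class QueryBundle(Pipeline):
    @Pipeline.step
    def print_something_else(self):
        """Some docstring here"""
        print("something else")

    @Pipeline.step
    def print_something(self):
        print("something")


QueryBundle().run_all()  # prints "something else", then "something"

The registry here lives on the base class, so several bundles in one process would need it keyed per subclass; the point of the sketch is only that the order is declared, not inferred.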
1
2
78,556,850
2024-5-30
https://stackoverflow.com/questions/78556850/attributeerror-emailaddressmanager-object-has-no-attribute-is-verified
I get the following error while attempting to a register a user with the help of DRF, dj-rest-auth and django-allauth: AttributeError at /api/v1/dj-rest-auth/registration/ 'EmailAddressManager' object has no attribute 'is_verified' Here is the templates part of settings.py file: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [BASE_DIR / "templates"], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', "django.template.context_processors.request", ], }, }, ] EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend" SITE_ID = 1 and project level urls.py file: urlpatterns = [ path('admin/', admin.site.urls), path('', include('apps.pages.front.urls')), path('api/v1/', include("apps.contacts.api.urls")), path('api-auth/', include("rest_framework.urls")), path("api/v1/dj-rest-auth/", include("dj_rest_auth.urls")), path("api/v1/dj-rest-auth/registration/", include("dj_rest_auth.registration.urls")), ] And just in case if order of installed apps matter, here is my installed_apps: INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'whitenoise.runserver_nostatic', 'django.contrib.staticfiles', 'django.contrib.sites', "rest_framework", "corsheaders", "rest_framework.authtoken", "allauth", "allauth.account", "allauth.socialaccount", "dj_rest_auth", 'dj_rest_auth.registration', 'drf_spectacular', 'django_filters', 'apps.pages.apps.PagesConfig', 'apps.accounts.apps.AccountsConfig', 'apps.contacts.apps.ContactsConfig', ] (i'm keeping my apps in a dedicated apps folder, and have created a custom user model) Leaving the email field empty will successfully register a new user, but it won't work if i add an email.
This issue occurs when there is a version mismatch among your django, django-allauth, and dj-rest-auth packages. In my case, I was using the following versions:

django = "^4.2.4"
django-allauth = "^0.54.0"
dj-rest-auth = "^5.0.2"

To resolve the issue, I downgraded dj-rest-auth to version "^5.0.1". I recommend upgrading all three packages to their latest versions, and it should then work as intended.
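For example, pinning compatible versions explicitly (the version numbers below are simply the combination from this answer; check the packages' changelogs for the set that matches your Django release):

pip install "Django==4.2.*" "django-allauth==0.54.0" "dj-rest-auth==5.0.1"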
2
1