question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
78,402,155 | 2024-4-29 | https://stackoverflow.com/questions/78402155/inconsistencies-in-time-arithmetic-with-timezone-aware-datetime-objects-in-pytho | Surprisingly, time arithmetic is not handled as expected with pythons timezone aware objects. For example consider this snippet that creates a timezone aware object at 2022-10-30 02:00. from datetime import datetime, timezone, timedelta from zoneinfo import ZoneInfo zone = ZoneInfo("Europe/Madrid") HOUR = timedelta(hours=1) u0 = datetime(2022, 10, 30, 2, tzinfo=zone) At 2:59 the clocks shifted back to 2:00, marking the ending of the daylight saving time period. This makes 2022-10-30 02:00 ambiguous. In 2022-10-30 the clocks showed 2:00 twice. Fist comes the DST instance 2022-10-30 02:00:00+02:00 followed by the winter time instance 2022-10-30 02:00:00+01:00, when the timezone shifted from CEST to CET. Python solves the ambiguity by selecting u0 to be the first of the two instances, the one within the DST interval. This is verified by by printing out u0 and its timezone name: >>> print(u0) 2022-10-30 02:00:00+02:00 >>> u0.tzname() 'CEST' which is the central European daylight saving timezone. If we add one hour to u0, the passage to CET, i.e. the winter timezone for central Europe, is correctly detected. >>> u1 = u0 + HOUR >>> u1.tzname() 'CET' However, the time does not fold back to 2:00, as expected: >>> print(u1) 2022-10-30 03:00:00+01:00 'CET' So, with the addition of a 1h interval, it looks as if 2 hours have passed. One hour due to the wall time shifting from 2:00 to 3:00 and another one due to the change of the timezone that is shifted 1h towards UTC. Conversely one would expect u1 to print out as 2022-10-30 02:00:00+01:00. This 2 hour shift is verified by converting u0 and u1 to UTC: >>> u0.astimezone(timezone.utc) datetime.datetime(2022, 10, 30, 0, 0, tzinfo=datetime.timezone.utc) >>> u1.astimezone(timezone.utc) datetime.datetime(2022, 10, 30, 2, 0, tzinfo=datetime.timezone.utc) To make things worse, the time interval between u1 and u0 is inconsistently calculated depending on the chosen timezone. On the one hand we have: >>> u1 - u0 datetime.timedelta(seconds=3600) which is equivalent to a 1h interval. On the other hand, if we do the same calculation in UTC: >>> u1.astimezone(timezone.utc) - u0.astimezone(timezone.utc) datetime.timedelta(seconds=7200) the calculated interval is 2h. In conclusion, it appears that Python's handling of timedelta in timezone-aware datetimes emphasizes local clock times rather than consistent logical intervals, leading to potential discrepancies when crossing DST boundaries. In my opinion, this can be misleading, as the existence of the zoneinfo library gives the impression that these kind of problems have been solved. Does anyone know if this is a bug or expected behaviour? Has anyone else encountered this issue, and how do you manage these discrepancies in your applications? If this is expected behavior, perhaps Python documentation should provide clearer guidance or warnings about performing time arithmetic with timezone-aware objects. Edit I have verified the described behavior with python 3.11 and 3.12. Similar results are obtained for other time zones, e.g. "Europe/Athens", but I have not performed an extensive check for all time zones. | In addition to my comment, here's an example using fold to maybe help clarify things (note the slightly modified time): from datetime import datetime, timezone, timedelta from zoneinfo import ZoneInfo zone = ZoneInfo("Europe/Madrid") HOUR = timedelta(hours=1) u0 = datetime(2022, 10, 30, 1, 59, 59, tzinfo=zone) print(u0+HOUR) # one hour later on a *wall clock*... 2022-10-30 02:59:59+02:00 # CEST print((u0+HOUR).replace(fold=1)) # specify on which side of the "DST fold" we want to be... 2022-10-30 02:59:59+01:00 # CET You can use fold to specify if the datetime should fall on a specific side of the DST "fold". Applying Python's timedelta arithmetic, wall time still doesn't care: print((u0+HOUR).replace(fold=1)-u0) 1:00:00 print((u0+HOUR)-u0) 1:00:00 So if you want timedelta arithmetic to be absolute time arithmetic, use UTC (as shown in the OP). Where things get really weird is DST gaps (non-existent datetimes): u0 = datetime(2022, 3, 27, 1, 59, 59, tzinfo=zone) print(u0+timedelta(seconds=1)) 2022-03-27 02:00:00+01:00 # non-existent datetime; should be 03:00 CEST Again, to circumvent such oddities: Use absolute add in UTC; print((u0.astimezone(timezone.utc)+timedelta(seconds=1)).astimezone(zone)) 2022-03-27 03:00:00+02:00 The dst tag wiki has more details and illustrations of DST gaps and folds. | 5 | 1 |
78,404,120 | 2024-4-29 | https://stackoverflow.com/questions/78404120/having-trouble-with-negative-lookahead-regex-in-python | So, I have to match everything before the last open-close parenthesis and get that in group. And after the last open-close parenthesis try to get the value with some pattern, again in group. This is my sample example: ID pqr () name:123. And this is my regex: ^(?P<JUNK>.*?)(?!\(.\))(\(.*\))?\sname\:(?P<name>\d+)\.$ And now with this I am getting ID pqr for JUNK key and 123 for name key which is perfect. And now, with this regex, it's working fine for below strings: ID pqr (a) () name:123. ID pqr (a) (b) () name:123. ID pqr (a) (b) () name:123. ID pqr (a) (b) (XX) name:123. in returns I am getting these outputs: {'JUNK': 'ID pqr ', 'name': '123'} {'JUNK': 'ID pqr (a) ', 'name': '123'} {'JUNK': 'ID pqr (a) (b) ', 'name': '123'} {'JUNK': 'ID pqr (a) (b) ', 'name': '123'} So far, it's working fine with above strings, but for the below ones I am having some trouble ID pqr (a) (b) (X) name:123. ID pqr (aa) (b) (X) name:123. ID pqr (a) (bb) (X) name:123. For these strings, I am getting output like this: {'JUNK': 'ID pqr (a) (b) (X)', 'name': '123'} {'JUNK': 'ID pqr ', 'name': '123'} {'JUNK': 'ID pqr (a) ', 'name': '123'} But basically I want like this: {'JUNK': 'ID pqr (a) (b) ', 'name': '123'} {'JUNK': 'ID pqr (aa) (b) ', 'name': '123'} {'JUNK': 'ID pqr (a) (bb) ', 'name': '123'} This is my regex101 attempt: https://regex101.com/r/AV8WlB/4 Can anyone point out where I am going wrong? | Try: ^(?P<JUNK>.*?)(\([^)]*\)\s)?name:(?P<name>\d+)\.$ See: regex101 Explanation ^: Start of string... (?P<JUNK>.*?): followed by JUNK... (\([^)]*\)\s)?: then, if applicable the last pair of brackets... name:: before literal "name:"... (?P<name>\d+): and collecting the number before... \.$: final "." at the end of the string. | 2 | 3 |
78,396,578 | 2024-4-27 | https://stackoverflow.com/questions/78396578/how-to-make-my-tkinter-app-fit-the-whole-window-no-matter-the-size | I made a Tkinter app using Python. I want it to auto-resize and fill the whole window no matter how I resize it. The problem is that when I expand the window size (using my cursor), grey space appears on the borders to fill the gaps (see the image below). Grey space filling Here is the code minus the unnecessary details : import tkinter as tk from tkinter import ttk import openpyxl root = tk.Tk() frame = ttk.Frame(root) frame.pack() widgets_frame = ttk.LabelFrame(frame, text = "Insert Data") widgets_frame.grid(row = 0, column = 0, padx = 20, pady = 10) day_label = ttk.Label(widgets_frame, text = "Date") day_label.grid(row = 1, column = 0, padx = 5, pady = (0, 5), sticky = "ew") day_entry = ttk.Entry(widgets_frame) day_entry.grid(row = 2, column = 0, padx = 5, pady = (0, 5), sticky = "ew") walking_entry = ttk.Entry(widgets_frame) walking_entry.insert(0, "Walking duration (min)") walking_entry.bind("<FocusIn>", lambda e: walking_entry.delete('0', 'end')) walking_entry.grid(row = 3, column = 0, padx = 5, pady = (0, 5), sticky = "ew") button = ttk.Button(widgets_frame, text = "Insert") button.grid(row = 4, column = 0, padx = 5, pady = 5, sticky = "nsew") treeFrame = ttk.Frame(frame) treeFrame.grid(row = 0, column = 1, pady = 10) treeScroll = ttk.Scrollbar(treeFrame) treeScroll.pack(side = "right", fill = "y") cols = ("Day", "Walking") treeview = ttk.Treeview(treeFrame, show = "headings", yscrollcommand = treeScroll.set, columns = cols, height = 13) treeview.column("Day", width = 100) treeview.column("Walking", width = 50) treeview.pack() treeScroll.config(command = treeview.yview) root.mainloop() Does anybody have a suggestion that would help me solve this issue ? Thanks. I added this line of code because, in another thread, they said it helps with expanding the widgets equally when the window size increases : root.grid_columnconfigure(0,weight = 1) root.grid_columnconfigure(1,weight = 1) root.grid_rowconfigure(0, weight = 1) root.grid_rowconfigure(1, weight = 1) root.grid_rowconfigure(2, weight = 1) root.grid_rowconfigure(3, weight = 1) root.grid_rowconfigure(4, weight = 1) Unfortunately, even after doing so, I didn't notice any changes. | You could use expand = True, fill = BOTH as an attribute for the required widgets to expand them to full size. | 2 | 1 |
78,406,021 | 2024-4-30 | https://stackoverflow.com/questions/78406021/pandas-series-get-value-by-index-with-fill-value-if-the-index-does-not-exist | ts = pd.Series({'a' : 1, 'b' : 2}) ids = ['a','c'] # 'c' is not in the index # the result I want np.array([ts.get(k, np.nan) for k in ids]) Is there a pandas native way to achieve this? | reindex and access the underlying numpy array: out = ts.reindex(ids).values # or # out = ts.reindex(ids).to_numpy() Output: array([ 1., nan]) Note that the fill_value in reindex is NaN by default, you can change it if desired: ts.reindex(ids, fill_value=0).values # array([1, 0]) | 2 | 2 |
78,405,251 | 2024-4-29 | https://stackoverflow.com/questions/78405251/create-dummy-for-missing-values-for-variable-in-python | I have the following dataframe in: a 1 3 2 2 3 Nan 4 3 5 Nan I need to recode this column so it looks like this: df_miss_a 1 0 2 0 3 1 4 0 5 1 I've tried: df_miss_a = np.where(df['a'] == 'Nan', 1, 0) and df_miss_a = np.where(df['a'] == Nan, 1, 0) The above outputs only 0s. The format of the output is unimportant. | If you have NaNs in your column you can use pd.Series.isna(): df_miss_a = df["a"].isna().astype(int) print(df_miss_a) Prints: 1 0 2 0 3 1 4 0 5 1 Name: a, dtype: int64 | 2 | 4 |
78,404,121 | 2024-4-29 | https://stackoverflow.com/questions/78404121/how-to-get-the-epoch-value-for-any-given-local-time-with-time-zone-daylight-sa | I'm new to Python and I have a use case which deals with datetime. Unfortunately, the data is not in UTC, so I have to rely on Local Time, Time Zones & Daylight Savings. I tried with datetime & pytz to get the epoch value from datetime, but its not working out. Approach 1 : If I try creating datetime object, localize it to Brussels time and use strftime to convert to epoch, I'm getting valid response for the window outside Daylight Saving Time (DST). But for time within DST, I'm getting a response of -1. Approach 2 : If I give the time zone details while creating the datetime object, and try to convert that to epoch, its giving value same as that for IST. from datetime import datetime import pytz print('localize') print(pytz.timezone("Asia/Kolkata").localize(datetime(2024, 10, 27, 0,0,0)).strftime('%s')) print(pytz.timezone("Europe/Brussels").localize(datetime(2024, 10, 27, 0,0,0)).strftime('%s')) print(pytz.timezone("Europe/Brussels").localize(datetime(2024, 10, 27, 6,0,0)).strftime('%s')) print('tzinfo') print(datetime(2024, 10, 27, 0,0,0, tzinfo=pytz.timezone("Asia/Kolkata")).strftime('%s')) print(datetime(2024, 10, 27, 0,0,0,tzinfo=pytz.timezone("Europe/Brussels")).strftime('%s')) print(datetime(2024, 10, 27, 6,0,0, tzinfo=pytz.timezone("Europe/Brussels")).strftime('%s')) Output: localize 1729967400 -1 1729989000 tzinfo 1729967400 1729967400 1729989000 What's even more confusing is, if I print the datetime values without formatting, Approach 1 shows valid info, while Approach 2 shows non-rounded offset values as shown below. print('datetime values showing appropriate offset') print(pytz.timezone("Asia/Kolkata").localize(datetime(2024, 10, 27, 0, 0, 0))) print(pytz.timezone("Europe/Brussels").localize(datetime(2024, 10, 27, 0, 0, 0))) print(pytz.timezone("Europe/Brussels").localize(datetime(2024, 10, 27, 6, 0, 0))) print('datetime values showing different offset') print(datetime(2024, 10, 27, 0,0,0, tzinfo=pytz.timezone("Asia/Kolkata"))) print(datetime(2024, 10, 27, 0,0,0,tzinfo=pytz.timezone("Europe/Brussels"))) print(datetime(2024, 10, 27, 6,0,0, tzinfo=pytz.timezone("Europe/Brussels"))) Output: datetime values showing appropriate offset 2024-10-27 00:00:00+05:30 2024-10-27 00:00:00+02:00 2024-10-27 06:00:00+01:00 datetime values showing different offset 2024-10-27 00:00:00+05:53 2024-10-27 00:00:00+00:18 2024-10-27 06:00:00+00:18 Which is better and why so? Not sure if I'm missing anything. Can anyone help on this? | Use the built-in zoneinfo (available since Python 3.9) and .timestamp() instead of the non-standard .strftime('%s') (not supported on Windows). This produces correct results: # `zoneinfo` requires "pip install -U tzdata" for the latest time zone info # on some OSes, e.g. Windows. import datetime as dt import zoneinfo as zi print(dt.datetime(2024, 10, 27, tzinfo=zi.ZoneInfo('Asia/Kolkata')).timestamp()) print(dt.datetime(2024, 10, 27, tzinfo=zi.ZoneInfo('Europe/Brussels')).timestamp()) print(dt.datetime(2024, 10, 27, 6, tzinfo=zi.ZoneInfo('Europe/Brussels')).timestamp()) print(dt.datetime(2024, 10, 27, tzinfo=zi.ZoneInfo('Asia/Kolkata'))) print(dt.datetime(2024, 10, 27, tzinfo=zi.ZoneInfo('Europe/Brussels'))) print(dt.datetime(2024, 10, 27, 6, tzinfo=zi.ZoneInfo('Europe/Brussels'))) Output: 1729967400.0 1729980000.0 1730005200.0 2024-10-27 00:00:00+05:30 2024-10-27 00:00:00+02:00 2024-10-27 06:00:00+01:00 | 2 | 2 |
78,403,517 | 2024-4-29 | https://stackoverflow.com/questions/78403517/iterators-in-jit-jax-functions | I'm new to JAX and reading the docs i found that jitted functions should not contain iterators (section on pure functions) and they bring this example: import jax.numpy as jnp import jax.lax as lax from jax import jit # lax.fori_loop array = jnp.arange(10) print(lax.fori_loop(0, 10, lambda i,x: x+array[i], 0)) # expected result 45 iterator = iter(range(10)) print(lax.fori_loop(0, 10, lambda i,x: x+next(iterator), 0)) # unexpected result 0 trying to fiddling with it a little bit in order to see if i can get directly an error instead of undefined behaviour i wrote @jit def f(x, arr): for i in range(10): x += arr[i] return x @jit def f1(x, arr): it = iter(arr) for i in range(10): x += next(it) return x print(f(0,array)) # 45 as expected print(f1(0,array)) # still 45 Is it a "chance" that the jitted function f1() now shows the correct behaviour? | Your code works because of the way that JAX's tracing model works. When JAX's tracing encounters Python control flow, like for loops, the loop is fully evaluated at trace-time (There's some exploration of this in JAX Sharp Bits: Control Flow). Because of this, your use of an iterator in this context is fine, because every iteration is evaluated at trace-time, and so next(it) is re-evaluated at every iteration. In contrast, when using lax.fori_loop, next(iterator) is only executed a single time and its output is treated as a trace-time constant that will not change during the runtime iterations. | 4 | 2 |
78,398,120 | 2024-4-28 | https://stackoverflow.com/questions/78398120/show-image-in-python-and-matlab-showing-different-colors | I'm working on matlab, but I noticed that the image I was loading becomes black-an-white, when the original image has grays in it. I decided to check the image using a imshow on a new file, but it really makes it black and white. My code is simply: image_path = 'no.png'; img = imread(image_path); imshow(img); axis off; Then I tried to make it grayscale, but the problem was not solved. Then I tried to use python to show the image, and it showed the image correctly. import matplotlib.pyplot as plt import matplotlib.image as mpimg def show_image(image_path): img = mpimg.imread(image_path) plt.imshow(img) plt.axis('off') plt.show() # Example usage image_path = "no.png" show_image(image_path) I don't know what's wrong. This is the original image: This is the image matlab shows: This is the image python shows: | Actually, the problem comes from the fact that your image is not grayscale, but is indexed. image_path = 'no.png'; [img,map] = imread(image_path); % Returns a non-empty map --> indexed image Now, you can get the expected output by using the map: imshow(img,map); axis off; You can look at this answer for how to check what the type of your image is, in particular the following part (trimmed for clarity): [imageArray, colorMap] = imread(fullImageFileName); [rows, columns, numberOfColorChannels] = size(imageArray); % % % % Tell user the type of image it is. if numberOfColorChannels == 3 % This is a true color, RGB image. % elseif numberOfColorChannels == 1 && isempty(colorMap) % This is a gray scale image, with no stored color map. % elseif numberOfColorChannels == 1 && ~isempty(colorMap) % This is an indexed image. It has one "value" channel with a % stored color map that is used to pseudocolor it. | 2 | 2 |
78,400,673 | 2024-4-29 | https://stackoverflow.com/questions/78400673/how-can-i-wait-for-the-actual-reply-of-an-openai-assistant-python-openai-api | I am interacting with the openai assistant API (python). so far, it works well. but sometimes the api returns the message i sent to the assistant instead of the assistants reply. from openai import OpenAI from os import environ OPEN_API_KEY = environ.get('OPEN_API_KEY') assistant_id = "xxxxxxx" client = OpenAI(api_key=OPEN_API_KEY) assistant = client.beta.assistants.retrieve(assistant_id) def ask_assistant(message_text): print(f'received message: {message_text}') thread = client.beta.threads.create() message = client.beta.threads.messages.create( thread_id=thread.id, role="user", content=message_text ) run = client.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant_id ) run_retrieve = client.beta.threads.runs.retrieve( thread_id=thread.id, run_id=run.id ) messages = client.beta.threads.messages.list(thread.id) final_text = messages.data[0].content[0].text.value try: final_text = messages.data[0].content[0].text.value print(final_text) except Exception as e: print(e) final_text = '' return final_text if __name__ == "__main__": ask_assistant('How are you?') as a hot fix, i implemented a sleep function that waits for 3 seconds before retrieving the reply. this works, but i don't expect it to be the best solution (or reliable at all). any ideas how to wait until the assistant REALLY replied? thank you very much. Update: This works: from openai import OpenAI from os import environ OPEN_API_KEY = environ.get('xxxxx') assistant_id = "xxxxx" client = OpenAI(api_key=OPEN_API_KEY) assistant = client.beta.assistants.retrieve(assistant_id) def get_answer(run, thread): while not run.status == "completed": print("Waiting for answer...") run = client.beta.threads.runs.retrieve( thread_id=thread.id, run_id=run.id ) messages = client.beta.threads.messages.list(thread.id) answer = messages.data[0].content[0].text.value try: answer = messages.data[0].content[0].text.value except Exception as e: print(e) answer = '' return answer def ask_assistant(message_text): print(f'sending to assistant: {message_text}') thread = client.beta.threads.create() message = client.beta.threads.messages.create( thread_id=thread.id, role="user", content=message_text ) run = client.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant_id ) answer = get_answer(run, thread) print(f'assistant response: {answer}') return answer if __name__ == "__main__": ask_assistant('Hello') | This should work: run = openai.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant.id ) print(run) while run.status !="completed": run = openai.beta.threads.runs.retrieve( thread_id=thread.id, run_id=run.id ) print(run.status) messages = openai.beta.threads.messages.list( thread_id=thread.id ) print(messages.data[0].content[0].text.value) Thanks to @joyasree78 | 2 | 2 |
78,400,329 | 2024-4-29 | https://stackoverflow.com/questions/78400329/multiplot-in-for-loop-by-importing-only-pandas | Sometime, DataFrame.plot() inside a for loop produces multiple charts. import pandas as pd data = {'Str': ['A', 'A', 'B', 'B'], 'Num': [i for i in range(4)]} df = pd.DataFrame(data) for n in ['A', 'B']: df[df.Str == n].plot(kind='bar') But sometimes, it produces a single chart. import pandas as pd data = {'C1': ['A', 'A', 'B', 'B'], 'C2': [i for i in range(4)], 'C3': [1,2,1,2]} df = pd.DataFrame(data) for n in [1,2]: df[df.C3 == n].groupby('C1').C2.sum().plot(kind='bar') From the previous code, if plt.show() was added at the end of the loop. It will produce multiple charts. import pandas as pd import matplotlib.pyplot as plt data = {'C1': ['A', 'A', 'B', 'B'], 'C2': [i for i in range(4)], 'C3': [1,2,1,2]} df = pd.DataFrame(data) for n in [1,2]: df[df.C3 == n].groupby('C1').C2.sum().plot(kind='bar') plt.show() I don't want to use plt.show(). Actually I want to import only pandas and create multiple charts using a for loop. | I'm not sure why matplotlib behave this way, but your code in the second loop products a Series while the first produces a DataFrame. This might force reusing the same axes. You could add as_index=False in the groupby to have an object that is comparable to the one in the first loop: for n in [1,2]: df[df.C3 == n].groupby('C1', as_index=False).C2.sum().plot(kind='bar') Output: Also note that (explicitly) importing matplotlib should not be an issue. It is already a dependency of pandas and will be imported by pandas anyway, therefore not importing it won't cause any gain in the code. | 2 | 1 |
78,399,639 | 2024-4-28 | https://stackoverflow.com/questions/78399639/how-to-use-io-bytesio-in-python-to-write-to-an-existing-buffer | According to How the write(), read() and getvalue() methods of Python io.BytesIO work? , it seems io.BytesIO will copy initial bytes provided. I have a buffer, and I want pickle to directly dump object in that buffer, without any copy. So I tried to use f = io.BytesIO(buffer). However, after f.write, my buffer is not modified. Here is an example: a = bytearray(1024) import io b = io.BytesIO(memoryview(a)) b.write(b"1234") a[:4] # a does not change What I want, is to make io.BytesIO directly writes to my buffer. My ultimate goal is: a = bytearray(1024) import io, pickle with io.BytesIO(memoryview(a)) as f: obj = [1, 2, 3] pickle.dump(obj, f) # a should hold the pickled data of obj | IIUC you can try to make your own IO class: import io import pickle class MyIO(io.IOBase): def __init__(self, buf): self.buf = memoryview(buf) self.position = 0 def write(self, b): start = self.position n = len(b) end = start + n self.buf[start: end] = b self.position = end a = bytearray(1024) with MyIO(a) as f: obj = [1, 2, 3] pickle.dump(obj, f) print(a) Prints: bytearray(b'\x80\x04\x95\x0b\x00\x00\x00\x00\x00\x00\x00]\x94(K\x01K\x02K\x03e.\x00\x00\x00 ... | 3 | 2 |
78,399,629 | 2024-4-28 | https://stackoverflow.com/questions/78399629/slicing-using-polars-filter-is-slower-than-pandas-loc | I am trying to switch some of my pandas code to polars to leverage it's performance. I have found that the .filter operation is much slower than a similar slicing using .loc. import pandas as pd import polars as pl import datetime as dt import numpy as np date_index = pd.date_range(dt.date(2001,1,1), dt.date(2020,1,1),freq='1H') n = date_index.shape[0] test_pd = pd.DataFrame(data = np.random.randint(1,100, n), index=date_index, columns = ['test']) test_pl = pl.DataFrame(test_pd.reset_index()) test_dates = date_index[np.random.randint(0,n,1000)] st = time.perf_counter() for i in test_dates: d = test_pd.loc[i,:] print(f"Pandas {time.perf_counter() - st}") st = time.perf_counter() for i in test_dates: d = test_pl.filter(index=i) print(f"Polars {time.perf_counter() - st}") Pandas 0.1854726000019582 Polars 2.1125728000042727 Is there some other way to speedup the slicing operation in polars? | Polars doesn't use indexes, so random access of one specific element (if not by row number) will always have to loop over all the data. But you can efficiently get all the dates you are interested in in one go using a left join: test_dates_df = pl.DataFrame({"index": test_dates}) out = test_dates_df.join(test_pl, on="index", how="left") Then out[0] contains the row where the index column matches test_dates[0], etc. On my machine this gives the following times: Pandas 0.029560166876763105 Polars 0.0009763331618160009 | 2 | 4 |
78,399,002 | 2024-4-28 | https://stackoverflow.com/questions/78399002/group-data-and-remove-duplicate | I have the data in the form of table below Name Mas Sce M ( (87) 83 (91) (97) ) T (77) 76 R (60) 32 G (95) 20 M ( (50) 89 (50) (99) ) Some of my data runs through multiple row such as M case. The data is enclosed within the bracket I have tried drop duplicates. It works when its single row. But, now i have a few rows as a group import pandas as pd d = {'Name': ['M', None, None, 'T', 'R', 'G', 'M', '', ''], 'Mas': ['( (87)', '(91)', '(97) )', '(77)', '(60)', '(95)', '( (50)', '(50)', '(99) )'], 'Sce': ['83', '', '', '76', '32', '20', '89', '', '']} df = pd.DataFrame(d) df['Name'] = df['Name'].ffill() print(df) df.drop_duplicates(subset='Name', keep='first', inplace=True) print(df) I want to remove reoccurence of the data. In this case, the 2nd M Name Mas Sce M ( (87) 83 (91) (97) ) T (77) 76 R (60) 32 G (95) 20 | A pure pandas one-liner with duplicated, a mask and ffill: out = df[~df['Name'].duplicated() .mask(df['Name'].replace('', None).isna()) .ffill().infer_objects(copy=False)] Output: Name Mas Sce 0 M ( (87) 83 1 None (91) 2 None (97) ) 3 T (77) 76 4 R (60) 32 5 G (95) 20 | 2 | 0 |
78,398,988 | 2024-4-28 | https://stackoverflow.com/questions/78398988/multi-processing-code-not-working-in-while-loop | Happy Sunday. I have this code that I want to run using the multi-processing module. But it doesn't just work for some reason. with ProcessPoolExecutor() as executor: while True: if LOCAL_RUN: print("ALERT: Doing a local run of the automation with limited capabilities.") list_of_clients = db_manager.get_clients() random.shuffle(list_of_clients) list_of_not_attempted_clients_domains = db_manager.tags() group_of_clients_handlers = {} # no matches if not list_of_not_attempted_clients_domains: sleep = 60 * 10 pretty_print(f'No matches found. Sleeping for {sleep}s') time.sleep(sleep) continue for client in list_of_clients: client_id = client[0] client_name = client[1] group_of_clients_handlers[client_id] = [ClientsHandler(db_manager), client_name] # MULTI-PROCESSING CODE try: print('running function...') executor.map( partial(run, group_of_clients_handlers=group_of_clients_handlers), list_of_not_attempted_clients_domains ) except Exception as err: print(err) Despite all my attempts to debug this, I have no idea why this doesn't work, although I feel it relates to processes taking time to start up or scheduling task etc but I am not certain. The while loop just keeps running and I see all the print statements like running function... but the run function never executes. The run function is a very large function with nested large functions. The except block doesn't print out any error either. Would love to hear what you think... | ProcessPoolExecutor.map creates an iterator, you must consume the iterator to get the exception, otherwise the exception will be discarded. from concurrent.futures import ProcessPoolExecutor def raising_func(val): raise ValueError(val) with ProcessPoolExecutor(4) as pool: pool.map(raising_func, [1,2,3,4,5]) with ProcessPoolExecutor(4) as pool: list(pool.map(raising_func, [1,2,3,4,5])) # < ---- exception is thrown from here | 2 | 2 |
78,398,642 | 2024-4-28 | https://stackoverflow.com/questions/78398642/maximum-number-of-elements-on-list-whose-value-sum-up-to-at-most-k-in-olog-n | I have this exercise to do: Let M be a positive integer, and V = β¨v1, . . . , vnβ© an ordered vector where the value of item vi is 5Γi. Present an O(log(n)) algorithm that returns the maximum number of items from V that can be selected given that the sum of the selected items is less than or equal to M (repeated selection of items is not allowed). First I did a naive solution where: I know the sum of elements on the array will be always less than the M/5 index on the array. So a did for i=0..i<=M/5 and found the sum. Moreover this is not O(log(n)) because given a big M, bigger than the sum of all elements on the array, it will be O(n). Therefore I tried divide and conquer, I thought a binary search should be the way. But actually no because if I do that I will sum the minimum elements that can be chosen to arrive in M, not the maximum. My code is below def max_items_selected_recursive2(M, V, left, right, max): if len(V[left:right]) == 1: return max mid = math.floor((left+right)/2) if V[mid] >= M: return max_items_selected_recursive2(M - V[mid], V, mid + 1, right, max+1) else: if M - V[mid] >= 0: return max_items_selected_recursive2(M - V[mid], V, left, mid - 1, max+1) else: return max_items_selected_recursive2(M, V, left, mid - 1, max) example of call M = 20 V = [0, 5, 10, 15, 20] max_items_selected_recursive2(M, V, 0, len(V) - 1, 0) +1 # +1 since all have the O element Any ideas on how to do this on O(log n)? | The sum 1 + 2 + ... + n = n * (n+1) / 2. The sum 5 + 10 + ... + 5n = 5 * (1 + 2 + ... + n) = 5 * n * (n+1) / 2. So given an M, we want to solve for n so that 5 * n * (n+1) / 2 <= M. Then, add 1 to this to account for zero. You can use the quadratic formula for an O(1) (which is also O(log n)) solution, or binary search on n for an O(log n) solution. | 4 | 5 |
78,388,116 | 2024-4-26 | https://stackoverflow.com/questions/78388116/how-can-i-locate-this-chemical-test-strip-in-the-picture-opencv-canny-edge-dete | I have an image that i would like to do an edge detection and draw a bounding box to it, my problem is my python code does not draw the bounding box and im not sure if its because it was not able to detect any objects in it or im just drawing the rectangle wrong. Here is my attempt import cv2 import numpy as np img = cv2.imread("image1.jpg") (B, G, R) = cv2.split(img) img = cv2.Canny(B, 20, 100) # Blue channel gives the best box so far # img = cv2.Canny(R, 20, 100) # img = cv2.Canny(R, 20, 100) ret,thresh = cv2.threshold(img,20,100,0) contours,hierarchy = cv2.findContours(thresh, 1, 2) cnt = contours[0] M = cv2.moments(cnt) for c in contours: rect = cv2.minAreaRect(c) box = cv2.boxPoints(rect) box = np.intp(box) img = cv2.drawContours(img,[box],0,(0,0,255),2) # display the image with bounding rectangle drawn on it # cv2.namedWindow('Bounding Rectangle', cv2.WINDOW_KEEPRATIO) cv2.imshow('Bounding Rectangle', img) cv2.waitKey(0) cv2.destroyAllWindows() This produces this image and I am expecting an image that is something like this: | My approach: optional: correct white balance so that the background loses its tint convert to color space with saturation channel, threshold that contours, minAreaRect This approach detects all four colored squares. # you can skip this step by defining `balanced = im * np.float32(1/255)` # gray = im.mean(axis=(0,1), dtype=np.float32) # gray world gray = np.median(im, axis=(0,1)).astype(np.float32) # majority vote print(gray) # [137. 127. 140.] balanced = (im / gray) * 0.8 # some moderate scaling so it's not "overexposed" # transform into color space that has a "saturation" axis hsv = cv.cvtColor(balanced, cv.COLOR_BGR2HSV) H,S,V = cv.split(hsv) # squares nicely visible in S for comparison, the saturation channel of the source, without white balance: # Otsu finds threshold level automatically (_, mask) = cv.threshold((S * 255).astype(np.uint8), 0, 255, cv.THRESH_BINARY | cv.THRESH_OTSU) # To remove minor debris/noise. Increase iterations as needed. mask = cv.morphologyEx(mask, cv.MORPH_OPEN, kernel=None, iterations=5) (contours, _) = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) hull = cv.convexHull(np.concatenate(contours)) rrect = cv.minAreaRect(hull) # visualization canvas = im.copy() cv.drawContours(image=canvas, contours=contours, contourIdx=-1, color=(255,255,0), thickness=7) # cv.drawContours(image=canvas, contours=[hull], contourIdx=-1, color=(255,255,0), thickness=7) rrect_points = cv.boxPoints(rrect).round().astype(int) cv.polylines(canvas, [rrect_points], isClosed=True, color=(0,255,255), thickness=3) The code in one piece, minus some imshow/imwrite: import numpy as np import cv2 as cv im = cv.imread("picture.jpg") # gray = im.mean(axis=(0,1), dtype=np.float32) # gray world gray = np.median(im, axis=(0,1)).astype(np.float32) # majority vote print(gray) # [137. 127. 140.] balanced = (im / gray) * 0.8 # some moderate scaling so it's not "overexposed" # transform into color space that has a "saturation" axis hsv = cv.cvtColor(balanced, cv.COLOR_BGR2HSV) H,S,V = cv.split(hsv) # squares nicely visible in S # Otsu finds threshold level automatically (_, mask) = cv.threshold((S * 255).astype(np.uint8), 0, 255, cv.THRESH_BINARY | cv.THRESH_OTSU) # To remove minor debris/noise. Increase iterations as needed. mask = cv.morphologyEx(mask, cv.MORPH_OPEN, kernel=None, iterations=5) (contours, _) = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE) hull = cv.convexHull(np.concatenate(contours)) rrect = cv.minAreaRect(hull) # visualization canvas = im.copy() cv.drawContours(image=canvas, contours=contours, contourIdx=-1, color=(255,255,0), thickness=7) # cv.drawContours(image=canvas, contours=[hull], contourIdx=-1, color=(255,255,0), thickness=7) rrect_points = cv.boxPoints(rrect).round().astype(int) cv.polylines(canvas, [rrect_points], isClosed=True, color=(0,255,255), thickness=3) | 3 | 4 |
78,390,439 | 2024-4-26 | https://stackoverflow.com/questions/78390439/how-to-pass-in-variables-during-redirect-in-html-using-jinja-syntax | I have the following code: @app.route("/", methods=["GET", "POST"]) def login(): if request.method == "GET": return render_template("login.html") elif request.method == "POST": username, password = request.form.get("username"), request.form.get("password") if validate_login(username, password): return redirect("/home.html") else: return render_template("login.html", login_message="Invalid username or password!") with the following home.html file: {% extends "layout.html" %} {% block body %} Hello {{ username }}, your password is {{ password }}! {% endblock %} The point here is that I am finding some ways to pass in variables to home.html using the redirect function (just like the last line of the code render_template), but after some intense google searches, I still can't find a way to do so. Do anyone know how to do this? | I'm assuming you're using flask as your routing engine. If so, instead of redirecting to a bare html url, you can redirect to a new route. Something like: if validate_login(username, password): return redirect(f"/home/{username}") Then have that route render the home.html template using the variable as normal: @app.route("/home/<username>") def user_home(username): return render_template("home.html", username=username) For additional variables you need, you could either have the user_home function retrieve them and pass them along to the template, or use some middleware function to add additional data to the user's request. | 2 | 1 |
78,386,255 | 2024-4-25 | https://stackoverflow.com/questions/78386255/odoo-cant-compare-dates-for-setting-a-computed-field-using-api-depends | Using Odoo 16, using Odoo.sh I am trying to set a date in a model that should be the furthest in the future of the dates among the related objects from a one2many field. So I have a computed field with a @api.depends computation to iterate through the one2many and find + assign this data to a field. Everything in my below code compiles and upgrades without problem, but I get an error when I trigger the @api.depends function by adding a order_line_ids: if order_line.delivery > record.full_delivery_date: TypeError: '>' not supported between instances of 'datetime.date' and 'bool' Relevant code in primary model operations.purchaseorder: full_delivery_date = fields.Date(string="Full Delivery Date (calculated)", compute='_compute_deliver_date', default=lambda self: fields.Date.today()) order_line_ids = fields.One2many('operations.purchaseorderline','purchase_order_id',string="Order Lines") @api.depends('order_line_ids') def _compute_deliver_date(self): for record in self: if record.order_line_ids: for order_line in record.order_line_ids: if order_line.delivery > record.full_delivery_date: record.full_delivery_date = order_line.delivery Relevant code in co-model operations.purchaseorderline: purchase_order_id = fields.Many2one('operations.purchaseorder',string='Purchase Order') delivery = fields.Date("Promised Delivery Date") | As per comment from @CZoellner the issue was because I had not put handling into the code for when order_line.delivery was empty. Correct code below: @api.depends('order_line_ids') def _compute_full_delivery_date(self): for record in self: if record.order_line_ids: for order_line in record.order_line_ids: if order_line.delivery: if record.full_delivery_date: if order_line.delivery > record.full_delivery_date: record.full_delivery_date = order_line.delivery else: record.full_delivery_date = order_line.delivery | 2 | 1 |
78,390,521 | 2024-4-26 | https://stackoverflow.com/questions/78390521/consistent-initialisation-in-simple-dae-system-using-gekko | I'm messing around with GEKKO for dynamic simulations to see how it compares to commercial simulation software e.g. gPROMS. As a simple test I'm attempting to model the blowdown of a pressurised vessel. However I cannot seem to get the solver to consistently initialise the simulation. I have modelled the system using the following system of DAE's dM/dt = -outlet_flowrate (1) outlet_flowrate = f(Pressure,Outlet_Pressure) (2) Pressure = f(M,T) (3) Where Outlet_Pressure is fixed at a value, T is fixed at a value. I expect that I can specify the initial mass (M), the initial pressure could then be calculated using equation (3) , the initial outlet flowrate can then be calculated using equation (2). I have used arbitrary function for f(Pressure,Outlet_pressure) and f(M,T) to simplify the model as a minimum reproducible example: from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=False) m.time = np.linspace(0,20,100) mass = m.Var(value=2,lb=0,ub=100) Outlet_flowrate = m.Var(lb=0,ub=10000000) pressure = m.Var(lb=0,ub=1000000) m.Equation(mass.dt() == -Outlet_flowrate) m.Equation(Outlet_flowrate == (pressure-2)) m.Equation(pressure == mass*2) m.free_initial(pressure) m.free_initial(Outlet_flowrate) m.options.IMODE=4 m.options.RTOL=1e-15 m.solve() plt.plot(m.time,pressure.value) print(pressure.value) plt.show() However when I attempt to solve this model I get the following output from the solver: --------- APM Model Size ------------ Each time step contains Objects : 0 Constants : 0 Variables : 3 Intermediates: 0 Connections : 2 Equations : 3 Residuals : 3 Number of state variables: 398 Number of total equations: - 396 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 2 @error: Degrees of Freedom * Error: DOF must be zero for this mode STOPPING... I have tried solving the model using IMODE=7 which does solve the model however this initialises the pressure at 0. I think the initial mass should be sufficient to specify the initial pressure at 4 not zero. I am aware that I can trivially reduce the model to remove pressure and flowrate as variabes, however in the future I would like to apply this methodology to other systems which aren't as trivial to reduce. I'm also aware that I could specify the initial pressure by manually computing the initial pressure from the initial mass. Is manually computing the initial condition something that most people do or have I missed a method in GEKKO to calculate the initial values of these variables ? Thanks | The first node of a dynamic simulation is deactivated in Gekko because initial conditions are typically fixed. Gekko doesn't require consistent initial condition and can solve higher-index DAEs without Pantelides algorithm (differentiate higher-order algebraic equations to return to ODE or index-1 DAE form). There is no easy way to undo this default behavior. As an alternative, a small time step (e.g. 1e-10) can be placed at the beginning to calculate the correct initial condition of variables solved with algebraic equations. from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=False) m.time = np.linspace(0,20,100) m.time = np.insert(m.time,1,1e-10) mass = m.Var(value=2,lb=0,ub=100) Outlet_flowrate = m.Var(lb=0,ub=10000000) pressure = m.Var(lb=0,ub=1000000) m.Equation(mass.dt() == -Outlet_flowrate) m.Equation(Outlet_flowrate == (pressure-2)) m.Equation(pressure == mass*2) m.options.IMODE=4 m.options.SOLVER=1 m.solve() plt.plot(m.time[1:],pressure.value[1:]) print(pressure.value) plt.show() The first time point is removed from the results with: plt.plot(m.time[1:],pressure.value[1:]) The initial conditions are removed and the results are numerically the same if the initial time step is small enough. | 2 | 1 |
78,390,452 | 2024-4-26 | https://stackoverflow.com/questions/78390452/how-to-adjust-spacy-tokenizer-so-that-it-splits-number-followed-by-dot-at-line-e | i have a use case in spacy where i want to find phone numbers in German sentences. Unfortunately the tokenizer is not doing the tokenization as expected. When the number is at the end of a sentence the number and the dot is not split into two tokens. English and German Version differ here, see the following code: import spacy nlp_en = spacy.blank("en") nlp_de = spacy.blank("de") text = "Die Nummer lautet 1234 123444." doc_en = nlp_en(text) doc_de = nlp_de(text) print(doc_en[-1]) #output is: . print(doc_de[-1]) #output is: 123444. expected output is: 123444. is split into two tokens. But i also want to use the "de" version as it has other meaningful defaults for German sentences... my spaCy version: 3.7.4 in a similar case i was able to solve the problem with nlp_de.tokenizer.add_special_case but here i need to match a number that i don't know. and i couldn't find a way to use regex with add_special_case I also had a look at: Is it possible to change the token split rules for a Spacy tokenizer? which seems promising. but i wasn't able to figure out how to adjust the tokenizer. I guess I should use a custom tokenizer and the information from https://github.com/explosion/spaCy/blob/master/spacy/lang/de/punctuation.py !? | You may use 'suffixes' to fix issues with punctuation. Here is an example: import spacy nlp_en = spacy.blank("en") nlp_de = spacy.blank("de") text = "Die Nummer lautet 1234 123448." suffixes = nlp_de.Defaults.suffixes + [r'\.',] suffix_regex = spacy.util.compile_suffix_regex(suffixes) nlp_de.tokenizer.suffix_search = suffix_regex.search doc_en = nlp_en(text) doc_de = nlp_de(text) print(doc_en[-1]) #output is: . print(doc_de[-1]) #output is: . | 2 | 1 |
78,386,497 | 2024-4-25 | https://stackoverflow.com/questions/78386497/how-to-regress-multiple-gaussian-peaks-from-a-spectrogram-using-scipy | I have this code that aims to fit the data in here DriveFilelink The function read_file is utilized to extract information in a structured manner. However, I'm encountering challenges with the Gaussian fit, evident from the discrepancies observed in the Gaussian fit image. This issue appears to stem from the constraints imposed on certain parameters, such as fixing the mean of all the Gaussians and three out of the four amplitudes. Despite these constraints, they are necessary as they are based on available information. Does anyone know how I can fix this problem? In order to have a better fit. The code is the following: import numpy as np import matplotlib.pyplot as plt from scipy.signal import find_peaks from scipy.optimize import curve_fit from google.colab import drive import os # Mount Google Drive drive.mount('/content/drive') # Function to read numeric data from a file def read_numeric_data(file_path): with open(file_path, 'r', encoding='latin1') as f: data = [] live_time = None real_time = None for line in f: if 'LIVE_TIME' in line: live_time = line.strip() elif 'REAL_TIME' in line: real_time = line.strip() elif all(char.isdigit() or char in ".-+e" for char in line.strip()): row = [float(x) for x in line.split()] data.extend(row) return np.array(data), live_time, real_time file_path = '/content/drive/MyDrive/ProjetoXRF_Manuel/April2024/20240402_In.mca' data, _, _ = read_numeric_data(file_path) a = -0.0188026396003431 b = 0.039549044037714 Data = data # Función para convolucionar múltiples gaussianas def multi_gaussian(x, *params): eps = 1e-10 y = np.zeros_like(x) amplitude_relations = [1,0.5538, 0.1673 , 0.1673*0.5185 ] meanvalues= [24.210, 24.002 , 27.276 , 27.238] amplitude = params[0] for i in range(4): sigma = params[i * 3 + 2] # Get the fitted standard deviation for this Gaussian y += amplitude*amplitude_relations[i]* np.exp(-(x - meanvalues[i])**2 / (2 * sigma**2 + eps)) return y sigma=[] area=[] # Función para graficar el espectro de energía convolucionado def plot_convolved_spectrum(Data, a, b,i, ax=None): maxim = np.max(Data) workingarray = np.squeeze(Data) # Definir puntos máximos peaks, _ = find_peaks(workingarray / maxim, height=0.1) peak_values = workingarray[peaks] / maxim peak_indices = peaks # Calcular valores de energía correspondientes a los picos energy_spectrum = a + b * peak_indices # Preparar datos para convolución data = workingarray[:885] / maxim data_y = data / data.max() data_x = a + b * np.linspace(0, 885, num=len(data_y)) # Ajustar múltiples gaussianas al espectro de energía peak_indices2, _ = find_peaks(data_y, height=0.1) peak_amplitudes = [1,0.5538, 0.1673 , 0.1673*0.5185 ] peak_means = [24.210, 24.002 , 27.276 , 27.238] peak_sigmas = [0.1] * 4 params_init = list(zip(peak_amplitudes, peak_means, peak_sigmas)) params_init = np.concatenate(params_init) # Ajustar curva params, params_cov = curve_fit(multi_gaussian, data_x, data_y, p0=params_init) # Obtener una interpolación de alta resolución del ajuste x_fine = np.linspace(data_x.min(), data_x.max(), num=20000) y_fit = multi_gaussian(x_fine, *params) # Data Graphic ax.scatter(data_x, data_y, color='black', marker='o', s=20, label = 'Data' ) y_data_error =np.sqrt(workingarray[:885]) ax.plot(data_x, data_y + y_data_error/maxim, color='black',linestyle='--') ax.plot(data_x, data_y - y_data_error/maxim, color='black',linestyle='--') # Fit Graphic ax.plot(x_fine, y_fit, label="Fit", linewidth=1.5) # Extraer desviaciones estándar y amplitudes sigmas_array = params[2::3] # Calcular sigma promedio sigma.append(np.mean(sigmas_array)) # Configuración del gráfico ax.set_xlabel('Energy (KeV)') ax.set_ylabel('Normalized Data') ax.legend() ax.set_title('Convolved Energy Spectrum') # Imprimir información print("Standard deviations:", sigmas_array) fig, ax = plt.subplots() plot_convolved_spectrum(Data, a, b,1,ax=ax) ax.set_xlim(22, 28) plt.show() Note I've experimented with setting bounds and refining initial guesses to improve the Gaussian fit. However, it's important to note that working with bounds isn't equivalent to fixing parameters. My aim is to maintain a degree of freedom of just two parameters: the amplitude of the first Gaussian and the standard deviation of all Gaussians. While exploring these adjustments, I'm striving to strike a balance between constraint and flexibility to achieve the desired fit accuracy. Note This is the model: The parameters of the Gaussian function are defined as follows: A represents the amplitude, x_0 denotes the mean value, and sigma represents the standard deviation. In the context of analyzing energy spectra, I possess prior knowledge about the mean values and the relationships governing the amplitudes. Consequently, I aim to constrain certain parameters (such as A and mean) based on this known information. Specifically, I intend to fit only the first amplitude and then utilize the known relationships, such as A2 = 0.1673 * A1 for the second peak. And the to fit the corresponding standar deviation. Why using a sum of gaussians?. The apparent singularity of the first peak in the plot might suggest a single Gaussian. However, this is not the case. The Gaussian representing this energy peak actually consists of two Gaussians that are summed together. This complexity arises because the energy emission at that level comprises two distinct values, 24.002 and 24.210. And due to the resolution of the experiment we are not able to distinguish them just by seen the plot. And for all these reasons is that I am trying to fix some parameters. If someone has issue with the data here it is: extracted data.txt | Here is a MCVE showing how to regress a variable number of peaks from the significant part of your signal. import itertools import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import signal, stats, optimize We load you data and post-process it to get physical series: data = pd.read_csv("20240402_In.mca", encoding="latin-1", names=["y"]).loc[12:2058] data["y"] = pd.to_numeric(data["y"]) data["y"] /= data["y"].max() data["x"] = -0.0188026396003431 + 0.039549044037714 * data.index data = data.loc[(20. <= data["x"]) & (data["x"] <= 30.), :] We create the model which accept a variable number of peaks (driven by find_peaks solution): def peak(x, A, s, x0): law = stats.norm(loc=x0, scale=s) return A * law.pdf(x) / law.pdf(x0) def model(x, *parameters): n = len(parameters) // 3 y = np.zeros_like(x) for i in range(n): y += peak(x, *parameters[(i * 3):((i + 1) * 3)]) return y We identify peaks: indices, bases = signal.find_peaks(data.y, prominence=0.0125) #(array([ 67, 118, 195, 210]), # {'prominences': array([0.01987281, 0.99920509, 0.18918919, 0.03338633]), # 'left_bases': array([ 1, 1, 179, 205]), # 'right_bases': array([ 76, 160, 219, 219])}) This is the key point, from the identification, we create our educated guess: x0s = data.x.values[indices] As = bases["prominences"] ss = (data.x.values[bases["right_bases"]] - data.x.values[bases["left_bases"]]) / 8. p0 = list(itertools.chain(*zip(As, ss, x0s))) #[0.01987281399046105, # 0.37077228785356864, # 22.682348638047493, # 0.9992050874403816, # 0.7860372502495658, # 24.699349883970907, # 0.1891891891891892, # 0.19774522018856988, # 27.744626274874886, # 0.033386327503974564, # 0.06921082706599968, # 28.337861935440593] Now we regress the model wrt your dataset: popt, pcov = optimize.curve_fit(model, data.x, data.y, p0=p0) #array([1.03735804e-02, 9.61270732e-01, 2.29030214e+01, 9.92651381e-01, # 1.59694755e-01, 2.46561911e+01, 1.85332645e-01, 1.21882422e-01, # 2.77807838e+01, 3.67805911e-02, 1.21890416e-01, 2.83583849e+01]) We can now estimate the model: yhat = model(data.x, *popt) Graphically, it leads to: Which is quite satisfactory for the unconstrained fit but certainly does not fulfill the constraint you want to enforce. Could you publish your data ready for coding as arrays in your post? | 2 | 3 |
78,389,992 | 2024-4-26 | https://stackoverflow.com/questions/78389992/does-python-throw-away-unused-expressions | In pyhon3.10, I can run this code snippet without raising any errors: {}[0]:1 which creates an empty dictionary, and then accesses the 0 key. However, the :1 that follows is what I would consider an invalid syntax. And indeed, if I try to determine the type of the result: type({}[0]:1) a syntax error gets raised. A similar behavior occurs whenever I try to work with the result, such as print({}[0]:1). Why does this happen? I am assuming that the interpreter recognizes the expression as not being assigned and does not compile it. And therefore you can run your code with the {}[0]:1 line being present. However, this is not consistent with other syntax errors being raised by different syntactically invalid code (such as 1:1 which raises an error). | {}[0]:1 as a statement is not a syntax error. It's an annotation where {}[0] is the thing being annotated and 1 is the annotation. Normally, you would write something like x: int. {}[0] is not an error because it is executed as if on the left-hand side of an assignment. We can see this by using dis.dis (this is Python 3.11.2): >>> dis.dis("{}[0]:1") 0 0 RESUME 0 1 2 SETUP_ANNOTATIONS 4 BUILD_MAP 0 6 POP_TOP 8 LOAD_CONST 0 (0) 10 POP_TOP 12 LOAD_CONST 1 (1) 14 POP_TOP 16 LOAD_CONST 2 (None) 18 RETURN_VALUE Only when using {}[0]:1 as an expression, it is invalid syntax. It's a coincidence that you chose {}[0] on the left-hand side of :, this does not always work: >>> print(x):1 File "<stdin>", line 1 print(x):1 ^^^^^^^^ SyntaxError: illegal target for annotation (You should have gotten the same error for 1:1.) The corresponding section in the language reference is here: https://docs.python.org/3/reference/simple_stmts.html#annotated-assignment-statements | 2 | 6 |
78,389,427 | 2024-4-26 | https://stackoverflow.com/questions/78389427/how-to-generate-multiple-responses-for-single-prompt-with-google-gemini-api | Context I am using google gemini-api. Using their Python SDK Goal I am trying to generate multiple possible responses for a single prompt according to the docs and API-reference Expected result - multiple response for a single prompt Actual result - single response Code I have tried # ... more code above model = genai.GenerativeModel(model_name="gemini-1.5-pro-latest", system_instruction=system_instruction) response = model.generate_content("What is the meaning of life?") resps = response.candidates resps is a list which should contain more than 1 response. But there is only 1 response inside it. The prompt used here is a demo prompt. But the outcome is same for any input string. If any more information is needed please ask in the comments. | When I saw the official document GenerationConfig of Method: models.generateContent, it says as follows. candidateCount: integer Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1. It seems that in the current stage, candidateCount is only 1. This is the same with "v1". Ref But, when I saw google.ai.generativelanguage.GenerationConfig of the official document, it says as follows. candidate_count: int Optional. Number of generated responses to return. This value must be between [1, 8], inclusive. If unset, this will default to 1. From this, it seems that it is possible to try to change the value of candidate_count by the script. From this document, when your script is modified, it becomes as follows. model = genai.GenerativeModel( model_name="gemini-1.5-pro-latest", system_instruction=system_instruction, generation_config=glm.GenerationConfig(candidate_count=2), ) response = model.generate_content("What is the meaning of life?") resps = response.candidates In this case, please add import google.ai.generativelanguage as glm. When I tested the above script with candidate_count=2, an error like Only one candidate can be specified occurred. I guess that this value might be able to be changed in the future update. When a value of more than 1 got to be able to be used, please test the above-modified script. References: GenerationConfig google.ai.generativelanguage.GenerationConfig | 2 | 1 |
78,389,964 | 2024-4-26 | https://stackoverflow.com/questions/78389964/return-diag-elements-of-nxn-matrix-without-using-a-for-loop | Imagine having an array of matrices (nxkxk), how would I return the diagonal entries while keeping the original shape without using a for-loop. For example, without keeping the original shape we could do np.diagonal(array_of_matrices, axis1=1, axis2=2) Obviously I could just do this and then reconstruct the original shape, but I'm wondering if there is a cleaner way. I have tried nothing and I'm all out of ideas. np.diag does not take axis arguments. | You can use broadcasting to multiply an eye matrix by your k x n x n shaped array to return only the diagonal elements. import numpy as np n = 4 k = 3 arr = np.arange(k * n * n).reshape(k, n, n) diags = arr * np.eye(n) Here are the elements: arr # returns: array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]], [[16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]], [[32, 33, 34, 35], [36, 37, 38, 39], [40, 41, 42, 43], [44, 45, 46, 47]]]) diags # returns: array([[[ 0., 0., 0., 0.], [ 0., 5., 0., 0.], [ 0., 0., 10., 0.], [ 0., 0., 0., 15.]], [[16., 0., 0., 0.], [ 0., 21., 0., 0.], [ 0., 0., 26., 0.], [ 0., 0., 0., 31.]], [[32., 0., 0., 0.], [ 0., 37., 0., 0.], [ 0., 0., 42., 0.], [ 0., 0., 0., 47.]]]) | 2 | 4 |
78,387,204 | 2024-4-25 | https://stackoverflow.com/questions/78387204/pybind11-cant-figure-out-how-to-access-tuple-elements | I'm an experienced Python programmer trying to learn C++ to speed up some projects. Passing a py::tuple to a function, how do I access the elements? Here's the constructor for a class I'm making to hold images from a video. Buffer::Buffer(int size, py::tuple hw_args_) { unsigned int frame_height = get<0>(hw_args_); unsigned int frame_width = get<1>(hw_args_); Buffer::time_codes = new int[size]; Buffer::time_stamps = new string[size]; Buffer::frames = new unsigned char[size][hw_args[0]][hw_args[1]]; if ((Buffer::time_codes && Buffer::time_stamps) == 0) { cout << "Error allocating memory\n"; exit(1); } } This gives me an error "No instance of the overloaded function 'get' matches the argument list, argument types are pybind11::tuple" I've also tried setting the frame height and widths this way. unsigned int frame_height = hw_args_[0]; unsigned int frame_width = hw_args_[1]; This gives an error "No suitable conversion function from 'pybind11::detail::tuple_accessor' to 'unsigned int' exists" I'm at a loss, I can only seem to find info on making tuples in C++, not accessing them from Pybind11 / Python. | py::tuple has an operator[] that returns a detail::tuple_accessor which has a .cast as any py::object uint32_t frame_height = hw_args_[0].cast<uint32_t>(); you could check its use from pybind11 tests [](const py::tuple &tup) { if (py::len(tup) != 4) { throw py::cast_error("Invalid size"); } return SimpleStruct{tup[0].cast<bool>(), tup[1].cast<uint32_t>(), tup[2].cast<float>(), tup[3].cast<long double>()}; } | 3 | 3 |
78,387,084 | 2024-4-25 | https://stackoverflow.com/questions/78387084/understanding-pythons-bisect-library-clarifying-usage-of-bisect-left | During my recent work on a mathematical problem in Python, I encountered some confusion regarding the behavior of the bisect.bisect_left function from the bisect library. This confusion arose due to its similarity with another function in the library, namely bisect.insort_left. In my specific use case, I was utilizing a custom key function, such as bisect.insort_left(my_list, my_item, key=my_key), which behaved as expected. This function determined the appropriate index for my_item in my_list using the specified key and inserted it accordingly. However, when attempting a similar operation with bisect.bisect_left(my_list, my_item, key=my_key), I encountered an unexpected TypeError: "other argument must be K instance". This error message lacked clarity regarding the underlying issue. Upon investigating the source code of bisect, I discovered the correct usage pattern, as indicated by line 71 of the source code. It became apparent that the correct usage involves calling the key function with the item as an argument, like so: bisect.bisect_left(my_list, my_key(my_item), key=my_key). I am curious about the design decision behind this requirement. Why is it necessary to call my_key(my_item) when using bisect.bisect_left compared to the more straightforward usage in bisect.insort_left? | Let's say the list items contain a lot of data. Each item can be e. g. a dict { 'id': 12, 'firstname': 'foo', 'lastname': 'bar', 'address': ..., 'phone': ... ... } These items are sorted in the list by id with an appropriate key function like lambda item: item['id']. If you want to find an item with bisect_left it shouldn't be necessary to provide the whole item with all of its data but the id should be enough. In the docs this is mentioned a bit cryptically: To support searching complex records, the key function is not applied to the x value. If you want to insert an item with insort_left of course you have to provide the complete item with all data. | 3 | 4 |
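A minimal sketch of the asymmetry described above, using a hypothetical list of records sorted by an "id" key (the key parameter of the bisect functions needs Python 3.10+):

import bisect

key = lambda record: record["id"]
records = [
    {"id": 3, "firstname": "foo"},
    {"id": 7, "firstname": "bar"},
    {"id": 12, "firstname": "baz"},
]

# insort_left receives the whole item; the key is applied to it for the search step
bisect.insort_left(records, {"id": 9, "firstname": "qux"}, key=key)

# bisect_left does NOT apply the key to the search value, so pass the bare id
position = bisect.bisect_left(records, 9, key=key)  # 2: the index of the record with id 9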
78,381,155 | 2024-4-24 | https://stackoverflow.com/questions/78381155/python-setuptools-multiple-extension-modules-with-shared-c-source-code-building | I'm working on a Python project with a setup.py that has something like this1: setup( cmdclass={"build_ext": my_build_ext}, ext_modules=[ Extension("A", ["a.c", "common.c"]), Extension("B", ["b.c", "common.c"]) ] ) I'm running into a problem when building the modules in parallel where it seems like one module tries to read common.o/common.obj while another is compiling it, and it fails. Is there some way to get setuptools to compile the C files for each module into their own build directories so that they aren't overwriting each other? The actual project is more complicated with more modules and source files. | I found a potential solution by overriding build_extension() in a custom build_ext class: import copy, os from setuptools import Extension, setup from setuptools.command.build_ext import build_ext class my_build_ext(build_ext): def build_extension(self, ext): # Append the extension name to the temp build directory # so that each module builds to its own directory. # We need to make a (shallow) copy of 'self' here # so that we don't overwrite this value when running in parallel. self_copy = copy.copy(self) self_copy.build_temp = os.path.join(self.build_temp, ext.name) build_ext.build_extension(self_copy, ext) setup( cmdclass={"build_ext": my_build_ext}, ext_modules=[ Extension("A", ["a.c", "common.c"]), Extension("B", ["b.c", "common.c"]) ] ) I've also since been told of the (currently undocumented) libraries parameter for setup(): from setuptools import Extension, setup setup( libraries=[ ("common", {"sources": ["common.c"]}), ], ext_modules=[ Extension("A", sources=["a.c"], libraries=["common"]), Extension("B", sources=["b.c"], libraries=["common"]), ], ) Both solutions worked for me, but in slightly different ways. The first solution recompiles the code for each module, which allows you to specify different parameters to use for each module (ex. different defs). The second solution only has to compile to code once and it will reuse that for every module. | 3 | 3 |
78,384,202 | 2024-4-25 | https://stackoverflow.com/questions/78384202/filter-a-polars-dataframe-based-on-json-in-string-column | I have a Polars dataframe like df = pl.DataFrame({ "tags": ['{"ref":"@1", "area": "livingroom", "type": "elec"}', '{"ref":"@2", "area": "kitchen"}', '{"ref":"@3", "type": "elec"}'], "name": ["a", "b", "c"], }) ββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βββββββ β tags β name β β --- β --- β β str β str β ββββββββββββββββββββββββββββββββββββββββββββββββββββββͺβββββββ‘ β {"ref":"@1", "area": "livingroom", "type": "elec"} β a β β {"ref":"@2", "area": "kitchen"} β b β β {"ref":"@3", "type": "elec"} β c β ββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββ What I would to do is create a filter function that filters dataframe based on the tags column. Particularly, I would like to only be left with rows where the tags column has an area key and a type key that has a value "elec". How can I achieve this (ideally using the native expressions API)? | pl.Expr.str.json_path_match can be used to extract the first match of the JSON string with the a suitable path expression. ( df .filter( pl.col("tags").str.json_path_match("$.area").is_not_null(), pl.col("tags").str.json_path_match("$.type") == "elec", ) ) shape: (1, 2) ββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βββββββ β tags β name β β --- β --- β β str β str β ββββββββββββββββββββββββββββββββββββββββββββββββββββββͺβββββββ‘ β {"ref":"@1", "area": "livingroom", "type": "elec"} β a β ββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββ More generally, pl.Expr.str.json_decode ca be used to obtain a struct column with the information of the JSON. This struct can be unnested and used for any downstream filtering operation. ( df .with_columns( pl.col("tags").str.json_decode() ) .unnest("tags") ) shape: (3, 4) βββββββ¬βββββββββββββ¬βββββββ¬βββββββ β ref β area β type β name β β --- β --- β --- β --- β β str β str β str β str β βββββββͺβββββββββββββͺβββββββͺβββββββ‘ β @1 β livingroom β elec β a β β @2 β kitchen β null β b β β @3 β null β elec β c β βββββββ΄βββββββββββββ΄βββββββ΄βββββββ | 2 | 3 |
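If the decoded struct is useful downstream anyway, the two json_path_match calls can be replaced by a single decode followed by a filter on the unnested columns. A sketch, assuming a polars version recent enough for filter to accept several predicates at once:

import polars as pl

df = pl.DataFrame({
    "tags": [
        '{"ref":"@1", "area": "livingroom", "type": "elec"}',
        '{"ref":"@2", "area": "kitchen"}',
        '{"ref":"@3", "type": "elec"}',
    ],
    "name": ["a", "b", "c"],
})

out = (
    df.with_columns(pl.col("tags").str.json_decode())
      .unnest("tags")
      .filter(
          pl.col("area").is_not_null(),  # the "area" key must be present
          pl.col("type") == "elec",      # and "type" must equal "elec"
      )
)
# keeps only row "a"; JSON keys missing in a row become nulls after unnest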
78,383,764 | 2024-4-25 | https://stackoverflow.com/questions/78383764/rolling-windows-over-internal-lists | I have a frame like this: import polars as pl from polars import Boolean, List, col src = pl.DataFrame( { "c1": ["a", "b", "c", "d"], "c2": [ [0, 0], [0, 1, 0, 0], [1, 0, 1, 1, 1], [1, 1, 0], ], }, schema_overrides={"c2": List(Boolean)}, ) For each of the inner lists in "c2", I am trying to calculate the maximum sum in a sliding window of length 3: [1, 0, 0, 1, 1, 1] β [(1, 0, 0), (0, 0, 1), (0, 1, 1),(1, 1, 1)] β [1, 1, 2, 3] β 3 What is the most efficient way to do it? I cannot find the method to roll over inner lists. Several answers on StackOverflow advise me to explode the inner lists, roll over the exploded series, restore the structure somehow. β¦ which leads to monstrosities like the following: ( src.with_row_index("outer_index") .explode("c2") .with_row_index("inner_index") .rolling("inner_index", period="3i", offset="0i", closed="left") .agg("outer_index", "c1", col("c2").sum()) .drop("inner_index") ) First of all, this is ugly. And this is wrong: rolling windows must respect borders of the original internal lists, but here they do not. For readability, here is the dump I get with df.to_dicts() [{'outer_index': [0, 0, 1], 'c1': ['a', 'a', 'b'], 'c2': 0}, {'outer_index': [0, 1, 1], 'c1': ['a', 'b', 'b'], 'c2': 1}, {'outer_index': [1, 1, 1], 'c1': ['b', 'b', 'b'], 'c2': 1}, {'outer_index': [1, 1, 1], 'c1': ['b', 'b', 'b'], 'c2': 1}, {'outer_index': [1, 1, 2], 'c1': ['b', 'b', 'c'], 'c2': 1}, {'outer_index': [1, 2, 2], 'c1': ['b', 'c', 'c'], 'c2': 1}, {'outer_index': [2, 2, 2], 'c1': ['c', 'c', 'c'], 'c2': 2}, {'outer_index': [2, 2, 2], 'c1': ['c', 'c', 'c'], 'c2': 2}, {'outer_index': [2, 2, 2], 'c1': ['c', 'c', 'c'], 'c2': 3}, {'outer_index': [2, 2, 3], 'c1': ['c', 'c', 'd'], 'c2': 3}, {'outer_index': [2, 3, 3], 'c1': ['c', 'd', 'd'], 'c2': 3}, {'outer_index': [3, 3, 3], 'c1': ['d', 'd', 'd'], 'c2': 2}, {'outer_index': [3, 3], 'c1': ['d', 'd'], 'c2': 1}, {'outer_index': [3], 'c1': ['d'], 'c2': 0}] | You can access .rolling_sum() via the .list.eval() API. df.with_columns( pl.col("c2").list.eval( pl.element().rolling_sum(3) ) ) shape: (4, 2) βββββββ¬ββββββββββββββββββββββββ β c1 β c2 β β --- β --- β β str β list[i64] β βββββββͺββββββββββββββββββββββββ‘ β a β [null, null] β β b β [null, null, 1, 1] β β c β [null, null, 2, 2, 3] β β d β [null, null, 2] β βββββββ΄ββββββββββββββββββββββββ And take the .max() df.with_columns( pl.col("c2").list.eval( pl.element().rolling_sum(3).max() ) ) shape: (4, 2) βββββββ¬ββββββββββββ β c1 β c2 β β --- β --- β β str β list[i64] β βββββββͺββββββββββββ‘ β a β [null] β β b β [1] β β c β [3] β β d β [2] β βββββββ΄ββββββββββββ | 3 | 3 |
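If a plain numeric column is preferred over the one-element list that max() leaves behind, the scalar can be pulled out with list.first; a sketch against the src frame from the question (the output column name is arbitrary):

import polars as pl

# src as defined in the question (c2 is a list of booleans)
out = src.with_columns(
    pl.col("c2")
      .list.eval(pl.element().rolling_sum(3).max())
      .list.first()
      .alias("max_window_sum")
)
# rows with fewer elements than the window size yield null, e.g. row "a"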
78,381,346 | 2024-4-24 | https://stackoverflow.com/questions/78381346/building-plots-with-plotnine-and-python | I am looking for a way that I can modify plots by adding to an existing plot object. For example, I want to add annotations at particular dates in a work plot, but want a standard way of building the base chart, and then "adding" annotations if I decide to later. Example of what I want to do (using Altair) It might be easier to show what I want with altair as an example: import altair as alt import plotnine as plt data = pd.DataFrame([ {'x': 0, 'y': 0}, {'x': 2, 'y': 1}, {'x': 3, 'y': 4} ]) events = pd.DataFrame([ {'x': 1, 'y': 3, 'label': 'the one'} ]) base = alt.Chart(data).mark_line().encode( x='x', y='y' ) annotate = ( alt.Chart(events).mark_text().encode( x='x', y='y', text='label' ) + alt.Chart(events).mark_rule().encode( x='x', color=alt.value('red') ) ) # display base + annotate makes what I want. I can also make functions def make_base_plot(data): return alt.Chart(data).mark_line().encode(x='x', y='y') def make_annotations(events): return ( alt.Chart(events).mark_text().encode( x='x', y='y', text='label' ) + alt.Chart(events).mark_rule().encode( x='x', color=alt.value('red') ) ) which enables me to plot the base data without annotations, or provide the annotation message later, or edit the events for the audience the plot is intended for. Same example using plotnine If I want to do this "all at once" here is how I would create this plot: import altair as alt import plotnine as plt data = pd.DataFrame([ {'x': 0, 'y': 0}, {'x': 2, 'y': 1}, {'x': 3, 'y': 4} ]) events = pd.DataFrame([ {'x': 1, 'y': 3, 'label': 'the one'} ]) ( p9.ggplot(data, p9.aes(x='x', y='y')) + p9.geom_line(color='blue') + p9.theme_bw() + p9.geom_text(mapping=p9.aes(x='x', y='y', label='label'), data=events) + p9.geom_vline(mapping=p9.aes(xintercept='x'), data=events, color='red') ) However, the two "natural" attempts to decompose this fail: ... # This part is fine base = ( p9.ggplot(data, p9.aes(x='x', y='y')) + p9.geom_line(color='blue') + p9.theme_bw() ) # So is this annotations = ( p9.ggplot(data, p9.aes(x='x', y='y')) + p9.geom_text(mapping=p9.aes(x='x', y='y', label='label'), data=events) + p9.geom_vline(mapping=p9.aes(xintercept='x'), data=events, color='red') ) # This fails base + annotations # Error message # AttributeError: 'ggplot' object has no attribute '__radd__' Trying this without annotations having a p9.ggplot object to start fails when I try to create the annotations object. My question is, how do I decompose the grammar of graphics in plotnine so I can have functions create common components that I can compose, similar to Altair? I know an alternative is to create a function that has two inputs (data and events) and do this in one pass, but that means when creating a template for a graph I have to anticipate all future annotations I want to make, if I want to build from a template of graphs. 
| Similar to R's ggplot2 (see here) you can use a list to decompose the creation of a plot in multiple parts where each part consists of multiple components or layers: import plotnine as p9 import pandas as pd data = pd.DataFrame([ {'x': 0, 'y': 0}, {'x': 2, 'y': 1}, {'x': 3, 'y': 4} ]) events = pd.DataFrame([ {'x': 1, 'y': 3, 'label': 'the one'} ]) base = ( p9.ggplot(data, p9.aes(x='x', y='y')) + p9.geom_line(color='blue') + p9.theme_bw() ) annotations = [ p9.geom_text(mapping=p9.aes(x='x', y='y', label='label'), data=events), p9.geom_vline(mapping=p9.aes(xintercept='x'), data=events, color='red') ] base + annotations Hence, you can rewrite your altair functions using plotnine like so: def make_base_plot(data, color = 'blue'): return ( p9.ggplot(data, p9.aes(x='x', y='y')) + p9.geom_line(color=color) + p9.theme_bw() ) def make_annotations(events, color = 'red'): return [ p9.geom_text(mapping=p9.aes(x='x', y='y', label='label'), data=events), p9.geom_vline(mapping=p9.aes(xintercept='x'), data=events, color=color) ] make_base_plot(data, 'red') +\ make_annotations(events, 'blue') | 3 | 2 |
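Because the annotation part is just a plain Python list, it also composes with ordinary conditionals. A small usage sketch (show_events is a hypothetical flag; this assumes adding an empty list is a no-op in your plotnine version):

# show_events is decided by the caller, not part of the original answer
show_events = True
plot = make_base_plot(data) + (make_annotations(events) if show_events else [])
plot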
78,379,854 | 2024-4-24 | https://stackoverflow.com/questions/78379854/count-matches-with-multiple-options | I have a frame like this: src = pl.DataFrame( { "c1": ["a", "b", "c", "d"], "c2": [[0], [2, 3, 4], [3, 4, 7, 9], [3, 9]], } ) ... and a list of targets: targets = pl.Series([3, 7, 9]) ... and I want to count the number of targets in "c2": dst = pl.DataFrame( { "c1": ["a", "b", "c", "d"], "c2": [[0], [2, 3, 4], [3, 4, 7, 9], [3, 9]], "match_count": [0, 1, 3, 2], } ) What is the most efficient way to do it? I see count_matches, but it does not work with multiple options: df["c"].list.count_matches(3) # OK. df["c"].list.count_matches([3, 7, 9]) # No way. | You could use pl.Expr.list.eval to evaluate for each element in the list column, whether it is contained in target. Then, the sum of the boolean list gives the number of matches. dst.with_columns( pl.col("c2").list.eval(pl.element().is_in(targets)).list.sum().alias("res") ) shape: (4, 4) βββββββ¬ββββββββββββββ¬ββββββββββββββ¬ββββββ β c1 β c2 β match_count β res β β --- β --- β --- β --- β β str β list[i64] β i64 β u32 β βββββββͺββββββββββββββͺββββββββββββββͺββββββ‘ β a β [0] β 0 β 0 β β b β [2, 3, 4] β 1 β 1 β β c β [3, 4, β¦ 9] β 3 β 3 β β d β [3, 9] β 2 β 2 β βββββββ΄ββββββββββββββ΄ββββββββββββββ΄ββββββ Similarly, you could use pl.Expr.list.eval to filter column c2 and, then, use pl.Expr.list.len to see how many values remain. If this is performance critical, I'd suggest benchmarking both approaches. Alternatively, you are really interested in the number of elements in the set intersection (as mentioned by @jqurious) or the elements in at least one list are unique, you can use pl.Expr.list.set_intersection. dst.with_columns( pl.col("c2") .list.set_intersection(targets.to_list()) .list.len() .alias("res_intersection") ) shape: (4, 4) βββββββ¬ββββββββββββββ¬ββββββββββββββ¬βββββββββββββββββββ β c1 β c2 β match_count β res_intersection β β --- β --- β --- β --- β β str β list[i64] β i64 β u32 β βββββββͺββββββββββββββͺββββββββββββββͺβββββββββββββββββββ‘ β a β [0] β 0 β 0 β β b β [2, 3, 4] β 1 β 1 β β c β [3, 4, β¦ 9] β 3 β 3 β β d β [3, 9] β 2 β 2 β βββββββ΄ββββββββββββββ΄ββββββββββββββ΄βββββββββββββββββββ | 2 | 2 |
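The filter-then-count variant mentioned in the answer, written out as a sketch (it gives the same result as is_in + sum, including when list values repeat):

import polars as pl

targets = [3, 7, 9]
# src as defined in the question
out = src.with_columns(
    pl.col("c2")
      .list.eval(pl.element().filter(pl.element().is_in(targets)))
      .list.len()
      .alias("match_count")
)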
78,378,343 | 2024-4-24 | https://stackoverflow.com/questions/78378343/selecting-checkboxes-that-are-wrapped-with-a-and-i-tags | I'm trying to check if the checkbox is selected using is_selected() method. I understand that since the element I am trying to validate is not a legit checkbox, it's not right to validate it this way. It's pretty obvious because it's wrapped in an anchor(<a>) and italic tag(<i>). I've tried different variants of XPaths such as: //input[@name='non_stop']/preceding-sibling::i //input[@name='non_stop']/parent::a //a[@title='Non Stop Flights']/i[contains(@class, 'ico ico-checkbox')] (possibly the most wrong one here) But is there a workaround to verify this? Here's my code: driver = webdriver.Chrome() driver.get("https://www.yatra.com/") non_stop_flights = driver.find_element( By.XPATH, "//input[@name='non_stop']/preceding-sibling::i") print("Before clicking") print(non_stop_flights.is_selected()) # Expecting False here non_stop_flights.click() sleep(3) print("After clicking") print(non_stop_flights.is_selected()) # Expecting True here But I am still getting False as the output in the last line. How to make it recognize that the checkbox has been checked? | The problem is that the non-stop INPUT isn't actually storing checked state. It's being stored in the I tag right above it but inside the A tag. <a for="BE_flight_non_stop" title="Non Stop Flights" class="custom-check"> -> <i class="ico ico-checkbox"></i> <input data-trackcategory="Home Page" data-trackaction="Booking Engine" data-trackvalue="Non Stop Flight - Checked/Unchecked" type="checkbox" name="non_stop" id="BE_flight_non_stop" class="eventTrackable js-prodSpecEvtCat"> Non Stop Flights </a> The checked state is tracked using a class on the I tag... Not checked <i class="ico ico-checkbox"></i> Checked <i class="ico ico-checkbox ico-checkbox-checked"></i> ^^^^^^^^^^^^^^^^^^^^ checked state With this info, we can make a quick method to determine the checked state of these fields def is_checked(e): """ Returns a True if the checkbox is checked, False otherwise. Parameters: e (WebElement): The I tag for the checkbox """ return "ico-checkbox-checked" in e.get_attribute("class") and then call it from your original (modified) script driver = webdriver.Chrome() driver.get("https://www.yatra.com/") non_stop_flights = driver.find_element(By.CSS_SELECTOR, "a[title='Non Stop Flights'] i") print("Before clicking") print(is_checked(non_stop_flights)) # Expecting False here non_stop_flights.click() print("After clicking") print(is_checked(non_stop_flights)) # Expecting True here and it works as expected. | 2 | 1 |
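To avoid the fixed sleep(3), the same class check can be polled with an explicit wait. A sketch that assumes the is_checked helper above is in scope:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

def wait_until_checked(driver, timeout=10):
    # re-locate the <i> element on every poll and return once the checked class appears
    locator = (By.CSS_SELECTOR, "a[title='Non Stop Flights'] i")
    WebDriverWait(driver, timeout).until(
        lambda d: is_checked(d.find_element(*locator))
    )

non_stop_flights.click()
wait_until_checked(driver)
print(is_checked(driver.find_element(By.CSS_SELECTOR, "a[title='Non Stop Flights'] i")))  # True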
78,378,769 | 2024-4-24 | https://stackoverflow.com/questions/78378769/how-to-authenticate-on-msgraph-graphserviceclient | There is no documentation for do this, I found it unacceptable. from msal import ConfidentialClientApplication from msgraph import GraphServiceClient client_id = '' client_secret = '' tenant_id = '' authority = f'https://login.microsoftonline.com/{tenant_id}' scopes = ['https://graph.microsoft.com/.default'] app = ConfidentialClientApplication( client_id, authority=authority, client_credential=client_secret, ) response = app.acquire_token_for_client(scopes) graph_client = GraphServiceClient( credentials=response, scopes=scopes ) await graph_client.users.get() /usr/local/lib/python3.10/dist-packages/kiota_authentication_azure/azure_identity_access_token_provider.py in get_authorization_token(self, uri, additional_authentication_context) 101 ) 102 else: --> 103 result = self._credentials.get_token(*self._scopes, claims=decoded_claim) 104 105 if inspect.isawaitable(result): > AttributeError: 'dict' object has no attribute 'get_token' Analyzing the stack, you can see that the object passed as credentials in the `msgraph` client is not what it expects; `acquire_token_for_client` returns a dictionary, but `GraphServiceClient` expects it to have a function called "get_token." How can this be resolved? | I registered one Entra ID application and granted User.Read.All permission of Application type as below: Initially, I too got same error when I ran your code to fetch list of users like this: from msal import ConfidentialClientApplication from msgraph import GraphServiceClient client_id = '' client_secret = '' tenant_id = '' authority = f'https://login.microsoftonline.com/{tenant_id}' scopes = ['https://graph.microsoft.com/.default'] app = ConfidentialClientApplication( client_id, authority=authority, client_credential=client_secret, ) response = app.acquire_token_for_client(scopes) graph_client = GraphServiceClient( credentials=response, scopes=scopes ) result = await graph_client.users.get() print(result) Response: To resolve the error, make use of below modified code that uses client credentials flow to authenticate with MS Graph and listed users successfully: import asyncio from azure.identity import ClientSecretCredential from msgraph import GraphServiceClient tenant_id = "tenantID" client_id = "appID" client_secret = "secret" credential = ClientSecretCredential( tenant_id=tenant_id, client_id=client_id, client_secret=client_secret ) client = GraphServiceClient(credential) async def main(): result = await client.users.get() users = result.value for user in users: print("User ID:", user.id) print("User Display Name:", user.display_name) print("-" * 50) # Separating each user with a line asyncio.run(main()) Response: References: List users - Microsoft Graph GitHub - microsoftgraph/msgraph-sdk-python | 2 | 2 |
78,378,241 | 2024-4-24 | https://stackoverflow.com/questions/78378241/how-can-i-know-the-coincidences-in-2-list-in-python-order-matters-but-when-1-fa | I have 2 python lists to compare. list1 = ['13.3. Risk', '13.3.1. Process', 'Change'] list2 = ['Change', '13.3. Risk', '13.3.1. Process'] I want to know how exact the order of elements is. If I go item by item, the coincidence itΒ΄s 0 since the first one fails. But if you look carefully, just fails the first element. And the rest are in order. So the coincidence, or better explained: accuracy/precision is 66.66% I have tried 3 things: Element by element coincidences= [i == j for i, j in zip(list1, list2)] percentaje= 100 * sum(coincidences) / len(list1) This results on 0% in this example. Levenstein distance I convert list to string with join and calculate levenstein distance from Levenshtein import distance str1 = ','.join(list1) str2 = ','.join(list2) lev_dist = distance(str1, str2) percentaje= 100 * (1 - lev_dist / max(len(str1), len(str2))) This results on 39.80582524271845% Spearman Coef from scipy.stats import spearmanr pos_list1 = {elem: i for i, elem in enumerate(list1)} range_list2 = [pos_list1 [elem] for elem in list2] coef, p_valor = spearmanr(list(range(len(list1))), rango_lista2) print(f'Spearman coef is: {coef}') This results on -0.5 So as you see, I dont get the expected 66.66% Is there another way of doing this? | May calculate Levenshtein distance between lists itself, not their concatenations: lev_dist = distance(list1, list2) percentage = 100 * (1 - lev_dist / (len(list1) + len(list2))) shows 66.66666666666666 | 2 | 2 |
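As a standard-library cross-check that needs no Levenshtein dependency, difflib.SequenceMatcher also compares lists of strings element-wise and happens to give the same 66.7% on this example (the two measures are not guaranteed to agree on every input):

from difflib import SequenceMatcher

list1 = ['13.3. Risk', '13.3.1. Process', 'Change']
list2 = ['Change', '13.3. Risk', '13.3.1. Process']

ratio = SequenceMatcher(None, list1, list2).ratio()  # 2*M/T over whole list elements
print(100 * ratio)  # 66.66...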
78,377,954 | 2024-4-24 | https://stackoverflow.com/questions/78377954/groupby-mean-if-condition-is-true | I got the following dataframe: index user default_shipping_cost category shipping_cost shipping_coalesce estimated_shipping_cost 0 0 1 1 clothes NaN 1.0 6.0 1 1 1 1 electronics 2.0 2.0 6.0 2 2 1 15 furniture NaN 15.0 6.0 3 3 2 15 furniture NaN 15.0 15.0 4 4 2 15 furniture NaN 15.0 15.0 Per user, combine shipping_cost with default_shipping_cost and calculate the mean of the combined shipping_costs but only if there is at least one shipping_cost given. Explanation: user_1 shipping_cost is given (at least once) so we can calculate the mean user_2 there are no shipping_cost, so I would like to go with Nan Code: import pandas as pd pd.set_option("display.max_columns", None) pd.set_option("display.max_rows", None) pd.set_option('display.width', 1000) df = pd.DataFrame( { 'user': [1, 1, 1, 2, 2], 'default_shipping_cost': [1, 1, 15, 15, 15], 'category': ['clothes', 'electronics', 'furniture', 'furniture', 'furniture'], 'shipping_cost': [None, 2, None, None, None] } ) df.reset_index(inplace=True) df['shipping_coalesce'] = df.shipping_cost.combine_first(df.default_shipping_cost) dfg_user = df.groupby(['user']) df['estimated_shipping_cost'] = dfg_user['shipping_coalesce'].transform("mean") print(df) Expected output: index user default_shipping_cost category shipping_cost shipping_coalesce estimated_shipping_cost 0 0 1 1 clothes NaN 1.0 6.0 1 1 1 1 electronics 2.0 2.0 6.0 2 2 1 15 furniture NaN 15.0 6.0 3 3 2 15 furniture NaN 15.0 NaN 4 4 2 15 furniture NaN 15.0 NaN | Add an extra condition with transform('any') and where: df['estimated_shipping_cost'] = (dfg_user['shipping_coalesce'].transform('mean') .where(dfg_user['shipping_cost'].transform('any')) ) Output: index user default_shipping_cost category shipping_cost shipping_coalesce estimated_shipping_cost 0 0 1 1 clothes NaN 1.0 6.0 1 1 1 1 electronics 2.0 2.0 6.0 2 2 1 15 furniture NaN 15.0 6.0 3 3 2 15 furniture NaN 15.0 NaN 4 4 2 15 furniture NaN 15.0 NaN Intermediate: dfg_user['shipping_cost'].transform('any') 0 True 1 True 2 True 3 False 4 False Name: shipping_cost, dtype: bool | 2 | 1 |
78,377,078 | 2024-4-24 | https://stackoverflow.com/questions/78377078/generating-combinations-in-pandas-dataframe | I have a dataset with ["Uni", 'Region', "Profession", "Level_Edu", 'Financial_Base', 'Learning_Time', 'GENDER'] columns. All values in ["Uni", 'Region', "Profession"] are filled while ["Level_Edu", 'Financial_Base', 'Learning_Time', 'GENDER'] always contain NAs. For each column with NAs there are several possible values Level_Edu = ['undergrad', 'grad', 'PhD'] Financial_Base = ['personal', 'grant'] Learning_Time = ["morning", "day", "evening"] GENDER = ['Male', 'Female'] I want to generate all possible combinations of ["Level_Edu", 'Financial_Base', 'Learning_Time', 'GENDER'] for each observation in the initial data. So that each initial observation would be represented by 36 new observations (obtained by the formula of combinatorics: N1 * N2 * N3 * N4, where Ni is the length of the i-th vector of possible values for a column) Here is a Python code for recreating two initial observations and approximation of the result I desire to get (showing 3 combinations out of 36 for each initial observation I want). import pandas as pd import numpy as np sample_data_as_is = pd.DataFrame([["X1", "Y1", "Z1", np.nan, np.nan, np.nan, np.nan], ["X2", "Y2", "Z2", np.nan, np.nan, np.nan, np.nan]], columns=["Uni", 'Region', "Profession", "Level_Edu", 'Financial_Base', 'Learning_Time', 'GENDER']) sample_data_to_be = pd.DataFrame([["X1", "Y1", "Z1", "undergrad", "personal", "morning", 'Male'], ["X2", "Y2", "Z2", "undergrad", "personal", "morning", 'Male'], ["X1", "Y1", "Z1", "grad", "personal", "morning", 'Male'], ["X2", "Y2", "Z2", "grad", "personal", "morning", 'Male'], ["X1", "Y1", "Z1", "undergrad", "grant", "morning", 'Male'], ["X2", "Y2", "Z2", "undergrad", "grant", "morning", 'Male']], columns=["Uni", 'Region', "Profession", "Level_Edu", 'Financial_Base', 'Learning_Time', 'GENDER']) | You can combine itertools.product and a cross-merge: from itertools import product data = {'Level_Edu': ['undergrad', 'grad', 'PhD'], 'Financial_Base': ['personal', 'grant'], 'Learning_Time': ['morning', 'day', 'evening'], 'GENDER': ['Male', 'Female']} out = (sample_data_as_is[['Uni', 'Region', 'Profession']] .merge(pd.DataFrame(product(*data.values()), columns=data.keys()), how='cross') ) Output: Uni Region Profession Level_Edu Financial_Base Learning_Time GENDER 0 X1 Y1 Z1 undergrad personal morning Male 1 X1 Y1 Z1 undergrad personal morning Female 2 X1 Y1 Z1 undergrad personal day Male 3 X1 Y1 Z1 undergrad personal day Female 4 X1 Y1 Z1 undergrad personal evening Male .. .. ... ... ... ... ... ... 67 X2 Y2 Z2 PhD grant morning Female 68 X2 Y2 Z2 PhD grant day Male 69 X2 Y2 Z2 PhD grant day Female 70 X2 Y2 Z2 PhD grant evening Male 71 X2 Y2 Z2 PhD grant evening Female [72 rows x 7 columns] If you want the specific order of rows/columns from your expected output: cols = ['Uni', 'Region', 'Profession'] out = (pd.DataFrame(product(*data.values()), columns=data.keys()) .merge(sample_data_as_is[cols], how='cross') [cols+list(data)] ) Output: Uni Region Profession Level_Edu Financial_Base Learning_Time GENDER 0 X1 Y1 Z1 undergrad personal morning Male 1 X2 Y2 Z2 undergrad personal morning Male 2 X1 Y1 Z1 undergrad personal morning Female 3 X2 Y2 Z2 undergrad personal morning Female 4 X1 Y1 Z1 undergrad personal day Male .. .. ... ... ... ... ... ... 
67 X2 Y2 Z2 PhD grant day Female 68 X1 Y1 Z1 PhD grant evening Male 69 X2 Y2 Z2 PhD grant evening Male 70 X1 Y1 Z1 PhD grant evening Female 71 X2 Y2 Z2 PhD grant evening Female [72 rows x 7 columns] | 5 | 3 |
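An equivalent way to build the options frame without itertools, using MultiIndex.from_product (a sketch; the row order of the result can differ from the variants above):

import pandas as pd

# data and sample_data_as_is as defined above
options = (
    pd.MultiIndex.from_product(list(data.values()), names=list(data.keys()))
      .to_frame(index=False)
)
out = sample_data_as_is[['Uni', 'Region', 'Profession']].merge(options, how='cross')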
78,347,434 | 2024-4-18 | https://stackoverflow.com/questions/78347434/problem-setting-up-llama-2-in-google-colab-cell-run-fails-when-loading-checkpo | I'm trying to use Llama 2 chat (via hugging face) with 7B parameters in Google Colab (Python 3.10.12). I've already obtain my access token via Meta. I simply use the code in hugging face on how to implement the model along with my access token. Here is my code: !pip install transformers from transformers import AutoModelForCausalLM, AutoTokenizer import torch token = "---Token copied from Hugging Face and pasted here---" tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", token=token) model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", token=token) It starts downloading the model but when it reaches Loading checkpoint shards: it just stops running and there is no error: | The issue is with Colab instance running out of RAM. Based on your comments you are using basic Colab instance with 12.7 Gb CPU RAM. For LLama model you'll need: for the float32 model about 25 Gb (but you'll need both cpu RAM and same 25 gb GPU ram); for the bfloat16 model around 13 Gb (and still not enough to fit basic Colab Cpu instance, given that you'll also need to store intermediate calculations from the model); Check this link for the details on the required resources: huggingface.co/NousResearch/Llama-2-7b-chat-hf/discussions/3 Also if you want only to do inference (predictions) on the model I would recommend to use it's quantized 4bit or 8bit versions. Both can be ran on CPU and don't need a lot of memory. | 3 | 5 |
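A sketch of loading the 4-bit variant mentioned at the end of the answer, assuming a Colab GPU runtime with the bitsandbytes and accelerate packages installed (the exact flags below are the usual transformers quantization route, not something stated in the original answer):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

token = "---Token copied from Hugging Face---"
bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # roughly 4 GB of GPU memory for the 7B weights

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", token=token)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on the available GPU
    token=token,
)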
78,364,357 | 2024-4-22 | https://stackoverflow.com/questions/78364357/how-count-occurrences-across-all-columns-in-a-polars-dataframe | I have an imported .csv of number values- I want to sort the dataframe so I end up a list showing how many times each value occurs across the whole dataframe. E.g. 1: 5 2: 0 3: 23 4: 8 I have found how to count the values of a specified column, but I can't find a way to do the same thing for the entire dataframe- I could count the values of each column and then combine them afterward but it is a bit clunky and I was looking for a more elegant solution. This is an example of what I was trying: sort_dataframe = df.select(pl.col("1", "2", "3", "4", "5", "6", "7").value_counts()) Which results in: polars.exceptions.ComputeError: Series length 16 doesn't match the DataFrame height of 26 | TLDR. You can use value_counts after unpivoting the dataframe into a long format. df.unpivot().get_column("value").value_counts() Explanation Let us consider the following example dataframe. import polars as pl df = pl.DataFrame({ "col_1": [1, 2, 3], "col_2": [2, 3, 7], "col_3": [1, 1, 9], }) shape: (3, 3) βββββββββ¬ββββββββ¬ββββββββ β col_1 β col_2 β col_3 β β --- β --- β --- β β i64 β i64 β i64 β βββββββββͺββββββββͺββββββββ‘ β 1 β 2 β 1 β β 2 β 3 β 1 β β 3 β 7 β 9 β βββββββββ΄ββββββββ΄ββββββββ First, we can unpivot the dataframe using pl.DataFrame.unpivot to obtain a single column containing all values. df.unpivot() shape: (9, 2) ββββββββββββ¬ββββββββ β variable β value β β --- β --- β β str β i64 β ββββββββββββͺββββββββ‘ β col_1 β 1 β β col_1 β 2 β β col_1 β 3 β β col_2 β 2 β β col_2 β 3 β β col_2 β 7 β β col_3 β 1 β β col_3 β 1 β β col_3 β 9 β ββββββββββββ΄ββββββββ Finally, we can get the value column as a pl.Series and use pl.Series.value_counts to count the number of occurrences of each value. counts = df.melt().get_column("value").value_counts() shape: (5, 2) βββββββββ¬ββββββββ β value β count β β --- β --- β β i64 β u32 β βββββββββͺββββββββ‘ β 7 β 1 β β 3 β 2 β β 9 β 1 β β 2 β 2 β β 1 β 3 β βββββββββ΄ββββββββ This can also be simply converted to a python dictionary. dict(counts.iter_rows()) {3: 2, 7: 1, 1: 3, 2: 2, 9: 1} | 2 | 2 |
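To also report zero counts for values that never occur (the "2: 0" line in the expected output), the counts can be left-joined onto the full range of possible values. A sketch that assumes the possible values are known in advance:

import polars as pl

# df as in the example above
possible_values = pl.DataFrame({"value": [1, 2, 3, 4, 5, 6, 7, 8, 9]})  # assumed known range
counts = df.unpivot().get_column("value").value_counts()
full_counts = (
    possible_values
      .join(counts, on="value", how="left")
      .with_columns(pl.col("count").fill_null(0))
)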
78,357,791 | 2024-4-20 | https://stackoverflow.com/questions/78357791/how-to-apply-break-system-packages-conditionally-only-when-the-system-pip-py | I am adapting my pip commands to newer versions of Ubuntu (that support PEP 668). Of the available options, the only one that has worked so far (in my specific use case) is to use --break-system-packages at the end of the pip command, as indicated in this answer. That is, change sudo pip install xyz to sudo pip install xyz --break-system-packages. This works on newer versions of Ubuntu but causes an error on older versions (Ubuntu 22.04 LTS) that do not recognize the --break-system-packages option. The error message from pip is: no such option: --break-system-packages pip --version shows: pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10) How can one add conditions around pip commands so that the --break-system-packages option is used only when the pip/Python version is recent enough to recognize it? | You can set an environment variable instead of using --break-system-packages. PIP_BREAK_SYSTEM_PACKAGES=1 pip install xyz should work in both new and older versions of pip. | 8 | 16 |
78,370,120 | 2024-4-23 | https://stackoverflow.com/questions/78370120/how-to-drag-qlineedit-form-one-cell-of-qgridlayout-to-other-cell-in-pyqt | I have gridlayout with four cells (0,0),(0,1),(1,0),(1,1). Every cell is vertical layout with scrollbar. Initially only (0,0) cell contains QLineEdit in it. I want to drag and drop them in any one of the cells. How can I do it? I want to layout the cells like the image contained in the following link. I have tried this code main.py : import sys from PyQt5.QtWidgets import QApplication from mywidgets import MyWidget from myui import Ui_Form class MainWidget(MyWidget, Ui_Form): def __init__(self, *args, **kwargs): super().__init__() self.setupUi(self) self.show() if __name__ == '__main__': app = QApplication(sys.argv) w = MainWidget() sys.exit(app.exec_()) mywidgets.py : from PyQt5.QtWidgets import QLineEdit, QWidget, QScrollArea from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag, QPixmap class MyLineEdit(QLineEdit): def mouseMoveEvent(self, e): if e.buttons() == Qt.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) pixmap = QPixmap(self.size()) self.render(pixmap) drag.setPixmap(pixmap) drag.exec_(Qt.MoveAction) class MyWidget(QWidget): def dragEnterEvent(self, e): e.accept() def dropEvent(self, e): pos = e.pos() widget = e.source() print('accepted but not moved , there is no layout') print('pos : ', pos) print('event.source : ', widget , 'event.source.parent() :', widget.parent()) widget.setParent(None) print('parent : ', self.parent()) widget.show() class MyScrollArea(QScrollArea): pass my PyQt5-designer ui file translated with pyuic5 ; myui.py : from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag class Ui_Form(object): def setupUi(self, Form): Form.setObjectName("Form") Form.resize(698, 635) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(Form.sizePolicy().hasHeightForWidth()) Form.setSizePolicy(sizePolicy) Form.setAcceptDrops(True) self.gridLayout = QtWidgets.QGridLayout(Form) self.gridLayout.setObjectName("gridLayout") self.scrollArea = MyScrollArea(Form) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding) sizePolicy.setHorizontalStretch(1) sizePolicy.setVerticalStretch(1) sizePolicy.setHeightForWidth(self.scrollArea.sizePolicy().hasHeightForWidth()) self.scrollArea.setSizePolicy(sizePolicy) self.scrollArea.setAcceptDrops(True) self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea.setWidgetResizable(True) self.scrollArea.setObjectName("scrollArea") self.scrollAreaWidgetContents = MyWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, -219, 315, 561)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents.setAcceptDrops(True) self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents") self.verticalLayout = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents) self.verticalLayout.setObjectName("verticalLayout") self.lineEdit_1 = MyLineEdit(self.scrollAreaWidgetContents) 
self.lineEdit_1.setObjectName("lineEdit_1") self.verticalLayout.addWidget(self.lineEdit_1) self.lineEdit_2 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_2.setObjectName("lineEdit_2") self.verticalLayout.addWidget(self.lineEdit_2) self.lineEdit_3 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_3.setObjectName("lineEdit_3") self.verticalLayout.addWidget(self.lineEdit_3) self.lineEdit_4 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_4.setObjectName("lineEdit_4") self.verticalLayout.addWidget(self.lineEdit_4) self.lineEdit_5 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_5.setObjectName("lineEdit_5") self.verticalLayout.addWidget(self.lineEdit_5) self.lineEdit_15 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_15.setObjectName("lineEdit_15") self.verticalLayout.addWidget(self.lineEdit_15) self.lineEdit_14 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_14.setObjectName("lineEdit_14") self.verticalLayout.addWidget(self.lineEdit_14) self.lineEdit_13 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_13.setObjectName("lineEdit_13") self.verticalLayout.addWidget(self.lineEdit_13) self.lineEdit_12 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_12.setObjectName("lineEdit_12") self.verticalLayout.addWidget(self.lineEdit_12) self.lineEdit_11 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_11.setObjectName("lineEdit_11") self.verticalLayout.addWidget(self.lineEdit_11) self.lineEdit_10 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_10.setObjectName("lineEdit_10") self.verticalLayout.addWidget(self.lineEdit_10) self.lineEdit_9 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_9.setObjectName("lineEdit_9") self.verticalLayout.addWidget(self.lineEdit_9) self.lineEdit_8 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_8.setObjectName("lineEdit_8") self.verticalLayout.addWidget(self.lineEdit_8) self.lineEdit_7 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_7.setObjectName("lineEdit_7") self.verticalLayout.addWidget(self.lineEdit_7) self.lineEdit_6 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_6.setObjectName("lineEdit_6") self.verticalLayout.addWidget(self.lineEdit_6) self.scrollArea.setWidget(self.scrollAreaWidgetContents) self.gridLayout.addWidget(self.scrollArea, 0, 0, 1, 1) self.scrollArea_2 = MyScrollArea(Form) self.scrollArea_2.setAcceptDrops(True) self.scrollArea_2.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_2.setWidgetResizable(True) self.scrollArea_2.setObjectName("scrollArea_2") self.scrollAreaWidgetContents_2 = MyWidget() self.scrollAreaWidgetContents_2.setGeometry(QtCore.QRect(0, 0, 315, 305)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_2.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_2.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_2.setAcceptDrops(True) self.scrollAreaWidgetContents_2.setObjectName("scrollAreaWidgetContents_2") self.scrollArea_2.setWidget(self.scrollAreaWidgetContents_2) self.gridLayout.addWidget(self.scrollArea_2, 0, 1, 1, 1) self.scrollArea_3 = MyScrollArea(Form) self.scrollArea_3.setAcceptDrops(True) self.scrollArea_3.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_3.setWidgetResizable(True) self.scrollArea_3.setObjectName("scrollArea_3") self.scrollAreaWidgetContents_3 = 
MyWidget() self.scrollAreaWidgetContents_3.setGeometry(QtCore.QRect(0, 0, 315, 304)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_3.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_3.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_3.setAcceptDrops(True) self.scrollAreaWidgetContents_3.setObjectName("scrollAreaWidgetContents_3") self.scrollArea_3.setWidget(self.scrollAreaWidgetContents_3) self.gridLayout.addWidget(self.scrollArea_3, 1, 0, 1, 1) self.scrollArea_4 = MyScrollArea(Form) self.scrollArea_4.setAcceptDrops(True) self.scrollArea_4.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_4.setWidgetResizable(True) self.scrollArea_4.setObjectName("scrollArea_4") self.scrollAreaWidgetContents_4 = MyWidget() self.scrollAreaWidgetContents_4.setGeometry(QtCore.QRect(0, 0, 315, 304)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_4.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_4.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_4.setAcceptDrops(True) self.scrollAreaWidgetContents_4.setObjectName("scrollAreaWidgetContents_4") self.scrollArea_4.setWidget(self.scrollAreaWidgetContents_4) self.gridLayout.addWidget(self.scrollArea_4, 1, 1, 1, 1) self.gridLayout.setColumnStretch(1, 1) self.gridLayout.setRowStretch(1, 1) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = QtCore.QCoreApplication.translate Form.setWindowTitle(_translate("Form", "Form")) self.lineEdit_1.setText(_translate("Form", "1")) self.lineEdit_2.setText(_translate("Form", "2")) self.lineEdit_3.setText(_translate("Form", "3")) self.lineEdit_4.setText(_translate("Form", "4")) self.lineEdit_5.setText(_translate("Form", "5")) self.lineEdit_15.setText(_translate("Form", "6")) self.lineEdit_14.setText(_translate("Form", "7")) self.lineEdit_13.setText(_translate("Form", "8")) self.lineEdit_12.setText(_translate("Form", "9")) self.lineEdit_11.setText(_translate("Form", "10")) self.lineEdit_10.setText(_translate("Form", "11")) self.lineEdit_9.setText(_translate("Form", "12")) self.lineEdit_8.setText(_translate("Form", "13")) self.lineEdit_7.setText(_translate("Form", "14")) self.lineEdit_6.setText(_translate("Form", "15")) def mouseMoveEvent(self, e): if e.buttons() == Qt.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) drag.exec_(Qt.MoveAction) from mywidgets import MyLineEdit, MyScrollArea, MyWidget but it is not working: output printed: accepted but not moved , there is no layout pos : PyQt5.QtCore.QPoint(27, 95) event.source : <mywidgets.MyLineEdit object at ........> event.source.parent() : <mywidgets.MyWidget object at ........> parent : <PyQt5.QtWidgets.QWidget object at ......> I want to drag elements from (0,0) cell and drop anyone of the cell such that it will remove from cell (0,0) and add in layout of cell where it dropped. Every cell contains a scrollbar because there can be many elements in each cell. 
I guess is something related to layout and widget , not able in designer to add a layout without widgets in cell (0,1) ,(1.-,0) and (1,1) , and still have problem getting the layout out of a position in PyQt | OK kind of solved the layout problem thanks to an old post I used some time ago that went nearly forgotten answer to How to make QScrollArea working properly Alternatively, you can also do the same from Designer with a small workaround: add any widget (a button, a label, it doesn't matter) to the scrollAreaWidgetContents widget; you actually already have such a widget (the gridLayoutWidget_2 you created inside it), so you could use that for this purpose; right click on the scroll area and select grid layout from the "Lay out" sub menu; remove the widget added before (or the gridLayoutWidget_2 in your case); This is required because Designer doesn't allow to set a layout for a widget until it has at least one child widget. That explain how to create an empty layout in PyQt-Designer . My revised code ; main.py : import sys from PyQt5.QtWidgets import QApplication from mywidgets import MyWidget from myui import Ui_Form class MainWidget(MyWidget, Ui_Form): def __init__(self, *args, **kwargs): super().__init__() self.setupUi(self) self.show() if __name__ == '__main__': app = QApplication(sys.argv) w = MainWidget() sys.exit(app.exec_()) mywidgets.py : from PyQt5.QtWidgets import QLineEdit, QWidget, QScrollArea from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag, QPixmap, QCursor class MyLineEdit(QLineEdit): def dragEnterEvent(self, e): e.ignore() def mouseMoveEvent(self, e): if e.buttons() == Qt.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) pixmap = QPixmap(self.size()) self.render(pixmap) drag.setPixmap(pixmap) drag.exec_(Qt.MoveAction) class MyWidget(QWidget): def dragEnterEvent(self, e): e.accept() def dropEvent(self, e): pos = e.pos() widget = e.source() print('accepted but not moved , there is no layout') print('pos : ', pos, 'mouse : ', QCursor.pos()) print('event.source : ', widget , 'event.source.parent() :', widget.parent()) widget.setParent(None) print('parent : ', self.parent() , self.parent().objectName()) print('self.layput() : ', self.layout()) print('self.objectName() : ', self.objectName()) #widget.show() # not needed its shown in layout if self.layout() != None : self.layout().addWidget(e.source()) class MyScrollArea(QScrollArea): pass my PyQt5-Designer ui file translated with pyuic5 ; myui.py : from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag class Ui_Form(object): def setupUi(self, Form): Form.setObjectName("Form") Form.resize(698, 635) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(Form.sizePolicy().hasHeightForWidth()) Form.setSizePolicy(sizePolicy) Form.setAcceptDrops(True) self.gridLayout = QtWidgets.QGridLayout(Form) self.gridLayout.setObjectName("gridLayout") self.scrollArea = MyScrollArea(Form) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding) sizePolicy.setHorizontalStretch(1) sizePolicy.setVerticalStretch(1) sizePolicy.setHeightForWidth(self.scrollArea.sizePolicy().hasHeightForWidth()) self.scrollArea.setSizePolicy(sizePolicy) self.scrollArea.setAcceptDrops(True) self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) 
self.scrollArea.setWidgetResizable(True) self.scrollArea.setObjectName("scrollArea") self.scrollAreaWidgetContents = MyWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 315, 897)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents.setAcceptDrops(True) self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents") self.verticalLayout = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents) self.verticalLayout.setSpacing(30) self.verticalLayout.setObjectName("verticalLayout") self.lineEdit_1 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_1.setObjectName("lineEdit_1") self.verticalLayout.addWidget(self.lineEdit_1) self.lineEdit_2 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_2.setObjectName("lineEdit_2") self.verticalLayout.addWidget(self.lineEdit_2) self.lineEdit_3 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_3.setObjectName("lineEdit_3") self.verticalLayout.addWidget(self.lineEdit_3) self.lineEdit_4 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_4.setObjectName("lineEdit_4") self.verticalLayout.addWidget(self.lineEdit_4) self.lineEdit_5 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_5.setObjectName("lineEdit_5") self.verticalLayout.addWidget(self.lineEdit_5) self.lineEdit_15 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_15.setObjectName("lineEdit_15") self.verticalLayout.addWidget(self.lineEdit_15) self.lineEdit_14 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_14.setObjectName("lineEdit_14") self.verticalLayout.addWidget(self.lineEdit_14) self.lineEdit_13 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_13.setObjectName("lineEdit_13") self.verticalLayout.addWidget(self.lineEdit_13) self.lineEdit_12 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_12.setObjectName("lineEdit_12") self.verticalLayout.addWidget(self.lineEdit_12) self.lineEdit_11 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_11.setObjectName("lineEdit_11") self.verticalLayout.addWidget(self.lineEdit_11) self.lineEdit_10 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_10.setObjectName("lineEdit_10") self.verticalLayout.addWidget(self.lineEdit_10) self.lineEdit_9 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_9.setObjectName("lineEdit_9") self.verticalLayout.addWidget(self.lineEdit_9) self.lineEdit_8 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_8.setObjectName("lineEdit_8") self.verticalLayout.addWidget(self.lineEdit_8) self.lineEdit_7 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_7.setObjectName("lineEdit_7") self.verticalLayout.addWidget(self.lineEdit_7) self.lineEdit_6 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_6.setObjectName("lineEdit_6") self.verticalLayout.addWidget(self.lineEdit_6) self.scrollArea.setWidget(self.scrollAreaWidgetContents) self.gridLayout.addWidget(self.scrollArea, 0, 0, 1, 1) self.scrollArea_2 = MyScrollArea(Form) self.scrollArea_2.setAcceptDrops(True) self.scrollArea_2.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_2.setWidgetResizable(True) self.scrollArea_2.setObjectName("scrollArea_2") self.scrollAreaWidgetContents_2 = MyWidget() self.scrollAreaWidgetContents_2.setGeometry(QtCore.QRect(0, 0, 
315, 305)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_2.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_2.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_2.setAcceptDrops(True) self.scrollAreaWidgetContents_2.setObjectName("scrollAreaWidgetContents_2") self.verticalLayout_3 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_2) self.verticalLayout_3.setObjectName("verticalLayout_3") self.scrollArea_2.setWidget(self.scrollAreaWidgetContents_2) self.gridLayout.addWidget(self.scrollArea_2, 0, 1, 1, 1) self.scrollArea_3 = MyScrollArea(Form) self.scrollArea_3.setAcceptDrops(True) self.scrollArea_3.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_3.setWidgetResizable(True) self.scrollArea_3.setObjectName("scrollArea_3") self.scrollAreaWidgetContents_3 = MyWidget() self.scrollAreaWidgetContents_3.setGeometry(QtCore.QRect(0, 0, 315, 304)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_3.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_3.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_3.setAcceptDrops(True) self.scrollAreaWidgetContents_3.setObjectName("scrollAreaWidgetContents_3") self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_3) self.verticalLayout_2.setObjectName("verticalLayout_2") self.scrollArea_3.setWidget(self.scrollAreaWidgetContents_3) self.gridLayout.addWidget(self.scrollArea_3, 1, 0, 1, 1) self.scrollArea_4 = MyScrollArea(Form) self.scrollArea_4.setAcceptDrops(True) self.scrollArea_4.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_4.setWidgetResizable(True) self.scrollArea_4.setObjectName("scrollArea_4") self.scrollAreaWidgetContents_4 = MyWidget() self.scrollAreaWidgetContents_4.setGeometry(QtCore.QRect(0, 0, 315, 304)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_4.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_4.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_4.setAcceptDrops(True) self.scrollAreaWidgetContents_4.setObjectName("scrollAreaWidgetContents_4") self.verticalLayout_4 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_4) self.verticalLayout_4.setObjectName("verticalLayout_4") self.scrollArea_4.setWidget(self.scrollAreaWidgetContents_4) self.gridLayout.addWidget(self.scrollArea_4, 1, 1, 1, 1) self.gridLayout.setColumnStretch(1, 1) self.gridLayout.setRowStretch(1, 1) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = QtCore.QCoreApplication.translate Form.setWindowTitle(_translate("Form", "Form")) self.lineEdit_1.setText(_translate("Form", "1")) self.lineEdit_2.setText(_translate("Form", "2")) self.lineEdit_3.setText(_translate("Form", "3")) self.lineEdit_4.setText(_translate("Form", "4")) self.lineEdit_5.setText(_translate("Form", "5")) self.lineEdit_15.setText(_translate("Form", "6")) self.lineEdit_14.setText(_translate("Form", "7")) self.lineEdit_13.setText(_translate("Form", "8")) 
self.lineEdit_12.setText(_translate("Form", "9")) self.lineEdit_11.setText(_translate("Form", "10")) self.lineEdit_10.setText(_translate("Form", "11")) self.lineEdit_9.setText(_translate("Form", "12")) self.lineEdit_8.setText(_translate("Form", "13")) self.lineEdit_7.setText(_translate("Form", "14")) self.lineEdit_6.setText(_translate("Form", "15")) def mouseMoveEvent(self, e): if e.buttons() == Qt.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) drag.exec_(Qt.MoveAction) from mywidgets import MyLineEdit, MyScrollArea, MyWidget Picture of results: Only big problem with this setup is that when QLineEdit-Widgets get dragged and dropped around they are added to the layout at the end of it, don't know how to insert them in the exact position where they are dropped, to me is not an easy problem to solve | 2 | 1 |
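For the remaining problem (dropped line edits always being appended at the end of the target layout), one possible approach is to compare the drop position with the widgets already in the cell's vertical layout and call insertWidget instead of addWidget. This is only a sketch of MyWidget.dropEvent under the assumption that every cell keeps the QVBoxLayout created in Designer:

from PyQt5.QtWidgets import QWidget

class MyWidget(QWidget):
    def dragEnterEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        widget = e.source()
        layout = self.layout()
        if layout is None:
            return
        drop_y = e.pos().y()
        insert_at = layout.count()  # default: append at the end
        for i in range(layout.count()):
            item = layout.itemAt(i).widget()
            if item is not None and drop_y < item.y() + item.height() // 2:
                insert_at = i  # drop point is above the middle of this widget
                break
        widget.setParent(None)
        layout.insertWidget(insert_at, widget)
        e.accept()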
78,374,548 | 2024-4-23 | https://stackoverflow.com/questions/78374548/no-authentication-box-appears-when-authenticating-google-earth-engine-gee-pyth | I am trying to authenticate Google Earth Engine (GEE); the kernel keeps running, but no authentication box appears for me to paste the authentication code into (see the picture below; no authentication box is shown). Has anyone had a similar experience? | You should paste the authentication code into the box that appears at the top of VSCode. This is the same box that Jupyter displays under "the authorization workflow will generate a code which you should paste in the box below". | 2 | 2 |
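If the editor's input box never appears, another option is to force the URL-plus-code flow from the earthengine-api itself; a sketch, assuming your installed version supports the auth_mode argument:

import ee

ee.Authenticate(auth_mode='notebook')  # prints a URL and asks for the verification code inline
ee.Initialize()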
78,364,978 | 2024-4-22 | https://stackoverflow.com/questions/78364978/move-specific-column-information-to-a-new-row-under-the-current-row | Consider this df: data = { 'Name Type': ["Primary", "Primary", "Primary"], 'Full Name': ["John Snow", "Daenerys Targaryen", "Brienne Tarth"], 'AKA': ["Aegon Targaryen", None, None], 'LQAKA': ["The Bastard of Winterfell", "Mother of Dragons", None], 'Other': ["Info", "Info", "Info"]} df = pd.DataFrame(data) I need to move akas and lqakas if they are not None below each Primary name and also assign the Name Type to be AKA or LQAKA. If it is None, no row should be created. There are many other columns like column other that should keep info in the same row as Primary name. So the expected result would be: Name Type Full Name Other Primary John Snow Info AKA Aegon Targaryen LQAKA The Bastard of Winterfell Primary Daenerys Targaryen Info LQAKA Mother of Dragons Primary Brienne Tarth Info | You can melt+dropna+sort_index and post-process: # reshape, remove None, and reorder out = (df.melt(['Name Type', 'Other'], ignore_index=False) .dropna(subset='value') .sort_index(kind='stable', ignore_index=True) .rename(columns={'value': 'Full Name'}) ) # identify rows with "Full Name" m = out['variable'].ne('Full Name') # mask unwanted entries out.loc[m, 'Name Type'] = out.pop('variable') out.loc[m, 'Other'] = '' Variant with stack, that will directly reshape in the desired order and drop the None automatically: out = df.set_index(['Name Type', 'Other']).stack().reset_index(name='Full Name') m = out['level_2'].ne('Full Name') out.loc[m, 'Name Type'] = out.pop('level_2') out.loc[m, 'Other'] = '' Output: Name Type Other Full Name 0 Primary Info John Snow 1 AKA Aegon Targaryen 2 LQAKA The Bastard of Winterfell 3 Primary Info Daenerys Targaryen 4 LQAKA Mother of Dragons 5 Primary Info Brienne Tarth handling many columns Assuming you can identify the 3 columns to melt, you could modify the above approach to: cols = df.columns.difference(['Full Name', 'AKA', 'LQAKA']) out = (df.melt(cols, ignore_index=False) .dropna(subset='value') .sort_index(kind='stable', ignore_index=True) .rename(columns={'value': 'Full Name'}) ) m = out['variable'].ne('Full Name') out.loc[m, 'Name Type'] = out.pop('variable') out.loc[m, cols[1:]] = '' out = out[['Name Type', 'Full Name']+list(cols[1:])] Output: Name Type Full Name Other Other2 Other3 0 Primary John Snow Info info2 info3 1 AKA Aegon Targaryen 2 LQAKA The Bastard of Winterfell 3 Primary Daenerys Targaryen Info info2 info3 4 LQAKA Mother of Dragons 5 Primary Brienne Tarth Info info2 info3 exploding lists if any in the input Adding an explode step: import pandas as pd data = {'Name Type': ["Primary", "Primary", "Primary"], 'Full Name': ["John Snow", "Daenerys Targaryen", "Brienne Tarth"], 'AKA': [["Aegon Targaryen", "Lord Snow"], None, None], 'LQAKA': ["The Bastard of Winterfell", ["Mother of Dragons", "The Unburnt"], None], 'Other': ["Info", "Info", "Info"]} df = pd.DataFrame(data) cols = df.columns.difference(['Full Name', 'AKA', 'LQAKA']) out = (df.melt(cols, ignore_index=False) .dropna(subset='value') .sort_index(kind='stable', ignore_index=True) .rename(columns={'value': 'Full Name'}) .explode('Full Name', ignore_index=True) ) m = out['variable'].ne('Full Name') out.loc[m, 'Name Type'] = out.pop('variable') out.loc[m, cols[1:]] = '' out = out[['Name Type', 'Full Name']+list(cols[1:])] Output: Name Type Full Name Other 0 Primary John Snow Info 1 AKA Aegon Targaryen 2 AKA Lord Snow 3 LQAKA The Bastard of Winterfell 4 Primary Daenerys 
Targaryen Info 5 LQAKA Mother of Dragons 6 LQAKA The Unburnt 7 Primary Brienne Tarth Info | 2 | 3 |
78,368,881 | 2024-4-22 | https://stackoverflow.com/questions/78368881/curve-fitting-with-scipy-failing-to-give-a-correct-fit | I have two NumPy arrays x_data and y_data. When I try to fit my data using a second order step response model and scipy.optimize.curve_fit with this code: import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt x_data = np.array([51.5056, 51.5058, 51.5061, 51.5064, 51.5067, 51.5069, 51.5072, 51.5075, 51.5078, 51.5081, 51.5083, 51.5086, 51.5089, 51.5092, 51.5094, 51.5097, 51.51 , 51.5103, 51.5106, 51.5108, 51.5111, 51.5114, 51.5117, 51.5119, 51.5122, 51.5125, 51.5128, 51.5131, 51.5133, 51.5136, 51.5139, 51.5142, 51.5144, 51.5147, 51.515 , 51.5153, 51.5156, 51.5158, 51.5161, 51.5164, 51.5167, 51.5169, 51.5172, 51.5175, 51.5178, 51.5181, 51.5183, 51.5186, 51.5189, 51.5192, 51.5194, 51.5197, 51.52 , 51.5203, 51.5206, 51.5208, 51.5211, 51.5214, 51.5217, 51.5219, 51.5222, 51.5225, 51.5228, 51.5231, 51.5233, 51.5236, 51.5239, 51.5242, 51.5244, 51.5247, 51.525 , 51.5253, 51.5256, 51.5258, 51.5261, 51.5264, 51.5267, 51.5269, 51.5272, 51.5275, 51.5278, 51.5281, 51.5283, 51.5286, 51.5289, 51.5292, 51.5294, 51.5297, 51.53 , 51.5303, 51.5306, 51.5308, 51.5311, 51.5314, 51.5317, 51.5319, 51.5322, 51.5325, 51.5328, 51.5331, 51.5333, 51.5336, 51.5339, 51.5342]) y_data = np.array([2.99 , 2.998, 3.024, 3.036, 3.038, 3.034, 3.03 , 3.025, 3.02 , 3.016, 3.012, 3.006, 3.003, 3. , 2.997, 2.995, 2.993, 2.99 , 2.989, 2.987, 2.986, 2.985, 2.983, 2.983, 2.982, 2.98 , 2.98 , 2.979, 2.978, 2.978, 2.976, 2.977, 2.976, 2.975, 2.975, 2.975, 2.975, 2.974, 2.975, 2.974, 2.974, 2.974, 2.973, 2.974, 2.973, 2.974, 2.973, 2.974, 2.974, 2.974, 2.974, 2.974, 2.974, 2.974, 2.973, 2.974, 2.974, 2.974, 2.974, 2.974, 2.974, 2.974, 2.974, 2.975, 2.974, 2.975, 2.975, 2.976, 2.976, 2.976, 2.976, 2.975, 2.976, 2.976, 2.977, 2.977, 2.976, 2.977, 2.977, 2.977, 2.977, 2.978, 2.978, 2.978, 2.979, 2.978, 2.978, 2.979, 2.979, 2.979, 2.979, 2.979, 2.98 , 2.98 , 2.98 , 2.98 , 2.98 , 2.981, 2.981, 2.981, 2.981, 2.982, 2.981, 2.982]) # Second order function definition def second_order_step_response(t, K, zeta, omega_n): omega_d = omega_n * np.sqrt(1 - zeta**2) phi = np.arccos(zeta) return K * (1 - (1 / np.sqrt(1 - zeta**2)) * np.exp(-zeta * omega_n * t) * np.sin(omega_d * t + phi)) # Initial Guess of parameters K_guess = y_data.max() - y_data.min() zeta_guess = 0.5 # Typically between 0 and 1 for underdamped systems omega_n_guess = 2 * np.pi / (x_data[1] - x_data[0]) # A rough estimate based on data sampling rate # Fit the model with increased maxfev and parameter bounds params, covariance = curve_fit( second_order_step_response, x_data, y_data, p0=[K_guess, zeta_guess, omega_n_guess], maxfev=5000, # Increase max function evaluations bounds=([0, 0, 0], [np.inf, 1, np.inf]) # Bounds for K, zeta, and omega_n ) K_fitted, zeta_fitted, omega_n_fitted = params # Generate data for the fitted model x_fit = np.linspace(min(x_data), max(x_data), 300) # More points for a smoother line y_fit = second_order_step_response(x_fit, K_fitted, zeta_fitted, omega_n_fitted) # Plot plt.figure(figsize=(10, 6)) plt.scatter(x_data, y_data, color='red', label='Original Data') # Original data plt.plot(x_fit, y_fit, 'blue', label='Fitted Model') # Fitted model plt.title('Fitting Second Order System Step Response to Data') plt.xlabel('Time') plt.ylabel('Y') plt.legend() plt.show() # Display fitted parameters print(f"Fitted Parameters: Gain (K) = {K_fitted}, Damping Ratio (zeta) = {zeta_fitted}, Natural Frequency 
(omega_n) = {omega_n_fitted}") This is the equation I am using to fit the data (the equation and the resulting fit were shown as images). The fitted K (gain) and zeta (damping coefficient) take realistic values, but omega_n_fitted is way too large. What is the problem? Edit 1: Added an image of underdamped and overdamped step responses for context; I am fitting the underdamped case. | COMMENT: The model equation does not appear to be compatible with the data, and the fit is far from correct. A different model equation is considered below, in which the variable t is replaced by a variable x = ln(t - t0). Of course this isn't significant from a physical viewpoint; it is purely mathematical. | 5 | 2 |
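A rough sketch of how that substitution could be wired into curve_fit (this is only an illustration, not the commenter's actual model: the offset t0 and the polynomial form in x are assumptions, and x_data/y_data are the arrays from the question).

import numpy as np
from scipy.optimize import curve_fit

t0 = x_data[0] - 1e-4            # assumed offset so that t - t0 stays positive
x = np.log(x_data - t0)          # the substituted variable x = ln(t - t0)

def model(x, a, b, c):
    # purely empirical low-order polynomial in the transformed variable
    return a + b * x + c * x**2

popt, pcov = curve_fit(model, x, y_data)
print(popt)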
78,373,626 | 2024-4-23 | https://stackoverflow.com/questions/78373626/two-dimensions-array-9x9-of-integers-without-using-numpy-array-subclassing-muta | Few days ago I found the next example of howto implement customs list classes via subclassing the MutableSequence from collections.abc. class TypedList(MutableSequence): def __init__(self, oktypes, *args): self.oktypes = oktypes self.list = list() self.extend(list(args)) def check(self, v): if not isinstance(v, self.oktypes): raise TypeError(v) def __len__(self): return len(self.list) def __getitem__(self, i): return self.list[i] def __delitem__(self, i): del self.list[i] def __setitem__(self, i, v): self.check(v) self.list[i] = v def insert(self, i, v): self.check(v) self.list.insert(i, v) def __str__(self): return str(self.list) Example of use of TypedList: tl = TypedList(int) # ok tl.append(1) # next TypeError will be raised tl.append('1') My Question: I was wondering if there is way of implementing an two-dimensions SudokuArray(MutableSequence) class in similar manner for mananing a sudoku array in a game? For giving you an idea, below you can see a possible implementation (non-operative) of that SudokuArray(MutableSequence): class SudokuArray(MutableSequence): def __init__(self, n_rows=9, n_columns=9, init_value=None): self.n_rows = n_rows self.n_columns = n_columns self.array = [[init_value for _ in range(0, self.n_columns, 1)] for _ in range(0, self.n_rows, 1)] def check(self, row_number, column_number, number): if number in self.get_row_values(row_number) or number in self.get_column_values(column_number) or number in self.get_nonet_values(row_number, column_number)): raise ExistentNumberError("Existent number in row, column or nonet", number, row_number, column_number) def __len__(self): return self.n_rows, self.n_columns def __getitem__(self, row_number, column_number): return self.array[column_number][row_number] def __delitem__(self, row_number, column_number): del self.array[column_number][row_number] def __setitem__(self, row_number, column_number, number): self.check(number, row_number, column_number) self.array[column_number][row_number] = number def insert(self, row_number, column_number, number): self.check(row_number, column_number, number) self.array.insert(row_number, column_number, number) def __str__(self): return str(self.array) def get_row_values(self, row_number): # todo implement get_row_values method pass def get_column_values(self, column_number): # todo implement get_column_values method pass def get_nonet_values(self, row_number, column_number): # todo implement get_nonet_values method pass class ExistentNumberError(Exception): def __init__(self, message, number, row_number, column_number): super().__init__(message) self.number = number self.row_number = row_number self.column_number = column_number Example of use of SudokuArray: sudoku_array = SudokuArray() sudoku_array[0][0] = 1 # next ExistentNumberError will be raised sudoku_array[0][0] = 1 Of course I could use numpy.array but I think our 9x9 array is very simple for using numpy. Also I want to avoid dependencies. Any idea? | There are these issues in your attempt: The if statement in check lacks an opening parenthesis. __len__ should return a number, not a tuple. __getitem__ and similar methods should not take two index arguments, but one. However, this argument can be a tuple (of 2 coordinates) In __setitem__ the call to self.check has arguments in the wrong order. 
A Sudoku always has the same number of rows and columns, so I would not have separate constructor parameters for the number of rows and the number of columns. Just have one parameter. The insert and __delitem__ methods should not be used on a Sudoku: you don't want to change the number of entries on any row. In the below correction I will still include them, as maybe you have a good reason for this? The get_***_values methods were not implemented. Here is the suggested code: from collections.abc import MutableSequence from math import isqrt class SudokuArray(MutableSequence): def __init__(self, size=9, init_value=None): self.size = size self.array = [[init_value for _ in range(self.size)] for _ in range(self.size)] def check(self, row_number, column_number, number): if (number in self.get_row_values(row_number) or number in self.get_column_values(column_number) or number in self.get_nonet_values(row_number, column_number)): raise ExistentNumberError("Existent number in row, column or nonet", number, row_number, column_number) def __len__(self): return self.size * self.size def __getitem__(self, coordinates): return self.array[coordinates[0]][coordinates[1]] def __delitem__(self, coordinates): del self.array[coordinates[0]][coordinates[1]] def __setitem__(self, coordinates, number): self.check(coordinates[0], coordinates[1], number) self.array[coordinates[0]][coordinates[1]] = number def insert(self, coordinates, number): self.check(coordinates[0], coordinates[1], number) self.array[coordinates[0]].insert(coordinates[1], number) def __str__(self): return "\n".join(" ".join(map(str, row)) for row in self.array).replace("None", ".") def get_row_values(self, row_number): return self.array[row_number][:] def get_column_values(self, column_number): return [row[column_number] for row in self.array] def get_nonet_values(self, row_number, column_number): width = isqrt(self.size) row_number -= row_number % width column_number -= column_number % width return [val for row in self.array[row_number:row_number+width] for val in row[column_number:column_number+width]] Example use: sudoku = SudokuArray() sudoku[1, 2] = 9 sudoku[2, 3] = 8 sudoku[5, 5] = 4 sudoku[4, 4] = 3 sudoku[3, 3] = 6 print(sudoku) | 2 | 2 |
78,363,215 | 2024-4-21 | https://stackoverflow.com/questions/78363215/python-multiprocessing-when-i-launch-many-processes-on-a-huge-pandas-data-frame | I am trying to gain execution time with python's multiprocessing library (pool_starmap) on a code that executes the same task in parallel, on the same Pandas DataFrame, but with different call arguments. When I execute this code on a small portion of the data frame and with 10 jobs, everything works just fine. However when I put the whole 100 000 000 lines dataset with 63 jobs (using a cluster computer with 64 CPU cores for this), the code just... freezes. It is running, but not doing anything (I know it because, once every 10 000 task, the code is supposed to print that it is alive). I have searched and found similar issues on the internet, but none of the answers applied to my specific case, so here I am. Minimal Example I have made a minimal, self-sustaining example to reproduce this problem. Let's say to simplify that my data frame has 2 columns; the 1st one being "stores", the other is "price". I want to recover the mean_price for each store. Of course in this specific problem one would just groupBy the dataframe on stores and aggregate over the price but this is a simplification; let's assume that the task can only be done one store at a time (this is my case). Here's what a minimal example looks like: #minimal example #changes according to SIGHUP and Frank Yellin import time import pandas as pd import random as rd import multiprocessing as mp import psutil #RAM usage def create_datafile(nrows): """ create a random pandas dataframe file To visualize this rather simple example, let's say that we are looking at a pool of 0.1*nrows products across different stores, that can have different values of the attribute "price" (in the list "stores"). """ price = [rd.randint(0,300) for i in range(nrows)] stores = [i%(0.1*nrows) for i in range(nrows)] data=zip(stores,price) return pd.DataFrame(data=data, columns=["stores", "price"]) def task(store): global data """ the task we want to accomplish: compute mean price for each store in the dataframe. """ if rd.randint(1,10000)==1: print('I am alive!') product_df = data.groupby('stores', as_index = False).agg(mean_price = ("price", 'mean')) selected_store = product_df[product_df['stores'] == store] #select the mean for a given store return (store, selected_store['mean_price']) def pinit(_df): global data data = _df def get_ram_usage_pct(): #source: https://www.pragmaticlinux.com/2020/12/monitor-cpu-and-ram-usage-in-python-with-psutil/ """ Obtains the system's current RAM usage. :returns: System RAM usage as a percentage. :rtype: float """ return psutil.virtual_memory().percent if __name__ == "__main__": ## nrows=100000000 nb_jobs= 63 print('Creating data...') df = create_datafile(nrows) print('Data created.') print('RAM usage after data creation is {} %'.format(get_ram_usage_pct())) stores_list = [i%(0.1*nrows) for i in range(nrows)] dic_mean={} #launch multiprocessing tasks with starmap tic=time.time() print(f'Max number of jobs: {mp.cpu_count() - 1}') print(f'Running: {min(nb_jobs, mp.cpu_count() - 1)} jobs...') with mp.Pool(initializer=pinit, initargs=(df,), processes=min(nb_jobs, mp.cpu_count() - 1)) as pool: for store,result in pool.map_async(task, stores_list).get(): dic_mean[store] = result[store] toc=time.time() print(f'Processed data in {round((toc-tic)/60,1)} minutes (rounded to 0.1).') #print(dic_mean) #dic_means now contains all the means computed by each program. I am using Python 3.9.2. 
If you launch this code with: nrows = 10000 and nb_jobs = 10, you should not encounter any problem. however with nrows=100000000 and nb_jobs=63, the issue I mentionned should occur. I am rather new to Python Multiprocessing, so any hint would be welcome ! Thanks in advance. | Ok, so thanks to @SIGHUP and @Frank Yellin, I was able to find the issue, so I will share it here if anyone encounters a similar issue. Python seems unable to print anything when there are too many concurrent processes running. The solution, to check if your program is alive, is to make it write into a .txt file, for example. Once there are too many processes, print statements will NOT appear in the Python console. I don't know if the print statements freeze the entire program or if it continues running. However I would suggest removing any print statement to avoid any surprise. Here is a way to make the code from my example work without hassle (beware, 1 000 000 rows or more will take a long time): #minimal example #changes according to SIGHUP and Frank Yellin import time import pandas as pd import random as rd import multiprocessing as mp import psutil #RAM usage import sys def create_datafile(nrows): """ create a random pandas dataframe file To visualize this rather simple example, let's say that we are looking at a pool of 0.1*nrows products across different stores, that can have different values of the attribute "price" (in the list "stores"). """ price = [rd.randint(0,300) for i in range(nrows)] stores = [i%(0.1*nrows) for i in range(nrows)] data=zip(stores,price) return pd.DataFrame(data=data, columns=["stores", "price"]) def task(store): global data global alive_file """ the task we want to accomplish: compute mean price for each store in the dataframe. """ #print('I am alive!',flush=True) DO NOT put a print statement with open(alive_file, 'a') as f: f.write("I am alive !") product_df = data.groupby('stores', as_index = False).agg(mean_price = ("price", 'mean')) selected_store = product_df[product_df['stores'] == store] #select the mean for a given store return (store, selected_store['mean_price']) def pinit(_df, _alive_file): global data global alive_file data = _df alive_file = _alive_file def get_ram_usage_pct(): #source: https://www.pragmaticlinux.com/2020/12/monitor-cpu-and-ram-usage-in-python-with-psutil/ """ Obtains the system's current RAM usage. :returns: System RAM usage as a percentage. :rtype: float """ return psutil.virtual_memory().percent if __name__ == "__main__": ## nrows= int(sys.argv[1]) #number of rows in dataframe nb_jobs= int(sys.argv[2]) #number of jobs print('Creating data...') tic=time.time() df = create_datafile(nrows) toc=time.time() print(f'Data created. 
Took {round((toc-tic)/60,1)} minutes (rounded to 0.1)') print('RAM usage after data creation is {} %'.format(get_ram_usage_pct())) #print(data_df) #create parameters for multiprocessing task stores_list = [(i % (0.1 * nrows),) for i in range(nrows)] #dics_stores=[{} for _ in stores_list] #parameters = [(df, stores_list[i]) for i in range(nrows)] dic_mean={} #launch multiprocessing tasks with starmaps tic=time.time() print(f'Max number of jobs: {mp.cpu_count() - 1}') print(f'Running: {min(nb_jobs, mp.cpu_count() - 1)} jobs...') with mp.Pool(initializer=pinit, initargs=(df,"alive.txt",), processes=min(nb_jobs, mp.cpu_count() - 1)) as pool: for store,result in pool.starmap_async(task, stores_list).get(): dic_mean[store] = result[store] toc=time.time() print(f'Processed data in {round((toc-tic)/60,1)} minutes (rounded to 0.1).') #print(dic_mean) #dic_means now contains all the means computed by each program. Thank you to everyone who took the time to examine my issue and made my code better, helping me identify the true issue. | 2 | 0 |
78,369,755 | 2024-4-23 | https://stackoverflow.com/questions/78369755/polars-compare-two-dataframes-is-there-a-way-to-fail-immediately-on-first-mism | I'm using polars.testing assert_frame_equal method to compare two sorted dataframes containing same columns and below is my code: assert_frame_equal(src_df, tgt_df, check_dtype=False, check_row_order=False) For a dataframe containing 5 million records, it takes long time to report a failure as it compares all the rows between two dataframes. Is there a way that we can make polars to fail immediately and report on first mismatch/failure and stop the execution as we just need to know the first failure. I tried searching through and i'm unable to find any documentation for this requirement. Can someone please help me on this? | The polars.testing.* methods call .to_list() when reporting differences. https://github.com/pola-rs/polars/blob/main/py-polars/polars/testing/asserts/frame.py#L128-L130 I've found this to be a source of significant slowdown when larger data is involved. If you also want error reporting, it seems you need to resort to doing it manually. .arg_true() can be used as part of getting the index of the first mismatch. a = pl.Series(["a", "b", "c", "d"]) b = pl.Series(["a", "b", "e", "f"]) a.ne_missing(b).arg_true() shape: (2,) Series: '' [u32] [ 2 3 ] You can refer to the implementation for the other pre-checks performed, but you could do something similar to: import polars as pl N = 5 src_df = pl.DataFrame({ "a": range(N), "b": list(range(N - 2)) + [42, 42] }).sort(pl.all()) tgt_df = pl.DataFrame({ "a": range(N), "b": range(N) }).sort(pl.all()) """ Insert other equality pre-checks here """ for col in src_df: try: idx = col.ne_missing(tgt_df[col.name]).arg_true().head(1).item() print("LEFT:", src_df[idx]) print("RIGHT:", tgt_df[idx]) break except ValueError: pass LEFT: shape: (1, 2) βββββββ¬ββββββ β a β b β β --- β --- β β i64 β i64 β βββββββͺββββββ‘ β 3 β 42 β βββββββ΄ββββββ RIGHT: shape: (1, 2) βββββββ¬ββββββ β a β b β β --- β --- β β i64 β i64 β βββββββͺββββββ‘ β 3 β 3 β βββββββ΄ββββββ | 2 | 1 |
78,366,661 | 2024-4-22 | https://stackoverflow.com/questions/78366661/i-get-an-empty-array-from-vector-serch-in-mongodb-with-langchain | I have the code: loader = PyPDFLoader("https://arxiv.org/pdf/2303.08774.pdf") data = loader.load() docs = text_splitter1.split_documents(data) vector_search_index = "vector_index" vector_search = MongoDBAtlasVectorSearch.from_documents( documents=docs, embedding=OpenAIEmbeddings(disallowed_special=()), collection=atlas_collection, index_name=vector_search_index, ) query = "What were the compute requirements for training GPT 4" results = vector_search1.similarity_search(query) print("result: ", results) And in results I get an empty array every time. I don't understand what I am doing wrong. This is the link to the LangChain documentation with examples. Information is saved normally in the database, but I cannot search it in this collection. | So I was able to get this to work in MongoDB with the following code: text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150) loader = PyPDFLoader("https://arxiv.org/pdf/2303.08774.pdf") data = loader.load() docs = text_splitter.split_documents(data) DB_NAME = "langchain_db" COLLECTION_NAME = "atlas_collection" ATLAS_VECTOR_SEARCH_INDEX_NAME = "vector_index" MONGODB_ATLAS_CLUSTER_URI = uri = os.environ.get("MONGO_DB_ENDPOINT") client = MongoClient(MONGODB_ATLAS_CLUSTER_URI) MONGODB_COLLECTION = client[DB_NAME][COLLECTION_NAME] vector_search = MongoDBAtlasVectorSearch.from_documents( documents=docs, embedding=OpenAIEmbeddings(disallowed_special=()), collection=MONGODB_COLLECTION, index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME, ) query = "What were the compute requirements for training GPT 4" results = vector_search.similarity_search(query) print("result: ", results) At this point, I did get the same results that you did. Before it would work, I had to create the vector search index and I made sure it was named the same as what is specified in ATLAS_VECTOR_SEARCH_INDEX_NAME. FWIW - It was easier for me to do in Astra DB (I tried this first, because I am a DataStax employee): text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150) loader = PyPDFLoader("https://arxiv.org/pdf/2303.08774.pdf") data = loader.load() docs = text_splitter.split_documents(data) atlas_collection = "atlas_collection" ASTRA_DB_API_ENDPOINT = os.environ.get("ASTRA_DB_API_ENDPOINT") ASTRA_DB_APPLICATION_TOKEN = os.environ.get("ASTRA_DB_APPLICATION_TOKEN") vector_search = AstraDBVectorStore.from_documents( documents=docs, embedding=OpenAIEmbeddings(disallowed_special=()), collection_name=atlas_collection, api_endpoint=ASTRA_DB_API_ENDPOINT, token=ASTRA_DB_APPLICATION_TOKEN, ) query = "What were the compute requirements for training GPT 4" results = vector_search.similarity_search(query) print("result: ", results) Worth noting that Astra DB will create your vector index automatically based on the dimensions of the embedding model. | 3 | 2 |
78,353,162 | 2024-4-19 | https://stackoverflow.com/questions/78353162/odoo-14-binary-field-inside-hr-employee-public | I have a problem with the hr.employee.public model. If I try to add a new field Char, Integer, Many2one, etc. I have no issues. However, if I try to insert a Binary field like this employee_signature = fields.Binary(string='Employee signature', attachment=True, store=True) I always get the following error: psycopg2.errors.UndefinedColumn: column emp.employee_signature does not exist LINE 3: ...l,emp.priv_email,emp.employee_file_name_signature,emp.employ... Where am I going wrong? What could be the problem? | If you inherit hr.employee.public to add a stored field (not x2many fields), you should see the same error message because of the init function which tries to get fields from hr.employee You don't need to set the store attribute to True because it's the default value (if you set store to False, you will not be able to save the attachment). When the attachment attribute is set to True (the default value), Odoo will set the column type of the field to None and ignore updating the database schema To fix this issue, you can use the same logic as image_1920 field (which is an extended Binary field) and add the field to the hr.employee model Example: class HrEmployee(models.Model): _inherit = 'hr.employee' employee_signature = fields.Binary(string='Employee signature') class HREmployeePublic(models.Model): _inherit = "hr.employee.public" employee_signature = fields.Binary(compute="_compute_employee_signature", compute_sudo=True) def _compute_employee_signature(self): for employee in self: employee_id = self.sudo().env['hr.employee'].browse(employee.id) employee.employee_signature = employee_id.employee_signature In the example above the employee signature attachment attribute is set to True and Odoo will not create a column in the employee table, this is not a problem because the same field in the public profile is a non-stored computed field so Odoo will not try to get the field from employee table For more details, check [IMP] hr: Introduce the public employee profile commit | 3 | 1 |
78,371,754 | 2024-4-23 | https://stackoverflow.com/questions/78371754/storing-numpy-array-in-raw-binary-file | How to store a 2D numpy ndarray in raw binary format? It should become a raw array of float32 values, in row-major order, no padding, without any headers. According to the documentation, ndarray.tofile() can store it as binary or textual, but the format argument is a string for textual formatting. And np.save() saves it in .npy format. | with open('out', 'wb') as f: f.write(arr.tobytes()) should do. You may have to add an astype(np.float32) if arr is not already a float32 array. Likewise, if the array is not already in "row major order", you may have to add a .T somewhere. And of course (but I take it you know that very well, if you want to dump a binary representation) you need to be aware of little/big endian order. EDIT: Since you've mentioned tofile, I looked at it. Indeed, that's the same thing. Just don't pass any format (otherwise, indeed, it is a text file): with open('out', 'wb') as f: arr.tofile(f) Not that it is significantly shorter. But it also works. And the doc says explicitly that it does exactly (when no sep, no format is passed) what my 1st solution does. | 2 | 2 |
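A small round-trip sketch to complement the answer (the filename and shape here are just for illustration): since the raw file carries no header, the reader has to supply the dtype and shape itself.

import numpy as np

arr = np.arange(12, dtype=np.float64).reshape(3, 4)

# write as little-endian float32, row-major, no header
arr.astype('<f4').tofile('out')

# read back: dtype and shape must be known out-of-band
back = np.fromfile('out', dtype='<f4').reshape(3, 4)
print(np.allclose(arr, back))   # True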
78,369,655 | 2024-4-23 | https://stackoverflow.com/questions/78369655/scipys-solve-ivp-having-trouble-solving-simple-ivp | I'm attempting to solve a very simple IVP in Python in order to do error analysis: import numpy as np from scipy.integrate import solve_ivp def dh_dt_zerodriver(t, h): return -2 / h t = 50 steps = 10 dt = t / steps t_span = [0, t] t_eval = np.arange(0, t + dt, dt) h0 = 5 sol = solve_ivp(dh_dt_zerodriver, t_span=t_span, y0=[h0], t_eval=t_eval) However, the solution will not compute, as it runs for an indefinite amount of time. I have used solve_ivp to solve more complex IVPs without analytical solutions, but this one seems to have me stumped despite it seemingly being a fairly simple problem. My paper is based on using the RK45 method, but if I must use some different numerical method then it is okay. | The problem is the domain over which you want your solution. Your ODE is h'(t) = -2/h(t) where h(0) = 5. The analytical solution is h(t) = sqrt(25 - 4t). For t < 6.25, the solution is real, but for t >= 6.25, the solution becomes complex. If you were to instead integrate over t = [0, 6], you would quickly get the correct result. | 2 | 4 |
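A minimal sketch of the fix the answer describes (the number of evaluation points is an arbitrary choice): restrict the integration window so it stays inside the region where the solution is real.

import numpy as np
from scipy.integrate import solve_ivp

def dh_dt_zerodriver(t, h):
    return -2 / h

# integrate only up to t = 6, safely below t = 6.25 where h(t) = sqrt(25 - 4t) hits zero
t_end = 6
t_eval = np.linspace(0, t_end, 11)
sol = solve_ivp(dh_dt_zerodriver, t_span=[0, t_end], y0=[5], t_eval=t_eval)

print(sol.y[0])                     # numerical solution
print(np.sqrt(25 - 4 * t_eval))     # analytical values for comparison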
78,368,454 | 2024-4-22 | https://stackoverflow.com/questions/78368454/group-streaks-ending-in-false-and-apply-forward-backward-filling | I have a dataframe, where I want to forward / backward fill, based on a boolean series, df['condition']. A single group consists of a streak of True values including the breaking False entry that separates a streak from the next one. What I mean is very clear when looking at my dataframes. My input looks like this: condition_values = [True, True, True, False, True, True, False, True, True, False, True, True, True] value_values = [0.1, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 0.5, np.nan, 0.9, np.nan] data = {'condition': condition_values, 'value': value_values} df = pd.DataFrame(data) condition value 0 True 0.1 1 True NaN 2 True NaN 3 False NaN 4 True NaN 5 True NaN 6 False NaN 7 True NaN 8 True NaN 9 False 0.5 10 True NaN 11 True 0.9 12 True NaN My desired output looks like this: condition_values = [True, True, True, False, True, True, False, True, True, False, True, True, True] value_values = [0.1,0.1, 0.1, 0.1, np.nan, np.nan, np.nan, 0.5, 0.5, 0.5, 0.9, 0.9, 0.9] data = {'condition': condition_values, 'value': value_values} df2 = pd.DataFrame(data) condition value 0 True 0.1 1 True 0.1 2 True 0.1 3 False 0.1 4 True NaN 5 True NaN 6 False NaN 7 True 0.5 8 True 0.5 9 False 0.5 10 True 0.9 11 True 0.9 12 True 0.9 I have tried to make a bunch of dataframes, split by the Falses, ffill and bfill, and then re-concatenated. There has to be a faster way. I am very willing to take hints instead of the full solution - I'm trying to figure this out one way or another. | You were quite close indeed! You can try as follows: Option 1: ffill().bfill() df['value'] = ( df.groupby( df['condition'].shift().eq(0).cumsum() .where(df['condition']).ffill() )['value'] .apply(lambda x: x.ffill().bfill()) .droplevel(0) ) Option 2: transform(first) If you only have one value per group (as in your example), this can be done even simpler: df['value'] = ( df.groupby( df['condition'].shift().eq(0).cumsum() .where(df['condition']).ffill() )['value'].transform('first') ) Output condition value 0 True 0.1 1 True 0.1 2 True 0.1 3 False 0.1 4 True NaN 5 True NaN 6 False NaN 7 True 0.5 8 True 0.5 9 False 0.5 10 True 0.9 11 True 0.9 12 True 0.9 Explanation Start with Series.shift for df['condition'] and check Series.eq to 0 (as you did). Now, chain Series.where to "reset" False rows to NaN and apply Series.ffill, thus adding them to the previous group. Pass the result to df.groupby, select column 'value'. Option 1: use groupby.apply to apply Series.ffill and Series.bfill to each group. In this case, you need to get rid of the prepended index level with df.droplevel before assignment to df['value']. Option 2: if you only have 1 value, you can simply use groupby.transform and get groupby.first. With default setting, this will return the first non-NaN value. | 2 | 3 |
78,367,794 | 2024-4-22 | https://stackoverflow.com/questions/78367794/ansible-join-flatten-a-dict-of-lists | I have a dict of lists such as here, though the inner data could be any level of complexity (perhaps strings, perhaps dicts, perhaps multi layer nested complex object). my_dict: my_list_a: - a - b - c my_list_b: - a - d - e list_c: - f How do I flatten this structure / join the inner items into a list in the following format through jinja2 (preserving order is not important to me in my current use case but it's always nice when possible). my_combined_list: - a - b - c - a - d - e - f Unfortunately as list comprehensions are out of the question due to lack of support in Jinja, I haven't been able to piece a solution together so far. There are also some extra things I need to do, though most are easily doable on the final list (deduplicate, sort, etc). Something that needs to be done during the join is: select or reject based on a test (e.g. only include lists that start with string: my_ meaning list_c is not added to the end result) I've found many ways for list of dicts, or various more complex special cases, but I can't seem to find anything on making a list of all inner items of a dict of lists of items. | Q: Flatten the inner items into a list. A: The simplest option is my_combined_list: "{{ my_dict.values() | flatten }}" gives my_combined_list: [a, b, c, a, d, e, f] Q: Select or reject based on a test (start string: my_ , e.g. list_c is omitted). A: For example, use the test match allow_match: [my_] my_combined_list: "{{ my_dict | dict2items | selectattr('key', 'match', allow_match|join('|')) | map(attribute='value') | flatten }}" gives my_combined_list: [a, b, c, a, d, e] Example of a complete playbook for testing - hosts: localhost vars: my_dict: my_list_a: [a, b, c] my_list_b: [a, d, e] list_c: [f] my_combined_list: "{{ my_dict.values() | flatten }}" allow_match: [my_] my_combined_lis2: "{{ my_dict | dict2items | selectattr('key', 'match', allow_match|join('|')) | map(attribute='value') | flatten }}" tasks: - debug: var: my_combined_list | to_yaml - debug: var: my_combined_lis2 | to_yaml | 2 | 4 |
78,368,186 | 2024-4-22 | https://stackoverflow.com/questions/78368186/move-data-in-dataframe-based-on-a-value-of-a-different-column | I have a dataframe that looks like this: Put/Call StrikePrice fixedprice floatprice fixedspread floatspread Put 10 0 20 0 0 Put 10 0 20 0 0 nan 0 0 0 13 15 nan 0 0 0 14 16 If the put/call column has the value 'Put', I need to take the value from the strike price column and place it in the fixedspread column, and I need to take the value from the float price column and place it in the float spread column. Once the values are in the correct places I can get rid of the Put/Call column, strike price column, float price column and fixed price column. the output should look something like this: fixedspread floatspread 10 20 10 20 13 15 14 16 | You can directly combine the columns with mask using the underlying numpy array for the right-hand side: out = (df[['fixedspread', 'floatspread']] .mask(df['Put/Call'].eq('Put'), df[['StrikePrice', 'floatprice']].values) ) Output: fixedspread floatspread 0 10 20 1 10 20 2 13 15 3 14 16 | 2 | 2 |
78,367,100 | 2024-4-22 | https://stackoverflow.com/questions/78367100/does-this-code-adding-256-to-a-uint8-fail-in-numpy-2-due-to-nep-50 | The following code works in numpy 1.26.4, but not in numpy 2.0.0rc1, giving OverflowError: Python integer -256 out of bounds for int8 import numpy as np np.array([1], dtype=np.int8) + (-256) Is that expected? Is it due to NEP 50 adoption? If so, which part of NEP 50 mandates this new behavior? | Yeah, that's NEP 50 behavior. Quoting NEP 50: This proposal uses a βweak scalarβ logic. This means that Python int, float, and complex are not assigned one of the typical dtypes, such as float64 or int64. Rather, they are assigned a special abstract DType, similar to the βscalarβ hierarchy names: Integral, Floating, ComplexFloating. When promotion occurs (as it does for ufuncs if no exact loop matches), the other DType is able to decide how to regard the Python scalar. E.g. a UInt16 promoting with an Integral will give UInt16. At no time is the value used to decide the result of this promotion. The value is only considered when it is converted to the new dtype; this may raise an error. Before, the value of the Python int -256 would have been examined to determine what result dtype to use. Now, adding a Python int to data of int8 dtype always converts the Python int to int8 dtype. If the value doesn't fit in int8 dtype, you get this error. | 3 | 4 |
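A small illustration of that promotion rule (behavior as I understand NEP 50; worth verifying on your NumPy build): only Python scalars are "weak", so giving the scalar an explicit NumPy dtype restores ordinary dtype promotion.

import numpy as np

a = np.array([1], dtype=np.int8)

# a + (-256)               # Python int is weak -> converted to int8 -> OverflowError on NumPy 2

print(a + np.int16(-256))  # int8 + int16 promotes to int16, so -256 fits: [-255]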
78,361,389 | 2024-4-21 | https://stackoverflow.com/questions/78361389/pystata-run-stata-instances-in-parallel-from-python | I'm using the pystata package that allows me to run stata code from python, and send data from python to stata and back. The way I understand this, is that there is a single stata instance that is running in the background. I want to bootstrap some code that wraps around the stata code, and I would like to run this in parallel. Essentially, I would like to have something like from joblib import Parallel, delayed import pandas as pd def single_instance(seed): # initialize stata from pystata import config, stata config.init('be') # run some stata code (load a data set and collapse, for example) stata.run('some code') # load stata data to python df = stata.pdataframe_from_data() out = do_something_with_data(df, seed) return out if __name__ == '__main__': seeds = np.arange(1, 100) Parallel(backend='loky', n_jobs=-1)( delayed(single_instance)(seeds[i]) for i in values) where there is some code that is run in parallel, and each thread is initializing its own stata instance in parallel. However, I'm worried that all these parallelized threads are accessing the same stata instance -- can this work as I expect? How should I set this up? joblib.externals.loky.process_executor._RemoteTraceback: """ Traceback (most recent call last): File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 391, in _process_worker call_item = call_queue.get(block=True, timeout=timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/multiprocessing/queues.py", line 122, in get return _ForkingPickler.loads(res) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/externals/cloudpickle/cloudpickle.py", line 649, in subimport __import__(name) File "/usr/local/stata/utilities/pystata/stata.py", line 8, in <module> config.check_initialized() File "/usr/local/stata/utilities/pystata/config.py", line 281, in check_initialized _RaiseSystemException(''' File "/usr/local/stata/utilities/pystata/config.py", line 86, in _RaiseSystemException raise SystemError(msg) SystemError: Note: Stata environment has not been initialized yet. 
To proceed, you must call init() function in the config module as follows: from pystata import config config.init() """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 299, in <module> bootstrap(aggregation='occ') File "test.py", line 277, in bootstrap z = Parallel(backend='loky', n_jobs=-1)( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/parallel.py", line 1098, in __call__ self.retrieve() File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/parallel.py", line 975, in retrieve self._output.extend(job.get(timeout=self.timeout)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/site-packages/joblib/_parallel_backends.py", line 567, in wrap_future_result return future.result(timeout=timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/concurrent/futures/_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/home/x/miniconda3/envs/stata/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result raise self._exception joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable. | Using backend="multiprocessing" as an argument to joblib.Parallel will launch Stata instances in separate processes. | 2 | 1 |
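A sketch of the change described in the answer, reusing the single_instance function and the seeds array from the question (both are assumed to be defined as posted).

from joblib import Parallel, delayed
import numpy as np

seeds = np.arange(1, 100)

# "multiprocessing" starts each worker as a separate process, so every call to
# single_instance() initializes and uses its own Stata session
results = Parallel(backend="multiprocessing", n_jobs=-1)(
    delayed(single_instance)(seed) for seed in seeds
)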
78,362,581 | 2024-4-21 | https://stackoverflow.com/questions/78362581/error-when-trying-to-do-multiturn-chat-with-gemini-pro | This is the code I took from the API docs in order to start multiturn chat model = genai.GenerativeModel("gemini-pro") messages = [{'role':'user', 'parts': ['hello']}] response = model.generate_content(messages) # "Hello, how can I help" messages.append(response.candidates[0].content) messages.append({'role':'user', 'parts': ['How does quantum physics work?']}) response = model.generate_content(messages) But I'm getting the following error: TypeError: Could not create Blob, expected Blob, dict or an Image type(PIL.Image.Image or IPython.display.Image). Got a: <class 'google.ai.generativelanguage_v1beta.types.content.Content'> Value: parts { text: "hello" } I also tried using the send_message feature, but I got the same error after I tried to use send_message the second time. | As a simple modification, how about the following modification? Modified script 1: genai.configure(api_key=apiKey) model = genai.GenerativeModel("gemini-pro") messages = [{"role": "user", "parts": ["hello"]}] response = model.generate_content(messages) messages.append({"role": "model", "parts": [response.text]}) print("--- response 1") print(response.text) messages.append({"role": "user", "parts": ["How does quantum physics work?"]}) response = model.generate_content(messages) print("--- response 2") print(response.text) Modified script 2: As another modification, how about the following modification? Ref import google.generativeai as genai apiKey = "###" # If you use API key, please set your API key. genai.configure(api_key=apiKey) model = genai.GenerativeModel("gemini-pro") chat = model.start_chat() def get_chat_response(chat: genai.ChatSession, prompt: str) -> str: response = chat.send_message(prompt) return response.text prompt = "hello" print(get_chat_response(chat, prompt)) prompt = "How does quantum physics work?" print(get_chat_response(chat, prompt)) | 2 | 0 |
78,363,045 | 2024-4-21 | https://stackoverflow.com/questions/78363045/return-null-when-multiple-values-has-same-mode | Sharing one code sample. For column b, as you can see, both values {1, 2} have the same frequency {2}, so mode returns both values, but I want null, which logically means that it is unable to find one unique value that occurs most often. import polars as pl if __name__ == '__main__': df = pl.DataFrame( { "a": [1, 1, 2, 3], "b": [1, 1, 2, 2],}) # print(df) print(df.select(pl.col("b").mode())) | You could try using a pl.when().then() construct. df.select( pl.when( pl.col("b").mode().len() == 1 ).then( pl.col("b").mode() ) ) shape: (2, 1) ┌──────┐ │ b │ │ --- │ │ i64 │ ╞══════╡ │ null │ │ null │ └──────┘ If you prefer a single null, you can append .unique() to the closing parenthesis of .then(). | 2 | 1 |
78,362,424 | 2024-4-21 | https://stackoverflow.com/questions/78362424/how-do-i-print-an-equation-using-sympy | How do I show an equation like this: I can show the left side using display(binomial(n,k)) and the right side using y= factorial(n)/(factorial(n-k) * factorial(k)) display(y) but how do I show the equation? Also, how do I make it show (n-k) instead of (-k+n)? | Make it into an Equality. from sympy import init_session init_session(quiet=True) import sympy as smp n, k = smp.symbols("n k") eq = smp.Eq(smp.binomial(n, k), smp.factorial(n)/(smp.factorial(n-k) * smp.factorial(k))) eq The variable ordering in displayed result is determined internally by sympy (I believe it is alphabetical). There is a way to adjust this ordering by writing your own printer methods, but it is arguably more work than necessary. So, unless you have a specific reason why -k + n is an issue, I recommend leaving it as it is (mathematically, it is equivalent). | 2 | 3 |
78,361,571 | 2024-4-21 | https://stackoverflow.com/questions/78361571/annotate-decorator-with-paramspec-correctly-using-new-typing-syntax-3-12 | I'm trying to use new type hints from Python 3.12, and suddenly PyCharm highlights some obscure problem about new usage of ParamSpec. import functools from dataclasses import dataclass from typing import Callable from typing import Concatenate @dataclass class Message: text: str type CheckFunc[**P] = Callable[Concatenate[Message, P], None] def check_is_command_not_answer[**P](func: CheckFunc[P]) -> CheckFunc[P]: @functools.wraps(func) def inner(message: Message, *args: P.args, **kwargs: P.kwargs) -> None: if message.text: ... else: return func(message, *args, **kwargs) return inner On return inner PyCharm blames: Expected type '(Message, ParamSpec("P")) -> None', got '(message: Message, ParamSpec("P"), ParamSpec("P")) -> None' instead Here is the screenshot. BTW, I'm using this decorator here and there, and the code works perfectly. Pyright doesn't complain for any issues here either. I cannot spot the problem. Can you? | Your code has no problem. The problem is with PyCharm. Its support for Concatenate et al. is incomplete, as discussed in PY-51766. I'm unable to find a good existing issue for this bug, however. Put a type: ignore or noqa comment there and move on. | 3 | 1 |
78,360,352 | 2024-4-21 | https://stackoverflow.com/questions/78360352/polars-lazyframe-not-returning-specified-schema-order-after-collecting | I have a function which runs in a loop performing calculations on a list of arrays. At a point during the first iteration of the function, a polars lazyframe is initialised. On the following iterations, a new dataframe is specified using the same schema, and the two dataframes are joined row-wise using pl.vstack, and then specified as a lazyframe again. import numpy as np import polars as pl def my_func(): array_list = [np.zeros((1,19))]*2 #this is just for example and not representative of shape of real array. for i, _ in enumerate(array_list): #calculations are done here result = np.zeros((1,19)) #result of calculations (correct shape of real result) if i < 1: result_df = pl.DataFrame(data = result, schema = { 'MDL', 'MVL', 'MWVL', 'RR', 'DET', 'ADL', 'LDL', 'DIV', 'EDL', 'LAM', 'TT', 'LVL', 'EVL', 'AWVL', 'LWVL', 'LWVLI', 'EWVL', 'Ratio_DRR', 'Ratio_LD'}, orient='row').lazy() else: new_df = pl.DataFrame(data=data, schema = { 'MDL', 'MVL', 'MWVL', 'RR', 'DET', 'ADL', 'LDL', 'DIV', 'EDL', 'LAM', 'TT', 'LVL', 'EVL', 'AWVL', 'LWVL', 'LWVLI', 'EWVL', 'Ratio_DRR', 'Ratio_LD' }, orient='row') #append new dataframe to results result_df = result_df.collect().vstack(new_df, in_place=True).lazy() return result_df When returning the dataframe outside the function, the column names are no longer in order, but the data is. e.g result.schema OrderedDict([('LAM', Float64), ('LDL', Float64), ('ADL', Float64), ('DIV', Float64), ('MDL', Float64), ('MWVL', Float64), ('LWVL', Float64), ('MVL', Float64), ('TT', Float64), ('DET', Float64), ('RR', Float64), ('EDL', Float64), ('Ratio_LD', Float64), ('Ratio_DRR', Float64), ('LVL', Float64), ('LWVLI', Float64), ('EWVL', Float64), ('EVL', Float64), ('AWVL', Float64)]) I assume this is due to my naivety about how lazyframes work, but is there a way to enfore the order without renaming the columns? Thanks. | The problem is you're using a set which is an unordered collection in Python. >>> schema = {'MDL', 'MVL', 'MWVL', 'RR', 'DET'} >>> type(schema) set >>> schema {'DET', 'MDL', 'MVL', 'MWVL', 'RR'} # "random" order You want to use a list/tuple instead. There is a LazyFrame constructor, and you can use concat to combine frames (eager or lazy). pl.LazyFrame() pl.concat() frames = [] for ...: df = pl.LazyFrame(...) frames.append(df) return pl.concat(frames) The use of .vstack() and any in_place methods/parameters in Polars is generally not recommended. | 2 | 2 |
78,360,123 | 2024-4-21 | https://stackoverflow.com/questions/78360123/rolling-sum-with-periods-given-from-pandas-column | Trying to calculate in pandas the rolling sum of Values in Column A, with lookback periods given in Column B and results of rolling sums stored in Column C. Index | Column A | Column B || Column C | | -------- | -------- || -------- | 0 | 1 | 1 || 1 | 1 | 2 | 2 || 3 | 2 | 1 | 3 || 4 | 3 | 3 | 2 || 4 | 4 | 2 | 4 || 8 | For example, for last row, the rolling sum should sum the last 4 values from Column A, since 4 is given in column B. Avoiding loops would be optimal. Although a simple task, I haven't managed to come up with a solution. | Since your rolling sums depend on all values, you will have to compute one per window. This can be done using numpy and indexing lookup: import numpy as np idx, vals = pd.factorize(df['B']) df['C'] = np.vstack([ df['A'].rolling(v, min_periods=1).sum() for v in vals ])[idx, np.arange(len(df))] Output: A B C 0 1 1 1.0 1 2 2 3.0 2 1 3 4.0 3 3 2 4.0 4 2 4 8.0 Variant if you want to use different aggregation functions at once (here sum and mean for the demo, although D could also be computed as C/B): import numpy as np idx, vals = pd.factorize(df['B']) df[['C', 'D']] = np.dstack([ df['A'].rolling(v, min_periods=1) .agg(['sum', 'mean']) for v in vals ])[np.arange(len(df)), :, idx] Output: A B C D 0 1 1 1.0 1.000000 1 2 2 3.0 1.500000 2 1 3 4.0 1.333333 3 3 2 4.0 2.000000 4 2 4 8.0 2.000000 Reproducible input: df = pd.DataFrame({'A': [1,2,1,3,2], 'B': [1,2,3,2,4]}) | 2 | 3 |
78,359,610 | 2024-4-20 | https://stackoverflow.com/questions/78359610/split-array-of-integers-into-subarrays-with-the-biggest-sum-of-difference-betwee | I'm trying to find the algorithm efficiently solving this problem: Given an unsorted array of numbers, you need to divide it into several subarrays of length from a to b, so that the sum of differences between the minimum and maximum numbers in each of the subarrays is the greatest. The order of the numbers must be preserved. Examples: a = 3, b = 7 input: [5, 8, 4, 5, 1, 3, 5, 1, 3, 1] answer: [[5, 8, 4], [5, 1, 3], [5, 1, 3, 1]] (diff sum is 12) a = 3, b = 4 input: [1, 6, 2, 2, 5, 2, 8, 1, 5, 6] answer: [[1, 6, 2], [2, 5, 2, 8], [1, 5, 6]] (diff sum is 16) a = 4, b = 5 input: [5, 8, 4, 5, 1, 3, 5, 1, 3, 1, 2] answer: splitting is impossible The only solution I've come up with so far is trying all of the possible subarray combinations. from collections import deque def partition_array(numbers, min_len, max_len): max_diff_subarray = None queue = deque() for end in range(min_len - 1, max_len): if end < len(numbers): diff = max(numbers[0:end + 1]) - min(numbers[0:end + 1]) queue.append(Subarray(previous=None, start=0, end=end, diff_sum=diff)) while queue: subarray = queue.popleft() if subarray.end == len(numbers) - 1: if max_diff_subarray is None: max_diff_subarray = subarray elif max_diff_subarray.diff_sum < subarray.diff_sum: max_diff_subarray = subarray continue start = subarray.end + 1 for end in range(start + min_len - 1, start + max_len): if end < len(numbers): diff = max(numbers[start:end + 1]) - min(numbers[start:end + 1]) queue.append(Subarray(previous=subarray, start=start, end=end, diff_sum=subarray.diff_sum + diff)) else: break return max_diff_subarray class Subarray: def __init__(self, previous=None, start=0, end=0, diff_sum=0): self.previous = previous self.start = start self.end = end self.diff_sum = diff_sum numbers = [5, 8, 4, 5, 1, 3, 5, 1, 3, 1] a = 3 b = 7 result = partition_array(numbers, a, b) print(result.diff_sum) Are there any more time efficient solutions? | First let's solve a simpler problem. Let's run through an array, and give mins and maxes for all windows of fixed size. def window_mins_maxes (size, array): min_values = deque() min_positions = deque() max_values = deque() max_positions = deque() for i, value in enumerate(array): if size <= i: yield (i, min_values[0], max_values[0]) if min_positions[0] <= i - size: min_values.popleft() min_positions.popleft() if max_positions[0] <= i - size: max_values.popleft() max_positions.popleft() while 0 < len(min_values) and value <= min_values[-1]: min_values.pop() min_positions.pop() min_values.append(value) min_positions.append(i) while 0 < len(max_values) and max_values[-1] <= value: max_values.pop() max_positions.pop() max_values.append(value) max_positions.append(i) yield (len(array), min_values[0], max_values[0]) This clearly takes memory O(size). What's less obvious is that it takes time O(n) to process an array of length n. But we can see that with amortized analysis. To each element we'll attribute the cost of checking the possible value that is smaller than it, the cost of some later element checking that it should be removed, and the cost of being added. That accounts for all operations (though this isn't the order that they happen) and is a fixed amount of work per element. Also note that the memory needed for this part of the solution fits within O(n). So far I'd consider this a well-known dynamic programming problem. Now let's make it more challenging. 
We will tackle the partition problem as a traditional dynamic programming problem. We'll build up an array best_weight of the best partition to that point, and prev_index of the start of the previous partition ending just before that point. To build it up, we'll use the above algorithm to take a previous partition and add one of min_len to it. If it is better than the previous, we'll save its information in those arrays. We'll then scan forward from that partition and do that up to max_len. Then we move on to the next possible start of a partition. When we're done we'll find the answer from that code. Here is what that looks like: def partition_array(numbers, min_len, max_len): if max_len < min_len or len(numbers) < min_len: return (None, None) best_weight = [None for _ in numbers] prev_index = [None for _ in numbers] # Need an extra entry for off of the end of the array. best_weight.append(None) prev_index.append(None) best_weight[0] = 0 for i, min_value, max_value in window_mins_maxes(min_len, numbers): window_start_weight = best_weight[i - min_len] if window_start_weight is not None: j = i while j - i < max_len - min_len and j < len(numbers): new_weight = window_start_weight + max_value - min_value if best_weight[j] is None or best_weight[j] < new_weight: best_weight[j] = new_weight prev_index[j] = i - min_len if numbers[j] < min_value: min_value = numbers[j] if max_value < numbers[j]: max_value = numbers[j] j += 1 # And fill in the longest value. new_weight = window_start_weight + max_value - min_value if best_weight[j] is None or best_weight[j] < new_weight: best_weight[j] = new_weight prev_index[j] = i - min_len if best_weight[-1] is None: return (None, None) else: path = [len(numbers)] while prev_index[path[-1]] is not None: path.append(prev_index[path[-1]]) path = list(reversed(path)) partitioned = [numbers[path[i]:path[i+1]] for i in range(len(path)-1)] return (best_weight[-1], partitioned) Note that we do O(1) work for each possible start and length. And so that is time O((max_len + 1 - min_len)*n). And the data structures we used are all bounded above by O(n) in size. Giving the overall efficiency that I promised in the comments. Now let's test it. print(partition_array([5, 8, 4, 5, 1, 3, 5, 1, 3, 1], 3, 7)) print(partition_array([1, 6, 2, 2, 5, 2, 8, 1, 5, 6], 3, 4)) print(partition_array([5, 8, 4, 5, 1, 3, 5, 1, 3, 1, 2], 4, 5)) And the output is: (12, [[5, 8, 4], [5, 1, 3], [5, 1, 3, 1]]) (16, [[1, 6, 2], [2, 5, 2, 8], [1, 5, 6]]) (None, None) | 8 | 9 |
78,359,009 | 2024-4-20 | https://stackoverflow.com/questions/78359009/calculating-min-value-with-condition | A column is added to the dataframe with min value for every item just from columns corresponding from dictionary. How can I add a condition when calculating min value - if the values in the selected columns are greater than the values in the column 'Col7'? import pandas as pd my_dict={'Item1':['Col1','Col3','Col6'], 'Item2':['Col2','Col4','Col6','Col8'] } df=pd.DataFrame({ 'Col0':['Item1','Item2'], 'Col1':[20,25], 'Col2':[89,15], 'Col3':[36,30], 'Col4':[40,108], 'Col5':[55,2], 'Col6':[35,38], 'Col7':[30,20] }) df['min']=df.apply(lambda r:r[[col for col in my_dict.get(r['Col0'], []) if col in r]].min(),axis=1) The result should be: df=pd.DataFrame({ 'Col0':['Item1','Item2'], 'Col1':[20,25], 'Col2':[89,15], 'Col3':[36,30], 'Col4':[40,108], 'Col5':[55,2], 'Col6':[35,38], 'Col7':[30,20], 'min':[35,38] }) | You can get the minimum value and column name by adapting the answer given to your earlier question, passing a default value to min for the case where the condition causes there to be no matching columns: df[['min', 'name']] = df.apply( lambda r:min(((r[col], col) for col in my_dict.get(r['Col0'], []) if col in r and r[col] > r['Col7']), default=(np.nan, '')), axis=1, result_type='expand' ) Output (for your sample data): Col0 Col1 Col2 Col3 Col4 Col5 Col6 Col7 min name 0 Item1 20 89 36 40 55 35 30 35 Col6 1 Item2 25 15 30 108 2 38 20 38 Col6 If we change the condition to r[col] > r['Col7']*1.2, the output is Col0 Col1 Col2 Col3 Col4 Col5 Col6 Col7 min name 0 Item1 20 89 36 40 55 35 30 NaN 1 Item2 25 15 30 108 2 38 20 38.0 Col6 Note I've used NaN and '' as the default value, you can use whatever you choose in their place. | 3 | 1 |
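As a hedged aside, the same conditional minimum can also be computed without a per-row Python loop. This sketch is an alternative to the accepted answer, not part of it; it reuses df and my_dict from the question, and the column handling is an assumption based on the sample data (for which it reproduces min=35/Col6 and min=38/Col6):

```python
import pandas as pd

# Columns that may hold candidate values (everything except the key and threshold columns).
value_cols = [c for c in df.columns if c not in ('Col0', 'Col7')]

# Per-row mask of which columns are allowed by my_dict for that row's item.
mask = pd.DataFrame(
    {c: [c in my_dict.get(item, []) for item in df['Col0']] for c in value_cols},
    index=df.index,
)

# Keep only allowed columns whose value exceeds Col7, NaN elsewhere.
eligible = df[value_cols].where(mask & df[value_cols].gt(df['Col7'], axis=0))

df['min'] = eligible.min(axis=1)       # NaN where no column qualifies
df['name'] = eligible.idxmin(axis=1)   # note: rows with no qualifying column need special handling
```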
78,355,086 | 2024-4-19 | https://stackoverflow.com/questions/78355086/pandas-rolling-closest-value | Suppose we have the following dataframe: timestamp open high low close delta atr last_index bearish bullish_turning_point 2 04-10-2024 01:54:44 18370.00 18377.75 18367.50 18376.00 32 0 1949 False True 5 04-10-2024 03:21:14 18376.50 18383.00 18375.25 18381.25 28 0 3899 False True 7 04-10-2024 04:38:54 18378.50 18386.25 18378.25 18385.50 133 0 5199 False True 9 04-10-2024 05:30:27 18384.00 18389.50 18378.75 18388.25 135 0 6499 False True 12 04-10-2024 06:06:12 18371.00 18378.00 18369.50 18378.00 130 0 8449 False True 14 04-10-2024 06:33:44 18372.25 18383.75 18372.00 18376.25 67 0 9749 False True 18 04-10-2024 07:21:14 18377.50 18387.75 18376.25 18380.00 8 0 12349 False True 22 04-10-2024 07:47:58 18388.00 18396.75 18385.25 18389.50 -30 0 14949 False True 25 04-10-2024 08:06:17 18390.75 18397.00 18387.50 18392.00 -25 0 16899 False True 28 04-10-2024 08:33:32 18384.75 18398.00 18383.25 18394.00 89 0 18849 False True 30 04-10-2024 08:54:35 18391.25 18403.00 18387.75 18399.25 84 0 20149 False True 34 04-10-2024 09:11:15 18388.75 18396.25 18385.75 18392.25 15 0 22749 False True 43 04-10-2024 10:02:22 18343.50 18350.50 18341.25 18350.50 113 0 28599 False True 46 04-10-2024 10:14:44 18352.00 18361.75 18352.00 18360.00 -42 0 30549 False True 49 04-10-2024 10:35:49 18354.00 18361.25 18347.75 18358.00 49 0 32499 False True 52 04-10-2024 10:54:18 18362.25 18372.00 18361.50 18372.00 180 0 34449 False True 56 04-10-2024 11:12:32 18369.25 18379.50 18367.00 18376.50 78 0 37049 False True 59 04-10-2024 11:27:27 18370.00 18376.50 18367.50 18373.25 54 0 38999 False True 65 04-10-2024 12:01:53 18377.75 18388.25 18377.50 18383.25 108 0 42899 False True 73 04-10-2024 12:25:04 18382.00 18386.25 18381.00 18384.75 65 0 48099 False True How can I find the "nearest" close to "open" for each line before? E.g. For line 30 (close: 18399.25) this would be line 25 (open: 18390.75). For line 52 (close: 18372.00) this would be 14 (open: 18372.25) and so on. | If you want to include the current row's open in the closest open values, you can do this: df['nearest'] = [(abs(df.loc[:i, 'open'] - close)).idxmin() for i, close in df['close'].items()] Output: [2, 5, 7, 9, 7, 5, 7, 22, 25, 25, 30, 30, 43, 46, 49, 14, 5, 14, 9, 28] If you don't want to include the current row's open, it gets a little more complicated: df['nearest'] = [0] + [(abs(df.loc[:df.index[i-1], 'open'] - close)).idxmin() for i, close in enumerate(df['close'][1:], 1)] Output: [0, 2, 5, 7, 7, 5, 7, 9, 22, 25, 25, 30, 2, 2, 46, 14, 5, 14, 9, 28] You can substitute np.nan (or anything else) for the value for the first row as desired. The above code implements what is effectively an expanding window. If you want to implement a rolling window, you can use this code: k = 5 # length of window df['nearest'] = [0] * k + [(abs(df.loc[df.index[i-k]:df.index[i-1], 'open'] - close)).idxmin() for i, close in enumerate(df['close'][k:], k)] Output: # k = 5 [0, 0, 0, 0, 0, 5, 7, 9, 22, 25, 25, 30, 28, 43, 46, 34, 34, 56, 59, 65] # k = 8 [0, 0, 0, 0, 0, 0, 0, 0, 22, 25, 25, 30, 12, 14, 46, 28, 28, 56, 34, 34] | 2 | 2 |
78,358,646 | 2024-4-20 | https://stackoverflow.com/questions/78358646/how-to-compare-hierarchy-in-2-pandas-dataframes-new-sample-data-updated | I have 2 dataframes that captured the hierarchy of the same dataset. Df1 is more complete compared to Df2, so I want to use Df1 as the standard to analyze if the hierarchy in Df2 is correct. However, both dataframes show the hierarchy in a bad way so it's hard to know the complete structure row by row. Eg. Company A may have subsidiary: B, C, D, E and the relationship is A owns B owns C owns D owns E. In Df1, it may show: | Ultimate Parent | Parent | Child | | --------------- | ------ |-------| | A | B | C | | B | C | D | --> new | C | D | E | So if you break down to analyze row by row, the same entity can be shown as "Ultimate Parent" or "Child" at the same time, which makes it complicated. On the other hand, as Df2 is incomplete, so it won't have all the data (A, B, C, D, E). It will only contain partial data, eg. A, D, E in this case, so the dataframe will look like this | Ultimate Parent | Parent | Child | | --------------- | ------ |-------| | A | D | E | Now I want to (1) use Df1 to get the correct/complete hierarchy (2) compare and identify the gap between Df1 and Df2. The logic is as following: If A owns B owns C owns D owns E and Df1 looks like this | Ultimate Parent | Parent | Child | | --------------- | ------ |-------| | A | B | C | | C | D | E | I want to add 1 column to put all the related entities together and in order from ultimate parent to child | Ultimate Parent | Parent | Child | Hierarchy | | --------------- | ------ |-------|-------------| | A | B | C |A, B, C, D, E| | C | D | E |A, B, C, D, E| And then compare this Df1 with Df2 and add a column to Df2 to identify the gap. The most ideal (but optional) situation is to have another column stating the reason if it's wrong. | Ultimate Parent | Parent | Child | Right/Wrong| Reason | | --------------- | ------ |-------|------------|-----------------| | A | D | E | Right | | | C | B | A | Wrong | wrong hierarchy | | C | A | B | Wrong | wrong hierarchy | --> new | G | A | B | Wrong | wrong entities | --> new | A | F | G | Wrong | wrong entities | I have tried multiple string matching methods, but I'm stuck in the step and idea where I think order matters but I don't know how to compare strings in order when they're related but scattered in different rows. | Basically, you'll need to build a network graph of df1 to get a comprejhension map of the hierarchies. Once this is done, you need to compare the hierarchies of df2 with those of df1 and finally validate. To do so, you can define function. You'll create a new column hierarchies to df1 and Right/Wrong, Reason to df2. . 
import pandas as pd import networkx as nx data1 = { 'Ultimate Parent': ['A', 'C'], 'Parent': ['B', 'D'], 'Child': ['C', 'E'] } df1 = pd.DataFrame(data1) data2 = { 'Ultimate Parent': ['A', 'C', 'A'], 'Parent': ['D', 'B', 'F'], 'Child': ['E', 'A', 'G'] } df2 = pd.DataFrame(data2) G = nx.DiGraph() for _, row in df1.iterrows(): G.add_edge(row['Parent'], row['Child']) if row['Ultimate Parent'] != row['Parent']: G.add_edge(row['Ultimate Parent'], row['Parent']) def complete_hierarchy(node, graph): descendants = nx.descendants(graph, node) descendants.add(node) return ', '.join(sorted(descendants)) df1['Hierarchy'] = df1['Ultimate Parent'].apply(lambda x: complete_hierarchy(x, G)) def validate_row(row, hierarchy_df, graph): filtered_hierarchy = hierarchy_df[hierarchy_df['Ultimate Parent'] == row['Ultimate Parent']] if filtered_hierarchy.empty: return pd.Series(["Wrong", "wrong entities"]) full_hierarchy = filtered_hierarchy.iloc[0]['Hierarchy'] hierarchy_elements = set(full_hierarchy.split(', ')) if set([row['Parent'], row['Child']]).issubset(graph.nodes()): if row['Parent'] not in hierarchy_elements or row['Child'] not in hierarchy_elements: return pd.Series(["Wrong", "wrong hierarchy"]) elif f"{row['Parent']}, {row['Child']}" not in full_hierarchy: return pd.Series(["Wrong", "wrong hierarchy"]) else: return pd.Series(["Right", ""]) else: return pd.Series(["Wrong", "wrong entities"]) df2[['Right/Wrong', 'Reason']] = df2.apply(lambda row: validate_row(row, df1, G), axis=1) print("Df1 - Complete Hierarchy:") print(df1) print("\nDf2 - Validation Results:") print(df2) Which gives you Df1 - Complete Hierarchy: Ultimate Parent Parent Child Hierarchy 0 A B C A, B, C, D, E 1 C D E C, D, E Df2 - Validation Results: Ultimate Parent Parent Child Right/Wrong Reason 0 A D E Right 1 C B A Wrong wrong hierarchy 2 A F G Wrong wrong entities | 2 | 1 |
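A tiny illustration of the networkx calls the answer builds on, using a made-up ownership chain; the full validation logic is in the answer above:

```python
import networkx as nx

# A owns B owns C owns D owns E, expressed as a directed graph.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])

print(sorted(nx.descendants(G, "A")))   # ['B', 'C', 'D', 'E'] -- everything below A
print(sorted(nx.ancestors(G, "E")))     # ['A', 'B', 'C', 'D'] -- everything above E
print(nx.has_path(G, "A", "E"))         # True -> A is (indirectly) the parent of E
```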
78,359,422 | 2024-4-20 | https://stackoverflow.com/questions/78359422/trying-to-find-pythonic-way-to-partially-fill-numpy-array | I have a numpy array psi of shape (3,1000) psi.__class__ Out[115]: numpy.ndarray psi.shape Out[116]: (3, 1000) I want to partially fill psi with another array b b.__class__ Out[113]: numpy.ndarray b.shape Out[114]: (3, 500) I can do this with a loop: for n in range(3): psi[n][:500] = b[n] But it seems to me that there ought to be a way to do this more directly. But for instance psi[:][:500] = b fails with error Traceback (most recent call last): File "<ipython-input-120-6b23082d9d6b>", line 1, in <module> psi[:][:500] = b ValueError: could not broadcast input array from shape (3,500) into shape (3,1000) I've a few variations on the theme as well, with similar results. This seems pretty straightforward. Any idea how to do it? | You can use: psi[:, :500] = b Or, if n (=3) in your loop does not match the first dimension of b: idx = np.arange(3) psi[idx, :500] = b[idx] Comparing the two approaches (psi1 with the loop, psi2 as vectorial assignment): np.allclose(psi1, psi2) # True | 2 | 2 |
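A short sketch showing the accepted indexing form in isolation, with array shapes assumed to match the question; the chained form psi[:][:500] slices along axis 0 twice (both still give the full (3, 1000) view, since there are only 3 rows), so a (3, 500) array cannot be broadcast into it:

```python
import numpy as np

psi = np.zeros((3, 1000))
b = np.ones((3, 500))

psi[:, :500] = b          # fills the first 500 columns of every row in one assignment
print(psi[:, 498:502])
# [[1. 1. 0. 0.]
#  [1. 1. 0. 0.]
#  [1. 1. 0. 0.]]
```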
78,358,193 | 2024-4-20 | https://stackoverflow.com/questions/78358193/struggling-to-position-objects-correctly-in-manim | My end goal with this program is to simulate constrained pendulums with springs but that is a goal for later, currently I have been trying to learn how object creation, positioning and interactions would work and will eventually build the way up. Currently I have been trying to make my code produce N number of independent pendulums side by side but they all spawn at the same position. I have tried a lot of things from trying to define a method to create multiple pendulums at once to the current iteration where I try to use a for loop for creating and shifting the position of the pendulum but to no avail. import numpy as np import manim as mn class Pendulum: def __init__(self, mass, length, theta): self.mass = mass self.length = length self.g = -9.81 self.angle = theta self.angular_vel = 0 def step(self, dt): # Defining RK4 def runge_kutta_4(f, t, y0, h=None): if h is None: h = t[1] - t[0] n = len(t) y = np.zeros((n, len(y0))) y[0] = y0 for i in range(n - 1): k1 = h * f(t[i], y[i]) k2 = h * f(t[i] + h/2, y[i] + k1/2) k3 = h * f(t[i] + h/2, y[i] + k2/2) k4 = h * f(t[i] + h, y[i] + k3) y[i+1] = y[i] + (k1 + 2*k2 + 2*k3 + k4) / 6 return y def pendulum_equations(t, state): theta, omega = state force = self.mass * self.g * np.sin(theta) torque = force * self.length MoI = 1/3 * self.mass * self.length**2 alpha = torque / MoI return np.array([omega, alpha]) state = np.array([self.angle, self.angular_vel]) t = np.array([0, dt]) sol = runge_kutta_4(pendulum_equations, t, state) self.angle = sol[-1, 0] self.angular_vel = sol[-1, 1] class PhysicalPendulum(mn.Scene): def construct(self): p = Pendulum(2, 10, np.pi/2) N = 3 # change number of pendulums here pendulums = [] scale = 0.5 spacing = 3 # Adjust the spacing between pendulums as needed def get_pendulum(i, rod_width=0.2, rod_height=1): rod = mn.Rectangle(width=rod_width, height=scale * p.length, color=mn.BLUE) rod.shift(mn.DOWN * scale * p.length / 2) rod.rotate(p.angle, about_point=rod.get_top()) pendulum = mn.VGroup(rod) pendulum.shift(mn.UP * 3) # Adjust the vertical shift as needed if i % 2 == 0: pendulum.shift(mn.RIGHT * spacing * i) else: pendulum.shift(mn.LEFT * spacing * i) return pendulum def step(pendulum, dt, i): p.step(dt) pendulum.become(get_pendulum(i)) for i in range(N): pendulum = get_pendulum(i) pendulum.add_updater(lambda mob, dt: step(mob, dt, i)) pendulums.append(pendulum) self.add(*pendulums) self.wait(20) for pendulum in pendulums: pendulum.remove_updater(step) Also this is my first time trying Object Oriented Programming so I would appreciate any tips on how to improve the coding style and any comments on things I have been doing wrong. | This is a classical example of a good question. You were almost complete in your code. 
I made a few change to it, and as I was doing this in google.colab, there might be some things that you'll need to add, import numpy as np import manim as mn class Pendulum: g = -9.81 def __init__(self, mass, length, theta): self.mass = mass self.length = length self.angle = theta self.angular_vel = 0 def step(self, dt): def runge_kutta_4(f, t, y0, h=None): if h is None: h = t[1] - t[0] n = len(t) y = np.zeros((n, len(y0))) y[0] = y0 for i in range(n - 1): k1 = h * f(t[i], y[i]) k2 = h * f(t[i] + h/2, y[i] + k1/2) k3 = h * f(t[i] + h/2, y[i] + k2/2) k4 = h * f(t[i] + h, y[i] + k3) y[i+1] = y[i] + (k1 + 2*k2 + 2*k3 + k4) / 6 return y def pendulum_equations(t, state): theta, omega = state force = self.mass * self.g * np.sin(theta) torque = force * self.length moment_of_inertia = 1/3 * self.mass * self.length**2 alpha = torque / moment_of_inertia return np.array([omega, alpha]) state = np.array([self.angle, self.angular_vel]) t = np.array([0, dt]) sol = runge_kutta_4(pendulum_equations, t, state) self.angle = sol[-1, 0] self.angular_vel = sol[-1, 1] class PhysicalPendulum(mn.Scene): def construct(self): N = 3 pendulums = [Pendulum(2, 10, np.pi/2) for _ in range(N)] scale = 0.5 spacing = 3 def get_pendulum(pendulum, i, rod_width=0.2, rod_height=1): rod = mn.Rectangle(width=rod_width, height=scale * pendulum.length, color=mn.BLUE) rod.shift(mn.DOWN * scale * pendulum.length / 2) rod.rotate(pendulum.angle, about_point=rod.get_top()) pendulum_group = mn.VGroup(rod) pendulum_group.shift(mn.UP * 3) pendulum_group.shift(mn.RIGHT * spacing * i) return pendulum_group def step(pendulum, dt, i): pendulums[i].step(dt) pendulum.become(get_pendulum(pendulums[i], i)) pendulum_groups = [] for i in range(N): pendulum_group = get_pendulum(pendulums[i], i) pendulum_group.add_updater(lambda mob, dt, i=i: step(mob, dt, i)) pendulum_groups.append(pendulum_group) self.add(*pendulum_groups) self.wait(20) for pendulum_group in pendulum_groups: pendulum_group.remove_updater(step) To run it in colab, if you do, you'll have to do a few things. It is rather tricky to install manin there. You need to do this: !pip install manim !apt-get install texlive texlive-latex-extra texlive-fonts-extra texlive-latex-recommended texlive-science dvipng !apt-get install ffmpeg !apt-get install sox !apt-get install libcairo2-dev libjpeg-dev libgif-dev !pip install manim To run the code: %load_ext manim and %%manim -ql -v WARNING PhysicalPendulum Here is a snapshot: | 2 | 1 |
78,358,268 | 2024-4-20 | https://stackoverflow.com/questions/78358268/how-to-remove-indexing-past-lexsort-depth-may-impact-performance | I've a dataframe with a non-unique MultiIndex: A B L1 L2 7.0 7.0 -0.4 -0.1 8.0 5.0 -2.1 1.6 5.0 8.0 -1.8 -0.8 7.0 7.0 0.5 -1.2 NaN -1.1 -0.9 5.0 8.0 0.6 2.3 I want to select some rows using a tuple of values: data = df.loc[(7, 7), :] With no surprise a warning is triggered: PerformanceWarning: indexing past lexsort depth may impact performance. I'm trying to understand what in the current index causes this warning. I've read many answers here, some are related to old versions of pandas, other helped. From what I've read the warning is caused by two properties: The index entries are not unique and The index entries are not sorted. So I'm processing the dataframe index with this function designed from the answers found on this stack: def adjust_index(df): df = df.sort_index() # sort index levels = list(range(len(df.index.levels))) df_idx = df.groupby(level=levels).cumcount() # unique index df_adj = df.set_index(df_idx, append=True) # change index df_adj = df_adj.reset_index(level=-1, drop=True) # drop sorting level return df_adj This doesn't remove the warning. Can you explain what is wrong, useless or missing? The rest of the code: import pandas as pd from numpy import nan, random as npr npr.seed(2) # Dataframe with unsorted MultiIndex def create_df(): n_rows = 6 data = npr.randn(n_rows, 2).round(1) choices = [8, 7, 5, 7, 8, nan] columns = ['A', 'B'] levels = ['L1', 'L2'] tuples = list(zip(npr.choice(choices, n_rows), npr.choice(choices, n_rows))) index = pd.MultiIndex.from_tuples(tuples, names=levels) df = pd.DataFrame(data, index=index, columns=columns) return df df = create_df() df = adjust_index(df) data = df.loc[(7, 7), :] # <-- triggers warning | I got rid of the warning by sorting the index and putting the NaN values first: df.sort_index(inplace=True, na_position="first") data = df.loc[(7, 7), :] print(data) Prints: A B L1 L2 7.0 7.0 -0.4 -0.1 7.0 0.5 -1.2 I think the issue is with the NaN value you have in index. Pandas has special index.codes for each unique value in index and NaN is encoded as -1. So to have sorted index you have to have this -1 value on first position, hence na_position="first" | 4 | 6 |
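A small sketch of the detail the answer points at: NaN is encoded as the level code -1 in a MultiIndex, so a lexsorted index needs those rows at the front (toy index values assumed here):

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_tuples([(8.0, 7.0), (np.nan, 5.0), (7.0, 7.0)],
                                names=['L1', 'L2'])
df = pd.DataFrame({'A': [1, 2, 3]}, index=idx)

print(df.index.codes[0])                                    # e.g. [ 1 -1  0] -- the NaN row carries code -1
print(df.sort_index(na_position="first").index.codes[0])    # [-1  0  1] -- codes now lexsorted
```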
78,354,782 | 2024-4-19 | https://stackoverflow.com/questions/78354782/how-to-use-returns-context-requirescontext-with-async-functions-in-python | I am very fond of the returns library in python, and I would like to use it more. I have a little issue right now. Currently, I have a function that uses a redis client and gets the value corresponding to a key, like so: from redis import Redis from returns.context import RequiresContext def get_value(key: str) -> RequiresContext[str, Redis]: def inner(client: Redis) -> str: value = client.get(key) return value.decode("utf-8") return RequiresContext(inner) Obviously, that function works like a charm: with Redis( host=redis_host, port=redis_port, password=redis_password, ) as redis_client: value = get_value(key="my-key")(redis_client) print("value = ", value) Now, I would like to use the asyncio pendant of that code, i.e. use the redis.asyncio.Redis. Unfortunately, it looks like things become a bit more complicated in that case. I should probably switch from RequiresContext to RequiresContextFutureResultE, but I was not able to find a working solution. Here's the best code I was able to come up with: async def get_value(key: str) -> RequiresContextFutureResultE[str, Redis]: async def inner(client: Redis) -> FutureResultE[str]: value = await client.get(key) return FutureResult.from_value(value.decode("utf-8")) return RequiresContextFutureResultE(inner) When I run it like this: async def main(): async with Redis( host="localhost", port=6379, password="902nks291", db=15, ) as redis_client: rcfr = get_value(key="user-id") value = await rcfr(redis_client) print("value: ", value) asyncio.run(main()) I get the error that rcfr is not a callable. Can someone help me figure out how I should fix my code to make it work the way I want? | If you call a function (get_value) defined with async def you get an awaitable which you must use with await to get its return value. That's why you get the error. But get_value shouldn't be async def. It just defines and returns a function (wrapped by RequiresContextFutureResultE), it doesn't perform any IO itself. | 2 | 2 |
78,357,085 | 2024-4-20 | https://stackoverflow.com/questions/78357085/welford-variance-differs-from-numpy-variance | I want to use Welford's method to compute a running variance and mean. I came across this implementation of Welford's method in Python. However, when testing to double-check that it results in the same output as the standard Numpy implementation of calculating variance, I do find that there is a difference in output. Running the following code (using the python module unittest) shows that the two give different results (even after testing many times): random_sample = np.random.normal(0, 1, 100) std = np.var(random_sample, dtype=np.longdouble) mean = np.mean(random_sample, dtype=np.longdouble) welford = Welford() welford.add_all(random_sample) self.assertAlmostEqual(mean, welford.mean) self.assertAlmostEqual(var, welford.var_s) >> AssertionError: 1.1782075496578717837 != 1.1901086360180526 within 7 places (0.011901086360180828804 difference) Interestingly, there is only a difference in the variance, not the mean. For my purposes, a 0.012 difference is significant enough that it could affect my results. Why would there be such a difference? Could this be due to compounding floating point errors? If so, would my best bet be to rewrite the package to use the Decimal class? | By default, np.var computes the so-called "population variance", in which the number of degrees of freedom is equal to the number of elements in the array. wellford.var_s is the sample variance, in which the number of degrees of freedom is the number of elements in the array minus one. To eliminate the discrepancy, pass ddof=1 to np.var: import numpy as np from welford import Welford random_sample = np.random.normal(0, 1, 100) var = np.var(random_sample, dtype=np.longdouble, ddof=1) welford = Welford() welford.add_all(random_sample) np.testing.assert_allclose(var, welford.var_s) Alternatively, if it is appropriate to use the population variance in your application, use welford.var_p. var = np.var(random_sample, dtype=np.longdouble) np.testing.assert_allclose(var, welford.var_p) For a description of the difference between the two, see the development version of the np.var documentation. | 2 | 6 |
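A quick sketch spelling out the two divisors behind the discrepancy; the correspondence to welford.var_p and welford.var_s follows the answer above:

```python
import numpy as np

x = np.random.normal(0, 1, 100)
n = len(x)
ss = np.sum((x - x.mean()) ** 2)

assert np.isclose(np.var(x), ss / n)                  # population variance (ddof=0), like var_p
assert np.isclose(np.var(x, ddof=1), ss / (n - 1))    # sample variance (ddof=1), like var_s
print(np.var(x, ddof=1) / np.var(x))                  # ratio is n / (n - 1), about 1.0101 for n=100
```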
78,356,546 | 2024-4-19 | https://stackoverflow.com/questions/78356546/how-to-highly-optimize-correlation-calculations-using-pandas-dataframes-and-s | I want to calculate pearson correlations of the columns of a pandas DataFrame. I don't just want to use DataFrame.corr() because I also need the pvalue of the correlation; therefore, I am using scipy.stats.pearsonr(x, y). My problem right now is that my dataframe is huge (shape: (1166, 49262)), so I'm looking at (49262^2-49262)/2 correlations. Please advise on how I can optimize this to reduce the computation time. My code for the correlation: # the variable `data` contains the dataframe of shape (1166, 49262) # setting up output dataframes dfcols = pd.DataFrame(columns=data.columns) correlation = dfcols.T.join(dfcols, how='outer') pvalues = correlation.copy() # pairwise calculation for r in range(len(data.columns)): for c in range(r+1, len(data.columns)): # iterate over all combinations of columns to calculate correlation tmp = input.iloc[:, [r,c]].dropna() if len(tmp) < 2: # too few data points to calculate correlation coefficient result = (0, 1) else: result = pearsonr(tmp.iloc[:, 0], tmp.iloc[:, 1]) correlation.iloc[r, c] = result[0] pvalues.iloc[r, c] = result[1] Some notes: I am also open to suggestions for packages other than scipy; I just need pvalues for the correlations. I think iterating over the columns by integer indexing instead of column name is faster; can someone confirm/deny this? The data frame has a lot of missing data, so I .dropna() and catch cases where less than two data points remain. Simply iterating over the dataframe and extracting the columns in pairs would take over 16.5 days. without even doing any calculations. (Extrapolated from the first 5 full passes using the following code) def foo(): data = load_df() # the pd.DataFrame of shape (1166, 49262) cols = data.columns for i in range(len(cols)): logging.info(f"{i+1}/{len(cols)}") for j in range(i+1, len(cols)): tmp = data.iloc[:, [i, j]].dropna() if len(tmp) < 2: # You may ignore this for this post; I was looking for columns pairs with too few data points to correlate logging.warn(f"correlating columns '{cols[i]}' and '{cols[j]}' results in less than 2 usable data points") foo() I think I could use multithreading to at least use some more threads for correlation calculations. In case someone might deem this important: The data I'm working with is a proteomic dataset with ~50,000 peptides and 1166 patients; I want to correlate the expression of the peptides over all patients in a pairwise fashion. | You can obtain about a 200x speedup by using pd.corr() plus converting the R values into a probability with a beta distribution. I would suggest implementing this by looking at how SciPy did it and seeing if there are any improvements which are applicable to your case. The source code can be found here. This tells you exactly how they implemented the p-value. Specifically, they take a beta distribution with a = b = n / 2 - 1, running from -1 to 1, and find either the cumulative distribution function or the survival function of that distribution at the specified R value. So while pearsonr() does not support being vectorized across all pairs of columns, the underlying beta distribution does support this. Using this, you can turn the correlation that pd.corr() gives you into a correlation plus a p-value. I've checked this against your existing algorithm, and it agrees with it to within machine epsilon. I also tested it with NA values. 
In terms of speed, it is roughly ~200x faster than your original solution, making faster than a multicore solution while only using one core. Here is the code. Note that only calculate_corr_fast and get_pvalue_vectorized are important to the solution. The rest is just to set up test data or for comparison. import pandas as pd import numpy as np from scipy.stats import pearsonr import scipy M = 1000 N = 200 P = 0.1 A = np.random.rand(M, N) A[np.random.rand(M, N) < P] = np.nan df = pd.DataFrame(A, columns=[f"a{i}" for i in range(1, N + 1)]) # setting up output dataframes def calculate_corr(data): dfcols = pd.DataFrame(columns=data.columns) correlation = dfcols.T.join(dfcols, how='outer') pvalues = correlation.copy() # pairwise calculation for r in range(len(data.columns)): for c in range(r, len(data.columns)): # iterate over all combinations of columns to calculate correlation tmp = data.iloc[:, [r,c]].dropna() if len(tmp) < 2: # too few data points to calculate correlation coefficient result = (0, 1) else: result = pearsonr(tmp.iloc[:, 0], tmp.iloc[:, 1]) correlation.iloc[r, c] = result[0] correlation.iloc[c, r] = result[0] pvalues.iloc[r, c] = result[1] pvalues.iloc[c, r] = result[1] return correlation, pvalues def get_pvalue_vectorized(r, ab, alternative='two-sided'): """Get p-value from beta dist given the statistic, and alternative.""" assert len(r.shape) == 2 assert len(ab.shape) == 2 diag = np.arange(r.shape[0]) # This is just to keep squareform happy. These don't actually # get sent to the beta distribution function. r[diag, diag] = 0 ab[diag, diag] = 0 # Avoid doing repeated computations of r,c and c,r rsq = scipy.spatial.distance.squareform(r) r[diag, diag] = 1 absq = scipy.spatial.distance.squareform(ab) kwargs = dict(a=absq, b=absq, loc=-1, scale=2) if alternative == 'less': pvalue = scipy.stats.beta.cdf(rsq, **kwargs) elif alternative == 'greater': pvalue = scipy.stats.beta.sf(rsq, **kwargs) elif alternative == 'two-sided': pvalue = 2 * (scipy.stats.beta.sf(np.abs(rsq), **kwargs)) else: message = "`alternative` must be 'less', 'greater', or 'two-sided'." raise ValueError(message) # Put back into 2d matrix pvalue = scipy.spatial.distance.squareform(pvalue) return pvalue def calculate_corr_fast(data): correlation = data.corr() # For each pair of data values, count how many cases where both data values are # defined at the same position, using matrix multiply as a pairwise boolean and. data_notna = data.notna().values.astype('float32') value_count = data_notna.T @ data_notna # This is the a and b parameter for the beta distribution. It is different # for every p-value, because each one can potentially have a different number # of missing values ab = value_count / 2 - 1 pvalues = get_pvalue_vectorized(correlation.values, ab) invalid = value_count < 2 pvalues[invalid] = np.nan # Convert back to dataframe pvalues = pd.DataFrame(pvalues, columns=correlation.columns, index=correlation.index) return correlation, pvalues correlation, pvalues = calculate_corr(df) correlation_fast, pvalues_fast = calculate_corr_fast(df) assert np.allclose(pvalues_fast.values.astype('float64'), pvalues.values.astype('float64')) assert np.allclose(correlation_fast.values.astype('float64'), correlation.values.astype('float64')) Benchmark results for 1000x200 dataframe: Original code 40.5 s Β± 1.18 s per loop (mean Β± std. dev. of 7 runs, 1 loop each) @Andrej Kesely's answer ~20 seconds My answer 190 ms Β± 209 Β΅s per loop (mean Β± std. dev. 
of 7 runs, 10 loops each) Benchmark notes: I used a dataframe with 1000 rows and 200 columns. I tried to use about the same number of rows so as to preserve the split of work between finding correlations and calculating p-value. I reduced the number of columns so the benchmark would finish in my lifetime. :) My benchmarking system has 4 cores. Specifically, it is an Intel(R) Core(TM) i7-8850H CPU with fewer cores. | 2 | 5 |
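A single-pair sanity check of the beta-distribution identity the vectorised answer relies on (the sample size and seed are arbitrary):

```python
# For n paired samples, scipy's two-sided pearsonr p-value equals
# 2 * beta.sf(|r|, a=n/2-1, b=n/2-1, loc=-1, scale=2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)
r, p = stats.pearsonr(x, y)

n = len(x)
ab = n / 2 - 1
p_beta = 2 * stats.beta.sf(abs(r), a=ab, b=ab, loc=-1, scale=2)

assert np.isclose(p, p_beta)
print(r, p, p_beta)
```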
78,344,470 | 2024-4-18 | https://stackoverflow.com/questions/78344470/how-to-have-a-programmatical-conversation-with-an-agent-created-by-agent-builder | I created an agent with No Code tools offered by the Agent Builder GUI: https://vertexaiconversation.cloud.google.com/ I created a playbook and added a few Data store Tools for the agent to use for RAG. I'd like to call this agent programmatically to integrate it into mobile apps or web pages. There's a lot of code related to the classic Dialogflow agents, the Agent Builder is quite new and uses the Gemini 1.0 Pro under the hood. I've seen this code https://stackoverflow.com/a/78229704/292502 however the question was about DialogFlow ES while the Agent Builder agent is rather a DialogFLow CX agent under the hood (and is listed in the Dialogflow CX dashboard). The Python package is promising, but I haven't found how can I have a conversation with the agent Playbook after I get hold of one. Or maybe I'm just looking at the wrong place. I was also browsing https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/dialogflow-cx but webhooks, intents and fulfillments are for the "classic" agents. I tried to go over https://github.com/googleapis/google-cloud-python/blob/main/packages/google-cloud-dialogflow-cx/samples/generated_samples/ but haven't find the one which would help me yet. | Looks like the "traditional" Dialogflow CX API calls can invoke an Agent Builder agent: import uuid from google.cloud.dialogflowcx_v3beta1.services.agents import AgentsClient from google.cloud.dialogflowcx_v3beta1.services.sessions import SessionsClient from google.cloud.dialogflowcx_v3beta1.types import session PROJECT_ID = "your-project-name" LOCATION_ID = "project-region" # example: us-central1 AGENT_ID = "a uuid of the agent" AGENT = f"projects/{PROJECT_ID}/locations/{LOCATION_ID}/agents/{AGENT_ID}" LANGUAGE_CODE = "en-us" SESSION_ID = uuid.uuid4() session_path = f"{AGENT}/sessions/{SESSION_ID}" print(f"Session path: {session_path}\n") client_options = None agent_components = AgentsClient.parse_agent_path(AGENT) location_id = agent_components["location"] if location_id != "global": api_endpoint = f"{LOCATION_ID}-dialogflow.googleapis.com:443" print(f"API Endpoint: {api_endpoint}\n") client_options = {"api_endpoint": api_endpoint} session_client = SessionsClient(client_options=client_options) text = "Your test prompt" text_input = session.TextInput(text=text) query_input = session.QueryInput(text=text_input, language_code=LANGUAGE_CODE) request = session.DetectIntentRequest( session=session_path, query_input=query_input ) response = session_client.detect_intent(request=request) print("=" * 20) print(f"Query text: {response.query_result.text}") response_messages = [ " ".join(msg.text.text) for msg in response.query_result.response_messages ] print(f"Response text: {' '.join(response_messages)}\n") | 2 | 4 |
78,337,963 | 2024-4-17 | https://stackoverflow.com/questions/78337963/pytest-two-async-functions-both-with-endless-loops-and-await-commands | I am unit testing a python code. It has an asyncio loop. The main two functions in this loop have endless while loops. I have tested the non-async parts of the code. I am wondering what is the best strategy to unit test these two functions: import asyncio def main(): loop = asyncio.get_event_loop() loop.run_until_complete(run_async_tasks()) . . async def run_async_tasks(): # initialize loop and its variables loop = asyncio.get_event_loop() queue = asyncio.Queue(maxsize=100) # Creating Asyncio tasks task1 = loop.create_task(receive_data(queue), name="receive_data") task2 = loop.create_task(analyse_data(queue), name="analyse_data") await asyncio.gather(task1, task2) async def receive_data(queue): # Data packets are yielded from an non-ending stream by sync_receive_data_packets for data_packet in sync_receive_data_packets(): await queue.put(data_packet) async def analyse_data(queue): while not termination_signal_received(): data_packet = await queue.get() sync_data_analysis(data_packet) queue.task_done() My question is if/how it is possible to unit test receive_data and analyse_data. The main logic for reading the data (sync_receive_data_packets) and executing the analysis (sync_data_analysis) are in non-async functions for the sake of simplicity. They are successfully tested. I am not sure how to test the await queue.put and await queue.get() parts especially when they are in endless loops. In fact I am wondering if these functions can be unit tested as they don't return anything. One puts an item into a queue and the other reads it. | Both functions are not "endless" - the first will end when the generator returned by sync_receive_data_packets is exhausted, and the other whenever termination_signal_received() returns True. Just use mock.patch to point these two functions to callables under the control of your tests. Your tests should further verify if the queue is filled for the first function, and if it is consumed and sync_data_analisys is called with the proper values (it should be patched as well) Otherwise (if they were being repeated with a while True, with no stopping conditions), the way to unit test these would be to write a specialized Queue class that would work as a queue, but that could raise an exception under control of the test code when .get or .put are called. | 2 | 2 |
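A hedged sketch of what such tests could look like, assuming the coroutines live in a module named pipeline (the module name is illustrative) and that pytest-asyncio is installed:

```python
import asyncio
from unittest import mock

import pytest

import pipeline  # hypothetical module holding receive_data / analyse_data


@pytest.mark.asyncio
async def test_receive_data_puts_every_packet_on_the_queue():
    queue = asyncio.Queue()
    # Replace the endless stream with a finite iterator under test control.
    with mock.patch.object(pipeline, "sync_receive_data_packets",
                           return_value=iter([1, 2, 3])):
        await pipeline.receive_data(queue)   # returns once the iterator is exhausted
    assert [queue.get_nowait() for _ in range(3)] == [1, 2, 3]


@pytest.mark.asyncio
async def test_analyse_data_consumes_and_analyses():
    queue = asyncio.Queue()
    for packet in (1, 2):
        queue.put_nowait(packet)
    # Let the loop run twice, then signal termination on the third check.
    with mock.patch.object(pipeline, "termination_signal_received",
                           side_effect=[False, False, True]), \
         mock.patch.object(pipeline, "sync_data_analysis") as analysis:
        await pipeline.analyse_data(queue)
    assert analysis.call_args_list == [mock.call(1), mock.call(2)]
```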
78,354,608 | 2024-4-19 | https://stackoverflow.com/questions/78354608/instagram-api-on-python-missing-client-id-parameter | I am trying to get a long-lived access token from the Instagram graph API through a python script. No matter what I do, I keep receiving the following error: {"error":{"message":"Missing client_id parameter.","type":"OAuthException","code":101,"fbtrace_id":"AogeGpzZGW9AwqCYKHlmKM"}} My script is the following: import requests refresh_token_url = 'https://graph.facebook.com/v19.0/oauth/access_token' payload = { 'grant_type': 'fb_exchange_token', 'client_id': '1234567890123456', 'client_secret': '1234abcd1234abcd1234abcd', 'fb_exchange_token':'abcd1234' } r = requests.get(refresh_token_url, data=payload) I am for sure passing the client_id parameter. I also verified this by printing r.request.body. On the contrary, the request is successful if I send it with the Postman app. Why do I keep getting that error message? Thank you! | Postman sends these values as GET query parameters. In the modified version, the params argument is used instead of data; this appends the parameters in payload to the URL as a query string, which is what the API expects. So your code needs to change to: import requests refresh_token_url = 'https://graph.facebook.com/v19.0/oauth/access_token' payload = { 'grant_type': 'fb_exchange_token', 'client_id': '1234567890123456', 'client_secret': '1234abcd1234abcd1234abcd', 'fb_exchange_token': 'abcd1234' } r = requests.get(refresh_token_url, params=payload) | 2 | 1 |
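A minimal illustration of the params-versus-data difference using requests' own PreparedRequest, with placeholder credentials:

```python
import requests

payload = {'grant_type': 'fb_exchange_token', 'client_id': '123', 'client_secret': 'abc'}
prepared = requests.Request('GET',
                            'https://graph.facebook.com/v19.0/oauth/access_token',
                            params=payload).prepare()

print(prepared.url)
# https://graph.facebook.com/v19.0/oauth/access_token?grant_type=fb_exchange_token&client_id=123&client_secret=abc
print(prepared.body)   # None -- params go into the query string, not the request body
```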
78,350,869 | 2024-4-19 | https://stackoverflow.com/questions/78350869/is-there-a-pythons-f-string-equivalent-in-matlab | I am wondering if there is a way to type and format strings in Matlab like how you have f-strings in Python: Python input: username = lemslems text = f"Hello {username}!" Output: Hello lemslems! I am asking this because I hate writing in the % format and it is just super convenient to write it like how I always do in Python | Matlab doesn't have a direct equivalent to Python's f-strings, but you can achieve similar functionality using string formatting or concatenation. One way to do this is by using the sprintf function, which formats data into strings: name = 'John'; age = 30; formatted_string = sprintf('My name is %s and I am %d years old.', name, age); disp(formatted_string); Alternatively, you can use string concatenation: name = 'John'; age = 30; formatted_string = ['My name is ', name, ' and I am ', num2str(age), ' years old.']; disp(formatted_string); Both approaches will produce the following output: My name is John and I am 30 years old. These methods allow you to insert variables into strings similar to f-strings in Python. | 2 | 4 |
78,351,871 | 2024-4-19 | https://stackoverflow.com/questions/78351871/incompatible-dtype-in-dataframe | I am developing a python app and I'm working with pandas dataframe. Unfortunately, I have this warning: "Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas. Value '[149.7 149.7 149.7 149.7]' has dtype incompatible with float64, please explicitly cast to a compatible dtype first. df1.loc[tmp_idx_start:tmp_idx_start+tmp_idx-1, 'AuM'] = (feesPrefinancing.loc[:tmp_idx-1, 'Fees'] + feesMobiDeferred.loc[:tmp_idx-1, 'Fees'] + feesLTQDeferred.loc[:tmp_idx-1, 'Fees']).values " And here is my code : df1['Date'] = pd.to_datetime(pd.date_range(simulationDate, end=feesLTQDeferred['Valuation date'].iloc[-1], freq='MS', inclusive='both').date) df1['AuM'] = 0. df1.loc[4:7, 'AuM'] = (feesPrefinancing.loc[:3, 'Fees'] + feesMobiDeferred.loc[:3, 'Fees'] + feesLTQDeferred.loc[:3, 'Fees']).values df1.loc[8:, 'AuM'] = (annuities['Invested annuity'].values + feesPrefinancing.loc[4:, 'Fees'].values + feesMobiDeferred.loc[4:, 'Fees'].values + feesLTQDeferred.loc[4:, 'Fees'].values) As you can guess, I have a dataframe with the first columns being dates and then the second one should contain floats number. That is why I initialized this column with a "0." value (That is also what I read on other post to solve this error). Unfortunately, this warning won't go away even by casting the lines with some "astype" and other things like this. Here is the result obtain which is correct but I have the warning: Date AuM 0 2023-01-01 0.000000 1 2023-02-01 0.000000 2 2023-03-01 0.000000 3 2023-04-01 0.000000 4 2023-05-01 0.000000 5 2023-06-01 0.000000 6 2023-07-01 0.000000 7 2023-08-01 0.000000 8 2023-09-01 149.700000 9 2023-10-01 149.700000 10 2023-11-01 149.700000 11 2023-12-01 149.700000 12 2024-01-01 2475.800000 13 2024-02-01 2475.800000 14 2024-03-01 2475.800000 15 2024-04-01 2475.800000 16 2024-05-01 2475.800000 17 2024-06-01 2475.800000 18 2024-07-01 2475.800000 19 2024-08-01 2475.800000 Do you have an idea how to solve this ? | You assigning data types that are not strictly floats to AuM that is set as a float when initialized with 0. Assign the correct data type to AuM: import numpy as np import pandas as pd df1['Date'] = pd.to_datetime(pd.date_range(simulationDate, end=fees:TQDeferred['Valuation date'].iloc[-1], freq='MS', inclusive='both').date) df1['AuM'] = 0.0 df1.loc[4:7, 'AuM'] = (feesPrefinancing.loc[:3, 'Fees'] + feesMoboDeferred.loc[:3, 'Fees'] + feesLTQDeferred.loc[:3, 'Fees']).astype(np.float64).values df1.loc[8:, 'AuM'] = (annuities['Invested annuity'] + feesPrefinancing.loc[4:, 'Fees'] + feesMobiDeferred.loc[4:, 'Fees'] + feesLTQDeferred.loc[4:, 'Fees']).astype(np.float64).values Chech data types of columns involved in the calculus and make sure they're all numeric. Make sure they are all consistently float64 | 2 | 1 |
78,349,268 | 2024-4-18 | https://stackoverflow.com/questions/78349268/changing-command-order-in-pythons-typer | I want Typer to display my commands in the order I have initialized them and it displays those commands in alphabetic order. I have tried different approaches including this one: https://github.com/tiangolo/typer/issues/246 In this I get AssertionError. Others like subclassing some Typer and click classes does actually nothing. I want the commands to be in the same order as in this working piece of code: import typer import os app = typer.Typer() @app.command() def change_value(file_name, field): print("Here I will change the", file_name, field) @app.command() def close_field(file_name, field): print("I will close field") @app.command() def add_transaction(file_name): print("I will add the transaction") if __name__ == "__main__": app() Please help :) | All right I have found solution elsewhere, while trying to post the ticket to Typer (they have a really comprehensive troubleshooting!) https://github.com/tiangolo/typer/issues/428 The app that displays commands in the order I want them displayed looks like this: import typer from click import Context from typer.core import TyperGroup class OrderCommands(TyperGroup): def list_commands(self, ctx: Context): return list(self.commands) app = typer.Typer( cls=OrderCommands, no_args_is_help=True ) @app.command() def change_value(file_name, field): print("Here I will change the", file_name, field) @app.command() def close_field(file_name, field): print("I will close field") @app.command() def add_transaction(file_name): print("I will add the transaction") if __name__ == "__main__": app() | 2 | 3 |
78,349,687 | 2024-4-18 | https://stackoverflow.com/questions/78349687/how-do-you-turn-pairs-from-a-dataframe-column-into-two-new-columns | I've been trying to take a DataFrame like df below, and turning some of the columns (say B_m and B_n) into two columns (call them B_m1, B_m2, B_n1 and B_n2), for each pair of values in that column (akin to itertools.combinations(col, r=2)), where they share the same group number. So for B_m, the rows where Group is 0, we should get that B_m1 = [-1, -1, 0] and B_m2 = [0, 1, 1]. df = pd.DataFrame( data=[ [0, 4, 7, -1, 0.9], [0, 4, 7, 0, 0.3], [0, 4, 7, 1, 0.2], [1, 3, 3, 1, 0.5], [1, 3, 3, 0, 0.2], [2, 1, 8, 0, 0.6], ], columns=['Group', 'A_x', 'A_y', 'B_m', 'B_n'], ) print(df) In the case there is only 1 row with a given group, it should be removed. For 4 or more rows with the same group number, we should similarly find all repetitions without repeats. Shown below is what the expected result should look like. expected = pd.DataFrame( data=[ [0, 4, 7, -1, 0, 0.9, 0.3], [0, 4, 7, -1, 1, 0.9, 0.2], [0, 4, 7, 0, 1, 0.3, 0.2], [1, 3, 3, 1, 0, 0.5, 0.2], ], columns=['Group', 'A_x', 'A_y', 'B_m1', 'B_m2', 'B_n1', 'B_n2'], ) print(expected) My first attempt not only took ages to run, using some awkard looping and making a new DataFrame (not elegant and don't have it saved), but also didn't work. Since then, I've not been able to come up with an alternative solution. For the record, I did post a similar, but different question about 3 months ago, if that is of any inspiration. | Here is an approach with groupby.apply and concat: from itertools import combinations cols = ['B_m', 'B_n'] group = list(df.columns.difference(cols, sort=False)) g = df.groupby(group, sort=False) out = pd.concat((g[c].apply(lambda x: pd.DataFrame(list(combinations(x, r=2)))) .rename(columns=lambda x: f'{c}{x+1}') for c in cols), axis=1).reset_index(group) Output: Group A_x A_y B_m1 B_m2 B_n1 B_n2 0 1 4 7 -1.0 0.0 0.9 0.3 1 1 4 7 -1.0 1.0 0.9 0.2 2 1 4 7 0.0 1.0 0.3 0.2 0 2 3 3 1.0 0.0 0.5 0.2 | 2 | 1 |
78,350,242 | 2024-4-18 | https://stackoverflow.com/questions/78350242/uninstall-last-pip-installed-packages | I have just installed a package in my virtual environment which I shouldn't have installed. It also installed many dependency packages. Is there a way to roll back and uninstall the package and its dependencies just installed? Something like "uninstall packages installed in the last 1 hour" or a similar functionality. | The snippet below can help you identify the recently installed package. Once you have the list of packages, you can manually uninstall that package. import pkg_resources import os import time for package in pkg_resources.working_set: print("%s: %s" % (package, time.ctime(os.path.getctime(package.location)))) | 2 | 3 |
78,350,133 | 2024-4-18 | https://stackoverflow.com/questions/78350133/what-is-the-correct-type-annotation-for-bytes-or-bytearray | In Python 3.11 or newer, is there a more convenient type annotation to use than bytes | bytearray for a function argument that means "An ordered collection of bytes"? It seems wasteful to require constructing a bytes from a bytearray (or the other way around) just to satisfy the type-checker. Note that the function does not mutate the argument; it's simply convenient to pass bytes or bytearray instances from different call sites. e.g. def serialize_to_stream(stream: MyStream, data: bytes | bytearray) -> None: for byte in data: stream.accumulate(byte) (This example is contrived, of course, but the purpose is to show that data is only read, never mutated). | The typing module used to have a type to represent this: ByteString. However, it was deprecated in 3.9. From the same section: Prefer collections.abc.Buffer, or a union like bytes | bytearray | memoryview. | 2 | 4 |
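A brief sketch of the Buffer-based spelling on Python 3.12+, adapted from the question's function; on 3.11 the bytes | bytearray union remains the straightforward choice:

```python
from collections.abc import Buffer  # available from Python 3.12 (PEP 688)

def serialize_to_stream(stream, data: Buffer) -> None:
    # memoryview accepts any buffer object; for byte-oriented data it yields ints.
    for byte in memoryview(data):
        stream.accumulate(byte)
```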
78,343,764 | 2024-4-17 | https://stackoverflow.com/questions/78343764/json-text-as-command-line-argument-when-running-python-script | I have read similar questions relating to passing JSON text as a command line argument with python, but none of the solutions have worked with my case. I am automating a python script, and the automation runs a powershell script that takes a JSON object generated from a power automate flow. Everything works great until it comes to processing the JSON in my python script. My goal is to convert the JSON to a dictionary so that I can use the key value pairs in my code. My powershell script looks like this: Python script.py {"Items":[{"Name":"foo","File":"\\\\files\\foo\\foo.csv"},{"Name":"bar","File":"\\\\files\\bar\\bar.csv"},{"Name":"baz","File":"\\\\files\\baz\\baz.csv"}]} My JSON looks like this: { "Items": [ { "Name": "foo", "File": "\\\\files\\foo\\foo.csv" }, { "Name": "bar", "File": "\\\\files\\bar\\bar.csv" }, { "Name": "baz", "File": "\\\\files\\baz\\baz.csv" } ] } I tried this solution from SO: if len(sys.argv) > 1: d = json.loads(sys.argv[1]) print(d) but it returns this error: Unexpected token ':' in expression or statement. + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : UnexpectedToken I am unsure how to solve this problem, any suggestions would help! | Depending on the PowerShell version you're using, you will need to handle it differently, if using pwsh 7.3+ the solution is as simple as wrapping your Json string with single-quotes. Otherwise, if using below that, you would need to escape the double-quotes with \, otherwise those get eaten when passed thru your Python script. Using this can handle it dynamically: $json = @' { "Items": [ { "Name": "foo", "File": "\\\\files\\foo\\foo.csv" }, { "Name": "bar", "File": "\\\\files\\bar\\bar.csv" }, { "Name": "baz", "File": "\\\\files\\baz\\baz.csv" } ] } '@ if ($PSVersionTable.PSVersion -ge '7.3') { Python script.py $json } else { Python script.py $json.Replace('"', '\"') } Then assuming your Python code would be: import json import sys if len(sys.argv) > 1: d = json.loads(sys.argv[1]) for i in d['Items']: print(i) Then the output in your PowerShell console would be: {'Name': 'foo', 'File': '\\\\files\\foo\\foo.csv'} {'Name': 'bar', 'File': '\\\\files\\bar\\bar.csv'} {'Name': 'baz', 'File': '\\\\files\\baz\\baz.csv'} An easier and probably more robust alternative that doesn't require escaping of quotes in any version of PowerShell can be to convert the Json string to Base64 in PowerShell and pass it as argument to your Python script: # Json defined here $json = @' ... ... '@ $b64 = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json)) Python script.py $b64 Then in Python the code works with a minimal change: from base64 import b64decode import json import sys if len(sys.argv) > 1: d = json.loads(b64decode(sys.argv[1])) for i in d['Items']: print(i) | 2 | 2 |
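The Python half of the Base64 round trip shown in isolation, to make clear why no quote escaping is needed (payload trimmed to one item):

```python
import base64
import json

payload = {"Items": [{"Name": "foo", "File": "\\\\files\\foo\\foo.csv"}]}

# What PowerShell's [System.Convert]::ToBase64String over UTF-8 bytes produces
# can be restored losslessly on the Python side.
b64 = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")
restored = json.loads(base64.b64decode(b64))

assert restored == payload
print(restored["Items"][0]["File"])   # \\files\foo\foo.csv
```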
78,344,344 | 2024-4-18 | https://stackoverflow.com/questions/78344344/pywinauto-throwing-an-memoryerror-on-windows-11-but-not-on-windows-10 | I have this code that works properly on windows 10 that I use to send text and enter keys to a specific window using pywinauto, but once i tried to use it in another system using win 11 it does not work. Here is the code: import pywinauto import time boolBisCheckbox = False def parar(eachFfox): global boolBisCheckbox if boolBisCheckbox == False: appname = r"\[FIREFOX " + eachFfox + r"\]" else: appname = r"\[FIREFOX " + eachFfox + r"2\]" handle = pywinauto.findwindows.find_window(title_re=appname) app = pywinauto.application.Application(backend="win32").connect(handle=handle) Wizard = app[appname] Wizard.send_keystrokes("parar") time.sleep(0.5) Wizard.send_keystrokes("{ENTER}") time.sleep(6) Wizard.send_keystrokes("{ENTER}") time.sleep(10) Wizard.send_keystrokes("{ENTER}") input() parar("A") And here is the error: Traceback (most recent call last): File "C:\pypy\testParar.py", line 23, in <module> parar("A") File "C:\pypy\testParar.py", line 12, in parar handle = pywinauto.findwindows.find_window(title_re=appname) File "C:\Python\lib\site-packages\pywinauto\findwindows.py", line 113, in find_window element = find_element(**kwargs) File "C:\Python\lib\site-packages\pywinauto\findwindows.py", line 84, in find_element elements = find_elements(**kwargs) File "C:\Python\lib\site-packages\pywinauto\findwindows.py", line 283, in find_elements elements = [elem for elem in elements if _title_match(elem)] File "C:\Python\lib\site-packages\pywinauto\findwindows.py", line 283, in <listcomp> elements = [elem for elem in elements if _title_match(elem)] File "C:\Python\lib\site-packages\pywinauto\findwindows.py", line 279, in _title_match t = w.rich_text File "C:\Python\lib\site-packages\pywinauto\win32_element_info.py", line 83, in rich_text return handleprops.text(self.handle) File "C:\Python\lib\site-packages\pywinauto\handleprops.py", line 92, in text buffer_ = create_unicode_buffer(length) File "C:\Python\lib\ctypes\_init_.py", line 297, in create_unicode_buffer buf = buftype() MemoryError I took the same code used in another PC running win 10 where it was working properly, and used it in win 11. The systems both have 32 Gb of Ram. I google to try to find similar questions but only found one and there was no answer to the problem. | I just identified the culprit, thank you so much for your insights Vasily Ryabov ! It was SteelSeriesGG.exe app for the keyboard (bloatware **** ...), even in the tray it was somehow messing with it. | 2 | 1 |
78,341,348 | 2024-4-17 | https://stackoverflow.com/questions/78341348/find-the-minimum-and-maximum-possible-sum-for-a-given-number-from-an-integer-ar | There is an ordered array of integer elements and a limit. It is necessary to find the minimum possible and maximum possible sum that can greater or equal the limit, for the given elements. The minimum sum is the minimum sum of the sequence greater or equal than limit The maximum sum, is the sum of the sequence that satisfies the conditions, and gives the largest result Conditions: The elements of the sequence can be repeated: for example you can take 101 + 101 + 101 + 101 + 101 + 201 + 201 When summing, you can not skip elements of the array, for example, you can not add 101 + 301 at once, must be 101 + 201 + 301 Stop summing as soon as you reach the limit First Example: arr = [100, 200, 300, 1000] limit = 1000 The minimum here is 1000, because we can take 10 times 100 for example. The maximum here is 1900, since we can take 100 + 200 + 300 + 300 + 1000. Second Example: arr = [3, 10, 15] limit = 1000 The minimum here is 1000, because we can take 300 times 3 and 10 times 10. The maximum here is 1014, since we can take 318 times 3 and 3 time 10 and 2 times 15 I wrote 1 algorithm based on recursion and one based on dynamic programming. Both the first and the second one produce incorrect results in certain situations Example for Python: from pprint import pprint from typing import Tuple, List import resource import sys resource.setrlimit(resource.RLIMIT_STACK, (0x10000000, resource.RLIM_INFINITY)) sys.setrecursionlimit(0x100000) class CollectingRangeAnalyzer: def __init__(self): self.memo = {} def recursion_method(self, pools: List[int], target_cap: int) -> Tuple[float, float]: self._collect_helper(pools, target_cap, [], 0) if not self.memo: raise ValueError("No valid range found") max_cap = max(self.memo) min_cap = min(self.memo, key=lambda x: x if x >= target_cap else float("inf")) return max_cap, min_cap def _collect_helper(self, pools_, target_sum_, path, cur_sum): if cur_sum >= target_sum_: return if self.memo.get(cur_sum): return for i, v in enumerate(pools_): cur_sum += v path.append(v) self._collect_helper(pools_, target_sum_, path, cur_sum) self.memo[cur_sum] = True cur_sum -= v path.pop() @staticmethod def dynamic_method(arr, limit): table = [] arr_size = len(arr) n_cols = limit // arr[0] + 1 max_cap = float("-inf") min_cap = float("inf") for i in range(arr_size): table.append([]) for j in range(n_cols): table[i].append(0) for i in range(arr_size): for j in range(n_cols): if i == 0 and arr[0] * (j + 1) <= limit + arr[i]: table[i][j] = arr[i] * (j + 1) elif i > j or j < 0: table[i][j] = table[i - 1][j] else: diagonal_prev = table[i - 1][j - 1] j_prev = table[i][j-1] if diagonal_prev < limit: table[i][j] = diagonal_prev + arr[i] else: table[i][j] = max(diagonal_prev, j_prev) max_cap = max(table[i][j], max_cap) min_cap = min(table[i][j], min_cap, key=lambda x: x if x >= limit else float("inf")) return max_cap, min_cap # First Example first_analysis_class = CollectingRangeAnalyzer() first_array = [100, 200, 300, 1000] first_limit = 1000 rec_first = first_analysis_class.recursion_method(first_array, first_limit) # output: (1900, 1000) SUCCESS dynamic_first = first_analysis_class.dynamic_method(first_array, first_limit) # output: (1900, 1000) SUCCESS # But if added the 10000 in first_array and run again I'll get the wrong result in the recursion function. 
# # first_array = [100, 200, 300, 1000, 10000] # # # rec_first = first_analysis_class.recursion_method(first_array, first_limit) # output: (10900, 1000) WRONG # dynamic_first = first_analysis_class.dynamic_method(first_array, first_limit) # output: (1900, 1000) SUCCESS # Second Example second_analysis_class = CollectingRangeAnalyzer() second_array = [3, 10, 15] second_limit = 1000 rec_second = second_analysis_class.recursion_method(second_array, second_limit) # output: (1014, 1000) SUCCESS dynamic_second = second_analysis_class.dynamic_method(second_array, second_limit) # output: (1012, 1000) WRONG | I assume that you need to always start at the beginning of your sequence. Your examples were still not clear on that. If that is true then this should work: import heapq def min_max_limit_sum (arr, limit): # A path will be (idx_k, ...(idx_1, (idx_0, None))...) # By (value, index): path known = {} # heap of (value, counter, (next_index, prev_path)) upcoming = [(0, 0, (0, None))] counter = 1 best_min = limit + sum(arr) min_path = None best_max = limit - 1 max_path = None # While there is stuff in the heap. while len(upcoming): value, _, next_path = heapq.heappop(upcoming) i = next_path[0] if (value, i) in known or len(arr) <= i: # Filter out alternate routes, and running off the end of the sequence. continue else: path = next_path[1] known[(value, i)] = path if value < limit: value += arr[i] heapq.heappush(upcoming, (value, counter, (i, next_path))) heapq.heappush(upcoming, (value, counter, (i+1, next_path))) counter += 1 else: if value < best_min: best_min = value min_path = path if best_max < value: best_max = value max_path = path return (best_min, min_path, best_max, max_path) print(min_max_limit_sum([100, 200, 300, 1000], 1000)) If the assumption is not true, just set upcoming to have an entry for every single index. | 2 | 2 |
78,340,305 | 2024-4-17 | https://stackoverflow.com/questions/78340305/gekko-parameter-identification-on-a-spring-mass-system | I want to do parameter estimation on a Spring-Mass system with direct collocation method. The parameter k should be determined from response. I simulated this system by from scipy.integrate import odeint import numpy as np def dy(y, t): x, xdot = y return [xdot, -50*x] t = np.linspace(0, 1, 40) sol = odeint(dy, [2.0, 1.0], t) sol_x = sol[:, 0] sol_xdot = sol[:, 1] Then I have these code to identify parameter k: from gekko import GEKKO m = GEKKO(remote=False) m.time = t x = m.CV(value=sol_x); x.FSTATUS = 1 # fit to measurement xdot = m.CV(value=sol_xdot); xdot.FSTATUS = 1 k = m.FV(value = 40.0); k.STATUS = 1 # change initial value of k here m.Equation(x.dt() == xdot) # differential equation m.Equation(xdot.dt() == -k*x) m.options.IMODE = 5 # dynamic estimation m.options.NODES = 40 # collocation nodes m.options.EV_TYPE = 2 m.solve(disp=False) # display solver output By playing around initial value of k, I found k will converge to real value 50 if its initial value is 25 to 65. Otherwise the result will be -0.39 which is not good. I'm quite confusing because this system is linear and should be easy to be solved. So my quietion: how to fine tune the above code so that k converge to 50 with arbitry initial value? | The -0.39 is a local minimum to the optimization problem. As the initial guess is further from the solution, it finds a different local solution. To prevent non-physical solutions, add a constraint for the solver to search only within bounds. This can be done during initialization with: k = m.FV(value=ki,lb=10,ub=100) or after initialization with: k.LOWER = 10 k.UPPER = 100 Here is a complete script that returns the correct solution regardless of the initial guess. from gekko import GEKKO from scipy.integrate import odeint import numpy as np # generate data def dy(y, t): x, xdot = y return [xdot, -50*x] t = np.linspace(0, 1, 40) sol = odeint(dy, [2.0, 1.0], t) sol_x = sol[:, 0] sol_xdot = sol[:, 1] # regression with range of initial guess values kx = np.linspace(0,100,11) for i,ki in enumerate(kx): m = GEKKO(remote=False) m.time = t x = m.CV(value=sol_x); x.FSTATUS = 1 xdot = m.CV(value=sol_xdot); xdot.FSTATUS = 1 k = m.FV(value=ki,lb=10,ub=100); k.STATUS = 1 m.Equation(x.dt() == xdot) m.Equation(xdot.dt() == -k*x) m.options.IMODE = 5 # dynamic estimation m.options.NODES = 6 # collocation nodes (2-6) m.options.EV_TYPE = 2 m.solve(disp=False) print(f'Initial Guess: {ki} ' + f'Solution: {k.value[0]} ' + f'ObjFcn: {m.options.OBJFCNVAL}') The output shows that the correct solution is found, regardless of the initial guess. 
Initial Guess: 0.0 Solution: 50.000000284 ObjFcn: 1.8231507179e-10 Initial Guess: 10.0 Solution: 50.000000284 ObjFcn: 1.8229910145e-10 Initial Guess: 20.0 Solution: 50.000000284 ObjFcn: 1.8231237768e-10 Initial Guess: 30.0 Solution: 50.000000284 ObjFcn: 1.8229796958e-10 Initial Guess: 40.0 Solution: 50.000000284 ObjFcn: 1.8229906845e-10 Initial Guess: 50.0 Solution: 50.000000284 ObjFcn: 1.8229906842e-10 Initial Guess: 60.0 Solution: 50.000000284 ObjFcn: 1.8229906933e-10 Initial Guess: 70.0 Solution: 50.000000284 ObjFcn: 1.8229906849e-10 Initial Guess: 80.0 Solution: 50.000000284 ObjFcn: 1.8230214413e-10 Initial Guess: 90.0 Solution: 50.000000284 ObjFcn: 1.8229908716e-10 Initial Guess: 100.0 Solution: 50.000000284 ObjFcn: 1.8230004454e-10 Without the constraints, a local solution is found but the objective function is higher that indicates it is not a good fit. Initial Guess: 0.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 10.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 20.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 30.0 Solution: 50.000000284 ObjFcn: 1.8229906872e-10 Initial Guess: 40.0 Solution: 50.000000284 ObjFcn: 1.8229906866e-10 Initial Guess: 50.0 Solution: 50.000000284 ObjFcn: 1.8229906856e-10 Initial Guess: 60.0 Solution: 50.000000284 ObjFcn: 1.8229906861e-10 Initial Guess: 70.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 80.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 90.0 Solution: -0.39654383084 ObjFcn: 88263.987254 Initial Guess: 100.0 Solution: -0.39654383085 ObjFcn: 88263.987254 If constraints are unknown or there are multiple local solutions then a multi-start method can be used to search for a global optimum as shown in the Design Optimization course on Global Optimization. Below is a Global search over an initial guess space. The hyperopt package uses Bayesian optimization to find the global solution. from gekko import GEKKO from scipy.integrate import odeint import numpy as np from hyperopt import fmin, tpe, hp from hyperopt import STATUS_OK, STATUS_FAIL # generate data def dy(y, t): x, xdot = y return [xdot, -50*x] t = np.linspace(0, 1, 40) sol = odeint(dy, [2.0, 1.0], t) sol_x = sol[:, 0] sol_xdot = sol[:, 1] def objective(params): ki = params['kx'] m = GEKKO(remote=False) m.time = t x = m.CV(value=sol_x); x.FSTATUS = 1 xdot = m.CV(value=sol_xdot); xdot.FSTATUS = 1 k = m.FV(value=ki); k.STATUS = 1 m.Equation(x.dt() == xdot) m.Equation(xdot.dt() == -k*x) m.options.IMODE = 5 # dynamic estimation m.options.NODES = 6 # collocation nodes (2-6) m.options.EV_TYPE = 2 m.solve(disp=False) obj = m.options.OBJFCNVAL if m.options.APPSTATUS==1: s=STATUS_OK else: s=STATUS_FAIL m.cleanup() return {'loss':obj, 'status': s, 'k':k.value[0]} # Define the search space for the hyperparameters space = {'kx': hp.quniform('kx', 0, 100, 10),} best = fmin(objective, space, algo=tpe.suggest, max_evals=10) sol = objective(best) print(f"Solution Status: {sol['status']}") print(f"Objective: {sol['loss']:.2f}") print(f"Initial Guess: {best['kx']}") print(f"Solution: {sol['k']}") Here is the solution: Solution Status: ok Objective: 0.00 Initial Guess: 30.0 Solution: 50.000000284 While not needed for this problem, there is logic to detect when the poor initial guess produces a failed solution and eliminates that initial guess. | 3 | 1 |
78,342,121 | 2024-4-17 | https://stackoverflow.com/questions/78342121/how-can-i-make-playwright-waits-until-a-specific-cookie-appears-and-then-return | I'm making a python script that requests the user for their cookie through playwright webdriver This is my dummy code import asyncio from playwright.async_api import async_playwright async def main(): async with async_playwright() as playwright: browser = await playwright.chromium.launch(headless=False) context = await browser.new_context() page = await context.new_page() await page.goto('https://example.com') print('Please log-in') while True: for i in (await context.cookies()): if i['name'] == 'account_cookie': return i['value'] asyncio.run(main()) | You could use wait_for_function: await page.wait_for_function("document.cookie.includes('account_cookie')") | 2 | 2 |
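A hedged sketch (not part of the accepted answer) of how that one-liner could slot into the question's script: the cookie name account_cookie is taken from the question, and note that document.cookie only exposes cookies that are not HttpOnly — if the account cookie is HttpOnly, the original context.cookies() polling loop is still the way to detect it.

```python
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as playwright:
        browser = await playwright.chromium.launch(headless=False)
        context = await browser.new_context()
        page = await context.new_page()
        await page.goto('https://example.com')
        print('Please log-in')
        # Block until the cookie shows up in document.cookie; timeout=0 disables
        # Playwright's default 30 s limit so the user can take their time logging in.
        await page.wait_for_function(
            "document.cookie.includes('account_cookie')", timeout=0
        )
        # Read the value back through the context, which also sees HttpOnly cookies.
        for cookie in await context.cookies():
            if cookie['name'] == 'account_cookie':
                return cookie['value']

print(asyncio.run(main()))
```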
78,341,579 | 2024-4-17 | https://stackoverflow.com/questions/78341579/using-list-of-lists-of-indices-to-slice-columns-and-obtain-the-row-wise-vector-l | I have an NxM array, as well as an arbitrary list of sets of column indices I'd like to use to slice the array. For example, the 3x3 array my_arr = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) and index sets my_idxs = [[0, 1], [2]] I would like to use the pairs of indices to select the corresponding columns from the array and obtain the length of the (row-wise) vectors using np.linalg.norm(). I would like to do this for all index pairs. Given the aforementioned array and list of index sets, this should give: [[2.23606797749979, 3], [2.23606797749979, 3], [2.23606797749979, 3]] When all sets have the same number of indices (for example, using my_idxs = [[0, 1], [1, 2]] I can simply use np.linalg.norm(my_arr[:, my_idxs], axis=1): [[2.23606797749979, 3.605551275463989], [2.23606797749979, 3.605551275463989], [2.23606797749979, 3.605551275463989]] However, when they are not (as is the case with my_idxs = [[0, 1], [2]], the varying index list lengths yield an error when slicing as the array of index sets would be irregular in shape. Is there any way to implement the single-line option, without resorting to looping over the list of index sets and handling each of them separately? | You can try: my_arr = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) my_idxs = [[0, 1], [2]] out = np.c_[*[np.linalg.norm(my_arr[:, i], axis=1) for i in my_idxs]] print(out) Prints: [[2.23606798 3. ] [2.23606798 3. ] [2.23606798 3. ]] | 2 | 3 |
78,340,572 | 2024-4-17 | https://stackoverflow.com/questions/78340572/why-does-groupby-with-dropna-false-prevent-a-subsequent-multiindex-dropna-to-w | My understanding is MultiIndex.dropna() removes index entries for which at least one level is NaN, there are no conditions. However it seems if a previous groupby was used with dropna=False, it's no longer possible to use MultiIndex.dropna(). What are the reasons for the different behavior? How to drop the NaN entries after using groupby? (I'm aware the NaN groups would be dropped by groupby without the dropna parameter, but I'm looking for a solution working in the case the parameter has been used at some point earlier). import pandas as pd import numpy as np d = {(8.0, 8.0): {'A': -1.10, 'B': -1.0}, (7.0, 8.0): {'A': -0.10, 'B': 0.1}, (5.0, 8.0): {'A': 1.15, 'B': -1.2}, (7.0, 7.0): {'A': 1.10, 'B': 1.6}, (7.0, np.NaN): {'A': 0.70, 'B': -0.7}, (8.0, np.NaN): {'A': -1.00, 'B': 0.9}, (np.NaN, 5.0): {'A': -2.20, 'B': 1.1}} # This works as expected index = pd.MultiIndex.from_tuples(d.keys(), names=['L1', 'L2']) df = pd.DataFrame(d.values(), index=index) print(df.index.dropna()) # This doesn't work as expected df = df.groupby(['L1', 'L2'], dropna=False).mean() print(df.index.dropna()) MultiIndex([(8.0, 8.0), (7.0, 8.0), (5.0, 8.0), (7.0, 7.0)], names=['L1', 'L2']) MultiIndex([(5.0, 8.0), (7.0, 7.0), (7.0, 8.0), (7.0, nan), (8.0, 8.0), (8.0, nan), (nan, 5.0)], names=['L1', 'L2']) | Looking at the sources of pd.MultiIndex.dropna() reveals that there are codes for each value of index. Pandas expects code value -1 for NaN, which apparently is not the case when doing .groupby() (a bug?). You can avoid this issue by reconstructing the index and then drop NaN values, e.g.: df = df.groupby(["L1", "L2"], dropna=False).mean() # reconstruct the index (this will assign code -1 to NaN): df.index = pd.MultiIndex.from_tuples(df.index.to_list(), names=df.index.names) print(df.index.dropna()) Prints: MultiIndex([(5.0, 8.0), (7.0, 7.0), (7.0, 8.0), (8.0, 8.0)], names=['L1', 'L2']) | 4 | 4 |
78,339,449 | 2024-4-17 | https://stackoverflow.com/questions/78339449/what-is-the-best-approach-to-compute-bruns-constant-with-python-witch-include | Brun's constant : https://en.wikipedia.org/wiki/Brun%27s_theorem http://numbers.computation.free.fr/Constants/Primes/twin.html How to compute Brun's constant up to 10^20 with python knowing that primality check has a heavy cost and summing result up to 10^20 is a long task here is my 2 cents attempt : IsPrime : fastest way to check if the number is prime I know digit_root : compute the digital root of the number If someone know what could be improve bellow to reach computation to 10^20, you're welcome import numpy as np import math import time #Brun's constant #p B_2(p) #10^2 1.330990365719... #10^4 1.616893557432... #10^6 1.710776930804... #10^8 1.758815621067... #10^10 1.787478502719... #10^12 1.806592419175... #10^14 1.820244968130... #10^15 1.825706013240... #10^16 1.830484424658... #B_2 should reach 1.9 at p ~ 10^530 which is far beyond any computational project #B_2*(p)=B_2(p)+ 4C_2/log(p) #p B2*(p) #10^2 1.904399633290... #10^4 1.903598191217... #10^6 1.901913353327... #10^8 1.902167937960... #10^10 1.902160356233... #10^12 1.902160630437... #10^14 1.902160577783... #10^15 1.902160582249... #10^16 1.902160583104... def digit_root(number): return (number - 1) % 9 + 1 first25Primes=[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97] def IsPrime(n) : # Corner cases if (n <= 1) : return False if (n <= 3) : return True # This is checked so that we can skip # middle five numbers in below loop if (n % 2 == 0 or n % 3 == 0) : return False #exclude digital root 3 or 6 or 9 if digit_root(n) in (3,6,9): return False if (n != 2 and n != 7 and n != 5 and str(n)[len(str(n))-1] not in ("1","3","7","9")): #si le nombre ne se termine pas par 1 3 7 9 return False for i in first25Primes: if n%i == 0 and i < n: return False if (n>2): if (not(((n-1) / 4) % 1 == 0 or ((n+1) / 4) % 1 == 0)): return False if (n>3): if (not(((n-1) / 6) % 1 == 0 or ((n+1) / 6) % 1 == 0)): return False i = 5 while(i * i <= n) : if (n % i == 0 or n % (i + 2) == 0) : return False i = i + 6 return True def ComputeB_2Aster(B_2val,p): return B_2 + (C_2mult4/np.log(p)) start = time.time() #approx De Brun's B_2 = np.float64(0) B_2Aster = np.float64(0) one = np.float64(1) #Twin prime constant C_2 = np.float64(0.6601618158468695739278121100145557784326233602847334133194484233354056423) C_2mult4 = C_2 * np.float64(4) lastPrime = 2 lastNonPrime = 1 for p in range(3, 100000000000000000000,2): if IsPrime(p): lastNonPrime = p-1 if lastPrime == p-2 and lastNonPrime == p-1: B_2 = B_2 + (one/np.float64(p)) + (one/np.float64(lastPrime)) lastPrime = p else: lastNonPrime = p if p<10000000000: if p%1000001==0: print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n',end="") else: print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n',end="") print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n',end="") | Now, if you remember your classes in number theory there is something called sieve of Eratosthenes which is an algorithm to find all prime numbers up to any given limit: algorithm Sieve of Eratosthenes is input: an integer n > 1. output: all prime numbers from 2 through n. let A be an array of Boolean values, indexed by integers 2 to n, initially all set to true. 
for i = 2, 3, 4, ..., not exceeding βn do if A[i] is true for j = i2, i2+i, i2+2i, i2+3i, ..., not exceeding n do set A[j] := false return all i such that A[i] is true. Now, this algorithm, translated into python is simply def sieve_of_eratosthenes(max_num): is_prime = np.full(max_num + 1, True, dtype=bool) is_prime[0:2] = False p = 2 while (p * p <= max_num): if (is_prime[p] == True): for i in range(p * p, max_num + 1, p): is_prime[i] = False p += 1 return np.nonzero(is_prime)[0] Now, to compute Brun's constant, you will need that list of primes and the sum. So, def brun_constant(primes): sum_primes = mpfr('0') last_p = mpz('2') count = 0 for p in primes: p_gmp = mpz(p) if count > 0: if is_prime(p_gmp + 2): sum_primes += (1/mpfr(p_gmp) + 1/mpfr(p_gmp + 2)) last_p = p_gmp count += 1 return float(sum_primes) So, your full code will be import numpy as np import gmpy2 from gmpy2 import mpz, mpfr, is_prime import time def sieve_of_eratosthenes(max_num): is_prime = np.full(max_num + 1, True, dtype=bool) is_prime[0:2] = False p = 2 while (p * p <= max_num): if (is_prime[p] == True): for i in range(p * p, max_num + 1, p): is_prime[i] = False p += 1 return np.nonzero(is_prime)[0] def brun_constant(primes): sum_primes = mpfr('0') last_p = mpz('2') count = 0 for p in primes: p_gmp = mpz(p) if count > 0: if is_prime(p_gmp + 2): sum_primes += (1/mpfr(p_gmp) + 1/mpfr(p_gmp + 2)) last_p = p_gmp count += 1 return float(sum_primes) start_time = time.time() limit = 10**6 primes = sieve_of_eratosthenes(limit) print(primes) brun_const = brun_constant(primes) end_time = time.time() execution_time = end_time - start_time print(f"Brun's constant up to {limit}: {brun_const}") print(execution_time) Note here I chose 10^6 as an example: [ 2 3 5 ... 999961 999979 999983] Brun's constant up to 1000000: 1.710776930804221 0.4096040725708008 So, it took 0.4 seconds. For 10^8 I got [ 2 3 5 ... 99999959 99999971 99999989] Brun's constant up to 100000000: 1.758815621067975 47.44146728515625 seconds I did not try higher because I was working in google.colab and I exceeded the allowed limit. Update As nicely observed by nocomment below, the process can be made even faster by eliminating the inner loop in the function sieve_of_erastothenes so it becomes: def sieve_of_eratosthenes(max_num): is_prime = np.full(max_num + 1, True, dtype=bool) is_prime[0:2] = False p = 2 while (p * p <= max_num): if (is_prime[p] == True): is_prime[p*p :: p] = False p += 1 return np.nonzero(is_prime)[0] With this change: [ 2 3 5 ... 99999959 99999971 99999989] Brun's constant up to 100000000: 1.7588156210679355 12.536579847335815 seconds for 10^8 Update 2: Execution time estimation As I mentioned above, I am not working on a machine allowing me to test for 10^20. So, I can give you a method to estimate the time, using a regression method. First start of by computing the execution time for a 6,7,8,9,10. 
Then fit the curve with a polynomial: import numpy as np from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures import matplotlib.pyplot as plt import time import numpy as np limits = [10**exp for exp in range(2, 10)] times = [] for limit in limits: start_time = time.time() primes = sieve_of_eratosthenes(limit) brun_const = brun_constant(primes) end_time = time.time() execution_time = end_time - start_time times.append(execution_time) print(f"Limit: {limit}, Time: {execution_time}") X = np.log(limits).reshape(-1, 1) y = np.log(times).reshape(-1, 1) poly = PolynomialFeatures(degree=2) X_poly = poly.fit_transform(X) model = LinearRegression() model.fit(X_poly, y) X_fit = np.linspace(min(X), max(X), 400).reshape(-1, 1) X_fit_poly = poly.transform(X_fit) y_fit = model.predict(X_fit_poly) X_fit = np.exp(X_fit) y_fit = np.exp(y_fit) plt.figure(figsize=(10, 6)) plt.scatter(limits, times, color='blue', label='Actual Times') plt.plot(X_fit, y_fit, color='red', label='Polynomial Model Fit') plt.xscale('log') plt.yscale('log') plt.xlabel('Limits') plt.ylabel('Execution Time (seconds)') plt.title('Polynomial Regression Fit') plt.legend() plt.show() log_limit_20 = np.log(np.array([10**20], dtype=np.float64)).reshape(-1, 1) log_limit_20_poly = poly.transform(log_limit_20) estimated_log_time_20 = model.predict(log_limit_20_poly) estimated_time_20 = np.exp(estimated_log_time_20) print(f"Estimated execution time for limit 10^20: {estimated_time_20[0][0]} seconds") Which gives Estimated execution time for limit 10^20: 147470457222.1028 seconds or 40963960.339500725 hours, or 1706493 days So, I would suggest using a machine with GPU. | 2 | 3 |
78,339,087 | 2024-4-17 | https://stackoverflow.com/questions/78339087/is-it-possible-to-have-a-default-value-that-depends-on-a-previous-parameter | Suppose I want to write a recursive binary search function in Python. The recursive function needs to get the start and end of the current search interval as parameters: def binary_search(myarray, start, end): ... But, when I call the actual function from outside, I always start at 0 and finish at the end of the array. It is easy to make 0 a default value for start, but how can I write a default value for end? Is it possible to write something like: def binary_search(myarray, start=0, end=len(myarray)): ... ? | The common idiom* is: def binary_search(myarray, start=0, end=None): if end is None: end = len(myarray) *(usually used to avoid a mutable default argument value, like here: https://stackoverflow.com/a/41686973/5052365) | 2 | 4 |
78,322,897 | 2024-4-14 | https://stackoverflow.com/questions/78322897/why-does-this-code-use-more-and-more-memory-over-time | Python: 3.11 Saxonche: 12.4.2 My website keeps consuming more and more memory until the server runs out of memory and crashes. I isolated the problematic code to the following script: import gc from time import sleep from saxonche import PySaxonProcessor xml_str = """ <root> <stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff> <stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff> <stuff>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ac auctor ex. Nunc in tincidunt urna. Sed tincidunt eros lacus, sed pulvinar sem venenatis et. Donec euismod orci quis pellentesque sagittis. Donec at tortor in dui mattis facilisis. Pellentesque vel varius lectus. Nunc sed gravida risus, ac finibus elit. Etiam sollicitudin nunc a velit efficitur molestie in ac lectus. Donec vulputate orci odio, sit amet hendrerit odio rhoncus commodo.</stuff> </root> """ while True: print('Running once...') with PySaxonProcessor(license=False) as proc: proc.parse_xml(xml_text=xml_str) gc.collect() sleep(1) This script consumes memory at a rate of about 0.5 MB per second. The memory usage does not plateau after a while. I have logs showing that memory usage continues to grow for hours until the server runs out of memory and crashes. Other things I tried that aren't shown above: Using a PyDocumentBuilder to parse the XML instead of a PySaxonProcessor. It didn't appear to change anything. Deleting the Saxon processor and the return value of parse_xml() using the del Python keyword. No change. I have to use Saxon instead of lxml because I need XPath 3.0 support. What am I doing wrong? How do I parse XML using Saxon in a way that doesn't leak? A few folks have suggested that instantiating the PySaxonProcessor once before the loop will fix the leak. It doesn't. This still leaks: with PySaxonProcessor(license=False) as proc: while True: print('Running once...') proc.parse_xml(xml_text=xml_str) gc.collect() sleep(1) | It looks like a memory leak. I created a bug to track it: https://saxonica.plan.io/issues/6391 And the issue is now fixed in the released SaxonC 12.5. | 4 | 3 |
78,316,570 | 2024-4-12 | https://stackoverflow.com/questions/78316570/docker-docker-compose-and-pycharm-debug-port-conflict | I Am trying to set up the debug environment for Pycharm and Python(Fastapi) in Docker (with docker-compose). But I stuck into the problem that I cannot launch both: the debug server and the docker image. My setup of the entry point of the app: # import debugpy # debugpy.listen(('0.0.0.0', 5678)) # debugpy.wait_for_client() # print("Debugger is attached!") import pydevd_pycharm pydevd_pycharm.settrace('localhost', port=5678, stdoutToServer=True, stderrToServer=True) In the Pycharm I set up port 5678 for Python Debug Server. So, if I start debug in Pycharm first, then I get error during docker-compose up: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:5678 -> 0.0.0.0:0: listen tcp 0.0.0.0:5678: bind: address already in use If I start docker-compose first, then I get the error in Pycharm while starting debug: Address already in use It looks they both want to listen for the same port on my local machine and who listens the first, gets all access. I tried to google but nothing. At the same time, working with VSC does have any problems. It connects to the docker container and debugs as intended. Please advise. | Replace 'localhost' with 'host.docker.internal' (at least on a Mac) Remove any port forward for port 5678 (so Docker won't try to allocate it) Use this in the code: import pydevd_pycharm pydevd_pycharm.settrace('host.docker.internal', port=5678, stdoutToServer=True, stderrToServer=True) You might also want to take a look at the last answer here. | 2 | 1 |
78,311,513 | 2024-4-11 | https://stackoverflow.com/questions/78311513/train-neural-network-for-absolute-function-with-minimum-layers | I'm trying to train neural network to learn y = |x| function. As we know the absolute function has 2 different lines connecting with each other at point zero. So I'm trying to have following Sequential model: Hidden Layer: 2 Dense Layer (activation relu) Output Layer: 1 Dense Layer after training the model,it only fits the half side of the function. Most of the time it is right hand side, sometimes it is the left side. As soon as I add 1 more Layer in the hidden layer, so instead of 2 I have 3, it perfectly fits the function. Can anyone explain why there is need an extra layer when the absolute function has only one cut ? Here is the code: import numpy as np X = np.linspace(-1000,1000,400) np.random.shuffle(X) Y = np.abs(X) # Reshape data to fit the model input X = X.reshape(-1, 1) Y = Y.reshape(-1, 1) import tensorflow as tf import tensorflow as tf import numpy as np import matplotlib.pyplot as plt # Build the model model = tf.keras.models.Sequential([ tf.keras.layers.Dense(2, activation='relu'), tf.keras.layers.Dense(1) ]) # Compile the model model.compile(optimizer='adam', loss='mse',metrics=['mae']) model.fit(X, Y, epochs=1000) # Predict using the model Y_pred = model.predict(X) # Plot the results plt.scatter(X, Y, color='blue', label='Actual') plt.scatter(X, Y_pred, color='red', label='Predicted') plt.title('Actual vs Predicted') plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.show() Plot for 2 Dense Layer: Plot for 3 Dense Layer: | It depends on the weight initialization. If both weights of are initilized with positive numbers, the network can only predict positive numbers. For negative numbers, it will always output zero. This will also result no gradients - there is no small step to the weights that would make the output match a bit better. So either switch to a different activation function, such as leaky Relu that also passes some signal for the negative values or change the init. In the code below I demonstrate it with different custom inits. good_init sets one weight to a positive, one to a negative values -> The problem gets solved. both bad_inits set the weights to the same sign, and only half of the domain will be learned. 
import tensorflow as tf import tensorflow as tf import numpy as np import matplotlib.pyplot as plt X = np.linspace(-1000,1000,400) np.random.shuffle(X) Y = np.abs(X) # Reshape data to fit the model input X = X.reshape(-1, 1) Y = Y.reshape(-1, 1) from keras import backend as K def good_init(shape, dtype=None): # one positive, one negative weight val=np.linspace(-1,1,np.prod(shape)).reshape(shape) return K.variable(value=val, dtype=dtype) def bad_init_right(shape, dtype=None): # both weights positive, only right side works val=np.linspace(-1,1,np.prod(shape)).reshape(shape) val=np.abs(val) return K.variable(value=val, dtype=dtype) def bad_init_left(shape, dtype=None): # both weights negative, only right side works val=np.linspace(-1,1,np.prod(shape)).reshape(shape) val=-np.abs(val) return K.variable(value=val, dtype=dtype) # Build the model model = tf.keras.models.Sequential([ tf.keras.layers.Dense(2, activation='relu', kernel_initializer=bad_init_left), tf.keras.layers.Dense(1) ]) # Compile the model model.compile(optimizer='adam', loss='mse',metrics=['mae']) model.fit(X, Y, epochs=100) # Predict using the model Y_pred = model.predict(X) # Plot the results plt.scatter(X, Y, color='blue', label='Actual') plt.scatter(X, Y_pred, color='red', label='Predicted') plt.title('Actual vs Predicted') plt.xlabel('X') plt.ylabel('Y') plt.legend() plt.show() | 5 | 1 |
78,336,104 | 2024-4-16 | https://stackoverflow.com/questions/78336104/ckeditor-is-not-good-for-django-anymore | before ckeditor worked for django but now it is not working and expired. django by itself suggest non-free ckeditor 4 LTS or ckeditor 5 but I don't know how to use it please if there is give me another editor for django or guide me for this ckeditor. it is the warning message: WARNINGS: ?: (ckeditor.W001) django-ckeditor bundles CKEditor 4.22.1 which isn't supported anmyore and which does have unfixed security issues, see for example https://ckeditor.com/cke4/release/CKEditor-4.24.0-LTS . You should consider strongly switching to a different editor (maybe CKEditor 5 respectively django-ckeditor-5 after checking whether the CKEditor 5 license terms work for you) or switch to the non-free CKEditor 4 LTS package. See https://ckeditor.com/ckeditor-4-support/ for more on this. (Note! This notice has been added by the django-ckeditor developers and we are not affiliated with CKSource and were not involved in the licensing change, so please refrain from complaining to us. Thanks.) | You can use github.com/hvlads/django-ckeditor You can install it with pip install django-ckeditor-5 further instructions can be found on the github page. | 2 | 3 |
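A loosely hedged sketch of the typical wiring for django-ckeditor-5 after the pip install; the app label, field class and URL include below are recalled from the package's README rather than taken from this answer, so double-check the exact names against the version you actually install.

```python
# settings.py
INSTALLED_APPS = [
    # ...
    "django_ckeditor_5",
]

# urls.py
from django.urls import path, include

urlpatterns = [
    # ...
    path("ckeditor5/", include("django_ckeditor_5.urls")),
]

# models.py
from django.db import models
from django_ckeditor_5.fields import CKEditor5Field

class Post(models.Model):
    title = models.CharField(max_length=200)
    body = CKEditor5Field(config_name="default")  # rich-text column rendered with CKEditor 5
```

Per the README, the editor also expects a CKEDITOR_5_CONFIGS setting (a dict of toolbar configurations); the project page ships a ready-made example you can copy.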
78,313,930 | 2024-4-12 | https://stackoverflow.com/questions/78313930/generic-type-hinting-for-kwargs | I'm trying to wrap the signal class of blinker with one that enforces typing so that the arguments to send and connect get type-checked for each specific signal. eg if I have a signal user_update which expects sender to be an instance of User and have exactly two kwargs: time: int, audit: str, I can sub-class Signal to enforce that like so: class UserUpdateSignal(Signal): class Receiver(Protocol): def __call__(sender: User, /, time: int, audit: str): ... def send(sender: User, /, time: int, audit: str): # super call def connect(receiver: Receiver): # super call which results in the desired behavior when type-checking: user_update.send(user, time=34, audit="user_initiated") # OK @user_update.connect # OK def receiver(sender: User, /, time: int, audit: str): ... user_update.send("sender") # typing error - signature mismatch @user_update.connect # typing error - signature mismatch def receiver(sender: str): ... The issues with this approach are: it's very verbose, for a few dozen signals I'd have hundreds of lines of code it doesn't actually tie the type of the send signature to that of the connect signature - they can be updated independently, type-checking would pass, but the code would crash when run The ideal approach would apply a signature defined once to both send and connect - probably through generics. I've tried a few approaches so far: Positional Args Only with ParamSpec I can achieve the desired behavior using only class TypedSignal(Generic[P], Signal): def send(self, *args: P.args, **kwargs: P.kwargs): super().send(*args, **kwargs) def connect(self, receiver: Callable[P, None]): return super().connect(receiver=receiver) user_update = TypedSignal[[User, str]]() This type-checks positional args correctly but has no support for kwargs due to the limitations of Callable. I need kwargs support since blinker uses kwargs for every arg past sender. Other Attempts Using TypeVar and TypeVarTuple I can achieve type-hinting for the sender arg pretty simply using generics: T = TypeVar("T") class TypedSignal(Generic[T], Signal): def send(self, sender: Type[T], **kwargs): super(TypedSignal, self).send(sender) def connect(self, receiver: Callable[[Type[T], ...], None]) -> Callable: return super(TypedSignal, self).connect(receiver) # used as my_signal = TypedSignal[MyClass]() what gets tricky is when I want to add type-checking for the kwargs. The approach I've been attempting to get working is using a variadic generic and Unpack like so: T = TypeVar("T") KW = TypeVarTuple("KW") class TypedSignal(Generic[T, Unpack[KW]], Signal): def send(self, sender: Type[T], **kwargs: Unpack[Type[KW]]): super(TypedSignal, self).send(sender) def connect(self, receiver: Callable[[Type[T], Unpack[Type[KW]]], None]) -> Callable: return super(TypedSignal, self).connect(receiver) but mypy complains: error: Unpack item in ** argument must be a TypedDict which seems odd because this error gets thrown even with no usage of the generic, let alone when a TypedDict is passed. 
Using ParamSpec and Protocol P = ParamSpec("P") class TypedSignal(Generic[P], Signal): def send(self, *args: P.args, **kwargs: P.kwargs) -> None: super().send(*args, **kwargs) def connect(self, receiver: Callable[P, None]): return super().connect(receiver=receiver) class Receiver(Protocol): def __call__(self, sender: MyClass) -> None: pass update = TypedSignal[Receiver]() @update.connect def my_func(sender: MyClass) -> None: pass update.send(MyClass()) but mypy seems to wrap the protocol, so it expects a function that takes the protocol, giving the following errors: error: Argument 1 to "connect" of "TypedSignal" has incompatible type "Callable[[MyClass], None]"; expected "Callable[[Receiver], None]" [arg-type] error: Argument 1 to "send" of "TypedSignal" has incompatible type "MyClass"; expected "Receiver" [arg-type] Summary Is there a simpler way to do this? Is this possible with current python typing? mypy version is 1.9.0 - tried with earlier versions and it crashed completely. | After a lot of trial an error, I've found a relatively simple solution, although it depends on mypy_extensions which is deprecated so this may not be entirely future-proof, although it still works on the latest mypy version. Essentially, using mypy's NamedArg allows defining kwargs in a Callable, enabling us to simply use ParamSpec to solve this: class TypedSignal(Generic[P], Signal): def send(self, *args: P.args, **kwargs: P.kwargs): super().send(*args, **kwargs) def connect(self, receiver: Callable[P, None]): return super().connect(receiver=receiver) user_update = TypedSignal[[User, NamedArg(str, "metadata")]]() This correctly type-checks calls so that anything aside from the below: @user_update.connect def my_func(sender: User, metadata: str) -> None: pass user_update.send(User(), metadata="metadata") will throw an error. | 2 | 0 |
78,329,987 | 2024-4-15 | https://stackoverflow.com/questions/78329987/numba-dispatch-on-type | I would like to dispatch on the type of the second argument in a function in numba and fail in doing so. If it is an integer then a vector should be returned, if it is itself an array of integers, then a matrix should be returned. The first code does not work @njit def test_dispatch(X, indices): if isinstance(indices, nb.int64): ref_pos = np.empty(3, np.float64) ref_pos[:] = X[:, indices] return ref_pos elif isinstance(indices, nb.int64[:]): ref_pos = np.empty((3, len(indices)), np.float64) ref_pos[:, :] = X[:, indices] return ref_pos while the second one, with an else, does. @njit def test_dispatch(X, indices): if isinstance(indices, nb.int64): ref_pos = np.empty(3, np.float64) ref_pos[:] = X[:, indices] return ref_pos else: ref_pos = np.empty((3, len(indices)), np.float64) ref_pos[:, :] = X[:, indices] return ref_pos I guess that the problem is the type declaration via nb.int64[:] but I don't get it to work in any other way. Do you have an idea? Note that this question applies to numba>=0.59. generated_jit is deprecated in earlier versions and actually removed from versions 0.59 on. | You should not use isinstance in a JIT function like this, but instead use @overload (@generated_jit was the old obsolete way to do that) which is specifically made for this purpose. This enables Numba to generate the code faster since only a part of the function is compiled for each case rather than all the case for each specialization. Moreover, isinstance is experimental as specified by Numba in a warning when your first code is executed (warning are reported for users to read them ;) ). Using the new @overload method Starting from Numba 0.59, overload must be used instead: import numba as nb import numpy as np def test_dispatch_scalar(X, indices): ref_pos = np.empty(3, np.float64) ref_pos[:] = X[:, indices] return ref_pos def test_dispatch_vector(X, indices): ref_pos = np.empty((3, len(indices)), np.float64) ref_pos[:, :] = X[:, indices] return ref_pos # Pure-python fallback implementation def test_dispatch_impl(X, indices): if isinstance(indices, (int, np.integer)): return test_dispatch_scalar(X, indices) elif isinstance(indices, np.ndarray) and indices.ndim == 1 and np.issubdtype(indices.dtype, np.integer): return test_dispatch_vector(X, indices) else: assert False # Unsupported # Numba-specific overload @nb.extending.overload(test_dispatch_impl) def test_dispatch_impl_overload(X, indices): if isinstance(indices, nb.types.Integer): return test_dispatch_scalar elif isinstance(indices, nb.types.Array) and indices.ndim == 1 and isinstance(indices.dtype, nb.types.Integer): return test_dispatch_vector else: assert False # Unsupported @nb.njit def test_dispatch(X, indices): return test_dispatch_impl(X, indices) Old deprecated solution Here is an example reasoning about generic types: import numba as nb import numpy as np @nb.generated_jit(nopython=True) def test_dispatch(X, indices): if isinstance(indices, nb.types.Integer): def test_dispatch_scalar(X, indices): ref_pos = np.empty(3, np.float64) ref_pos[:] = X[:, indices] return ref_pos return test_dispatch_scalar elif isinstance(indices, nb.types.Array) and indices.ndim == 1 and isinstance(indices.dtype, nb.types.Integer): def test_dispatch_vector(X, indices): ref_pos = np.empty((3, len(indices)), np.float64) ref_pos[:, :] = X[:, indices] return ref_pos return test_dispatch_vector else: assert False # Unsupported Here is an example reasoning about specific types: 
import numba as nb import numpy as np @nb.generated_jit(nopython=True) def test_dispatch(X, indices): if indices == nb.types.int64: def test_dispatch_scalar(X, indices): ref_pos = np.empty(3, np.float64) ref_pos[:] = X[:, indices] return ref_pos return test_dispatch_scalar elif isinstance(indices, nb.types.Array) and indices.ndim == 1 and indices.dtype == nb.types.int64: def test_dispatch_vector(X, indices): ref_pos = np.empty((3, len(indices)), np.float64) ref_pos[:, :] = X[:, indices] return ref_pos return test_dispatch_vector else: assert False # Unsupported Requesting specifically 64-bit integers can be a bit too restrictive so I advise you to mix generic type tests and specific ones. For the same reason, you should avoid testing directly if arrays are of a specific type, simply because they can often be contiguous or not or can contain item types compatible with your function. Note that generic JIT functions are meant to generate functions which are compiled separately regarding the target input type (not the values). | 2 | 1 |
78,305,720 | 2024-4-10 | https://stackoverflow.com/questions/78305720/how-to-overlap-a-geopandas-dataframe-with-basemap | I have a shapefile that I read as a geopandas dataframe import geopandas as gpd gdf = gpd.read_file('myfile.shp') gdf.plot() where gdf.crs <Projected CRS: ESRI:54009> Name: World_Mollweide Axis Info [cartesian]: - E[east]: Easting (metre) - N[north]: Northing (metre) Area of Use: - name: World. - bounds: (-180.0, -90.0, 180.0, 90.0) Coordinate Operation: - name: World_Mollweide - method: Mollweide Datum: World Geodetic System 1984 - Ellipsoid: WGS 84 - Prime Meridian: Greenwich and gdf.total_bounds array([-17561329.90352868, -6732161.66088735, 17840887.22672861, 8750122.26961274]) I would like to use basemap to plot the lat/lon grid on top of it. This is what I am doing from mpl_toolkits.basemap import Basemap # Create a Basemap instance with the same projection as the GeoDataFrame map = Basemap(projection='moll', lon_0=-0, lat_0=-0, resolution='c') # Create a figure and axis fig, ax = plt.subplots(figsize=(10, 6)) # Plot the basemap map.drawcoastlines() map.drawcountries() map.drawparallels(range(-90, 91, 30), labels=[1,0,0,0], fontsize=10) map.drawmeridians(range(-180, 181, 60), labels=[0,0,0,1], fontsize=10) # Plot the GeoDataFrame on top of the basemap gdf.plot(ax=ax, color='red', markersize=5) but this is what I get | That's because the Mollweide projection's parameters (i.e, the proj-string) used by the Basemap are different from the ones of your GeoDataFrame (that is ESRI:54009) : >>> gdf.crs.srs 'esri:54009' >>> map.srs '+proj=moll +R=6370997.0 +units=m +lat_0=0.0 +lon_0=0.0 +x_0=18019900...' A simple fix would be to call to_crs (with the basemap's srs) before making the plot : gdf.plot(ax=ax, color='red', markersize=5) gdf.to_crs(map.srs).plot(ax=ax, color='red') SN: Using map as a variable name isn't recommended since it's a built-in. | 4 | 6 |
78,332,981 | 2024-4-16 | https://stackoverflow.com/questions/78332981/memory-leak-when-integrityerror-occurs | I am developing an API which accesses a MariaDB. I use SQLAlchemy for DB-access and encountered a strange memory leak while inserting data. When I insert data and there is no error, everything is normal. RAM consumption goes up but when the transaction is finished it goes down again as expected. That is not the case when an IntegrityError occurs due to the unique index in the table. Everytime I start a new insert and the error occurs, the ram consumption goes up and never down again. In my project, I use FastAPI and a uvicorn server to run the api, but for further investigation I narrowed down the problem to a compact 90-liner and have the same behaviour. The test script automatically creates the table, but the DB (etetete) has to be created manually. What am I missing here? Python Version 3.12 Latest MariaDB in freshly created Docker Container pip list output: Package Version ----------------- ------- greenlet 3.0.3 mariadb 1.1.10 packaging 24.0 pip 24.0 SQLAlchemy 2.0.29 typing_extensions 4.11.0 Python test script: import time from sqlalchemy import create_engine,Index,String from sqlalchemy.orm import DeclarativeBase from sqlalchemy.orm import mapped_column from sqlalchemy.orm import Mapped from sqlalchemy.orm import Session class Base(DeclarativeBase): pass class Person(Base): __tablename__ = "person" __table_args__ = ( Index("age","children","number",unique=True), ) id: Mapped[int] = mapped_column(primary_key=True,autoincrement=True) age: Mapped[int] children: Mapped[int] number: Mapped[int] name: Mapped[str] = mapped_column(String(20)) prename: Mapped[str] = mapped_column(String(20)) def generatePersons(nValues): persons = [] for i in range(0,nValues): persons.append(Person( age=i, children=i, number=i, name="Smith", prename="John" )) return persons #DB TEST FUNCTIONS--------------- def testDb(nValues): print("Start") engine = create_engine("mariadb+mariadbconnector://root:12345@localhost:3306/etetete") Base.metadata.create_all(engine) s = Session(engine) persons = generatePersons(nValues) try: s.add_all(persons) s.commit() except Exception as e: print("error" + str(e)) s.rollback() finally: s.expunge_all() s.expire_all() s.close() engine.clear_compiled_cache() engine.dispose() print("End") def testDbContextManager(nValues): print("Start") engine = create_engine("mariadb+mariadbconnector://root:12345@localhost:3306/etetete") Base.metadata.create_all(engine) persons = generatePersons(nValues) with Session(engine) as s: try: s.add_all(persons) s.commit() except Exception as e: print("error"+str(e)) print("End") #---------------DB TEST FUNCTIONS def executeTest(): for i in range(0,100): start = time.time() # testDb(200000) testDbContextManager(200000) end = time.time() res = end - start print("ExecutionTime: " + str(round(res, 2))+"s") input("Once again?") print() executeTest() I tried several different approaches. With or without context manager. Different SQLAlchemy versions. Using python garbage collector. First flushing and finally committing to DB. I watch the ram consumption with windows task manager, but also used python memory-profiler... | Ok, after hours and hours of research and many different tries I finally found a solution for this problem. It was no problem with SQLAlchemy, this was Pythons behaviour in exception handling. 
When the exception occurs, Python builds up a stack traceback, and because that traceback is so large here (its frames still reference the huge batch of pending objects), it consumes a lot of memory and is never cleared. The solution was to simply add this to the except block to manually delete the traceback: e.__traceback__ = None I implemented this solution after reading this blog post: https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/ | 2 | 2 |
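To make the fix concrete, here is a sketch of the question's testDb function with the answer's single line added; imports and the rest of the script are unchanged from the question, and the session/engine cleanup is trimmed to the essential calls.

```python
def testDb(nValues):
    engine = create_engine("mariadb+mariadbconnector://root:12345@localhost:3306/etetete")
    Base.metadata.create_all(engine)
    s = Session(engine)
    try:
        s.add_all(generatePersons(nValues))
        s.commit()
    except Exception as e:
        print("error" + str(e))
        s.rollback()
        # Drop the traceback so the frames it holds (and the ~200k pending
        # Person objects they reference) can actually be garbage collected.
        e.__traceback__ = None
    finally:
        s.close()
        engine.dispose()
```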
78,313,566 | 2024-4-12 | https://stackoverflow.com/questions/78313566/including-a-select-all-feature-in-dash-plotly-callback-python | I've got a plotly bar chart that's connected to a callback to allow filtering. Using below, filtering is done on the Type column. Issue: The real data I actually filter on contains thousands of items. I want to show all data initially but don't want to visualise all individual items in the dropdown bar (because there are too many). Therefore, I'm aiming to incorporate a Select All feature. I don't think the current approach can be manipulated but I'm open to new ideas. Question: If Select All is chosen from the dropdown bar, it should be the only visible icon. You can't have all the items and something else. Where an individual type(s) is chosen initially, if Select All is subsequently selected, it should drop the individual item from the drop down bar. Example below. Start with B, then Select All is chosen so B should be dropped from the dropdown bar. Further, I doubt this is a function at all, but if Select All is in place, then individual type(s) cannot be added to the dropdown bar. import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import plotly.express as px import plotly.graph_objs as go import pandas as pd df = pd.DataFrame({ 'Type': ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'], }) N = 300 df = pd.concat([df] * N, ignore_index=True) df['TIMESTAMP'] = pd.date_range(start='2024/01/01 07:36', end='2024/01/30 08:38', periods=len(df)) df['DATE'], df['TIME'] = zip(*[(d.date(), d.time()) for d in df['TIMESTAMP']]) df['DATE'] = pd.to_datetime(df['DATE'], format='%Y-%m-%d') external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) filter_box = html.Div(children=[ html.Div(children=[ dcc.Dropdown( id = 'Type', options = [ {'label': x, 'value': x} for x in df['Type'].unique() ] + [ {'label': 'Select All', 'value': 'all'} ], value = 'all', multi = True, clearable = True, style = {'display': 'inline-block','margin':'0.1rem'} ), ], className = "vstack gap-1 h-100", ) ]) app.layout = dbc.Container([ dbc.Row([ dbc.Col([ dbc.Row([ dbc.Col(html.Div(filter_box), ), ]), ]), dbc.Col([ dbc.Row([ dcc.Graph(id = 'date-bar-chart'), ]), ]) ]) ], fluid = True) @app.callback( Output('date-bar-chart', 'figure'), [Input('Type', 'value'), ]) def chart(value_type): if 'all' in value_type: value_type = ['all'] else: value_type = value_type if value_type == 'all': dff = df elif value_type == ['all']: dff = df else: dff = df[df['Type'].isin(value_type)] df_count = dff.groupby(['DATE','Type'])['DATE'].count().reset_index(name = 'counts') if df_count.empty == True: type_fig = go.Figure() else: df_count = df_count type_fig = px.bar(x = df_count['DATE'], y = df_count['counts'], color = df_count['Type'] ) return type_fig if __name__ == '__main__': app.run_server(debug = True, port = 8052) | Your selection processing is OK, but you are missing an update of the Dropdown component. Also, in the case of adding something else to Select all option, you probably want to only purge the selection only, but keep the graph unchanged. It can be noted, that there are opposite cases, that end up with the similar selection value, e.g. ['A', 'all']. To distinguish them you need to track the previous state, which can be implemented with dcc.Store component. 
These sum up as the following changes: Additional imports from dash import no_update from dash.dependencies import State Some constants and an auxiliary chart creation method: SELECT_ALL = 'all' # to avoid string literals in the code def make_figure(*, select_all=True, selected_values=None): if select_all: dff = df # use source data frame elif selected_values is None: return go.Figure() # result is empty without filter values else: dff = df[df['Type'].isin(selected_values)] # filtered df_count = dff.groupby(['DATE', 'Type'])['DATE'].count().reset_index( name='counts') if df_count.empty: return go.Figure() return px.bar(x=df_count['DATE'], y=df_count['counts'], color=df_count['Type']) Layout update: filter_box = html.Div(children=[ dcc.Store(id='session_select_all_types', data=True, storage_type='session'), html.Div(children=[ dcc.Dropdown( ... app.layout = dbc.Container([ ... # Initialize with all types selected dcc.Graph(id='date-bar-chart', figure=make_figure(select_all=True)), Callback: @app.callback( Output('date-bar-chart', 'figure'), Output('Type', 'value'), Output('session_select_all_types', 'data'), Input('Type', 'value'), State('session_select_all_types', 'data')) def chart(value_type, select_all_types_previous): select_all_types = SELECT_ALL in value_type if select_all_types: if select_all_types_previous is True: return (no_update, # you don't want to recreate a valid figure [SELECT_ALL], # force it to have an only Select all value no_update) # already True value_type = [SELECT_ALL] figure = make_figure(select_all=select_all_types, selected_values=value_type) return (figure, value_type, # update the selection select_all_types) # update the flag in the session storage | 3 | 1 |
78,318,223 | 2024-4-12 | https://stackoverflow.com/questions/78318223/change-keyboardinterrupt-to-enter | I have the following code: import time def run_indefinitely(): while True: # code to run indefinitely goes here print("Running indefinitely...") time.sleep(1) try: # Run the code indefinitely until Enter key is pressed run_indefinitely() except KeyboardInterrupt: print("Loop interrupted by user") Is there a way to break out of the while loop by hitting 'enter' instead of ctrl+C ? | This is surprisingly tricky to do in Python. The technique involves spawning a worker thread to interrupt us later. Using python-readchar: import os import signal import sys import time from threading import Timer from readchar import readkey # pip install readchar def wait_for(key, timeout): """wait `timeout` seconds for user to press `key`""" pid = os.getpid() sig = signal.CTRL_C_EVENT if os.name == "nt" else signal.SIGINT timer = Timer(timeout, lambda: os.kill(pid, sig)) timer.start() while True: k = readkey() print(f"received {k!r}") if k == key: timer.cancel() # cancel the timer print("breaking") break def run_indefinitely(): while True: print("Running indefinitely...") try: wait_for(key="\n", timeout=1) except KeyboardInterrupt: continue else: break run_indefinitely() print("Loop interrupted by user") Using pynput: import os import signal import time from pynput import keyboard # pip install pynput pid = os.getpid() def on_press(key): if key is keyboard.Key["enter"]: sig = signal.CTRL_C_EVENT if os.name == "nt" else signal.SIGINT os.kill(pid, sig) with keyboard.Listener(on_press=on_press) as k: try: for i in range(10): print("Running indefinitely...") time.sleep(1) except KeyboardInterrupt: pass print("Loop interrupted by user") | 2 | 4 |
78,335,363 | 2024-4-16 | https://stackoverflow.com/questions/78335363/converting-a-binary-to-a-string-variable-in-polars-python-library-with-non-utf | I'm having trouble manipulating a dataset in Python which has non-UTF-8 characters. The strings are imported as a binary. But I am having issues converting the binary columns to strings where a cell has non UTF-8 characters. A minimal working example of my issue is import polars as pl import pandas as pd pd_df = pd.DataFrame([[b"bob", b"value 2", 3], [b"jane", b"\xc4", 6]], columns=["a", "b", "c"]) df = pl.from_pandas(pd_df) column_names = df.columns # Loop through the column names for col_name in column_names: # Check if the column has binary values if df[col_name].dtype ==pl.Binary: # Convert the binary column to string format print(col_name) df = df.with_columns(pl.col(col_name).cast(pl.String)) This throws an error when converting column b. For a solution, I'm fine converting any non-utf 8 characters to blanks. Have tried many other suggestions for conversion in online suggestions, but I can't get any of them to work. | This solution also relies on applying python's native bytes.decode to all elements in the columns of type pl.Binary. Unfortunately, we cannot yet use polars' native expression API for this, but need to call pl.Expr.map_elements instead. df.with_columns( pl.col(pl.Binary).map_elements( lambda bytes: bytes.decode(errors='ignore'), return_dtype=pl.String ) ) shape: (2, 3) ββββββββ¬ββββββββββ¬ββββββ β a β b β c β β --- β --- β --- β β str β str β i64 β ββββββββͺββββββββββͺββββββ‘ β bob β value 2 β 3 β β jane β β 6 β ββββββββ΄ββββββββββ΄ββββββ | 3 | 2 |