question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
75,205,191 | 2023-1-23 | https://stackoverflow.com/questions/75205191/python-thread-calling-wont-finish-when-closing-tkinter-application | I am making a timer using tkinter in python. The widget simply has a single button. This button doubles as the element displaying the time remaining. The timer has a thread that simply updates what time is shown on the button. The thread simply uses a while loop that should stop when an event is set. When the window is closed, I use protocol to call a function that sets this event then attempts to join the thread. This works most of the time. However, if I close the program just as a certain call is being made, this fails and the thread continues after the window has been closed. I'm aware of the other similar threads about closing threads when closing a tkinter window. But these answers are old, and I would like to avoid using thread.stop() if possible. I tried reducing this as much as possible while still showing my intentions for the program. import tkinter as tk from tkinter import TclError, ttk from datetime import timedelta import time import threading from threading import Event def strfdelta(tdelta): # Includes microseconds hours, rem = divmod(tdelta.seconds, 3600) minutes, seconds = divmod(rem, 60) return str(hours).rjust(2, '0') + ":" + str(minutes).rjust(2, '0') + \ ":" + str(seconds).rjust(2, '0') + ":" + str(tdelta.microseconds).rjust(6, '0')[0:2] class App(tk.Tk): def __init__(self): super().__init__() self.is_running = False is_closing = Event() self.start_time = timedelta(seconds=4, microseconds=10, minutes=0, hours=0) self.current_start_time = self.start_time self.time_of_last_pause = time.time() self.time_of_last_unpause = None # region guisetup self.time_display = None self.geometry("320x110") self.title('Replace') self.resizable(False, False) box1 = self.create_top_box(self) box1.place(x=0, y=0) # endregion guisetup self.timer_thread = threading.Thread(target=self.timer_run_loop, args=(is_closing, )) self.timer_thread.start() def on_close(): # This occasionally fails when we try to close. is_closing.set() # This used to be a boolean property self.is_closing. Making it an event didn't help. print("on_close()") try: self.timer_thread.join(timeout=2) finally: if self.timer_thread.is_alive(): self.timer_thread.join(timeout=2) if self.timer_thread.is_alive(): print("timer thread is still alive again..") else: print("timer thread is finally finished") else: print("timer thread finished2") self.destroy() # https://stackoverflow.com/questions/111155/how-do-i-handle-the-window-close-event-in-tkinter self.protocol("WM_DELETE_WINDOW", on_close) def create_top_box(self, container): box = tk.Frame(container, height=110, width=320) box_m = tk.Frame(box, bg="blue", width=320, height=110) box_m.place(x=0, y=0) self.time_display = tk.Button(box_m, text=strfdelta(self.start_time), command=self.toggle_timer_state) self.time_display.place(x=25, y=20) return box def update_shown_time(self, time_to_show: timedelta = None): print("timer_run_loop must finish. flag 0015") # If the window closes at this point, everything freezes self.time_display.configure(text=strfdelta(time_to_show)) print("timer_run_loop must finish. 
flag 016") def toggle_timer_state(self): # update time_of_last_unpause if it has never been set if not self.is_running and self.time_of_last_unpause is None: self.time_of_last_unpause = time.time() if self.is_running: self.pause_timer() else: self.start_timer_running() def pause_timer(self): pass # Uses self.time_of_last_unpause, Alters self.is_running, self.time_of_last_pause, self.current_start_time def timer_run_loop(self, event): while not event.is_set(): if not self.is_running: print("timer_run_loop must finish. flag 008") self.update_shown_time(self.current_start_time) print("timer_run_loop must finish. flag 018") print("timer_run_loop() ending") def start_timer_running(self): pass # Uses self.current_start_time; Alters self.is_running, self.time_of_last_unpause if __name__ == "__main__": app = App() app.mainloop() You don't even have to press the button for this bug to manifest, but it does take trail and error. I just run it and hit alt f4 until it happens. If you run this and encounter the problem, you will see that "timer_run_loop must finish. flag 0015" is the last thing printed before we check if the thread has ended. That means, self.time_display.configure(text=strfdelta(time_to_show)) hasn't finished yet. I think closing the tkinter window while a thread is using this tkinter button inside of it is somehow causing a problem. There seems to be very little solid documentation about the configure method in tkinter. Python's official documention of tkinter mentions the function only in passing. It's just used as a read-only dictionary. A tkinter style class gets a little bit of detail about it's configure method, but this is unhelpful. The tkdocs lists configure aka config as one of the methods available for all widgets. This tutorial article seems to be the only place that shows the function actually being used. But it doesn't mention any possible problems or exceptions the method could encounter. Is there some resource sharing pattern I'm not using? Or is there a better way to end this thread? | Ok, so, first I would like to introduce the .after method, which can be used in conjunction with your widget, not requiring the use of threads Notice that the update_time function is called once and calls itself again, making the loop not interfere with tkinter's mainloop in general. This will close along with the program without any problems. import datetime from tkinter import * start_time = datetime.datetime.now() def update_timer(): current_time = datetime.datetime.now() timer_label.config(text=f'{current_time - start_time}') root.after(1000, update_timer) root = Tk() timer_label = Label(text='0') timer_label.pack() update_timer() root.mainloop() Now some explanation about Non-Daemon Threads... When you create a non-daemon thread, it will run until it finishes executing, that is, it will remain open even if the parent is closed, until the process ends. # import module from threading import * import time # creating a function def thread_1(): for i in range(5): print('this is non-daemon thread') time.sleep(2) # creating a thread T T = Thread(target=thread_1) # starting of thread T T.start() # main thread stop execution till 5 sec. 
time.sleep(5) print('main Thread execution') Output this is non-daemon thread this is non-daemon thread this is non-daemon thread main Thread execution this is non-daemon thread this is non-daemon thread Now see the same example using a daemon thread, this thread will respect the execution nature of the main thread, that is, if it stops, the 'subthread' stops too. # import modules import time from threading import * # creating a function def thread_1(): for i in range(5): print('this is thread T') time.sleep(3) # creating a thread T = Thread(target=thread_1, daemon=True) # starting of Thread T T.start() time.sleep(5) print('this is Main Thread') this is thread T this is thread T this is Main Thread That said, my main solution would be to use .after, if even this solution fails and you need to use threads, use threads daemon=True, this will correctly close the threads after you close your app with tkinter You can see more on Python Daemon Threads | 6 | 3 |
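
A minimal sketch of how the `.after` approach in the accepted answer maps onto the original countdown-button use case. The widget layout and the 4-second starting value come from the question; the 100 ms tick is an arbitrary choice. Everything runs on Tk's own event loop, so there is no background thread to join at shutdown:

```python
import tkinter as tk
from datetime import timedelta

root = tk.Tk()
remaining = timedelta(seconds=4)
button = tk.Button(root, text=str(remaining))
button.pack()

def tick():
    global remaining
    remaining = max(remaining - timedelta(milliseconds=100), timedelta(0))
    button.config(text=str(remaining))
    root.after(100, tick)  # re-scheduled on Tk's event loop; stops when the window closes

tick()
root.mainloop()
```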
75,218,781 | 2023-1-24 | https://stackoverflow.com/questions/75218781/how-to-read-udp-data-from-a-given-port-with-python-in-windows | On Windows 10 I want to read data from UDP port 9001. I have created the following script which does not give any output (python 3.10.9): import socket sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.bind(("", 9001)) while True: data, addr = sock.recv(1024) print(f"received message: {data.decode()} from {addr}") I checked that a device is sending UDP data on port 9001 using wireshark. But the above code just "runs" on powershell without any output (and without any errors). Any ideas how to fix this? I found this page with a powershell script that is supposed to listen to a UDP port. So I tried this and created a file Start-UDPServer.ps1 with the content as described in that page as follows: function Start-UDPServer { [CmdletBinding()] param ( # Parameter help description [Parameter(Mandatory = $false)] $Port = 10000 ) # Create a endpoint that represents the remote host from which the data was sent. $RemoteComputer = New-Object System.Net.IPEndPoint([System.Net.IPAddress]::Any, 0) Write-Host "Server is waiting for connections - $($UdpObject.Client.LocalEndPoint)" Write-Host "Stop with CRTL + C" # Loop de Loop do { # Create a UDP listender on Port $Port $UdpObject = New-Object System.Net.Sockets.UdpClient($Port) # Return the UDP datagram that was sent by the remote host $ReceiveBytes = $UdpObject.Receive([ref]$RemoteComputer) # Close UDP connection $UdpObject.Close() # Convert received UDP datagram from Bytes to String $ASCIIEncoding = New-Object System.Text.ASCIIEncoding [string]$ReturnString = $ASCIIEncoding.GetString($ReceiveBytes) # Output information [PSCustomObject]@{ LocalDateTime = $(Get-Date -UFormat "%Y-%m-%d %T") SourceIP = $RemoteComputer.address.ToString() SourcePort = $RemoteComputer.Port.ToString() Payload = $ReturnString } } while (1) } and started it in an Powershell terminal (as admin) as .\Start-UDPServer.ps1 -Port 9001 and it returned to the Powershell immediately without ANY output (or error message). Maybe windows is broken? If there is a solution to finally listen to UDP port 9001, I still strongly prefer a python solution! | As far as I can see, your posted Python code should have given you an error when run if it was receiving any data, so that suggests that the data was not getting to the process at all. I'd recommend checking your Windows Firewall settings for that port, and any other host-based firewalls you might be running. But also, the recv() method does not return a tuple. recvfrom() does, so the following code works: import socket sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.bind(("", 9001)) while True: data, addr = sock.recvfrom(1024) print(f"received message: {data.decode()} from {addr}") A tangential note: the Powershell script does not actually start a UDP server, it just creates a function to do so. So you need to add a line Start-UDPServer -Port 9001 at the end to call the function if you want it to actually listen for datagrams. | 6 | 4 |
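
For quick local testing of the listener in the accepted answer, independent of the real device, a minimal sender can push a datagram at the same port; the loopback address and payload here are arbitrary:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello from the test sender", ("127.0.0.1", 9001))
sock.close()
```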
75,161,099 | 2023-1-18 | https://stackoverflow.com/questions/75161099/using-great-expectations-with-index-of-pandas-data-frame | If I have a a data frame df = pd.DataFrame({'A': [1.1, 2.2, 3.3], 'B': [4.4, 5.5, 6.6]}) I can use Great Expectations to check the name and dtypes of the columns like so: import great_expectations as ge df_asset = ge.from_pandas(df) # List of expectations df_asset.expect_column_to_exist('A') df_asset.expect_column_to_exist('B') df_asset.expect_column_values_to_be_of_type('A', 'float') df_asset.expect_column_values_to_be_of_type('B', 'float') if df_asset.validate()["success"]: print("Validation passed") else: print("Validation failed") But how can I do a similar thing to check the index of the data frame? I.e. if the data frame was instead df = pd.DataFrame({'A': [1.1, 2.2, 3.3], 'B': [4.4, 5.5, 6.6]}).set_index('A') I am looking for something like df_asset.expect_index_to_exist('idx') df_asset.expect_index_values_to_be_of_type('idx', 'float') to replace in the list of expectations | One quick hack is to use .reset_index to convert the index into a regular column: import great_expectations as ge df_asset = ge.from_pandas(df.reset_index()) # List of expectations df_asset.expect_column_to_exist('A') df_asset.expect_column_to_exist('B') df_asset.expect_column_values_to_be_of_type('A', 'float') df_asset.expect_column_values_to_be_of_type('B', 'float') # index-related expectations df_asset.expect_column_to_exist('index') df_asset.expect_column_values_to_be_of_type('index', 'int') if df_asset.validate()["success"]: print("Validation passed") else: print("Validation failed") Note that the default name for an unnamed index is 'index', but you can also control it with kwarg names (make sure you have pandas>=1.5.0). Here is an example: df_asset = ge.from_pandas(df.reset_index(names='custom_index_name')) This could be useful when you want to avoid clashes with existing column names. This approach can also be used for multiple indexes by providing a tuple of custom names. | 4 | 2 |
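
A small sketch of the same trick with a MultiIndex, assuming pandas >= 1.5 (for the `names=` argument) and the same legacy `great_expectations` API that the question uses; the column names `idx_a`/`idx_b` are invented for illustration:

```python
import pandas as pd
import great_expectations as ge

df = pd.DataFrame({'A': [1.1, 2.2], 'B': [4.4, 5.5], 'C': ['x', 'y']}).set_index(['A', 'B'])

# both index levels become ordinary columns that expectations can target
df_asset = ge.from_pandas(df.reset_index(names=['idx_a', 'idx_b']))
df_asset.expect_column_to_exist('idx_a')
df_asset.expect_column_values_to_be_of_type('idx_b', 'float')
print(df_asset.validate()["success"])
```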
75,223,195 | 2023-1-24 | https://stackoverflow.com/questions/75223195/getting-attributeerror-module-shapely-geos-has-no-attribute-lgeos | I am trying to do this exercise from momepy (https://github.com/pysal/momepy/blob/main/docs/user_guide/getting_started.ipynb), but on the third codeblock f, ax = plt.subplots(figsize=(10, 10)) buildings.plot(ax=ax) ax.set_axis_off() plt.show() I got the following error AttributeError Traceback (most recent call last) Cell In[12], line 5 1 buildings = gpd.read_file(momepy.datasets.get_path('bubenec'), 2 layer='buildings') 4 f, ax = plt.subplots(figsize=(10, 10)) ----> 5 buildings.plot(ax=ax) 6 ax.set_axis_off() 7 plt.show() File ~/miniconda3/envs/testbed/lib/python3.10/site-packages/geopandas/plotting.py:925, in GeoplotAccessor.__call__(self, *args, **kwargs) 923 kind = kwargs.pop("kind", "geo") 924 if kind == "geo": --> 925 return plot_dataframe(data, *args, **kwargs) 926 if kind in self._pandas_kinds: 927 # Access pandas plots 928 return PlotAccessor(data)(kind=kind, **kwargs) File ~/miniconda3/envs/testbed/lib/python3.10/site-packages/geopandas/plotting.py:689, in plot_dataframe(df, column, cmap, color, ax, cax, categorical, legend, scheme, k, vmin, vmax, markersize, figsize, legend_kwds, categories, classification_kwds, missing_kwds, aspect, **style_kwds) 686 markersize = df[markersize].values 688 if column is None: --> 689 return plot_series( 690 df.geometry, 691 cmap=cmap, 692 color=color, ... ---> 67 geom = shapely.geos.lgeos.GEOSGeom_clone(geom._ptr) 68 return shapely.geometry.base.geom_factory(geom) 70 # fallback going through WKB AttributeError: module 'shapely.geos' has no attribute 'lgeos' Not entirely sure if I am missing a module or if there are library incompatibilities | In the new version(2.0.0 released on 12 December 2022) of shapely there is no attribute 'lgeos'. Link to document: https://pypi.org/project/Shapely/#history You can check version by: shapely.__version__ If you want to use lgeos in your code you can downgrade your version to 1.8.5 %pip install shapely==1.8.5 I was able to successfully import on version 1.8.5 on Jupyter. from shapely.geos import lgeos | 4 | 4 |
75,174,062 | 2023-1-19 | https://stackoverflow.com/questions/75174062/how-to-implement-a-fast-type-inference-procedure-for-ski-combinators-in-python | How to implement a fast simple type inference procedure for SKI combinators in Python? I am interested in 2 functions: typable: returns true if a given SKI term has a type (I suppose it should work faster than searching for a concrete type). principle_type: returns principle type if it exists and False otherwise. typable(SKK) = True typable(SII) = False # (I=SKK). This term does not have a type. Similar to \x.xx principle_type(S) = (t1 -> t2 -> t3) -> (t1 -> t2) -> t1 -> t3 principle_type(K) = t1 -> t2 -> t1 principle_type(SK) = (t3 -> t2) -> t3 -> t3 principle_type(SKK) = principle_type(I) = t1 -> t1 Theoretical questions: I read about HindleyβMilner type system. There are 2 algs: Algorithm J and Algorithm W. Do I understand correctly that they are used for more complex type system: System F? System with parametric polymorphism? Are there combinators typable in System F but not typable in the simple type system? As I understand, to find a principle type we need to solve a system of equations between symbolic expressions. Is it possible to simplify the algorithm and speed up the process by using SMT solvers like Z3? My implementation of basic combinators, reduction and parsing: from __future__ import annotations import typing from dataclasses import dataclass @dataclass(eq=True, frozen=True) class S: def __str__(self): return "S" def __len__(self): return 1 @dataclass(eq=True, frozen=True) class K: def __str__(self): return "K" def __len__(self): return 1 @dataclass(eq=True, frozen=True) class App: left: Term right: Term def __str__(self): return f"({self.left}{self.right})" def __len__(self): return len(str(self)) Term = typing.Union[S, K, App] def parse_ski_string(s): # remove spaces s = ''.join(s.split()) stack = [] for c in s: # print(stack, len(stack)) if c == '(': pass elif c == 'S': stack.append(S()) elif c == 'K': stack.append(K()) # elif c == 'I': # stack.append(I()) elif c == ')': x = stack.pop() if len(stack) > 0: # S(SK) f = stack.pop() node = App(f, x) stack.append(node) else: # S(S) stack.append(x) else: raise Exception('wrong c = ', c) if len(stack) != 1: raise Exception('wrong stack = ', str(stack)) return stack[0] def simplify(expr: Term): if isinstance(expr, S) or isinstance(expr, K): return expr elif isinstance(expr, App) and isinstance(expr.left, App) and isinstance(expr.left.left, K): return simplify(expr.left.right) elif isinstance(expr, App) and isinstance(expr.left, App) and isinstance(expr.left.left, App) and isinstance( expr.left.left.left, S): return simplify(App(App(expr.left.left.right, expr.right), (App(expr.left.right, expr.right)))) elif isinstance(expr, App): l2 = simplify(expr.left) r2 = simplify(expr.right) if expr.left == l2 and expr.right == r2: return App(expr.left, expr.right) else: return simplify(App(l2, r2)) else: raise Exception('Wrong type of combinator', expr) # simplify(App(App(K(),S()),K())) = S # simplify(parse_ski_string('(((SK)K)S)')) = S | Simple, maybe not the fastest (but reasonably fast if the types are small). 
from dataclasses import dataclass class OccursError(Exception): pass parent = {} Var = int def new_var() -> Var: t1 = Var(len(parent)) parent[t1] = t1 return t1 @dataclass class Fun: dom: "Var | Fun" cod: "Var | Fun" def S() -> Fun: t1 = new_var() t2 = new_var() t3 = new_var() return Fun(Fun(t1, Fun(t2, t3)), Fun(Fun(t1, t2), Fun(t1, t3))) def K() -> Fun: t1 = new_var() t2 = new_var() return Fun(t1, Fun(t2, t1)) def I() -> Fun: t1 = new_var() return Fun(t1, t1) def find(t1: Var | Fun) -> Var | Fun: if isinstance(t1, Var): if parent[t1] == t1: return t1 t2 = find(parent[t1]) parent[t1] = t2 return t2 if isinstance(t1, Fun): return Fun(find(t1.dom), find(t1.cod)) raise TypeError def occurs(t1: Var, t2: Var | Fun) -> bool: if isinstance(t2, Var): return t1 == t2 if isinstance(t2, Fun): return occurs(t1, t2.dom) or occurs(t1, t2.cod) raise TypeError def unify(t1: Var | Fun, t2: Var | Fun): t1 = find(t1) t2 = find(t2) if isinstance(t1, Var) and isinstance(t2, Var): parent[t1] = t2 elif isinstance(t1, Var) and isinstance(t2, Fun): if occurs(t1, t2): raise OccursError parent[t1] = t2 elif isinstance(t1, Fun) and isinstance(t2, Var): if occurs(t2, t1): raise OccursError parent[t2] = t1 elif isinstance(t1, Fun) and isinstance(t2, Fun): unify(t1.dom, t2.dom) unify(t1.cod, t2.cod) else: raise TypeError def apply(t1: Var | Fun, t2: Var | Fun) -> Var | Fun: t3 = new_var() unify(t1, Fun(t2, t3)) return t3 try: a = S() b = K() ab = apply(a, b) c = K() abc = apply(ab, c) print("#", find(abc)) except OccursError: print("# no type") try: a = S() b = I() ab = apply(a, b) c = I() abc = apply(ab, c) print("#", find(abc)) except OccursError: print("# no type") # Fun(dom=6, cod=6) # no type | 3 | 2 |
75,224,154 | 2023-1-24 | https://stackoverflow.com/questions/75224154/python-backoff-decorator-library-for-retrying-with-exception-treatment | I need the function do_it retrying up to 10 times if any exception is raised, with an interval of 0.5 to 1.5 seconds between retries. I am trying to do it with backoff decorator and the following code: import backoff x = 0 @backoff.on_exception(max_tries=10) def do_it(): global x x = x + 1 print(f'x: {x}') try: z = downloaded_from_somewhere('http://somewhere.com/z') # download may fail y = 1/z # z may be equal to zero, that will raise a division per zero and fail return y except: return f'Error after {x} tries' do_it() The function may raise an exception due to different failures in try: After 10 retries it will return Error after {x} tries The backoff decorator library documentation is precarious. How to correctly specify the decorator parameters @backoff.on_exception(max_tries=10, something_else=blablabla, and_other=blabla2) to have the function working properly? | but if the function fail after the 10 retries, how to instruct my code on how to move forward First, a caveat: the usage of the backoff decorator you show in your question is invalid; you must provide the wait_gen and exception parameters. If you're using the backoff.on_exception method, then you want your function to raise an exception on failure. This is how the backoff decorator knows to retry your function. For example, consider the following, in which we have a function that fails 80% of the time: import backoff import random @backoff.on_exception(backoff.constant, ValueError, interval=1, max_tries=5) def do_it(): x = random.randint(0, 100) if x < 80: print("Failed!") raise ValueError("Failed!") print("Success!") do_it() This will retry do_it() up to five times; if it fails more than five times, the exception bubbles up to the calling code. Running the above looks like this if it succeeds within max_tries: Failed! Failed! Success! And like this if it fails five times: Failed! Failed! Failed! Failed! Failed! Traceback (most recent call last): File "/home/lars/tmp/python/backofftest.py", line 15, in <module> do_it() File "/home/lars/.local/share/virtualenvs/python-LD_ZK5QN/lib64/python3.11/site-packages/backoff/_sync.py", line 105, in retry ret = target(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lars/tmp/python/backofftest.py", line 10, in do_it raise ValueError("Failed!") ValueError: Failed! Presumably if the function fails more than max_tries times, you would catch that exception in the calling code and exit gracefully with an appropriate error message: try: do_it() except ValueError as err: sys.exit("failed to do the thing") (Note that in this example I'm using a constant backoff period via backoff.constant; in practice, you would probably use backoff.expo instead for exponential backoff behavior.) | 5 | 14 |
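
A runnable variant of the pattern in the accepted answer, closer to the original question: retry on any exception (download failure or division by zero) up to 10 times. `downloaded_from_somewhere` is a made-up stand-in for the real download, and the fixed 1-second interval is a simplification of the 0.5 to 1.5 second requirement:

```python
import backoff
import random

def downloaded_from_somewhere(url):
    # stand-in for the real download: fails half the time, sometimes returns 0
    if random.random() < 0.5:
        raise ConnectionError("download failed")
    return random.randint(0, 3)

@backoff.on_exception(backoff.constant, Exception, interval=1, max_tries=10)
def do_it():
    z = downloaded_from_somewhere("http://somewhere.com/z")
    return 1 / z  # ZeroDivisionError also triggers a retry

try:
    print(do_it())
except Exception:
    print("still failing after 10 tries")
```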
75,222,897 | 2023-1-24 | https://stackoverflow.com/questions/75222897/sort-dataframe-based-on-minimum-value-of-two-columns | Let's assume I have the following dataframe: import pandas as pd d = {'col1': [1, 2,3,4], 'col2': [4, 2, 1, 3], 'col3': [1,0,1,1], 'outcome': [1,0,1,0]} df = pd.DataFrame(data=d) I want this dataframe sorted by col1 and col2 on the minimum value. The order of the indexes should be 2, 0, 1, 3. I tried this with df.sort_values(by=['col2', 'col1']), but than it takes the minimum of col1 first and then of col2. Is there anyway to order by taking the minimum of two columns? | Using numpy.lexsort: order = np.lexsort(np.sort(df[['col1', 'col2']])[:, ::-1].T) out = df.iloc[order] Output: col1 col2 col3 outcome 2 3 1 1 1 0 1 4 1 1 1 2 2 0 0 3 4 3 1 0 Note that you can easily handle any number of columns: df.iloc[np.lexsort(np.sort(df[['col1', 'col2', 'col3']])[:, ::-1].T)] col1 col2 col3 outcome 1 2 2 0 0 2 3 1 1 1 0 1 4 1 1 3 4 3 1 0 | 3 | 4 |
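
If readability matters more than speed, a slower pandas-only alternative to the `np.lexsort` answer (it yields the same 2, 0, 1, 3 ordering on the question's data) is to sort each row's pair of values first and then sort by the result:

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 3, 4], 'col2': [4, 2, 1, 3]})

order = (df[['col1', 'col2']]
         .apply(lambda r: sorted(r), axis=1, result_type='expand')
         .sort_values([0, 1])
         .index)
print(df.loc[order])   # index order 2, 0, 1, 3
```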
75,216,528 | 2023-1-23 | https://stackoverflow.com/questions/75216528/blinding-a-message-with-pycrypto-in-python-3 | In Python 2, there were two methods called blind/unblind. Their docs recommend that you should now use pkcs1_15, however they only show how to sign/verify a message. Their sample code looks like this: from Crypto.Signature import pkcs1_15 from Crypto.Hash import SHA256 from Crypto.PublicKey import RSA # Generate a new RSA key pair private_key = RSA.generate(3072) public_key = private_key.publickey() # Message which is larger than the modulus message = 'Some arbitrary text to blind.' message = message.encode('utf-8') hashed_message = SHA256.new(message) # Sign message on the senders side signed_message = pkcs1_15.new(private_key).sign(hashed_message) # On the receivers' side, you can verify the signed message with the public key pkcs1_15.new(public_key).verify(hashed_message, signed_message) # No exceptions raised, so we can conclude that the signature is valid. Does anyone know what the code would look like, if one were to blind it instead? | Firstly: PyCrypto is no longer maintained, and one should use its mostly-API-compatible successor, PyCryptodome, instead. If one is willing to learn a new API, they even advocate to switch to the Cryptography library instead. According to their documentation, they fully dropped support for manually blinding and unblinding messages. Instead they ask you to specifically pick one of the supported padding schemes, as is done in the sample code in your question. Now of course, if you are looking for actual blind signatures as per your tag, then it seems that you'll have to move away from PyCrypto(dome), as they no longer support that. In fact there seems to be no out-of-the-box support in any of the big Python cryptographic libraries, based on a cursory search. | 4 | 1 |
75,219,023 | 2023-1-24 | https://stackoverflow.com/questions/75219023/how-to-convert-date-as-monday-1st-tomonday-1-in-python | I have tried a lot. Banning words doesn't help, removing certain characters doesn't help. The datetime module doesn't have a directive for this. It has things like %d which will give you today's day, for example 24. I have a date in the format of 'Tuesday 24th January' but I need it to be 'Tuesday 24 January'. Is there a way to remove st,nd,rd,th. Or is there an even better way? EDIT: even removing rd would remove it from Saturday. So that doesn't work either. | You can use a regex: import re d = 'Tuesday 24th January' d = re.sub(r'(\d+)(st|nd|rd|th)', r'\1', d) # \1 to restore the captured day print(d) # Output Tuesday 24 January For Saturday 21st January: d = 'Saturday 21st January' d = re.sub(r'(\d+)(st|nd|rd|th)', r'\1', d) print(d) # Output Saturday 21 January | 3 | 2 |
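
Once the suffix is stripped as in the accepted answer, the string can be fed to `strptime` if parsing is the end goal; a year has to be appended because the original string does not carry one (2023 is assumed purely for illustration):

```python
import re
from datetime import datetime

d = 'Tuesday 24th January'
clean = re.sub(r'(\d+)(st|nd|rd|th)', r'\1', d)             # 'Tuesday 24 January'
parsed = datetime.strptime(clean + ' 2023', '%A %d %B %Y')  # year is an assumption
print(parsed.date())                                        # 2023-01-24
```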
75,211,952 | 2023-1-23 | https://stackoverflow.com/questions/75211952/test-fails-in-foundry-when-using-asterisk-for-unpacking-when-creating-a-data | I want to create a DataFrame in a fixture using the following code: @pytest.fixture def my_fun(spark_session): return spark_session.createDataFrame( [ (*['test', 'testy']) ], T.StructType([ T.StructField('mytest', T.StringType()), T.StructField('mytest2', T.StringType() ]) ) def test_something(my_fun): return However, this fails with the following error: TypeError: StructType can not accept object 'test' in type <class 'str'> If I use ('test', 'testy') instead of (*['test', 'testy']), it works. But shouldn't this be synonymous? (I'm using Python 3.8.13, pytest-7.0.1) | They are not the same. The round brackets in your example are not a tuple, they are just round brackets around a list. To make it a tuple you need to add a comma test = (*['test1', 'test2'],) # ('test1', 'test2') | 3 | 2 |
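
A quick illustration of the difference the accepted answer describes, runnable outside Foundry/Spark:

```python
vals = ['test', 'testy']

as_tuple = (*vals,)          # the trailing comma makes this a real 2-tuple
print(as_tuple)              # ('test', 'testy')
print(as_tuple == tuple(vals), as_tuple == ('test', 'testy'))  # True True
```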
75,217,710 | 2023-1-24 | https://stackoverflow.com/questions/75217710/pandas-adding-a-row-to-each-group-if-missing-values-from-a-list | I hope you're doing okay. I've been trying to think how to solve the next problem, but I can't find a way to do it. Can you guys give me a hand, please? I have a dataframe with 4 columns, I want to add the remaining rows per group to have 3 Calendar Weeks, I want the new rows to keep the same value of ID of the group and display a NaN value for the Price and Attribute columns. import pandas as pd import numpy as np input = {'ID':['ITEM1', 'ITEM2', 'ITEM1', 'ITEM4', 'ITEM2', 'ITEM3', 'ITEM4', 'ITEM4'], 'Price':['11', '12', '11', '14', '12', '13', '14', '14' ], 'Attribute': ['A', 'B', 'A', 'D', 'B', 'C', 'D', 'D' ], 'Calendar Week':['1', '2', '2', '1', '3', '1', '3', '2'] } df = pd.DataFrame(input) df = df.sort_values(['ID', 'Calendar Week'], ascending = True).reset_index().drop(columns = 'index') df = ID Price Attribute Calendar Week ITEM1 11 A 1 ITEM1 11 A 2 ITEM2 12 B 2 ITEM2 12 B 3 ITEM3 13 C 1 ITEM4 14 D 1 ITEM4 14 D 2 ITEM4 14 D 3 Expected output: ID Price Attribute Calendar Week ITEM1 11 A 1 ITEM1 11 A 2 ITEM1 NaN NaN 3 ITEM2 NaN NaN 1 ITEM2 12 B 2 ITEM2 12 B 3 ITEM3 13 C 1 ITEM3 NaN NaN 2 ITEM3 NaN NaN 3 ITEM4 14 D 1 ITEM4 14 D 2 ITEM4 14 D 3 | (df.set_index(["ID", "Calendar Week"]) .reindex(pd.MultiIndex.from_product([df["ID"].unique(), ["1", "2", "3"]], names=["ID", "Calendar Week"])) .reset_index()[df.columns]) you can move ID & Calendar Week to index part then reindex with the every possibility of ID versus Calendar Week generated with a product then move them back to columns and restore the original column order to get ID Price Attribute Calendar Week 0 ITEM1 11 A 1 1 ITEM1 11 A 2 2 ITEM1 NaN NaN 3 3 ITEM2 NaN NaN 1 4 ITEM2 12 B 2 5 ITEM2 12 B 3 6 ITEM3 13 C 1 7 ITEM3 NaN NaN 2 8 ITEM3 NaN NaN 3 9 ITEM4 14 D 1 10 ITEM4 14 D 2 11 ITEM4 14 D 3 | 4 | 2 |
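
An equivalent way to build the same grid without moving columns into the index, in case a merge reads more naturally; the frame below is a trimmed-down version of the question's data:

```python
import pandas as pd

df = pd.DataFrame({'ID': ['ITEM1', 'ITEM1', 'ITEM3'],
                   'Price': ['11', '11', '13'],
                   'Attribute': ['A', 'A', 'C'],
                   'Calendar Week': ['1', '2', '1']})

full = pd.MultiIndex.from_product(
    [df['ID'].unique(), ['1', '2', '3']],
    names=['ID', 'Calendar Week']).to_frame(index=False)

out = full.merge(df, on=['ID', 'Calendar Week'], how='left')[df.columns]
print(out)   # ITEM1 weeks 1-3, ITEM3 weeks 1-3, NaN where no data existed
```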
75,211,183 | 2023-1-23 | https://stackoverflow.com/questions/75211183/what-does-pydantic-orm-mode-exactly-do | According to the docs, Pydantic "ORM mode" (enabled with orm_mode = True in Config) is needed to enable the from_orm method in order to create a model instance by reading attributes from another class instance. If ORM mode is not enabled, the from_orm method raises an exception. My questions are: Are there any other effects (in functionality, performance, etc.) in enabling ORM mode? If not, why is it an opt-in feature? | Fortunately, at least the first question can be answered fairly easily. As of version 1.10.4, there are only two places (aside from the plugins), where orm_mode comes into play. BaseModel.from_orm This is basically an alternative constructor. It forgoes the regular __init__ method in favor of a marginally different setup. Not sure, why that is designed this way. But the orm_mode flag must be set for this method to not raise an error. Straightforward. I see no hidden surprises here. BaseModel.validate This method is the default validator for the BaseModel type. Without the orm_mode flag, the validator expects a value that is either 1) an instance of that particular model, 2) a dictionary that can be unpacked into the constructor of that model, or 3) something that can be coerced to a dictionary, then to be unpacked into the constructor of that model. If orm_mode is True and the validator encounters something that is not an instance of the model and not a dictionary, it assumes it is an object that can be passed to the aforementioned from_orm method and calls that instead of trying the dict coercion. Note that this method is not called during initialization and it is not called if something is assigned to a model field of any type that isn't BaseModel. It only comes into play, when you are dealing with nested models (and the objects that serve as the data input are also nested), i.e. with a model that has a field annotated with another model. Only then will the outer model call the validate method of the inner model. Consider the following: from __future__ import annotations from typing import TypeVar from pydantic import BaseModel M = TypeVar("M", bound=BaseModel) class Foo(BaseModel): x: int @classmethod def validate(cls: type[M], value: object) -> M: print("called `Foo.validate`") return super().validate(value) class Config: orm_mode = True class A: x = 1 foo = Foo.from_orm(A) print(foo.json()) The output is {"x": 1} and we see that Foo.validate was not called. Now we extend this a bit: ... class Bar(BaseModel): f: Foo class Config: orm_mode = True class B: f = A bar = Bar.from_orm(B) print(bar.json()) The new output: called `Foo.validate` {"f": {"x": 1}} Now the validator was called as expected and if we were to inject a similar print statement into Foo.from_orm we would see that it too was called, when we called Bar.from_orm right after Foo.validate was called. This may be relevant in certain niche situations, but generally speaking I would argue that this cascading application of from_orm during validation makes sense and should accommodate the main intended use case -- database ORM objects. If you want different behavior during validation, you can always define your own validator methods or even simply override the validate method (depending on your use case). There are no other uses of orm_mode in the source code, so that is it in terms of functionality. 
Performance is not really relevant in those contexts IMO because it is just an altogether different way of initializing an instance of the model. Unless you are interested in whether or not it is faster to first manually turn your ORM object into a dictionary and pass that to parse_obj or to just call from_orm on it. You could benchmark that fairly easily though. No other functionality of the BaseModel is affected (performance wise) by that config setting in any way that I can see. To your second question, I could only speculate. So I will refrain from answering. There is an issue already open for a while that suggests removing the setting altogether, which seems to be kind of in line with your reasoning that it should not be "opt-in" in any case. I am not sure if Samuel Colvin is still accepting backwards-incompatible feature requests for v2, but this issue has not gotten a lot of attention. You might want to participate there. | 14 | 9 |
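
For reference, a minimal sketch (against pydantic v1, where `orm_mode` exists) of the two code paths discussed above: `from_orm` on a plain attribute object, and the cascading validation through a nested model. The class names are invented for the example:

```python
from pydantic import BaseModel

class AddressRow:                 # stands in for an ORM/attribute object
    city = "Berlin"

class PersonRow:
    name = "Ada"
    address = AddressRow()

class Address(BaseModel):
    city: str
    class Config:
        orm_mode = True

class Person(BaseModel):
    name: str
    address: Address              # nested model, so Address.validate is used

    class Config:
        orm_mode = True

print(Person.from_orm(PersonRow()))   # name='Ada' address=Address(city='Berlin')
```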
75,207,298 | 2023-1-23 | https://stackoverflow.com/questions/75207298/drawing-an-arc-tangent-to-two-lines-segments-in-python | I'm trying to draw an arc of n number of steps between two points so that I can bevel a 2D shape. This image illustrates what I'm looking to create (the blue arc) and how I'm trying to go about it: move by the radius away from the target point (red) get the normals of those lines get the intersections of the normals to find the center of the circle Draw an arc between those points from the circle's center This is what I have so far: As you can see, the circle is not tangent to the line segments. I think my approach may be flawed thinking that the two points used for the normal lines should be moved by the circle's radius. Can anyone please tell me where I am going wrong and how I might be able to find this arc of points? Here is my code: import matplotlib.pyplot as plt import numpy as np #https://stackoverflow.com/questions/51223685/create-circle-tangent-to-two-lines-with-radius-r-geometry def travel(dx, x1, y1, x2, y2): a = {"x": x2 - x1, "y": y2 - y1} mag = np.sqrt(a["x"]*a["x"] + a["y"]*a["y"]) if (mag == 0): a["x"] = a["y"] = 0; else: a["x"] = a["x"]/mag*dx a["y"] = a["y"]/mag*dx return [x1 + a["x"], y1 + a["y"]] def plot_line(line,color="go-",label=""): plt.plot([p[0] for p in line], [p[1] for p in line],color,label=label) def line_intersection(line1, line2): xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0]) ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1]) def det(a, b): return a[0] * b[1] - a[1] * b[0] div = det(xdiff, ydiff) if div == 0: raise Exception('lines do not intersect') d = (det(*line1), det(*line2)) x = det(d, xdiff) / div y = det(d, ydiff) / div return x, y line_segment1 = [[1,1],[4,8]] line_segment2 = [[4,8],[8,8]] line = line_segment1 + line_segment2 plot_line(line,'k-') radius = 2 l1_x1 = line_segment1[0][0] l1_y1 = line_segment1[0][1] l1_x2 = line_segment1[1][0] l1_y2 = line_segment1[1][1] new_point1 = travel(radius, l1_x2, l1_y2, l1_x1, l1_y1) l2_x1 = line_segment2[0][0] l2_y1 = line_segment2[0][1] l2_x2 = line_segment2[1][0] l2_y2 = line_segment2[1][1] new_point2 = travel(radius, l2_x1, l2_y1, l2_x2, l2_y2) plt.plot(line_segment1[1][0], line_segment1[1][1],'ro',label="Point 1") plt.plot(new_point2[0], new_point2[1],'go',label="radius from Point 1") plt.plot(new_point1[0], new_point1[1],'mo',label="radius from Point 1") # normal 1 dx = l1_x2 - l1_x1 dy = l1_y2 - l1_y1 normal_line1 = [[new_point1[0]+-dy, new_point1[1]+dx],[new_point1[0]+dy, new_point1[1]+-dx]] plot_line(normal_line1,'m',label="normal 1") # normal 2 dx2 = l2_x2 - l2_x1 dy2 = l2_y2 - l2_y1 normal_line2 = [[new_point2[0]+-dy2, new_point2[1]+dx2],[new_point2[0]+dy2, new_point2[1]+-dx2]] plot_line(normal_line2,'g',label="normal 2") x, y = line_intersection(normal_line1,normal_line2) plt.plot(x, y,'bo',label="intersection") #'blue' theta = np.linspace( 0 , 2 * np.pi , 150 ) a = x + radius * np.cos( theta ) b = y + radius * np.sin( theta ) plt.plot(a, b) plt.legend() plt.axis('square') plt.show() Thanks a lot! | You could try making a Bezier curve, like in this example. 
A basic implementation might be: import matplotlib.path as mpath import matplotlib.patches as mpatches import matplotlib.pyplot as plt Path = mpath.Path fig, ax = plt.subplots() # roughly equivalent of your purple, red and green points points = [(3, 6.146), (4, 8), (6, 8.25)] pp1 = mpatches.PathPatch( Path(points, [Path.MOVETO, Path.CURVE3, Path.CURVE3]), fc="none", transform=ax.transData ) ax.add_patch(pp1) # lines between points ax.plot([points[0][0], points[1][0]], [points[0][1], points[1][1]], 'b') ax.plot([points[1][0], points[2][0]], [points[1][1], points[2][1]], 'b') # plot points for point in points: ax.plot(point[0], point[1], 'o') ax.set_aspect("equal") plt.show() which gives: To do this without using a Matplotlib PathPatch object, you can calculate the Bezier points as, for example, in this answer, which I'll use below to do the same as above (note to avoid using scipy's comb function, as in that answer, I've used the comb function from here): import numpy as np from math import factorial from matplotlib import pyplot as plt def comb(n, k): """ N choose k """ return factorial(n) / factorial(k) / factorial(n - k) def bernstein_poly(i, n, t): """ The Bernstein polynomial of n, i as a function of t """ return comb(n, i) * ( t**(n-i) ) * (1 - t)**i def bezier_curve(points, n=1000): """ Given a set of control points, return the bezier curve defined by the control points. points should be a list of lists, or list of tuples such as [ [1,1], [2,3], [4,5], ..[Xn, Yn] ] n is the number of points at which to return the curve, defaults to 1000 See http://processingjs.nihongoresources.com/bezierinfo/ """ nPoints = len(points) xPoints = np.array([p[0] for p in points]) yPoints = np.array([p[1] for p in points]) t = np.linspace(0.0, 1.0, n) polynomial_array = np.array( [bernstein_poly(i, nPoints-1, t) for i in range(0, nPoints)] ) xvals = np.dot(xPoints, polynomial_array) yvals = np.dot(yPoints, polynomial_array) return xvals, yvals # set control points (as in the first example) points = [(3, 6.146), (4, 8), (6, 8.25)] # get the Bezier curve points at 100 points xvals, yvals = bezier_curve(points, n=100) # make the plot fig, ax = plt.subplots() # lines between control points ax.plot([points[0][0], points[1][0]], [points[0][1], points[1][1]], 'b') ax.plot([points[1][0], points[2][0]], [points[1][1], points[2][1]], 'b') # plot control points for point in points: ax.plot(point[0], point[1], 'o') # plot the Bezier curve ax.plot(xvals, yvals, "k--") ax.set_aspect("equal") fig.show() This gives: | 4 | 1 |
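
If a genuinely tangent circular arc (rather than a Bezier approximation) is needed, the standard fillet construction gives it directly: the tangent points sit at r/tan(theta/2) from the corner and the centre at r/sin(theta/2) along the angle bisector. A sketch of that idea, assuming the two segments are not collinear; the function name and the sweep handling are my own, not from the question:

```python
import numpy as np

def fillet_arc(p_prev, corner, p_next, radius, n=30):
    """Arc of given radius tangent to segments corner->p_prev and corner->p_next."""
    corner = np.asarray(corner, dtype=float)
    u = np.asarray(p_prev, dtype=float) - corner
    v = np.asarray(p_next, dtype=float) - corner
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))   # angle at the corner
    t1 = corner + u * radius / np.tan(theta / 2)          # tangent point on first segment
    t2 = corner + v * radius / np.tan(theta / 2)          # tangent point on second segment
    bisector = (u + v) / np.linalg.norm(u + v)
    center = corner + bisector * radius / np.sin(theta / 2)
    a1 = np.arctan2(t1[1] - center[1], t1[0] - center[0])
    a2 = np.arctan2(t2[1] - center[1], t2[0] - center[0])
    if a2 - a1 > np.pi:                                    # sweep the short way round
        a2 -= 2 * np.pi
    elif a1 - a2 > np.pi:
        a2 += 2 * np.pi
    angles = np.linspace(a1, a2, n)
    return center[0] + radius * np.cos(angles), center[1] + radius * np.sin(angles)

xs, ys = fillet_arc([1, 1], [4, 8], [8, 8], radius=2)      # points from the question
```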
75,214,934 | 2023-1-23 | https://stackoverflow.com/questions/75214934/python3-10-bdist-wheel-did-not-run-successfully-when-install-cffi | i installed Ubuntu 22.04 (fresh installation) an create an virtualenv with Python 3.10. The installed packages are as follows: libpython3.10:amd64 libpython3.10-dbg:amd64 libpython3.10-dev:amd64 libpython3.10-minimal:amd64 libpython3.10-stdlib:amd64 python3.10 python3.10-dbg python3.10-dev python3.10-minimal python3.10-venv When I try to install the application requirements file I see this error: pip install cffi==1.14.0 Collecting cffi==1.14.0 Downloading cffi-1.14.0.tar.gz (463 kB) ββββββββββββββββββββββββββββββββββββββββ 463.1/463.1 KB 6.8 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Requirement already satisfied: pycparser in ./env/lib/python3.10/site-packages (from cffi==1.14.0) (2.18) Building wheels for collected packages: cffi Building wheel for cffi (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py bdist_wheel did not run successfully. β exit code: 1 β°β> [56 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.10 creating build/lib.linux-x86_64-3.10/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.10/cffi running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.10 creating build/temp.linux-x86_64-3.10/c x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/opt/adapter-sqlpooler/env/include -I/usr/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o c/_cffi_backend.c: In function βctypedescr_deallocβ: c/_cffi_backend.c:407:23: error: lvalue required as left operand of assignment 407 | Py_REFCNT(ct) = 43; | ^ c/_cffi_backend.c:410:23: error: lvalue required as left operand of assignment 410 | Py_REFCNT(ct) = 0; | ^ c/_cffi_backend.c: In function βprepare_callback_info_tupleβ: c/_cffi_backend.c:6185:5: warning: βPyEval_InitThreadsβ is deprecated [-Wdeprecated-declarations] 6185 | PyEval_InitThreads(); | ^~~~~~~~~~~~~~~~~~ In file 
included from /usr/include/python3.10/Python.h:130, from c/_cffi_backend.c:2: /usr/include/python3.10/ceval.h:122:37: note: declared here 122 | Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void); | ^~~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βb_callbackβ: c/_cffi_backend.c:6245:5: warning: βffi_prep_closureβ is deprecated: use ffi_prep_closure_loc instead [-Wdeprecated-declarations] 6245 | if (ffi_prep_closure(closure, &cif_descr->cif, | ^~ In file included from c/_cffi_backend.c:15: /usr/include/x86_64-linux-gnu/ffi.h:347:1: note: declared here 347 | ffi_prep_closure (ffi_closure*, | ^~~~~~~~~~~~~~~~ error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cffi Running setup.py clean for cffi Failed to build cffi Installing collected packages: cffi Running setup.py install for cffi ... error error: subprocess-exited-with-error Γ Running setup.py install for cffi did not run successfully. β exit code: 1 β°β> [58 lines of output] running install /opt/adapter-sqlpooler/env/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.linux-x86_64-3.10 creating build/lib.linux-x86_64-3.10/cffi copying cffi/api.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/lock.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/model.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/error.py -> build/lib.linux-x86_64-3.10/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-3.10/cffi copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.10/cffi running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-3.10 creating build/temp.linux-x86_64-3.10/c x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/opt/adapter-sqlpooler/env/include -I/usr/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o c/_cffi_backend.c: In function βctypedescr_deallocβ: c/_cffi_backend.c:407:23: error: lvalue required as left operand of assignment 407 | Py_REFCNT(ct) = 43; | ^ c/_cffi_backend.c:410:23: error: lvalue required as left operand of 
assignment 410 | Py_REFCNT(ct) = 0; | ^ c/_cffi_backend.c: In function βprepare_callback_info_tupleβ: c/_cffi_backend.c:6185:5: warning: βPyEval_InitThreadsβ is deprecated [-Wdeprecated-declarations] 6185 | PyEval_InitThreads(); | ^~~~~~~~~~~~~~~~~~ In file included from /usr/include/python3.10/Python.h:130, from c/_cffi_backend.c:2: /usr/include/python3.10/ceval.h:122:37: note: declared here 122 | Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void); | ^~~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βb_callbackβ: c/_cffi_backend.c:6245:5: warning: βffi_prep_closureβ is deprecated: use ffi_prep_closure_loc instead [-Wdeprecated-declarations] 6245 | if (ffi_prep_closure(closure, &cif_descr->cif, | ^~ In file included from c/_cffi_backend.c:15: /usr/include/x86_64-linux-gnu/ffi.h:347:1: note: declared here 347 | ffi_prep_closure (ffi_closure*, | ^~~~~~~~~~~~~~~~ error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure Γ Encountered error while trying to install package. β°β> cffi note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. any ideas?? I have tried reinstalling the Python installation packages but the problem persists, I have read some publications and indicate that the problem may be caused because there are no headers related to the python dev package. | Most probably cffi==1.14.0 is not compatible with Python 3.10. Try the latest version: pip install cffi==1.15.1 | 4 | 4 |
75,209,249 | 2023-1-23 | https://stackoverflow.com/questions/75209249/overriding-a-method-mypy-throws-an-incompatible-with-super-type-error-when-ch | For the following code: class Person: number: int def __init__(self) -> None: self.number = 0 def __add__(self, other: 'Person') -> 'Person': self.number = self.number + other.number return self class Dan(Person): def __init__(self) -> None: super().__init__() def __add__(self, other: 'Dan') -> 'Dan': self.number = self.number + other.number + 1 return self MyPy Output: test.py:15: error: Argument 1 of "__add__" is incompatible with supertype "Person"; supertype defines the argument type as "Person" [override] test.py:15: note: This violates the Liskov substitution principle test.py:15: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides I would think that the child type, Dan, is more specific than Person and would not violate the Liskov substitution principle. My first thought for a solution was to create a type variable person_co = TypeVar("person_co", bound=Person). But that did not work because Person was not yet defined. I also tried making the argument and return type Person for the __add__ method in Dan. That worked and MyPy did not throw errors but that is not correct, right? | Why it violates the LSP The LSP states that method return types are covariant, whereas method parameter types are contravariant in the subtype. You shall not strengthen the preconditions, i.e. you shall not require Dan -- a strict subtype of Person -- for other in the Dan.__add__ method, if you annotated other with Person in the signature of Person.__add__. Look at the following function: def f(p: Person, q: Person) -> None: out = p + q print(out.number) Since p must be a Person, I expect that I can pass some Dan instance to it. I can also pass some Person instance to q. I would expect p + q to work because I expect I can substitute any Dan instance, where a Person instance is required. Your definition of the overridden method Dan.__add__ would mean that I can not pass a Dan instance for p because your method would not actually allow q to be any Person, only Dan. That is why your code violates the Liskov substitution principle. It is not because of the return type annotation, but because of the argument type annotation. Option A: Just stick to Person From your example it seems you have a few different options to fix this. The simplest one would be to keep other: Person in your overridden method: from __future__ import annotations class Person: number: int def __init__(self) -> None: self.number = 0 def __add__(self, other: Person) -> Person: self.number = self.number + other.number return self class Dan(Person): def __init__(self) -> None: super().__init__() def __add__(self, other: Person) -> Dan: self.number = self.number + other.number + 1 return self This seems reasonable since your method only relies on other having the number attribute and it being something that can be added to an integer. (By the way, you can omit the quotation marks, if you add from __future__ import annotations at the top of your module.) Option B: Type variable for genericity A more sophisticated option is the one you mentioned that uses a type variable, making the method generic in terms of one (or both) of its parameters. You can set the upper bound by enclosing it in quotes, if its definition comes later. (Here the __future__ import doesn't help us.) 
Something like this works: from __future__ import annotations from typing import TypeVar P = TypeVar("P", bound="Person") class Person: number: int def __init__(self) -> None: self.number = 0 def __add__(self: P, other: Person) -> P: self.number = self.number + other.number return self class Dan(Person): def __init__(self) -> None: super().__init__() def __add__(self: P, other: Person) -> P: self.number = self.number + other.number + 1 return self Now the return type will depend on what exact subtype of Person the called method is bound to. This means that even without an additional override an instance of any subclass P of Person calling __add__ would return that same type P. The other parameter could of course also be annotated with P, but that would result in P collapsing the the next common ancestor, if self and other are different, which in practice would mean that you could only add a subtype of Dan to an instance of Dan. That would be a much stronger requirement. I don't know, if you want that. Option C: Protocol for maximum flexibility Lastly, to show a completely different approach, you could leverage structural subtyping (introduced by PEP 544) by declaring that the only relevant aspect about other is its number and that it can be added to an int (for example because it is also an int). You could define a Protocol that stands for any type that has such an attribute. This would be a much broader definition for your method: from typing import Protocol, TypeVar P = TypeVar("P", bound="Person") class HasNumber(Protocol): number: int class Person: number: int def __init__(self) -> None: self.number = 0 def __add__(self: P, other: HasNumber) -> P: self.number = self.number + other.number return self class Dan(Person): def __init__(self) -> None: super().__init__() def __add__(self: P, other: HasNumber) -> P: self.number = self.number + other.number + 1 return self Note that this means that some completely unrelated class (in terms of nominal inheritance) that has the number attribute of the type int can be added to an instance of Person. From the way your example is written, I don't see any problem with that and it is arguably more in line with the dynamic philosophy of Python. Option D: Self for Python 3.11+ Similarly to what SUTerliakov wrote in his answer, you could simply indicate that __add__ returns whatever type the class is. Combining this with the Protocol from Option C would seem to be a nice combination of simplicity (no need for a type variable and no need to annotate lowercase self) and flexibility ("if it quacks like a duck..."): from typing import Protocol, Self class HasNumber(Protocol): number: int class Person: number: int def __init__(self) -> None: self.number = 0 def __add__(self, other: HasNumber) -> Self: self.number = self.number + other.number return self class Dan(Person): def __init__(self) -> None: super().__init__() def __add__(self, other: HasNumber) -> Self: self.number = self.number + other.number + 1 return self Mix and match as you wish. As you can see, there are many options. | 3 | 4 |
75,211,830 | 2023-1-23 | https://stackoverflow.com/questions/75211830/how-to-install-python-modules-where-pipy-org-is-is-not-accessible-from-iran | so the problem is that pypi.org hase been filtered by iranian government(yes , i know it's ridiculous!). i tried to install some python modules from Github downloaded files: pip install moduleName but every module has it's own dependencies and try to connect to pipy.org to reach them. then there will be an error during installation. is there any solution? your help will be much appreciated. | I live in a country that also blocks services, mostly streaming platforms. In theory, the way behind this is the same whether to watch Netflix or download python and its dependencies. That is you'll probably need to use a VPN. As said by d-dutchveiws, there's tons of videos and resources on how to set up a VPN. If you do end up using a paid VPN service I would just like to add that I lived in the UAE for a while and I found that some VPN services were blocked by the country themselves. I know NordVPN did not work/was blocked by the UAE so I ended up finding expressVPN and that worked. In other words, I'd be sure not to commit to any payment plan/only use free trials because even the VPN services can be blocked. Hope I helped a bit! | 7 | 0 |
75,200,316 | 2023-1-22 | https://stackoverflow.com/questions/75200316/modulenotfounderror-no-module-named-nbformat | I would like to run python in a Quarto document. I followed the docs about installing and using python in Quarto, but the error stays. Here is some reproducible code: --- title: "matplotlib demo" format: html: code-fold: true jupyter: python3 --- For a demonstration of a line plot on a polar axis, see @fig-polar. ```{python} #| label: fig-polar #| fig-cap: "A line plot on a polar axis" import numpy as np import matplotlib.pyplot as plt r = np.arange(0, 2, 0.01) theta = 2 * np.pi * r fig, ax = plt.subplots( subplot_kw = {'projection': 'polar'} ) ax.plot(theta, r) ax.set_rticks([0.5, 1, 1.5, 2]) ax.grid(True) plt.show() ``` Error output: Starting python3 kernel...Traceback (most recent call last): File "/Applications/RStudio.app/Contents/Resources/app/quarto/share/jupyter/jupyter.py", line 21, in <module> from notebook import notebook_execute, RestartKernel File "/Applications/RStudio.app/Contents/Resources/app/quarto/share/jupyter/notebook.py", line 16, in <module> import nbformat ModuleNotFoundError: No module named 'nbformat' I also checked with Quarto if Jupyter is installed in the terminal like this: quarto check jupyter Output: [β] Checking Python 3 installation....OK Version: 3.7.11 (Conda) Path: /Users/quinten/Library/r-miniconda/envs/r-reticulate/bin/python Jupyter: 4.12.0 Kernels: julia-1.8, python3 [β] Checking Jupyter engine render....OK Which seems to be OK. So I was wondering if anyone knows how to fix this error? Edit: output conda info --envs Output of conda info: # conda environments: # /Users/quinten/.julia/conda/3 /Users/quinten/Library/r-miniconda /Users/quinten/Library/r-miniconda/envs/r-reticulate /Users/quinten/Library/rminiconda/general /Users/quinten/opt/anaconda3 base * /Users/quinten/opt/miniconda3 Edit: conda install Jupyter The condo install Jupyter was installed (thanks to @shafee), now when I check with quarto if Jupyter exists, I get the following error: quarto check jupyter [β] Checking Python 3 installation....OK Version: 3.7.11 (Conda) Path: /Users/quinten/Library/r-miniconda/envs/r-reticulate/bin/python Jupyter: 4.11.1 Kernels: julia-1.8, python3 (/) Checking Jupyter engine render....Unable to load extension: pydevd_plugins.extensions.types.pydevd_plugin_pandas_types Unable to load extension: pydevd_plugins.extensions.types.pydevd_plugin_pandas_types [β] Checking Jupyter engine render....OK | This error usually occurs when jupyter is not installed in the environment that is being used and from the output of quarto checks, its seems that Quarto is referring to the r-reticulate environment and possibly jupyter is not installed in that environment. If thats the case, you simply need to install jupyter in the r-reticulate environment. Option 01 One option could be to activate the r-reticulate environment and then install the jupyter there. So first, activate the environment. conda activate r-reticulate and then install jupyter there, conda install jupyter Option 02 If conda could not find that environment, use the specific path to that environment to install jupyter in that environment, /Users/quinten/Library/r-miniconda/envs/r-reticulate/bin/python -m pip install jupyter Option 03 If the r-reticulate environment is created by the {reticulate} R package, then an easy option could be using reticulate::conda_install to install a package. 
reticulate::conda_install(envname = "r-reticulate", "jupyter") Also, if you are wondering why the jupyter command runs even though it did not exist in the r-reticulate environment, it's because you probably installed jupyter in the base environment and Quarto is detecting that jupyter installation while checking. | 3 | 3 |
75,167,317 | 2023-1-19 | https://stackoverflow.com/questions/75167317/make-pydantic-basemodel-fields-optional-including-sub-models-for-patch | As already asked in similar questions, I want to support PATCH operations for a FastApi application where the caller can specify as many or as few fields as they like, of a Pydantic BaseModel with sub-models, so that efficient PATCH operations can be performed, without the caller having to supply an entire valid model just in order to update two or three of the fields. I've discovered there are 2 steps in Pydantic PATCH from the tutorial that don't support sub-models. However, Pydantic is far too good for me to criticise it for something that it seems can be built using the tools that Pydantic provides. This question is to request implementation of those 2 things while also supporting sub-models: generate a new DRY BaseModel with all fields optional implement deep copy with update of BaseModel These problems are already recognised by Pydantic. There is discussion of a class based solution to the optional model And there two issues open on the deep copy with update A similar question has been asked one or two times here on SO and there are some great answers with different approaches to generating an all-fields optional version of the nested BaseModel. After considering them all this particular answer by Ziur Olpa seemed to me to be the best, providing a function that takes the existing model with optional and mandatory fields, and returning a new model with all fields optional: https://stackoverflow.com/a/72365032 The beauty of this approach is that you can hide the (actually quite compact) little function in a library and just use it as a dependency so that it appears in-line in the path operation function and there's no other code or boilerplate. But the implementation provided in the previous answer did not take the step of dealing with sub-objects in the BaseModel being patched. This question therefore requests an improved implementation of the all-fields-optional function that also deals with sub-objects, as well as a deep copy with update. I have a simple example as a demonstration of this use-case, which although aiming to be simple for demonstration purposes, also includes a number of fields to more closely reflect the real world examples we see. 
Hopefully this example provides a test scenario for implementations, saving work: import logging from datetime import datetime, date from collections import defaultdict from pydantic import BaseModel from fastapi import FastAPI, HTTPException, status, Depends from fastapi.encoders import jsonable_encoder app = FastAPI(title="PATCH demo") logging.basicConfig(level=logging.DEBUG) class Collection: collection = defaultdict(dict) def __init__(self, this, that): logging.debug("-".join((this, that))) self.this = this self.that = that def get_document(self): document = self.collection[self.this].get(self.that) if not document: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="Not Found", ) logging.debug(document) return document def save_document(self, document): logging.debug(document) self.collection[self.this][self.that] = document return document class SubOne(BaseModel): original: date verified: str = "" source: str = "" incurred: str = "" reason: str = "" attachments: list[str] = [] class SubTwo(BaseModel): this: str that: str amount: float plan_code: str = "" plan_name: str = "" plan_type: str = "" meta_a: str = "" meta_b: str = "" meta_c: str = "" class Document(BaseModel): this: str that: str created: datetime updated: datetime sub_one: SubOne sub_two: SubTwo the_code: str = "" the_status: str = "" the_type: str = "" phase: str = "" process: str = "" option: str = "" @app.get("/endpoint/{this}/{that}", response_model=Document) async def get_submission(this: str, that: str) -> Document: collection = Collection(this=this, that=that) return collection.get_document() @app.put("/endpoint/{this}/{that}", response_model=Document) async def put_submission(this: str, that: str, document: Document) -> Document: collection = Collection(this=this, that=that) return collection.save_document(jsonable_encoder(document)) @app.patch("/endpoint/{this}/{that}", response_model=Document) async def patch_submission( document: Document, # document: optional(Document), # <<< IMPLEMENT optional <<< this: str, that: str, ) -> Document: collection = Collection(this=this, that=that) existing = collection.get_document() existing = Document(**existing) update = document.dict(exclude_unset=True) updated = existing.copy(update=update, deep=True) # <<< FIX THIS <<< updated = jsonable_encoder(updated) collection.save_document(updated) return updated This example is a working FastAPI application, following the tutorial, and can be run with uvicorn example:app --reload. Except it doesn't work, because there's no all-optional fields model, and Pydantic's deep copy with update actually overwrites sub-models rather than updating them. In order to test it the following Bash script can be used to run curl requests. Again I'm supplying this just to hopefully make it easier to get started with this question. Just comment out the other commands each time you run it so that the command you want is used. To demonstrate this initial state of the example app working you would run GET (expect 404), PUT (document stored), GET (expect 200 and same document returned), PATCH (expect 200), GET (expect 200 and updated document returned). 
host='http://127.0.0.1:8000' path="/endpoint/A123/B456" method='PUT' data=' { "this":"A123", "that":"B456", "created":"2022-12-01T01:02:03.456", "updated":"2023-01-01T01:02:03.456", "sub_one":{"original":"2022-12-12","verified":"Y"}, "sub_two":{"this":"A123","that":"B456","amount":0.88,"plan_code":"HELLO"}, "the_code":"BYE"} ' # method='PATCH' # data='{"this":"A123","that":"B456","created":"2022-12-01T01:02:03.456","updated":"2023-01-02T03:04:05.678","sub_one":{"original":"2022-12-12","verified":"N"},"sub_two":{"this":"A123","that":"B456","amount":123.456}}' method='GET' data='' if [[ -n data ]]; then data=" --data '$data'"; fi curl="curl -K curlrc -X $method '$host$path' $data" echo $curl >&2 eval $curl This curlrc will need to be co-located to ensure the content type headers are correct: --cookie "_cookies" --cookie-jar "_cookies" --header "Content-Type: application/json" --header "Accept: application/json" --header "Accept-Encoding: compress, gzip" --header "Cache-Control: no-cache" So what I'm looking for is the implementation of optional that is commented out in the code, and a fix for existing.copy with the update parameter, that will enable this example to be used with PATCH calls that omit otherwise mandatory fields. The implementation does not have to conform precisely to the commented out line, I just provided that based on Ziur Olpa's previous answer. | When I first posed this question I thought that the only problem was how to turn all fields Optional in a nested BaseModel, but actually that was not difficult to fix. The real problem with partial updates when implementing a PATCH call is that the Pydantic BaseModel.copy method doesn't attempt to support nested models when applying it's update parameter. That's quite an involved task for the generic case, considering you may have fields that are dicts, lists, or sets of another BaseModel, just for instance. Instead it just unpacks the dict using **: https://github.com/pydantic/pydantic/blob/main/pydantic/main.py#L353 I haven't got a proper implementation of that for Pydantic, but since I've got a working example PATCH by cheating, I'm going to post this as an answer and see if anyone can fault it or provide better, possibly even with an implementation of BaseModel.copy that supports updates for nested models. Rather than post the implementations separately I am going to update the example given in the question so that it has a working PATCH and being a full demonstration of PATCH hopefully this will help others more. The two additions are partial and merge. partial is what's referred to as optional in the question code. partial: This is a function that takes any BaseModel and returns a new BaseModel with all fields Optional, including sub-object fields. That's enough for Pydantic to allow through any sub-set of fields without throwing an error for "missing fields". It's recursive - not really popular - but given these are nested data models the depth is not expected to exceed single digits. merge: The BaseModel update on copy method operates on an instance of BaseModel - but supporting all the possible type variations when descending through a nested model is the hard part - and the database data, and the incoming update, are easily available as plain Python dicts; so this is the cheat: merge is an implementation of a nested dict update instead, and since the dict data has already been validated at one point or other, it should be fine. 
Here's the full example solution: import logging from typing import Optional, Type from datetime import datetime, date from functools import lru_cache from pydantic import BaseModel, create_model from collections import defaultdict from pydantic import BaseModel from fastapi import FastAPI, HTTPException, status, Depends, Body from fastapi.encoders import jsonable_encoder app = FastAPI(title="Nested model PATCH demo") logging.basicConfig(level=logging.DEBUG) class Collection: collection = defaultdict(dict) def __init__(self, this, that): logging.debug("-".join((this, that))) self.this = this self.that = that def get_document(self): document = self.collection[self.this].get(self.that) if not document: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="Not Found", ) logging.debug(document) return document def save_document(self, document): logging.debug(document) self.collection[self.this][self.that] = document return document class SubOne(BaseModel): original: date verified: str = "" source: str = "" incurred: str = "" reason: str = "" attachments: list[str] = [] class SubTwo(BaseModel): this: str that: str amount: float plan_code: str = "" plan_name: str = "" plan_type: str = "" meta_a: str = "" meta_b: str = "" meta_c: str = "" class SubThree(BaseModel): one: str = "" two: str = "" class Document(BaseModel): this: str that: str created: datetime updated: datetime sub_one: SubOne sub_two: SubTwo # sub_three: dict[str, SubThree] = {} # Hah hah not really the_code: str = "" the_status: str = "" the_type: str = "" phase: str = "" process: str = "" option: str = "" @lru_cache def partial(baseclass: Type[BaseModel]) -> Type[BaseModel]: """Make all fields in supplied Pydantic BaseModel Optional, for use in PATCH calls. Iterate over fields of baseclass, descend into sub-classes, convert fields to Optional and return new model. Cache newly created model with lru_cache to ensure it's only created once. Use with Body to generate the partial model on the fly, in the PATCH path operation function. - https://stackoverflow.com/questions/75167317/make-pydantic-basemodel-fields-optional-including-sub-models-for-patch - https://stackoverflow.com/questions/67699451/make-every-fields-as-optional-with-pydantic - https://github.com/pydantic/pydantic/discussions/3089 - https://fastapi.tiangolo.com/tutorial/body-updates/#partial-updates-with-patch """ fields = {} for name, field in baseclass.__fields__.items(): type_ = field.type_ if type_.__base__ is BaseModel: fields[name] = (Optional[partial(type_)], {}) else: fields[name] = (Optional[type_], None) if field.required else (type_, field.default) # https://docs.pydantic.dev/usage/models/#dynamic-model-creation validators = {"__validators__": baseclass.__validators__} return create_model(baseclass.__name__ + "Partial", **fields, __validators__=validators) def merge(original, update): """Update original nested dict with values from update retaining original values that are missing in update. 
- https://github.com/pydantic/pydantic/issues/3785 - https://github.com/pydantic/pydantic/issues/4177 - https://docs.pydantic.dev/usage/exporting_models/#modelcopy - https://github.com/pydantic/pydantic/blob/main/pydantic/main.py#L353 """ for key in update: if key in original: if isinstance(original[key], dict) and isinstance(update[key], dict): merge(original[key], update[key]) elif isinstance(original[key], list) and isinstance(update[key], list): original[key].extend(update[key]) else: original[key] = update[key] else: original[key] = update[key] return original @app.get("/endpoint/{this}/{that}", response_model=Document) async def get_submission(this: str, that: str) -> Document: collection = Collection(this=this, that=that) return collection.get_document() @app.put("/endpoint/{this}/{that}", response_model=Document) async def put_submission(this: str, that: str, document: Document) -> Document: collection = Collection(this=this, that=that) return collection.save_document(jsonable_encoder(document)) @app.patch("/endpoint/{this}/{that}", response_model=Document) async def patch_submission( this: str, that: str, document: partial(Document), # <<< IMPLEMENTED partial TO MAKE ALL FIELDS Optional <<< ) -> Document: collection = Collection(this=this, that=that) existing_document = collection.get_document() incoming_document = document.dict(exclude_unset=True) # VVV IMPLEMENTED merge INSTEAD OF USING BROKEN PYDANTIC copy WITH update VVV updated_document = jsonable_encoder(merge(existing_document, incoming_document)) collection.save_document(updated_document) return updated_document | 4 | 5 |
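Not part of the original post: a minimal, standalone check of the merge helper described above, showing that keys missing from an incoming PATCH body are retained. The sample dicts are illustrative only.

```python
def merge(original, update):
    """Recursively update `original` with `update`, keeping keys that `update` omits."""
    for key in update:
        if key in original and isinstance(original[key], dict) and isinstance(update[key], dict):
            merge(original[key], update[key])
        elif key in original and isinstance(original[key], list) and isinstance(update[key], list):
            original[key].extend(update[key])
        else:
            original[key] = update[key]
    return original

existing = {"the_code": "BYE", "sub_one": {"original": "2022-12-12", "verified": "Y"}}
incoming = {"sub_one": {"verified": "N"}}  # a partial PATCH body

print(merge(existing, incoming))
# {'the_code': 'BYE', 'sub_one': {'original': '2022-12-12', 'verified': 'N'}}
```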
75,204,560 | 2023-1-22 | https://stackoverflow.com/questions/75204560/consuming-taskgroup-response | In Python3.11 it's suggested to use TaskGroup for spawning Tasks rather than using gather. Given Gather will also return the result of a co-routine, what's the best approach with TaskGroup. Currently I have async with TaskGroup() as tg: r1 = tg.create_task(foo()) r2 = tg.create_task(bar()) res = [r1.result(), r2.result()] Is there a more concise approach that can be used to achieve the same result? | The task groups were implemented more as a cleaner way to handle task lifetimes and exception handling, enabled greatly by the new exceptions group rework. I think its a popular misconception right now, but the TaskGroup is not a drop in replacement for all of the gather use-cases. For cases where you do not care about the results (which seems to be the only example I am seeing in new documentation and tutorials) it feels much more terse. When the task group has completed, it's still required by the user to pull results out of the completed coruntines. If you need values immediately then you can write it as you have it as res = [r1.result(), r2.result()] after the TaskGroup completes. A more terse syntax might be to gather the results after the TaskGroup completes with res = await asyncio.gather(r1, r2) (this will release the execution of your function which may or may not be desirable). This may look redundant to use both TaskGroup and gather, but the TaskGroup is solving a different purpose than what is provided by gather alone, being that it allows for waiting for your tasks with strong safety guarantees, logic around cancellation for failures, and grouping of exceptions. It might be possible to extend the default TaskGroup class to make this pattern easier. Here's one such idea that can keep track of which tasks were issued in the task group and provides a helper to fish out the results: class GatheringTaskGroup(asyncio.TaskGroup): def __init__(self): super().__init__() self.__tasks = [] def create_task(self, coro, *, name=None, context=None): task = super().create_task(coro, name=name, context=context) self.__tasks.append(task) return task def results(self): return [task.result() for task in self.__tasks] async def foo(): return 1 async def bar(): return 2 async with GatheringTaskGroup() as tg: task1 = tg.create_task(foo()) task2 = tg.create_task(bar()) print(tg.results()) [1, 2] | 13 | 19 |
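Not from the original answer: the same result can also be had without subclassing, by keeping the list of tasks yourself — a minimal Python 3.11 sketch of the pattern discussed above.

```python
import asyncio

async def foo():
    return 1

async def bar():
    return 2

async def main():
    async with asyncio.TaskGroup() as tg:
        tasks = [tg.create_task(coro) for coro in (foo(), bar())]
    # The group has finished here, so every task is done and .result() is safe.
    return [t.result() for t in tasks]

print(asyncio.run(main()))  # [1, 2]
```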
75,194,858 | 2023-1-21 | https://stackoverflow.com/questions/75194858/best-way-to-feed-data-into-multiprocess | I have some ambiguity in me. I'm novice. I have reads that Multiprocessing are making local copies of Global Variable on each Process. However, this only applies on Windows, since it creates new instance of Python. Meanwhile, in Linux, Child Process are forked into Parent. Now, I have Global Variable that contains states of User-Choices. Taking a similar approach to switch statement using dictionary in Python. case = { 'lead_type': 0, 'deep': 0 } All this time, I store them first in multiprocessing.Manager() with the idea: it will help to avoid local copy inside each Process. But, Manager() create a new Proxy to share the data across Processes, which considered slow. Thus, multiprocessing.Value() is faster because it creates a pointer for that case object. Knowing all of that, should I stick with the use of Manager() or global? Or can I put dictionary object into Value() somehow? Is using Value() considered a good practice? EDIT: import re import csv import json import socket import multiprocessing from collections import defaultdict from multiprocessing import Manager, Process, cpu_count, Queue #This is the Target Dictionary case = { 'lead_type': 0, 'file_type': 0, 'deep': 0 } def websocket_tls(proc, cases): ... def consumer(procs, cases): while True: proc = procs.get() if proc is None: break if cases['lead_type'] == 1: if cases['deep'] == 1: websocket_tls(proc, cases) elif cases['deep'] == 2: websocket_insecure(proc, cases) elif cases['deep'] == 3: gorilla_tls(proc, cases) else: gorilla(proc, cases) else: break def producer(procs, cases): plist = [] for i in range(cpu_count()): procs.put(None) p = Process(target = consumer, args = (procs, cases)) p.start() total.append(p) for p in plist: p.join() # Another function also use Case def processor(file): procs = Queue(cpu_count()*10) if case['deep'] == 0: with open('payloads.json') as f: payloads = json.loads(f) #This is my ambiguity. #Should I keep case global or readded it to Manager to avoid copies? : cases = Manager().dict() for i, v in case.items(): cases[i] = v cases['payload'] = payloads columns = defaultdict(list) if case['file_type'] == 0: with open(file, 'r') as f: for line in f: liner = [line] + list(islice(f, cpu_count())) for i in liner: procs.put(i.strip()) producer(procs, cases) elif case['file_type'] == 1: with open(file, 'r') as f: reader = csv.reader(f) for rows in reader: for (i,v) in enumerate(row): columns[i].append(v) procs.put(columns[9] + columns[3]) producer(procs, cases) else: f = requests.get(file).text f = re.findall('(.*?),', f) for line in f: liner = [line] + list(islice(f, cpu_count())) for i in liner: procs.put(i.strip()) producer(procs, cases) print('Result: ' + str(cases['Result'])) print('Scrape: ' + str(cases['Scrape'])) return # Craft case identifying the correct file_type def choose_file(): print('1. Use .txt file') print('2. Use .csv file') print('3. 
Use Online Repository') answer = input('Choose: ') print() if answer == '1': case['file_type'] = 0 elif answer == '2': case['file_type'] = 1 else: case['file_type'] = 2 file = input('Choose Location: ') return file # Craft case to appropiate function with it's own deep def menu(): print('1) Test Alive Subdomain') print('2) Test Alive Proxy') print('3) Test Alive SNI') answer = input('Choose: ') print('') if asnwer == '1': print('1) Websocket Secure') print('2) Websocket Insecure') print('3) Websocket Secure with Gorilla') print('4) Websocket Insecure with Gorilla') answer = input('Choose: ') print('') if asnwer == '1': case['lead_type'] = 0 case['deep'] = 0 elif answer == '2': case['lead_type'] = 1 case['deep'] = 0 elif asnwer == '3': case['lead_type'] = 0 case['deep'] = 1 else: case['lead_type'] = 1 case['deep'] = 1 file = choose_file() processor(file) exit() menu() Added example to make it more clear @Booboo. As you can see, there's many function that accesses and modify the case, there's more actually. So, I can't make the case local, it stay global. The problem remains, should it keep global or need to be passed to Manager().dict() or even multiprocessing.Value() to avoid copies of case. Also, good to mention that the case has more suplement into it such as payloads on processor(). Note that there's more data to feed, it can be more bigger, depends on the user choices in menu(). After data feeding is completed, it's time for multiprocessing to determine result. Analogy: Most Functions -> Modify / Craft case -> Final case -> Manager().dict() / Stay global / multiprocessing.Value -> Producer & Consumer. case stays on Read-Only. | Update Based on your updated code/description where the dictionary has no keys added or deleted but integer values of existing keys might be changed by a process and the change reflected to other processes using the dictionary, then there are two approaches: Use a managed dictionary (e.g. multiprocessing.Manager().dict()). As you have noted, accessing or modifying key values are somewhat expensive since it results in making a remote method call to the actual dictionary that resides in a process created by the SyncManager instance. But multiprocessing is only advantageous if your worker function(s) are sufficiently CPU-intensive such that what is gained by running these functions in parallel more than offsets what is lost due to the additional overhead incurred by multiprocessing (e.e. process creation and inter-process communication). So you need to ask how much of your total processing is represented by the accessing of the dictionary values. If it is a significant portion then this is not the best approach but perhaps also your processes are not sufficiently CPU-intensive to warrant a multiprocessing approach. Use multiprocessing.Value instances as the values for the dictionary. Each process will have its own copy of the dictionary but the values stored are sharable because they reside in shared memory. These instances can be created with or without a lock. Updates under control of a lock to force updates to be "one-at-a-time" would be required if the updating logic is not atomic. For example, if you decide that a new value for key 'x' needs to be replaced and this new value is not based on the value of something that is being shared (which might get updated while you are executing your update logic), then you just need a statement such as d['x'].value = new_value. 
However, an update such as d[x].value += 1 needs to be done under control of a lock: with d['x'].get_lock(): d['x'].value += 1. In the code below, which is a modified version of my third coding example that I originally posted (since this version is closes to the approach you are taking, i.e. using multiprocessing.Process instances with a multiprocssing.Queue), I am following the second approach outlined above and using a regular dictionary with multiprocessing.Value instances as the values. I have added code just to show how an update made by one process is visible to the other processes. Normally, the way I am updating a value, i.e. first reading the current value and then replacing the value with a new value computed from the current value, would require that these operations be done under control of a lock to ensure that no other process can be modifying this key's value while this non-atomic logic is being performed. I am, however, not doing this under control of a lock just because I don't believe that the way you are updating values requires a lock and I didn't want to imply otherwise. from multiprocessing import Process, cpu_count, Queue, Value, current_process def user_choices(): props = {'deep': Value('i', 0, lock=False)} lead_type = input('Put Leader type: ') props['lead_type'] = Value('i', int(lead_type), lock=False) return props def processing(): props = user_choices() processors = [ 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah' ] queue = Queue() n_processors = cpu_count() for p in processors: queue.put(p) for _ in range(n_processors): # Use None as the sentinel so it cannot be mistaken for actual data: queue.put(None) procs = [Process(target=processes, args=(queue, props)) for _ in range(n_processors)] for p in procs: p.start() for p in procs: p.join() def processes(queue, props): import time while True: process = queue.get() if process is None: # Sentinel break v = props['lead_type'] if v.value == 1: new_value = 2 else: new_value = 1 v.value = new_value print(f"Process {current_process().pid} is setting props['lead_type'] to {new_value}: {process}") time.sleep(.1) # Give other processes a chance to rum # For platforms that use *spawn*, such as windows: if __name__ == '__main__': processing() Prints: Put Leader type: 1 Process 8764 is setting props['lead_type'] to 2: blah Process 23740 is setting props['lead_type'] to 1: blah Process 15276 is setting props['lead_type'] to 2: blah Process 8984 is setting props['lead_type'] to 1: blah Process 18300 is setting props['lead_type'] to 2: blah Process 10740 is setting props['lead_type'] to 1: blah Process 17796 is setting props['lead_type'] to 2: blah Process 7880 is setting props['lead_type'] to 1: blah Original Posting A few observations: Your current code will result in all of your children processes except for one never terminating: You need to put N instances of 'ends' to the queue where N is the number of child processes you are creating so that each process knows that there is no more data. Right now you have only put a single instance of 'ends' on the queue so only one child process can get it and the others will wait indefinitely doing a blocking get against an empty queue. You can greatly simplify the code and use a standard dictionary by using a multiprocessing pool, e.g. a militprocessing.Pool. The idea is to have the main process initialize the props dictionary and then initialize within each pool process' address space a global variable pool initialized appropriately. 
Alternatively, you could pass the props dictionary as an an argument to processes. Of course, in your current code you could have also passed it as additional argument. Multiprocessing will perform worse unless function processes is sufficiently CPU-intensive. from multiprocessing import Pool def init_pool_processes(the_props): """ Initialize global variable props for each pool process. """ global props props = the_props def user_choices(): props = {'deep': 0} lead_type = input('Put Leader type: ') props['lead_type'] = int(lead_type) return props def processing(): props = user_choices() processors = [ 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah' ] with Pool(initializer=init_pool_processes, initargs=(props,)) as pool: pool.map(processes, processors) def processes(process): print(f"Lead {props['lead_type']}: {process}") # For platforms that use *spawn*, such as windows: if __name__ == '__main__': processing() If you want to use individual multiprocessing.Process instances then a simpler approach would be to use a multiprocessing.JoinableQueue with daemon child processes. from multiprocessing import Process, cpu_count, JoinableQueue def user_choices(): props = {'deep': 0} lead_type = input('Put Leader type: ') props['lead_type'] = int(lead_type) return props def processing(): props = user_choices() processors = [ 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah' ] queue = JoinableQueue() for p in processors: queue.put(p) for _ in range(cpu_count()): # A daemon process will terminate automatically when the main # process terminates: Process(target=processes, args=(queue, props), daemon=True).start() # Wait for all queue items to have been processed: queue.join() def processes(queue, props): while True: process = queue.get() print(f"Lead {props['lead_type']}: {process}") queue.task_done() # Show we have finished processing the item # For platforms that use *spawn*, such as windows: if __name__ == '__main__': processing() If you use a multiprocessing.Queue instance, then you need to put N sentinel items (where N is the number of child processes) on the queue that signify there is no more data to be gotten from the queue. I think that using None is a better choice of a sentinel value than 'ends' as it could not possibly be mistaken for actual data: from multiprocessing import Process, cpu_count, Queue def user_choices(): props = {'deep': 0} lead_type = input('Put Leader type: ') props['lead_type'] = int(lead_type) return props def processing(): props = user_choices() processors = [ 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah', 'blah' ] queue = Queue() n_processors = cpu_count() for p in processors: queue.put(p) for _ in range(n_processors): # Use None as the sentinel so it cannot be mistaken for actual data: queue.put(None) procs = [Process(target=processes, args=(queue, props)) for _ in range(n_processors)] for p in procs: p.start() for p in procs: p.join() def processes(queue, props): while True: process = queue.get() if process is None: # Sentinel break print(f"Lead {props['lead_type']}: {process}") # For platforms that use *spawn*, such as windows: if __name__ == '__main__': processing() All three code examples above result in the following output: Put Leader type: 1 Lead 1: blah Lead 1: blah Lead 1: blah Lead 1: blah Lead 1: blah Lead 1: blah Lead 1: blah Lead 1: blah | 3 | 2 |
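A small, self-contained illustration of the second approach from the answer above (a plain dict whose values are multiprocessing.Value objects), including the locked read-modify-write case it mentions. The counter example is illustrative, not taken from the original post.

```python
from multiprocessing import Process, Value

def bump(shared):
    for _ in range(1000):
        # The read-modify-write is not atomic, so take the Value's lock.
        with shared["x"].get_lock():
            shared["x"].value += 1

if __name__ == "__main__":
    shared = {"x": Value("i", 0)}  # lock=True is the default
    workers = [Process(target=bump, args=(shared,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(shared["x"].value)  # 4000
```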
75,161,513 | 2023-1-18 | https://stackoverflow.com/questions/75161513/cant-pause-python-process-using-debug | I have a python script which starts multiple sub processes using these lines : for elm in elements: t = multiprocessing.Process(target=sub_process,args=[elm]) threads.append(t) t.start() for t in threads: t.join() Sometimes, for some reason the thread halts and the script never finishes. I'm trying to use VSCode debugger to find the problem and check where in the thread itself it stuck but I'm having issues pausing these sub processes because when I click the pause in the debugger window: It will pause the main thread and some other threads that are running properly but it won't pause the stuck sub process. Even when I try to pause the threads manually one by one using the Call Stack window, I can still pause only the working threads and not the stuck one. Please help me figure this thing, It's a hard thing because the thing that makes the process stuck doesn't always happen so it makes it very hard to debug. | First, those are subprocesses, not threads. It's important to understand the difference, although it doesn't answer your question. Second, a pause (manual break) in the Python debugger will break in Python code. It won't break in the machine code below that executes the Python, or in the machine code below that performing the OS services the Python code is asking for. If you execute a pause, the pause will occur in the Python code above the machine code when (and if) the machine code returns to the Python interpreter loop. Given a complete example: import multiprocessing import time elements = ["one", "two", "three"] def sub_process(gs, elm): gs.acquire() print("sleep", elm) time.sleep(60) print("awake", elm); gs.release() def test(): gs = multiprocessing.Semaphore() subprocs = [] for elm in elements: p = multiprocessing.Process(target=sub_process,args=[gs, elm]) subprocs.append(p) p.start() for p in subprocs: p.join() if __name__ == '__main__': test() The first subprocess will grab the semaphore and sleep for a minute, and the second and third subprocesses will wait inside gs.acquire() until they can move forward. A pause will not break into the debugger until the subprocess returns from the acquire, because acquire is below the Python code. It sounds like you have an idea where the process is getting stuck, but you don't know why. You need to determine what questions you are trying to answer. For example: (Assuming) one of the processess is stuck in acquire. That means one of the other processess didn't release the semaphore. What code in which process is acquiring a semaphore and not releasing it? Looking at the semaphore object itself might tell you which subprocess is holding it, but this is a tangent: can you use the debugger to inspect the semaphore and determine who is holding it? For example, using a machine level debugger in windows, if these were threads and a critical section, it's possible to look at the critical section and see which thread is still holding it. I don't know if this could be done using processes and semaphores on your chosen platform. Which debuggers you have access to depend on the platform you're running on. In summary: You can't break the Python debugger in machine code You can run the Python interpreter in a machine code debugger, but this won't show you the Python code at all, which make life interesting. 
This can be helpful if you have an idea what you're looking for - for example, you might be able to tell that you're stuck waiting for a semaphore. Running a machine code debugger becomes more difficult when you're running sub-processes, because you need to know which sub-process you're interested in, and attach to that one. This becomes simpler if you're using a single process and multiple threads instead, since there's only one process to deal with. "You can't get there from here, you have to go someplace else first." You'll need to take a closer look at your code and figure out how to answer the questions you need to answer using other means. | 3 | 13 |
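One concrete "other means" for the situation in the question (not mentioned in the answer, so treat it as a separate suggestion): the standard-library faulthandler module can dump every thread's Python traceback on a signal, and this works even while a worker is blocked inside a C-level wait such as Semaphore.acquire(). POSIX only; the sleep below stands in for the real work.

```python
import faulthandler
import multiprocessing
import os
import signal
import time

def sub_process(elm):
    # After this call, `kill -USR1 <pid>` makes this worker print the Python
    # traceback of all of its threads to stderr, showing where it is stuck.
    faulthandler.register(signal.SIGUSR1)
    print(f"worker {os.getpid()} handling {elm}")
    time.sleep(600)  # stand-in for work that might hang

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=sub_process, args=[e]) for e in ("a", "b")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```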
75,202,407 | 2023-1-22 | https://stackoverflow.com/questions/75202407/creating-a-date-range-in-python-polars-with-the-last-days-of-the-months | How do I create a date range in Polars (Python API) with only the last days of the months? This is the code I have: pl.date_range(datetime(2022,5,5), datetime(2022,8,10), "1mo", name="dtrange") The result is: '2022-05-05', '2022-06-05', '2022-07-05', '2022-08-05' I would like to get: '2022-05-31', '2022-06-30', '2022-07-31' I know this is possible with Pandas with: pd.date_range(start=datetime(2022,5,5), end=datetime(2022,8,10), freq='M') | I think you'd need to create the range of days and filter: (pl.date_range(datetime(2022,5,5), datetime(2022,8,10), "1d", name="dtrange") .to_frame() .filter( pl.col("dtrange").dt.month() < (pl.col("dtrange") + pl.duration(days=1)).dt.month() ) ) shape: (3, 1) ┌─────────────────────┐ │ dtrange │ │ --- │ │ datetime[μs] │ ╞═════════════════════╡ │ 2022-05-31 00:00:00 │ ├─────────────────────┤ │ 2022-06-30 00:00:00 │ ├─────────────────────┤ │ 2022-07-31 00:00:00 │ └─────────────────────┘ | 3 | 3 |
75,202,610 | 2023-1-22 | https://stackoverflow.com/questions/75202610/typeerror-type-object-is-not-subscriptable-python | Whenever I try to type-hint a list of strings, such as tricks: list[str] = [] , I get TypeError: 'type' object is not subscriptable. I follow a course where they use the same code, but it works for them. So I guess the problem is one of the differences between my environment and the course's environment. I use: vs code anaconda python 3.8.15 jupyter notebook Can someone help me fix that? I used the same code in normal .py files and it still doesn't work, so that is probably not it. The Python version should also not be the problem as this is kind of basic. Anaconda should also not cause such error messages. That leaves the difference between VS Code and PyCharm, which is also strange. Therefore I don't know what to try. | You're on an old Python version. list[str] is only valid starting in Python 3.9. Before that, you need to use typing.List: from typing import List tricks: List[str] = [] If you're taking a course that uses features introduced in Python 3.9, you should probably get a Python version that's at least 3.9, though. | 31 | 61 |
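If upgrading Python is not an option and the hint is only needed for the type checker, another workaround on 3.7+ (not mentioned in the accepted answer) is postponed evaluation of annotations, which stops list[str] from being evaluated at runtime:

```python
from __future__ import annotations  # PEP 563: annotations are stored as strings

tricks: list[str] = []  # no longer raises TypeError on Python 3.8
```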
75,165,351 | 2023-1-18 | https://stackoverflow.com/questions/75165351/how-can-i-return-progress-hook-for-yt-dlp-using-fastapi-to-end-user | Relevant portion of my code looks something like this: @directory_router.get("/youtube-dl/{relative_path:path}", tags=["directory"]) def youtube_dl(relative_path, url, name=""): """ Download """ relative_path, _ = set_path(relative_path) logger.info(f"{DATA_PATH}{relative_path}") if name: name = f"{DATA_PATH}{relative_path}/{name}.%(ext)s" else: name = f"{DATA_PATH}{relative_path}/%(title)s.%(ext)s" ydl_opts = { "outtmpl": name, # "quiet": True "logger": logger, "progress_hooks": [yt_dlp_hook], # "force-overwrites": True } with yt.YoutubeDL(ydl_opts) as ydl: try: ydl.download([url]) except Exception as exp: logger.info(exp) return str(exp) I am using this webhook/end point to allow an angular app to accept url/name input and download file to folder. I am able to logger.info .. etc. output the values of the yt_dlp_hook, something like this: def yt_dlp_hook(download): """ download Hook Args: download (_type_): _description_ """ global TMP_KEYS if download.keys() != TMP_KEYS: logger.info(f'Status: {download["status"]}') logger.info(f'Dict Keys: {download.keys()}') TMP_KEYS = download.keys() logger.info(download) Is there a way to stream a string of relevant variables like ETA, download speed etc. etc. to the front end? Is there a better way to do this? | You could use a Queue object to communicate between the threads. So when you call youtube_dl pass in a Queue that you can add messages inside yt_dlp_hook (you'll need to use partial functions to construct it). You'll be best off using asyncio to run the download at the same time as updating the user something like: import asyncio from functools import partial import threading from youtube_dl import YoutubeDL from queue import LifoQueue, Empty def main(): # Set the url to download url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ" # Get the current event loop loop = asyncio.get_event_loop() # Create a Last In First Out Queue to communicate between the threads queue = LifoQueue() # Create the future which will be marked as done once the file is downloaded coros = [youtube_dl(url, queue)] future = asyncio.gather(*coros) # Start a new thread to run the loop_in_thread function (with the positional arguments passed to it) t = threading.Thread(target=loop_in_thread, args=[loop, future]) t.start() # While the future isn't finished yet continue while not future.done(): try: # Get the latest status update from the que and print it data = queue.get_nowait() print(data) except Empty as e: print("no status updates available") finally: # Sleep between checking for updates asyncio.run(asyncio.sleep(0.1)) def loop_in_thread(loop, future): loop.run_until_complete(future) async def youtube_dl(url, queue, name="temp.mp4"): """ Download """ yt_dlp_hook_partial = partial(yt_dlp_hook, queue) ydl_opts = { "outtmpl": name, "progress_hooks": [yt_dlp_hook_partial], } with YoutubeDL(ydl_opts) as ydl: return ydl.download([url]) def yt_dlp_hook(queue: LifoQueue, download): """ download Hook Args: download (_type_): _description_ """ # Instead of logging the data just add the latest data to the queue queue.put(download) if __name__ == "__main__": main() | 7 | 3 |
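Building on the queue idea in the answer above, one possible (untested) shape for actually pushing the progress dicts to a browser is a FastAPI WebSocket endpoint; the route path and the selection of progress keys are assumptions, not a required API of yt-dlp or FastAPI.

```python
import asyncio
import json

import yt_dlp
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws/progress")
async def ws_progress(ws: WebSocket):
    await ws.accept()
    url = await ws.receive_text()
    loop = asyncio.get_running_loop()
    updates: asyncio.Queue = asyncio.Queue()

    def hook(d):
        # Called from the download thread: hand a trimmed dict to the event loop.
        loop.call_soon_threadsafe(
            updates.put_nowait,
            {k: d.get(k) for k in ("status", "eta", "speed", "_percent_str")},
        )

    def download():
        with yt_dlp.YoutubeDL({"progress_hooks": [hook]}) as ydl:
            ydl.download([url])

    job = asyncio.create_task(asyncio.to_thread(download))
    while not job.done() or not updates.empty():
        try:
            await ws.send_text(json.dumps(await asyncio.wait_for(updates.get(), timeout=0.5)))
        except asyncio.TimeoutError:
            continue
    await ws.close()
```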
75,195,622 | 2023-1-21 | https://stackoverflow.com/questions/75195622/download-video-with-yt-dlp-using-format-id | How can I download a specific format without using options like "best video", using the format ID... example: 139, see the picture β― yt-dlp --list-formats https://www.youtube.com/watch?v=BaW_jenozKc [youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc [youtube] BaW_jenozKc: Downloading webpage [youtube] BaW_jenozKc: Downloading android player API JSON [info] Available formats for BaW_jenozKc: ID EXT RESOLUTION FPS CH β FILESIZE TBR PROTO β VCODEC VBR ACODEC ABR ASR MORE INFO βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ 139 m4a audio only 2 β 58.59KiB 48k https β audio only mp4a.40.5 48k 22k low, m4a_dash 249 webm audio only 2 β 58.17KiB 48k https β audio only opus 48k 48k low, webm_dash 250 webm audio only 2 β 76.07KiB 63k https β audio only opus 63k 48k low, webm_dash 140 m4a audio only 2 β 154.06KiB 128k https β audio only mp4a.40.2 128k 44k medium, m4a_dash 251 webm audio only 2 β 138.96KiB 116k https β audio only opus 116k 48k medium, webm_dash 17 3gp 176x144 12 1 β 55.79KiB 45k https β mp4v.20.3 45k mp4a.40.2 0k 22k 144p 160 mp4 256x144 15 β 135.08KiB 113k https β avc1.4d400c 113k video only 144p, mp4_dash 278 webm 256x144 30 β 52.22KiB 44k https β vp9 44k video only 144p, webm_dash 133 mp4 426x240 30 β 294.27KiB 246k https β avc1.4d4015 246k video only 240p, mp4_dash 242 webm 426x240 30 β 33.27KiB 28k https β vp9 28k video only 240p, webm_dash 134 mp4 640x360 30 β 349.59KiB 292k https β avc1.4d401e 292k video only 360p, mp4_dash 18 mp4 640x360 30 2 β ~525.60KiB 420k https β avc1.42001E 420k mp4a.40.2 0k 44k 360p 243 webm 640x360 30 β 75.55KiB 63k https β vp9 63k video only 360p, webm_dash 135 mp4 854x480 30 β 849.41KiB 710k https β avc1.4d401f 710k video only 480p, mp4_dash 244 webm 854x480 30 β 165.49KiB 138k https β vp9 138k video only 480p, webm_dash 22 mp4 1280x720 30 2 β ~ 1.82MiB 1493k https β avc1.64001F 1493k mp4a.40.2 0k 44k 720p 136 mp4 1280x720 30 β 1.60MiB 1366k https β avc1.4d401f 1366k video only 720p, mp4_dash 247 webm 1280x720 30 β 504.68KiB 420k https β vp9 420k video only 720p, webm_dash 137 mp4 1920x1080 30 β 2.11MiB 1803k https β avc1.640028 1803k video only 1080p, mp4_dash 248 webm 1920x1080 30 β 965.31KiB 804k https β vp9 804k video only 1080p, webm_dash I tried using the format url, but it didn't work | yt-dlp -f <format_id> <url> | 20 | 21 |
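For completeness (not part of the accepted answer): the same selection can be made from the Python API by passing the format id in the options dict, assuming the listing shown in the question.

```python
from yt_dlp import YoutubeDL

# "139" is the audio-only m4a format id from the listing; video-only and
# audio-only ids can also be combined, e.g. "137+140".
with YoutubeDL({"format": "139"}) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=BaW_jenozKc"])
```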
75,198,911 | 2023-1-22 | https://stackoverflow.com/questions/75198911/usage-of-nested-protocol-member-of-protocol-is-also-a-protocol | Consider a Python protocol attribute which is also annotated with a protocol. I found in that case, both mypy and Pyright report an error even when my custom datatype follows the nested protocol. For example in the code below Outer follows the HasHasA protocol in that it has hasa: HasA because Inner follows HasA protocol. from dataclasses import dataclass from typing import Protocol class HasA(Protocol): a: int class HasHasA(Protocol): hasa: HasA @dataclass class Inner: a: int @dataclass class Outer: hasa: Inner def func(b: HasHasA): ... o = Outer(Inner(0)) func(o) However, mypy shows the following error. nested_protocol.py:22: error: Argument 1 to "func" has incompatible type "Outer"; expected "HasHasA" [arg-type] nested_protocol.py:22: note: Following member(s) of "Outer" have conflicts: nested_protocol.py:22: note: hasa: expected "HasA", got "Inner" What's wrong with my code? | There's an issue on GitHub which is almost exactly the same as your example. I think the motivating case on the mypy docs explains quite well why this is illegal. Bringing a structural analogy to your example, let's fill in an implementation for func and tweak Inner slightly: def func(b: HasHasA) -> None: b.hasa.a += 100 - 100 @dataclass class Inner: a: bool o = Outer(Inner(bool(0))) func(o) if o.hasa.a is False: print("Oh no! This is still False!") else: print("This is true now!") This is of course a contrived example, but it shows that if the type-checker didn't warn you against this, the inner protocol can type-widen the inner type and perform value mutation, and you may silently perform type-unsafe operations. As suggested by the mypy documentation, the solution is to make the outer protocol's variable read-only: class HasHasA(Protocol): @property def hasa(self) -> HasA: ... | 7 | 6 |
75,198,932 | 2023-1-22 | https://stackoverflow.com/questions/75198932/how-to-call-an-async-function-from-the-main-thread-without-waiting-for-the-funct | I'm trying to call an async function containing an await function from the main thread without halting computation in the main thread. I've looked into similar questions employing a variety of solutions two of which I have demonstrated below, however, none of which seem to work with my current setup involving aioconsole and asyncio. Here is a simplified version of what I currently have implemented: import aioconsole import asyncio async def async_input(): line = await aioconsole.ainput(">>> ") print("You typed: " + line) if __name__ == "__main__": asyncio.run(async_input()) print("This text should print instantly. It should not wait for you to type something in the console.") while True: # Do stuff here pass I have also tried replacing asyncio.run(async_input()) with the code below but that hangs the program even after entering console input. loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) loop.create_task(async_input()) loop.run_forever() What's currently happening: Function is created β Function is called from the main thread β The program stops and awaits the completion of the async function. β The processing that should be performed in parallel with the async function is performed sequentially. β Output >>> What should happen: Function is created β Function is called from the main thread β The program continues while the async function runs in the background. β The processing following is performed in parallel with the async function. β Expected output >>> This text should print instantly. It should not wait for you to type something in the console. Python version: 3.10.8 Operating System: MacOS Ventura | As pointed out to me by @flakes there were a couple of problems with my approach: asyncio.run is a blocking function so wrapping my async function calls in it was preventing any following code from running. The main content needs to be wrapped in an asynchronous function to give the indented async function time to run. The intended async function runs while the main thread is waiting so downtime must be introduced in the main function. This can be easily done with await asyncio.sleep(x). The introduction of all this fixes the problem while allowing the main content to run synchronously: import aioconsole import asyncio async def async_input(): line = await aioconsole.ainput(">>> ") print("You typed: " + line) async def main(): input_task = asyncio.create_task(async_input()) print("This text should print instantly. It should not wait for you to type something in the console.") while True: # Do stuff here await asyncio.sleep(0.1) await input_task if __name__ == "__main__": asyncio.run(main()) | 3 | 3 |
75,198,303 | 2023-1-22 | https://stackoverflow.com/questions/75198303/splitting-based-on-condtions | Say I have df as follows: MyCol Red Motor Green Taxi Light blue small Taxi Light blue big Taxi I would like to split the color and the vehicle into two columns. I used this command to split the last word. But sometimes, there is a 'big' or 'small' associated with the car name. How can I do the splitting with conditions? df[['color','vehicle']] = df.myCol.str.rsplit(pat=' ', n=1, expand=True) | I think the best approach is to use extract with a regex pattern df['MyCol'].str.extract('^(.*?)\s((?:small|big)?\s?\w+)$') 0 1 0 Red Motor 1 Green Taxi 2 Light blue small Taxi 3 Light blue big Taxi Regex details: ^: Matches start of the string (.*?): first capturing group .*?: matches any character zero or more times but as few times as possible (lazy match) \s: Matches the space ((?:small|big)?\s?\w+): Second capturing group (?:small|big)? : matches small or big zero or one time \s?: matches space zero or one time \w+: matches word characters one or more times $: matches end of the string Series.str.extract is used here to extract two groups using a regular expression. The first group is before a whitespace and the second group is after the whitespace. The second group may contain the word "small" or "big", and the method returns a new DataFrame with two columns containing the extracted groups. | 3 | 2 |
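To get the two named columns the question asks for, the extracted groups can be assigned directly — a small, self-contained addition to the answer above:

```python
import pandas as pd

df = pd.DataFrame({'MyCol': ['Red Motor', 'Green Taxi',
                             'Light blue small Taxi', 'Light blue big Taxi']})
df[['color', 'vehicle']] = df['MyCol'].str.extract(r'^(.*?)\s((?:small|big)?\s?\w+)$')
print(df)
```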
75,164,313 | 2023-1-18 | https://stackoverflow.com/questions/75164313/selenium-in-google-colab-stopped-working-showing-an-error-as-service-chromedrive | Few hour ago my setup in google colab for selenium worked fine. Now it stopped working all of a sudden. This is a sample: !pip install selenium !apt-get update !apt install chromium-chromedriver from selenium import webdriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--headless') chrome_options.add_argument('--no-sandbox') chrome_options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome('chromedriver', chrome_options=chrome_options) I get the error: WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1 Any ideas on solving it? | This error message... WebDriverException: Message: Service chromedriver unexpectedly exited. Status code was: 1 ...implies that the chromedriver service unexpectedly exited. This is because of the of an issue induced as the colab system was updated from v18.04 to ubuntu v20.04 LTS recently. The main reason is, with Ubuntu v20.04 LTS google-colaboratory no longer distributes chromium-browser outside of a snap package. Quick Fix @mco-gh created a new notebook following @metrizable's guidance (details below) which is working perfect as of now: https://colab.research.google.com/drive/1cbEvuZOhkouYLda3RqiwtbM-o9hxGLyC Solution As a solution you can install a compatible version of chromium-browser from the Debian buster repository using the following code block published by @metrizable in the discussion Issues when trying to use Chromedriver in Colab %%shell # Ubuntu no longer distributes chromium-browser outside of snap # # Proposed solution: https://askubuntu.com/questions/1204571/how-to-install-chromium-without-snap # Add debian buster cat > /etc/apt/sources.list.d/debian.list <<'EOF' deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster.gpg] http://deb.debian.org/debian buster main deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster-updates.gpg] http://deb.debian.org/debian buster-updates main deb [arch=amd64 signed-by=/usr/share/keyrings/debian-security-buster.gpg] http://deb.debian.org/debian-security buster/updates main EOF # Add keys apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DCC9EFBF77E11517 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A apt-key export 77E11517 | gpg --dearmour -o /usr/share/keyrings/debian-buster.gpg apt-key export 22F3D138 | gpg --dearmour -o /usr/share/keyrings/debian-buster-updates.gpg apt-key export E562B32A | gpg --dearmour -o /usr/share/keyrings/debian-security-buster.gpg # Prefer debian repo for chromium* packages only # Note the double-blank lines between entries cat > /etc/apt/preferences.d/chromium.pref << 'EOF' Package: * Pin: release a=eoan Pin-Priority: 500 Package: * Pin: origin "deb.debian.org" Pin-Priority: 300 Package: chromium* Pin: origin "deb.debian.org" Pin-Priority: 700 EOF # Install chromium and chromium-driver apt-get update apt-get install chromium chromium-driver | 5 | 4 |
75,196,568 | 2023-1-21 | https://stackoverflow.com/questions/75196568/how-do-i-get-page-level-data-of-a-parquet-file-with-pyarrow | Given a ParquetFile object (docs) I am able to retrieve data at row group / column chunk level either with read_row_group or with the metadata attribute: from pyarrow import fs from pyarrow.parquet import ParquetFile s3 = fs.S3FileSystem(region='us-east-2') path = 'voltrondata-labs-datasets/nyc-taxi/year=2009/month=1/part-0.parquet' source = s3.open_input_file(path) parquet_file = ParquetFile(source) # row_group metadata parquet_file.metadata.row_group(0) <pyarrow._parquet.RowGroupMetaData object at 0x7f8f5edcda40> num_columns: 22 num_rows: 11624 total_byte_size: 712185 # column_chunk metadata parquet_file.metadata.row_group(0).column(0) <pyarrow._parquet.ColumnChunkMetaData object at 0x7f8f5edcda90> file_offset: 1636 file_path: physical_type: BYTE_ARRAY num_values: 11624 path_in_schema: vendor_name is_stats_set: True statistics: <pyarrow._parquet.Statistics object at 0x7f8f5eb74c20> has_min_max: True min: CMT max: VTS null_count: 0 distinct_count: 0 num_values: 11624 physical_type: BYTE_ARRAY logical_type: String converted_type (legacy): UTF8 compression: SNAPPY encodings: ('PLAIN_DICTIONARY', 'PLAIN', 'RLE') has_dictionary_page: True dictionary_page_offset: 4 data_page_offset: 41 total_compressed_size: 1632 total_uncompressed_size: 1625 But I cannot go further than that. Is it possible to get page related information (page header, repetition levels, definition levels and values) as outlined in parquet docs? Note: I am interested in this to learn about how parquet files work under the hood. I've had a look at introspection tools (like parquet-tools) but it seems to be deprecated and alternatives only give row group level information. | You cannot access that information in pyarrow today. Pyarrow has initally been focused on converting parquet files to the Arrow representation. There is no equivalent to pages in Arrow. The info should be available in parquet-cpp (which, confusingly, is a project that also lives in the Arrow GitHub repo) if you're able to dig into C++. It may be possible to get that info in other parquet projects, I am not as familiar with them. | 3 | 3 |
75,196,143 | 2023-1-21 | https://stackoverflow.com/questions/75196143/seaborn-error-with-kde-plot-the-following-variable-cannot-be-assigned-with-wide | I have a pandas dataframe df with two columns (type and IR) as this one: type IR 0 a 0.1 1 b 0.3 2 b 0.2 3 c 0.8 4 c 0.5 ... I want to plot three distributions (one for each type) with the values of the IR so, I write: sns.displot(df, kind="kde", hue='type', rug=True) but I get this error: The following variable cannot be assigned with wide-form data 'hue' Any idea? EDIT: My real dataframe looks like pd.DataFrame({"type": ["IR orig", "IR orig", "IR orig", "IR trans", "IR trans", "IR trans", "IR perm", "IR perm", "IR perm", "IR perm", "IR perm"], "IR": [1.41, 1.42, 1.32, 0.0, 0.44, 0.0, 1.41, 1.31, 1.41, 1.37, 1.34] }) but with sns.displot(df, x='IR', kind="kde", hue='type', rug=True) I got ValueError: cannot reindex on an axis with duplicate labels | Use: df = pd.DataFrame({'type': {0: 'a', 1: 'b', 2: 'b', 3: 'c', 4: 'c'}, 'IR': {0: 0.1, 1: 0.3, 2: 0.2, 3: 0.8, 4: 0.5}}) print (df) type IR 0 a 0.1 1 b 0.3 2 b 0.2 3 c 0.8 4 c 0.5 sns.displot(df.reset_index(drop=True), x='IR', kind="kde", hue='type', rug=True) | 3 | 3 |
75,176,669 | 2023-1-19 | https://stackoverflow.com/questions/75176669/emulating-matlab-mesh-plot-in-matplotlib-yielding-shadow-effects | I have this meshplot that looks very clean in matlab for a 3d surface (Ignore the red border line): And I am trying to emulate the same image in matplotlib. However, I get this weird shadow effect where the top of the surface is pure black and only the bottom is white: I have the code for both plots here *my surface u can be replaced with any 3d data(ie peaks) matlab dx=0.01; dt=0.001; %Attention : they have to be integer-multiples of one another T=1.999; %length of simulation x=(0:dx:1); t=(0:dt:T); u = load('u.mat', 'u'); u = u.u; u2 = load('u2.mat','u'); u2 = u2.u; pstep = 3; tstep = 30; xzeros = repelem(1, 1, length(t)); zline = interp2(x, t, u, xzeros, t); zline2 = interp2(x, t, u2, xzeros, t); subplot(1, 2, 1); mesh(x(1:pstep:end),t(1:tstep:end),u(1:tstep:end,1:pstep:end), "edgecolor", "black"); view(90, 10); xlabel('x', 'FontName', 'Arial', 'FontSize',18) ylabel('Time', 'FontName', 'Arial', 'FontSize',18) zlabel('u(x,t)', 'FontName', 'Arial', 'FontSize',18) set(gca,'FontName', 'Arial', 'FontSize',18) hold on plot3(xzeros, t, zline, 'r', 'linewidth', 3); subplot(1, 2, 2) mesh(x(1:pstep:end),t(1:tstep:end),u2(1:tstep:end,1:pstep:end), "edgecolor", "black"); view(90, 10); xlabel('x', 'FontName', 'Arial', 'FontSize',18) ylabel('Time', 'FontName', 'Arial', 'FontSize',18) zlabel('u(x,t)', 'FontName', 'Arial', 'FontSize',18) set(gca,'FontName', 'Arial', 'FontSize',18) hold on plot3(xzeros, t, zline2, 'r', 'linewidth', 3); set(gcf, 'PaperPositionMode', 'auto'); sgtitle("$$\hat{u}$ for PDE Solutions Using $\hat{k}$$", 'interpreter', 'latex', 'FontSize', 32) saveas(gca, "matlab.png"); and matplotlib fig = plt.figure(figsize=(8, 4)) plt.subplots_adjust(left=0.03, bottom=0, right=0.98, top=1, wspace=0.1, hspace=0) subfigs = fig.subfigures(nrows=1, ncols=1, hspace=0) subfig = subfigs subfig.suptitle(r"$\hat{u}$ for PDE Solutions Using $\hat{k}$") ax = subfig.subplots(nrows=1, ncols=2, subplot_kw={"projection": "3d"}) ax[0].plot_surface(x, t, uarr[0], edgecolor="black",lw=0.2, rstride=30, cstride=3, alpha=1, color="white") ax[0].view_init(5, 5) ax[0].set_xlabel("x", labelpad=10) ax[0].set_ylabel("Time") ax[0].set_zlabel(r"$\hat{u}(x, t)$", labelpad=5) ax[0].zaxis.set_rotate_label(False) ax[0].yaxis.set_major_formatter(FormatStrFormatter('%.1f')) ax[1].plot_surface(x, t, uarr[2], edgecolor="black",lw=0.2, rstride=30, cstride=3, alpha=1,color="white") ax[1].view_init(5, 5) ax[1].set_xlabel("x", labelpad=10) ax[1].set_ylabel("Time") ax[1].set_zlabel(r"$\hat{u}(x, t)$", labelpad=5) ax[1].zaxis.set_rotate_label(False) ax[1].yaxis.set_major_formatter(FormatStrFormatter('%.1f')) Any ideas on relieving this issue would be appreciated. | My guess is that you don't know the keyword argument shade=..., that by default is True but it seems that you prefer no shading. Here it is the (very simple) code that produces the graph above. import numpy as np import matplotlib.pyplot as plt x = y = np.linspace(0, 10, 51) x, y = np.meshgrid(x, y) z = np.sin(y/7*x) fig, ax = plt.subplots(1, 2, figsize=(10, 5), layout='tight', subplot_kw=dict(projection="3d")) ax[0].plot_surface(x, y, z, color='#F4FBFFFF', ec='black', lw=0.2, shade=True) # default ax[1].plot_surface(x, y, z, color='#F4FBFFFF', ec='black', lw=0.2, shade=False) plt.show() | 3 | 2 |
75,194,696 | 2023-1-21 | https://stackoverflow.com/questions/75194696/pandas-groupby-agg-with-condition | I have a pandas data frame similar to this: name sales profit profit_flag Joe 200 100 True Joe 300 150 False Mark 200 100 True Mark 300 150 True Judy 300 150 False The actual data frame has 100 columns. The idea is: I want to group by name, and aggregate all the columns. However, certain columns depend on a flag. In this case, sales will be aggregated no matter what, but profit should be included in the aggregation only if profit_flag is True. It should look like this if we use sum: name sales profit Joe 500 100 Judy 300 Nan Mark 500 250 Is there anyway I can do this from one line using df.groupby('name').agg()? Right now I'm using: grouped = pd.DataFrame() grouped['sales'] = df.groupby('name').sales.sum() grouped['profit'] = df[df.profit_flag].groupby('name').profit.sum() I'm getting the correct result, but since the actual data frame has many more columns, I wanted to know if I could somehow write something like this to avoid the clutter: grouped = df.groupby('name').agg({ 'sales': 'sum', 'profit' 'sum' #if profit_flag }) Is this even possible or should I just group 'flag dependent columns' in separate statemetns? | You can mask the values prior to aggregation: (df.assign(profit=lambda d: d['profit'].where(d['profit_flag'])) .groupby('name', as_index=False)[['sales', 'profit']].sum(min_count=1) ) Output: name sales profit 0 Joe 500 100.0 1 Judy 300 NaN 2 Mark 500 250.0 | 3 | 4 |
75,192,148 | 2023-1-21 | https://stackoverflow.com/questions/75192148/fastapi-and-postgresql-in-docker-compose-file-connection-error | This question has been asked already for example Docker: Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? but I still can't figure out how to properly connect the application to the database. Files: Dockerfile FROM python:3.10-slim WORKDIR /app COPY . . RUN pip install --upgrade pip RUN pip install "fastapi[all]" sqlalchemy psycopg2-binary docker-compose.yml version: '3.8' services: ylab: container_name: ylab build: context: . entrypoint: > sh -c "uvicorn main:app --reload --host 0.0.0.0" ports: - "8000:8000" postgres: container_name: postgr image: postgres:15.1-alpine environment: POSTGRES_DB: "fastapi_database" POSTGRES_PASSWORD: "password" ports: - "5433:5432" main.py import fastapi as _fastapi import sqlalchemy as _sql import sqlalchemy.ext.declarative as _declarative import sqlalchemy.orm as _orm DATABASE_URL = "postgresql://postgres:password@localhost:5433/fastapi_database" engine = _sql.create_engine(DATABASE_URL) SessionLocal = _orm.sessionmaker(autocommit=False, autoflush=False, bind=engine) Base = _declarative.declarative_base() class Menu(Base): __tablename__ = "menu" id = _sql.Column(_sql.Integer, primary_key=True, index=True) title = _sql.Column(_sql.String, index=True) description = _sql.Column(_sql.String, index=True) app = _fastapi.FastAPI() # Create table 'menu' Base.metadata.create_all(bind=engine) This works if I host only the postgres database in the container and my application is running locally, but if the database and application are in their own containers, no matter how I try to change the settings, the error always comes up: "sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5433 failed: Connection refused ylab | Is the server running on that host and accepting TCP/IP connections? ylab | connection to server at "localhost" (::1), port 5433 failed: Cannot assign requested address ylab | Is the server running on that host and accepting TCP/IP connections?" The error comes up in Base.metadata.create_all(bind=engine) I also tried DATABASE_URL = "postgresql://postgres:password@postgres:5433/fastapi_database" but still error: "sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "postgres" (172.23.0.2), port 5433 failed: Connection refused ylab | Is the server running on that host and accepting TCP/IP connections?" There is some kind of config file or something mentioned in the answer above but I can't figure out how to manage that config. | You should update your config to reference the service name of postgres and the port the database runs on inside the container DATABASE_URL = "postgresql://postgres:password@postgres:5432/fastapi_database" When your app was running locally on your machine with the database running in the container then localhost:5433 would work since port 5433 on the host was mapped to 5432 inside the db container. When you then put the app in its own container but still refer to localhost then it will be looking for the postgres database inside the same container the app is in which is not right. When you put the right service name but with port 5433 you will also get an error since port 5433 is only being mapped on the host running the containers not from inside the containers them self. 
So what you want to do in the app container is simply target the database service on port 5432, as that's the port Postgres runs on inside its container. You also probably want to look at a depends_on / wait-for script so that the FastAPI app does not start until the database is up and ready. | 3 | 4 |
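A minimal compose sketch of the "depends on" idea mentioned in that answer — the healthcheck command, intervals and retry counts are illustrative assumptions rather than anything stated in the original answer, and the long-form depends_on with a condition needs a reasonably recent Docker Compose that implements the Compose Specification:

version: '3.8'
services:
  ylab:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck below before starting the API
  postgres:
    image: postgres:15.1-alpine
    environment:
      POSTGRES_DB: "fastapi_database"
      POSTGRES_PASSWORD: "password"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d fastapi_database"]
      interval: 5s
      timeout: 5s
      retries: 5

With this layout the connection string in main.py would point at postgres:5432, i.e. postgresql://postgres:password@postgres:5432/fastapi_database.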
75,175,322 | 2023-1-19 | https://stackoverflow.com/questions/75175322/sum-elements-before-and-replace-element-with-sum | I have the numpy array arr = np.array([[0, 0, 2, 5, 0, 0, 1, 8, 0, 3, 0], [1, 2, 0, 0, 0, 0, 5, 7, 0, 0, 0], [8, 5, 3, 9, 0, 1, 0, 0, 0, 0, 1]]) I need the result array like this: [[0, 0, 0, 0, 7, 0, 0, 0, 9, 0, 3] [0, 0, 3, 0, 0, 0, 0, 0, 12, 0, 0] [0, 0, 0, 0, 25, 0, 1, 0, 0, 0, 0]] What's happened? We go along the row, if element in row is 0, then we go to the next element , if not 0, then we sum up the elements until 0 is met, once 0 is met, then we replace it with the resulting sum (also replace the initial non-zero numbers with 0 I already know how to do that with loops but it doesn't work well on time for a large number of rows, so I need time-efficient solution in numpy methods | First, we want to find the locations where the array has a zero next to a non-zero. rr, cc = np.where((arr[:, 1:] == 0) & (arr[:, :-1] != 0)) Now, we can use np.add.reduceat to add elements. Unfortunately, reduceat needs a list of 1-d indices, so we're going to have to play with shapes a little. Calculating the equivalent indices of rr, cc in a flattened array is easy: reduce_indices = rr * arr.shape[1] + cc + 1 # array([ 4, 8, 10, 13, 19, 26, 28]) We want to reduce from the start of every row, so we'll create a row_starts to mix in with the indices calculated above: row_starts = np.arange(arr.shape[0]) * arr.shape[1] # array([ 0, 11, 22]) reduce_indices = np.hstack((row_starts, reduce_indices)) reduce_indices.sort() # array([ 0, 4, 8, 10, 11, 13, 19, 22, 26, 28]) Now, call np.add.reduceat on the flattened input array, reducing at reduce_indices totals = np.add.reduceat(arr.flatten(), reduce_indices) # array([ 7, 9, 3, 0, 3, 12, 0, 25, 1, 1]) Now we have the totals, we need to assign them to an array of zeros. Note that the 0th element of totals needs to go to the 1th index of reduce_indices, and the last element of totals is to be discarded: result_f = np.zeros((arr.size,)) result_f[reduce_indices[1:]] = totals[:-1] result = result_f.reshape(arr.shape) Now, one last step remains. For cases where the last element in a row is nonzero, reduceat would calculate a nonzero value for the first element of the next row, as you mentioned in the comment below. An easy solution is to overwrite these to zero. result[:, 0] = 0 which gives the expected result: array([[ 0., 0., 0., 0., 7., 0., 0., 0., 9., 0., 3.], [ 0., 0., 3., 0., 0., 0., 0., 0., 12., 0., 0.], [ 0., 0., 0., 0., 25., 0., 1., 0., 0., 0., 0.]]) | 3 | 4 |
75,185,990 | 2023-1-20 | https://stackoverflow.com/questions/75185990/filtering-a-dataframe-with-uuids-returns-an-empty-result | I have this data frame: df: id hint 29c45630-7d41-11e9-8ea2-9f2bfe5760ab None 61afc910-3918-11ea-b078-93fcef773138 Yes and I want to find the first row that has this id = 29c45630-7d41-11e9-8ea2-9f2bfe5760ab when I run this code: df[df['id'] == '29c45630-7d41-11e9-8ea2-9f2bfe5760ab'] return empty!!! and the type of this id is "object" but I really have this id!! how can I find it? | You code didn't work because you have UUIDs, not strings, you first need to convert to string: df[df['id'].astype(str).eq('29c45630-7d41-11e9-8ea2-9f2bfe5760ab')] Output: id hint 0 29c45630-7d41-11e9-8ea2-9f2bfe5760ab None | 3 | 2 |
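If the column really holds uuid.UUID objects (which the need for .astype(str) suggests), comparing against a UUID object instead of a string should also work — a small self-contained sketch:

import uuid
import pandas as pd

df = pd.DataFrame({
    "id": [uuid.UUID("29c45630-7d41-11e9-8ea2-9f2bfe5760ab"),
           uuid.UUID("61afc910-3918-11ea-b078-93fcef773138")],
    "hint": [None, "Yes"],
})

target = uuid.UUID("29c45630-7d41-11e9-8ea2-9f2bfe5760ab")
matches = df[df["id"] == target]                    # UUID == UUID comparison, no casting needed
first_row = matches.iloc[0] if not matches.empty else None
print(first_row)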
75,186,036 | 2023-1-20 | https://stackoverflow.com/questions/75186036/why-does-the-last-line-in-a-cell-generate-output-but-preceding-lines-do-not | Given this Jupyter notebook cell: x = [1,2,3,4,5] y = {1,2,3,4,5} x y When the cell executes, it generates this output: {1, 2, 3, 4, 5} The last line in the cell generates output, the line above it has no effect. This works for any data type, as far as I can tell. Here's a snip of the same code as above: | You can change this behaviour with: from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" The reason why only the last line is printed is that the default value of ast_node_interactivity is: last_expr. You can read more about that here: https://ipython.readthedocs.io/en/stable/config/options/terminal.html | 6 | 9 |
75,182,198 | 2023-1-20 | https://stackoverflow.com/questions/75182198/how-to-pass-additional-arguments-to-a-function-when-using-threadpoolexecutor | I would like to read several png images by utilizing the ThreadPoolExecutor and cv2.imread. Problem is that I don't know where to place cv2.IMREAD_UNCHANGED tag/argument to preserve alpha channel (transparency). The following code works but alpha channel is lost. Where should I place the cv2.IMREAD_UNCHANGED argument? import cv2 import concurrent.futures images=["pic1.png", "pic2.png", "pic3.png"] images_list=[] with concurrent.futures.ThreadPoolExecutor() as executor: images_list=list(executor.map(cv2.imread,images)) For example, the following return an error: SystemError: <built-in function imread> returned NULL without setting an error import cv2 import concurrent.futures images=["pic1.png", "pic2.png", "pic3.png"] images_list=[] with concurrent.futures.ThreadPoolExecutor() as executor: images_list=list(executor.map(cv2.imread(images,cv2.IMREAD_UNCHANGED))) | Use a lambda that accepts one argument img and pass the argument to the imread function along with the cv2.IMREAD_UNCHANGED. import cv2 import concurrent.futures images=["pic1.png", "pic2.png", "pic3.png"] images_list=[] with concurrent.futures.ThreadPoolExecutor() as executor: images_list=list(executor.map(lambda img: cv2.imread(img, cv2.IMREAD_UNCHANGED),images)) | 3 | 2 |
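An equivalent pattern uses functools.partial instead of a lambda; this assumes the OpenCV binding accepts flags as a keyword argument (it is the documented parameter name of cv2.imread):

import cv2
import concurrent.futures
from functools import partial

images = ["pic1.png", "pic2.png", "pic3.png"]
# bind the flag once; each call becomes cv2.imread(path, flags=cv2.IMREAD_UNCHANGED)
read_unchanged = partial(cv2.imread, flags=cv2.IMREAD_UNCHANGED)

with concurrent.futures.ThreadPoolExecutor() as executor:
    images_list = list(executor.map(read_unchanged, images))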
75,178,360 | 2023-1-19 | https://stackoverflow.com/questions/75178360/what-is-the-difference-between-np-min-and-min | I have trying to learn Python on datacamp. When I run statistical codes with agg - like min, max - I realized that it is the same result with using np with these codes. But I can not use mean and median methods without np. So, is there any difference between np.min and min , np.max and max? why don't median and mean work without numpy? Code: unemp_fuel_stats = sales.groupby("type")["unemployment", "fuel_price_usd_per_l"].agg([np.min, np.max, np.mean, np.median]) Result: unemployment fuel_price_usd_per_l amin amax mean median amin amax mean median type A 3.879 8.992 7.973 8.067 0.664 1.107 0.745 0.735 B 7.170 9.765 9.279 9.199 0.760 1.108 0.806 0.803 Another result: unemployment fuel_price_usd_per_l min max min max type A 3.879 8.992 0.664 1.107 B 7.170 9.765 0.760 1.108 | min is the builtin python function and numpy.min is another min function that is in the numpy package. They act similarly but different. Given a single list, they will act the same: print(np.min([1, 2, 3, 4, 5])) print(min([1, 2, 3, 4, 5])) 1 1 However, given multiple arguments, numpy.min will throw an error as the first argument must be array-like. min will simply return the minimum of all elements. print(min(1, 2, 3, 4, 5)) try: print(np.min(1, 2, 3, 4, 5)) except Exception as e: print(e) 1 output must be an array numpy.min can take the minimum along a specific axis of a multi-dimensional array (which uses multiple square brackets) or the minimum of all elements. min will simply given the minimum list (though I forget how they're ordered). twod_array = [[1, 2, 5], [3, 7, 5], [10, 5, 7]] print(min(twod_array)) print(np.min(twod_array)) print(np.min(twod_array, axis = 0)) # Minimum of each column print(np.min(twod_array, axis = 1)) # Minimum of each row [1, 2, 5] 1 [1 2 5] [1 3 5] Hope that was enlightening. The tl;dr is that they are simply different functions with the same name. It's good to use import numpy as np to differentiate min from np.min. Using something like from numpy import min will cause ambiguity as to which function is actually being used. Python has no builtin median or mean but numpy has numpy.median and numpy.mean | 3 | 4 |
75,176,951 | 2023-1-19 | https://stackoverflow.com/questions/75176951/python-add-weights-associated-with-values-of-a-column | I am working with an ex termly large datfarem. Here is a sample: import pandas as pd import numpy as np df = pd.DataFrame({ 'ID': ['A', 'A', 'A', 'X', 'X', 'Y'], }) ID 0 A 1 A 2 A 3 X 4 X 5 Y Now, given the frequency of each value in column '''ID''', I want to calculate a weight using the function below and add a column that has the weight associated with each value in '''ID'''. def get_weights_inverse_num_of_samples(label_counts, power=1.): no_of_classes = len(label_counts) weights_for_samples = 1.0/np.power(np.array(label_counts), power) weights_for_samples = weights_for_samples/ np.sum(weights_for_samples)*no_of_classes return weights_for_samples freq = df.value_counts() print(freq) ID A 3 X 2 Y 1 weights = get_weights_inverse_num_of_samples(freq) print(weights) [0.54545455 0.81818182 1.63636364] So, I am looking for an efficient way to get a dataframe like this given the above weights: ID sample_weight 0 A 0.54545455 1 A 0.54545455 2 A 0.54545455 3 X 0.81818182 4 X 0.81818182 5 Y 1.63636364 | If you rely on duck-typing a little bit more, you can rewrite your function to return the same input type as outputted. This will save you of needing to explicitly reaching back into the .index prior to calling .map import pandas as pd df = pd.DataFrame({'ID': ['A', 'A', 'A', 'X', 'X', 'Y'}) def get_weights_inverse_num_of_samples(label_counts, power=1): """Using object methods here instead of coercing to numpy ndarray""" no_of_classes = len(label_counts) weights_for_samples = 1 / (label_counts ** power) return weights_for_samples / weights_for_samples.sum() * no_of_classes # select the column before using `.value_counts()` # this saves us from ending up with a `MultiIndex` Series freq = df['ID'].value_counts() weights = get_weights_inverse_num_of_samples(freq) print(weights) # A 0.545455 # X 0.818182 # Y 1.636364 # note that now our weights are still a `pd.Series` # that we can align directly against our `"ID"` column df['sample_weight'] = df['ID'].map(weights) print(df) # ID sample_weight # 0 A 0.545455 # 1 A 0.545455 # 2 A 0.545455 # 3 X 0.818182 # 4 X 0.818182 # 5 Y 1.636364 | 6 | 7 |
75,155,648 | 2023-1-18 | https://stackoverflow.com/questions/75155648/find-total-combinations-of-length-3-such-that-sum-is-divisible-by-a-given-number | So, I have been trying to find optimum solution for the question, but I can not find a solution which is less than o(n3). The problem statemnt is :- find total number of triplet in an array such that sum of a[i],a[j],a[k] is divisible by a given number d and i<j<k. I have tried a multiple solutions but the solutions all reached o(n3). I need a solution that could be less than o(n3) | Let A be an array of numbers of length N: A = [1,2,3,4,5,6,7,8] Let D be the divider D = 4 It is possible to reduce complexity O(N^2) with an extra dictionary that saves you iterating through the array for each pair (a[i],a[j]). The helper dictionary will be built before iterating through the pairs (i,j) with the count of A[k] % D = X. So for any pair A[i], A[j] you can tell how many matching A[k] exist by fetching from a dictionary rather than a loop. Below is a python implementation that demonstrates the solution T = 0 # Total possibilities H = {} # counts all possible (A[k])%D = Key from index k for k in range(2, len(A)): key = A[k]%D H[key] = H.get(key,0) + 1 for j in range(1, len(A)): if j >= 2: H[A[j]%D] -= 1 # when j increments it reduces options for A[k] for i in range(j): matching_val = (D - (A[i]+A[j]) % D ) % D to_add = H.get(matching_val, 0) T += to_add print(T) | 4 | 6 |
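For small inputs, a brute-force O(n^3) reference makes it easy to sanity-check the O(n^2) dictionary approach above:

from itertools import combinations

def count_triplets_bruteforce(arr, d):
    # count (i, j, k) with i < j < k and (arr[i] + arr[j] + arr[k]) % d == 0
    return sum((a + b + c) % d == 0 for a, b, c in combinations(arr, 3))

A = [1, 2, 3, 4, 5, 6, 7, 8]
D = 4
print(count_triplets_bruteforce(A, D))   # should equal T from the faster version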
75,159,821 | 2023-1-18 | https://stackoverflow.com/questions/75159821/installing-python-3-11-1-on-a-docker-container | I want to use debian:bullseye as a base image and then install a specific Python version - i.e. 3.11.1. At the moment I am just learning docker and linux. From what I understand I can either: Download and compile sources Install binaries (using apt-get) Use a Python base image I have come across countless questions on here and articles online. Do I use deadsnakes? What version do I need? Are there any official python distributions (who is deadsnakes anyway)? But ultimately I want to know the best means of getting Python on there. I don't want to use a Python base image - I am curious in the steps involved. Compile sources - I am far from having that level of knowhow - and one for another day. Currently I am rolling with the following: FROM debian:bullseye RUN apt update && apt upgrade -y RUN apt install software-properties-common -y RUN add-apt-repository "ppa:deadsnakes/ppa" RUN apt install python3.11 This fails with: #8 1.546 E: Unable to locate package python3.11 #8 1.546 E: Couldn't find any package by glob 'python3.11' Ultimately - it's not the error - its just finding a good way of getting a specific Python version on my container. | In case you want to install Python 3.11 in debian bullseye you have to compile it from source following the next steps (inside the Dockerfile): sudo apt update sudo apt install software-properties-common wget wget https://www.python.org/ftp/python/3.11.1/Python-3.11.1.tar.xz sudo tar -xf Python-3.11.1.tar.xz cd Python-3.11.1 sudo ./configure --enable-optimizations sudo make altinstall Another option (easiest) would be to use the official Python Docker image, in your case: FROM 3.11-bullseye You have all the versions available in docker hub. Other option that could be interesting in your case is 3.11-slim-bullseye, that is an image that does not contain the common packages contained in the default tag and only contains the minimal packages needed to run python. | 8 | 8 |
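A sketch of that same source build expressed directly as a Dockerfile — the build-dependency list is an assumption covering the usual optional modules (ssl, sqlite3, lzma, …) and is not part of the original answer:

FROM debian:bullseye

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential wget ca-certificates xz-utils \
        libssl-dev zlib1g-dev libbz2-dev libreadline-dev \
        libsqlite3-dev libffi-dev liblzma-dev \
    && rm -rf /var/lib/apt/lists/*

RUN wget https://www.python.org/ftp/python/3.11.1/Python-3.11.1.tar.xz \
    && tar -xf Python-3.11.1.tar.xz \
    && cd Python-3.11.1 \
    && ./configure --enable-optimizations \
    && make -j"$(nproc)" \
    && make altinstall \
    && cd .. \
    && rm -rf Python-3.11.1 Python-3.11.1.tar.xz

# make altinstall leaves the system python3 untouched and installs the binary as python3.11
CMD ["python3.11", "--version"]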
75,170,069 | 2023-1-19 | https://stackoverflow.com/questions/75170069/strip-colum-values-if-startswith-a-specific-string-pandas | I have a pandas dataframe(sample). id name 1 Mr-Mrs-Jon Snow 2 Mr-Mrs-Jane Smith 3 Mr-Mrs-Darth Vader I'm looking to strip the "Mr-Mrs-" from the dataframe. i.e the output should be: id name 1 Jon Snow 2 Jane Smith 3 Darth Vader I tried using df['name'] = df['name'].str.lstrip("Mr-Mrs-") But while doing so, some of the alphabets of names in some rows are also getting stripped out. I don't want to run a loop and do .loc for every row, is there a better/optimized way to achieve this ? | Don't strip, replace using a start of string anchor (^): df['name'] = df['name'].str.replace(r"^Mr-Mrs-", "", regex=True) Or removeprefix: df['name'] = df['name'].str.removeprefix("Mr-Mrs-") Output: id name 1 Jon Snow 2 Jane Smith 3 Darth Vader | 3 | 4 |
75,169,285 | 2023-1-19 | https://stackoverflow.com/questions/75169285/how-to-change-numbers-in-a-list-to-make-it-monotonically-decreasing | I have got a list: first = [100, 110, 60] How to make that: if the next number is greater than the previous one, then it is necessary to reduce this number like preceding number. For example, the answer should be: ans = [100, 100, 60] The second example: arr = [60,50,60] ans = [60, 50, 50] The third example: arr = [20, 100, 150] ans = [20, 20, 20] I try to, but i think it's not good idea for i in range(len(arr)-1): if arr[i] < arr[i+1]: answer.append(a[i+1] - 10) if arr[i] < arr[i+1]: answer.append(a[i]) if arr[i] < arr [i+1]: answer.append(arr[-1]) | This will modify the list in situ: def fix_list(_list): for i, v in enumerate(_list[1:], 1): _list[i] = min(v, _list[i-1]) return _list print(fix_list([100, 110, 60])) print(fix_list([60, 50, 60])) print(fix_list([20, 100, 150])) Output: [100, 100, 60] [60, 50, 50] [20, 20, 20] | 5 | 3 |
75,167,963 | 2023-1-19 | https://stackoverflow.com/questions/75167963/polars-slower-than-numpy | I was thinking about using polars in place of numpy in a parsing problem where I turn a structured text file into a character table and operate on different columns. However, it seems that polars is about 5 times slower than numpy in most operations I'm performing. I was wondering why that's the case and whether I'm doing something wrong given that polars is supposed to be faster. Example: import requests import numpy as np import polars as pl # Download the text file text = requests.get("https://files.rcsb.org/download/3w32.pdb").text # Turn it into a 2D array of characters char_tab_np = np.array(file.splitlines()).view(dtype=(str,1)).reshape(-1, 80) # Create a polars DataFrame from the numpy array char_tab_pl = pl.DataFrame(char_tab_np) # Sort by first column with numpy char_tab_np[np.argsort(char_tab_np[:,0])] # Sort by first column with polars char_tab_pl.sort(by="column_0") Using %%timeit in Jupyter, the numpy sorting takes about 320 microseconds, whereas the polars sort takes about 1.3 milliseconds, i.e. about five times slower. I also tried char_tab_pl.lazy().sort(by="column_0").collect(), but it had no effect on the duration. Another example (Take all rows where the first column is equal to 'A'): # with numpy %%timeit char_tab_np[char_tab_np[:, 0] == "A"] # with polars %%timeit char_tab_pl.filter(pl.col("column_0") == "A") Again, numpy takes 226 microseconds, whereas polars takes 673 microseconds, about three times slower. Update Based on the comments I tried two other things: 1. Making the file 1000 times larger to see whether polars performs better on larger data. Results: numpy was still about 2 times faster (1.3 ms vs. 2.1 ms). In addition, creating the character array took numpy about 2 seconds, whereas polars needed about 2 minutes to create the dataframe, i.e. 60 times slower. To re-produce, just add text *= 1000 before creating the numpy array in the code above. 2. Casting to integer. For the original (smaller) file, casting to int sped up the process for both numpy and polars. The filtering in numpy was still about 5 times faster than polars (30 microseconds vs. 120), wheres the sorting time became more similar (150 microseconds for numpy vs. 200 for polars). However, for the large file, polars was marginally faster than numpy, but the huge instantiation time makes it only worth if the dataframe is to be queried thousands of times. | Polars does extra work in filtering string data that is not worth it in this case. Polars uses arrow large-utf8 buffers for their string data. This makes filtering more expensive than filtering python strings/chars (e.g. pointers or u8 bytes). Sometimes it is worth it, sometimes not. If you have homogeneous data, numpy is a better fit than polars. If you have heterogenous data, polars will likely be faster. Especially if you consider your whole query instead of these micro benchmarks. | 7 | 6 |
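A rough timing sketch that illustrates the point on homogeneous single-character data (no numbers are claimed here — results depend on machine, data size and library versions):

import timeit
import numpy as np
import polars as pl

chars = np.random.choice(list("ABCDEFG"), size=1_000_000)
df = pl.DataFrame({"c": chars.tolist()})

t_np = timeit.timeit(lambda: chars[chars == "A"], number=20) / 20
t_pl = timeit.timeit(lambda: df.filter(pl.col("c") == "A"), number=20) / 20
print(f"numpy filter:  {t_np:.6f} s")
print(f"polars filter: {t_pl:.6f} s")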
75,164,872 | 2023-1-18 | https://stackoverflow.com/questions/75164872/bar-polar-with-areas-proportional-to-values | Based on this question I have the plot below. The issue is plotly misaligns the proportion between plot area and data value. I mean, higher values (e.g. going from 0.5 to 0.6) lead to a large increase in area (big dark green block) whereas from 0 to 0.1 is not noticiable (even if the actual data increment is the same 0.1). import numpy as np import pandas as pd import plotly.express as px df = px.data.wind() df_test = df[df["strength"]=='0-1'] df_test_sectors = pd.DataFrame(columns=df_test.columns) ## this only works if each group has one row for direction, df_direction in df_test.groupby('direction'): frequency_stop = df_direction['frequency'].tolist()[0] frequencies = np.arange(0.1, frequency_stop+0.1, 0.1) df_sector = pd.DataFrame({ 'direction': [direction]*len(frequencies), 'strength': ['0-1']*len(frequencies), 'frequency': frequencies }) df_test_sectors = pd.concat([df_test_sectors, df_sector]) df_test_sectors = df_test_sectors.reset_index(drop=True) df_test_sectors['direction'] = pd.Categorical( df_test_sectors['direction'], df_test.direction.tolist() #sort the directions into the same order as those in df_test ) df_test_sectors['frequency'] = df_test_sectors['frequency'].astype(float) df_test_sectors = df_test_sectors.sort_values(['direction', 'frequency']) fig = px.bar_polar(df_test_sectors, r='frequency', theta='direction', color='frequency', color_continuous_scale='YlGn') fig.show() Is there any way to make the plot with proportional areas to blocks to keep a more "truthful" alignment between the aesthetics and the actual data? So the closer to the center, the "longer" the blocks so the areas of all blocks are equal? Is there any option in Plotly for this? | You can construct a new column called r_outer_diff that stores radius differences (as you go from the inner most to outer most sector for each direction) to ensure the area of each sector is equal. The values for this column can be calculated inside the loop we are using to construct df_test_sectors using the following steps: we start with the inner sector of r = 0.1 and find the area of that sector as a reference since we want all subsequent sectors to have the same area then to construct the next sector, we need to find r_outer so that pi*(r_outer-r_inner)**2 * (sector angle/360) = reference sector area we solve this formula for r_outer for each iteration of the loop, and use r_outer as r_inner for the next iteration of the loop. 
since plotly will draw the sum of all of the radiuses, we actually want to keep track of r_outer-r_inner for each iteration of the loop and this is the value we will store in the r_outer_diffs column Putting this into code: import numpy as np import pandas as pd import plotly.express as px df = px.data.wind() df_test = df[df["strength"]=='0-1'] df_test_sectors = pd.DataFrame(columns=df_test.columns) ## this only works if each group has one row for direction, df_direction in df_test.groupby('direction'): frequency_stop = df_direction['frequency'].tolist()[0] frequencies = np.arange(0.1, frequency_stop+0.1, 0.1) r_base = 0.1 sector_area = np.pi * r_base**2 * (16/360) ## we can populate the list with the first radius of 0.1 ## since that will stay fixed ## then we use the formula: sector_area = pi*(r_outer-r_inner)^2 * (sector angle/360) r_adjusted_for_area = [0.1] r_outer_diffs = [0.1] for i in range(len(frequencies)-1): r_inner = r_adjusted_for_area[-1] inner_sector_area = np.pi * r_inner**2 * (16/360) outer_sector_area = inner_sector_area + sector_area r_outer = np.sqrt(outer_sector_area * (360/16) / np.pi) r_outer_diff = r_outer - r_inner r_adjusted_for_area.append(r_outer) r_outer_diffs.append(r_outer_diff) df_sector = pd.DataFrame({ 'direction': [direction]*len(frequencies), 'strength': ['0-1']*len(frequencies), 'frequency': frequencies, 'r_outer_diff': r_outer_diffs }) df_test_sectors = pd.concat([df_test_sectors, df_sector]) df_test_sectors = df_test_sectors.reset_index(drop=True) df_test_sectors['direction'] = pd.Categorical( df_test_sectors['direction'], df_test.direction.tolist() #sort the directions into the same order as those in df_test ) df_test_sectors['frequency'] = df_test_sectors['frequency'].astype(float) df_test_sectors = df_test_sectors.sort_values(['direction', 'frequency']) fig = px.bar_polar(df_test_sectors, r='r_outer_diff', theta='direction', color='frequency', color_continuous_scale='YlGn') fig.show() | 4 | 2 |
75,165,745 | 2023-1-18 | https://stackoverflow.com/questions/75165745/cannot-determine-if-type-of-field-in-a-pydantic-model-is-of-type-list | I am trying to automatically convert a Pydantic model to a DB schema. To do that, I am recursively looping through a Pydantic model's fields to determine the type of field. As an example, I have this simple model: from typing import List from pydantic import BaseModel class TestModel(BaseModel): tags: List[str] I am recursing through the model using the __fields__ property as described here: https://docs.pydantic.dev/usage/models/#model-properties If I do type(TestModel).__fields__['tags'] I see: ModelField(name='tags', type=List[str], required=True) I want to programatically check if the ModelField type has a List origin. I have tried the following, and none of them work: type(TestModel).__fields__['tags'].type_ is List[str] type(TestModel).__fields__['tags'].type_ == List[str] typing.get_origin(type(TestModel).__fields__['tags'].type_) is List typing.get_origin(type(TestModel).__fields__['tags'].type_) == List Frustratingly, this does return True: type(TestModel).__fields__['tags'].type_ is str What is the correct way for me to confirm a field is a List type? | Pydantic has the concept of the shape of a field. These shapes are encoded as integers and available as constants in the fields module. The more-or-less standard types have been accommodated there already. If a field was annotated with list[T], then the shape attribute of the field will be SHAPE_LIST and the type_ will be T. The type_ refers to the element type in the context of everything that is not SHAPE_SINGLETON, i.e. with container-like types. This is why you get str in your example. Thus for something as simple as list, you can simply check the shape against that constant: from pydantic import BaseModel from pydantic.fields import SHAPE_LIST class TestModel(BaseModel): tags: list[str] other: tuple[str] tags_field = TestModel.__fields__["tags"] other_field = TestModel.__fields__["other"] assert tags_field.shape == SHAPE_LIST assert other_field.shape != SHAPE_LIST If you want more insight into the actual annotation of the field, that is stored in the annotation attribute of the field. With that you should be able to do all the typing related analyses like get_origin. That means another way of accomplishing your check would be this: from typing import get_origin from pydantic import BaseModel class TestModel(BaseModel): tags: list[str] other: tuple[str] tags_field = TestModel.__fields__["tags"] other_field = TestModel.__fields__["other"] assert get_origin(tags_field.annotation) is list assert get_origin(other_field.annotation) is tuple Sadly, neither of those attributes are officially documented anywhere as far as I know, but the beauty of open-source is that we can just check ourselves. Neither the attributes nor the shape constants are obfuscated, protected or made private in any of the usual ways, so I'll assume these are stable (at least until Pydantic v2 drops). | 4 | 5 |
75,164,370 | 2023-1-18 | https://stackoverflow.com/questions/75164370/python-polars-how-to-convert-a-list-of-dictionaries-to-polars-dataframe-without | I have a list of dictionaries like this: [{"id": 1, "name": "Joe", "lastname": "Bloggs"}, {"id": 2, "name": "Bob", "lastname": "Wilson"}] And I would like to transform it to a polars dataframe. I've tried going via pandas but if possible, I'd like to avoid using pandas. Any thoughts? | Just pass it to pl.DataFrame In [2]: pl.DataFrame([{"id": 1, "name": "Joe", "lastname": "Bloggs"}, {"id": 2, "name": "Bob", "lastname": "Wilson"}]) Out[2]: shape: (2, 3) βββββββ¬βββββββ¬βββββββββββ β id β name β lastname β β --- β --- β --- β β i64 β str β str β βββββββͺβββββββͺβββββββββββ‘ β 1 β Joe β Bloggs β β 2 β Bob β Wilson β βββββββ΄βββββββ΄βββββββββββ | 7 | 10 |
75,160,024 | 2023-1-18 | https://stackoverflow.com/questions/75160024/matplotlib-step-function-how-to-extend-the-first-and-last-steps | I am using a step and fill_between functions in Matplotlib and want the steps to be centred on the x points. Code import matplotlib.pyplot as plt import numpy as np xpoints=np.array([1,2,3,4]) ypoints=np.array([4,6,5,2]) ypoints_std=np.array([0.5,0.3,0.4,0.2]) plt.step(xpoints,ypoints,where='mid') plt.fill_between(xpoints,ypoints+ypoints_std,ypoints-ypoints_std,step='mid',alpha=0.2) plt.show() Current plot: At the moment, the step centred on 1 is only 0.5 wide, whereas the step centred on 2 is 1 wide. Wanted I actually want the step-width of 1 for all steps and also for the fill. This should include first and last step, so that they are extended compared to the current plot. Of course I can pad the data, but that is getting messy in my actual code. Questions Is there a way to make the first and last steps the same size as the middle ones? Or is there a way to produce a similar graph using histogram ? i.e. showing an error the size of the full width of the bar, centred on the y position of the graph? | Using a bar plot at a height The error bands could be shown via a bar plot with a bottom at ypoints - ypoints_std and a height of 2*ypoints_std. import matplotlib.pyplot as plt import numpy as np xpoints = np.array([1, 2, 3, 4]) ypoints = np.array([4, 6, 5, 2]) ypoints_std = np.array([0.5, 0.3, 0.4, 0.2]) plt.bar(xpoints, ypoints, width=1, facecolor='none', edgecolor='dodgerblue') plt.bar(xpoints, height=2 * ypoints_std, bottom=ypoints - ypoints_std, width=1, color='dodgerblue', alpha=0.2) plt.xticks(xpoints) plt.show() Using zero-height bars To only have horizontal lines, you could replace the first bar plot with zero-height bars. Adding the original plt.step with the same color will create the connecting lines plt.gca().use_sticky_edges = False # prevent bars from "sticking" to the bottom plt.step(xpoints, ypoints, where='mid', color='dodgerblue') plt.bar(xpoints, height=0, bottom=ypoints, width=1, facecolor='none', edgecolor='dodgerblue') plt.bar(xpoints, height=2 * ypoints_std, bottom=ypoints - ypoints_std, width=1, color='dodgerblue', alpha=0.2) Extending the points You could add dummy values to repeat the first and last point. And then use plt.xlim(...) to limit the plot between 0.5 and 4.5. import matplotlib.pyplot as plt import numpy as np xpoints = np.array([1, 2, 3, 4]) ypoints = np.array([4, 6, 5, 2]) ypoints_std = np.array([0.5, 0.3, 0.4, 0.2]) xpoints = np.concatenate([[xpoints[0] - 1], xpoints, [xpoints[-1] + 1]]) ypoints = np.pad(ypoints, 1, mode='edge') ypoints_std = np.pad(ypoints_std, 1, mode='edge') plt.step(xpoints, ypoints, where='mid') plt.fill_between(xpoints, ypoints + ypoints_std, ypoints - ypoints_std, step='mid', alpha=0.2) plt.xlim(xpoints[0] + 0.5, xpoints[-1] - 0.5) plt.show() | 4 | 2 |
75,161,173 | 2023-1-18 | https://stackoverflow.com/questions/75161173/nested-dictionary-creation-python | I have a dictionary setup like this: company = {'Honda': {} ,'Toyota':{} ,'Ford':{} } I have a list containing data like this: years = ['Year 1', 'Year 2', 'Year 3'] Finally, I also have a list of lists containing data like this: sales = [[55,9,90],[44,22,67],[83,13,91]] I am trying to achieve a final result that looks like this: {'Honda': {'Year 1':55,'Year 2':9,'Year 3':90} ,'Toyota':{'Year 1':44,'Year 2':22,'Year 3':67} ,'Ford':{'Year 1':83,'Year 2':13,'Year 3':91} } I can access the sub-list if sales like this: for i in sales: for j in i: #this would give me a flat list of all sales I can't seem to wrap my head around constructing the final dictionary that would tie everything together. Any help is appreciated! | You can use a dict comprehension with zip. res = {k : dict(zip(years, sale)) for k, sale in zip(company, sales)} | 3 | 5 |
75,158,767 | 2023-1-18 | https://stackoverflow.com/questions/75158767/mypy-doesnt-allow-a-generic-attribute-to-be-marked-as-final | I'm trying to create a class which has an attribute that should be a constant. This attribute could have different types depending on were the class is used in the codebase. Moreover, the type of this attribute is used in various type hints throughout the classΒΉ, so I decided to convert the class to a Generic, like so: from typing import TypeVar, Generic, Final T = TypeVar("T") class Foo(Generic[T]): bar: Final[T] def __init__(self, bar: T) -> None: self.bar = bar However, MyPy complains saying that Final name declared in class body cannot depend on type variables while if I remove the Final annotation MyPy doesn't raise any errors. I can't find any logical errors in my code: it simply says that, regardless of its type, the bar attribute should always be referencing the same object. Am I missing something or is this some limitation of Python and/or of MyPy? ΒΉIn the example I'm showing only one such usage to keep things simple. P.S. I'm using Python 3.10.8 and MyPy 0.991. | Your code is correct per PEP591, mypy applies the rules in a wrong order, annotate in __init__ to solve the issue. Here are the links to the docs and PEP591. mypy should've checked the presence of initializer and then decide whether the initializer is missing, but it doesn't in fact and thinks that you define a final class attribute despite missing initializer. Type variables make no sense in types of class attributes (because type variables are bound to instances, not classes), so mypy errors here. Thus we need to help mypy resolve the kind of Final properly. To do so, we can annotate attribute in __init__ instead: from typing import TypeVar, Generic, Final _T = TypeVar("_T") class Foo(Generic[_T]): def __init__(self, bar: _T): self.bar: Final[_T] = bar This typechecks now (playground). | 3 | 3 |
75,100,102 | 2023-1-12 | https://stackoverflow.com/questions/75100102/get-app-version-from-pyproject-toml-inside-python-code | I am not very familiar with python, I only done automation with so I am a new with packages and everything. I am creating an API with Flask, Gunicorn and Poetry. I noticed that there is a version number inside the pyproject.toml and I would like to create a route /version which returns the version of my app. My app structure look like this atm: βββ README.md βββ __init__.py βββ poetry.lock βββ pyproject.toml βββ tests β βββ __init__.py βββ wsgi.py Where wsgi.py is my main file which run the app. I saw peoples using importlib but I didn't find how to make it work as it is used with: __version__ = importlib.metadata.version("__package__") But I have no clue what this package mean. | You should not use __package__, which is the name of the "import package" (or maybe import module, depending on where this line of code is located), because this is not what is expected here. importlib.metadata.version() expects the name of the "distribution package" (the thing that you pip-install), which is the one you write in the [project] table of pyproject.toml as name = "my-distribution-package-name". So if the application or library whose version string you want to get is named my-foo-lib, then you only need to call the following import importlib.metadata version_string_of_foo = importlib.metadata.version('my-foo-lib') | 12 | 24 |
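Putting that together for the /version route described in the question — "my-flask-api" is a placeholder for whatever name is declared under [tool.poetry] (or [project]) in pyproject.toml:

from importlib.metadata import version, PackageNotFoundError
from flask import Flask

app = Flask(__name__)

try:
    __version__ = version("my-flask-api")    # placeholder: the distribution name from pyproject.toml
except PackageNotFoundError:
    __version__ = "0.0.0+unknown"            # e.g. running from a checkout that was never pip-installed

@app.route("/version")
def get_version():
    # Flask >= 1.1 serialises a returned dict to JSON
    return {"version": __version__}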
75,091,265 | 2023-1-12 | https://stackoverflow.com/questions/75091265/python-setuptools-scm-get-version-from-git-tags | I am using project.toml file to package my module, I want to extract the version from git tag using setuptools_scm module. When I run python setup.p y --version command it gives this output 0.0.1.post1.dev0. How will I get only 0.0.1 value and omit the .post.dev0 value? Here is project.toml file settings: [build-system] requires = ["setuptools>=46.1.0", "setuptools_scm[toml]>=5"] build-backend = "setuptools.build_meta" [tool.setuptools_scm] version_scheme = "no-guess-dev" local_scheme="no-local-version" write_to = "src/showme/version.py" git_describe_command = "git describe --dirty --tags --long --match v* --first-parent" [tool.setuptools.dynamic] version = {attr = "showme.__version__"} output: python setup.py --version setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*. warnings.warn(msg, _BetaConfiguration) 0.0.1.post1.dev0 Thanks | setuptools_scm out-of-the-box generates development and post-release versions. To generate a release version like 0.0.1, you can pass a callable into use_scm_version: # content of setup.py def my_version(): from setuptools_scm.version import SEMVER_MINOR, guess_next_simple_semver, release_branch_semver_version def my_release_branch_semver_version(version): v = release_branch_semver_version(version) if v == version.format_next_version(guess_next_simple_semver, retain=SEMVER_MINOR): return version.format_next_version(guess_next_simple_semver, fmt="{guessed}", retain=SEMVER_MINOR) return v return { 'version_scheme': my_release_branch_semver_version, 'local_scheme': 'no-local-version', } setup(use_scm_version=my_version) Reference: https://github.com/pypa/setuptools_scm#importing-in-setuppy | 8 | 4 |
75,133,458 | 2023-1-16 | https://stackoverflow.com/questions/75133458/polars-str-starts-with-with-values-from-another-column | I have a polars DataFrame for example: >>> df = pl.DataFrame({'A': ['a', 'b', 'c', 'd'], 'B': ['app', 'nop', 'cap', 'tab']}) >>> df shape: (4, 2) βββββββ¬ββββββ β A β B β β --- β --- β β str β str β βββββββͺββββββ‘ β a β app β β b β nop β β c β cap β β d β tab β βββββββ΄ββββββ I'm trying to get a third column C which is True if strings in column B starts with the strings in column A of the same row, otherwise, False. So in the case above, I'd expect: βββββββ¬ββββββ¬ββββββββ β A β B β C β β --- β --- β --- β β str β str β bool β βββββββͺββββββͺββββββββ‘ β a β app β true β β b β nop β false β β c β cap β true β β d β tab β false β βββββββ΄ββββββ΄ββββββββ I'm aware of the df['B'].str.starts_with() function but passing in a column yielded: >>> df['B'].str.starts_with(pl.col('A')) ... # Some stuff here. TypeError: argument 'sub': 'Expr' object cannot be converted to 'PyString' What's the way to do this? In pandas, you would do: df.apply(lambda d: d['B'].startswith(d['A']), axis=1) | Expression support was added for .str.starts_with() in pull/6355 as part of the Polars 0.15.17 release. df.with_columns(pl.col("B").str.starts_with(pl.col("A")).alias("C")) shape: (4, 3) βββββββ¬ββββββ¬ββββββββ β A | B | C β β --- | --- | --- β β str | str | bool β βββββββͺββββββͺββββββββ‘ β a | app | true β β b | nop | false β β c | cap | true β β d | tab | false β βββββββ΄ββββββ΄ββββββββ | 3 | 5 |
75,150,535 | 2023-1-17 | https://stackoverflow.com/questions/75150535/polars-create-column-with-string-formatting | I have a polars dataframe: df = pl.DataFrame({'schema_name': ['test_schema', 'test_schema_2'], 'table_name': ['test_table', 'test_table_2'], 'column_name': ['test_column, test_column_2','test_column']}) schema_name table_name column_name test_schema test_table test_column, test_column_2 test_schema_2 test_table_2 test_column I have a string: date_field_value_max_query = ''' select '{0}' as schema_name, '{1}' as table_name, greatest({2}) from {0}.{1} group by 1, 2 ''' I would like to use polars to add a column by using string formatting. The target dataframe is this: schema_name table_name column_name query test_schema test_table test_column, test_column_2 select test_schema, test_table, greatest(test_column, test_column_2) from test_schema.test_table group by 1, 2 test_schema_2 test_table_2 test_column select test_schema_2, test_table_2, greatest(test_column) from test_schema_2.test_table_2 group by 1, 2 In pandas, I would do something like this: df.apply(lambda row: date_field_value_max_query.format(row['schema_name'], row['table_name'], row['column_name']), axis=1) For polars, I tried this: df.map_rows(lambda row: date_field_value_max_query.format(row[0], row[1], row[2])) ...but this returns only the one column, and I lose the original three columns. I know this approach is also not recommended for polars, when possible. How can I perform string formatting across multiple dataframe columns with the output column attached to the original dataframe? | Another option is to use polars.format to create your string. For example: date_field_value_max_query = ( '''select {} as schema_name, {} as table_name, greatest({}) from {}.{} group by 1, 2 ''' ) ( df .with_columns( pl.format(date_field_value_max_query, 'schema_name', 'table_name', 'column_name', 'schema_name', 'table_name') ) ) shape: (2, 4) βββββββββββββββββ¬βββββββββββββββ¬βββββββββββββββββββββββββββββ¬ββββββββββββββββββββββββββββββββββββββββββββββ β schema_name β table_name β column_name β literal β β --- β --- β --- β --- β β str β str β str β str β βββββββββββββββββͺβββββββββββββββͺβββββββββββββββββββββββββββββͺββββββββββββββββββββββββββββββββββββββββββββββ‘ β test_schema β test_table β test_column, test_column_2 β select test_schema as schema_name, β β β β β test_table as table_name, β β β β β greatest(test_column, test_column_2) β β β β β from test_schema.test_table β β β β β group by 1, 2 β β β β β β β β β β β β test_schema_2 β test_table_2 β test_column β select test_schema_2 as schema_name, β β β β β test_table_2 as table_name, β β β β β greatest(test_column) β β β β β from test_schema_2.test_table_2 β β β β β group by 1, 2 β β β β β β β β β β β βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββ | 3 | 5 |
75,108,379 | 2023-1-13 | https://stackoverflow.com/questions/75108379/python-poetry-adding-dependency-to-a-group | I am using Poetry of version 1.3.2 (currently the last version), and added group to .toml file as below: [tool.poetry.group.dev.dependencies]. And following official documentation tried to add library to this group using command: poetry add pytest --group dev. But always getting error that says: The "--group" option does not exist. (Using python of version 3.9.15) I tried to change version of poetry to 1.2.x, but it did not help. | Just type: poetry add --group dev 'package_name' That should work | 10 | 18 |
75,150,942 | 2023-1-17 | https://stackoverflow.com/questions/75150942/how-to-get-a-session-from-async-session-generator-fastapi-sqlalchemy | I see in many places an approach for getting SqlAlchemy session just like this one below: async def get_session() -> AsyncSession: async with async_session() as session: yield session It used together with Depends: @app.post("/endpoint") async def vieww(session: AsyncSession = Depends(get_session)): session.execute(some_statement) So my question is how to get a session from get_session ooutside Depends? I had a lot of attempts and got a headache.. I tried with s = await get_session() s.execute(stmt) And I get AttributeError: 'async_generator' object has no attribute 'execute' | get_session is an asynchronous generator function. It returns an asynchronous iterator. You can't await those. You can await their __anext__ method though. from asyncio import run from collections.abc import AsyncIterator async def get_generator() -> AsyncIterator[int]: yield 1 async def main() -> None: generator = get_generator() print(await generator.__anext__()) if __name__ == "__main__": run(main()) Output: 1 Since Python 3.10 we can use the built-in anext, which makes this look less hacky: ... async def main() -> None: generator = get_generator() print(await anext(generator) ... So in your case, you could use s = await anext(get_session()). But I struggle to see the use of this. This get_session function is designed as an asynchronous generator for the specific use case in the FastAPI Depends construct. Outside of that, you may as well just use the context manager normally: async with async_session() as session: ... # do things with the session session.execute() Otherwise you'll have to remember to call __anext__ on that generator again (and catch the StopAsyncIteration error), otherwise the session will never be closed. | 5 | 7 |
75,146,691 | 2023-1-17 | https://stackoverflow.com/questions/75146691/way-to-specify-viewpoint-distance-in-3d-plots | I am working on an animation of a 3D plot using mpl_toolkits.mplot3d (Matplotlib 3.6.3) and need to set the view distance. It seems that earlier versions of Matplotlib allowed the elevation, azimuth, and distance of the viewpoint "camera" to be set for 3D plots using methods like this: ax.elev = 45 ax.azim = 10 ax.dist = 2 but the distance attribute appears to have been deprecated for some reason: Warning (from warnings module): ax.dist = 2 MatplotlibDeprecationWarning: The dist attribute was deprecated in Matplotlib 3.6 and will be removed two minor releases later. This still runs, but the output plots have all sorts of visual artifacts that only go away if I shut off the axes with ax.set_axis_off(). Is there an equivalent means of setting the viewpoint distance to zoom in on a 3D data set in 3.6.3? | According to the documentation, you now need to use the zoom argument of set_box_aspect: ax.set_box_aspect(None, zoom=2) Where the first argument is the aspect ratio. Use None to use the default values. | 3 | 3 |
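A small self-contained sketch of the replacement call; note that zoom is keyword-only and that larger zoom values enlarge the plot, roughly the opposite sense of the old dist (where smaller meant closer):

import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs, ys, zs = np.random.rand(3, 50)
ax.scatter(xs, ys, zs)

ax.view_init(elev=45, azim=10)
ax.set_box_aspect(None, zoom=2)   # None keeps the default box aspect; zoom replaces ax.dist
plt.show()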
75,145,424 | 2023-1-17 | https://stackoverflow.com/questions/75145424/fastapi-starlette-how-to-handle-exceptions-inside-background-tasks | I developed some API endpoints using FastAPI. These endpoints are allowed to run BackgroundTasks. Unfortunately, I do not know how to handle unpredictable issues from theses tasks. An example of my API is shown below: # main.py from fastapi import FastAPI import uvicorn app = FastAPI() def test_func(a, b): raise ... @app.post("/test", status_code=201) async def test(request: Request, background_task: BackgroundTasks): background_task.add_task(test_func, a, b) return { "message": "The test task was successfully sent.", } if __name__ == "__main__": uvicorn.run( app=app, host="0.0.0.0", port=8000 ) # python3 main.py to run # fastapi == 0.78.0 # uvicorn == 0.16.0 Can you help me to handle any type of exception from such a background task? Should I add any exception_middleware from Starlette, in order to achieve this? | Can you help me to handle any type of exception from such a background task? Background tasks, as the name suggests, are tasks that are going to run in the background after returning a response. Hence, you can't raise an Exception and expect the client to receive some kind of response. If you are just looking to catch any Exception occuring inside the background task, you can simply use the try-except block to catch any Exception and handle it as desired. For example: def test_func(a, b): try: # some background task logic here... raise <some_exception> except Exception as e: print('Something went wrong') # use `print(e.detail)` to print out the Exception's details If you would like to log any exceptions being raised in the task (instead of just printing them out), you could use Python's logging moduleβhave a look at this answer, as well as this answer and this answer on how to do that. You may also find helpful information on FastAPI/Starlette's custom/global exception handlers at this post and this post, as well as here, here and here. Finally, this answer will help you understand in detail the difference between def and async def endpoints (as well as background task functions) in FastAPI, and find solutions for tasks blocking the event loop (if you ever come across this issue). | 3 | 7 |
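A small sketch of the logging variant mentioned in that answer, so failures end up in the application log with a traceback rather than just being printed:

import logging

logging.basicConfig(level=logging.INFO)      # in a real app this is configured once at startup
logger = logging.getLogger("background_tasks")

def test_func(a, b):
    try:
        # ... the actual background work goes here ...
        raise RuntimeError("simulated failure")          # stand-in for an unpredictable error
    except Exception:
        # logger.exception() logs at ERROR level and appends the full traceback
        logger.exception("Background task failed (a=%r, b=%r)", a, b)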
75,114,841 | 2023-1-13 | https://stackoverflow.com/questions/75114841/debugger-warning-from-ipython-frozen-modules | I created a new environment using conda and wanted to add it to jupyter-lab. I got a warning about frozen modules? (shown below) $ ipython kernel install --user --name=testi2 0.00s - Debugger warning: It seems that frozen modules are being used, which may 0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off 0.00s - to python to disable frozen modules. 0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation. Installed kernelspec testi2 in /home/michael/.local/share/jupyter/kernels/testi2 All I had installed were ipykernel, ipython, ipywidgets, jupyterlab_widgets, ipympl Python Version 3.11.0, Conda version 22.11.0 And I used conda install nodejs -c conda-forge --repodata-fn=repodata.json to get the latest version of nodejs I also tried re-installing ipykernel to a previous version (6.20.1 -> 6.19.2) | This is just a warning that the debugger cannot debug frozen modules. In Python 3.11, the core modules essential for Python startup are βfrozenβ. ... This reduces the steps in module execution process ... Interpreter startup is now 10-15% faster in Python 3.11. This has a big impact for short-running programs using Python. β Whatβs New In Python 3.11 Β§ Faster Startup it's not possible for the debugger to debug frozen modules as the filename is definitely required to hit breakpoints β https://github.com/fabioz/PyDev.Debugger/issues/213#issuecomment-1058247166 E.g. os.path.realpath.__code__.co_filename is now '<frozen posixpath>' in Python 3.11. The possible resolutions are mentioned with the warning. If you need to debug those modules, pass -Xfrozen_modules=off to python: # ipython kernel install --user --name=testi2 python -Xfrozen_modules=off -m ipykernel install --user --name=testi2 # jupyter-lab python -Xfrozen_modules=off -m jupyterlab If you just want to suppress the warning, set PYDEVD_DISABLE_FILE_VALIDATION=1: PYDEVD_DISABLE_FILE_VALIDATION=1 ipython kernel install --user --name=testi2 PYDEVD_DISABLE_FILE_VALIDATION=1 jupyter-lab | 25 | 28 |
75,102,134 | 2023-1-12 | https://stackoverflow.com/questions/75102134/mat1-and-mat2-must-have-the-same-dtype | I'm trying to build a neural network to predict per-capita-income for counties in US based on the education level of their citizens. X and y have the same dtype (I have checked this) but I'm getting an error. Here is my data: county_FIPS state county per_capita_personal_income_2019 \ 0 51013 VA Arlington, VA 97629 per_capita_personal_income_2020 per_capita_personal_income_2021 \ 0 100687 107603 associate_degree_numbers_2016_2020 bachelor_degree_numbers_2016_2020 \ 0 19573 132394 And here is my network import torch import pandas as pd df = pd.read_csv("./input/US counties - education vs per capita personal income - results-20221227-213216.csv") X = torch.tensor(df[["bachelor_degree_numbers_2016_2020", "associate_degree_numbers_2016_2020"]].values) y = torch.tensor(df["per_capita_personal_income_2020"].values) X.dtype torch.int64 y.dtype torch.int64 import torch.nn as nn class BaseNet(nn.Module): def __init__(self, in_dim, hidden_dim, out_dim): super(BaseNet, self).__init__() self.classifier = nn.Sequential( nn.Linear(in_dim, hidden_dim, bias=True), nn.ReLU(), nn.Linear(feature_dim, out_dim, bias=True)) def forward(self, x): return self.classifier(x) from torch import optim import matplotlib.pyplot as plt in_dim, hidden_dim, out_dim = 2, 20, 1 lr = 1e-3 epochs = 40 loss_fn = nn.CrossEntropyLoss() classifier = BaseNet(in_dim, hidden_dim, out_dim) optimizer = optim.SGD(classifier.parameters(), lr=lr) def train(classifier, optimizer, epochs, loss_fn): classifier.train() losses = [] for epoch in range(epochs): out = classifier(X) loss = loss_fn(out, y) loss.backward() optimizer.step() optimizer.zero_grad() losses.append(loss/len(X)) print("Epoch {} train loss: {}".format(epoch+1, loss/len(X))) plt.plot([i for i in range(1, epochs + 1)]) plt.xlabel("Epoch") plt.ylabel("Training Loss") plt.show() train(classifier, optimizer, epochs, loss_fn) Here is the full stack trace of the error that I am getting when I try to train the network: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [77], in <cell line: 39>() 36 plt.ylabel("Training Loss") 37 plt.show() ---> 39 train(classifier, optimizer, epochs, loss_fn) Input In [77], in train(classifier, optimizer, epochs, loss_fn) 24 losses = [] 25 for epoch in range(epochs): ---> 26 out = classifier(X) 27 loss = loss_fn(out, y) 28 loss.backward() File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] Input In [77], in BaseNet.forward(self, x) 10 def forward(self, x): ---> 11 return self.classifier(x) File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/container.py:204, in Sequential.forward(self, input) 202 def forward(self, input): 203 for module in self: --> 204 input = module(input) 205 return input File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input) 113 def forward(self, input: Tensor) -> Tensor: --> 114 return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 must have the same dtype Updates I have tried casting X and y to float tensors but this comes up with the following error: expected scalar type Long but found Float. If someone who knows PyTorch could try running this notebook for themselves that would be a great help. I'm struggling to get off the ground with Kaggle and ML. | The reason for this is because the parameters dtype of nn.Linear doesn't match your input's dtype; the default dtype for nn.Linear is torch.float32 which is in your case different from your input data - float64. The solution to this question solves your problem and explains why @Anonymous answer works. In short, add self.double() at the end of your constructor and things should run. | 7 | 9 |
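As a quick illustration of the accepted answer's point above — a minimal sketch, not the question's full model, showing the two usual ways to reconcile the dtypes (cast the inputs to float32, or call .double() so the parameters become float64):

import torch
import torch.nn as nn

X = torch.tensor([[1, 2], [3, 4]])   # integer tensor, dtype torch.int64, like the question's data

model = nn.Linear(2, 1)              # parameters are float32 by default

# Option 1: cast the input to match the layer's float32 parameters
out = model(X.float())

# Option 2: make the whole model float64 and feed float64 inputs
model_d = nn.Linear(2, 1).double()
out_d = model_d(X.double())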
75,111,217 | 2023-1-13 | https://stackoverflow.com/questions/75111217/how-do-i-run-dbt-models-from-a-python-script-or-program | I have a DBT project, and a python script will be grabbing data from the postgresql to produce output. However, part of the python script will need to make the DBT run. I haven't found the library that will let me cause a DBT run from an external script, but I'm pretty sure it exists. How do I do this? ETA: The correct answer may be to download the DBT CLI and then use python system calls to use that.... I was hoping for a library, but I'll take what I can get. | Update: v1.5 has arrived! With v1.5 of dbt, we get a stable and officially supported Python API for invoking dbt operations; this API has functional parity with the CLI. From the docs: from dbt.cli.main import dbtRunner, dbtRunnerResult # initialize dbt = dbtRunner() # create CLI args as a list of strings cli_args = ["run", "--select", "tag:my_tag"] # run the command res: dbtRunnerResult = dbt.invoke(cli_args) # inspect the results for r in res.result: print(f"{r.node.name}: {r.status}") There are some caveats about the stability of artifacts returned by dbt.invoke; read the docs for more details. Original Answer (As of Jan 2023) There is not a public Python API for dbt, yet. It is expected in v1.5, which should be out in a couple months. Right now, your safest option is to use the CLI. If you don't want to use subprocess, the CLI uses Click now, and Click provides a runner that you can use to invoke Click commands. It's usually used for testing, but I think it would work for your use case, too. The CLI command is here. That would look something like: from click.testing import CliRunner from dbt.cli.main import run dbt_runner = CliRunner() dbt_runner.invoke(run, args="-s my_model") You could also invoke dbt the way they do in the test suite, using run_dbt. | 10 | 16 |
75,114,334 | 2023-1-13 | https://stackoverflow.com/questions/75114334/flask-limiter-error-typeerror-limiter-init-got-multiple-values-for-argum | I am attempting to use Flask's rate limiting library to rate limit an API based on the seconds. So I have used this exact same format to limit requests to an API on an Apache Server. However I am now using an NGINX. I do not thinks this makes a difference but when I run this code: import api app = Flask(__name__, instance_relative_config=True) limiter = Limiter(app, default_limits=["5/second"], key_func=lambda: get_remote_address) limiter.limit("5/second", key_func=lambda: request.args.get('token') if 'token' in request.args else get_remote_address)(api.bp) app.register_blueprint(api.bp) Again I have ran this exact same code on another server, but now it is giving this error: limiter = Limiter(app, "5/second", key_func=lambda: request.args.get('token') if 'token' in request.args else get_remote_address) TypeError: Limiter.__init__() got multiple values for argument 'key_func' Any help would be great. I am using Flask-Limiter in python and running gevent on gunicorn server for NGINX. | Your Limiter class instantiation is incorrect. Below is the correct one- limiter = Limiter(get_remote_address, app=app, default_limits=["200 per day", "50 per hour"]) | 3 | 7 |
75,099,182 | 2023-1-12 | https://stackoverflow.com/questions/75099182/stable-diffusion-error-couldnt-install-torch-no-matching-distribution-found | I am trying to install Stable Diffusion locally. I follow the presented steps, but when I get to the last one, "run the webui-user file", it opens the terminal saying "Press any key to continue...". If I do so, the terminal instantly closes. I went to the SB folder, right-clicked "open in terminal" and used ./webui-user to run the file. The terminal no longer closes, but nothing happens and I get these two errors: Couldn't install torch, No matching distribution found for torch==1.12.1+cu113 I've researched online and tried installing the torch version from the error; I also tried pip install --user pipenv==2022.1.8 but I get the same errors. | I had a similar error. The cause was that I had the 32-bit version of Python installed. I uninstalled it and installed the x86-64 (64-bit) version, and everything installed just fine. I have version 3.8 installed, as I have found it to be the most stable version; I downloaded the 64-bit installer from the Python 3.8 download page. | 4 | 0
75,121,807 | 2023-1-14 | https://stackoverflow.com/questions/75121807/what-are-keypoints-in-yolov7-pose | I am trying to understad the keypoint output of the yolov7, but I didn't find enough information about that. I have the following output: array([ 0, 0, 430.44, 476.19, 243.75, 840, 0.94348, 402.75, 128.5, 0.99902, 417.5, 114.25, 0.99658, 385.5, 115, 0.99609, 437.75, 125.5, 0.89209, 366.75, 128, 0.66406, 471, 229.62, 0.97754, 346.75, 224.88, 0.97705, 526, 322.75, 0.95654, 388.5, 340.75, 0.95898, 424.5, 314.75, 0.94873, 483.5, 335.5, 0.9502, 465.5, 457.75, 0.99219, 381.5, 456.25, 0.99219, 451.5, 649, 0.98584, 379.25, 649.5, 0.98633, 446.5, 818, 0.92285, 366, 829.5, 0.9248]) the paper https://arxiv.org/pdf/2204.06806.pdf tells "So, in total there are 51 elements for 17 keypoints associated with an anchor. " but the length is 58. there are 18 numbers that probably are confidences of a keypoint: array([ 0.94348, 0.99902,, 0.99658, 0.99609, 0.89209, 0.66406, 0.97754, 0.97705, 0.95654, 0.95898, 0.94873, 0.9502, 0.99219, 0.99219, 0.98584, 0.98633, 0.92285, 0.9248]) But the paper tells that are 17 keypoints. In this repo https://github.com/retkowsky/Human_pose_estimation_with_YoloV7/blob/main/Human_pose_estimation_YoloV7.ipynb tells that the keypoints are the following: but that shape doesn't match the prediction: Is the first image right about the keypoints? and what are the first four digits? 0, 0, 430.44, 476.19 Thanks EDIT This is not a complet answer but editing the plot function I can get the following information Given the following output keypoint: array([[ 0, 0, 312.31, 486, 291.75, 916.5, 0.94974, 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394, 226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898, 192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268, 361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064, 247, 863, 0.91504]]) from this position ouput[7:] you can get the points of each keypoint, with the following sort as you can see in the image array([ 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394, 226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898, 192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268, 361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064, 247, 863, 0.91504]) but I am not sure about what are the rest of the values: 0, 0, 312.31, 486, 291.75, 916.5, 0.94974, | I assume you have passed your output through output_to_keypoint function in utils.plots. Based on the comment left by the authors of that function, the first 7 values should be (in order): batch_id class_id x coordinate of the center of the bounding box y coordinate of the center of the bounding box w - width of the bounding box h - height of the bounding box conf - confidence in the bounding box | 5 | 3 |
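To make the layout described in the answer above concrete, here is a small sketch that splits one row of the 58-value output into the bounding-box fields and the 17 (x, y, confidence) keypoint triplets; the field order follows the answer, and the helper name is just for illustration:

import numpy as np

def split_pose_row(row):
    # row: one detection from output_to_keypoint(), length 7 + 17*3 = 58
    batch_id, class_id = row[0], row[1]
    x_c, y_c, w, h = row[2:6]              # bounding box centre, width, height
    box_conf = row[6]                      # confidence in the bounding box
    keypoints = row[7:].reshape(17, 3)     # 17 rows of (x, y, confidence)
    return batch_id, class_id, (x_c, y_c, w, h), box_conf, keypoints

dummy = np.zeros(58)                       # dummy row with the right shape, just to show the slicing
print(split_pose_row(dummy)[4].shape)      # (17, 3)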
75,093,503 | 2023-1-12 | https://stackoverflow.com/questions/75093503/are-python-3-11-objects-as-light-as-slots | After Mark Shannon's optimisation of Python objects, is a plain object different from an object with slots? I understand that after this optimisation in a normal use case, objects have no dictionary. Have the new Python objects made the use of slots totally unnecessary? | No, __slots__ still produces more compact objects. With __slots__ for attributes, an object's memory layout only needs one pointer per slot. With the new lazy __dict__ creation, an object needs to store a PyDictValues object and a pointer to that PyDictValues object. The PyDictValues object contains room for a number of pointers based on the "usable size" of the shared keys object when the PyDictValues object is created, which is usually more pointers than what you would have gotten with __slots__. It also holds some extra metadata and padding, stored before those pointers: a "prefix size" representing the size of this metadata, a "used size" representing the number of values stored, and an array of bytes tracking insertion order. The padding is used to ensure this extra metadata doesn't interfere with the alignment of the pointers. (None of this is reflected in the PyDictValues struct definition, since C struct definitions aren't expressive enough for this - it mostly has to be handled manually.) So, without __slots__, you've got an extra PyDictValues * in the object directly, usually more room allocated for attribute pointers than necessary, and a bunch of extra metadata in the PyDictValues itself, relative to using __slots__. Plus, with __slots__, if you don't explicitly declare a __weakref__ slot, you don't get one, saving some memory at the cost of not being able to create weak references to your objects. With no __slots__, Python will automatically add a __weakref__ slot to your objects. | 15 | 13 |
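A short sketch to observe the difference from the answer above interactively — the exact byte counts vary by CPython version and build, so treat the sizes as indicative; the missing __dict__ and __weakref__ on the slotted instance are the stable part:

import sys

class Plain:
    pass

class Slotted:
    __slots__ = ("a", "b")

p, s = Plain(), Slotted()
p.a, s.a = 1, 1
print(hasattr(p, "__dict__"), hasattr(s, "__dict__"))        # True False
print(hasattr(p, "__weakref__"), hasattr(s, "__weakref__"))  # True False
print(sys.getsizeof(p), sys.getsizeof(s))                    # exact numbers depend on the build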
75,114,624 | 2023-1-13 | https://stackoverflow.com/questions/75114624/multiline-ruleorder-in-snakemake | I have 3 rules and their names are somewhat long. When using ruleorder, the line goes over my desired 80-character limit. Is it possible to break up the ruleorder into multiple lines in such a way that the behaviour is exactly the same as if I wrote it all in one line? Example: ruleorder: long_rule_1 > long_rule_2 > long_rule_3 I would like to reformat it into something like this: ruleorder: ( long_rule_1 > long_rule_2 > long_rule_3 ) | After looking at ways to do this, I believe the best way is pretty simple: ruleorder: long_rule_1 > long_rule_2 > long_rule_3 The other answers are good too, but this is the one that I'm using | 5 | 1
75,118,904 | 2023-1-14 | https://stackoverflow.com/questions/75118904/why-does-the-python-tutorial-say-it-is-an-error-that-other-scripts-in-the-curren | The Python Tutorial chapter 6 (Modules) says: The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended. Why is this an error? In other terms, what is the meaning of the sentence "This is an error unless the replacement is intended"? | After initialization, Python programs can modify sys.path. The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended. See section Standard Modules for more information. Assume you have a package (module) installed called fooo; this is the package you want to use in a script. Now, if you import it with import fooo, everything works as intended. But if there's a module in the directory mentioned above with the same name (fooo), that one will replace your intended package. The error is that you have now imported the wrong package. Resource: https://docs.python.org/3/tutorial/modules.html#the-module-search-path | 3 | 2
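A concrete way to reproduce the shadowing described in the answer above — a sketch assuming you create a throwaway file named random.py (any stdlib or installed-package name works) next to your script:

# random.py - a throwaway file sitting in the same directory as your script
def choice(seq):
    return "shadowed!"

# main.py - run from that same directory
import random
print(random.choice([1, 2, 3]))   # prints "shadowed!", not a random element from the list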
75,141,237 | 2023-1-17 | https://stackoverflow.com/questions/75141237/general-function-to-turn-string-into-kwargs | I'm trying to find a way to pass a string (coming from outside the python world!) that can be interpreted as **kwargs once it gets to the Python side. I have been trying to use this pyparsing example, but the string thats being passed in this example is too specific, and I've never heard of pyparsing until now. I'm trying to make it more, human friendly and robust to small differences in spacing etc. For example, I would like to pass the following. input_str = "a = [1,2], b= False, c =('abc', 'efg'),d=1" desired_kwargs = {a : [1,2], b:False, c:('abc','efg'), d:1} When I try this code though, no love. from pyparsing import * # Names for symbols _quote = Suppress('"') _eq = Suppress('=') # Parsing grammar definition data = ( delimitedList( # Zero or more comma-separated items Group( # Group the contained unsuppressed tokens in a list Regex(u'[^=,)\s]+') + # Grab everything up to an equal, comma, endparen or whitespace as a token Optional( # Optionally... _eq + # match an = _quote + # a quote Regex(u'[^"]*') + # Grab everything up to another quote as a token _quote) # a quote ) # EndGroup - will have one or two items. )) # EndList def process(s): items = data.parseString(s).asList() args = [i[0] for i in items if len(i) == 1] kwargs = {i[0]:i[1] for i in items if len(i) == 2} return args,kwargs def hello_world(named_arg, named_arg_2 = 1, **kwargs): print(process(kwargs)) hello_world(1, 2, "my_kwargs_are_gross = True, some_bool=False, a_list=[1,2,3]") #output: "{my_kwargs_are_gross : True, some_bool:False, a_list:[1,2,3]}" Requirements: The '{' and '}' will be appended on the code side. Only standard types / standard iterables (list, tuple, etc) will be used in the kwargs-string. No special characters that I can think of... The kwargs-string will be like they are entered into a function on the python side, ie, 'x=1, y=2'. Not as a string of a dictionary. I think its a safe assumption that the first step in the string parse will be to remove all whitespace. | One option could be to use the ast module to parse some wrapping of the string that turns it into a valid Python expression. Then you can even use ast.literal_eval if youβre okay with everything it can produce: >>> import ast >>> kwargs = "a = [1,2], b= False, c =('abc', 'efg'),d=1" >>> expr = ast.parse(f"dict({kwargs}\n)", mode="eval") >>> {kw.arg: ast.literal_eval(kw.value) for kw in expr.body.keywords} {'a': [1, 2], 'b': False, 'c': ('abc', 'efg'), 'd': 1} | 4 | 11 |
75,135,530 | 2023-1-16 | https://stackoverflow.com/questions/75135530/error-exporting-styled-dataframes-to-image-syntaxerror-not-a-png-file-using | I've been using dataframe_image for a while and have had great results so far. Last week, out of a sudden, all my code containing the method dfi.export() stopped working with this error as an output raise SyntaxError("not a PNG file") File <string> SyntaxError: not a PNG file I can export the images passing the argument table_conversion='matplotlib' but they do not come out styled... This is my code: now = str(datetime.now()) filename = ("Extracciones-"+now[0:10]+".png") df_styled = DATAFINAL.reset_index(drop=True).style.apply(highlight_rows, axis=1) dfi.export(df_styled, filename,max_rows=-1) IMAGEN = Image.open(filename) IMAGEN.show() Any clues on why this just suddenly stopped working? Or any ideas to export dataframes as images (not using html)? These were the outputs i used to get: fully styled dataframe images and this is the only thing I can get right now Thank you in advance | dataframe_image has a dependency on Chrome, and a recent Chrome update (possibly v109 on 2013-01-10) broke dataframe_image. v0.1.5 was released on 2023-01-14 to fix this. pip install --upgrade dataframe_image pip show dataframe_image The version should now be v0.1.5 or later, which should resolve the problem. Some users have reported still having the error even after upgrading. This could be due to upgrading the package in the wrong directory (due to multiple installations of python, pip, virtual envs, etc). The reliable way to check the actual version of dataframe_image that the code is using, is to add this debugging code to the top of your python code: import pandas as pd import dataframe_image as dfi from importlib.metadata import version print(version('dataframe_image')) df = pd.DataFrame({'x':[1,2]}) dfi.export(df, 'out.png') exit() Also check chrome://version/ in your Chrome browser. | 4 | 3 |
75,107,763 | 2023-1-13 | https://stackoverflow.com/questions/75107763/why-binary-mode-when-reading-writing-toml-in-python | When reading a toml file in normal read ("r") mode, I get an error import tomli with open("path_to_file/conf.toml", "r") as f: # have to use "rb" ! toml_dict = tomli.load(f) TypeError: File must be opened in binary mode, e.g. use open('foo.toml', 'rb') Same happens when writing a toml file. Why? tomli github readme says The file must be opened in binary mode (with the "rb" flag). Binary mode will enforce decoding the file as UTF-8 with universal newlines disabled, both of which are required to correctly parse TOML. I thought the age of typewriters was over, so why is the "universal newline" not allowed? toml spec says "Newline means LF (0x0A) or CRLF (0x0D 0x0A)" (poor Mac users) - that also doesn't clarify the reason to me... so, what am I missing? | To wrap this up, the problem/behavior described in the question is actually a specific case of a more general problem: how to enforce a specific decoding when reading a text file with Python's open built-in. Or rephrased: ensure the file has a specific encoding. tomli requires the user to handle the file IO, so the user could also use an arbitrary encoding in open(path-to-file, "r", encoding=...). However, the toml specification requires the input to be UTF-8. tomli implements this requirement by forcing the user to use binary mode "b" when reading the file, then does the decoding based on the read bytes (src). | 3 | 5 |
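For completeness, a minimal usage sketch that satisfies tomli's requirement, using the path from the question — open in binary mode and let the library do the UTF-8 decoding itself:

import tomli

with open("path_to_file/conf.toml", "rb") as f:   # "rb": tomli decodes the bytes as UTF-8 itself
    toml_dict = tomli.load(f)

# tomli.loads() is the alternative if you already hold the document as a str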
75,150,625 | 2023-1-17 | https://stackoverflow.com/questions/75150625/pandas-dataframe-check-list-in-column-an-set-value-in-different-column | I need your help for the following task: I have the following dataframe: test = {'Col1':[2,5], 'Col2':[5,7], 'Col_List':[['One','Two','Three','Four','Five'], ['Two', 'Four']], 'One':[0,0], 'Two':[0,0], 'Three':[0,0], 'Four':[0,0], 'Five':[0,0],} df=pd.DataFrame.from_dict(test) df which looks like: Col1 Col2 Col_List One Two Three Four Five 2 5 [One, Two, Three, Four, Five] 0 0 0 0 0 5 7 [Two, Four] 0 0 0 0 0 I need to inspect the list in Col_List and set, depending on which item is in the list, the value of column Col1 in the specific column (One, Two, Three, Four or Five). Now I would like to have the following result: Col1 Col2 Col_List One Two Three Four Five 2 5 [One, Two, Three, Four, Five] 2 2 2 2 2 5 7 [Two, Four] 0 5 0 5 0 | exploded = df.explode("Col_List") df.update(pd.get_dummies(exploded["Col_List"]) .mul(exploded["Col1"], axis="rows") .groupby(level=0).sum()) explode lists' elements to their own rows get 1-hot representation of "One", "Two" etc. multiply it with the (exploded) "Col1" values 1/0 values will act as a selector then undo the explosion: groupby & sum lastly update the original frame's "One", "Two"... columns with this to get >>> df Col1 Col2 Col_List One Two Three Four Five 0 2 5 [One, Two, Three, Four, Five] 2 2 2 2 2 1 5 7 [Two, Four] 0 5 0 5 0 | 5 | 3 |
75,149,969 | 2023-1-17 | https://stackoverflow.com/questions/75149969/duplicate-row-in-pandas-dataframe-based-on-condition-then-update-a-new-column-b | I have a dataframe that looks like : df = pd.DataFrame({'qty': [10,7,2,1], 'status 1': [5,2,2,0], 'status 2': [3,2,0,1], 'status 3': [2,3,0,0] }) Each row has a qty of items. These items have one status (1,2 or 3). So qty = sum of values of status 1,2,3. I would like to : Duplicate each row by the "qty" column Then edit 3 status (or update a new column), to get just 1 status. The output should look like this: Edit: the order is not important, but I will need to keep other columns of my initial df. My (incomplete) solution so far - I found a way to duplicate the rows using this : df2= df2.loc[df2.index.repeat(df2['qty'])].reset_index(drop=True) But I can't find a way to fill the status. Do I need to use a for loop approach to fill the status? Should I do this transform in 1 or 2 steps? Something like: for each initial row, the n first rows take the first status, where n is the value of status 2.... The output could maybe looks like : Edit1 : Thank you for your answers ! Last question : now I'm trying to integrate this to my actual df. What is the best approach to apply these methods to my df who contains many other column ? I will answer my last question : Split df in 2: dfstatus and dfwithoutstatus, keeping the qty column in both Apply one of your method on the dfstatus Apply my method on the dfwithoutstatus (a simple duplication) Merge on index Thank you all for your answers. Best | Here is a possible solution: import numpy as np import pandas as pd E = pd.DataFrame(np.eye(df.shape[1] - 1, dtype=int)) result = pd.DataFrame( df['qty'].reindex(df.index.repeat(df['qty'])).reset_index(drop=True), ) result[df.columns[1:]] = pd.concat( [E.reindex(E.index.repeat(df.iloc[i, 1:])) for i in range(len(df))], ).reset_index( drop=True, ) Here is the result: >>> result qty status 1 status 2 status 3 0 10 1 0 0 1 10 1 0 0 2 10 1 0 0 3 10 1 0 0 4 10 1 0 0 5 10 0 1 0 6 10 0 1 0 7 10 0 1 0 8 10 0 0 1 9 10 0 0 1 10 7 1 0 0 11 7 1 0 0 12 7 0 1 0 13 7 0 1 0 14 7 0 0 1 15 7 0 0 1 16 7 0 0 1 17 2 1 0 0 18 2 1 0 0 19 1 0 1 0 | 4 | 3 |
75,148,272 | 2023-1-17 | https://stackoverflow.com/questions/75148272/how-to-convert-pandas-dataframe-to-nested-dictionary | I have a pandas DataFrame like this: id unit step phase start_or_end_of_phase op_name occurence 1 A 50l LOAD start P12load5 2 2 A 50l LOAD end P12load5 2 3 A 50l STIR start P12s5 4 4 A 50l STIR end P13s5 3 5 A 50l COLLECT start F7_col1 1 6 A 50l COLLECT end H325_col1 1 7 A 1000l SET_TEMP start xyz 2 8 A 1000l SET_TEMP end qwe 3 9 A 1000l SET_TEMP2 start asf 4 10 A 1000l SET_TEMP2 end fdsa 5 11 A 1000l FILTER start 4fags 1 11 A 1000l FILTER end mllsgrs_1 1 12 B MACHINE1 ... ... ... ... ...and want to create nested dictionaries like this: A = {50l : { 'LOAD' : {'start':{'op_name' : 'p12load5', 'occurrence': 2}, 'end':{'op_name': 'P12load5', 'occurrence': 2}}, 'STIR': {'start':{'op_name' : 'P12s5', 'occurrence': 4}, 'end':{'op_name': 'P13s5', 'occurrence': 3}}, 'COLLECT': {'start':{'op_name' : 'F7_col1', 'occurrence': 1}, 'end':{'op_name': 'H325_col1', 'occurrence': 1}} }, 1000l : { 'SET_TEMP' : .... I have been trying to combine groupby() with to_dict() but couldn't wrap my head around it. My last attempt was this (based on How to convert pandas dataframe to nested dictionary): populated_dict = process_steps_table.groupby(['unit', 'step', 'phase', 'start_or_end_phase']).apply(lambda x: x.set_index('start_or_end_phase').to_dict(orient='index')).to_dict() and got his error: DataFrame index must be unique for orient='index'. I am not sure if I have to apply the set_index() lambda function to the groups and why. | You have to reshape your dataframe before export as dictionary: nested_cols = ['step', 'phase', 'start_or_end_of_phase'] value_cols = ['op_name', 'occurence'] # Reshape your dataframe df1 = df.set_index(nested_cols)[value_cols].stack() # Export nested dict d = {} # items(): # t -> flatten index to convert to nested dict # v -> last level of your nested dict (values) for t, v in df1.items(): e = d.setdefault(t[0], {}) # create a new entry with an empty dict for k in t[1:-1]: e = e.setdefault(k, {}) # create a nested sub entry with an empty dict e[t[-1]] = v # finally add values when you reach the end of the index Output import json # just for a best representation print(json.dumps(d, indent=4)) # Output { "50l": { "LOAD": { "start": { "op_name": "P12load5", "occurence": 2 }, "end": { "op_name": "P12load5", "occurence": 2 } }, "STIR": { "start": { "op_name": "P12s5", "occurence": 4 }, "end": { "op_name": "P13s5", "occurence": 3 } }, "COLLECT": { "start": { "op_name": "F7_col1", "occurence": 1 }, "end": { "op_name": "H325_col1", "occurence": 1 } } }, "1000l": { "SET_TEMP": { "start": { "op_name": "xyz", "occurence": 2 }, "end": { "op_name": "qwe", "occurence": 3 } }, "SET_TEMP2": { "start": { "op_name": "asf", "occurence": 4 }, "end": { "op_name": "fdsa", "occurence": 5 } }, "FILTER": { "start": { "op_name": "4fags", "occurence": 1 }, "end": { "op_name": "mllsgrs_1", "occurence": 1 } } } } | 3 | 4 |
75,127,785 | 2023-1-15 | https://stackoverflow.com/questions/75127785/mypy-seems-to-think-that-args-kwargs-could-match-to-any-funtion-signature | How does mypy apply the Liskov substitution principle to *args, **kwargs parameters? I thought the following code should fail a mypy check since some calls to f allowed by the Base class are not allowed by C, but it actually passed. Are there any reasons for this? from abc import ABC, abstractmethod from typing import Any class Base(ABC): @abstractmethod def f(self, *args: Any, **kwargs: Any) -> int: pass class C(Base): def f(self, batch: int, train: bool) -> int: return 1 I also tried to remove either *args or **kwargs, both failed. | Unlike Daniil said in currently accepted answer, the reason is exactly (*args: Any, **kwargs: Any) signature part. Please check the corresponding discussion on mypy issue tracker: I actually like this idea, I have seen this confusion several times, and although it is a bit unsafe, most of the time when people write (*args, **kwargs) it means "don't care", rather than "should work for all calls". [GVR] Agreed, this is a case where practicality beats purity. So, mypy gives a special treatment to functions of form # _T is arbitrary type class _: def _(self, *args, **kwargs) -> _T: ... and considers them fully equivalent to Callable[..., _T]. Yes, this actually violates LSP, of course, but this was designed specially to allow declaring functions with signature "just ignore my parameters". To declare the broadest possible function that really accepts arbitrary positional and keyword arguments, you should use object in signature instead. | 7 | 5 |
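Following the answer's last paragraph, a sketch of the safer spelling — using object instead of Any in the base signature, which (per the answer) does not get the special treatment, so mypy should then report the narrower override; class names mirror the question:

from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def f(self, *args: object, **kwargs: object) -> int:
        ...

class C(Base):
    def f(self, batch: int, train: bool) -> int:   # mypy should now flag this override as incompatible with Base
        return 1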
75,144,059 | 2023-1-17 | https://stackoverflow.com/questions/75144059/python-playwright-start-maximized-window | I have a problem starting Playwright in Python maximized. I found some articles for other languages but doesn't work in Python, also nothing is written about maximizing window in Python in the official documentation. I tried browser = p.chromium.launch(headless=False, args=["--start-maximized"]) And it starts maximized but then automatically restores back to the default small window size. Any ideas? Thanks | I just found the answer: I need to set also the following and it works: browser.new_context(no_viewport=True) | 6 | 4 |
75,142,154 | 2023-1-17 | https://stackoverflow.com/questions/75142154/refrehing-the-django-model-after-save-and-5-second-sleep-get-me-old-state-what | I was able to save the data to a Django model without any errors, but data not reflected in db. But after a sleep time I was able to save the data again with same method. What might be causing this ? I suspect use of the Google API, but was able to print the data before performing the save operation. def update_channel(): client = Client.objects.get(name="name") print(f"Existing channel: {data['id']}") # 123 # fetch channel data from google api data = google_drive.subscribe_new_channel() client.channel_id = data["id"] client.channel_resource_id = data["resourceId"] client.save() client.refresh_from_db() print(f"New channel: {data['id']}") # 456 print(f"New channel in db: {client.channel_id}") # 456 time.sleep(5) client.refresh_from_db() print(f"channel in db: {client.channel_id}") # 123 Sample Output: Existing channel: 123 New channel: 456 New channel in db: 456 channel in db: 123 | This can happen if another process has already fetched the same client object and saved the object after your save operation. In this case, the data in the second process still be the old one and overwrites your change when it saves. | 3 | 3 |
75,141,732 | 2023-1-17 | https://stackoverflow.com/questions/75141732/regex-only-replace-if-wrapped-in-certain-function | I am looking to create a python function that will take a long SQL script that I have to create a table and place the session variables into the script so it can be used as a view within Snowflake. For example, SET TABLE_NAME = MY_TABLE_NAME; CREATE OR REPLACE VIEW MY_VIEW AS ( SELECT * FROM IDENTIFIER($TABLE_NAME) ) With the python script, the previous block becomes CREATE OR REPLACE VIEW MY_VIEW AS( SELECT * FROM MY_TABLE ) However, during testing, I realized that the if the variable name is within another function, the last parenthesis is captured and is removed. Is there a way that I can replace the string with the variable value only if it is wrapped in the identifier function? I would like this code: IDENTIFIER($VAR_NAME) identifier($VAR_NAME) SELECT * FROM $VAR_NAME DATEADD('DAY',+1,$VAR_NAME) To become: VAR_NAME VAR_NAME SELECT * FROM VAR_NAME DATEADD('DAY',+1,VAR_NAME) This is what I have tried so far. https://regex101.com/r/2SriK9/2 Thanks. P.S. In the last example, if var_name were a function, it would need to have the function and then close with a closing parenthesis: DATEADD('DAY',+1,MY_FUNC()) [Currently, my output makes it DATEADD('DAY',+1,MY_FUNC()] with no closing parenthesis on the dateadd function. | There are two patterns you're looking for here: one enclosed in the identifier function, and the other with just a preceding $ character, so you can use an alternation pattern to search for both of them, capture the variable names of each, if any, and replace the match with what's captured. Find (with the case-insensitive flag): identifier\(\$(\w+)\)|\$(\w+) Replace with: \1\2 Demo: https://regex101.com/r/2SriK9/3 | 4 | 0 |
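Put into Python, a minimal sketch of that find/replace using re.sub with the case-insensitive flag; the sample text is taken from the question (in Python, a group that did not match expands to an empty string in the replacement, which is what makes \1\2 work):

import re

pattern = r"identifier\(\$(\w+)\)|\$(\w+)"

text = """IDENTIFIER($VAR_NAME)
identifier($VAR_NAME)
SELECT * FROM $VAR_NAME
DATEADD('DAY',+1,$VAR_NAME)"""

print(re.sub(pattern, r"\1\2", text, flags=re.IGNORECASE))
# IDENTIFIER($VAR_NAME) -> VAR_NAME, and DATEADD('DAY',+1,$VAR_NAME) keeps its closing parenthesis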
75,113,742 | 2023-1-13 | https://stackoverflow.com/questions/75113742/improving-performance-for-a-nested-for-loop-iterating-over-dates | I am looking to learn how to improve the performance of code over a large dataframe (10 million rows) and my solution loops over multiple dates (2023-01-10, 2023-01-20, 2023-01-30) for different combinations of category_a and category_b. The working approach is shown below, which iterates over the dates for different pairings of the two-category data by first locating a subset of a particular pair. However, I would want to refactor it to see if there is an approach that is more efficient. My input (df) looks like: date category_a category_b outflow open inflow max close buy random_str 0 2023-01-10 4 1 1 0 0 10 0 0 a 1 2023-01-20 4 1 2 0 0 20 nan nan a 2 2023-01-30 4 1 10 0 0 20 nan nan a 3 2023-01-10 4 2 2 0 0 10 0 0 b 4 2023-01-20 4 2 2 0 0 20 nan nan b 5 2023-01-30 4 2 0 0 0 20 nan nan b with 2 pairs (4, 1) and (4,2) over the days and my expected output (results) looks like this: date category_a category_b outflow open inflow max close buy random_str 0 2023-01-10 4 1 1 0 0 10 -1 23 a 1 2023-01-20 4 1 2 -1 23 20 20 10 a 2 2023-01-30 4 1 10 20 10 20 20 nan a 3 2023-01-10 4 2 2 0 0 10 -2 24 b 4 2023-01-20 4 2 2 -2 24 20 20 0 b 5 2023-01-30 4 2 0 20 0 20 20 nan b I have a working solution using pandas dataframes to take a subset then loop over it to get a solution but I would like to see how I can improve the performance of this using perhaps ;numpy, numba, pandas-multiprocessing or dask. Another great idea was to rewrite it in BigQuery SQL. I am not sure what the best solution would be and I would appreciate any help in improving the performance. Minimum working example The code below generates the input dataframe. import pandas as pd import numpy as np # prepare the input df df = pd.DataFrame({ 'date' : ['2023-01-10', '2023-01-20','2023-01-30', '2023-01-10', '2023-01-20','2023-01-30'] , 'category_a' : [4, 4,4,4, 4, 4] , 'category_b' : [1, 1,1, 2, 2,2] , 'outflow' : [1.0, 2.0,10.0, 2.0, 2.0, 0.0], 'open' : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] , 'inflow' : [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] , 'max' : [10.0, 20.0, 20.0 , 10.0, 20.0, 20.0] , 'close' : [0.0, np.nan,np.nan, 0.0, np.nan, np.nan] , 'buy' : [0.0, np.nan,np.nan, 0.0, np.nan,np.nan], 'random_str' : ['a', 'a', 'a', 'b', 'b', 'b'] }) df['date'] = pd.to_datetime(df['date']) # get unique pairs of category_a and category_b in a dictionary unique_pairs = df.groupby(['category_a', 'category_b']).size().reset_index().rename(columns={0:'count'})[['category_a', 'category_b']].to_dict('records') unique_dates = np.sort(df['date'].unique()) Using this input dataframe and Numpy, the code below is what I am trying to optmizize. 
df = df.set_index('date') day_0 = unique_dates[0] # first date # Using Dictionary comprehension list_of_numbers = list(range(len(unique_pairs))) myset = {key: None for key in list_of_numbers} for count_pair, value in enumerate(unique_pairs): # pair of category_a and category_b category_a = value['category_a'] category_b = value['category_b'] # subset the dataframe for the pair df_subset = df.loc[(df['category_a'] == category_a) & (df['category_b'] == category_b)] log.info(f" running for {category_a} and {category_b}") # day 0 df_subset.loc[day_0, 'close'] = df_subset.loc[day_0, 'open'] + df_subset.loc[day_0, 'inflow'] - df_subset.loc[day_0, 'outflow'] # loop over single pair using date for count, date in enumerate(unique_dates[1:], start=1): previous_date = unique_dates[count-1] df_subset.loc[date, 'open'] = df_subset.loc[previous_date, 'close'] df_subset.loc[date, 'close'] = df_subset.loc[date, 'open'] + df_subset.loc[date, 'inflow'] - df_subset.loc[date, 'outflow'] # check if closing value is negative, if so, set inflow to buy for next weeks deficit if df_subset.loc[date, 'close'] < df_subset.loc[date, 'max']: df_subset.loc[previous_date, 'buy'] = df_subset.loc[date, 'max'] - df_subset.loc[date, 'close'] + df_subset.loc[date, 'inflow'] elif df_subset.loc[date, 'close'] > df_subset.loc[date, 'max']: df_subset.loc[previous_date, 'buy'] = 0 else: df_subset.loc[previous_date, 'buy'] = df_subset.loc[date, 'inflow'] df_subset.loc[date, 'inflow'] = df_subset.loc[previous_date, 'buy'] df_subset.loc[date, 'close'] = df_subset.loc[date, 'open'] + df_subset.loc[date, 'inflow'] - df_subset.loc[date, 'outflow'] # store all the dataframes in a container myset myset[count_pair] = df_subset # make myset into a dataframe result = pd.concat(myset.values()).reset_index(drop=False) result After which we can check that the solution is the same as what we expected. from pandas.testing import assert_frame_equal expected = pd.DataFrame({ 'date' : [pd.Timestamp('2023-01-10 00:00:00'), pd.Timestamp('2023-01-20 00:00:00'), pd.Timestamp('2023-01-30 00:00:00'), pd.Timestamp('2023-01-10 00:00:00'), pd.Timestamp('2023-01-20 00:00:00'), pd.Timestamp('2023-01-30 00:00:00')] , 'category_a' : [4, 4, 4, 4, 4, 4] , 'category_b' : [1, 1, 1, 2, 2, 2] , 'outflow' : [1, 2, 10, 2, 2, 0] , 'open' : [0.0, -1.0, 20.0, 0.0, -2.0, 20.0] , 'inflow' : [0.0, 23.0, 10.0, 0.0, 24.0, 0.0] , 'max' : [10, 20, 20, 10, 20, 20] , 'close' : [-1.0, 20.0, 20.0, -2.0, 20.0, 20.0] , 'buy' : [23.0, 10.0, np.nan, 24.0, 0.0, np.nan] , 'random_str' : ['a', 'a', 'a', 'b', 'b', 'b'] }) # check that the result is the same as expected assert_frame_equal(result, expected) SQL to create first table The solution can also be in sql, if so you can use the following code to create the initial table. I am busy trying to implement a solution in big query sql using a user defined function to keep the logic going too. This would be a nice approach to solving the problem too. 
WITH data AS ( SELECT DATE '2023-01-10' as date, 4 as category_a, 1 as category_b, 1 as outflow, 0 as open, 0 as inflow, 10 as max, 0 as close, 0 as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-20' as date, 4 as category_a, 1 as category_b, 2 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-30' as date, 4 as category_a, 1 as category_b, 10 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'a' as random_str UNION ALL SELECT DATE '2023-01-10' as date, 4 as category_a, 2 as category_b, 2 as outflow, 0 as open, 0 as inflow, 10 as max, 0 as close, 0 as buy, 'b' as random_str UNION ALL SELECT DATE '2023-01-20' as date, 4 as category_a, 2 as category_b, 2 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'b' as random_str UNION ALL SELECT DATE '2023-01-30' as date, 4 as category_a, 2 as category_b, 0 as outflow, 0 as open, 0 as inflow, 20 as max, NULL as close, NULL as buy, 'b' as random_str ) SELECT ROW_NUMBER() OVER (ORDER BY date) as " ", date, category_a, category_b, outflow, open, inflow, max, close, buy, random_str FROM data | Efficient algorithm First of all, the complexity of the algorithm can be improved. Indeed, (df['category_a'] == category_a) & (df['category_b'] == category_b) travels the whole dataframe and this is done for each item in unique_pairs. The running time is O(U R) where U = len(unique_pairs) and R = len(df). An efficient solution is to perform a groupby, that is, to split the dataframe in M groups each sharing the same pair of category. This operation can be done in O(R) time where R is the number of rows in the dataframe. In practice, Pandas may implement this using a (comparison-based) sort running in O(R log R) time. Faster access & Conversion to Numpy Moreover, accessing a dataframe item per item using loc is very slow. Indeed, Pandas needs to locate the location of the column using an internal dictionary, find the row based on the provided date, extract the value in the dataframe based on the ith row and jth column, create a new object and return it, not to mention the several check done (eg. types and bounds). On top of that, Pandas introduces a significant overhead partially due to its code being interpreted using typically CPython. A faster solution is to extract the columns ahead of time, and to iterate over the row using integers instead of values (like dates). The thing is the order of the sorted date may not be the one in the dataframe subset. I guess it is the case for your input dataframe in practice, but if it is not, then you can sort the dataframe of each precomputed groups by date. I assume all the dates are present in all subset dataframe (but again, if this not the case, you can correct the result of the groupby). Each column can be converted to Numpy so to the can be faster. The result is a pure-Numpy code, not using Pandas anymore. Computationally-intensive Numpy codes are great since they can often be heavily optimized, especially when the target arrays contains native numerical types. 
Here is the implementation so far: df = df.set_index('date') day_0 = unique_dates[0] # first date # Using Dictionary comprehension list_of_numbers = list(range(len(unique_pairs))) myset = {key: None for key in list_of_numbers} groups = dict(list(df.groupby(['category_a', 'category_b']))) for count_pair, value in enumerate(unique_pairs): # pair of category_a and category_b category_a = value['category_a'] category_b = value['category_b'] # subset the dataframe for the pair df_subset = groups[(category_a, category_b)] # Extraction of the Pandas columns and convertion to Numpy ones col_open = df_subset['open'].to_numpy() col_close = df_subset['close'].to_numpy() col_inflow = df_subset['inflow'].to_numpy() col_outflow = df_subset['outflow'].to_numpy() col_max = df_subset['max'].to_numpy() col_buy = df_subset['buy'].to_numpy() # day 0 col_close[0] = col_open[0] + col_inflow[0] - col_outflow[0] # loop over single pair using date for i in range(1, len(unique_dates)): col_open[i] = col_close[i-1] col_close[i] = col_open[i] + col_inflow[i] - col_outflow[i] # check if closing value is negative, if so, set inflow to buy for next weeks deficit if col_close[i] < col_max[i]: col_buy[i-1] = col_max[i] - col_close[i] + col_inflow[i] elif col_close[i] > col_max[i]: col_buy[i-1] = 0 else: col_buy[i-1] = col_inflow[i] col_inflow[i] = col_buy[i-1] col_close[i] = col_open[i] + col_inflow[i] - col_outflow[i] # store all the dataframes in a container myset myset[count_pair] = df_subset # make myset into a dataframe result = pd.concat(myset.values()).reset_index(drop=False) result This code is not only faster, but also a bit easier to read. Fast execution using Numba At this point, the general solution is to use vectorized functions but this is really not easy to do that efficiently (if even possible) here due to the loop dependencies and the conditionals. A fast solution is to use a JIT compiler like Numba so to generate a very-fast implementation. Numba is designed to work efficiently on natively-typed Numpy arrays so this is the perfect use-case. Note that Numba need the input parameter to have a well-defined (native) type. Providing the types manually cause Numba to generate the code eagerly (during the definition of the function) instead of lazily (during the first execution). Here is the final resulting code: import numba as nb @nb.njit('(float64[:], float64[:], float64[:], int64[:], int64[:], float64[:], int64)') def compute(col_open, col_close, col_inflow, col_outflow, col_max, col_buy, n): # Important checks to avoid out-of bounds that are # not checked by Numba for sake of performance. # If they are not true and not done, then # the function can simply cause a crash. 
assert col_open.size == n and col_close.size == n assert col_inflow.size == n and col_outflow.size == n assert col_max.size == n and col_buy.size == n # day 0 col_close[0] = col_open[0] + col_inflow[0] - col_outflow[0] # loop over single pair using date for i in range(1, n): col_open[i] = col_close[i-1] col_close[i] = col_open[i] + col_inflow[i] - col_outflow[i] # check if closing value is negative, if so, set inflow to buy for next weeks deficit if col_close[i] < col_max[i]: col_buy[i-1] = col_max[i] - col_close[i] + col_inflow[i] elif col_close[i] > col_max[i]: col_buy[i-1] = 0 else: col_buy[i-1] = col_inflow[i] col_inflow[i] = col_buy[i-1] col_close[i] = col_open[i] + col_inflow[i] - col_outflow[i] df = df.set_index('date') day_0 = unique_dates[0] # first date # Using Dictionary comprehension list_of_numbers = list(range(len(unique_pairs))) myset = {key: None for key in list_of_numbers} groups = dict(list(df.groupby(['category_a', 'category_b']))) for count_pair, value in enumerate(unique_pairs): # pair of category_a and category_b category_a = value['category_a'] category_b = value['category_b'] # subset the dataframe for the pair df_subset = groups[(category_a, category_b)] # Extraction of the Pandas columns and convertion to Numpy ones col_open = df_subset['open'].to_numpy() col_close = df_subset['close'].to_numpy() col_inflow = df_subset['inflow'].to_numpy() col_outflow = df_subset['outflow'].to_numpy() col_max = df_subset['max'].to_numpy() col_buy = df_subset['buy'].to_numpy() # Numba-accelerated computation compute(col_open, col_close, col_inflow, col_outflow, col_max, col_buy, len(unique_dates)) # store all the dataframes in a container myset myset[count_pair] = df_subset # make myset into a dataframe result = pd.concat(myset.values()).reset_index(drop=False) result Feel free to change the type of the parameters if they do not match with the real-world input data-type (eg. int32 vs int64 or float64 vs int64). Note that you can replace things like float64[:] by float64[::1] if you know that the input array is contiguous which is likely the case. This generates a faster code. Also please note that myset can be a list since count_pair is an increasing integer. This is simpler and faster but it might be useful in your real-world code. Performance results The Numba function call runs in about 1 Β΅s on my machine as opposed to 7.1 ms for the initial code. This means the hot part of the code is 7100 times faster just on the tiny example. That being said, Pandas takes some time to convert the columns to Numpy, to create groups and to merge the dataframes. The former takes a small constant time negligible for large arrays. The two later operations take more time on bigger input dataframes and they are actually the main bottleneck on my machine (both takes 1 ms on the small example). Overall, the whole initial code takes 16.5 ms on my machine for the tiny example dataframe, while the new one takes 3.1 ms. This means a 5.3 times faster code just for this small input. On bigger input dataframes the speed-up should be significantly better. Finally, please not that df.groupby(['category_a', 'category_b']) was actually already precomputed so I am not even sure we should include it in the benchmark ;) . | 14 | 7 |
75,138,693 | 2023-1-16 | https://stackoverflow.com/questions/75138693/pandas-isin-function-in-polars | Once in a while I get to the point where I need to run the following line: DF['is_flagged'] = DF['id'].isin(DF2[DF2['flag']==1]['id']) Lately I started using polars, and I wonder how to convert it easily to polars. For example: df1 = pl.DataFrame({ 'Animal_id': [1, 2, 3, 4, 5, 6, 7], 'age': [4, 6, 3, 8, 3, 8, 9] }) df2 = pl.DataFrame({ 'Animal_id': [1, 2, 3, 4, 5, 6, 7], 'Animal_type': ['cat', 'dog', 'cat', 'cat', 'dog', 'dog', 'cat'] }) Expected output: shape: (7, 3) βββββββββββββ¬ββββββ¬βββββββββ β animal_id β age β is_dog β β --- β --- β --- β β i64 β i64 β i64 β βββββββββββββͺββββββͺβββββββββ‘ β 1 β 4 β 0 β β 2 β 6 β 1 β β 3 β 3 β 0 β β 4 β 8 β 0 β β 5 β 3 β 1 β β 6 β 8 β 1 β β 7 β 9 β 0 β βββββββββββββ΄ββββββ΄βββββββββ Without using flag and then join I tried to use the is_in() function but this didnβt worked. | So to be honest I am not quite sure from your question, your pandas snippet and your example what your desired solution is, but here are my three takes. import polars as pl df1 = pl.DataFrame( {"Animal_id": [1, 2, 3, 4, 5, 6, 7], "age": [4, 6, 3, 8, 3, 8, 9]} ).lazy() df2 = pl.DataFrame( { "Animal_id": [1, 2, 3, 4, 5, 6, 7], "Animal_type": ["cat", "dog", "cat", "cat", "dog", "dog", "cat"], } ).lazy() 1 Solution So this one is a small adaptation of the solution of @ignoring_gravity. So the assumption in his solution is that the DataFrames have the same length and the Animal_id matches in both tables. If that was your goal I want to give you another solution because by subsetting (["is_dog"]) you lose the possibility to use the lazy api. df1.with_context( df2.select(pl.col("Animal_type").is_in(["dog"]).cast(int).alias("is_dog")) ).select(["Animal_id", "age", "is_dog"]).collect() 2 Solution So in case you want something more similar to your pandas snippet and because you wrote you don't want to have a join. df1.with_context( df2.filter(pl.col("Animal_type") == "dog").select( pl.col("Animal_id").alias("valid_ids") ) ).with_columns( [pl.col("Animal_id").is_in(pl.col("valid_ids")).cast(int).alias("is_dog")] ).collect() 3 Solution So this would be the solution with a join. In my opinion the best solution regarding your example and example output, but maybe there are other reasons that speak against a join, which aren't apparent from your example. df1.join( df2.select( ["Animal_id", pl.col("Animal_type").is_in(["dog"]).cast(int).alias("is_dog")] ), on=["Animal_id"], ).collect() | 5 | 3 |
75,136,603 | 2023-1-16 | https://stackoverflow.com/questions/75136603/how-to-import-numpy-using-layer-in-lambda-aws | I am trying to add the numpy package to my lambda function, but I can't import it. I have followed several tutorials, but they all have the same problem. On my last attempt, I executed the following step by step: I created a lambda function in AWS and tested it Installed the numpy package on my local machine and zipped it Created lambda layer and loaded the numpy package Fixed the layer to lambda and tested When I run my lambda function without importing numpy, it works perfectly, however when I import it I get this error: { "errorMessage": "Unable to import module 'lambda_function': No module named 'numpy'", "errorType": "Runtime.ImportModuleError", "requestId": "0153834a-6b28-44d1-889f-3e2e3ead9c4a", "stackTrace": [] } A very common error on the forum, however everything is fine with lambda_function, because as I said it works fine if I'm not importing any module. lambda_function.py import json import numpy def lambda_handler(event, context): # TODO implement return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } I would like to learn how to use Layers inside Lambda. | There are at least two ways to add NumPy to your AWS Lambda as layer: Add an AWS (Managed) Layer: From your Lambda console, choose AWS Layers and AWSSDKPandas-Python39 (Choose a layer accordingly for your Python version). Since Pandas is built on top of NumPy, you should be able to use NumPy once you add this Pandas layer. Add a custom layer: Create a virtual environment on your local machine, install NumPy, create a zip file, and upload to AWS as a layer. Ensure the version of Python in your local Virtual env is the same as the Lambda. Step by step instruction here. This is a general approach which you can use to add any external Python libraries as Lambda layers. | 4 | 4 |
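For option 2, a rough command-line sketch of building and publishing the custom layer — the layer name and runtime version are placeholders, and the essential detail is that the packages must sit under a top-level python/ folder inside the zip:

# build the layer contents locally (match the Lambda's Python version)
pip install numpy -t layer/python
cd layer && zip -r ../numpy-layer.zip python && cd ..

# upload the zip as a layer and note the returned LayerVersionArn
aws lambda publish-layer-version \
    --layer-name numpy-layer \
    --zip-file fileb://numpy-layer.zip \
    --compatible-runtimes python3.9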
75,135,206 | 2023-1-16 | https://stackoverflow.com/questions/75135206/printing-all-member-names-of-a-enum-class | class MyValues(enum.Enum): value1 = 1 value2 = 2 value3 = 1 print(MyValues._member_names_) Output would be a list with only the first two members [value1,value2] I would also like to have value3 in that list, i tried with aenum with NoAlias setting but did not work. Is there any way to get all members even if they have duplicate values? | _member_names_ is not part of the Enum documented public API. You should not use it. __members__ is what you're looking for. It's a mapping from member names to members, including all aliases. You can use list(MyValues.__members__) if you just want a list of the names. | 3 | 5 |
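A tiny sketch with the enum from the question, showing that __members__ keeps the alias while normal iteration skips it:

import enum

class MyValues(enum.Enum):
    value1 = 1
    value2 = 2
    value3 = 1          # alias of value1 because the value repeats

print(list(MyValues.__members__))       # ['value1', 'value2', 'value3']
print([m.name for m in MyValues])       # ['value1', 'value2'] - aliases are skipped
print(MyValues.__members__["value3"])   # MyValues.value1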
75,116,947 | 2023-1-14 | https://stackoverflow.com/questions/75116947/how-to-send-messages-to-telegram-using-python | I have a python script in which logs messages to the console. But I also want those messages to be sent to the telegram using my bot. Any hint and suggestion will be helpful. Thanks in advance. I haven't tried anything yet, just got thought if that possible or not and if it is then how? | Make A Telegram Bot Using BotFather for Telegram Search for BotFather in your Telegram client by opening it. A pre-installed Telegram bot that assists users in making original Telegram bots To create a new bot, enter /newbot. Name and username your bot. Copy the token for your new Telegram bot. Please take note that anyone who obtains your token has complete control over your Telegram bot, so avoid uploading it online. Obtaining a chat ID To send Telegram messages using Python, we require the conversation ID that each chat in Telegram holds. Send a message to your Telegram bot (any random message) Use this Python program to locate your chat ID. import requests TOKEN = "YOUR TELEGRAM BOT TOKEN" url = f"https://api.telegram.org/bot{TOKEN}/getUpdates" print(requests.get(url).json()) The getUpdates function, which is invoked by this script, sort of checks for new messages. Our chat ID can be located in the JSON that was returned (the one in red) Please take note that your results can be empty if you don't message your Telegram bot. Copy the chat ID and paste it in the following step. Python-based Telegram message delivery The following Python script requires you to enter 1) your Telegram bot token and 2) your chat ID from the previous two steps. (Be sure to include modify your message.) import requests TOKEN = "YOUR TELEGRAM BOT TOKEN" chat_id = "YOUR CHAT ID" message = "hello from your telegram bot" url = f"https://api.telegram.org/bot{TOKEN}/sendMessage?chat_id={chat_id}&text={message}" print(requests.get(url).json()) # this sends the message Run the Python script and check your Telegram! | 9 | 30 |
75,125,071 | 2023-1-15 | https://stackoverflow.com/questions/75125071/extend-list-with-another-list-in-specific-index | In python we can add lists to each other with the extend() method but it adds the second list at the end of the first list. lst1 = [1, 4, 5] lst2 = [2, 3] lst1.extend(lst2) Output: [1, 4, 5, 2, 3] How would I add the second list to be apart of the 1st element? Such that the result is this; [1, 2, 3, 4, 5 ] I've tried using lst1.insert(1, *lst2) and got an error; TypeError: insert expected 2 arguments, got 3 | For those who don't like reading comments: lst1 = [1, 4, 5] lst2 = [2, 3] lst1[1:1] = lst2 print(lst1) Output: [1, 2, 3, 4, 5] | 9 | 13 |
75,118,159 | 2023-1-14 | https://stackoverflow.com/questions/75118159/generate-specific-toeplitz-covariance-matrix | I want to generate a statistical sample from a multidimensional normal distribution. For this I need to generate one specific kind of covariance matrix: 1 0.99 0.98 0.97 ... 0.99 1 0.99 0.98 ... 0.98 0.99 1 0.99 ... 0.97 0.98 0.99 1 ... ... ... ... ... Is there a way to generate this kind of matrix for multiple dimensions easily, without writing it by hand? (I need to have matrices with 50-100 dimensions, so doing it by hand is very tedious.) | You can use the scipy.linalg.toeplitz function, as it is made for exactly this kind of matrix: >>> import numpy as np >>> from scipy import linalg >>> linalg.toeplitz(np.arange(1,0,-0.1), np.arange(1,0,-0.1)) array([[1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1], [0.9, 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], [0.8, 0.9, 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3], [0.7, 0.8, 0.9, 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4], [0.6, 0.7, 0.8, 0.9, 1. , 0.9, 0.8, 0.7, 0.6, 0.5], [0.5, 0.6, 0.7, 0.8, 0.9, 1. , 0.9, 0.8, 0.7, 0.6], [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 0.9, 0.8, 0.7], [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 0.9, 0.8], [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 0.9], [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]]) | 3 | 3 |
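To close the loop on the original goal of drawing a multivariate normal sample with that covariance, a sketch for an arbitrary dimension n — the 0.01 step keeps a 50-100 dimensional matrix positive definite, but other decay patterns may need a check (e.g. via np.linalg.cholesky):

import numpy as np
from scipy import linalg

n = 50                                   # number of dimensions
first_row = 1 - 0.01 * np.arange(n)      # 1, 0.99, 0.98, ...
cov = linalg.toeplitz(first_row)         # one argument is enough for a symmetric Toeplitz matrix

rng = np.random.default_rng(0)
sample = rng.multivariate_normal(mean=np.zeros(n), cov=cov, size=1000)
print(sample.shape)                      # (1000, 50)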
75,116,574 | 2023-1-14 | https://stackoverflow.com/questions/75116574/interpolation-using-asfreqd-in-multiindex | The following code generates two DataFrames: frame1=pd.DataFrame({'dates':['2023-01-01','2023-01-07','2023-01-09'],'values':[0,18,28]}) frame1['dates']=pd.to_datetime(frame1['dates']) frame1=frame1.set_index('dates') frame2=pd.DataFrame({'dates':['2023-01-08','2023-01-12'],'values':[8,12]}) frame2['dates']=pd.to_datetime(frame2['dates']) frame2=frame2.set_index('dates') Using frame1.asfreq('D').interpolate() frame2.asfreq('D').interpolate() we can interpolate their values between the days to obtain and However, consider now the concatenation table: frame1['frame']='f1' frame2['frame']='f2' concat=pd.concat([frame1,frame2]) concat=concat.set_index('frame',append=True) concat=concat.reorder_levels(['frame','dates']) concat I want to do the interpolation using one command like concat.groupby('frame').apply(lambda g:g.asfreq('D').interpolate()) direktly in the concatenation table. Unfortunately, my above command does not work but raises a TypeError: TypeError: Cannot convert input [('f1', Timestamp('2023-01-01 00:00:00'))] of type <class 'tuple'> to Timestamp How do I fix that command to work? | You have to drop the first level index (the group key) before use asfreq like your initial dataframes: >>> concat.groupby('frame').apply(lambda g: g.loc[g.name].asfreq('D').interpolate()) values frame dates f1 2023-01-01 0.0 2023-01-02 3.0 2023-01-03 6.0 2023-01-04 9.0 2023-01-05 12.0 2023-01-06 15.0 2023-01-07 18.0 2023-01-08 23.0 2023-01-09 28.0 f2 2023-01-08 8.0 2023-01-09 9.0 2023-01-10 10.0 2023-01-11 11.0 2023-01-12 12.0 To debug use a named function instead of a lambda function: def interpolate(g): print(f'[Group {g.name}]') print(g.loc[g.name]) print() return g.loc[g.name].asfreq('D').interpolate() out = concat.groupby('frame').apply(interpolate) Output: [Group f1] values dates 2023-01-01 0 2023-01-07 18 2023-01-09 28 [Group f2] values dates 2023-01-08 8 2023-01-12 12 | 4 | 5 |
75,115,582 | 2023-1-14 | https://stackoverflow.com/questions/75115582/how-to-load-toml-file-in-python | How do I load a toml file in Python? My python file: import toml toml.get("first").name My toml file: [first] name = "Mark Wasfy" age = 22 [second] name = "John Wasfy" age = 25 | The toml file can be loaded directly from its path, without opening it as a file object first: import toml data = toml.load("./config.toml") print(data["first"]["name"]) | 7 | 10 |
75,115,515 | 2023-1-14 | https://stackoverflow.com/questions/75115515/does-anything-supercede-pep-8 | Trying to go from a script kiddie to a semi-respectable software engineer and need to learn how to write clean, digestible code. The book I'm reading pointed me towards PEP 8 - I know this is the foundational styling guide for Python. What I can't seem to figure out is if all the guidelines are still valid today in 2022 and nothing has changed since its last update in 2013 OR if there are supplemental PEPs I should be reading. Visited https://peps.python.org/pep-0000/ and started browsing through the different releases but got confused and unsure which PEPs besides 8 have to do with style guidelines. I found this previous question from 9 years ago and wanted to see if any of the answers have changed. | At the top, below the authors, you can see it says Status: Active As the tooltip explains, that means PEP 8 is Currently valid informational guidance, or an in-use process If the PEP is ever replaced it will say "Status: Superseded". At the bottom of the page it says: Last modified: 2022-05-11 17:45:05 GMT You can check the link to see what changes have been made to PEP 8 when. | 6 | 4 |
75,103,151 | 2023-1-12 | https://stackoverflow.com/questions/75103151/using-pyfirmata-with-a-simulator-like-simulide | I'm trying to learn PyFirmata, but don't want to get all of the hardware I need to do so. Instead I want to use a simulator, right now I'm using SimulIDE. In order to use PyFirmata, a board has to be connected to a COM port. Is there a way to get around this and use SimulIDE, or another simulator, instead? Here is the code I want to run: import pyfirmata import time board = pyfirmata.Arduino('/dev/ttyACM0') while True: board.digital[13].write(1) time.sleep(1) board.digital[13].write(0) time.sleep(1) | According to this page, it's possible to connect your simulated Arduino to a real or virtual serial port (look for the "Connecting to Serial Port" section). You haven't specified what OS you're using; I'll show an example of doing that with Linux. Create a virtual serial port on your host We can use socat to create a virtual serial port like this: mkdir /tmp/vport socat -v \ pty,raw,echo=0,link=/tmp/vport/board \ pty,raw,echo=0,link=/tmp/vport/python This links a pair of PTY devices (which will have unpredictable names like /dev/pts/6 and /dev/pts/7) and creates convenience symlinks in /tmp/vport. The -v option means socat will print on the console any data being sent back and forth. As an alternative to socat, you could also use tty0tty to create a linked pair of PTY devices. If you're on Windows, I believe that com0com does something similar, but I don't have an environment in which to test that out. Create a serial port in SimulIDE In the component panel, open the "Perifericals" section... ...and place a "SerialPort" in your circuit. Connect the TX pin on the serial port to the RX pin on your Arduino, and the RX pin on the serial port to the TX pin on the Arduino. Double click on the serial port and configure it to connect to /tmp/vport/board: Then select the "Config" panel and make sure the port is set for 57600 bps: Close the configuration window and select the "Open" button on the serial port. It should end up looking something like: Load some firmware on your Arduino and run it For testing the serial port, I used the following code (in a sketch named SerialRead): void setup() { // We use 57600bps because this is the rate that Firmata uses by default Serial.begin(57600); } void loop() { int incomingByte = 0; if (Serial.available() > 0) { incomingByte = Serial.read(); Serial.print("Received:"); Serial.println(incomingByte); } } And then compiled it using the arduino-cli tool like this: cd SerialRead mkdir build arduino-cli compile -b arduino:avr:uno --build-path build . This creates (among other files) the file build/SerialRead.ino.hex, which is what we'll load into the simulator. In SimulIDE, right-click on the Arduino and select "Load firmware", then browse to that SerialRead.ino.hex file and load it: Connect a serial tool to the virtual port I'm using picocom to verify that things are connected properly (but you can use any other serial communication software, like minicom, cu, etc). On the host, run: picocom /tmp/vport/python In SimulIDE, power on the Arduino. In picocom, start typing something and you should see (if you type "ABCD"): Received:97 Received:98 Received:99 Received:100 This confirms that we have functional serial connectivity between the virtual serial port on our host and the software running on the simulated Arduino. Now let's try Firmata! So what about Firmata? Now that we've confirmed that we have the appropriate connectivity we can start working with pyfirmata. First, you'll need to compile the StandardFirmata sketch into a .hex file, as we did with the SerialRead sketch earlier, and then load the .hex file into your simulated Arduino. I wired up an LED between pin 7 and GND, and a switch between pin 8 and 3v3, and used the following code to test things out: import sys import pyfirmata import time board = pyfirmata.Arduino('/tmp/vport/python') print('connected') # start an iterator for reading pin states it = pyfirmata.util.Iterator(board) it.start() board.digital[7].mode = pyfirmata.OUTPUT board.digital[8].mode = pyfirmata.INPUT board.digital[8].enable_reporting() while True: if not board.digital[8].read(): print('on') board.digital[7].write(1) time.sleep(0.5) print('off') board.digital[7].write(0) time.sleep(0.5) Running this Python code produces the following behavior in SimulIDE: | 4 | 3 |
75,103,127 | 2023-1-12 | https://stackoverflow.com/questions/75103127/getting-notimplementederror-could-not-run-torchvisionnms-with-arguments-fr | The full error: NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]. I get this when attempting to train a YOLOv8 model on a Windows 11 machine, everything works for the first epoch then this occurs. I also get this error immediately after the first epoch ends but I don't think it is relevant. Error executing job with overrides: ['task=detect', 'mode=train', 'model=yolov8n.pt', 'data=custom.yaml', 'epochs=300', 'imgsz=160', 'workers=8', 'batch=4'] I was trying to train a YOLOv8 image detection model utilizing CUDA GPU. | This error occurs when you have Torch and Torchaudio for CUDA but not Torchvision for CUDA installed. Uninstall Torch and Torchvision, I used pip: pip uninstall torch torchvision Then go here to install the proper versions of both using the nice interface. I got the following command: pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116 | 14 | 33 |
75,112,136 | 2023-1-13 | https://stackoverflow.com/questions/75112136/python-unable-to-install-guesslang | I'm trying to install guesslang with pip but it seems that the last version (which was released on August 2021) depends on an obsolete version of Tensorflow (2.5.0). The problem is that I can't find this version anywhere. So, how can I install it? Or is there any other python library that does language detection? However here's the error I get when trying to install it, maybe I misunderstood... > pip install guesslang Collecting guesslang Using cached guesslang-2.2.1-py3-none-any.whl (2.5 MB) Using cached guesslang-2.2.0-py3-none-any.whl (2.5 MB) Using cached guesslang-2.0.3-py3-none-any.whl (2.1 MB) Using cached guesslang-2.0.1-py3-none-any.whl (2.1 MB) Using cached guesslang-2.0.0-py3-none-any.whl (13.0 MB) Using cached guesslang-0.9.3-py3-none-any.whl (3.2 MB) Collecting numpy Using cached numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB) Collecting guesslang Using cached guesslang-0.9.1-py3-none-any.whl (3.2 MB) ERROR: Cannot install guesslang==0.9.1, guesslang==0.9.3, guesslang==2.0.0, guesslang==2.0.1, guesslang==2.0.3, guesslang==2.2.0 and guesslang==2.2.1 because these package versions have conflicting dependencies. The conflict is caused by: guesslang 2.2.1 depends on tensorflow==2.5.0 guesslang 2.2.0 depends on tensorflow==2.5.0 guesslang 2.0.3 depends on tensorflow==2.5.0 guesslang 2.0.1 depends on tensorflow==2.2.0 guesslang 2.0.0 depends on tensorflow==2.2.0 guesslang 0.9.3 depends on tensorflow==1.7.0rc1 guesslang 0.9.1 depends on tensorflow==1.1.0 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts | tensorflow 2.5.0 released wheels for Python 3.6-3.9. Downgrade to Python 3.9 to install guesslang with tensorflow 2.5.0. | 4 | 3 |
75,111,518 | 2023-1-13 | https://stackoverflow.com/questions/75111518/analyze-if-the-value-of-a-column-is-less-than-another-and-this-another-is-less-t | Currently I do it this way: import pandas as pd dt = pd.DataFrame({ '1st':[1,0,1,0,1], '2nd':[2,1,2,1,2], '3rd':[3,0,3,2,3], '4th':[4,3,4,3,4], '5th':[5,0,5,4,5], 'minute_traded':[6,5,6,5,6] }) dt = dt[ (dt['1st'] < dt['2nd']) & (dt['2nd'] < dt['3rd']) & (dt['3rd'] < dt['4th']) & (dt['4th'] < dt['5th']) & (dt['5th'] < dt['minute_traded']) ] print(dt) Result: 1st 2nd 3rd 4th 5th minute_traded 0 1 2 3 4 5 6 2 1 2 3 4 5 6 3 0 1 2 3 4 5 4 1 2 3 4 5 6 Is there a more correct method for an analysis like this that always uses the same pattern and only changes the columns to be analyzed? | Using shift to perform the comparison and all to aggregate as single boolean for boolean indexing: out = dt[dt.shift(axis=1).lt(dt).iloc[:, 1:].all(axis=1)] Output: 1st 2nd 3rd 4th 5th minute_traded 0 1 2 3 4 5 6 2 1 2 3 4 5 6 3 0 1 2 3 4 5 4 1 2 3 4 5 6 | 3 | 2 |
75,111,569 | 2023-1-13 | https://stackoverflow.com/questions/75111569/streamlit-on-aws-serverless-options | My goal is to deploy a Streamlit application to an AWS Serverless architecture. Streamlit does not appear to function properly without a Docker container, so the architecture would need to support containers. From various tutorials, EC2 is the most popular deployment option for Streamlit, which I have no interest in pursuing due to the server management aspect. AWS Lambda would be my preferred deployment option if viable. I see that Lambda can support containers, but I'm curious what the pros & cons of Lambda vs Fargate is for containerized apps. My question is: Is Lambda or Fargate better for a serverless deployment of a Streamlit web app? | AWS Lambda: AWS Lambda can run containers, but those containers have to implement the Lambda runtime API. Lambda can't run any generic container. Lambda has a maximum run time (for processing a single request) of 15 minutes. Behind API gateway that maximum time is reduced to 60 seconds. Lambda isn't running 24/7. Your container would only be started up when a request comes in that the Lambda function needs to process. Given the nature of how Lambda works, something has to sit in front of Lambda to receive the web requests and route them to the AWS Lambda API. Most commonly this would be AWS API Gateway. So you would have to setup an AWS API Gateway deployment that understands how to route all of your apps API requests to your Lambda function(s). Alternatively you could put an Application Load Balancer in front of your Lambda function. Fargate (or more appropriately titled "ECS services with Fargate deployments"): Runs your container 24/7 like a more traditional web server. Can run pretty much any container. No time limit on the time to process a single request, although there is a maximum value of 4000 seconds (over 60 minutes) on the load balancer that you would typically use in this configuration. So in general, it's much easier to take a third-party app or docker image and get it running in ECS/Fargate than it is to get it running in Lambda. | 3 | 4 |
75,110,547 | 2023-1-13 | https://stackoverflow.com/questions/75110547/tensorflows-random-truncated-normal-returns-different-results-with-the-same-see | The following lines are supposed to get the same result: import tensorflow as tf print(tf.random.truncated_normal(shape=[2],seed=1234)) print(tf.random.truncated_normal(shape=[2],seed=1234)) But I got: tf.Tensor([-0.12297685 -0.76935077], shape=(2,), dtype=tf.float32) tf.Tensor([0.37034193 1.3367208 ], shape=(2,), dtype=tf.float32) Why? | This seems to be intentional, see the docs here. Specifically the "Examples" section. What you need is stateless_truncated_normal: print(tf.random.stateless_truncated_normal(shape=[2],seed=[1234, 1])) print(tf.random.stateless_truncated_normal(shape=[2],seed=[1234, 1])) Gives me tf.Tensor([1.0721238 0.10303579], shape=(2,), dtype=float32) tf.Tensor([1.0721238 0.10303579], shape=(2,), dtype=float32) Note: The seed needs to be two numbers here, I honestly don't know why (the docs don't say). | 3 | 2 |
75,103,449 | 2023-1-12 | https://stackoverflow.com/questions/75103449/dict-of-dataframes | Let's say I initialize a df and then I assign it to a dict 3 times, each one with a specific key. import pandas as pd df = pd.DataFrame({'A': [2, 2], 'B': [2, 2]}) dict_strat = {} for i in range(3): dict_strat['Df {0}'.format(i)] = df Alright, what I'm not understanding is that when I try to change the value of one element in the dictionary, it changes all the others. For example: dict_strat['Df 0'].iloc[0, :] = 9 It not only changes the first df in the dict, it changes all of them. Why? How can I get rid of that? | The DataFrames are all shallow copies, meaning that mutating one of them will mutate the others in the dictionary. To resolve this issue, make deep copies using .copy(). You also should be using f-strings rather than .format(): for i in range(3): dict_strat[f'Df {i}'] = df.copy() | 3 | 2 |
75,101,952 | 2023-1-12 | https://stackoverflow.com/questions/75101952/minimizing-a-function-using-pytorch-optimizer-return-values-are-all-the-same | I'm trying to minimize a function in order to better understand the optimizer process. As an example I used the Eggholder-Function (https://www.sfu.ca/~ssurjano/egg.html) which is 2d. My goal is to get the values of my parameters (x and y) after every optimizer iteration so that i can visualize it afterwards. Using Pytorch I wrote the following code: def eggholder_function(x): return -(x[1] + 47) * torch.sin(torch.sqrt(torch.abs(x[1] + x[0]/2 + 47))) - x[0]*torch.sin(torch.sqrt(torch.abs(x[0]-(x[1]+47)))) def minimize(function, initial_parameters): list_params = [] params = initial_parameters params.requires_grad_() optimizer = torch.optim.Adam([params], lr=0.1) for i in range(5): optimizer.zero_grad() loss = function(params) loss.backward() optimizer.step() list_params.append(params) return params, list_params starting_point = torch.tensor([-30.,-10.]) minimized_params, list_of_params = minimize(eggholder_function, starting_point) The output is as follows: minimized_params: tensor([-29.4984, -10.5021], requires_grad=True) and list of params: [tensor([-29.4984, -10.5021], requires_grad=True), tensor([-29.4984, -10.5021], requires_grad=True), tensor([-29.4984, -10.5021], requires_grad=True), tensor([-29.4984, -10.5021], requires_grad=True), tensor([-29.4984, -10.5021], requires_grad=True)] While I understand that the minimized_params is infact the optimized minimum, why does list_of_params show the same values for every iteration? Thank you and have a great day! | Because they all refer to the same object. You can check it by: id(list_of_params[0]), id(list_of_params[1]) You can clone the params to avoid that: import torch def eggholder_function(x): return -(x[1] + 47) * torch.sin(torch.sqrt(torch.abs(x[1] + x[0]/2 + 47))) - x[0]*torch.sin(torch.sqrt(torch.abs(x[0]-(x[1]+47)))) def minimize(function, initial_parameters): list_params = [] params = initial_parameters params.requires_grad_() optimizer = torch.optim.Adam([params], lr=0.1) for i in range(5): optimizer.zero_grad() loss = function(params) loss.backward() optimizer.step() list_params.append(params.detach().clone()) #here return params, list_params starting_point = torch.tensor([-30.,-10.]) minimized_params, list_of_params = minimize(eggholder_function, starting_point) #list_params [tensor([-29.9000, -10.1000]), tensor([-29.7999, -10.2001]), tensor([-29.6996, -10.3005]), tensor([-29.5992, -10.4011]), tensor([-29.4984, -10.5021])] | 3 | 3 |
75,085,009 | 2023-1-11 | https://stackoverflow.com/questions/75085009/how-to-type-hint-a-function-added-to-class-by-class-decorator-in-python | I have a class decorator, which adds a few functions and fields to decorated class. @mydecorator @dataclass class A: a: str = "" Added (via setattr()) is a .save() function and a set of info for dataclass fields as a separate dict. I'd like VScode and mypy to properly recognize that, so that when I use: a=A() a.save() or a.my_fields_dict those 2 are properly recognized. Is there any way to do that? Maybe modify class A type annotations at runtime? | TL;DR What you are trying to do is not possible with the current type system. 1. Intersection types If the attributes and methods you are adding to the class via your decorator are static (in the sense that they are not just known at runtime), then what you are describing is effectively the extension of any given class T by mixing in a protocol P. That protocol defines the method save and so on. To annotate this you would need an intersection of T & P. It would look something like this: from typing import Protocol, TypeVar T = TypeVar("T") class P(Protocol): @staticmethod def bar() -> str: ... def dec(cls: type[T]) -> type[Intersection[T, P]]: setattr(cls, "bar", lambda: "x") return cls # type: ignore[return-value] @dec class A: @staticmethod def foo() -> int: return 1 You might notice that the import of Intersection is conspicuously missing. That is because despite being one of the most requested features for the Python type system, it is still missing as of today. There is currently no way to express this concept in Python typing. 2. Class decorator problems The only workaround right now is a custom implementation alongside a corresponding plugin for the type checker(s) of your choice. I just stumbled across the typing-protocol-intersection package, which does just that for mypy. If you install that and add plugins = typing_protocol_intersection.mypy_plugin to your mypy configuration, you could write your code like this: from typing import Protocol, TypeVar from typing_protocol_intersection import ProtocolIntersection T = TypeVar("T") class P(Protocol): @staticmethod def bar() -> str: ... def dec(cls: type[T]) -> type[ProtocolIntersection[T, P]]: setattr(cls, "bar", lambda: "x") return cls # type: ignore[return-value] @dec class A: @staticmethod def foo() -> int: return 1 But here we run into the next problem. Testing this with reveal_type(A.bar()) via mypy will yield the following: error: "Type[A]" has no attribute "bar" [attr-defined] note: Revealed type is "Any" Yet if we do this instead: class A: @staticmethod def foo() -> int: return 1 B = dec(A) reveal_type(B.bar()) we get no complaints from mypy and note: Revealed type is "builtins.str". Even though what we did before was equivalent! This is not a bug of the plugin, but of the mypy internals. It is another long-standing issue, that mypy does not handle class decorators correctly. A person in that issue thread even mentioned your use case in conjunction with the desired intersection type. DIY In other words, you'll just have to wait until those two holes are patched. Or you can hope that at least the decorator issue by mypy is fixed soon-ish and write your own VSCode plugin for intersection types in the meantime. Maybe you can get together with the person behind that mypy plugin I mentioned above. | 13 | 19 |
75,100,393 | 2023-1-12 | https://stackoverflow.com/questions/75100393/how-to-find-every-combination-of-numbers-that-sum-to-a-specific-x-given-a-large | I have a large array of integers (~3k items), and I want to find the indices of every combination of numbers where the sum of said numbers is equal to X. How to do this without the program taking years to execute? I can find the first combination possible with the following Python code: def find_numbers_that_sum_to(source_arr, target_number, return_index=True): result = [] result_indices = [] for idx, item in enumerate(source_arr): sum_list = sum(result) assert (sum_list + item) <= target_number result.append(item) result_indices.append(idx) if (sum_list + item) == target_number: break return result_indices But I need every combination possible. At least a way of generating them dynamically. I only ever need one of them at a time, but if the indices it gave me don't match another criteria I need, I'll need the next set. | Unfortunately, this problem is NP-hard, meaning that there's no currently known polynomial time algorithm for this. If you want to benchmark your current code against other implementations, this SO question contains a bunch. Those implementations may be faster, but their runtimes will still be exponential. | 3 | 4 |
75,080,993 | 2023-1-11 | https://stackoverflow.com/questions/75080993/dbuserrorresponse-while-running-poetry-install | I tried to upgrade my poetry from the 1.1.x version to 1.3, and as the official manual (https://python-poetry.org/docs/) recommends I removed the old version manually. Unfortunately I probably deleted the wrong files, because after installing the 1.3 version I was still receiving errors that suggested something was in conflict with the old poetry. I tried to find all files in my account (it's a remote machine so I did not want to affect others) connected somehow with poetry (with find /home/username -name *poetry*) and (after uninstalling poetry 1.3) removed them. Then I installed poetry 1.3 back but it still did not work. I also tried to delete my whole repo and clone it again, but the same problems remain. I guess I have already messed it up, but I hope that there is some way to do a hard reset. Is there any way to recover from this? Here is the beginning of my error message: Package operations: 28 installs, 0 updates, 0 removals • Installing certifi (2021.10.8) • Installing charset-normalizer (2.0.12) • Installing idna (3.3) • Installing six (1.16.0) • Installing typing-extensions (4.2.0) • Installing urllib3 (1.26.9) DBusErrorResponse [org.freedesktop.DBus.Error.UnknownMethod] ('No such interface “org.freedesktop.DBus.Properties” on object at path /org/freedesktop/secrets/collection/login',) at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:48 in send_and_get_reply 44│ def send_and_get_reply(self, msg: Message) -> Any: 45│ try: 46│ resp_msg: Message = self._connection.send_and_get_reply(msg) 47│ if resp_msg.header.message_type == MessageType.error: → 48│ raise DBusErrorResponse(resp_msg) 49│ return resp_msg.body 50│ except DBusErrorResponse as resp: 51│ if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT): 52│ raise ItemNotFoundException('Item does not exist!') from resp The following error occurred when trying to handle this error: ItemNotFoundException Item does not exist! at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:52 in send_and_get_reply 48│ raise DBusErrorResponse(resp_msg) 49│ return resp_msg.body 50│ except DBusErrorResponse as resp: 51│ if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT): → 52│ raise ItemNotFoundException('Item does not exist!') from resp 53│ elif resp.name in (DBUS_SERVICE_UNKNOWN, DBUS_EXEC_FAILED, 54│ DBUS_NO_REPLY): 55│ data = resp.data 56│ if isinstance(data, tuple): The following error occurred when trying to handle this error: PromptDismissedException Prompt dismissed. at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/collection.py:159 in create_collection 155│ if len(collection_path) > 1: 156│ return Collection(connection, collection_path, session=session) 157│ dismissed, result = exec_prompt(connection, prompt) 158│ if dismissed: → 159│ raise PromptDismissedException('Prompt dismissed.') 160│ signature, collection_path = result 161│ assert signature == 'o' 162│ return Collection(connection, collection_path, session=session) 163│ The following error occurred when trying to handle this error: InitError Failed to create the collection: Prompt dismissed.. at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/keyring/backends/SecretService.py:63 in get_preferred_collection 59│ collection = secretstorage.Collection(bus, self.preferred_collection) 60│ else: 61│ collection = secretstorage.get_default_collection(bus) 62│ except exceptions.SecretStorageException as e: → 63│ raise InitError("Failed to create the collection: %s." % e) 64│ if collection.is_locked(): 65│ collection.unlock() 66│ if collection.is_locked(): # User dismissed the prompt 67│ raise KeyringLocked("Failed to unlock the collection!") DBusErrorResponse [org.freedesktop.DBus.Error.UnknownMethod] ('No such interface “org.freedesktop.DBus.Properties” on object at path /org/freedesktop/secrets/collection/login',) at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:48 in send_and_get_reply 44│ def send_and_get_reply(self, msg: Message) -> Any: 45│ try: 46│ resp_msg: Message = self._connection.send_and_get_reply(msg) 47│ if resp_msg.header.message_type == MessageType.error: → 48│ raise DBusErrorResponse(resp_msg) 49│ return resp_msg.body 50│ except DBusErrorResponse as resp: 51│ if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT): 52│ raise ItemNotFoundException('Item does not exist!') from resp The following error occurred when trying to handle this error: ItemNotFoundException Item does not exist! at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/util.py:52 in send_and_get_reply 48│ raise DBusErrorResponse(resp_msg) 49│ return resp_msg.body 50│ except DBusErrorResponse as resp: 51│ if resp.name in (DBUS_UNKNOWN_METHOD, DBUS_NO_SUCH_OBJECT): → 52│ raise ItemNotFoundException('Item does not exist!') from resp 53│ elif resp.name in (DBUS_SERVICE_UNKNOWN, DBUS_EXEC_FAILED, 54│ DBUS_NO_REPLY): 55│ data = resp.data 56│ if isinstance(data, tuple): The following error occurred when trying to handle this error: PromptDismissedException Prompt dismissed. at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/secretstorage/collection.py:159 in create_collection 155│ if len(collection_path) > 1: 156│ return Collection(connection, collection_path, session=session) 157│ dismissed, result = exec_prompt(connection, prompt) 158│ if dismissed: → 159│ raise PromptDismissedException('Prompt dismissed.') 160│ signature, collection_path = result 161│ assert signature == 'o' 162│ return Collection(connection, collection_path, session=session) 163│ | Finally I was able to find the answer here. There are several ways to do that: Running export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring in the shell will work for the following poetry commands until you close (exit) your shell session. Add the environment variable to each! poetry command, for example PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring poetry install If you want to preserve (store) this environment variable between shell sessions or system reboots you can add it to .bashrc and .profile, for example for the bash shell: echo 'export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring' >> ~/.bashrc echo 'export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring' >> ~/.profile exec "$SHELL" For case number 3) you can now run any poetry command as usual, even after a system restart | 11 | 25 |
75,077,369 | 2023-1-11 | https://stackoverflow.com/questions/75077369/how-to-import-module-using-path-related-to-working-directory-in-a-python-project | I'm using poetry to manage my python project, here's the project: my_project/ ├── pyproject.toml ├── module.py └── scripts/ └── main.py And I want to know how to import a function from module.py into my_scripts/main.py correctly. My pyproject.toml: [tool.poetry] name = "my_project" version = "0.1.0" description = "" authors = [] [tool.poetry.dependencies] python = "^3.11" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" I have tried this: # In my_scripts/main.py from module import my_function And run these commands: poetry install poetry shell python my_scripts/main.py then got this error: ModuleNotFoundError: No module named 'module' I have also put an __init__.py under my_project/ but it didn't work out. | What made it work for me is to create a folder in your root with the same name as the package in the pyproject.toml file. If you don't do this, poetry will not find your project and will not install it in editable mode. You also need to let Python know that a folder is a package or sub-package by placing an __init__.py file in it. my_project/ ├─ pyproject.toml └─ my_project/ ├─ __init__.py ├─ module.py └─ scripts/ ├─ __init__.py └─ main.py When you initialize the folder you should see that the project is installed. ...> poetry install Installing dependencies from lock file Installing the current project: my_project (0.1.0) Last thing, you need to change the import in main.py. # main.py from my_project.module import my_function print("my_function imported") You can now activate the virtual environment (if it's not already active) and run your script. ...> poetry shell Spawning shell within: C:\...\my_project\.venv ... ...> python my_project\scripts\main.py my_function imported PS As @sinoroc pointed out in the comments, it is possible (and preferable according to him) to run the module with python -m my_project.scripts.main. I am new to the topic, but according to this SO answer: __package__ is set to the immediate parent package in <modulename> [when running python via command line with the -m option] This means that relative imports like the following magically work. # main.py from ..module import my_function # ..module tells python to look in the parent directory for module.py print("my_function imported") ...> python -m my_project.scripts.main my_function imported ...> python my_project\scripts\main.py Traceback (most recent call last): ... ImportError: attempted relative import with no known parent package | 4 | 3 |