Dataset schema (column: type, value/length range):
- question_id: int64 (59.5M to 79.4M)
- creation_date: string (8 to 10 characters)
- link: string (60 to 163 characters)
- question: string (53 to 28.9k characters)
- accepted_answer: string (26 to 29.3k characters)
- question_vote: int64 (1 to 410)
- answer_vote: int64 (-9 to 482)
77,128,073
2023-9-18
https://stackoverflow.com/questions/77128073/python-numpy-invert-boolean-mask-operation
When I have an array a and a boolean mask b, I can find the 'masked' vector c.

    a = np.array([1, 2, 4, 7, 9])
    b = np.array([True, False, True, True, False])
    c = a[b]

Now suppose it's the other way around. I have c and b and would like to arrive at d (below). What is the easiest way to do this?

    c = np.array([1, 4, 7])
    b = np.array([True, False, True, True, False])
    d = np.array([1, 0, 4, 7, 0])
You could use:

    d = np.zeros_like(b, dtype=c.dtype)
    d[b] = c

Output:

    array([1, 0, 4, 7, 0])
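As a quick sanity check, a minimal sketch using the arrays from the question: the masked positions of d recover c and the remaining positions stay at the fill value.

    import numpy as np

    c = np.array([1, 4, 7])
    b = np.array([True, False, True, True, False])

    d = np.zeros_like(b, dtype=c.dtype)  # fill value 0 for the unmasked slots
    d[b] = c                             # scatter c back into the masked positions

    assert np.array_equal(d, np.array([1, 0, 4, 7, 0]))
    assert np.array_equal(d[b], c)       # round trip: masking d gives c back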
3
7
77,098,444
2023-9-13
https://stackoverflow.com/questions/77098444/disable-font-colour-formatting-for-negetive-values-in-python-polars-generated-ex
I would like to disable the automatic font colouring of negative values in polars write_excel. Any tip?

    import numpy as np
    import polars as pl
    import xlsxwriter

    df2 = pl.DataFrame(data=np.random.randint(-10, 10, 5*3).reshape(-1, 3),
                       schema=['x', 'y', 'z'])

    with xlsxwriter.Workbook(r'_out_.xlsx') as workbook:
        df2.write_excel(workbook=workbook,
                        worksheet='sheet1',
                        autofit=True,
                        table_style=None,
                        column_formats=None,
                        conditional_formats=False,
                        float_precision=0,
                        sparklines=None,
                        formulas=None)

This is what it generates -
column_formats={cs.numeric(): 'General'} or column_formats={x: '[Black]' for x in df2.columns} will do the trick.

    import numpy as np
    import polars as pl
    import polars.selectors as cs
    import xlsxwriter

    df2 = pl.DataFrame(data=np.random.randint(-10, 10, 5*3).reshape(-1, 3),
                       schema=['x', 'y', 'z'])

    with xlsxwriter.Workbook(r'C:\SUVOPAM_local\Try\_out_.xlsx') as workbook:
        df2.write_excel(workbook=workbook,
                        worksheet='sheet1',
                        autofit=True,
                        table_style=None,
                        dtype_formats=None,
                        column_formats={cs.numeric(): 'General'},
                        # column_formats={x: '[Black]' for x in df2.columns},
                        conditional_formats=False,
                        float_precision=0,
                        sparklines=None,
                        formulas=None)

Generates -
2
1
77,123,568
2023-9-17
https://stackoverflow.com/questions/77123568/how-to-plot-grouped-bars-overlaid-with-lines
I am trying to use matplotlib to recreate the chart below, which was created in Excel from the following table.

    Category           %_total_dist_1  event_rate_%_1  %_total_dist_2  event_rate_%_2
    00 (-inf, 0.25)    5.7             36.5            5.8             10
    01 [0.25, 4.75)    7               11.2            7               11
    02 [4.75, 6.75)    10.5            5               10.5            4.8
    03 [6.75, 8.25)    13.8            3.9             13.7            4
    04 [8.25, 9.25)    9.1             3.4             9.2             3.1
    05 [9.25, 10.75)   14.1            2.5             14.2            2.4
    06 [10.75, 11.75)  13.7            1.6             13.7            1.8
    07 [11.75, 13.75)  16.8            1.3             16.7            1.3
    08 [13.75, inf)    9.4             1               9.1             1.3

The problems I am facing: the bars in matplotlib are overlapping; I want to rotate the x-axis labels by 45 degrees so that they don't overlap, but I don't know how to do that; and I want markers on the lines. Here is the code I used:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Create a Pandas DataFrame with your data
    data = {
        "Category": ["00 (-inf, 0.25)", "01 [0.25, 4.75)", "02 [4.75, 6.75)", "03 [6.75, 8.25)",
                     "04 [8.25, 9.25)", "05 [9.25, 10.75)", "06 [10.75, 11.75)", "07 [11.75, 13.75)",
                     "08 [13.75, inf)"],
        "%_total_dist_1": [5.7, 7, 10.5, 13.8, 9.1, 14.1, 13.7, 16.8, 9.4],
        "event_rate_%_1": [36.5, 11.2, 5, 3.9, 3.4, 2.5, 1.6, 1.3, 1],
        "%_total_dist_2": [5.8, 7, 10.5, 13.7, 9.2, 14.2, 13.7, 16.7, 9.1],
        "event_rate_%_2": [10, 11, 4.8, 4, 3.1, 2.4, 1.8, 1.3, 1.3]
    }
    df = pd.DataFrame(data)

    # Create a figure and primary y-axis
    fig, ax1 = plt.subplots(figsize=(10, 6))

    # Plot percentage distribution on the primary y-axis
    ax1.bar(df['Category'], df['%_total_dist_1'], alpha=0.7, label="%_total_dist_1", color='b')
    ax1.bar(df['Category'], df['%_total_dist_2'], alpha=0.7, label="%_total_dist_2", color='g')
    ax1.set_ylabel('% Distribution', color='b')
    ax1.tick_params(axis='y', labelcolor='b')

    # Create a secondary y-axis
    ax2 = ax1.twinx()

    # Plot event rate on the secondary y-axis
    ax2.plot(df['Category'], df['event_rate_%_1'], marker='o', label='event_rate_%_1', color='r')
    ax2.plot(df['Category'], df['event_rate_%_2'], marker='o', label='event_rate_%_2', color='orange')
    ax2.set_ylabel('Event Rate (%)', color='r')
    ax2.tick_params(axis='y', labelcolor='r')

    # Adding legend
    fig.tight_layout()
    plt.title('Percentage Distribution and Event Rate')
    fig.legend(loc="upper left", bbox_to_anchor=(0.15, 0.85))

    # Rotate x-axis labels for better readability
    plt.xticks(rotation=45, ha="right")

    # Show the plot
    plt.show()
Solution

To fix the overlapping bars you can assign offsets for each bar which are equal to half the width of the bar. This centers them without overlapping.

To rotate the x-axis labels, you should call plt.xticks(...) before creating ax2. This is because the x-labels come from the first axis.

Finally, to create the gridlines on the y-axis you should include ax1.grid(which='major', axis='y', linestyle='--', zorder=1). Make sure to set the zorder parameter to 1 in this line and 2 when creating the bars and lines. This ensures that the gridlines are in the background and don't show up on top of the bars.

Code

    import pandas as pd
    import matplotlib.pyplot as plt
    import numpy as np

    # Create a Pandas DataFrame with your data
    data = {
        "Category": ["00 (-inf, 0.25)", "01 [0.25, 4.75)", "02 [4.75, 6.75)", "03 [6.75, 8.25)",
                     "04 [8.25, 9.25)", "05 [9.25, 10.75)", "06 [10.75, 11.75)", "07 [11.75, 13.75)",
                     "08 [13.75, inf)"],
        "%_total_dist_1": [5.7, 7, 10.5, 13.8, 9.1, 14.1, 13.7, 16.8, 9.4],
        "event_rate_%_1": [36.5, 11.2, 5, 3.9, 3.4, 2.5, 1.6, 1.3, 1],
        "%_total_dist_2": [5.8, 7, 10.5, 13.7, 9.2, 14.2, 13.7, 16.7, 9.1],
        "event_rate_%_2": [10, 11, 4.8, 4, 3.1, 2.4, 1.8, 1.3, 1.3]
    }
    df = pd.DataFrame(data)

    # Create a figure and primary y-axis
    fig, ax1 = plt.subplots(figsize=(10, 6))

    x = np.arange(len(df['Category']))

    # THIS LINE MAKES THE HORIZONTAL GRID LINES ON THE PLOT
    ax1.grid(which='major', axis='y', linestyle='--', zorder=1)

    # THIS PLOTS THE BARS NEXT TO EACH OTHER INSTEAD OF OVERLAPPING
    ax1.bar(x+0.1, df['%_total_dist_1'], width=0.2, alpha=1.0, label="%_total_dist_1", color='b', zorder=2)
    ax1.bar(x-0.1, df['%_total_dist_2'], width=0.2, alpha=1.0, label="%_total_dist_2", color='g', zorder=2)
    ax1.set_ylabel('% Distribution', color='b')
    ax1.tick_params(axis='y', labelcolor='b')

    # THIS LINE ROTATES THE X-AXIS LABELS
    plt.xticks(rotation=45, ha="right")

    # Create a secondary y-axis
    ax2 = ax1.twinx()

    # Plot event rate on the secondary y-axis
    ax2.plot(df['Category'], df['event_rate_%_1'], marker='o', label='event_rate_%_1', color='r', zorder=2)
    ax2.plot(df['Category'], df['event_rate_%_2'], marker='o', label='event_rate_%_2', color='orange', zorder=2)
    ax2.set_ylabel('Event Rate (%)', color='r')
    ax2.tick_params(axis='y', labelcolor='r')

    # Adding legend
    fig.tight_layout()
    plt.title('Percentage Distribution and Event Rate')
    fig.legend(loc="upper left", bbox_to_anchor=(0.15, 0.85))

    # Show the plot
    plt.show()
3
1
77,122,159
2023-9-17
https://stackoverflow.com/questions/77122159/about-gravity-option-for-marks-in-tkinter
I am using the Text widget in python / tkinter. I want to use the left and right option for marks so that text inserted immediately before or immediately after already tagged text is in the same tag range. In the code below, I don’t understand why inserted text does not appear in red.

    import tkinter as tk

    main = tk.Tk()
    tt = tk.Text(main, width=50, height=5, bg='lightgray')
    tt.pack()

    # some text
    tt.insert('1.0', 'STRANGE!')

    # inserting marks in text
    tt.mark_set('redLeft', '1.2')
    tt.mark_gravity('redLeft', 'left')
    tt.mark_set('redRight', '1.6')
    tt.mark_gravity('redRight', 'right')

    # a tag for red text
    tt.tag_add('red', 'redLeft', 'redRight')
    tt.tag_configure('red', foreground='red')

    # insertions after, in and before red text
    tt.insert('1.6', 'should be red ?')
    tt.insert('1.4', 'is red')
    tt.insert('1.2', 'should be red ?')

    main.mainloop()

Thank you for any help.
"I don’t understand why inserted text does not appear in red."

This is how the text widget is designed to work. The gravity of a mark does not affect the tags that are applied when inserting text. The gravity only defines what happens to the mark when text is inserted at the mark. From the canonical documentation on marks:

    "The gravity for a mark specifies what happens to the mark when text is inserted at the point of the mark. If a mark has left gravity, then the mark is treated as if it were attached to the character on its left, so the mark will remain to the left of any text inserted at the mark position. If the mark has right gravity, new text inserted at the mark position will appear to the left of the mark (so that the mark remains rightmost)."

When you insert text, the text will only inherit tags if the tag is on the character both to the left and the right of the insertion point. From the canonical documentation on the insert method:

    "If there is a single chars argument and no tagList, then the new text will receive any tags that are present on both the character before and the character after the insertion point; if a tag is present on only one of these characters then it will not be applied to the new text."
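A minimal sketch of a practical workaround, assuming the widget, marks and 'red' tag from the question: pass the tag explicitly as the third argument of insert, or re-apply the tag over the mark range after inserting, so the new text is tagged regardless of the inheritance rule described above.

    # 1) Pass the tag explicitly as the third argument of insert: the new text is
    #    tagged no matter what is on either side of the insertion point.
    tt.insert('redRight', 'definitely red', 'red')

    # 2) Or insert plain text and then re-apply the tag over the marked range,
    #    which picks up anything that landed between the two marks.
    tt.insert('redLeft', 'red after re-tagging')
    tt.tag_add('red', 'redLeft', 'redRight')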
3
2
77,104,702
2023-9-14
https://stackoverflow.com/questions/77104702/tls-communication-between-python-3-11-and-micropython-1-20-fails-with-ssl-no
I'm trying to send text between these two devices. One runs Python 3.11 (server) and the other one MicroPython 1.20 (client). Both devices have their own key and the server has a server cert. Both keys were created with:

    openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout server-key.pem -out server-cert.pem

I also tried, as mentioned in the MicroPython documentation, to convert the server's cert into the DER format:

    openssl.exe x509 -in server-cert.pem -out server-cert.der -outform DER

My client code:

    # main.py -- put your code here!
    import usocket as socket
    import ussl as ssl

    # Server IP address and port
    server_ip = '192.168.178.67'
    server_port = 2002
    server_address = (server_ip, server_port)
    server_ca = "/flash/cert/server-cert-test.der"
    server_key = "/flash/cert/server-key.pem"

    # Connect to the server
    client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client_socket.connect(server_address)

    # Set up the SSL connection
    ssl_context = ssl.wrap_socket(client_socket, cert=(server_ca, server_key))

    # Send data
    ssl_context.write("Hallo, Server!".encode('utf-8'))

    # Receive and decode the response
    response = ssl_context.read(1024)
    print("Antwort vom Server:", response.decode('utf-8'))

    # Close the connection
    ssl_context.close()
    client_socket.close()

My server code:

    from socket import socket
    from ssl import wrap_socket

    def main():
        server_certfile = "server-cert.pem"
        server_keyfile = "server-key.pem"

        s = socket()
        s.bind(('192.168.178.67', 2002))
        s.listen(5)
        wrap_socket(s.accept()[0], 'server-key.pem', 'server-cert.pem', True)

    if __name__ == "__main__":
        main()

As I understand the Wireshark analysis, MicroPython says "hello" and offers TLS 1.2. The server thinks that's OK, but then they can't agree on which cipher to use.

Python:

    ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:992)

MicroPython:

    OSError: (-30592, 'MBEDTLS_ERR_SSL_FATAL_ALERT_MESSAGE')

    6768  11:49:56,791402  192.168.178.126  192.168.178.67   TCP      60   55778 → 2002 [SYN] Seq=0 Win=6400 Len=0 MSS=800
    6769  11:49:56,791478  192.168.178.67   192.168.178.126  TCP      58   2002 → 55778 [SYN, ACK] Seq=0 Ack=1 Win=64800 Len=0 MSS=1460
    6770  11:49:56,791810  192.168.178.126  192.168.178.67   TCP      60   55778 → 2002 [ACK] Seq=1 Ack=1 Win=6400 Len=0
    6771  11:49:57,118967  192.168.178.126  192.168.178.67   TLSv1.2  176  Client Hello
    6772  11:49:57,119175  192.168.178.67   192.168.178.126  TLSv1.2  61   Alert (Level: Fatal, Description: Handshake Failure)
    6773  11:49:57,119284  192.168.178.67   192.168.178.126  TCP      54   2002 → 55778 [FIN, ACK] Seq=8 Ack=123 Win=64678 Len=0
    6774  11:49:57,119367  192.168.178.126  192.168.178.67   TCP      60   55778 → 2002 [ACK] Seq=123 Ack=9 Win=6392 Len=0

What a bummer! I switched certs and keys, and used different implementations and ways of calling wrap_socket. Wireshark was always with me... Do I have mistakes in my implementation? Or isn't there a way yet for Python and MicroPython to communicate securely? Does anybody have a working implementation? Thanks in advance!

Edit: The TLS handshake CLIENT_HELLO shows these supported ciphers:
It's because the server doesn't accept the weak algorithms that the client suggested. You can check the algorithms that the server supports using the SSLContext API like the following, instead of using the deprecated ssl.wrap_socket().

    ...
    from ssl import SSLContext
    ...
    def main():
        ssl_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        for cipher in ssl_ctx.get_ciphers():
            print(cipher['name'])
        ssl_ctx.load_cert_chain(certfile=server_certfile, keyfile=server_keyfile)
        ...
        accepted_sock, _ = s.accept()
        ssl_sock = ssl_ctx.wrap_socket(accepted_sock, server_side=True)
        ...

In your case, the MicroPython client suggested TLS_ECDHE_ECDSA_* algorithms and TLS_RSA_* algorithms. But the server cannot choose TLS_ECDHE_ECDSA_* algorithms because you configured an RSA key and certificate. And probably the server also didn't choose TLS_RSA_* algorithms because they are weak and deprecated. (They do not use an ephemeral key.)

There are two possible solutions. The first one is to use an ECDSA key and certificate. The second one is to fix MicroPython so it suggests TLS_ECDHE_RSA_* or TLS_DHE_RSA_* algorithms. I strongly recommend not forcing the server to use TLS_RSA_* algorithms as in the comment on the question.

As a side note, you need to do the following in your client. (Use the cadata argument and pass the data instead of the file path.)

    import ussl as ssl
    ...
    with open(server_ca, 'rb') as f:
        server_ca_bytes = f.read()
    ssl_sock = ssl.wrap_socket(client_socket, cadata=server_ca_bytes)
    ...
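Putting the answer's fragments together, a minimal sketch of the server side (assuming the certfile/keyfile names and the bind address from the question); the cipher listing is only there for inspection:

    import socket
    import ssl

    server_certfile = "server-cert.pem"
    server_keyfile = "server-key.pem"

    def main():
        # Build a server-side TLS context instead of the deprecated ssl.wrap_socket()
        ssl_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

        # Print the cipher suites this server is willing to negotiate
        for cipher in ssl_ctx.get_ciphers():
            print(cipher['name'])

        ssl_ctx.load_cert_chain(certfile=server_certfile, keyfile=server_keyfile)

        with socket.socket() as s:
            s.bind(('192.168.178.67', 2002))
            s.listen(5)
            accepted_sock, _ = s.accept()
            with ssl_ctx.wrap_socket(accepted_sock, server_side=True) as ssl_sock:
                data = ssl_sock.recv(1024)
                print("Received:", data.decode('utf-8'))
                ssl_sock.sendall(b"Hello back")

    if __name__ == "__main__":
        main()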
3
1
77,118,636
2023-9-16
https://stackoverflow.com/questions/77118636/attributeerror-nonetype-object-has-no-attribute-to-capabilities-getting-th
    import unittest
    from appium import webdriver
    from appium.webdriver.common.appiumby import AppiumBy

    capabilities = dict(
        platformName='Android',
        automationName='uiautomator2',
        deviceName='Samsung S9',
        appPackage='com.android.settings',
        appActivity='.Settings',
        language='en',
        locale='US'
    )

    appium_server_url = 'http://localhost:4723'

    class TestAppium(unittest.TestCase):
        def setUp(self) -> None:
            self.driver = webdriver.Remote(appium_server_url, capabilities)

        def tearDown(self) -> None:
            if self.driver:
                self.driver.quit()

        def test_find_battery(self) -> None:
            el = self.driver.find_element(by=AppiumBy.XPATH, value='//*[@text="Battery"]')
            el.click()

    if __name__ == '__main__':
        unittest.main()

The above is the example code from the official Appium website (http://appium.io/docs/en/2.1/quickstart/test-py/). I have installed all the prerequisites required, but I'm still getting the below error when I run the python file:

    C:\Users\syeda\Desktop>python test.py
    E
    ======================================================================
    ERROR: test_find_battery (__main__.TestAppium.test_find_battery)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\Users\syeda\Desktop\test.py", line 19, in setUp
        self.driver = webdriver.Remote(appium_server_url, capabilities)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "C:\Users\syeda\AppData\Local\Programs\Python\Python311\Lib\site-packages\appium\webdriver\webdriver.py", line 229, in __init__
        super().__init__(
      File "C:\Users\syeda\AppData\Local\Programs\Python\Python311\Lib\site-packages\selenium\webdriver\remote\webdriver.py", line 185, in __init__
        capabilities = options.to_capabilities()
                       ^^^^^^^^^^^^^^^^^^^^^^^
    AttributeError: 'NoneType' object has no attribute 'to_capabilities'

    ----------------------------------------------------------------------
    Ran 1 test in 0.001s

    FAILED (errors=1)

I made sure the Appium server is running. I'm not sure why this error is occurring. I tried searching on the web but no luck.
    import unittest
    from appium import webdriver
    from appium.webdriver.common.appiumby import AppiumBy
    # Import Appium UiAutomator2 driver for Android platforms (AppiumOptions)
    from appium.options.android import UiAutomator2Options

    capabilities = dict(
        platformName='Android',
        automationName='uiautomator2',
        deviceName='Samsung S9',
        appPackage='com.android.settings',
        appActivity='.Settings',
        language='en',
        locale='US'
    )

    appium_server_url = 'http://localhost:4723'

    # Converts capabilities to AppiumOptions instance
    capabilities_options = UiAutomator2Options().load_capabilities(capabilities)

    class TestAppium(unittest.TestCase):
        def setUp(self) -> None:
            self.driver = webdriver.Remote(command_executor=appium_server_url, options=capabilities_options)

        def tearDown(self) -> None:
            if self.driver:
                self.driver.quit()

        def test_find_battery(self) -> None:
            el = self.driver.find_element(by=AppiumBy.XPATH, value='//*[@text="Battery"]')
            el.click()

    if __name__ == '__main__':
        unittest.main()
8
6
77,102,707
2023-9-14
https://stackoverflow.com/questions/77102707/type-hinting-of-dependency-injection
I'm creating a declarative http client and have a problem with mypy linting. Error:

    Incompatible default for argument "user" (default has type "Json", argument has type "dict[Any, Any]")

I have a "Dependency" class that implements the logic of:

- value validation against type,
- request modification:

    class Dependency(abc.ABC):
        def __init__(
            self,
            default: Any = Empty,
            field_name: Union[str, None] = None,
        ):
            self.default = default
            self._overridden_field_name = field_name

        ...

        @abc.abstractmethod
        def modify_request(self, request: RawRequest) -> RawRequest:
            raise NotImplementedError

The dependencies are inherited from Dependency, e.g. Json:

    class Json(Dependency):
        location = Location.json

        def __init__(self, default: Any = Empty):
            """Field name is unused for Json."""
            super().__init__(default=default)

        def modify_request(self, request: "RawRequest") -> "RawRequest":
            ...
            return request

Then I use them as a function argument's default to declare:

    @http("GET", "/example")
    def test_get(data: dict = Json()):
        ...

It works as expected, but mypy is raising a lot of errors. The question - how to deal with type hinting? I need it to work like Query() or Body() in FastAPI, without changing the way of declaration. I tried to make the Dependency class generic, but it didn't help me.

UPD: Sorry, forgot to mention that the type hint can be a dataclass, or pydantic model, or any other type. Then it will be deserialized in function execution.

Dict as type annotation:

    @http("GET", "/example")
    def test_get(data: dict = Json()):
        ...

Pydantic model as type annotation:

    class PydanticModel(BaseModel):
        …

    @http("GET", "/example")
    def test_get(data: PydanticModel = Json()):
        ...

Dataclass as type annotation:

    @dataclasses.dataclass
    class DataclassModel():
        …

    @http("GET", "/example")
    def test_get(data: DataclassModel = Json()):
        ...

It should support any type provided in the type hint.
Solved using typing.Annotated; the declaration has been changed a bit, but it works correctly and mypy has no errors to show.

    @http("GET", "/example")
    def test_get(data: Annotated[DataclassModel, Json()]):
        ...

Then I'm using the function signature to extract the dependency and type hint and do magic...

    def extract_dependencies(func: Callable):
        signature = inspect.signature(func)
        for key, val in signature.parameters.items():
            if key in ["self", "cls"]:
                # We don't need the self or cls parameter.
                continue

            # We check if the parameter is annotated.
            annotation = func.__annotations__.get(key, None)
            if hasattr(annotation, "__metadata__"):
                # Extracting the type hint and the dependency from the
                # Annotated type.
                type_hint, dependency = get_args(annotation)
                if not isinstance(dependency, Dependency):
                    if inspect.isclass(dependency) and issubclass(
                        dependency, Dependency
                    ):
                        # If the dependency is a class, we instantiate it.
                        dependency = dependency()
                        dependency.type_hint = type_hint
                    else:
                        # If the dependency is not an instance of Dependency,
                        # we raise an AnnotationException.
                        raise AnnotationException(annotation)
                else:
                    # If the dependency is already an instance of Dependency,
                    # we are setting only the type hint.
                    dependency.type_hint = type_hint
            else:
                ...

            dependency.field_name = key
            dependency.value = values.get(key, val.default)

            yield dependency
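For context, a tiny standalone sketch of the mechanism the extractor relies on (Marker is a hypothetical stand-in, not the library's real Json class): typing.get_args on an Annotated annotation returns the real type hint plus the extra metadata, and mypy only type-checks against the first argument.

    import inspect
    from typing import Annotated, get_args

    class Marker:            # stand-in for a Dependency such as Json()
        pass

    def test_get(data: Annotated[dict, Marker()]):
        ...

    annotation = inspect.signature(test_get).parameters['data'].annotation
    type_hint, dependency = get_args(annotation)
    print(type_hint)         # <class 'dict'>
    print(dependency)        # <__main__.Marker object at 0x...>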
3
1
77,120,647
2023-9-17
https://stackoverflow.com/questions/77120647/given-an-array-of-n-integers-return-the-max-sum-of-3-non-adjacent-elements
I got this question for a coding interview (mock, part of my uni's competitive programming club since a few people are curious lol), and I got stuck on finding the optimal solution.

Extra details/constraints:

- minimum array has to be 5 (because the elements at index 0,2,4 summed would be the answer)

And some examples:

- Example test: [8, -4, -7, -5, -5, -4, 8, 8] expected 12 (because at indices 0, 5, 7, max sum is 8 + -4 + 8 = 12)
- Example test: [-2, -8, 1, 5, -8, 4, 7, 6] expected 15 (b/c at indices 3,5,7, the max sum is 15)
- Example test: [-3, 0, -6, -7, -9, -5, -2, -6] expected -9 (b/c at indices 1,3,6, the max sum is -9)
- Example test: [-10, -10, -10, -10, -10] expected -30 (b/c at indices 1,3,5, max sum is -30)

Here's my current brute force solution:

    def solution(A):
        # Implement your solution here
        # res = []
        max_sum = -float('inf')
        n = len(A)
        for i in range(n):
            for j in range(i + 2, n):
                for k in range(j + 2, n):
                    temp_sum = A[i] + A[j] + A[k]
                    if temp_sum >= max_sum:
                        max_sum = temp_sum
                        # res = [i, j, k]
        return max_sum

Is there anything more optimal and efficient?
You can solve this problem in O(n), n being the size of the array. The idea is to use Dynamic Programming to save the following information:

1 - What is the best you can make using the first i elements, if you were to pick exactly 1 element.
2 - What is the best you can make using the first i elements, if you were to pick exactly 2 elements.
3 - What is the best you can make using the first i elements, if you were to pick exactly 3 elements.

To answer 1, the best you can make using the first i elements is the maximum element from the first i elements. Let's use a matrix to store that information. Let dp[1][i] be the best you can get picking exactly 1 element from the first i elements.

To answer 2, the best you can make using the first i elements picking 2 elements is:

- Either you do not pick the current i-th element, and get the best you could get picking exactly 2 elements up until the i-1-th element.
- Or you pick the current i-th element and the best you could get picking ONLY 1 element up until the i-2-th element.

Let dp[2][i] be the best you can get picking exactly 2 elements from the first i elements. It's the maximum of:

- dp[2][i-1] (not picking the i-th element).
- A[i] + dp[1][i-2] (picking the i-th plus the best from picking 1 up until the i-2-th).

Notice that if you pick the i-th, you must NOT pick the i-1-th, as that would violate the rules of the problem.

To answer 3, it's the same as answering 2:

- Either you do not pick the current i-th element, and get the best you could get picking exactly 3 elements up until the i-1-th element.
- Or you pick the current i-th element and the best you could get picking ONLY 2 elements up until the i-2-th element.

The answer will be what lies in dp[3][n-1], which means the best you could get picking exactly 3 elements from the first n-1 elements.

The code (see the test sketch below the code for a check against the question's examples):

    def solution(A):
        n = len(A)
        dp = [[0] * n for i in range(4)]

        dp[1][0] = A[0]           # The best of picking 1 element up to the 1st is the 1st.
        dp[2][0] = -float('inf')  # You cannot pick 2 elements up to the 1st.
        dp[2][1] = -float('inf')  # You cannot pick 2 elements up to the 2nd.
        dp[3][0] = -float('inf')  # You cannot pick 3 elements up to the 1st.
        dp[3][1] = -float('inf')  # You cannot pick 3 elements up to the 2nd.
        # Theoretically up to 4th, but the `for` already does it for you.

        for i in range(1, n):
            dp[1][i] = max(
                dp[1][i-1],        # Either we skip the current
                A[i]               # Or we choose it
            )

        for i in range(2, n):
            dp[2][i] = max(
                dp[2][i-1],        # Either we skip the current
                A[i] + dp[1][i-2]  # Or we choose it
            )

        for i in range(2, n):
            dp[3][i] = max(
                dp[3][i-1],        # Either we skip the current
                A[i] + dp[2][i-2]  # Or we choose it
            )

        max_sum = dp[3][n-1]

        return max_sum
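A quick check of this DP against the brute-force examples listed in the question (a small test sketch reusing the example arrays above):

    tests = [
        ([8, -4, -7, -5, -5, -4, 8, 8], 12),
        ([-2, -8, 1, 5, -8, 4, 7, 6], 15),
        ([-3, 0, -6, -7, -9, -5, -2, -6], -9),
        ([-10, -10, -10, -10, -10], -30),
    ]
    for arr, expected in tests:
        assert solution(arr) == expected, (arr, solution(arr), expected)
    print("All examples match the expected sums.")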
3
2
77,116,907
2023-9-16
https://stackoverflow.com/questions/77116907/diamond-pattern-using-python
I'm trying to print out a diamond shape using star characters and additional characters using python. I tried a lot to get the result I wanted, but was not successful. I want to write a function that receives a parameter, and according to that parameter it should determine the height of the diamond shape.

For example: if the parameter is 6, the first half must have 5 lines except for the middle line. The middle line is shared by both halves. The bottom half must also have 5 lines. The additional requirement is that after the first 2 lines, instead of * characters, it must have "1 # 1" in the first half and in the second half it must have "2 # 2".

For example, if the input is 6, it must produce something like below:

         *
        ***
       1 # 1
      1  #  1
     1   #   1
    ***********
     2   ^   2
      2  ^  2
       2 ^ 2
        ***
         *

So, this is what I tried so far:

    def print_half_diamond(height):
        if height <= 0:
            print("Height must be a positive integer.")
            return

        for i in range(1, height + 1):
            spaces = " " * (height - i)
            line = ""
            if i == 1 or i == 2:
                line = "*" * (2 * i - 1)
            else:
                line = "1" + " " * (2 * (i - 2) + 1) + "#" + " " * (2 * (height - i)) + "1"
            print(spaces + line)

        print("*" * (2 * height - 1))

        for i in range(height - 1, 0, -1):
            spaces = " " * (height - i)
            stars = "*" * (2 * i - 1)
            print(spaces + stars)

    print_half_diamond(6)

The first half is somewhat correct but the spacing is wrong, and the second half just prints out the * characters as follows:

         *
        ***
       1   #      1
      1     #    1
     1       #  1
    1         #1
    ***********
     *********
      *******
       *****
        ***
         *

How do I get the result I want? Help is much appreciated.
The top and bottom halves of the diamond are effectively mirror images of each other. Therefore you only need to write code to format one half. The other half is just the reverse. Something like this:

    def diamond(n):
        def rows(c, mc='#'):
            return [f'{c: >{i+2}}{mc: >{m-i-1}}{c: >{m-i-1}}' for i in range(n-3)]
        if n > 2:
            middle = '*' * (w := n * 2 - 1)
            m = w // 2
            top = [f'{"***": >{m+2}}', f'{"*": >{m+1}}']
            print(*reversed(rows('1')+top), middle, *rows('2', '^')+top, sep='\n')

    diamond(6)

Output:

         *
        ***
       1 # 1
      1  #  1
     1   #   1
    ***********
     2   ^   2
      2  ^  2
       2 ^ 2
        ***
         *

Or, by using the centering format specifier...

    def diamond(n):
        def rows(c, mc='#'):
            return [f'{e:^{w}}' for e in [f'{c}{" "*i}{mc}{" "*i}{c}'
                                          for i in range(n-3, 0, -1)] + ['***', '*']]
        if n > 2:
            w = n * 2 - 1
            print(*reversed(rows('1')), '*'*w, *rows('2', '^'), sep='\n')
2
4
77,117,483
2023-9-16
https://stackoverflow.com/questions/77117483/iterator-for-k-combinations
LeetCode 77. Combinations: Given two integers n and k, return all possible combinations of k numbers chosen from the range [1, n]. You may return the answer in any order.

My code using backtracking is given below.

    def combine(n: int, k: int) -> list[list[int]]:
        def backtrack(i: int, comb: list[int]) -> None:
            if len(comb) == k:
                ans.append([*comb])
                return
            # Number of elements still needed to make a k-combination.
            need = k - len(comb)
            # The range of numbers is [i, n], therefore, size=n - i + 1
            remaining = n - i + 1
            # This is the last offset from which we can still make a k-combination.
            offset = remaining - need
            for j in range(i, i + offset + 1):
                comb.append(j)
                backtrack(j + 1, comb)
                comb.pop()

        ans: list[list[int]] = []
        backtrack(1, [])
        return ans

It works as expected.

Now, I'm looking at LeetCode 1286. Iterator for Combination, which basically asks for an Iterator[list[int]] or Generator[list[int]] instead of returning all the combinations at once. Technically, LeetCode 1286 works with strings, but for the sake of similarity to LeetCode 77, let's talk about integers only, since it makes no difference to the algorithm.

Can the above code be converted to return an iterator? The reason I'd prefer starting with the above code and not something completely different is because of its simplicity, and to keep the two solutions similar as much as possible.

My research efforts:

I've studied the sample code for itertools.combinations, but it works differently from my code above, and, IMO, is pretty convoluted since it uses some obscure/non-intuitive features such as for-else and loop variables referenced later (Python loop variables remain in scope even after the loop exits).

I've also looked at Algorithm to return all combinations of k elements from n, but due to the highly generic nature of the question (it doesn't specify a programming language or whether the combinations should be returned all at once or as an iterator), the answers are all over the place.

Finally, I've looked at Creating all possible k combinations of n items in C++, which specifies C++, but not an iterator, and thus doesn't fit my purpose, since I already know how to generate all combinations.
You can pass yielding through recursion with minimal changes to your code.

- Yield comb if the length is appropriate instead of appending it to ans.
- Yield everything backtrack(j + 1, comb) yields after each recursive call.
- Return backtrack(1, []) from combine_iterator.

    def combine_iterator(n: int, k: int) -> list[list[int]]:
        def backtrack(i: int, comb: list[int]) -> None:
            if len(comb) == k:
                yield [*comb]
                return
            # Number of elements still needed to make a k-combination.
            need = k - len(comb)
            # The range of numbers is [i, n], therefore, size=n - i + 1
            remaining = n - i + 1
            # This is the last offset from which we can still make a k-combination.
            offset = remaining - need
            for j in range(i, i + offset + 1):
                comb.append(j)
                yield from backtrack(j + 1, comb)
                comb.pop()

        return backtrack(1, [])

Checking the type of the returned object:

    type(combine_iterator(5, 3))
    >> generator
    type(next(combine_iterator(5, 3)))
    >> list
    type(next(combine_iterator(5, 3))[0])
    >> int

Checking that the results are the same for combine and combine_iterator:

    for n in range(2, 20):
        for k in range(1, n + 1):
            res = combine(n, k)
            res_it = combine_iterator(n, k)
            assert list(res) == list(res_it), f"!!! Different results for n={n}, k={k}"
    print("Done!")

Output:

    Done!
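Since LeetCode 1286 asks for an object with next/hasNext semantics, here is a small sketch (the class and method names follow that problem's stated interface; the string handling is an assumption) of how the generator above could back such an iterator:

    class CombinationIterator:
        def __init__(self, characters: str, combinationLength: int):
            n, k = len(characters), combinationLength
            # Reuse the generator: indices 1..n map back onto the characters.
            self._gen = combine_iterator(n, k)
            self._next = next(self._gen, None)
            self._chars = characters

        def next(self) -> str:
            comb = self._next
            self._next = next(self._gen, None)   # look ahead so hasNext() is cheap
            return ''.join(self._chars[i - 1] for i in comb)

        def hasNext(self) -> bool:
            return self._next is not None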
2
2
77,116,419
2023-9-16
https://stackoverflow.com/questions/77116419/match-only-if-following-string-matches-pattern
I'm trying to match an entire string that starts with a certain string and then match any number of characters except ::; if :: is matched, only accept it if it is followed by the string CASE. So for example: a string that starts with Linus:: followed by 0 or more characters, except that if :: appears then CASE has to follow, otherwise only match everything before the ::.

- Linus::AOPKNS::CASE would capture the entire string
- Linus::AOPKNS would capture the entire string
- Linus::AOPKNS::OK would only capture Linus::AOPKNS

I imagine I'd have to use a positive lookahead but I'm not quite sure how to do that considering I want to match any number of characters before the ::.
Use a tempered greedy token:

    ^            # Match at the start of the string
    Linus::      # 'Linus::', literally,
    (?:(?!::).)+ # followed by a sequence of characters that doesn't contain '::'
    (?:::CASE)?  # and, optionally, '::CASE'.

Try it on regex101.com.

Depending on your use case, you might want to add a \b (word boundary) at the end of the pattern.
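A small sketch of the same pattern used from Python's re module, checked against the three example strings from the question:

    import re

    pattern = re.compile(r'^Linus::(?:(?!::).)+(?:::CASE)?')

    for s in ['Linus::AOPKNS::CASE', 'Linus::AOPKNS', 'Linus::AOPKNS::OK']:
        print(s, '->', pattern.match(s).group())
    # Linus::AOPKNS::CASE -> Linus::AOPKNS::CASE
    # Linus::AOPKNS -> Linus::AOPKNS
    # Linus::AOPKNS::OK -> Linus::AOPKNS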
4
5
77,109,398
2023-9-15
https://stackoverflow.com/questions/77109398/failing-to-import-files-compiled-from-protobuf-in-python
My directory structure is as follows:

    test
    |-test.py
    |-test.proto
    |-test_pb2.py
    |-__init__.py
    |-comm
      |-comm.proto
      |-comm_pb2.py
      |-__init__.py

Both __init__.py files are empty, and test.proto is like this:

    package test;
    import "comm/comm.proto";

    message Test{
        optional comm.Foo foo = 1;
    }

and comm.proto is like this:

    package comm;

    message Foo{}

I successfully used the command protoc --python_out=. -I. comm.proto in the comm directory to compile comm.proto, and protoc --python_out=. -I. test.proto in the test directory to compile test.proto, but when I tried to import test_pb2 in test.py, I encountered this error:

    TypeError: Couldn't build proto file into descriptor pool: Depends on file 'comm/comm.proto', but it has not been loaded

Can someone help me identify the reason and provide a solution, please?
So, I think how protoc works with Python is more complicated|confusing than the average bear! My recourse is to use Golang to see what its SDK does and then reverse engineer that back into Python.

The documentation says that protoc ignores Protocol Buffers packages. See Defining your Protocol Format and the note:

    The .proto file starts with a package declaration, which helps to prevent naming conflicts between different projects. In Python, packages are normally determined by directory structure, so the package you define in your .proto file will have no effect on the generated code. However, you should still declare one to avoid name collisions in the Protocol Buffers name space as well as in non-Python languages.

I'm unfamiliar with Python Packages and Modules but ...

package's

When you define a package in a proto file, the convention is that the proto file be in a folder that matches the package name. You can see this with e.g. Google's Well-known Types and e.g. Timestamp. This is defined in timestamp.proto to be in package google.protobuf and so it's in the folder path google/protobuf and, by convention, it's called timestamp.proto (though the file name is arbitrary).

Because message Test is in package test and message Foo is in package comm, the structure would be more correctly (!?) structured as:

    .
    ├── comm
    │   └── comm.proto
    └── test
        └── test.proto

When you use protoc, you must define --proto_path for each distinct folder that contains package folders. In this case --proto_path=${PWD}, which can be omitted.

protoc

This gives protoc:

    protoc \
    --python_out=${PWD} \
    --pyi_out=${PWD} \
    test/test.proto \
    comm/comm.proto

NOTE I encourage you to include --pyi_out to get "intellisense" in Visual Studio Code.

Or, my preference is to be more emphatic and include the root (${PWD}) of the proto packages:

    protoc \
    --proto_path=${PWD} \
    --python_out=${PWD} \
    --pyi_out=${PWD} \
    ${PWD}/test/test.proto \
    ${PWD}/comm/comm.proto

Python packages

Curiously, even though we were told that the package would not have an effect on the Python code, it does. The package test proto is output to the test folder and package comm to the comm folder. You will need to create an empty __init__.py in test (but not comm) for the code to work.

Empty Message

Your comm.Foo message is empty ({}) and so there's nothing to use. I tweaked it:

    package comm;

    message Foo{
        optional string bar = 1;
    }

Basic Python example

    import test.test_pb2

    test = test.test_pb2.Test()
    test.foo.bar = "Hello"

    print(test)

Yields:

    foo {
      bar: "Hello"
    }

protos folder

My recommendation is to use a folder e.g. protos to hold proto sources, i.e.

    .
    └── protos
        ├── comm
        │   └── comm.proto
        └── test
            └── test.proto

And then e.g.:

    protoc \
    --proto_path=${PWD}/protos \
    --python_out=${PWD} \
    --pyi_out=${PWD} \
    ${PWD}/protos/test/test.proto \
    ${PWD}/protos/comm/comm.proto

NOTE --proto_path extends to include protos and protos must be a prefix on the proto sources, but the generated code is still laid out by package in ${PWD}:

    .
    ├── comm
    │   ├── comm_pb2.py
    │   ├── comm_pb2.pyi
    ├── main.py
    ├── protos
    │   ├── comm
    │   │   └── comm.proto
    │   └── test
    │       └── test.proto
    └── test
        ├── __init__.py
        ├── test_pb2.py
        └── test_pb2.pyi

This has the advantage of being more universally applicable if you decide to generate e.g. Golang sources too.
3
6
77,114,815
2023-9-15
https://stackoverflow.com/questions/77114815/pylint-is-not-suggesting-the-walrus-operator-why
I was going to ask if there is a pylint-style code analyzer capable of suggesting the use of the := operator in places where it might improve the code. However, it looks like such a test was added to pylint two years ago -> github PR (merged).

Anyway, I never saw such a suggestion, not even for this example like in the linked PR:

    x = 2
    if x:
        print(x)

    # -----
    # if (x := 2):
    #     print(x)
    # -----

This feature has been available since Python 3.8. (I'm using recent Python and pylint versions.)

I thought I had to enable it somehow, but the help says:

    --py-version <py_version>
        Minimum Python version to use for version dependent checks. Will default to the version used to run pylint.

What is wrong? Why is there no consider-using-assignment-expr from pylint?
The consider-using-assignment-expr check in pylint can be enabled by adding the following lines to your pylint configuration file. I am using a configuration file named pylint.toml:

    [tool.pylint.main]
    load-plugins="pylint.extensions.code_style"

Then you can run the linter using pylint --rcfile <config_file> <python_file>. See here for more instructions.

Note that I am using Python 3.11 and Pylint 2.17, but the check should be available since Python 3.8.
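Presumably the same can be done without a config file by loading the extension on the command line; this is not taken from the answer, so treat it as an assumption to verify against your pylint version:

    pylint --load-plugins=pylint.extensions.code_style --enable=consider-using-assignment-expr <python_file>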
3
3
77,111,833
2023-9-15
https://stackoverflow.com/questions/77111833/align-two-dataframe-columns-and-preserve-the-order-no-lexicographical-reorder
Let A and B be two dataframes' columns:

    A: Hello, Foo, Hey, Bar, World
    B: Hello, Bar, Doo, World, Star

I want to obtain the columns of a dataframe C containing all the unique columns in their concatenation, BUT the columns must be in the same order as before.

    C: Hello, Foo, Bar, Hey, Doo, World, Star

In other words: "If A is an older version of a dataframe and B is a newer one, how to get a dataframe C which keeps track of A's deleted columns (not present in B) and B's added columns (not present in A) in a way that keeps the ordering consistent with A (or B)?"

The method align can mix the two dataframes' columns but it does not preserve the original order; the result is in lexicographical order.
Assuming you really want to keep the original order in both Indexes (and assuming there is no circular pattern), you can use the following algorithm:

    A = pd.DataFrame(columns=['Hello', 'Foo', 'Hey', 'Bar', 'World'])
    B = pd.DataFrame(columns=['Hello', 'Bar', 'Doo', 'World', 'Star'])

    def merge(A, B):
        sA = set(A)
        sB = set(B)
        out = {}
        while sA or sB:      # while items are left in A or B
            for a in A:      # take from A while not in B
                sA.discard(a)
                if a in sB:
                    break
                out.setdefault(a)
            for b in B:      # take from B while not in A
                sB.discard(b)
                if b in sA:
                    break
                out.setdefault(b)
        return list(out)

    out = merge(A, B)

Output:

    ['Hello', 'Foo', 'Hey', 'Bar', 'Doo', 'World', 'Star']

generic solution

Alternatively, you can use graph theory: construct a directed graph with all the pairs of successive columns as edges using networkx.from_edgelist and itertools.pairwise, then find the longest path with dag_longest_path:

    import networkx as nx
    from itertools import pairwise

    dfs = [A, B]

    G = nx.from_edgelist((pair for pairs in map(pairwise, dfs) for pair in pairs),
                         create_using=nx.DiGraph)
    C_cols = nx.dag_longest_path(G)

Output:

    ['Hello', 'Foo', 'Hey', 'Bar', 'Doo', 'World', 'Star']

Graph:
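Once the merged column order is computed, a hedged sketch of how C itself could be materialised (reindex is standard pandas; columns missing from a source frame simply end up empty/NaN):

    import pandas as pd

    C_cols = merge(A, B)   # or nx.dag_longest_path(G) from the generic solution
    C = pd.concat([A, B]).reindex(columns=C_cols)
    print(list(C.columns))
    # ['Hello', 'Foo', 'Hey', 'Bar', 'Doo', 'World', 'Star']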
3
3
77,111,774
2023-9-15
https://stackoverflow.com/questions/77111774/how-to-hide-horizontal-monotonic-sequence-of-numbers
My input is df:

       COLUMN_1  COLUMN_2  COLUMN_3  COLUMN_4
    0         0         1         0         2
    1         1         1         2         3
    2         1         2         3         2
    3         1         2         4         5
    4         4         5         8         8

And I wish I could hide (horizontally, from left to the non-inclusive right) monotonic sequences with a difference equal to 1. For example, if in a row we have [4, 5, 8, 8] (like in the last one), the concerned sequence is [4, 5). So, we need to hide the number 4 with an empty string.

My expected output is this:

      COLUMN_1 COLUMN_2 COLUMN_3 COLUMN_4
    0                 1        0        2
    1        1                          3
    2                          3        2
    3                 2                 5
    4                 5        8        8

Explanation:

I tried with the code below but I'm not on the right track since I get a weird boolean dataframe.

    df.diff(axis=1).eq(1).iloc[:, ::-1].cummax(axis=1).replace(True, '').iloc[:, ::-1]
You need to use a negative period in diff, combined with mask:

    out = df.mask(df.diff(-1, axis=1).eq(-1), '')

or, for in place modification:

    df[df.eq(df.shift(-1, axis=1)-1)] = ''

Variant with shift:

    out = df.mask(df.eq(df.shift(-1, axis=1)-1), '')

Output:

      COLUMN_1 COLUMN_2 COLUMN_3 COLUMN_4
    0                 1        0        2
    1        1                          3
    2                          3        2
    3                 2                 5
    4                 5        8        8

Intermediates:

    # df.diff(-1, axis=1)
       COLUMN_1  COLUMN_2  COLUMN_3  COLUMN_4
    0        -1         1        -2       NaN
    1         0        -1        -1       NaN
    2        -1        -1         1       NaN
    3        -1        -2        -1       NaN
    4        -1        -3         0       NaN

    # df.shift(-1, axis=1)
       COLUMN_1  COLUMN_2  COLUMN_3  COLUMN_4
    0         1         0         2       NaN
    1         1         2         3       NaN
    2         2         3         2       NaN
    3         2         4         5       NaN
    4         5         8         8       NaN
2
1
77,111,178
2023-9-15
https://stackoverflow.com/questions/77111178/running-a-docker-login-with-python-subprocess-securely
I want to run a docker login from python3 without asking for user input. I have three global variables REGISTRY_URL, USERNAME, PASSWORD. I want to run:

    os.system(f"echo '{PASSWORD}' | docker login {REGISTRY_URL} -u {USERNAME} --password-stdin")

The problem is that my three global variables are user controllable, which can lead to Remote Code Execution. How can I run this command securely with subprocess.run?

(NB: I do not want to use the -p option of docker because it is not secure as per docker's recommendation.)
You can supply the password using the input argument to subprocess.run:

    import subprocess

    def docker_login(registry_url, username, password):
        command = ["docker", "login", registry_url, "-u", username, "--password-stdin"]
        completed_process = subprocess.run(command,
                                           input=password.encode() + b'\n',
                                           capture_output=True)
        if completed_process.returncode == 0:
            print("Docker login successful!")
        else:
            print("Docker login failed. Error message:")
            print(completed_process.stderr.decode())

    docker_login(REGISTRY_URL, USERNAME, PASSWORD)
2
6
77,097,844
2023-9-13
https://stackoverflow.com/questions/77097844/how-to-rename-samples-based-on-dictionary-values
I have some trouble writing a snakemake rule to change the name of my samples. After demultiplexing with Porechop and some basic trimming with Filtlong, I would like to change the names of my samples from e.g. BC01_trimmed.fastq.gz to E_coli_trimmed.fastq.gz. The idea is that in my config file there is a dictionary where each sample is linked to the used barcode. Based on this previously asked question, I wrote this piece of example code.

    mydictionary = {
        'BC01': 'bacteria_A',
        'BC02': 'bacteria_B'
    }

    rule all:
        input:
            expand('{bacteria}_trimmed.fastq.gz', bacteria=mydictionary.values())

    rule changeName:
        input:
            '{barcode}_trimmed.fastq.gz'
        params:
            value=lambda wcs: mydictionary[wcs.bacteria]
        output:
            '{params.value}_trimmed.fastq.gz'
        shell:
            "mv {input} {output}"

But I receive the error:

    WildcardError in rule changeName in file Snakefile:
    Wildcards in input files cannot be determined from output files: 'barcode'

Thanks in advance
Let's try again... I would reverse the dictionary, since in your input function you want to retrieve the barcode given a sample name. (You can reverse key-values using python code, of course.)

To resolve the cyclic dependency or similar errors, I think you need to either constrain the wildcard values to the ones you have in your dictionary, i.e. effectively disable the regex matching, or output the renamed files to a different directory. (I really like snakemake, but I find this behaviour quite confusing.) I use the wildcard_constraints pattern as below quite liberally for any wildcard to avoid this issue.

So:

    mydictionary = {
        'bacteria_A': 'BC01',
        'bacteria_B': 'BC02'
    }

    wildcard_constraints:
        bacteria='|'.join([re.escape(x) for x in mydictionary.keys()]),

    rule all:
        input:
            expand('{bacteria}_trimmed.fastq.gz', bacteria=mydictionary.keys())

    rule changeName:
        input:
            lambda wcs: '%s_trimmed.fastq.gz' % mydictionary[wcs.bacteria],
        output:
            '{bacteria}_trimmed.fastq.gz'
        shell:
            "mv {input} {output}"
2
2
77,110,775
2023-9-15
https://stackoverflow.com/questions/77110775/ipython-doesnt-allow-creating-of-classmethods
I tried using IPython to create a class method with the classmethod decorator. When I press enter I get the following error. I tried using the same decorator in a normal Python script and it worked. Why can't I do the same in IPython?
Upgrade your ipython package to the latest version, e.g.

    $ python3 -m pip install -U ipython

It works fine for ipython==8.1.0 (released Feb 25, 2022) or later:

    $ ipython
    Python 3.11.4 (main, Jun 20 2023, 16:52:35) [Clang 13.0.0 (clang-1300.0.29.30)]
    Type 'copyright', 'credits' or 'license' for more information
    IPython 8.1.0 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: class Test:
       ...:     @classmethod
       ...:     def test():
       ...:         pass
       ...:

    In [2]:

I could reproduce the issue for the previous version of ipython, 8.0.1:

    $ ipython
    Python 3.11.4 (main, Jun 20 2023, 16:52:35) [Clang 13.0.0 (clang-1300.0.29.30)]
    Type 'copyright', 'credits' or 'license' for more information
    IPython 8.0.1 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: class Test:
       ...:     @classmethod
      Input In [1]
        @classmethod
                    ^
    SyntaxError: incomplete input
2
3
77,108,924
2023-9-14
https://stackoverflow.com/questions/77108924/does-scipy-optimize-minimize-use-parallelization
The scipy.optimize.minimize function with method="BFGS" (based on this) doesn't seem to use parallelization when computing the cost function or numerical gradient. However, when I run an optimization on a Macbook Air 8 core Apple M1 (see below for a minimal reproducible example), using the top command I get 750% to 790% CPU usage, suggesting all 8 cores are used. This isn't always the case. On a supercomputer where each node has 40 cores I got 100% to 200% CPU usage, suggesting only 2 cores are used.

Does scipy.optimize.minimize use parallelization when computing the cost function/numerical gradient? If so, how do I get scipy.optimize.minimize to use all cores?

    # basic optimization of the variational functional for a random symmetric matrix
    import numpy as np
    from numpy.random import uniform
    from scipy.optimize import minimize

    # generate random symmetric matrix to compute minimal eigenvalue of
    N = 1000
    H = uniform(-1, 1, [N, N])
    H = H + H.T

    # variational cost function
    def cost(x):
        return (x @ H @ x) / (x @ x)

    x0 = uniform(-0.1, 0.1, N)

    # minimize variational function with BFGS
    minimize(cost, x0, method='BFGS')
No, but the function being evaluated can use parallelization.

You might think that you're not using parallelization in this program. And you're not - at least not explicitly. However, many NumPy operations call out to your platform's BLAS library. Matrix multiplication is one of the operations that can be parallelized by BLAS. Some profiling shows that this program spends roughly 80% of its time doing matrix multiplies inside of the cost() function.

You can check this possibility using the library threadpoolctl. Example:

    import time
    from threadpoolctl import ThreadpoolController

    # cost, x0 and minimize are defined as in the question
    controller = ThreadpoolController()

    for i in range(1, 5):
        t0 = time.time()
        with controller.limit(limits=i, user_api='blas'):
            print(minimize(cost, x0, method='BFGS'))
        t = time.time()
        print(f"Threads {i}, Duration: {t - t0:.3f}")

By using htop, I confirmed that restricting BLAS parallelism also restricts the number of cores this program uses. Under the tests I did, this is not parallelizing particularly well. This suggests that most of the extra CPU usage is being wasted.

    Threads 1, Duration: 4.561
    Threads 2, Duration: 3.631
    Threads 3, Duration: 3.462
    Threads 4, Duration: 4.019

(Results are only a rough guide, and will depend on your particular CPU and BLAS library. Benchmark this on your own hardware.)

Note: Although 80% of the time is spent inside your cost function, SciPy also seems to be using some BLAS parallelism, as moving the with controller.limit(limits=i, user_api='blas'): line inside the cost function resulted in some amount of parallelism. Most likely, this is from inverting the Hessian, which is the most expensive step of BFGS, ignoring computing the cost function itself.

Note: One of the reasons this is so slow is that no Jacobian is provided for this function. If no Jacobian is provided, it must be estimated numerically by calling the function cost() once for every dimension in the problem. In this case, it requires an extra thousand calls for each step.
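Following up on that last note, a hedged sketch (not from the answer) of supplying an analytic gradient for this particular cost function: for symmetric H, the gradient of the Rayleigh quotient r(x) = (x^T H x) / (x^T x) is 2(Hx - r(x)x) / (x^T x), which avoids the ~N extra cost evaluations per step used by finite differences.

    def cost_grad(x):
        xx = x @ x
        r = (x @ H @ x) / xx
        # gradient of the Rayleigh quotient, valid because H is symmetric
        return 2 * (H @ x - r * x) / xx

    minimize(cost, x0, jac=cost_grad, method='BFGS')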
3
4
77,106,998
2023-9-14
https://stackoverflow.com/questions/77106998/polars-use-is-in-with-durations
I have a dataframe with a column containing a set of durations, like 5m, 15m, etc...

    df = pl.DataFrame({
        "duration_m": [5, 15, 30]
    })
    df = df.with_columns(
        duration = pl.duration(
            minutes = pl.col("duration_m"))
    )
    df

    shape: (3, 2)
    ┌────────────┬──────────────┐
    │ duration_m ┆ duration     │
    │ ---        ┆ ---          │
    │ i64        ┆ duration[ns] │
    ╞════════════╪══════════════╡
    │ 5          ┆ 5m           │
    │ 15         ┆ 15m          │
    │ 30         ┆ 30m          │
    └────────────┴──────────────┘

I want to filter this dataframe to return only rows that are 5m or 15m using is_in(). However, when I do this:

    my_durations = [pl.duration(minutes=5), pl.duration(minutes=15)]
    df.filter(pl.col("duration").is_in(my_durations))

I get:

    InvalidOperationError: 'is_in' cannot check for Object("object", Some(object-registry)) values in Duration(Microseconds) data

This has no error (but is obviously not what I want to do):

    df.filter(pl.col("duration").is_in(pl.duration(minutes=5)))

Thanks
I'm not sure why it errors out, even though pl.duration is an Expr and .is_in accepts expressions. My best guess is that it sees a python list first, so it assumes away getting an expression. From there it doesn't get a python type that it knows what to do with and just sees an object.

Causes aside, you have two ways to get around this.

The first:

If you wrap your list in a pl.concat_list then you still have an expression instead of a python list, but that won't be enough because it doesn't broadcast that to the whole df, so you'll get an error about size mismatch, which you can also fix by wrapping that in pl.repeat like this:

    df.filter(pl.col('duration')
                .is_in(pl.repeat(pl.concat_list(my_durations), pl.count()))
              )

    shape: (2, 2)
    ┌────────────┬──────────────┐
    │ duration_m ┆ duration     │
    │ ---        ┆ ---          │
    │ i64        ┆ duration[ns] │
    ╞════════════╪══════════════╡
    │ 5          ┆ 5m           │
    │ 15         ┆ 15m          │
    └────────────┴──────────────┘

That's awfully clunky; better to just use the second method.

The second:

Just use timedelta instead of duration when you're not setting it with the values in a df, like this:

    from datetime import timedelta
    my_durations = [timedelta(minutes=5), timedelta(minutes=15)]
    df.filter(pl.col("duration").is_in(my_durations))

As an aside... if you start with my_durations = [pl.duration(minutes=5), pl.duration(minutes=15)] then you can get the same timedelta list by doing:

    [pl.select(x).item() for x in my_durations]
3
2
77,104,513
2023-9-14
https://stackoverflow.com/questions/77104513/why-is-numba-popcount-code-twice-as-fast-as-equivalent-c-code
I have this simple python/numba code:

    from numba import njit
    import numba as nb

    @nb.njit(nb.uint64(nb.uint64))
    def popcount(x):
        b=0
        while(x > 0):
            x &= x - nb.uint64(1)
            b+=1
        return b

    @njit
    def timed_loop(n):
        summand = 0
        for i in range(n):
            summand += popcount(i)
        return summand

It just adds the popcounts for integers 0 to n - 1. When I time it I get:

    %timeit timed_loop(1000000)
    340 µs ± 1.08 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

It turns out that llvm cleverly converts the popcount function into the native CPU POPCNT instruction, so we should expect it to be fast. But the question is, how fast. I thought I would compare it to a C version to see the speed difference.

    #include <stdio.h>
    #include <time.h>

    // Function to calculate the population count (number of set bits) of an integer using __builtin_popcount
    int popcount(int num) {
        return __builtin_popcount(num);
    }

    int main() {
        unsigned int n;
        printf("Enter the value of n: ");
        scanf("%d", &n);

        // Variables to store start and end times
        struct timespec start_time, end_time;

        // Get the current time as the start time
        clock_gettime(CLOCK_MONOTONIC, &start_time);

        int sum = 0;
        for (unsigned int i = 0; i < n; i++) {
            sum += popcount(i);
        }

        // Get the current time as the end time
        clock_gettime(CLOCK_MONOTONIC, &end_time);

        // Calculate the elapsed time in microseconds
        long long elapsed_time = (end_time.tv_sec - start_time.tv_sec) * 1000000LL +
                                 (end_time.tv_nsec - start_time.tv_nsec) / 1000;

        printf("Sum of population counts from 0 to %d-1 is: %d\n", n, sum);
        printf("Elapsed time: %lld microseconds\n", elapsed_time);

        return 0;
    }

I then compiled this with -march=native -Ofast. I tried both gcc and clang and the results were very similar.

    ./popcount
    Enter the value of n: 1000000
    Sum of population counts from 0 to 1000000-1 is: 9884992
    Elapsed time: 732 microseconds

Why is the numba twice as fast as the C code?
TL;DR: the performance gap between the GCC and the Clang version is due to the use of scalar instructions versus SIMD instructions. The performance gap between the Numba and the Clang version comes from the size of the integers that is not the same between the two version : 64-bit versus 32-bits. Performance Results First of all, I am also able to reproduce the problem on my Intel i5-9600KF. Here are the results (and the versions): Numba 0.56.4: 170.089 ms Clang 14.0.6: 190.350 ms GCC 12.2.0: 328.133 ms To understand what happens, we need to analyze the assembly code produce by all compilers. Assembly code Here is the assembly code of the hot loop produced by GCC: .L5: xorl %edx, %edx popcntl %eax, %edx incl %eax addl %edx, %ebx cmpl %ecx, %eax jne .L5 Here is the one produced by Clang: .LBB1_3: # =>This Inner Loop Header: Depth=1 vpand %ymm5, %ymm0, %ymm12 vpshufb %ymm12, %ymm6, %ymm12 vpsrlw $4, %ymm0, %ymm13 vpand %ymm5, %ymm13, %ymm13 vpshufb %ymm13, %ymm6, %ymm13 vpaddb %ymm12, %ymm13, %ymm12 vpunpckhdq %ymm1, %ymm12, %ymm13 # ymm13 = ymm12[2],ymm1[2],ymm12[3],ymm1[3],ymm12[6],ymm1[6],ymm12[7],ymm1[7] vpsadbw %ymm1, %ymm13, %ymm13 vpunpckldq %ymm1, %ymm12, %ymm12 # ymm12 = ymm12[0],ymm1[0],ymm12[1],ymm1[1],ymm12[4],ymm1[4],ymm12[5],ymm1[5] vpsadbw %ymm1, %ymm12, %ymm12 vpackuswb %ymm13, %ymm12, %ymm12 vpaddd %ymm2, %ymm0, %ymm13 vpaddd %ymm12, %ymm8, %ymm8 vpand %ymm5, %ymm13, %ymm12 vpshufb %ymm12, %ymm6, %ymm12 vpsrlw $4, %ymm13, %ymm13 vpand %ymm5, %ymm13, %ymm13 vpshufb %ymm13, %ymm6, %ymm13 vpaddb %ymm12, %ymm13, %ymm12 vpunpckhdq %ymm1, %ymm12, %ymm13 # ymm13 = ymm12[2],ymm1[2],ymm12[3],ymm1[3],ymm12[6],ymm1[6],ymm12[7],ymm1[7] vpsadbw %ymm1, %ymm13, %ymm13 vpunpckldq %ymm1, %ymm12, %ymm12 # ymm12 = ymm12[0],ymm1[0],ymm12[1],ymm1[1],ymm12[4],ymm1[4],ymm12[5],ymm1[5] vpsadbw %ymm1, %ymm12, %ymm12 vpackuswb %ymm13, %ymm12, %ymm12 vpaddd %ymm3, %ymm0, %ymm13 vpaddd %ymm12, %ymm9, %ymm9 vpand %ymm5, %ymm13, %ymm12 vpshufb %ymm12, %ymm6, %ymm12 vpsrlw $4, %ymm13, %ymm13 vpand %ymm5, %ymm13, %ymm13 vpshufb %ymm13, %ymm6, %ymm13 vpaddb %ymm12, %ymm13, %ymm12 vpunpckhdq %ymm1, %ymm12, %ymm13 # ymm13 = ymm12[2],ymm1[2],ymm12[3],ymm1[3],ymm12[6],ymm1[6],ymm12[7],ymm1[7] vpsadbw %ymm1, %ymm13, %ymm13 vpunpckldq %ymm1, %ymm12, %ymm12 # ymm12 = ymm12[0],ymm1[0],ymm12[1],ymm1[1],ymm12[4],ymm1[4],ymm12[5],ymm1[5] vpsadbw %ymm1, %ymm12, %ymm12 vpackuswb %ymm13, %ymm12, %ymm12 vpaddd %ymm4, %ymm0, %ymm13 vpaddd %ymm12, %ymm10, %ymm10 vpand %ymm5, %ymm13, %ymm12 vpshufb %ymm12, %ymm6, %ymm12 vpsrlw $4, %ymm13, %ymm13 vpand %ymm5, %ymm13, %ymm13 vpshufb %ymm13, %ymm6, %ymm13 vpaddb %ymm12, %ymm13, %ymm12 vpunpckhdq %ymm1, %ymm12, %ymm13 # ymm13 = ymm12[2],ymm1[2],ymm12[3],ymm1[3],ymm12[6],ymm1[6],ymm12[7],ymm1[7] vpsadbw %ymm1, %ymm13, %ymm13 vpunpckldq %ymm1, %ymm12, %ymm12 # ymm12 = ymm12[0],ymm1[0],ymm12[1],ymm1[1],ymm12[4],ymm1[4],ymm12[5],ymm1[5] vpsadbw %ymm1, %ymm12, %ymm12 vpackuswb %ymm13, %ymm12, %ymm12 vpaddd %ymm12, %ymm11, %ymm11 vpaddd %ymm7, %ymm0, %ymm0 addl $-32, %edx jne .LBB1_3 Here is the one produced by Numba: .LBB0_8: vpand %ymm0, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm0, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpaddq -40(%rsp), %ymm0, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm1, %ymm1 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpaddq -72(%rsp), %ymm0, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 
vpaddq %ymm6, %ymm2, %ymm2 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpaddq %ymm0, %ymm8, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm3, %ymm3 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm4, %ymm4 vpaddq %ymm0, %ymm11, %ymm6 vpand %ymm6, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpsrlw $4, %ymm6, %ymm6 vpand %ymm6, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpaddb %ymm7, %ymm6, %ymm6 vpaddq %ymm0, %ymm12, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm1, %ymm1 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpaddq %ymm0, %ymm13, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm2, %ymm2 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpaddq %ymm0, %ymm14, %ymm7 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm3, %ymm3 vpand %ymm7, %ymm9, %ymm6 vpshufb %ymm6, %ymm10, %ymm6 vpsrlw $4, %ymm7, %ymm7 vpand %ymm7, %ymm9, %ymm7 vpshufb %ymm7, %ymm10, %ymm7 vpaddb %ymm6, %ymm7, %ymm6 vpsadbw %ymm5, %ymm6, %ymm6 vpaddq %ymm6, %ymm4, %ymm4 vpaddq %ymm0, %ymm15, %ymm0 addq $-2, %rbx jne .LBB0_8 Analysis First of all, we can see that the GCC code use the popcntl instruction which is very fast, at least for scalar operations. Clang generate a assembly code using the AVX-2 SIMD instruction set on my machine. This is why the program produced by Clang is so fast compared to GCC : it operates on many items in parallel thanks to SIMD instructions. Numba generate a code very similar to Clang. This is not surprising since Numba is based on LLVM-Lite (and so LLVM), while Clang is also based on LLVM. However, there are small differences explaining the performance impact. Indeed, the Numba assembly code operates on twice more items than the Clang counterpart. This can be seen by counting the number of vpsrlw instructions (8 VS 4). I do not expect this to make the difference since the Clang loop is already well unrolled and the benefit of unrolling it more is tiny. Actually, this more aggressive unrolling is a side effect. The key difference is that Numba operates on 64-bit integers while the C code operates on 32-bit integers! This is why Clang unroll the loop differently and generate different instructions. In fact, the smaller integers causes Clang to generate a sequence of instructions to convert integers of different size which is less efficient. IMHO, this is a side effect impacting the optimizer since operating on smaller items can generally be used generate faster SIMD code. The code produced by LLVM seems sub-optimal in this case : it saturates the port 5 (ie. shuffle/permute execution unit) on my machine while one can write a code not saturating it (not easy though). 
Faster C implementation

You can fix the C++ implementation so that it operates on 64-bit integers:

#include <stdio.h>
#include <time.h>
#include <stdint.h>

// Function to calculate the population count (number of set bits) of an integer using __builtin_popcount
uint64_t popcount(uint64_t num) {
    return __builtin_popcountl(num);
}

int main() {
    int64_t n;
    printf("Enter the value of n: ");
    scanf("%ld", &n);

    // Variables to store start and end times
    struct timespec start_time, end_time;

    // Get the current time as the start time
    clock_gettime(CLOCK_MONOTONIC, &start_time);

    int64_t sum = 0;
    for (int64_t i = 0; i < n; i++) {
        sum += popcount(i);
    }

    // Get the current time as the end time
    clock_gettime(CLOCK_MONOTONIC, &end_time);

    // Calculate the elapsed time in microseconds
    long long elapsed_time = (end_time.tv_sec - start_time.tv_sec) * 1000000LL + (end_time.tv_nsec - start_time.tv_nsec) / 1000;

    printf("Sum of population counts from 0 to %ld-1 is: %ld\n", n, sum);
    printf("Elapsed time: %lld microseconds\n", elapsed_time);

    return 0;
}

This produces a program as fast as Numba on my machine when using Clang (GCC still generates a slow scalar implementation).

Notes

SIMD versions only make sense if your real-world code is SIMD-friendly, that is, if popcount can be applied to multiple contiguous items. Otherwise, the results of the scalar implementation can be drastically different (in fact, the three compilers generate very similar code which I expect to be equally fast).

AVX-512 provides the SIMD instruction VPOPCNTDQ that should clearly outperform the code generated by LLVM using (only) AVX-2. Since I do not have AVX-512 on my machine, and AVX-2 does not provide such an instruction, it makes sense for LLVM to produce assembly code using AVX-2. The AVX-512 instruction can count the number of 1s in 16 x 32-bit integers in parallel while taking about the same number of cycles as its scalar counterpart. To be more precise, the instruction is only available with the instruction set AVX512VPOPCNTDQ + AVX512VL (which AFAIK is not available on all CPUs supporting AVX-512). As of now, this instruction is only available on a few x86-64 micro-architectures (e.g. Intel Ice Lake, Intel Sapphire Rapids and AMD Zen 4).
7
9
77,105,233
2023-9-14
https://stackoverflow.com/questions/77105233/how-do-i-get-an-element-from-shadow-dom-in-selenium-in-python
Could someone please explain to me how to get an element from a shadow DOM in Selenium 4 in Python? I want to call element.click() to accept the cookies - but I fail at the very first step! I've tried

driver.find_element(By.CSS_SELECTOR, '#shadow_host')
...
driver.find_element(By.CSS_SELECTOR, '#shadow_root')

Every time I get "no such element: Unable to locate element". Bye Michael
To get an element's shadow root, you should first get its host element and then read its shadowRoot property. In your case, the host tag is cmm-cookie-banner. So you get this element and then execute a JS script on it.

def get_shadow_root(element):
    return driver.execute_script('return arguments[0].shadowRoot', element)

shadow_host = driver.find_element(By.TAG_NAME, 'cmm-cookie-banner')
button = get_shadow_root(shadow_host).find_element(By.CSS_SELECTOR, '[data-test=handle-accept-all-button]')
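For completeness, newer Selenium 4 releases also expose the shadow root directly via the WebElement.shadow_root property, so the same lookup can be done without execute_script. A minimal sketch under that assumption (the page URL is a placeholder; the host tag and button selector are the ones from the answer above):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL for the page with the cookie banner

# Locate the shadow host, then ask Selenium for its shadow root
shadow_host = driver.find_element(By.TAG_NAME, "cmm-cookie-banner")
shadow_root = shadow_host.shadow_root  # returns a ShadowRoot object

# ShadowRoot supports CSS selector lookups
button = shadow_root.find_element(By.CSS_SELECTOR, "[data-test=handle-accept-all-button]")
button.click()

This requires a fairly recent browser/driver combination; if shadow_root is not available in your environment, the execute_script approach above works everywhere.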
2
8
77,103,883
2023-9-14
https://stackoverflow.com/questions/77103883/how-to-import-a-library-in-python-for-firebase-functions
Hello StackOverflow community. I am trying to deploy Firebase functions written in Python from a React-Native project. My code snippet looks like this: from firebase_functions import firestore_fn, https_fn import fitz import re import requests import io from datetime import datetime # The Firebase Admin SDK to access Cloud Firestore. from firebase_admin import initialize_app, firestore, credentials, storage import google.cloud.firestore from google.cloud import storage as Storage from google.oauth2 import service_account from uuid import uuid4 cred = credentials.Certificate("link_to_my_certificate.json") app = initialize_app(cred,{'storageBucket':'my_project_name.appspot.com'}) @https_fn.on_request() def printHello(req: https_fn.Request) -> https_fn.Response: https_fn.Response("Hello from Firebase functions", status=200) I tested them: tsc --watch firebase emulators:start --only functions It works perfectly fine, and after that, I'm trying to deploy them and I get the output like this: knswrw@MacBook functions % firebase deploy --only functions === Deploying to 'my_project_name'... i deploying functions i functions: preparing codebase default for deployment i functions: ensuring required API cloudfunctions.googleapis.com is enabled... i functions: ensuring required API cloudbuild.googleapis.com is enabled... i artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled... βœ” artifactregistry: required API artifactregistry.googleapis.com is enabled βœ” functions: required API cloudbuild.googleapis.com is enabled βœ” functions: required API cloudfunctions.googleapis.com is enabled i functions: Loading and analyzing source code for codebase default to determine what to deploy * Serving Flask app 'serving' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on http://127.0.0.1:8082 Press CTRL+C to quit 127.0.0.1 - - [14/Sep/2023 12:34:11] "GET /__/functions.yaml HTTP/1.1" 200 - 127.0.0.1 - - [14/Sep/2023 12:34:11] "GET /__/quitquitquit HTTP/1.1" 200 - /bin/sh: line 1: 75256 Terminated: 15 python3.11 "/Users/knswrw/Desktop/Project/firebase/functions/venv/lib/python3.11/site-packages/firebase_functions/private/serving.py" i functions: preparing functions directory for uploading... i functions: packaged /Users/knswrw/Desktop/Project/firebase/functions (14.13 KB) for uploading i functions: ensuring required API run.googleapis.com is enabled... i functions: ensuring required API eventarc.googleapis.com is enabled... i functions: ensuring required API pubsub.googleapis.com is enabled... i functions: ensuring required API storage.googleapis.com is enabled... βœ” functions: required API pubsub.googleapis.com is enabled βœ” functions: required API run.googleapis.com is enabled βœ” functions: required API eventarc.googleapis.com is enabled βœ” functions: required API storage.googleapis.com is enabled i functions: generating the service identity for pubsub.googleapis.com... i functions: generating the service identity for eventarc.googleapis.com... βœ” functions: functions folder uploaded successfully i functions: creating Python 3.11 (2nd Gen) function printHello(us-central1)... Could not create or update Cloud Run service printhello, Container Healthcheck failed. Revision 'printhello-00001-wev' is not ready and cannot serve traffic. The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. 
Logs for this revision might contain more information. Logs URL: https://console.cloud.google.com/logs/viewer?project=my_project_name&resource=cloud_run_revision/service_name/printhello/revision_name/printhello-00001-wev&advancedFilter=resource.type%3D%22cloud_run_revision%22%0Aresource.labels.service_name%3D%22printhello%22%0Aresource.labels.revision_name%3D%22printhello-00001-wev%22 For more troubleshooting guidance, see https://cloud.google.com/run/docs/troubleshooting#container-failed-to-start Functions deploy had errors with the following functions: printHello(us-central1) i functions: cleaning up build files... Error: There was an error deploying functions I tried to search about this topic on the internet, but I didn't find an answer to my question. I would welcome any answer, any explanation and any advice. Thank you. EDITED: There's a trouble in the fitz library from Pymupdf (I'm using it in the import above). The question is next, now modified: How to import a library in Python for Firebase functions? How to install it in the right way? And how to deploy the functions?
I found the solution to my problem when I checked the firebase-debug.log. It appears that there was an issue with the "fitz" library from PyMuPDF, which I had imported in my code (it is used in the imports shown above).

I had initially added the library to my Python Firebase functions using the following command:

pip3 install -t _directoryName _moduleName

However, it turns out that this was not the correct approach. I encountered deployment issues and received an error message like this:

For more troubleshooting guidance, see https://cloud.google.com/run/docs/troubleshooting#container-failed-to-start
Could not create or update Cloud Run service FunctionName, Container Healthcheck failed. Revision 'FunctionName-00001-wus' is not ready and cannot serve traffic. The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information.

Upon examining the logs, I noticed that the issue was related to the "import fitz" statement in my code.

To resolve this, I followed these steps:

Delete the "venv" directory.
Add the libraries you want to use to the "requirements.txt" file.
Recreate the virtual environment with Python 3.11 using the following commands:

python3.11 -m venv venv
source venv/bin/activate
pip3 install --upgrade pip
python3.11 -m pip install -r requirements.txt

Write your Firebase functions.
Finally, follow these steps for deployment:

tsc --watch
firebase emulators:start --only functions
firebase deploy

And that's it! Your deployment should now work smoothly.
3
6
77,102,860
2023-9-14
https://stackoverflow.com/questions/77102860/how-to-use-native-popcount-with-numba
I am using numba 0.57.1 and I would like to exploit the native CPU popcount in my code. My existing code is too slow as I need to run it hundreds of millions of times. Here is a MWE:

import numba as nb

@nb.njit(nb.uint64(nb.uint64))
def popcount(x):
    b = 0
    while (x > 0):
        x &= x - nb.uint64(1)
        b += 1
    return b

print(popcount(43))

The current speed is:

%timeit popcount(255)
148 ns ± 0.369 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

I believe that it should be possible to use the native CPU popcount (_mm_popcnt_u64 as a C intrinsic) instruction using numba intrinsics but this area is new to me. The llvm intrinsic I need to use is, I think, ctpop. Assuming this is right, how can I do this?
Try:

import numba as nb
from numba import types
from numba.cpython import mathimpl
from numba.extending import intrinsic


@intrinsic
def popcnt(typingctx, src):
    sig = types.uint64(types.uint64)

    def codegen(context, builder, signature, args):
        return mathimpl.call_fp_intrinsic(builder, "llvm.ctpop.i64", args)

    return sig, codegen


@nb.njit(nb.uint64(nb.uint64))
def popcount(x):
    return popcnt(x)


print(popcount(43))

Prints:

4
3
4
77,101,575
2023-9-14
https://stackoverflow.com/questions/77101575/python-3-10-type-hinting-for-decorator-to-be-used-in-a-method
I'm trying to use typing.Concatenate alongside typing.ParamSpec to type hint a decorator to be used by the methods of a class. The decorator simply receives flags and only runs if the class has that flag as a member. Code shown below: import enum from typing import Callable, ParamSpec, Concatenate P = ParamSpec("P") Wrappable = Callable[Concatenate["Foo", P], None] class Flag(enum.Enum): FLAG_1 = enum.auto() FLAG_2 = enum.auto() def requires_flags(*flags: Flag) -> Callable[[Wrappable], Wrappable]: def wrap(func: Wrappable) -> Wrappable: def wrapped_f(foo: "Foo", *args: P.args, **kwargs: P.kwargs) -> None: if set(flags).issubset(foo.flags): func(foo, *args, **kwargs) return wrapped_f return wrap class Foo: def __init__(self, flags: set[Flag] | None = None) -> None: self.flags: set[Flag] = flags or set() super().__init__() @requires_flags(Flag.FLAG_1) def some_conditional_method(self, some_int: int): print(f"Number given: {some_int}") Foo({Flag.FLAG_1}).some_conditional_method(1) # prints "Number given: 1" Foo({Flag.FLAG_2}).some_conditional_method(2) # does not print anything The point of using Concatenate here is that the first parameter of the decorated function must be an instance of Foo, which aligns with methods of Foo (for which the first parameter is self, an instance of Foo). The rest of the parameters of the decorated function can be anything at all, hence allowing *args and **kwargs mypy is failing the above code with the following: error: Argument 1 has incompatible type "Callable[[Foo, int], Any]"; expected "Callable[[Foo, VarArg(Any), KwArg(Any)], None]" [arg-type] note: This is likely because "some_conditional_method of Foo" has named arguments: "self". Consider marking them positional-only It's having an issue with the fact that at the call site, I'm not explicitly passing in an instance of Foo as the first argument (as I'm calling it as a method). Is there a way that I can type this strictly and correctly? Does the wrapper itself need to be defined within the class somehow so that it has access to self directly? Note that if line 5 is updated to Wrappable = Callable[P, None] then mypy passes, but this is not as strict as it could be, as I'm trying to enforce in the type that it can only be used on methods of Foo (or free functions which receive a Foo as their first parameter). Similarly, if I update some_conditional_method to be a free function rather than a method on Foo, then mypy also passes (this aligns with the linked SO question below). In this case it is achieving the strictness that I'm after, but I really want to be able to apply this to methods, not just free functions (in fact, it doesn't need to apply to free functions at all). This question is somewhat of an extension to Python 3 type hinting for decorator but has the nuanced difference of the decorator needing to be used in a method. To be clear, the difference between this and that question is that the following (as described in the linked question) works perfectly: @requires_flags(Flag.FLAG_1) def some_conditional_free_function(foo: Foo, some_int: int): print(f"Number given: {some_int}") some_conditional_free_function(Foo({Flag.FLAG_1}), 1) # prints "Number given: 1"
You have two major issues here, and more detailed warnings would have been given if you had strict on: Your type alias, Wrappable = Callable[Concatenate["Foo", P], None], has a type variable (here, the ParamSpec P), but you're not providing the type variable when you're using the alias. This means you've lost all signature information after decorating with @requires_flags(...). You can fix this by explicitly parameterising Wrappable with P upon usage: def requires_flags(*flags: Flag) -> Callable[[Wrappable[P]], Wrappable[P]]: def wrap(func: Wrappable[P]) -> Wrappable[P]: This is caught with either strict mode or the mypy configuration setting disallow_any_generics = True. The arguments to Concatenate consist of any number of positional-only parameters followed by a ParamSpec. The declaration def wrapped_f(foo, *args, **kwargs): ... >>> wrapped_f(foo=Foo(), some_int=1) # Works at runtime has now replaced some_conditional_method like so: class Foo: @requires_flags(Flag.FLAG_1) def some_conditional_method(self, some_int) -> None: ... >>> Foo.some_conditional_method(foo=Foo(), some_int=1) # Works at runtime This isn't great, especially if some_conditional_method is part of the public API. The type checker is warning you to write wrapped_f properly, by using a positional-only marker: def wrapped_f(foo: "Foo", /, *args: P.args, **kwargs: P.kwargs) -> None: ... >>> Foo.some_conditional_method(foo=Foo(), some_int=1) # Fails at runtime This is caught with either strict mode or the mypy configuration setting extra_checks = True. The remaining minor issue is that def some_conditional_method(self, some_int: int): isn't annotated with a return type; this is a source of typing bugs if you're going to be using inheritance. I suggest getting into the habit of writing -> None for functions which don't return anything. The fixed code can be re-run here: mypy-playground
4
4
77,101,192
2023-9-14
https://stackoverflow.com/questions/77101192/cannot-import-name-randn-tensor-from-diffusers-utils
I was using this autotrain Colab, and when I labelled my images, put them into the images folder and tried to run it, it gave the error below. How do I solve this?

To reproduce:

click the link of the ipynb
make a new folder named images
add some images and replace the prompt with something which describes your images
go to runtime and run all

ipynb link
This is happening due to a newer version of the diffusers library. At the very start, run pip install diffusers==0.20.2 and then execute the cells.
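In the Colab notebook that means pinning the version in a cell before anything imports diffusers — a minimal sketch (the pinned version is the one from above; restart the runtime if diffusers was already imported):

# first cell of the notebook
!pip install diffusers==0.20.2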
5
7
77,099,610
2023-9-13
https://stackoverflow.com/questions/77099610/polars-fill-null-using-rule-of-three-based-of-filtered-set
Goal I want to fill the nulls in a series by distributing the difference between the next non-null and previous non-null value. The distribution is not linear but uses the values in another column to calculate the portioning Example df = pl.DataFrame({ "id": ["a", "a", "a", "b", "b", "b", "b", "b"], "timestamp": ["2023-09-13 14:05:34", "2023-09-13 14:15:04", "2023-09-13 14:30:01", "2023-09-13 12:12:02", "2023-09-13 12:15:02", "2023-09-13 12:30:07", "2023-09-13 12:45:01", "2023-09-13 13:00:02"], "value": [10, None, 30, 5, 10, None, None, 40] }).with_columns( pl.col("timestamp").str.to_datetime(), ) shape: (8, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ timestamp ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════║ β”‚ a ┆ 2023-09-13 14:05:34 ┆ 10 β”‚ β”‚ a ┆ 2023-09-13 14:15:04 ┆ null β”‚ β”‚ a ┆ 2023-09-13 14:30:01 ┆ 30 β”‚ β”‚ b ┆ 2023-09-13 12:12:02 ┆ 5 β”‚ β”‚ b ┆ 2023-09-13 12:15:02 ┆ 10 β”‚ β”‚ b ┆ 2023-09-13 12:30:07 ┆ null β”‚ β”‚ b ┆ 2023-09-13 12:45:01 ┆ null β”‚ β”‚ b ┆ 2023-09-13 13:00:02 ┆ 40 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Expected output (with some intermediary columns to show how it is calculated) shape: (8, 9) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ timestamp ┆ value ┆ gap value ┆ gap time s ┆ gap proportion ┆ portion ┆ fill value ┆ final β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ datetime[ns] ┆ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════β•ͺ═══════════β•ͺ════════════β•ͺ════════════════β•ͺ═════════β•ͺ════════════β•ͺ═══════║ β”‚ a ┆ 2023-09-13 14:05:34 ┆ 10 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ 10.0 β”‚ β”‚ a ┆ 2023-09-13 14:15:04 ┆ null ┆ 20.0 ┆ 1467.0 ┆ 570.0 ┆ 7.77 ┆ 17.77 ┆ 17.77 β”‚ β”‚ a ┆ 2023-09-13 14:30:01 ┆ 30 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ 30.0 β”‚ β”‚ b ┆ 2023-09-13 12:12:02 ┆ 5 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ 5.0 β”‚ β”‚ b ┆ 2023-09-13 12:15:02 ┆ 10 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ 10.0 β”‚ β”‚ b ┆ 2023-09-13 12:30:07 ┆ null ┆ 30.0 ┆ 2700.0 ┆ 905.0 ┆ 10.06 ┆ 20.06 ┆ 20.06 β”‚ β”‚ b ┆ 2023-09-13 12:45:01 ┆ null ┆ 30.0 ┆ 2700.0 ┆ 1799.0 ┆ 19.99 ┆ 29.99 ┆ 29.99 β”‚ β”‚ b ┆ 2023-09-13 13:00:02 ┆ 40 ┆ null ┆ null ┆ null ┆ null ┆ null ┆ 40.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ How this calculation works I will take group 'b' as an example. There are 2 rows with nulls that need filling. the difference between the next and the previous value is 30 ( 40 - 10 ) the time difference between the next and previous value is 2700 seconds (13:00:02 - 12:15:02) for the first blank row, the time difference is 905 seconds (12:30:07 - 12:15:02 ). 
So this row gets the portion 30 * ( 905 / 2700 ) assigned (10.06) so when filling it the fill value is 10 + 10.06 the next blank row gets a portion of 30 * ( 1799 / 2700 ) (19.99) so it's fill value is 10 + 19.99 Thanks for the help. I am new to both Polars and Python so my SQL-primed mind is still wrapping around all this. Personally I feel it would be a great addition to the fill_null, to be able to use a rule of three using a different column to proportion Thanks
( df .join_asof( df .filter(pl.col('value').is_not_null()) .with_columns( gap_time=(pl.col('timestamp')-pl.col('timestamp').shift().over('id')) .dt.seconds(), prev_good_time=pl.col('timestamp').shift().over('id'), prev_good_value=pl.col('value').shift().over('id') ) .drop('value'), on='timestamp', by='id', strategy='forward' ) .with_columns( gap_value=pl.when(pl.col('value').is_null()) .then((pl.col('value')-((pl.col('value') .forward_fill().shift() ).over('id'))).backward_fill()), gap_time=pl.when(pl.col('value').is_null()) .then(pl.col('gap_time')), gap_proportion=pl.when(pl.col('value').is_null()) .then((pl.col('timestamp')-pl.col('prev_good_time')).dt.seconds()), ) .with_columns( portion=pl.col('gap_value')*(pl.col('gap_proportion')/pl.col('gap_time')) ) .with_columns( fill_value=pl.col('prev_good_value')+pl.col('portion') ) .select( 'id','timestamp', value=pl.when(pl.col('value').is_null()) .then(pl.col('fill_value')) .otherwise( pl.col('value') ) ) ) The first thing we do is do a join_asof to a filtered version of the original. That allows us to calculate the time between valid values as well as setting aside the most recent time that associated with a non-null value and the value itself. The asof part of the join means that it will join on a time based but rolls until it finds the next (or previous) matching time and then by some other equality column. You could nest most of the rest of the calcs without repeating yourself or using so many contexts but I left it really verbose so it's easy to deconstruct. The reason there are so many calls to with_columns is that you can't set and use a column in the same context so anytime you make a column that you want to use again, you've got to chain another context. Output (excluding intermediate columns) shape: (8, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ timestamp ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ datetime[ΞΌs] ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════════║ β”‚ a ┆ 2023-09-13 14:05:34 ┆ 10.0 β”‚ β”‚ a ┆ 2023-09-13 14:15:04 ┆ 17.770961 β”‚ β”‚ a ┆ 2023-09-13 14:30:01 ┆ 30.0 β”‚ β”‚ b ┆ 2023-09-13 12:12:02 ┆ 5.0 β”‚ β”‚ b ┆ 2023-09-13 12:15:02 ┆ 10.0 β”‚ β”‚ b ┆ 2023-09-13 12:30:07 ┆ 20.055556 β”‚ β”‚ b ┆ 2023-09-13 12:45:01 ┆ 29.988889 β”‚ β”‚ b ┆ 2023-09-13 13:00:02 ┆ 40.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Numpy can do it Here's a hacky (as if what's above isn't hacky) way to get numpy to do the work. 
finaldf=[] df=df.with_columns(pl.col('value').cast(pl.Float64)) for little_df in df.partition_by('id'): x=little_df.filter(pl.col('value').is_null()).select(pl.col('timestamp').to_physical()).to_numpy() xp,fp = little_df.filter(pl.col('value').is_not_null()).select('timestamp','value').to_numpy().transpose() finaldf.append( pl.concat([ little_df.filter(pl.col('value').is_not_null()).lazy(), little_df.filter(pl.col('value').is_null()).with_columns(value=pl.Series(np.interp(x, xp, fp).transpose()[0])).lazy() ]) ) finaldf=pl.concat(finaldf).sort(['id','timestamp']).collect() finaldf shape: (8, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ timestamp ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ datetime[ΞΌs] ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════════║ β”‚ a ┆ 2023-09-13 14:05:34 ┆ 10.0 β”‚ β”‚ a ┆ 2023-09-13 14:15:04 ┆ 17.770961 β”‚ β”‚ a ┆ 2023-09-13 14:30:01 ┆ 30.0 β”‚ β”‚ b ┆ 2023-09-13 12:12:02 ┆ 5.0 β”‚ β”‚ b ┆ 2023-09-13 12:15:02 ┆ 10.0 β”‚ β”‚ b ┆ 2023-09-13 12:30:07 ┆ 20.055556 β”‚ β”‚ b ┆ 2023-09-13 12:45:01 ┆ 29.988889 β”‚ β”‚ b ┆ 2023-09-13 13:00:02 ┆ 40.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Another more concise polars way On the first round I was fixated on reproducing all the same intermediate columns but if I just go straight for the answer we can do this... ( df.join_asof( df.filter(pl.col('value').is_not_null()) .with_columns( value_slope=(pl.col('value')-pl.col('value').shift().over('id'))/(pl.col('timestamp')-pl.col('timestamp').shift().over('id')), value_slope_since=pl.col('timestamp').shift(), value_base=pl.col('value').shift() ) .drop('value'), on='timestamp', by='id', strategy='forward' ) .select('id','timestamp',value=pl.coalesce(pl.col('value'), pl.col('value_base')+pl.col('value_slope')*(pl.col('timestamp')-pl.col('value_slope_since')))) ) An extensible function def interp(df, y_col, id_cols=None): if not isinstance(y_col, str): raise ValueError("y_col should be string") if isinstance(id_cols, str): id_cols=[id_cols] if id_cols is None: id_cols=['__dummyid'] df=df.with_columns(__dummyid=0) lf=df.select(id_cols + [y_col]).lazy() value_cols=[x for x in df.columns if x not in id_cols and x!=y_col] for value_col in value_cols: lf=lf.join( df.join_asof( df.filter(pl.col(value_col).is_not_null()) .select( *id_cols, y_col, __value_slope=(pl.col(value_col)-pl.col(value_col).shift().over(id_cols))/(pl.col(y_col)-pl.col(y_col).shift().over(id_cols)), __value_slope_since=pl.col(y_col).shift(), __value_base=pl.col(value_col).shift() ), on=y_col, by=id_cols, strategy='forward' ) .select( id_cols+ [y_col] + [pl.coalesce(pl.col(value_col), pl.coalesce(pl.col('__value_base'), pl.col('__value_base').shift(-1))+ pl.coalesce(pl.col('__value_slope'), pl.col('__value_slope').shift(-1))*(pl.col(y_col)- pl.coalesce(pl.col('__value_slope_since'), pl.col('__value_slope_since').shift(-1)))).alias(value_col)] ) .lazy(), on=[y_col]+id_cols ) if id_cols[0]=='__dummyid': lf=lf.select(pl.exclude('__dummyid')) return lf.collect() With this function you can just do interp(df, "timestamp", "id") where the first argument is the df, the second is your time or y column. The third optional parameter is if you have an id column(s) (it can take a list or a single string). It will infer that any columns in the df that weren't given to it as a time or id column are values and it will interpolate them. 
If you can monkey patch it to the pl.DataFrame you can use it as a dataframe method like this:

pl.DataFrame.interp = interp
df.interp('timestamp', 'id')
4
1
77,099,794
2023-9-13
https://stackoverflow.com/questions/77099794/why-cant-you-use-bitwise-with-numba-and-uint64
I have the following MWE: import numba as nb @nb.njit(nb.uint64(nb.uint64)) def popcount(x): b=0 while(x > 0): x &= x - 1 b+=1 return b print(popcount(43)) It fails with: numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in function iand>) found for signature: >>> iand(float64, float64) There are 8 candidate implementations: - Of which 4 did not match due to: Overload of function 'iand': File: <numerous>: Line N/A. With argument(s): '(float64, float64)': No match. - Of which 2 did not match due to: Operator Overload in function 'iand': File: unknown: Line unknown. With argument(s): '(float64, float64)': No match for registered cases: * (bool, bool) -> bool * (int64, int64) -> int64 * (int64, uint64) -> int64 * (uint64, int64) -> int64 * (uint64, uint64) -> uint64 - Of which 2 did not match due to: Overload in function 'gen_operator_impl.<locals>._ol_set_operator': File: numba/cpython/setobj.py: Line 1508. With argument(s): '(float64, float64)': Rejected as the implementation raised a specific error: TypingError: All arguments must be Sets, got (float64, float64) raised from /home/user/python/mypython3.10/lib/python3.10/site-packages/numba/cpython/setobj.py:108 During: typing of intrinsic-call at /home/user/python/popcount.py (7) File "popcount.py", line 7: def popcount(x): <source elided> while(x > 0): x &= x - 1 ^ What is wrong with using uint64 for this? The code fails with the same message even if I use: print(popcount(nb.uint64(43))
At first, I thought this was NumPy uint64 awkwardness. Turns out it's slightly different Numba uint64 awkwardness. By NumPy dtype rules, a standard Python int is handled as numpy.int_ dtype, which is signed. There's no integer dtype big enough to hold all values of both uint64 dtype and a signed dtype, so in mixed uint64/int operations, NumPy converts both operands to float64! You can't use & with floating-point dtypes, so that's where the error would come from with NumPy type handling. It turns out Numba uses different type handling, though. Under Numba rules, an operation on a uint64 and a signed integer produces int64, not float64. But then the assignment: x &= x - 1 tries to assign an int64 value to a variable that initially held a uint64 value. This is the part where Numba gets awkward. By Numba type inference rules, A type variable holds the type of each variable (in the Numba IR). Conceptually, it is initialized to the universal type and, as it is re-assigned, it stores a common type by unifying the new type with the existing type. The common type must be able to represent values of the new type and the existing type. Type conversion is applied as necessary and precision loss is accepted for usability reason. Rather than converting to uint64 or keeping int64, the Numba compiler tries to unify uint64 and int64 to find a new type to use. This is the part where Numba accepts precision loss and converts this version of the x variable to float64. Then you get the error about & not working on float64.
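In practice this also points at the narrow fix: keep the subtraction in uint64 so the unification down to float64 never happens. A minimal sketch (wrapping the literal with nb.uint64, mirroring the MWE from the related popcount-speed question; this is an illustration of the rule above rather than the only possible workaround):

import numba as nb

@nb.njit(nb.uint64(nb.uint64))
def popcount(x):
    b = 0
    while x > 0:
        # x - nb.uint64(1) is a uint64/uint64 operation, so the result stays
        # uint64 and the &= no longer forces a unification to float64
        x &= x - nb.uint64(1)
        b += 1
    return b

print(popcount(43))  # 4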
2
4
77,096,912
2023-9-13
https://stackoverflow.com/questions/77096912/python-polars-elegant-way-to-add-a-month-to-a-date
I need to carry out a very simple operation in Polars, but the documentation and examples I have been finding are super convoluted. I simply have a date, and I would like to create a range running from the first day in the following month until the first day of a month twelve months later.

I have a date:

date = 2023-01-15

I want to find these two dates:

range_start = 2023-02-01
range_end = 2024-02-01

How is this done in Polars?

In datetime in Python

from datetime import datetime
from dateutil import relativedelta

my_date = datetime.fromisoformat("2023-01-15")
start = my_date.replace(day=1) + relativedelta.relativedelta(months=1)
end = start + relativedelta.relativedelta(months=12)

Polars?

# The format of my date
import polars as pl
my_date_pl = pl.lit(datetime.fromisoformat("2023-01-15"))
????
You can use dt.offset_by. For example:

pl.select(pl.lit(datetime.fromisoformat("2023-01-15")).dt.offset_by("1mo")).item()

It'll produce an error if the next month doesn't have the same date, for example Jan 31 + 1 month will error. To avoid that error you can suffix "_saturating" to the offset like this:

pl.select(pl.lit(datetime.fromisoformat("2023-01-31")).dt.offset_by("1mo_saturating")).item()
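For the exact range asked about (first day of the following month, then twelve months later), one way — sketched here as an addition, assuming dt.truncate is available in your Polars version — is to truncate to the month start before offsetting, which also sidesteps the saturating issue because the day is already 1:

import polars as pl
from datetime import datetime

my_date = pl.lit(datetime.fromisoformat("2023-01-15"))

range_start = my_date.dt.truncate("1mo").dt.offset_by("1mo")  # 2023-02-01
range_end = range_start.dt.offset_by("12mo")                  # 2024-02-01

print(pl.select(range_start.alias("range_start"), range_end.alias("range_end")))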
2
7
77,093,787
2023-9-13
https://stackoverflow.com/questions/77093787/pandas-how-to-flag-rows-between-a-start-1-and-multiple-ends-2-or-3
I have the following dataframe: import numpy as np import pandas as pd df = pd.DataFrame([]) df['Date'] = ['2020-01-01','2020-01-02','2020-01-03','2020-01-04','2020-01-05', '2020-01-06','2020-01-07','2020-01-08','2020-01-09','2020-01-10', '2020-01-11','2020-01-12','2020-01-13','2020-01-14','2020-01-15', '2020-01-16','2020-01-17','2020-01-18','2020-01-19','2020-01-20'] df['Machine'] = ['A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A'] df['Signal'] = [0,1,2,0,1,3,0,0,0,3,0,1,0,0,3,0,1,0,0,1] df['Status'] = 0 And the following function which generates a 'Status' column for the machine A. In the Signal col, 1 switches the machine on (Status col 1) which remains 1 until the machine receives either 2 or 3 which are signals to switch the machine status to 0 (off) until the machine receives Signal 1 again. I've solved the issue of maintaining the previous Status row value of 1 or 0 with the below function: def s_gen(dataset, Signal): _status = 0 status0 = [] for (i) in Signal: if _status == 0: if i == 1: _status = 1 elif _status == 1: if (i == 2 or i==3): _status = 0 status0.append(_status) dataset['status0'] = status0 return dataset['status0'] df['Status'] = s_gen(df,df['Signal']) df.drop('status0',axis=1,inplace = True) df This appends the newly created column to the dataframe. However I have a larger dataframe with many different values in the Machine column (grouped as series; A,A,A,B,B,B etc) and the results of the function cannot overlap. Using groupby didn't work. So I think the next step is to produce each sequence of 'Status' as a separate list and concatenate them before appending the whole series to the larger dataframe as part of a larger outer loop. This is the desired outcome: df = pd.DataFrame([]) df['Date'] = ['2020-01-01','2020-01-02','2020-01-03','2020-01-04','2020-01-05', '2020-01-06','2020-01-07','2020-01-08','2020-01-09','2020-01-10', '2020-01-11','2020-01-12','2020-01-13','2020-01-14','2020-01-15', '2020-01-16','2020-01-17','2020-01-18','2020-01-19','2020-01-20', '2020-01-01','2020-01-02','2020-01-03','2020-01-04','2020-01-05', '2020-01-06','2020-01-07','2020-01-08','2020-01-09','2020-01-10', '2020-01-11','2020-01-12','2020-01-13','2020-01-14','2020-01-15', '2020-01-16','2020-01-17','2020-01-18','2020-01-19','2020-01-20'] df['Machine'] = ['A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A','A', 'B','B','B','B','B','B','B','B','B','B','B','B','B','B','B','B','B','B','B','B',] df['Signal'] = [0,1,2,0,1,3,0,0,0,3,0,1,0,0,3,0,1,0,0,1,0,1,2,0,1,3,0,0,0,3,0,1,0,0,3,0,1,0,0,1] df['Status'] = [0,1,0,0,1,0,0,0,0,0,0,1,1,1,0,0,1,1,1,1,0,1,0,0,1,0,0,0,0,0,0,1,1,1,0,0,1,1,1,1] df What I'm struggling with is, if the function processes each machine's data separately then appends it to the dataframe, it would have to loop through each machine, then concatenate all the Status series produced, then append that larger series to the dataframe. 
This is what I've tried so far: dfList = df[df['Machine']] dfListU = pd.DataFrame([]) dfListU = dfList['Machine'].unique() dfListU.flatten() def s_gen2(item, dataset, Signal): data = df[df.Machine==m] for m in dfListU: _status = 0 status0 = [] for (i) in Signal: if _status == 0: if i == 1: _status = 1 elif _status == 1: if (i == 2 or i==3): _status = 0 #status0.append(_status) dataset['status0'] = status0 return dataset['status0'] for i in dfListU: df1 = pd.concat(i) status0.append(_status) df['Status'] = s_gen(df,df['Signal']) df.drop('status0',axis=1,inplace = True) df Which results in the error - KeyError: "None of [Index(['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A',\n 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B',\n 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'],\n dtype='object')] are in the [columns]" Is is better to loop the function through the dfListU (list of unique machines) then concatenate the outcome? I've tried to avoid using loops but couldn't find any other way to compare the previous status row with the same row in the Signal column. Any help is sincerely appreciated.
A simple approach would be to map the known statuses, then to groupby.ffill them: df['Status'] = (df['Signal'] .map({1:1, 2:0, 3:0}) .groupby(df['Machine']).ffill() .fillna(0, downcast='infer') ) Output: Date Machine Signal Status 0 2020-01-01 A 0 0 1 2020-01-02 A 1 1 2 2020-01-03 A 2 0 3 2020-01-04 A 0 0 4 2020-01-05 A 1 1 5 2020-01-06 A 3 0 6 2020-01-07 A 0 0 7 2020-01-08 A 0 0 8 2020-01-09 A 0 0 9 2020-01-10 A 3 0 10 2020-01-11 A 0 0 11 2020-01-12 A 1 1 12 2020-01-13 A 0 1 13 2020-01-14 A 0 1 14 2020-01-15 A 3 0 15 2020-01-16 A 0 0 16 2020-01-17 A 1 1 17 2020-01-18 A 0 1 18 2020-01-19 A 0 1 19 2020-01-20 A 1 1 20 2020-01-01 B 0 0 21 2020-01-02 B 1 1 22 2020-01-03 B 2 0 23 2020-01-04 B 0 0 24 2020-01-05 B 1 1 25 2020-01-06 B 3 0 26 2020-01-07 B 0 0 27 2020-01-08 B 0 0 28 2020-01-09 B 0 0 29 2020-01-10 B 3 0 30 2020-01-11 B 0 0 31 2020-01-12 B 1 1 32 2020-01-13 B 0 1 33 2020-01-14 B 0 1 34 2020-01-15 B 3 0 35 2020-01-16 B 0 0 36 2020-01-17 B 1 1 37 2020-01-18 B 0 1 38 2020-01-19 B 0 1 39 2020-01-20 B 1 1
4
7
77,069,773
2023-9-8
https://stackoverflow.com/questions/77069773/how-can-i-install-ipython-in-debian-12-or-ubuntu-23-04-where-pip3-prevents-insta
python3 is a system wide program, just as pip3 is. I want to install IPython on Debian 12 (Bookworm). (This information is also relevant to newer Ubuntu versions, since these are derived directly from Debian and contain the same policy change.) I would probably expect this to also be a system-wide available program, just like python3 and pip3. Please correct me if that no longer makes sense, given the recent changes which prevent (by default) users from installing pip3 packages system wide, instead encouraging the use of venvs. Previously I would have run pip3 install ipython. What should I now do instead? Error message when attempting to run pip3 install ipython. error: externally-managed-environment Γ— This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification.
The actual solution I used, thanks to others for directing me to venv:

python3 -m venv .venv
source .venv/bin/activate   # do this every time to use the venv created above
pip3 install ipython

FYI for convenience one can also do

ln -s .venv/bin/activate .
. activate
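Since IPython is mostly run as a standalone command, the pipx route that the error message itself suggests is another reasonable option — a minimal sketch, assuming Debian 12's packaged pipx:

sudo apt install pipx
pipx ensurepath          # makes sure ~/.local/bin is on PATH (open a new shell afterwards)
pipx install ipython     # creates a dedicated venv and exposes the ipython command

ipython

This keeps IPython available for your user everywhere without having to activate a venv first.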
2
2
77,091,944
2023-9-12
https://stackoverflow.com/questions/77091944/polars-flatten-rows-into-columns-aggregating-by-column-values
I'm trying to write a script in Polars that would flatten a list of prices per date and minute. The catch is I want to incrementally aggregate into columns and zero out values in the future. For example. Idea is to make this solution vectorized if possible to make it performant. df = pl.DataFrame({ "date": ["2022-01-01", "2022-01-01", "2022-01-02", "2022-01-02", "2022-01-02", "2022-01-03", "2022-01-03", "2022-01-03"], "minute": [1, 2, 1, 2, 3, 1, 2, 3], "price": [10, 20, 15, 10, 20, 30, 60, 70] }) Should build the following dataframe. shape: (8, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ minute ┆ 1_price ┆ 2_price ┆ 3_price β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═════════β•ͺ═════════β•ͺ═══════════║ β”‚ 2022-01-01 ┆ 1 ┆ 10 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-01 ┆ 2 ┆ 10 ┆ 20 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 1 ┆ 15 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 2 ┆ 15 ┆ 10 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 3 ┆ 15 ┆ 10 ┆ 20 β”‚ β”‚ 2022-01-03 ┆ 1 ┆ 30 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-03 ┆ 2 ┆ 30 ┆ 60 ┆ 0 β”‚ β”‚ 2022-01-03 ┆ 3 ┆ 30 ┆ 60 ┆ 70 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
This seems to work df.join( df.pivot('minute', index='date'), on='date') \ .select("date", "minute", **{f"{x}_price":pl.when(pl.lit(x)<=pl.col('minute')) .then(pl.col(f"{x}")) .otherwise(0) for x in df['minute'].unique().sort()}) shape: (8, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ minute ┆ 1_price ┆ 2_price ┆ 3_price β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═════════β•ͺ═════════β•ͺ═════════║ β”‚ 2022-01-01 ┆ 1 ┆ 10 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-01 ┆ 2 ┆ 10 ┆ 20 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 1 ┆ 15 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 2 ┆ 15 ┆ 10 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 3 ┆ 15 ┆ 10 ┆ 20 β”‚ β”‚ 2022-01-03 ┆ 1 ┆ 30 ┆ 0 ┆ 0 β”‚ β”‚ 2022-01-03 ┆ 2 ┆ 30 ┆ 60 ┆ 0 β”‚ β”‚ 2022-01-03 ┆ 3 ┆ 30 ┆ 60 ┆ 70 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ What your example shows is more than just flattening but it seems what you're looking for is that the x_price column should be 0 whenever the x is greater than whatever is in the minute column. That's what this does through a pivot, self join, and when/then/otherwise.
4
0
77,087,197
2023-9-12
https://stackoverflow.com/questions/77087197/is-there-a-way-to-group-by-in-polars-while-keeping-other-columns
I am currently trying to achieve a polars group_by while keeping other columns than the ones in the group_by function. Here is an example of an input data frame that I have. df = pl.from_repr(""" β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ SRC ┆ TGT ┆ IT ┆ Cd β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 1 ┆ 2 ┆ 3.0 β”‚ β”‚ 2 ┆ 1 ┆ 2 ┆ 4.0 β”‚ β”‚ 3 ┆ 1 ┆ 2 ┆ 3.0 β”‚ β”‚ 3 ┆ 2 ┆ 1 ┆ 8.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ """) I want to group by ['TGT', 'IT'] using min('Cd'), which is the following code : df.group_by('TGT', 'IT').agg(pl.col('Cd').min()) With this code line, I obtain the following dataframe. β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ TGT ┆ IT ┆ Cd β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 2 ┆ 3.0 β”‚ β”‚ 2 ┆ 1 ┆ 8.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ And here is the dataframe I would rather want β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ SRC ┆ TGT ┆ IT ┆ Cd β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ 1 ┆ 2 ┆ 3.0 β”‚ β”‚ 3 ┆ 2 ┆ 1 ┆ 8.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ I thing I could achieve this by joining the first dataframe on the grouped one using ['TGT', 'IT', 'Cd'], and then delete the doubled rows, as I only want one (and any) 'SRC' for each ('TGT', 'IT') couple. But I wanted to know if there is a more straightforward way to do it, especially by keeping the 'SRC' column during the group_by Thanks by advance
# Your data
data = {
    "SRC": [1, 2, 3, 3],
    "TGT": [1, 1, 1, 2],
    "IT": [2, 2, 2, 1],
    "Cd": [3.0, 4.0, 3.0, 8.0]
}
df = pl.DataFrame(data)

# Perform the group_by and aggregation
result = (
    df.group_by('TGT', 'IT', maintain_order=True)
    .agg(
        pl.col('SRC').first(),
        pl.col('Cd').min()
    )
    .select('SRC', 'TGT', 'IT', 'Cd')  # to reorder columns
)

print(result)
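If instead of "any" SRC you want the SRC belonging to the row with the minimal Cd, a small variation (an extra sketch, not part of the original answer) is to sort SRC by Cd inside the aggregation:

result = (
    df.group_by('TGT', 'IT', maintain_order=True)
    .agg(
        pl.col('SRC').sort_by('Cd').first(),  # SRC of the row with the smallest Cd
        pl.col('Cd').min()
    )
    .select('SRC', 'TGT', 'IT', 'Cd')
)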
3
5
77,071,244
2023-9-9
https://stackoverflow.com/questions/77071244/python-polars-calculate-rolling-mode-over-multiple-columns
I have a polars.DataFrame like: data = pl.DataFrame({ "col1": [3, 2, 4, 7, 1, 10, 7], "col2": [3, 4, None, 1, None, 1, 9], "col3": [3, 1, None, None, None, None, 4], "col4": [None, 5, None, None, None, None, None], "col5": [None, None, None, None, None, None, None]}) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 ┆ col4 ┆ col5 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ f32 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ══════║ β”‚ 3 ┆ 3 ┆ 3 ┆ null ┆ null β”‚ β”‚ 2 ┆ 4 ┆ 1 ┆ 5 ┆ null β”‚ β”‚ 4 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 7 ┆ 1 ┆ null ┆ null ┆ null β”‚ β”‚ 1 ┆ null ┆ null ┆ null ┆ null β”‚ β”‚ 10 ┆ 1 ┆ null ┆ null ┆ null β”‚ β”‚ 7 ┆ 9 ┆ 4 ┆ null ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ I want to create a new column that contains the rolling mode - but not based on one column and the respective row values within the window but on row values of all columns within the window. The nulls should be dropped and shouldn't appear in the resulting columns as a mode value. edit: I made some changes to the example data provided. For further clarifications and under the assumption of something like polars.rolling_apply(<function>, window_size=2, min_periods=1, center=False) I would expect the following result: β”Œβ”€β”€β”€β”€β”€β”€β” β”‚ res β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•β•‘ β”‚ 3 β”‚ β”‚ 3 β”‚ β”‚ 4 β”‚ β”‚ None β”‚ <- all values different β”‚ 1 β”‚ β”‚ 1 β”‚ β”‚ None β”‚ <- all values different β””β”€β”€β”€β”€β”€β”€β”˜ In case there is no mode None as a result would be fine. Only the missing value in the original polars.DataFrame should be ignored.
.rolling() can be used to aggregate over the windows. Using .concat_list() inside .agg() will give us a nested list, e.g. [[col1, col2, ...], [col1, col2, ...]] Which we can flatten, remove nulls, and calculate the mode. .flatten() .drop_nulls() .mode() (df.with_row_index() .rolling( index_column = "index", period = "2i" ) .agg( pl.concat_list(pl.exclude("index")).flatten().drop_nulls().mode() .alias("mode") ) # .with_columns( # pl.when(pl.col("mode").list.len() == 1) # .then(pl.col("mode").list.first()) # ) ) shape: (7, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ mode β”‚ β”‚ --- ┆ --- β”‚ β”‚ u32 ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════════════════║ β”‚ 0 ┆ [3] β”‚ β”‚ 1 ┆ [3] β”‚ β”‚ 2 ┆ [4] β”‚ β”‚ 3 ┆ [1, 7, 4] β”‚ β”‚ 4 ┆ [1] β”‚ β”‚ 5 ┆ [1] β”‚ β”‚ 6 ┆ [10, 7, 9, 1, 4] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The commented out lines deal with discarding ties and getting rid of the list. shape: (7, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ mode β”‚ β”‚ --- ┆ --- β”‚ β”‚ u32 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════║ β”‚ 0 ┆ 3 β”‚ β”‚ 1 ┆ 3 β”‚ β”‚ 2 ┆ 4 β”‚ β”‚ 3 ┆ null β”‚ β”‚ 4 ┆ 1 β”‚ β”‚ 5 ┆ 1 β”‚ β”‚ 6 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜
4
3
77,059,630
2023-9-7
https://stackoverflow.com/questions/77059630/python-polars-conditional-join-by-date-range
First of all, there seem to be some similar questions answered already. However, I couldn't find this specific case, where the conditional columns are also part of the join columns: I have two dataframes: df1 = pl.DataFrame({"timestamp": ['2023-01-01 00:00:00', '2023-05-01 00:00:00', '2023-10-01 00:00:00'], "value": [2, 5, 9]}) df1 = df1.with_columns( pl.col("timestamp").str.to_datetime().alias("timestamp"), ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ value β”‚ β”‚ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════║ β”‚ 2023-01-01 00:00:00 ┆ 2 β”‚ β”‚ 2023-05-01 00:00:00 ┆ 5 β”‚ β”‚ 2023-10-01 00:00:00 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ df2 = pl.DataFrame({"date_start": ['2022-12-31 00:00:00', '2023-01-02 00:00:00'], "date_end": ['2023-04-30 00:00:00', '2023-05-05 00:00:00'], "label": [0, 1]}) df2 = df2.with_columns( pl.col("date_start").str.to_datetime().alias("date_start"), pl.col("date_end").str.to_datetime().alias("date_end"), ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ date_start ┆ date_end ┆ label β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════║ β”‚ 2022-12-31 00:00:00 ┆ 2023-04-30 00:00:00 ┆ 0 β”‚ β”‚ 2023-01-02 00:00:00 ┆ 2023-05-05 00:00:00 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I want to join label of the second polars.Dataframe (df2) onto the first polars.Dataframe (df1) - but only when the column value of timestamp (polars.Datetime) is within the date ranges given in date_start and date_end, respectively. Since I basically want a left join on df1, the column label should be None when the column value of timestamp isn't at all covered by df2. The tricky part for me is, that there isn't an actual on for df2 since its a range of dates.
.join_where() was added in Polars 1.7.0 (df1 .join_where(df2, pl.col.timestamp >= pl.col.date_start, pl.col.timestamp <= pl.col.date_end ) ) shape: (2, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ timestamp ┆ value ┆ date_start ┆ date_end ┆ label β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ datetime[ΞΌs] ┆ i64 ┆ datetime[ΞΌs] ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════════════β•ͺ═════════════════════β•ͺ═══════║ β”‚ 2023-05-01 00:00:00 ┆ 5 ┆ 2023-01-02 00:00:00 ┆ 2023-05-05 00:00:00 ┆ 1 β”‚ β”‚ 2023-01-01 00:00:00 ┆ 2 ┆ 2022-12-31 00:00:00 ┆ 2023-04-30 00:00:00 ┆ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ It currently supports INNER JOIN - so an additional LEFT JOIN is required in thie case. (df1 .with_row_index() .join( df1 .with_row_index() .join_where(df2, pl.col.timestamp >= pl.col.date_start, pl.col.timestamp <= pl.col.date_end ) .select("index", "label"), on = "index", how = "left" ) ) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ index ┆ timestamp ┆ value ┆ label β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ u32 ┆ datetime[ΞΌs] ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═════════════════════β•ͺ═══════β•ͺ═══════║ β”‚ 0 ┆ 2023-01-01 00:00:00 ┆ 2 ┆ 0 β”‚ β”‚ 1 ┆ 2023-05-01 00:00:00 ┆ 5 ┆ 1 β”‚ β”‚ 2 ┆ 2023-10-01 00:00:00 ┆ 9 ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Additional join types are being tracked here: https://github.com/pola-rs/polars/issues/18669
3
1
77,090,789
2023-9-12
https://stackoverflow.com/questions/77090789/a-problem-with-building-scatterplot-using-dates-and-int-values
import pandas as pd import seaborn as sn import matplotlib.pyplot as plt from datetime import datetime import numpy as np path = r'C:\Users\bossd\OneDrive\Π”ΠΎΠΊΡƒΠΌΠ΅Π½Ρ‚Ρ‹\datarn.csv' df = pd.read_csv(path) path2 = r'C:\Users\bossd\OneDrive\Π”ΠΎΠΊΡƒΠΌΠ΅Π½Ρ‚Ρ‹\pipirka.csv' df2 = pd.read_csv(path2) x = (df2.loc[df2['timestamp'].str.startswith('2015')]) y = df2['cnt'] plt.scatter(x,y) plt.show() I wanted to build a scatterplot using dates that contain '2015' as x axis and 'cnt' parameter that means bicycles rent this day. But after running the code i get this error Cell In[47], line 14 12 x = (df2.loc[df2['timestamp'].str.startswith('2015')]) 13 y = df2['cnt'] ---> 14 plt.scatter(x,y) 15 plt.show() 17 display(df2) ... File ~\venv\lib\site-packages\matplotlib\category.py:214, in UnitData.update(self, data) 212 # check if convertible to number: 213 convertible = True --> 214 for val in OrderedDict.fromkeys(data): 215 # OrderedDict just iterates over unique values in data. 216 _api.check_isinstance((str, bytes), value=val) 217 if convertible: 218 # this will only be called so long as convertible is True. TypeError: unhashable type: 'numpy.ndarray' The dataframe looks like this and containts timestamp as the day and cnt as the amount of bicycles rent this day data = {'timestamp': ['2015-01-04', '2015-01-05', '2015-01-06', '2015-01-07', '2015-01-08', '2016-12-27', '2016-12-28', '2016-12-29', '2016-12-30', '2016-12-31'], 'cnt': [9234, 20372, 20613, 21064, 15601, 10842, 12428, 14052, 11566, 11424]} df2 = pd.DataFrame(data) timestamp cnt 0 2015-01-04 9234 1 2015-01-05 20372 2 2015-01-06 20613 3 2015-01-07 21064 4 2015-01-08 15601 5 2016-12-27 10842 6 2016-12-28 12428 7 2016-12-29 14052 8 2016-12-30 11566 9 2016-12-31 11424
The 'timestamp' column should first be converted to a datetime dtype with pd.to_datetime, otherwise the datetime x-ticks will not be correctly positioned and formatted. The typical process should begin with cleaning the data, and then selecting.

x = (df2.loc[df2['timestamp'].str.startswith('2015')]) is the cause of the error, because it selects the entire dataframe, not a single column of the dataframe. And df2['cnt'] is not selected for the desired year.

pandas.DataFrame.plot uses matplotlib as the default plotting backend, and should be used for plotting the dataframe.

# load the sample dataframe from the OP

# convert timestamp to a datetime dtype
df2.timestamp = pd.to_datetime(df2.timestamp)

# select the data by year
df_2015 = df2[df2.timestamp.dt.year.eq(2015)]

# directly plot the dataframe, which uses matplotlib as the back end
ax = df_2015.plot(x='timestamp', marker='.', ls='')

The x-ticks and labels will be formatted depending on the range of the data, which can be changed with the following answers:
Changing the formatting of a datetime axis
How to change the datetime tick label frequency
The x-axis limits can be set as shown in: How to set xlim and xticks after plotting time-series data
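If you prefer to keep the plt.scatter call from the question, the same cleaned and filtered data works there as well — a short sketch reusing the frame built above:

import matplotlib.pyplot as plt
import pandas as pd

df2.timestamp = pd.to_datetime(df2.timestamp)
df_2015 = df2[df2.timestamp.dt.year.eq(2015)]

# x and y must be individual columns (Series), not a whole DataFrame
plt.scatter(df_2015['timestamp'], df_2015['cnt'])
plt.show()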
2
2
77,071,473
2023-9-9
https://stackoverflow.com/questions/77071473/where-can-i-import-dataclassinstance-for-mypy-check
I have been using a custom-defined DataclassProtocol to annotate the argument of a function which takes a dataclass type. It was something like this:

import dataclasses
from typing import Dict, Protocol, Type


class DataclassProtocol(Protocol):
    """Type annotation for dataclass type object."""
    # https://stackoverflow.com/a/55240861/11501976
    __dataclass_fields__: Dict


def f(dcls: Type[DataclassProtocol]):
    return dataclasses.fields(dcls)

But a recent mypy check fails with the message:

error: Argument 1 to "fields" has incompatible type "type[DataclassProtocol]"; expected "DataclassInstance | type[DataclassInstance]"  [arg-type]

It seems I should now annotate with this DataclassInstance, but I can't find out where I can import it from. Where can I find it?
You may import it from _typeshed:

from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import DataclassInstance

Note that types from _typeshed do not exist at runtime. You may read more about them here:

_typeshed
_typeshed.DataclassInstance
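Applied to the function from the question, that could look like the sketch below — with postponed evaluation of annotations the name only needs to exist while type checking, so nothing is imported at runtime:

from __future__ import annotations

import dataclasses
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import DataclassInstance


def f(dcls: type[DataclassInstance]):
    return dataclasses.fields(dcls)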
7
13
77,079,524
2023-9-11
https://stackoverflow.com/questions/77079524/how-to-expect-count-bigger-or-smaller-than
Using Playwright and Python, how can I assert that a count is bigger or smaller than some value? For example, this code expects a count of exactly 2. How do I achieve count >= 2 (only bigger)?

expect(self.page.locator('MyLocator')).to_have_count(2, timeout=20 * 1000)
This doesn't seem to be possible in the current Python Playwright API, but you could use wait_for_function as a workaround:

page.wait_for_function("document.querySelectorAll('.foo').length > 2")

This is web-first and will wait for the predicate, but the error message once it throws won't be as clear as the expect failure.

If the count is immediately available and you don't need to wait for the predicate to be true, assert is useful to mention as another possibility:

assert page.locator(".foo").count() > 2

If you're using unittest, you can replace assert with, for example:

self.assertGreaterEqual(page.locator(".foo").count(), 2)
self.assertGreater(page.locator(".foo").count(), 2)  # or

Yet another workaround if you're dealing only with small numbers of elements:

loc = page.locator(".foo")
expect(loc).not_to_have_count(0)
expect(loc).not_to_have_count(1)
expect(loc).not_to_have_count(2)

Here, we ensure the count is greater than 2 by process of elimination. It's impossible for the count to be less than 0, so we need not include that.

You could do this for less than as well, but not as easily, using a reasonable assumption that there'll never be more than, say, 20 or 50 elements in any conceivable run:

loc = page.locator(".foo")
for i in range(2, 20):
    expect(loc).not_to_have_count(i)

This ensures the count is 0 or 1, or greater than some reasonably high upper bound.
4
2
77,086,128
2023-9-12
https://stackoverflow.com/questions/77086128/how-to-pass-worker-options-parameters-in-gunicorn
I am running an app which needs uvicorn's asyncio loop; by default it uses auto and sometimes it randomly assigns uvloop, which breaks the behavior. So I use the following command uvicorn myapp.server.api:app --loop asyncio --port 7474 This forces uvicorn to use the asyncio loop. This works as expected. Now I am trying to move these changes to gunicorn with uvicorn as the worker, but I couldn't find a way to pass this loop option to uvicorn. gunicorn myapp.server.api:app -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:7474 But this ends up using the default value, i.e. auto, and ends up selecting uvloop as the loop type. How can I force it to use the asyncio loop? Help is appreciated.
The problem is gunicorn does not support passing options directly to uvicorn workers. So, basically, you can create a custom UvicornWorker class where you override the default loop policy. Here's an example: # myapp/server/custom_worker.py from uvicorn.workers import UvicornWorker class CustomUvicornWorker(UvicornWorker): CONFIG_KWARGS = {"loop": "asyncio"} Then: gunicorn myapp.server.api:app -k myapp.server.custom_worker.CustomUvicornWorker --bind 0.0.0.0:7474
4
5
77,076,597
2023-9-10
https://stackoverflow.com/questions/77076597/is-it-possible-to-get-pydantic-v2-to-dump-json-with-sorted-keys
In the pydantic v1 there was an option to add kwargs which would get passed to json.dumps via **dumps_kwargs. However, in pydantic v2 if you try to add extra kwargs to BaseModel.json() it fails with the error TypeError: `dumps_kwargs` keyword arguments are no longer supported. Here is example code with a workaround using dict()/model_dump(). This is good enough as long as the types are simple, but it won't work for the more complex data types that pydantic knows how to serialize. Is there a way to get sort_keys to work in pydantic v2 in general? import json from pydantic import BaseModel class JsonTest(BaseModel): b_field: int a_field: str obj = JsonTest(b_field=1, a_field="one") # this worked in pydantic v1 but raises a TypeError in v2 # print(obj.json(sort_keys=True) print(obj.model_dump_json()) # {"b_field":1,"a_field":"one"} # workaround for simple objects print(json.dumps(obj.model_dump(), sort_keys=True)) # {"a_field": "one", "b_field": 1}
I'm not sure whether it is an elegant solution but you could leverage the fact that dictionaries (since python 3.7) preserve an order of elements: from typing import Any, Dict from pydantic import BaseModel, model_serializer class JsonTest(BaseModel): b_field: int c_field: int a_field: str @model_serializer(when_used='json') def sort_model(self) -> Dict[str, Any]: return dict(sorted(self.model_dump().items())) obj = JsonTest(b_field=1, a_field="one", c_field=0) print(obj.model_dump_json()) # {"a_field":"one","b_field":1,"c_field":0}
6
6
77,052,622
2023-9-6
https://stackoverflow.com/questions/77052622/memory-issue-creating-bigrams-and-trigrams-with-countvectorizer
I am trying to create a document term matrix using CountVectorizer to extract bigrams and trigrams from a corpus. from sklearn.feature_extraction.text import CountVectorizer lemmatized = dat_clean['lemmatized'] c_vec = CountVectorizer(ngram_range=(2,3), lowercase = False) ngrams = c_vec.fit_transform(lemmatized) count_values = ngrams.toarray().sum(axis=0) vocab = c_vec.vocabulary_ df_ngram = pd.DataFrame(sorted([(count_values[i],k) for k,i in vocab.items()], reverse=True) ).rename(columns={0: 'frequency', 1:'bigram/trigram'}) I keep getting the following error: MemoryError: Unable to allocate 7.89 TiB for an array with shape (84891, 12780210) and data type int64 While I have some experience with Python, I am pretty new to dealing with text data. I was wondering if there was a more memory efficient way to address this issue. I'm not sure if it is helpful to know, but the ngrams object is a scipy.sparse._csr.csr_matrix.
Solution: Here is one way to get the final table you're looking for, with frequency and bigram/trigram, without generating the entire document term matrix. We can take the sum of the sparse matrix and use that to create a dataframe. This removes the need to create space in RAM for all of those missing values. # Here we create columns as vocabulary terms and a single row value as count of all terms. # We transpose that to make it an index and a single column data = ngrams.sum(axis=0) # vocabulary_ maps term -> column index, and its key order is not the column order, so sort the terms by index to line them up with the summed counts keys = sorted(c_vec.vocabulary_, key=c_vec.vocabulary_.get) df_ngram = pd.DataFrame(data, columns=keys).T # Get the count to its own column and rename all columns df_ngram.index.name = 'bigram/trigram' df_ngram.rename({0: 'count'}, inplace=True, axis=1) df_ngram.reset_index(inplace=True) # Calculate frequency of each term df_ngram['frequency'] = (df_ngram['count'] / df_ngram['count'].sum()) df_ngram.sort_values(by=['count'], ascending=False, inplace=True) df_ngram.head() # bigram/trigram count frequency # 1 (ngram here) (data) (data) This could likely be simplified but it certainly does the job.
2
2
77,088,781
2023-9-12
https://stackoverflow.com/questions/77088781/how-to-write-and-read-dataframe-to-parquet-where-column-contains-list-of-dicts
I have a column that contain a list of dictionaries and I'm trying to write it to disk using parquet and reading it back into the same original object. However I'm not able to get the same exact object back. Here's the minimal code example to reproduce the issue: import pyarrow as pa from pyarrow import parquet import pandas as pd COLUMN1_SCHEMA = pa.list_(pa.struct([('Id', pa.string()), ('Age', pa.string())])) SCHEMA = pa.schema([pa.field("column1", COLUMN1_SCHEMA), ('column2', pa.int32())]) df = pd.DataFrame({ "column1": [[{"Id": "1"}, {"Age": "16"}], [{"Id": "2"},{"Age": "17"}]], "column2": [1, 2], }) table = pa.Table.from_pandas(df, schema=SCHEMA) parquet.write_table(table, "f.parquet") df = pa.parquet.read_table("f.parquet", schema=SCHEMA).to_pandas() The problem is that the dataframe that is read back contains repeating dicts where each value is None (one at a time). Illustration below: column1 column2 0 [{'Id': '1','Age': None}, {'Id': None, 'Age': '16'}] 1 1 [{'Id': '2','Age': None}, {'Id': None, 'Age': '17'}] 2 What I wanted was to get the original dataframe back: column1 column2 0 [{'Id': '1'}, {'Age': '16'}] 1 1 [{'Id': '2'}, {'Age': '17'}] 2 Read documentation at pyarrow: https://arrow.apache.org/docs/python/index.html Read similar questions at stackoverflow but could not find an answer for this. Also could not find issues on the topic at Pyarrow's github.
The problem is that the dataframe that is read back contains repeating dicts where each value is None (one at a time). I'm not sure what your intention is. Do you want individual values to be a dictionary? If so I'd suggest sending the schema to this (no need for list_): COLUMN1_SCHEMA = pa.struct([('Id', pa.string()), ('Age', pa.string())]) And the values to this (no extra [] wrapper, everything in the same dict): [{"Id": "1", "Age": "16"}, {"Id": "2","Age": "17"}] Which gives you column1 column2 {'Id': '1', 'Age': '16'} 1 {'Id': '2', 'Age': '17'} 2
3
0
77,092,112
2023-9-12
https://stackoverflow.com/questions/77092112/how-to-apply-weight-curve-with-curve-fit
I have two variables, and I am trying to use curve_fit in scipy optimize to fit the data. It looks alright, but the red line on the left portion does not fit so well to the data (green dots). How can I put some weights on the curve_fit() to shift the red line on left towards the blue line? Here is the code: import pandas as pd import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit from pandas import DataFrame x = [ 57, 83, 124, 141, 196, 223, 275, 302, 341, 714, 895, 1034, 1117, 1207, 1248, 1416, 1494, 1563, 1708, 1785, 1863, 2015, 2139, 2238, 2312, 2412, 2442, 2520, 2596, 2658, 2706, 2777, 2846, 2966, 3106, 3241, 3276, 3424, 3568, 3647, 3831, 3961, 4091, 4248, 4430, 4478, 4644, 4833, 5052, 6041 ] y = [ 70, 81, 87, 91, 96, 106, 109, 114, 120, 129, 144, 162, 168, 175, 181, 184, 190, 195, 205, 213, 216, 219, 224, 226, 231, 236, 239, 247, 255, 260, 264, 269, 282, 292, 297, 304, 308, 313, 319, 322, 327, 333, 338, 341, 345, 354, 362, 364, 374, 391 ] plt.scatter(x,y,color='green') def func(x, a, b): return a * np.power(x,b) popt, pcov = curve_fit(func, x, y) plt.plot(x, func(x, *popt), 'b-', label='fit: a=%5.3f, b=%5.3f' % tuple(popt)) popt2 = [12.6, 0.386] plt.plot(x, func(x, *popt2), 'r-', label='fit: a=%5.3f, b=%5.3f' % tuple(popt2)) plt.semilogx()
You can use the parameter sigma in curve_fit. From the docs: sigma: None or M-length sequence or MxM array, optional Determines the uncertainty in ydata. If we define residuals as r = ydata - f(xdata, *popt), then the interpretation of sigma depends on its number of dimensions: A 1-D sigma should contain values of standard deviations of errors in ydata. In this case, the optimized function is chisq = sum((r / sigma) ** 2). A 2-D sigma should contain the covariance matrix of errors in ydata. In this case, the optimized function is chisq = r.T @ inv(sigma) @ r. So you can consider a 1-D sigma as inverse weights. To better fit a particular part of the curve, assign lower values of sigma to the specific points: plt.scatter(x,y,color='green') def func(x, a, b): return a * np.power(x,b) sigma = np.ones(len(x)) sigma[10:] *= 10 # set higher sigma for all data points other than the first 10 popt, pcov = curve_fit(func, x, y, sigma=sigma) plt.plot(x, func(x, *popt), 'b-', label='fit: a=%5.3f, b=%5.3f' % tuple(popt)) popt2 = [12.6, 0.386] plt.plot(x, func(x, *popt2), 'r-', label='fit: a=%5.3f, b=%5.3f' % tuple(popt2)) plt.semilogx() Which results in: You can play with sigma to get better results than the above.
3
1
77,074,865
2023-9-10
https://stackoverflow.com/questions/77074865/how-to-convert-a-string-mixed-with-infix-and-prefix-sub-expressions-to-all-prefi
Consider I have a string formula which is written in this format: "func(a+b,c)", where func is a custom function, this string contains both infix(i.e. the +) and prefix(i.e. the func) representations, I'd like to convert it to a string with all prefix representations, "func(+(a,b), c)", how can I do that? Another example is "func1(a*(b+c),d)-func2(e,f)" to "-(func1(*(a, +(b,c)), d), func2(e,f))" More background: I want to build a parser, where the user would input the expressions in string format just as described above, normally a custom function would be written as a prefix expression, i.e. "func(a,b)", but due to the common convention, people would still write the +-*/ in an infix expression, i.e. "a+b". If an expression is given in all infix format, I can easily convert it to a tree object I predefined, but if an expression is mixed with both prefix and infix formats, I have no idea how to convert it to a tree object, thus asking how to convert the string to all infix formats. I have no experience with parsers, appreciate any initial guidance.
If the language you are parsing is that similar to Python, you can just use the Python parser as provided by the built-in ast module, and implement a visitor over the nodes that interest you, in order to build up the prefix expression. For example, you could try this: import ast def printc(*args): print(*args, end='') class PrefixPrinter(ast.NodeVisitor): def visit_BinOp(self, node): if type(node.op) == ast.Add: printc("+") if type(node.op) == ast.Sub: printc("-") if type(node.op) == ast.Mult: printc("*") if type(node.op) == ast.Div: printc("/") printc("(") self.visit(node.left) printc(",") self.visit(node.right) printc(")") def visit_Call(self, node): self.visit(node.func) printc("(") for index, arg in enumerate(node.args): self.visit(arg) if index < len(node.args) - 1: printc(", ") printc(")") def visit_Name(self, node): printc(node.id) first_example = "func(a+b,c)" second_example = "func1(a*(b+c),d)-func2(e,f)" printc(f"{first_example=} converts to ") PrefixPrinter().visit(ast.parse(first_example)) print() printc(f"{second_example=} converts to ") PrefixPrinter().visit(ast.parse(second_example)) print() Execution: $ python3 to_prefix.py first_example='func(a+b,c)' converts to func(+(a,b), c) second_example='func1(a*(b+c),d)-func2(e,f)' converts to -(func1(*(a,+(b,c)), d),func2(e, f))
2
2
77,093,266
2023-9-12
https://stackoverflow.com/questions/77093266/how-to-clear-input-field-after-hitting-enter-in-streamlit
I have a streamlit app where I want to get user input and use it later. However, I also want to clear the input field as soon as the user hits Enter. I looked online and it seems I need to pass a callback function to text_input but I can't make it work. I tried a couple different versions but neither works as I expect. import streamlit as st def clear_text(): st.session_state.my_text = "" # This version doesn't clear the text after hitting Enter. my_text = st.text_input("Enter text here", on_change=clear_text) # This version clears the field but doesn't save the input. my_text = st.text_input("Enter text here", on_change=clear_text, key='my_text') st.write(my_text) The expectation is to save the input into my_text and clear the field afterwards. I looked at similar questions about clearing text input here and here but they're not relevant for my case because I want the input field to clear automatically while those cases talk about using a separate button. How do I make it work?
You can slightly adjust the solution provided by @MathCatsAnd : if "my_text" not in st.session_state: st.session_state.my_text = "" def submit(): st.session_state.my_text = st.session_state.widget st.session_state.widget = "" st.text_input("Enter text here", key="widget", on_change=submit) my_text = st.session_state.my_text st.write(my_text) Output :
2
10
77,092,114
2023-9-12
https://stackoverflow.com/questions/77092114/numba-typeerror-on-higher-dimensional-structured-numpy-datatypes
The following code compiles and executes correctly: import numpy as np from numba import njit Particle = np.dtype([ ('position', 'f4'), ('velocity', 'f4')]) arr = np.zeros(2, dtype=Particle) @njit def f(x): x[0]['position'] = x[1]['position'] + x[1]['velocity'] * 0.2 + 1. f(arr) However, making the datatype more highly dimensional causes this code to fail when compiling (but works without @njit): import numpy as np from numba import njit Particle = np.dtype([ ('position', 'f4', (2,)), ('velocity', 'f4', (2,)) ]) arr = np.zeros(2, dtype=Particle) @njit def f(x): x[0]['position'] = x[1]['position'] + x[1]['velocity'] * 0.2 + 1. f(arr) With the following error: TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in function setitem>) found for signature: >>> setitem(Record(position[type=nestedarray(float32, (2,));offset=0],velocity[type=nestedarray(float32, (2,));offset=8];16;False), Literal[str](position), array(float64, 1d, C)) There are 16 candidate implementations: - Of which 16 did not match due to: Overload of function 'setitem': File: <numerous>: Line N/A. With argument(s): '(Record(position[type=nestedarray(float32, (2,));offset=0],velocity[type=nestedarray(float32, (2,));offset=8];16;False), unicode_type, array(float64, 1d, C))': No match. During: typing of staticsetitem at /tmp/ipykernel_21235/2952285515.py (13) File "../../../../tmp/ipykernel_21235/2952285515.py", line 13: <source missing, REPL/exec in use?> Any thoughts on how to remedy the later one? I would like to use more highly dimensionalized datatypes.
You can try to use [:] to set values of the array: import numpy as np from numba import njit Particle = np.dtype([("position", "f4", (2,)), ("velocity", "f4", (2,))]) arr = np.zeros(2, dtype=Particle) @njit def f(x): pos_0 = x[0]["position"] pos_0[:] = x[1]["position"] + x[1]["velocity"] * 0.2 + 1.0 #x[0]["position"][:] = ... works too f(arr) print(arr) Prints: [([1., 1.], [0., 0.]) ([0., 0.], [0., 0.])]
2
3
77,089,361
2023-9-12
https://stackoverflow.com/questions/77089361/what-do-ellipses-do-when-they-are-the-default-argument-of-a-function
In pandas source code, here's a snippet of the to_csv function: @overload def to_csv( self, path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str], sep: str = ..., na_rep: str = ..., float_format: str | Callable | None = ..., What does the ... mean? EDIT: A couple of users have suggested this answer. Though appreciated, as per my comment, this answer does not contain a clear explanation of ... used as function argument defaults. There is a discussion over here, but it is not concrete enough for me to consider it satisfactory, and so I rejected the suggestion to close my question as having been answered elsewhere.
The short answer: in this particular case, the ellipsis (...) is used as a placeholder for the default values in an overloaded method signature, following PEP 484 ("In stubs it may be useful to declare an argument as having a default without specifying the actual default value. … In such cases the default value may be specified as a literal ellipsis"). The long answer: In its current version, the to_csv() method provides two overloads – see source code starting here. An "overloaded function" (or method) in Python simply means a different combination of input and return types, or even a different number of arguments, for the sake of type annotation. It is provided as a stubbed definition that is marked by the @overload decorator – see the section of the corresponding PEP 484 here. At present, this looks as follows for to_csv(): @overload def to_csv( self, path_or_buf: None = ..., sep: str = ..., na_rep: str = ..., float_format: str | Callable | None = ..., columns: Sequence[Hashable] | None = ..., header: bool_t | list[str] = ..., index: bool_t = ..., index_label: IndexLabel | None = ..., mode: str = ..., encoding: str | None = ..., compression: CompressionOptions = ..., quoting: int | None = ..., quotechar: str = ..., lineterminator: str | None = ..., chunksize: int | None = ..., date_format: str | None = ..., doublequote: bool_t = ..., escapechar: str | None = ..., decimal: str = ..., errors: OpenFileErrors = ..., storage_options: StorageOptions = ..., ) -> str: ... @overload def to_csv( self, path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str], sep: str = ..., na_rep: str = ..., float_format: str | Callable | None = ..., columns: Sequence[Hashable] | None = ..., header: bool_t | list[str] = ..., index: bool_t = ..., index_label: IndexLabel | None = ..., mode: str = ..., encoding: str | None = ..., compression: CompressionOptions = ..., quoting: int | None = ..., quotechar: str = ..., lineterminator: str | None = ..., chunksize: int | None = ..., date_format: str | None = ..., doublequote: bool_t = ..., escapechar: str | None = ..., decimal: str = ..., errors: OpenFileErrors = ..., storage_options: StorageOptions = ..., ) -> None: ... @final @doc( storage_options=_shared_docs["storage_options"], compression_options=_shared_docs["compression_options"] % "path_or_buf", ) def to_csv( self, path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str] | None = None, sep: str = ",", na_rep: str = "", float_format: str | Callable | None = None, columns: Sequence[Hashable] | None = None, header: bool_t | list[str] = True, index: bool_t = True, index_label: IndexLabel | None = None, mode: str = "w", encoding: str | None = None, compression: CompressionOptions = "infer", quoting: int | None = None, quotechar: str = '"', lineterminator: str | None = None, chunksize: int | None = None, date_format: str | None = None, doublequote: bool_t = True, escapechar: str | None = None, decimal: str = ".", errors: OpenFileErrors = "strict", storage_options: StorageOptions | None = None, ) -> str | None: # Actual doc and source of the method follow Note that the only difference between the first and second overload is the combination of path_or_buf argument and the return type: In the first case, path_or_buf is None and the return type will be str (namely, as no path is given, the resulting CSV data will be returned). 
In the second case, path_or_buf is something writeable (a file path or some buffer) and the return type will be None (as the CSV data is written to the path or buffer, it does not need to be returned). The default values of arguments do not matter at this point, so they are simply abbreviated as an ellipsis. This, again, follows PEP 484, which specifies: In stubs it may be useful to declare an argument as having a default without specifying the actual default value. … In such cases the default value may be specified as a literal ellipsis. This seems to be a good choice for several reasons: Syntactically, this is just fine, as the ellipsis, as of Python 3, is an ordinary Python object. For a human reader, it is convenient to read; also, it avoids using None for the same purpose, which would be ambiguous (as in: does it mean "I don't care about the actual value at this point" or does it mean "the default value is None"?). For the Python interpreter, it does not make a difference, as it takes its default values from the final signature that precedes the actual code. (This final signature, the third one in the given case, also combines the type annotations for path_or_buf and the return type from the previous two cases, so it is not immediately clear from the type annotations alone, which combinations are possible – which is why @overload was used in the first place). One might wonder why the ellipses are provided here at all – in fact, the code seems to run just as well without them (except for the ones that replace the method body in the stubs). However, as pointed out by @tobias_k, they help with another distinction, regarding the overloads: in the way the ellipses are used, they indicate that a default value is actually part of this particular overload. Note the path_or_buf argument: its default value is only meaningful for the first overload (path_or_buf is None), but not for the second (a path or buffer must be provided by the caller). Consequently, an ellipsis is only present in the first overload's signature but not in the second overload's signature for this particular argument.
5
5
77,091,788
2023-9-12
https://stackoverflow.com/questions/77091788/regex-to-match-only-the-second-ip-address-in-a-range
I'm trying to match only the second valid ip address in a string with a range of ip addresses. Sometimes it's written without a space between addresses and something it has one or more spaces. Also sometimes the ip isn't valid so it shouldn't match. test = ''' 1.0.0.0-1.0.0.240 2.0.0.0 - 1.0.0.241 3.0.0.0 -1.0.0.242 4.0.0.0- 1.0.0.243 5.0.0.0 - 1.0.0.244 6.0.0.0 - 1.0.0.245 7.0.0.0 - 1.0.0.2456 #NOT VALID SO DONT MATCH ''' pattern = r"(?<=-\s))\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}" r = re.compile(pattern, re.DOTALL) print(r.findall(test)) My try only catches: 1.0.0.241 and 1.0.0.243
Change regex pattern to the following: pattern = r"(?<=[-\s])((?:\d{1,3}\.){3}\d{1,3})$" r = re.compile(pattern, re.M) print(r.findall(test)) (?<=[-\s]) - lookbehind assertion to match either - or \s as a boundary before IP address (which is enough in your case) (?:\d{1,3}\.){3} - matches the 3 first octets each followed by . of IP address $ - matches the end of the string in a multi-lined text (recognized by re.M) ['1.0.0.240', '1.0.0.241', '1.0.0.242', '1.0.0.243', '1.0.0.244', '1.0.0.245']
3
3
77,070,305
2023-9-8
https://stackoverflow.com/questions/77070305/how-to-distribute-a-python-package-where-import-name-is-different-than-project-n
I am trying to package my project in order to upload it in PyPI. I have the following directory structure: . β”œβ”€β”€ docs β”œβ”€β”€ LICENSE β”œβ”€β”€ pyproject.toml β”œβ”€β”€ README.md β”œβ”€β”€ src β”‚ β”œβ”€β”€ package_name β”‚ β”‚ β”œβ”€β”€ __init__.py β”‚ β”‚ β”œβ”€β”€ data.json β”‚ β”‚ β”œβ”€β”€ __main__.py β”‚ β”‚ β”œβ”€β”€ utils.py └── tests My package is in under src named src/package_name. The pyproject.toml has the following lines: [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [project] name = "project_name" version = "0.0.1" authors = [{name = "Foo Bar"}] license = {text = "GPL-3.0-only"} description = "A small example package" readme = "README.md" requires-python = ">=3.10" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: POSIX :: Linux", ] [project.urls] Homepage = "https://example.com" I want a user to be able to perform the installation as: pip install project_name but use the code as: >>> from package_name import x Is there any way to achieve this with Hatch? I have read the Build-instructions but can't find how. Note I have tried the following: python3 -m build python3 -m twine upload --repository testpypi dist/* pip install --upgrade -i https://test.pypi.org/simple/ project_name The problem is when I type: >>> import package_name Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'package_name' I searched into path/to/site-packages but only path/to/site-packages/project_name-0.0.1.dist-info is there. Any ideas? Tried to solve my problem based on this Tutorial. The reason I used Hatch is because it was presented in the tutorial. Ξ‘n alternative solution (using also pyproject.toml with different backend) is also appreciated. Edit I have updated the directory structure as proposed in the comments.
I found the solution to my problem by simply switching to setuptools as the backend and modifying the pyproject.toml as follows: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [tool.setuptools.packages.find] where = ["src"] [tool.setuptools.dynamic] version = {attr = "package_name.__version__"} [project] name = "package_name" dynamic = ["version"] authors = [{name = "Foo Bar"}] license = {text = "GPL-3.0-only"} description = "A small example package" readme = "README.md" requires-python = ">=3.10" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Operating System :: POSIX :: Linux", ] [project.urls] Homepage = "https://example.com" My solution is based on the setuptools Documentation.
4
2
77,090,701
2023-9-12
https://stackoverflow.com/questions/77090701/add-new-rate-column-base-previous-column
I would like to add a rate column based on the row before the current row, just like a diff(), but I need to do some calculation as shown below: import pandas as pd import numpy as np df = pd.DataFrame(np.random.random((5,2)),columns=['v1','v2']) print(df) rate = [] rate.append(0) prev = 0 for index, row in df.iterrows(): if index == 0: prev = row continue r = (prev['v2'] - row['v2'])/(row['v1'] - prev['v1']) prev = row rate.append(r) df['rate'] = rate print(df) The current logic works; is there a better way to do this in Python?
You can use the pandas shift function. df['rate'] = (df['v2'] - df['v2'].shift(1)) / (df['v1'].shift(1) - df['v1']) df['rate'] = df['rate'].fillna(0) The shift() method is used to shift the data by one row, which allows you to easily calculate differences between the current row and the previous row using vectorized operations. The shifted values in the first row are NaN, so fillna(0) sets the rate for the first row to 0 (matching the loop's initial value) without resorting to a chained assignment such as df['rate'].iloc[0] = 0, which raises a warning and is not reliable.
2
0
77,089,742
2023-9-12
https://stackoverflow.com/questions/77089742/how-to-stop-non-digit-input-causing-my-python-program-to-crash
I am trying to create a program for a class that will have a conversation. I want to force the user to input an int for the variables your_age and my_age. I've done a lot of research and this I've tried to code the program the way I believe I'm being told but it's not running if I put 'one' instead of '1'. If the user inputs 'one' I want them to get an error code and have the opportunity to re-enter the variable. Code: name = input('Salutations what is your name?') #user will input their name and it will be saved in the variable name print('That is a nice name ' + name + ' whats your favorite color') #genorate a sentance using the variable name color =input('') #this input is collecting the information from the above sentence print (color + ' is a great color')#make user feel like they picked a good color #your_age = int(input('How old are you?'))#user sets variable age while True: your_age = int(input('How old are you?')) try: your_age = int(your_age) break except ValueError: print('INVALID: you can round up or down but please enter your age using digits') my_age = int(input(' how old am I?')) combine_age = str(your_age + my_age) print ('Our combined age is ') print (combine_age) I had the code running but I was stress testing and figured out that if I didn't put in a digit the program would crash. I researched on YouTube and in Automating the Boring Stuff and couldn't find a sufficient answer.
Just convert input into integer inside of Try block and implement error handling for both variables. your_age and my_age. name = input('Salutations what is your name?') #user will input their name and it will be saved in the variable name print('That is a nice name ' + name + ' whats your favorite color') #genorate a sentance using the variable name color =input('Your favorite color?') #this input is collecting the information from the above sentence print (color + ' is a great color')#make user feel like they picked a good color while True: try: your_age = int(input('How old are you?')) break except ValueError: print('INVALID: you can round up or down but please enter your age using digits only') while True: try: my_age = int(input('How old am I?')) break except ValueError: print('INVALID: you can round up or down but please enter your age using digits only') combine_age = str(your_age + my_age) print ('Our combined age is '+combine_age)
2
1
77,081,746
2023-9-11
https://stackoverflow.com/questions/77081746/fastapi-multiple-examples-for-body-in-response
I need to create multiple examples for the Response Body, to display it in API documentation http://127.0.0.1:8000/docs. I found an example for the Request Body in the documentation (there is drop-down list of: "A normal example", "An example with converted data", etc.), but I require the same approach for the Response Body. import uvicorn from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): value: str model_config = { "json_schema_extra": { "examples": [ {"value": "A normal example"}, # working {"value": "An example with converted data"}, # NOT WORKING {"value": "Invalid data ..."}, # NOT WORKING ] } } @app.get(path="/") def get1() -> Item: return Item(value="a") if __name__ == "__main__": uvicorn.run(app)
You can achieve this by adding the response parameter in the decorator method. import uvicorn from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): message: str get1_responses = { 200: { "description": "Success", "content": { "application/json": { "examples": { "Normal": {"value": {"message": "A normal example"}}, "Converted": {"value": {"message": "An example with converted data"}}, } } }, }, 400 : { "description": "Bad request", "content": { "application/json": { "example": {"message": "Invalid data ..."} } }, } } @app.get(path="/", responses=get1_responses) def get1() -> Item: return Item(message="a") if __name__ == "__main__": uvicorn.run(app) The swagger UI
2
3
77,088,341
2023-9-12
https://stackoverflow.com/questions/77088341/removing-unwanted-values-from-a-pandas-data-frame
I'm creating a data frame and want to drop entries in it that are not relevant. I'm looking to drop the values that are not numbers. I have created the data frame using the following code (credit): import pandas as pd import os os.chdir('/pathdirectory/files') csv_files = [f for f in os.listdir() if f.endswith('.csv')] dfs = [] for csv in csv_files: df = pd.read_csv(csv, header=None) df = df.T df.columns = ['DC energy', 'AC energy', 'Capacity factor', 'Inverter Loss'] dfs.append(df) final_df = pd.concat(dfs, ignore_index=True) final_df And it returns this data frame. Obviously I want to remove the wording from the data frame but I am struggling with doing this. Any help is greatly appreciated.
You should set the first columns of the CSVs as index: pd.read_csv(csv, header=None, index_col=0) Alternatively: cols = ['DC energy', 'AC energy', 'Capacity factor', 'Inverter Loss'] final_df = pd.concat([pd.read_csv(csv, header=None, index_col=0) for csv in csv_files], axis=1, ignore_index=True).T.set_axis(cols) Note that this assumes that all files have the same order of columns. You could also keep the default name: final_df = pd.concat([pd.read_csv(csv, header=None, index_col=0) for csv in csv_files], axis=1, ignore_index=True).T
2
3
77,087,929
2023-9-12
https://stackoverflow.com/questions/77087929/how-to-print-a-array-with-numbered-columns-and-rows-nicely-in-python
Currently strugging with nicely printing a array of "O"'s, where the columns and rows should be numbered. So I have a array which is just a nxn-matrix full of the string "O". Now I tried using the following method: def __repr__(self) : Matrix = self.Spielstand #this is just the mentioned array of length n: Ausgabe = " " for j in range(len(Matrix[0])): Ausgabe += str(j + 1) + " " Ausgabe += "\n" for i in range(len(Matrix)): Ausgabe += str(i + 1) + " " for j in range(len(Matrix[i])): Ausgabe += str(Matrix[i][j]) Ausgabe += " " Ausgabe += "\n" return Ausgabe Which works perfectly fine printing something like: 1 2 3 4 5 6 7 8 1 O O O O O O O O 2 O O O O O O O O 3 O O O O O O O O 4 O O O O O O O O 5 O O O O O O O O 6 O O O O O O O O 7 O O O O O O O O 8 O O O O O O O O However when my n gets bigger than 9, things get kind of weird: 1 2 3 4 5 6 7 8 9 10 1 O O O O O O O O O O 2 O O O O O O O O O O 3 O O O O O O O O O O 4 O O O O O O O O O O 5 O O O O O O O O O O 6 O O O O O O O O O O 7 O O O O O O O O O O 8 O O O O O O O O O O 9 O O O O O O O O O O 10 O O O O O O O O O O Is there a simple and nice way (preferably without using external libraries) to print the array nicely even if my n is double or even triple digit? I think using f-Strings and something going from center should work, but I have no clue how to implement it. Thanks for your help!
You can calculate margin size by counting the length of the number converted to string. Margin = len(str(len(Matrix))) Martix -> (array of 12 "O") len(Martix) -> 12 str(12) -> "12" len("12") -> 2 I have created demo, with a solution to your problem: def generateMatrix(n): return [["O"] * n] * n def draw(n): Matrix = generateMatrix(n) Margin = len(str(len(Matrix))) Ausgabe = " " * (Margin + 1) for j in range(len(Matrix[0])): Ausgabe += str(j + 1) + " " * (Margin - len(str(j+1)) + 1) Ausgabe += "\n" for i in range(len(Matrix)): Ausgabe += str(i + 1) + " " * (Margin - len(str(i+1)) + 1) for j in range(len(Matrix[i])): Ausgabe += str(Matrix[i][j]) Ausgabe += " " * Margin Ausgabe += "\n" return Ausgabe print(draw(12)) 1 2 3 4 5 6 7 8 9 10 11 12 1 O O O O O O O O O O O O 2 O O O O O O O O O O O O 3 O O O O O O O O O O O O 4 O O O O O O O O O O O O 5 O O O O O O O O O O O O 6 O O O O O O O O O O O O 7 O O O O O O O O O O O O 8 O O O O O O O O O O O O 9 O O O O O O O O O O O O 10 O O O O O O O O O O O O 11 O O O O O O O O O O O O 12 O O O O O O O O O O O O
2
2
77,086,913
2023-9-12
https://stackoverflow.com/questions/77086913/how-to-get-split-months-from-two-intervals
I've two dates in YYYYmm format start : 202307 end : 202612 want to split them in interval wise, based on provided interval for example split_months ('202307,'202405',5), will give me ((202307,202311), (202312,202404), (202405,202405)) tried with below code, bot stuck in the logic def split_months(start, end, intv): from datetime import datetime periodList = [] periodRangeSplitList = [] start = datetime.strptime(start ,"%Y%m") end = datetime.strptime(end ,"%Y%m") mthDiff = relativedelta.relativedelta(end, start) if mthDiff == 0: periodRangeSplitList.append((start,start)) return periodRangeSplitList diff = mthDiff / intv print(diff) for i in range(intv): periodList.append ((start + diff * i).strftime("%Y%m")) periodList.append(end.strftime("%Y%m")) print(periodList) I tried, above code, but not working, can anyone share any suggestions ? Thanks
What about using a simple while loop? def split_months(start, end, intv): from datetime import datetime from dateutil import relativedelta periodList = [] start = datetime.strptime(start, '%Y%m') end = datetime.strptime(end, '%Y%m') step = relativedelta.relativedelta(months=intv-1) start_ = start while (end_:=start_+step) <= end: # add new period periodList.append((start_, end_)) # update next start start_ = end_ + relativedelta.relativedelta(months=1) # convert output to string return [(s.strftime('%Y%m'), e.strftime('%Y%m')) for s, e in periodList] split_months('202307', '202612', 5) Output: [('202307', '202311'), ('202312', '202404'), ('202405', '202409'), ('202410', '202502'), ('202503', '202507'), ('202508', '202512'), ('202601', '202605'), ('202606', '202610')] If you want to add the last interval up to the end, change the while condition and add an extra line (periodList.append((start_, end))): def split_months(start, end, intv): from datetime import datetime from dateutil import relativedelta periodList = [] start = datetime.strptime(start, '%Y%m') end = datetime.strptime(end ,'%Y%m') step=relativedelta.relativedelta(months=intv-1) start_ = start while (end_:=start_+step) < end: periodList.append((start_, end_)) start_ = end_ + relativedelta.relativedelta(months=1) periodList.append((start_, end)) return [(s.strftime('%Y%m'), e.strftime('%Y%m')) for s,e in periodList] Example: # split_months('202307', '202612', 5) [('202307', '202311'), ('202312', '202404'), ('202405', '202409'), ('202410', '202502'), ('202503', '202507'), ('202508', '202512'), ('202601', '202605'), ('202606', '202610'), ('202611', '202612')] # split_months('202307', '202610', 5) [('202307', '202311'), ('202312', '202404'), ('202405', '202409'), ('202410', '202502'), ('202503', '202507'), ('202508', '202512'), ('202601', '202605'), ('202606', '202610')] # split_months('202307','202405',5) [('202307', '202311'), ('202312', '202404'), ('202405', '202405')]
2
3
77,087,129
2023-9-12
https://stackoverflow.com/questions/77087129/converting-characters-like-%c2%b3-to-integer-in-python
I have this character 'Β³' in my dataset that I'm processing on top of. Generic Idea is to detect if a character is an integer, convert it into an integer and process on top of it. >>> x = 'Β³' >>> x.isdigit() # Returns True True Python detects this character as a digit. But raises the following error when I try to convert it >>> int(x) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: 'Β³' I would like it if such characters could also be converted to integer, to ease my further processing Not sure if this helps, but here is my locale info >>> import locale >>> locale.getdefaultlocale() ('en_US', 'UTF-8')
You can use unicodedata and NFKC to convert it here is a detailed code with some error handling import unicodedata x = 'Β³' try: regular_digit = unicodedata.normalize('NFKC', x) integer_value = int(regular_digit) print(integer_value) except ValueError: print(f"'{x}' is not a convertible superscript digit.")
4
6
77,085,731
2023-9-12
https://stackoverflow.com/questions/77085731/python-how-to-groupby-one-column-and-then-calculate-a-trailing-mean-and-cumulat
I want to groupby one column (for example, 'country'). Each row has an associated 'start_date' and 'end_date'. For every row in the groupby, I want to increment the counter if the 'start_date' in the current row occurs after the most recent 'end_date' in the prior rows (and not increment otherwise). I want the same logic to apply to the trailing mean. I have sorted by country and start_date. For example, I have a dataframe that can be generated with the following code: import pandas as pd # create df data = {'country': ['arg', 'arg', 'arg', 'arg', 'arg', 'usa', 'usa', 'usa'], 'start_date': ['2020-01-01', '2020-01-01', '2020-05-01', '2021-05-01', '2021-07-01', '2020-03-01', '2020-05-01', '2020-09-01'], 'end_date': ['2020-10-01', '2020-09-01', '2021-01-01', '2021-06-01', '2021-12-01', '2020-10-01', '2020-08-01', '2021-05-01'], 'value': [250, 300, 150, 170, 200, 150, 100, 120]} # Create DataFrame df = pd.DataFrame(data) And the desired result (with the new columns trailing_mean and count) would be: country start_date end_date value trailing_mean counter arg 2020-01-01 2020-10-01 250 NA 0 arg 2020-01-01 2020-09-01 300 NA 0 arg 2020-05-01 2021-01-01 150 NA 0 arg 2021-05-01 2021-06-01 170 233.33 3 arg 2021-07-01 2021-12-01 200 217.5 4 usa 2020-03-01 2020-10-01 150 NA 0 usa 2020-05-01 2020-08-01 100 NA 0 usa 2020-09-01 2021-05-01 120 100 1 Notice how the trailing_mean is NA until there are records that have a start_date that occurs AFTER the end_date. On every record, the trailing mean only takes into account past records that have already completed (their end_date happens before the current record's start_date). This is the same logic for the counter. It is 0 and then it increments. It jumps from 0 to 3 because all three prior rows ended before that row has started I have tried to groupby country and iterate through the rows. But I am having trouble accounting for the differences in end_dates. You can't just look back at the prior row you have to look at all prior records because the end_dates are not sequential
IIUC, you can apply a custom function to generate your counts and trailing means for each group: def count_and_avg(df): mask = [df['end_date'] < start for start in df['start_date']] df = df.assign(count=[sum(m) for m in mask], trailing_mean=[df[m]['value'].sum() / sum(m) if sum(m) else 0 for m in mask] ) return df out = df.groupby('country').apply(count_and_avg).reset_index(drop=True) Output for your sample data: country start_date end_date value count trailing_mean 0 arg 2020-01-01 2020-10-01 250 0 0.000000 1 arg 2020-01-01 2020-09-01 300 0 0.000000 2 arg 2020-05-01 2021-01-01 150 0 0.000000 3 arg 2021-05-01 2021-06-01 170 3 233.333333 4 arg 2021-07-01 2021-12-01 200 4 217.500000 5 usa 2020-03-01 2020-10-01 150 0 0.000000 6 usa 2020-05-01 2020-08-01 100 0 0.000000 7 usa 2020-09-01 2021-05-01 120 1 100.000000
3
2
77,083,986
2023-9-11
https://stackoverflow.com/questions/77083986/imap-tools-access-raw-message-data
How do you access the raw message data of an email when using imap-tools? Specifically so it can then be loaded into the email.message_from_bytes() function for forwarding? from imap_tools import MailBox, AND with MailBox('imap.gmail.com').login('[email protected]', '123456', 'INBOX') as mailbox: # get unseen emails from INBOX folder for msg in mailbox.fetch(AND(seen=False), mark_seen=False): pass # get the raw data from msg
According to the source it looks like the msg.obj property contains the value after message_from_bytes has been run. class MailMessage: """The email message""" def __init__(self, fetch_data: list): raw_message_data, raw_uid_data, raw_flag_data = self._get_message_data_parts(fetch_data) self._raw_uid_data = raw_uid_data self._raw_flag_data = raw_flag_data self.obj = email.message_from_bytes(raw_message_data) So msg.obj can be used directly.
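A minimal usage sketch building on the loop from the question (msg.obj is already the parsed email.message.Message, so as_bytes() gives back the raw RFC 822 bytes if a fresh copy is needed for forwarding):
import email
from imap_tools import MailBox, AND

with MailBox('imap.gmail.com').login('[email protected]', '123456', 'INBOX') as mailbox:
    for msg in mailbox.fetch(AND(seen=False), mark_seen=False):
        raw_bytes = msg.obj.as_bytes()                    # raw message data as bytes
        forwarded = email.message_from_bytes(raw_bytes)   # same kind of object as msg.obj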
3
4
77,082,451
2023-9-11
https://stackoverflow.com/questions/77082451/python-polars-apply-function-to-two-columns-and-an-argument
Intro In Polars I would like to do quite complex queries, and I would like to simplify the process by dividing the operations into methods. Before I can do that, I need to find out how to provide these function with multiple columns and variables. Example Data # Libraries import polars as pl from datetime import datetime # Data test_data = pl.DataFrame({ "class": ['A', 'A', 'A', 'B', 'B', 'C'], "date": [datetime(2020, 1, 31), datetime(2020, 2, 28), datetime(2021, 1, 31), datetime(2022, 1, 31), datetime(2023, 2, 28), datetime(2020, 1, 31)], "status": [1,0,1,0,1,0] }) The Problem For each group, I would like to know if a reference date overlaps with the year-month in the date column of the dataframe. I would like to do something like this. # Some date reference_date = datetime(2020, 1, 2) # What I would expect the query to look like (test_data .group_by("class") .agg( pl.col("status").count().alias("row_count"), #just to show code that works pl.lit(reference_date).alias("reference_date"), pl.col("date", "status") .map_elements(lambda group: myfunc(group, reference_date)) .alias("point_in_time_status") ) ) # The desired output pl.DataFrame({ "class": ['A', 'B', 'C'], "reference_date": [datetime(2020, 1, 2), datetime(2020, 1, 2), datetime(2020, 1, 2)], "point_in_time_status": [1,0,0] }) But I can simply not find any solutions for doing operations on groups. Some suggest using pl.struct, but that just outputs some weird object without columns or anything to work with. Example in R of the same operation # Loading library library(tidyverse) # Creating dataframe df <- data.frame( date = c(as.Date("2020-01-31"), as.Date("2020-02-28"), as.Date("2021-01-31"), as.Date("2022-01-31"), as.Date("2023-02-28"), as.Date("2020-01-31")), status = c(1,0,1,0,1,0), class = c("A","A","A","B","B","C")) # Finding status in overlapping months ref_date = as.Date("2020-01-02") df %>% group_by("class") %>% filter(format(date, "%Y-%m") == format(ref_date, "%Y-%m")) %>% filter(status == 1)
This should work with expressions reference_date = datetime(2020, 1, 2) ( test_data .group_by('class', maintain_order=True) .agg( point_in_time_status = ( (pl.col('date').dt.month_start() == pl.lit(reference_date).dt.month_start()) & (pl.col('status')==1) ).any(), reference_date = pl.lit(reference_date) ) ) I'm using the month_start method instead of converting to a string format as strings aren't especially performant if they can be avoided. You can see in the agg that we're looking for times when the first day of date's month is the same as the first day of reference date's month & if status == 1. That is all in parenthesis and then the aggregate any function is applied to that which applies per class. Lastly, we add in the reference_date column to get that in the output. You can interchange the order of those if you like. You can make that into a method but you should do it with polars expressions otherwise you're going to lose the efficiency gains that polars brings to the table. You can then monkey patch those to the pl.Expr namespace or create your own namespace As an example you could do: def myFunc(self, reference_date, status): return ( (self.dt.month_start()==reference_date.dt.month_start()) & (status==1) ).any() pl.Expr.myFunc=myFunc ( test_data .group_by('class', maintain_order=True) .agg( point_in_time_status = pl.col('date').myFunc( pl.lit(reference_date), pl.col('status') ), reference_date=pl.lit(reference_date) ) )
3
1
77,079,865
2023-9-11
https://stackoverflow.com/questions/77079865/recreate-randperm-matlab-function-in-python
I have searched on stackoverflow for people facing similar issues and this topic Replicating MATLAB's `randperm` in NumPy is the most similar. However, although it is possible to recreate the behavior of randperm function from Matlab in Python using numpy random permutation, the numbers generated are not the same, even though I choose the same seed generator for both languages. I am a bit confused since my tests were relevant for other random functions between Matlab and Python. Here is what I have tried: Matlab rng(42); randperm(15) which returns ans = 11 7 6 5 15 14 1 4 9 10 3 13 8 2 12 Python np.random.seed(42) print(np.random.permutation(range(1,16))) which returns [10 12 1 14 6 9 3 2 15 5 8 11 13 4 7] How can I change my Python code so it can reproduce the same order of random numbers than Matlab ?
It seems that Matlab and Numpy use the same random number generators by default, and the discrepancy is caused by the inner workings of randperm being different in the two languages. In old Matlab versions, randperm worked by generating a random array and outputting the indices that would make the array sorted (using the second output of sort). In more modern Matlab versions (I'm using R2017b), randperm is a built-in function, so the source code cannot be seen, but it seems to use the same method: >> rng('default') >> rng(42) >> randperm(15) ans = 11 7 6 5 15 14 1 4 9 10 3 13 8 2 12 >> rng(42) >> [~, ind] = sort(rand(1,15)) ind = 11 7 6 5 15 14 1 4 9 10 3 13 8 2 12 So, if the random number generators are actually the same in the two languages, which seems to be the case, you can replicate that behaviour in Numpy by defining your own version of randperm using argsort: >>> import numpy as np np.random.seed(42) ind = np.argsort(np.random.random((1,16)))+1 print(ind) [[11 7 6 5 15 16 14 1 4 9 10 3 13 8 2 12]] Note, however, that relying on the random number generators being the same in the two languages is risky, and probably version-dependent.
3
3
77,082,895
2023-9-11
https://stackoverflow.com/questions/77082895/vectorizing-an-apply-function-in-pandas
I have a dataframe grouped by issue_ids where i want to apply a custom function. The grouped dataframe looks as follows import pandas as pd import numpy as np sub_test=pd.DataFrame(columns=['issue_id','step','conversion_rate'],data=[['01-abc-234',0,0.45],['01-abc-234',1,0.35],['01-abc-234',2,0.15],['01-abc-234',3,1],['02-abc-234',0,0.05],['02-abc-234',1,0.15],['02-abc-234',2,0.65],['02-abc-234',3,1]]) sub_test.info() I want to group by issue id and apply the following function for each grouped dataframe def calculate_conversion_step(row, df): if row == 0: return np.prod(df.loc[df['step'].isin([1, 2]), 'conversion_rate']) elif row == 1: return np.prod(df.loc[df['step'] == 2, 'conversion_rate']) else: return 1 Basically, what i am doing here is iterating through each dataframe for individual issue ids and applying the aforementioned function to each row of the filtered dataframe. I used .apply() but my dataframe is too large to function well with apply. final=pd.DataFrame() for issue_id in sub_test['issue_id'].unique(): int_df = sub_test[sub_test['issue_id'] == issue_id] # Apply the 'calculate_conversion_step' function to calculate 'conversion_step' for each issue int_df['conversion_step'] = int_df['step'].apply(lambda x: calculate_conversion_step(x, int_df)) # Concatenate the results for each issue final = pd.concat([final, int_df]) Is there anyway i can make it faster? this is my expected output
import numpy as np cond0, cond1, cond2 = sub_test['step'].eq(0), sub_test['step'].eq(1), sub_test['step'].eq(2) s1 = sub_test.groupby('issue_id')['conversion_rate'].transform(lambda x: x.where(cond1 | cond2).prod()) s2 = sub_test.groupby('issue_id')['conversion_rate'].transform(lambda x: x.where(cond2).sum()) sub_test['conversion_step'] = np.select([cond0, cond1], [s1, s2], 1) output: issue_id step conversion_rate conversion_step 0 01-abc-234 0 0.45 0.0525 1 01-abc-234 1 0.35 0.1500 2 01-abc-234 2 0.15 1.0000 3 01-abc-234 3 1.00 1.0000 4 02-abc-234 0 0.05 0.0975 5 02-abc-234 1 0.15 0.6500 6 02-abc-234 2 0.65 1.0000 7 02-abc-234 3 1.00 1.0000
2
1
77,081,815
2023-9-11
https://stackoverflow.com/questions/77081815/element-wise-average-in-dictionary-of-lists
I have a very large python dictionary. I want to perform an element-wise averaging for each element in each list. Let's say: dict = { "a": [2,5,3], "b": [1,0,2], "c": [5,2,5] } The output should be: [2.6 2.3 3.3] where each element is the average of all the elements at that index in all the lists in dictionary. This naive approach does not seem efficient because the dictionary is very large and so are the lists in dictionary. avg = np.zeros(3) for item in dict.values(): for x in range(0,3): avg[x] += item[x] for x in range(0,3): avg[x] /= 3 print(avg) How to do this efficiently?
This seems 2-3 times faster than mozway's fastest (for size 1000x1000): avg = [sum(column) / len(column) for column in zip(*d.values())] Benchmarked on Attempt This Online!: 19.09 Β± 0.03 ms Kelly 26.33 Β± 0.19 ms Kelly_fmean 51.09 Β± 0.25 ms mozway_numpy 249.17 Β± 1.17 ms bart_original 266.40 Β± 5.85 ms mozway_mean Python: 3.11.4 (main, Sep 9 2023, 15:09:21) [GCC 13.2.1 20230801] In mozway's test, bart_original was slower than mozway_mean, so let me try on two more systems: On replit.com: 16.58 Β± 0.20 ms Kelly 19.62 Β± 0.17 ms Kelly_fmean 34.98 Β± 0.10 ms mozway_numpy 184.64 Β± 0.25 ms bart_original 244.38 Β± 0.78 ms mozway_mean Python: 3.10.11 (main, Apr 4 2023, 22:10:32) [GCC 12.2.0] On trinket.io: 24.17 Β± 0.50 ms Kelly 31.63 Β± 0.64 ms Kelly_fmean 75.26 Β± 1.96 ms mozway_numpy 261.52 Β± 7.42 ms bart_original 289.71 Β± 10.68 ms mozway_mean Python: 3.10.9 (main, Jan 23 2023, 22:32:48) [GCC 10.2.1 20210110] Benchmark script: def Kelly(d): return [sum(column) / len(column) for column in zip(*d.values())] def Kelly_fmean(d): return list(map(fmean, zip(*d.values()))) def bart_original(dict): avg = np.zeros(1000) for item in dict.values(): for x in range(0,1000): avg[x] += item[x] for x in range(0,1000): avg[x] /= 1000 return avg def mozway_numpy(d): return np.array(list(d.values())).mean(axis=0).tolist() def mozway_mean(d): return list(map(mean, zip(*d.values()))) funcs = Kelly, Kelly_fmean, bart_original, mozway_numpy, mozway_mean import random from timeit import timeit, default_timer as timer from statistics import mean, fmean, stdev import numpy as np import sys d = {i: random.choices(range(10), k=1000) for i in range(1000)} times = {f: [] for f in funcs} def stats(f): ts = [t * 1e3 for t in sorted(times[f])[:5]] return f'{mean(ts):6.2f} Β± {stdev(ts):4.2f} ms ' for _ in range(50): for f in funcs: t0 = timer() f(d) times[f].append(timer() - t0) for f in sorted(funcs, key=stats): print(stats(f), f.__name__) print('\nPython:', sys.version)
2
1
77,077,529
2023-9-10
https://stackoverflow.com/questions/77077529/how-can-i-resolve-userwarning-the-palette-list-has-more-values-10-than-neede
I am using "tab10" palette because of its distinct colors blue, green, orange and red. k_clust = KMeans(n_clusters=4, random_state= 35, n_init=1).fit(df_normal) palette = sns.color_palette("tab10") sns.pairplot(new_df, hue="clusters", palette=palette) The number of clusters are only 4 and the palette "tab10" has more than 4 colors. Is there a way to address this UserWarning? The output is: C:\Users\....\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\seaborn\axisgrid.py:1507: UserWarning: The palette list has more values (10) than needed (4), which may not be intended. func(x=vector, **plot_kwargs)
The docs for color_palette() say that you can pass n_colors=4 to the call. Try this: ... palette = sns.color_palette("tab10", n_colors=4) # equal to n_clusters ...
3
4
77,071,266
2023-9-9
https://stackoverflow.com/questions/77071266/ravendb-python-client-useoptimisticconcurrency-does-this-option-exist
Recently I started to work with the Python RavenDB client and found out that the documentation for the Python client is not complete. The official page does not even list Python as an option for the documentation. Anyhow, the official client is there, and I wanted to understand whether it contains something like "session.Advanced.UseOptimisticConcurrency". I tried to look into the code and did not find anything like this. Does anybody know about it? Maybe it is turned on by default? The documentation for Python does not contain any hint.
This option can be set in your DocumentConventions. You can set a property of your DocumentStore with your custom document conventions. Then upon internal/automatic RequestExecutor creation, the conventions will be passed forward to be processed.
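A rough sketch of how that could look with the official ravendb client (the exact attribute name on the conventions object is an assumption mirroring the .NET client's UseOptimisticConcurrency — verify it against the installed package, e.g. with dir(store.conventions)):
from ravendb import DocumentStore

store = DocumentStore(urls=["http://localhost:8080"], database="Demo")
# assumed attribute name; set it before initialize() so the RequestExecutor picks up the conventions
store.conventions.use_optimistic_concurrency = True
store.initialize()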
2
2
77,061,254
2023-9-7
https://stackoverflow.com/questions/77061254/how-to-deploy-to-aws-elastic-beanstalk-using-python-3-11-64bit-amazon-linux-2023
I am trying to deploy a simple Flask app to Elastic beanstalk. I am able to deploy the sample version. However, I am currently struggling to deploy my own. My Python app is already named application.py and changed flask name in the code to "application" in the code. Inside my .ebextension files are the following: postgres.config: packages: yum: mariadb-devel: [] postgresql-devel: [] (edit: removed mariadb-devel. now says postgres is not available to install. More info below) python.config: commands: 01_update_pip: command: "pip install --upgrade pip setuptools" option_settings: aws:elasticbeanstalk:container:python: WSGIPath: application.py I am using the AWS website for deployment. I am semi-new and still figuring it out. My directory looks like this: server (root) .ebextensions - postgres.config - python.config application.py requirements.txt venv in the logs some noticeable things: [INFO] Error occurred during build: Yum does not have postgresql-devel available for installation [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension] [WARN]could not build optimal types_hash, you should increase either types_hash_max_size: 1024 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size I've tried running another command that seems to at least deploy in Amazon Linux 2 but with a bad gateway error. Sadly, I do not remember it any more. Plus, I would prefer if possible to stick to the newest version. If it's easier attempting it in a different way then that's OK. On Amazon Linux 2 (assuming name is Application.py): commands: 01_update_pip: command: "pip install --upgrade pip setuptools" option_settings: aws:elasticbeanstalk:container:python: WSGIPath: Application:application
The documentation for Elastic Beanstalk is not updated for AL2023 and still assumes AL2, so don't just follow the Elastic Beanstalk docs if you want to use AL2023. I don't think they have a postgresql-devel package for AL2023, based on this link for the installed package list. Also, they changed the package manager from yum to dnf; even though yum should still work, I would recommend changing yum to dnf in your file. If you can, it is best to ssh into the deployed Elastic Beanstalk instance and run "dnf search" for the exact name of the package you are looking for. As in the link here.
3
2
77,076,663
2023-9-10
https://stackoverflow.com/questions/77076663/rng-challenge-python
I am trying to solve a CTF challenge in which the goal is to guess the generated number. Since the number is huge and you only have 10 attempts per number, I don't think you can apply binary search or any kind of algorithm to solve it, and that it has something to do with somehow getting the seed of the random function and being able to generate the next number, but I have no idea on where to start to get the correct seed. Do you have any idea? Here's the code of the challenge: #!/usr/bin/env python3 import signal import os import random TIMEOUT = 300 assert("FLAG" in os.environ) FLAG = os.environ["FLAG"] assert(FLAG.startswith("CCIT{")) assert(FLAG.endswith("}")) def handle(): for i in range(625): print(f"Round {i+1}") guess_count = 10 to_guess = random.getrandbits(32) while True: print("What do you want to do?") print("1. Guess my number") print("2. Give up on this round") print("0. Exit") choice = int(input("> ")) if choice == 0: exit() elif choice == 1: guess = int(input("> ")) if guess == to_guess: print(FLAG) exit() elif guess < to_guess: print("My number is higher!") guess_count -= 1 else: print("My number is lower!") guess_count -= 1 elif choice == 2: print(f"You lost! My number was {to_guess}") break if guess_count == 0: print(f"You lost! My number was {to_guess}") break if __name__ == "__main__": signal.alarm(TIMEOUT) handle()
Don't try guessing the first 624 numbers, just give up on them. You're told what they were; feed them into randcrack as shown in its example. Ask it to predict the next 32-bit number and guess that.
For a bigger challenge, you could try it without that tool. Here's some insight, "predicting" the next number, i.e., showing how it's computed from the state:
import random

for _ in range(5):
    s = random.getstate()[1]
    y = s[s[-1]]
    y ^= (y >> 11); y ^= (y << 7) & 0x9d2c5680; y ^= (y << 15) & 0xefc60000; y ^= (y >> 18)
    print('predict:', y)
    print('actual: ', random.getrandbits(32))
    print()
Sample output:
predict: 150999088
actual:  3825261045

predict: 457032747
actual:  457032747

predict: 667801614
actual:  667801614

predict: 3817694986
actual:  3817694986

predict: 816636218
actual:  816636218
First I get the state, which is 624 words of 32 bits plus an index. The calculation of y is from here. The first prediction is wrong because in reality, the first time, the more complicated code above it (which regenerates the state) runs. If you can figure out how to invert those calculations, you could reconstruct the original state from the given first 624 numbers, then set that state with random.setstate, and generate the next number. The tool does seem to do that inversion, but it doesn't look trivial.
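For completeness, a self-contained sketch of the randcrack route, demonstrated locally against Python's own generator rather than the remote service (pip install randcrack first; in the actual challenge you would submit the 624 numbers revealed by the "You lost!" messages instead):
import random
from randcrack import RandCrack

rc = RandCrack()
# Feed exactly 624 consecutive 32-bit outputs -- in the challenge, these are
# the numbers revealed after giving up on rounds 1..624.
for _ in range(624):
    rc.submit(random.getrandbits(32))

# Round 625: predict the next 32-bit number and use it as the guess.
prediction = rc.predict_getrandbits(32)
print(prediction == random.getrandbits(32))  # True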
5
6
77,076,966
2023-9-10
https://stackoverflow.com/questions/77076966/headers-to-column-pandas-dataframe
For example, I have a pandas DataFrame of the test results in some class. It could look like this table:
Name   English  French  History  Math  Physic  Chemistry  Biology
Mike   3        3       4        5     6       5          4
Tom    4        4       3        4     4       5          5
Nina   5        6       4        3     3       3          5
Anna   4        3       4        5     5       3          3
Musa   5        5       4        4     4       6          5
Maria  4        3       5        4     3       2          3
Chris  6        5       5        5     5       5          6
For every student I want to create at least two columns with the best test result and the best subject. Important: every student can have more than one best subject (when the results are tied)! For the example above it should look like this:
Name   English  French  History  Math  Physic  Chemistry  Biology  Best result  Best subject 1  Best subject 2
Mike   3        3       4        5     6       5          4        6            Physic          None
Tom    4        4       3        4     4       5          5        5            Chemistry       Biology
Nina   5        6       4        3     3       3          5        6            French          None
Anna   4        3       4        5     5       3          3        5            Math            Physic
Musa   5        5       4        4     4       6          5        6            Chemistry       None
Maria  4        3       5        4     3       2          3        5            History         None
Chris  6        5       5        5     5       5          6        6            English         Biology
What is the best way to do it in pandas? Thank you in advance!
Another possible solution : tmp = df.set_index("Name") # a DataFrame bre = tmp.max(axis=1) # a Series bsu = ( ((tmp.columns + "|") @ tmp.eq(bre, axis=0).T) .str.strip("|").str.split("|", expand=True) .rename(lambda x: f"Best subject {x+1}", axis=1) ) out = tmp.assign(**{"Best result": bre}).join(bsu).reset_index()#.fillna("None") Output : Name English French History Math Physic Chemistry Biology Best result Best subject 1 Best subject 2 0 Mike 3 3 4 5 6 5 4 6 Physic 1 Tom 4 4 3 4 4 5 5 5 Chemistry Biology 2 Nina 5 6 4 3 3 3 5 6 French 3 Anna 4 3 4 5 5 3 3 5 Math Physic 4 Musa 5 5 4 4 4 6 5 6 Chemistry 5 Maria 4 3 5 4 3 2 3 5 History 6 Chris 6 5 5 5 5 5 6 6 English Biology
2
1
77,076,223
2023-9-10
https://stackoverflow.com/questions/77076223/python-copying-rows-with-same-id-values
I have a big dataframe with columns including ID and multiple values and different rows can have same or different ID values. I would like to create a new dataframe so, that every row has only one ID and the specific column values are just appended next to the ID. The Dataframe also has other columns with additional values that are same for same ID rows that i would like to keep ID type1 type2 value1 value2 value3 1 dog yellow 1 2 3 1 dog yellow 5 6 7 2 cat brown 1 1 1 3 mouse blue 1 1 1 1 dog yellow 1 2 3 expected output: ID type1 type2 value 1 dog yellow 1 2 3 5 6 7 1 2 3 2 cat brown 1 1 1 3 mouse blue 1 1 1 I have been exploring the groupby option, can't get it to have this kind of output
You can melt and groupby.agg: group = ['ID', 'type1', 'type2'] out = df.melt(group).groupby(group, as_index=False)['value'].agg(list) Output: ID type1 type2 value 0 1 dog yellow [1, 5, 1, 2, 6, 2, 3, 7, 3] 1 2 cat brown [1, 1, 1] 2 3 mouse blue [1, 1, 1] If order matters: out = (df.set_index(group).stack().groupby(group).agg(list) .reset_index(name='value') ) Output: ID type1 type2 value 0 1 dog yellow [1, 2, 3, 5, 6, 7, 1, 2, 3] 1 2 cat brown [1, 1, 1] 2 3 mouse blue [1, 1, 1]
3
2
77,074,676
2023-9-10
https://stackoverflow.com/questions/77074676/importerror-cannot-import-name-deprecated-from-typing-extensions
I want to download spacy, but the version of typing-extensions is lowered in the terminal: ERROR: pydantic 2.3.0 has requirement typing-extensions>=4.6.1, but you'll have typing-extensions 4.4.0 which is incompatible. ERROR: pydantic-core 2.6.3 has requirement typing-extensions!=4.7.0,>=4.6.0, but you'll have typing-extensions 4.4.0 which is incompatible. Installing collected packages: typing-extensions Attempting uninstall: typing-extensions Found existing installation: typing-extensions 4.7.1 Uninstalling typing-extensions-4.7.1: Successfully uninstalled typing-extensions-4.7.1 Successfully installed typing-extensions-4.4.0 Next I want to install the language pack python -m spacy download en, but another error occurs: (base) E:\Anaconda>python -m spacy download en Traceback (most recent call last): File "E:\Anaconda\lib\site-packages\confection\__init__.py", line 38, in <module> from pydantic.v1 import BaseModel, Extra, ValidationError, create_model File "E:\Anaconda\lib\site-packages\pydantic\__init__.py", line 13, in <module> from . import dataclasses File "E:\Anaconda\lib\site-packages\pydantic\dataclasses.py", line 11, in <module> from ._internal import _config, _decorators, _typing_extra File "E:\Anaconda\lib\site-packages\pydantic\_internal\_config.py", line 9, in <module> from ..config import ConfigDict, ExtraValues, JsonEncoder, JsonSchemaExtraCallable File "E:\Anaconda\lib\site-packages\pydantic\config.py", line 9, in <module> from .deprecated.config import BaseConfig File "E:\Anaconda\lib\site-packages\pydantic\deprecated\config.py", line 6, in <module> from typing_extensions import Literal, deprecated ImportError: cannot import name 'deprecated' from 'typing_extensions' (E:\Anaconda\lib\site-packages\typing_extensions.py) My current python version is 3.7, should I update it? Or is there any better solution? I'm a newbie in this area, thank you all!
You should use typing_extensions==4.7.1 try : pip install typing_extensions==4.7.1 --upgrade I also suggest you to upgrade your python version from 3.7 to 3.10 or 3.11 See a relevant answer: https://github.com/tiangolo/fastapi/discussions/9808
14
17
77,075,446
2023-9-10
https://stackoverflow.com/questions/77075446/subset-pandas-dataframe-to-get-specific-number-of-rows-based-on-values-in-anothe
I have a pandas dataframe as follows: df1 site_id date hour reach maid 0 16002 2023-09-02 21 NaN 33f9fad6-20c5-426c-962f-bc2fbb82aecb 1 16002 2023-09-04 17 NaN 33f9fad6-20c5-426c-962f-bc2fbb82aecb 2 16002 2023-09-04 19 NaN 4a676aeb-6f6f-4622-934b-59b8f149aad7 3 16002 2023-09-04 17 NaN 35363191-c6aa-49fb-beb1-04a98898bed2 4 16002 2023-09-03 22 NaN a44beb20-a90a-4135-be18-6dda71eeb7c2 I have created another dataframe based on the above dataframe that provides the count of records for each [site_id,date,hour] combination. T df2 site_id date hour count 1666 37226 2023-09-02 8 4586 1676 37226 2023-09-03 16 3586 639 36972 2023-09-03 21 235 640 36972 2023-09-03 22 5431 641 36972 2023-09-03 23 343 I want to filter the first data frame and get exact number of records as given in the count column of second data frame. For example, I want to get the 4586 records from the first data frame matching the site_id 37226, date 2023-09-02 and hour 8. I tried using a forloop on the second dataframe as follows: for index,rows in k3.iterrows(): sid=rows['site_id'] dt=rows['date'] hr=rows['hour'] cnt=rows['count'] kdf1=dff[(dff['site_id'] == sid) & (dff['date']==dt) & (dff['hour']==hr)] kdf2=kdf1[:cnt] This works - but works extremely slow. Is there a faster method to get the subset. I am also attaching herewith the links to both sample dataframes: Link to df1 and df2
You can merge the count from df2 to df1, and then using .groupby to reduce the count of groups: cols = ["site_id", "date", "hour"] df1 = df1.merge(df2, on=cols, how="right") df1 = df1.groupby(cols, group_keys=False).apply(lambda x: x[: x["count"].iloc[0]]) df1.pop("count") print(df1.head()) Prints: site_id date hour reach maid 0 37221 2023-09-03 19 NaN 3e769e74-9129-49ba-838d-c36f3a9b3335 1 37221 2023-09-03 19 NaN 71e258d2-5155-4001-9b3c-02a1a1f9c9fb 2 37221 2023-09-03 19 NaN 92eaee88-b41c-4999-b1b8-6be183e5d2cf 3 37221 2023-09-03 19 NaN c6eb504a-9259-410b-8391-7b06b3e92a41 4 37221 2023-09-03 19 NaN c36400ff-0790-4844-b58b-2e4cdaafb4d9 Note: With your data this method takes ~0.15 seconds, your original version ~11.2 seconds.
2
2
77,072,922
2023-9-9
https://stackoverflow.com/questions/77072922/python-opencv-template-matching-and-feature-detection-not-working-properly
I'm attempting to identify specific shapes in an image using a template. I've edited the original image by adding two stars to it. Now, I'm trying to detect the positions of these stars, but it doesn't seem to be recognizing them. I've employed two methods, template matching and feature detection, but neither is yielding the expected results. My primary objective is to successfully detect both stars and connect them to form a rectangular shape, ultimately providing me with just the answers area as a result in a new image. However, my initial focus is on accurately detecting the marks. The base image and the template are as follows: First i've tried to match the image with the template. # Load the base image and the black star image base_image = cv2.imread(path_saida_jpg, cv2.IMREAD_COLOR) star_template = cv2.imread(path_base_star, cv2.IMREAD_COLOR) base_gray = cv2.cvtColor(base_image, cv2.COLOR_BGR2GRAY) star_template_gray = cv2.cvtColor(star_template, cv2.COLOR_BGR2GRAY) # Perform template matching to find the stars result = cv2.matchTemplate(base_image, star_template, cv2.TM_CCOEFF_NORMED) threshold = 0.80 locations = np.where(result >= threshold) # Draw rectangles around the found stars for loc in zip(*locations[::-1]): h, w = star_template.shape[:2] cv2.rectangle(base_image, loc, (loc[0] + w, loc[1] + h), (0, 255, 0), 2) # Draw a green rectangle cv2.imwrite('result_image.jpg', base_image) The locations were always empty, the software as only capable to find some match when i down the threshold variable to 0.6. but the result wasn't the expected. Here it is: Afterward, I realized that the issue might be related to scale. Consequently, I attempted to address it using feature detection and matching with ORB (Oriented FAST and Rotated BRIEF). In my initial attempt, I obtained no results when using the template image at its original scale, which was 64px. Consequently, I decided to increase its size to 512px, hoping to enhance the chances of success. Surprisingly, the outcome was even less favorable. Below is the code and the resulting output: base_image = cv2.imread(path_saida_jpg, cv2.IMREAD_COLOR) star_template = cv2.imread(path_base_star, cv2.IMREAD_COLOR) base_gray = cv2.cvtColor(base_image, cv2.COLOR_BGR2GRAY) star_template_gray = cv2.cvtColor(star_template, cv2.COLOR_BGR2GRAY) orb = cv2.ORB_create() kp1, des1 = orb.detectAndCompute(base_gray, None) kp2, des2 = orb.detectAndCompute(star_template_gray, None) bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) matches = bf.match(des1, des2) matches = sorted(matches, key=lambda x: x.distance) result_image = cv2.drawMatches(base_image, kp1, star_template, kp2, matches[:10], None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS) # Export the result cv2.imwrite('result_image.jpg', result_image) Does anyone have any suggestions? I'm relatively new to the field of image processing, and I find myself somewhat disoriented in the midst of this complex task. I've been using ChatGPT and resources from GeeksforGeeks to assist me.
I have cropped your star from your image to use as a template in Python/OpenCV template matching. I then use a technique of masking the correlation image in a loop to find the two matches. Each top match is masked out with zeros in the TM_CCORR_NORMED (normalized cross correlation) surface before searching for the next top match. Read the input Read the template Set arguments Compute TM_CCORR_NORMED image Loop over matches, if max_val > match_threshold, save correlation location and value and mask the correlation image. Then repeat until max_val below match_thresh. Save results Input: Template: import cv2 import numpy as np # read image img = cv2.imread('prova_da_epcar.jpg') # read template tmplt = cv2.imread('star.png') hh, ww, cc = tmplt.shape # set arguments match_thresh = 0.95 # stopping threshold for match value num_matches = 10 # stopping threshold for number of matches match_radius = max(hh,ww)//4 # approx radius of match peaks # get correlation surface from template matching corrimg = cv2.matchTemplate(img,tmplt,cv2.TM_CCORR_NORMED) hc, wc = corrimg.shape # get locations of all peaks higher than match_thresh for up to num_matches imgcopy = img.copy() corrcopy = corrimg.copy() for i in range(1, num_matches): # get max value and location of max min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(corrcopy) x1 = max_loc[0] y1 = max_loc[1] x2 = x1 + ww y2 = y1 + hh loc = str(x1) + "," + str(y1) if max_val > match_thresh: print("match number:", i, "match value:", max_val, "match x,y:", loc) # draw draw blue bounding box to define match location cv2.rectangle(imgcopy, (x1,y1), (x2,y2), (0,0,255), 2) # draw black filled circle over copy of corrimg cv2.circle(corrcopy, (x1,y1), match_radius, 0, -1) # faster alternate - insert black rectangle # corrcopy[y1:y2, x1:x2] = 0 i = i + 1 else: break # save results # power of 4 exaggeration of correlation image to emphasize peaks cv2.imwrite('prova_da_epcar_corr.png', (255*cv2.pow(corrimg,4)).clip(0,255).astype(np.uint8)) cv2.imwrite('prova_da_epcar_star_corr_masked.png', (255*cv2.pow(corrcopy,4)).clip(0,255).astype(np.uint8)) cv2.imwrite('prova_da_epcar_star_multi_match.png', imgcopy) # show results # power of 4 exaggeration of correlation image to emphasize peaks cv2.imshow('image', img) cv2.imshow('template', tmplt) cv2.imshow('corr', cv2.pow(corrimg,4)) cv2.imshow('corr masked', cv2.pow(corrcopy,4)) cv2.imshow('result', imgcopy) cv2.waitKey(0) cv2.destroyAllWindows() Correlation Image: Final Masked Correlation Image: Result: I suggest in the future that you use 3 or 4 symbols in your image to help if your image is slightly rotated. You might also consider using a ring as your symbol to be rotation independent. I also suggest that you use cv2.TM_CCORR_NORMED (or cv2.TM_SQDIFF_NORMED and mask with white) rather than cv2.TM_CCOEF_NORMED There are also non-maxima suppression methods for detecting multiple matches that do not need masking. See this example.
2
4
77,072,374
2023-9-9
https://stackoverflow.com/questions/77072374/classify-input-numbers-into-fixed-ranges-several-million-of-times
I have a few ranges (that can overlap) as parameter; for example: # tuple[0] <= n < tuple[1] ranges = [(70, 80), (80, 120), (120, 130), (120, 2000), (1990, 2000), (2000, 2040), (2040, 2050)] And I have a list of tuples as input, where the second element of each tuple is the number that determines the range(s) the tuple belongs. tuples = [('a', 71), ('b', 79), ('c', 82), ('d', 121), ('e', 1991), ('f', 2010), ('g', 2045), ('h', 3000)] I need to determine the members of each range and store the results in the following manner: # ranges[i] => members[i] members = [{71: 'a', 79: 'b'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e'}, {1991: 'e'}, {2010: 'f'}, {2045: 'g'}] My approach is to create a dict that associates a number to the range(s) it belongs: members = [{} for _ in ranges] number_to_members = defaultdict(list) for i,(x0,x1) in enumerate(ranges): for x in range(x0,x1): number_to_members[x].append(members[i]) for c,n in tuples: if n in number_to_members: for m in number_to_members[n]: m[n] = c Now the problem is that I need to classify tens of millions of different lists of tuples using the same ranges; the current implementation would require to generate number_to_members for each list of tuples, which is a lot of overhead. How do you work around that problem? UPDATE Adding an example of the actual problem: from collections import defaultdict ranges = [(70, 80), (80, 120), (120, 130), (120, 2000), (1990, 2000), (2000, 2040), (2040, 2050)] inputs = [ [('a', 71), ('b', 79), ('c', 82), ('d', 121), ('e', 1991), ('f', 2010), ('g', 2045), ('h', 3000)], [('x', 75), ('y', 78), ('z', 1995)] ] members = [{} for _ in ranges] number_to_members = defaultdict(list) for i,(x0,x1) in enumerate(ranges): for x in range(x0,x1): number_to_members[x].append(members[i]) for tuples in inputs: for c,n in tuples: if n in number_to_members: for m in number_to_members[n]: m[n] = c print(members) output: [{71: 'a', 79: 'b'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e'}, {1991: 'e'}, {2010: 'f'}, {2045: 'g'}] [{71: 'a', 79: 'b', 75: 'x', 78: 'y'}, {82: 'c'}, {121: 'd'}, {121: 'd', 1991: 'e', 1995: 'z'}, {1991: 'e', 1995: 'z'}, {2010: 'f'}, {2045: 'g'}]
Given the sizes you mentioned, your code in UPDATE seems speed-wise rather optimal already, as it takes time proportional to the size of your desired outputs. It's just incorrect, since it accumulates the input list data instead of computing the members list for each input alone. Since you said you don't need the members after each iteration anymore, you can fix the issue by simply clearing the dicts: for tuples in inputs: ... print(members) for m in members: m.clear() If you change your mind and do need the members lists later, you can make a copy before you clear: members_lists = [] for tuples in inputs: ... print(members) copy = [m.copy() for m in members] members_lists.append(copy) for m in members: m.clear()
4
1
77,071,445
2023-9-9
https://stackoverflow.com/questions/77071445/convert-plotly-express-graph-into-json
I am using plotly express to create different graphs. I am trying to convert graphs into json format to save in json file. While doing so I am getting error using different ways as below: Way-1 code gives error as below Error-2 Object of type ndarray is not JSON serializable Way-2 code gives error as below Error-2 Object of type Figure is not JSON serializable Below is MWE: import json import dash_bootstrap_components as dbc from dash import dcc from dash_bootstrap_templates import load_figure_template import plotly.express as px import plotly.io as pio import pandas as pd def generate_pie_charts(df, template) -> list[dict[str, Any]]: pie_charts = list() for field in df.columns.tolist(): value_count_df = df[field].value_counts().reset_index() cols = value_count_df.columns.tolist() name: str = cols[0] value: str = cols[1] try: # Way-1 # figure = px.pie( # data_frame=value_count_df, # values=value, # names=name, # title=f"Pie chart of {field}", # template=template, # ).to_plotly_json() # pie_chart = dcc.Graph(figure=figure).to_plotly_json() # Way-2 figure = px.pie( data_frame=value_count_df, values=value, names=name, title=f"Pie chart of {field}", template=template, ) figure = pio.to_json(figure) # figure = pio.to_json(figure).encode() pie_chart = dcc.Graph(figure=pio.from_json(figure)).to_plotly_json() # pie_chart = dcc.Graph(figure=pio.from_json(figure.decode())).to_plotly_json() pie_charts.append(pie_chart) except Exception as e: print(f"Error-1 {e}") print(f"Length {len(pie_charts)}") return pie_charts def perform_exploratory_data_analysis(): rows = list() template = "darkly" load_figure_template(template) info = { "A": [ "a", "a", "b", "b", "c", "a", "a", "b", "b", "c", "a", "a", "b", "b", "c", ], "B": [ "c", "c", "c", "c", "c", "a", "a", "b", "b", "c", "a", "a", "b", "b", "c", ], } df = pd.DataFrame(info) try: row = dbc.Badge( "For Pie Charts", color="info", className="ms-1" ).to_plotly_json() rows.append(row) row = generate_pie_charts(df, template) rows.append(row) data = {"contents": rows} status = False msg = "Error creating EDA graphs." file = "eda.json" with open(file, "w") as json_file: json.dump(data, json_file) msg = "EDA graphs created." status = True except Exception as e: print(f"Error-2 {e}") result = (status, msg) return result perform_exploratory_data_analysis() What I am missing?
You were close with way-2, you need to : Convert the figure (go.Figure) to a JSON string using pio.to_json(), so that ndarray and other python types used in the figure's data are properly converted into their javascript equivalent. Deserialize the JSON string using json.loads() in order to get the figure as a standard JSON dict and prevent double encoding with future call to json.dump() (nb. pio.from_json() would return a go.Figure which json.dump() doesn't know how to serialize). The relevant part of the code : # ... figure = px.pie( data_frame=value_count_df, values=value, names=name, title=f"Pie chart of {field}", template=template, ) # serializable figure dict fig_dict = json.loads(figure.to_json()) pie_chart = dcc.Graph(figure=fig_dict).to_plotly_json() # ...
2
2
77,070,883
2023-9-9
https://stackoverflow.com/questions/77070883/performance-comparison-mojo-vs-python
Mojo, a programming language, claims to be 65000x faster than Python. I am eager to understand whether there is any concrete benchmark data that supports this claim. Also, how does it differ on real-world problems? I primarily encountered this claim on their website and have watched several videos discussing Mojo's speed. However, I am seeking concrete benchmark data that substantiates this assertion.
TL;DR: The claim comes directly from a blog post on the Mojo website. The benchmark is a computation of the Mandelbrot set. It is not a rigorous benchmark, nor one representative of most Python applications, and it is clearly biased (e.g. sequential vs. parallel codes).
They chose it because it has the following properties: "Simple to express", "Pure compute" (i.e. compute-bound), "Irregular computation", "Embarrassingly parallel", "Vectorizable". They also state that "Mandelbrot is a nice problem that demonstrates the capabilities of Mojo". This is the kind of problem where Python does not shine (and is not meant to be used for), but on which Mojo shines (a pretty optimal use case for it). Thus the speed-up can be huge. In fact, this benchmark is rather the maximum speed-up you can get, not an average over many real-world applications.
First things first: it means nothing to compare language performance. We compare implementations. CPython is the standard Python implementation, but not the only one. CPython is an interpreter, so code is very slow when it is fully interpreted. Optimized Python code tends not to run much interpreted code but vectorized code instead (e.g. the script mostly calls optimized Numpy functions written in C). PyPy is an alternative implementation which uses a JIT compiler to run code faster. It claims 4.8x faster performance compared to CPython over a detailed set of benchmarks (geometric average). Some benchmarks are very hard to make faster even in a native language: symbolic and large string computations, for instance (CPython strings are already well optimized in C). PyPy tends to be faster for numerical codes.
In the Mojo benchmark, the naive Python codes, the Numpy one and the PyPy one are sequential, while the final Mojo code is multi-threaded. This is not a fair comparison. One would have to use at least the multiprocessing module to compare parallel codes with each other. This is a critical point since they run the code on an 88-core Intel Xeon CPU. Indeed, since the computation is compute-bound, one can expect a speed-up close to 88 using multiple threads. In fact, their parallel Mojo implementation is 85 times faster than their sequential Mojo one. Without a parallel Python implementation, it would be more fair to claim that Mojo is 874 times faster than a naive CPython implementation, 175 times faster than a (rather naive) Numpy code, and 40 times faster than a PyPy implementation (on this specific Mandelbrot set computation).
In sequential code, most of the speed-up comes from the use of SIMD instructions and instruction-level parallelism. The Python implementations tend not to use them. While Numpy can, not all of its functions are well vectorized (the ones operating on complex numbers tend not to be, AFAIK), and Numpy code tends to be memory-bound due to the creation of many large temporary arrays. Note that tools like Numba and Cython are not shown in the benchmark, even though they are frequently used to speed up numerical codes. It would be more fair to add them (or at least mention them).
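For illustration, here is a minimal sketch (not the code used in the Mojo blog post) of what a multi-core pure-Python Mandelbrot baseline could look like, using only the standard library; the image size and iteration count are arbitrary choices:
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 960, 960, 200

def mandelbrot_row(y):
    # Escape-time iteration for one row of the image.
    ci = -1.5 + 3.0 * y / HEIGHT
    row = []
    for x in range(WIDTH):
        cr = -2.0 + 3.0 * x / WIDTH
        zr = zi = 0.0
        it = 0
        while zr * zr + zi * zi <= 4.0 and it < MAX_ITER:
            zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
            it += 1
        row.append(it)
    return row

if __name__ == "__main__":
    with Pool() as pool:                    # one worker process per core by default
        image = pool.map(mandelbrot_row, range(HEIGHT))
    print(sum(map(sum, image)))             # checksum so the work is not optimized away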
15
26
77,069,769
2023-9-8
https://stackoverflow.com/questions/77069769/running-a-python-script-using-schedule-library
My company blocks the Windows Task Scheduler (along with many other Python libraries), so I need to use the schedule library. My script imports several flat files, performs some grouping and simple calculations using dataframes, and then saves one final TXT file. I've read up on how schedule works, but how do I use it to run the entire script?
import pandas as pd
import time
import schedule

def daily():
    # then do all dataframe creation/manipulation, group-bys, concatenations, etc...

schedule.every().day.at("16:30").do(daily)
It doesn't error out, but it also doesn't do anything; it just states: do daily() (last run: [never], next run: 2023-09-08 16:30:00)
The do function only registers the job, but it will not actually execute it. You need to use the run_pending function to execute the job. import pandas as pd import time import schedule def daily(): #dataframe creation/manipulation etc... schedule.every().day.at("16:30").do(daily) while True: schedule.run_pending() time.sleep(60) The interval of 60 seconds should be sufficient for your case since you are working with minute-level time accuracy and not seconds. run_pending() Run all jobs that are scheduled to run. Please note that it is intended behavior that run_pending() does not run missed jobs. For example, if you’ve registered a job that should run every minute and you only call run_pending() in one hour increments then your job won’t be run 60 times in between but only once.
2
5
77,068,908
2023-9-8
https://stackoverflow.com/questions/77068908/how-to-install-pytorch-with-cuda-support-on-windows-11-cuda-12-no-matching
I'm trying to install PyTorch with CUDA support on my Windows 11 machine, which has CUDA 12 installed and python 3.10. When I run nvcc --version, I get the following output: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023 Cuda compilation tools, release 12.2, V12.2.140 Build cuda_12.2.r12.2/compiler.33191640_0 I'd like to install PyTorch version 2.0.0 with CUDA support, so I attempted to run the following command: python -m pip install torch==2.0.0+cu117 However, I encountered the following error: ERROR: Could not find a version that satisfies the requirement torch==2.0.0+cu117 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1) ERROR: No matching distribution found for torch==2.0.0+cu117 Does anyone have any suggestions?
To install PyTorch using pip or conda, it's not mandatory to have an nvcc (CUDA runtime toolkit) locally installed in your system; you just need a CUDA-compatible device. To install PyTorch (2.0.1 with CUDA 11.7), you can run: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 For CUDA 11.8, run: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 If you want to use CUDA 12.1, you can install PyTorch Nightly using: (FYI: as of today (Sep 9, 2023), cu12.1 is not available for a stable release.) pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121 You can find more details on PyTorch's official page: https://pytorch.org/get-started/locally/. I hope it helps you. Thank you!
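After any of these installs, a quick way to confirm that the wheel actually sees your GPU (standard PyTorch calls, nothing specific to this setup):
import torch

print(torch.__version__)           # e.g. 2.0.1+cu118
print(torch.version.cuda)          # CUDA version the wheel was built against
print(torch.cuda.is_available())   # True if the driver and GPU are usable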
3
13
77,068,488
2023-9-8
https://stackoverflow.com/questions/77068488/how-to-efficiently-convert-a-markdown-table-to-a-dataframe-in-python
I need to convert a markdown table into a pandas DataFrame. I've managed to do this using the pd.read_csv function with '|' as the separator, but it seems like there's some additional cleanup required. Specifically, I need to remove the row containing '-----', which is used for table separation, and I also want to get rid of the last column. Here's a simplified example of what I'm doing: import pandas as pd from io import StringIO # The text containing the table text = """ | Some Title | Some Description | Some Number | |------------|------------------------------|-------------| | Dark Souls | This is a fun game | 5 | | Bloodborne | This one is even better | 2 | | Sekiro | This one is also pretty good | 110101 | """ # Use StringIO to create a file-like object from the text text_file = StringIO(text) # Read the table using pandas read_csv with '|' as the separator df = pd.read_csv(text_file, sep='|', skipinitialspace=True) # Remove leading/trailing whitespace from column names df.columns = df.columns.str.strip() # Remove the index column df = df.iloc[:, 1:] Is there a more elegant and efficient way to convert a markdown table into a DataFrame without needing to perform these additional cleanup steps? I'd appreciate any suggestions or insights on improving this process.
Like this import re import pandas as pd text = """ | Some Title | Some Description | Some Number | |------------|------------------------------|-------------| | Dark Souls | This is a fun game | 5 | | Bloodborne | This one is even better | 2 | | Sekiro | This one is also pretty good | 110101 | """ pattern = r"\| ([\w\s]+) \| ([\w\s]+) \| ([\w\s]+) \|" # Use the findall function to extract all rows that match the pattern matches = re.findall(pattern, text) # Extract the header and data rows header = matches[0] data = matches[1:] # Create a pandas DataFrame using the extracted header and data rows df = pd.DataFrame(data, columns=header) # Optionally, convert numerical columns to appropriate types df['Some Number'] = df['Some Number'].astype(int) print(df)
7
3
77,068,855
2023-9-8
https://stackoverflow.com/questions/77068855/specifying-multiple-possible-criteria-at-one-level-of-a-multi-index-cross-sectio
I'm newly working with a MultiIndex, and am struggling with how to get certain cross-sections of data. Specifically, when I want to specify more than one category within an index level, but not all categories at that level. Borrowing from pandas documentation for the data: d = {'num_legs': [4, 4, 2, 2], 'num_wings': [0, 0, 2, 2], 'class': ['mammal', 'mammal', 'mammal', 'bird'], 'animal': ['cat', 'dog', 'bat', 'penguin'], 'locomotion': ['walks', 'walks', 'flies', 'walks']} df = pd.DataFrame(data=d) df = df.set_index(['class', 'animal', 'locomotion']) df num_legs num_wings class animal locomotion mammal cat walks 4 0 dog walks 4 0 bat flies 2 2 bird penguin walks 2 2 I'm using pandas.DataFrame.xs() to grab the data I want. I'm able to get crossection based on one citerion like so: df.xs('mammal',level = 'class'), and can get refine on multiple levels like so: df.xs(('mammal','flies'),level = ['class','locomotion']) Where things break down is when I want to specify, say, bat and penguin, instead of just bat. A statement like df.xs((('mammal','bird')),level = ['class','class']) technically works, but it's looking for data that is both 'mammal' and 'bird', returning an empty dataframe. Statements like df.xs((('mammal','bird')),level = 'class') return errors. The .xs documentation doesn't cover this case. Is what I want possible? Is there a better way altogether? Any light you can shine on the subject is appreciated. EDIT: Answers by both respondents very helpful. I should have mentioned that some of that data I am working with is multi-indexed at the column level, which is part of why I was looking at df.xs(). Is there a similar simple way of specifying multiple criteria at the same level for multi-indexed columns?
For these cases, I recommend just to use .loc: out = df.loc[(["mammal", "bird"], slice(None), slice(None))] print(out) Prints: num_legs num_wings class animal locomotion mammal cat walks 4 0 dog walks 4 0 bat flies 2 2 bird penguin walks 2 2 EDIT: For multi-index columns: Initial df: num_legs num_wings A B A B class animal locomotion mammal cat walks 4 0 1 2 dog walks 4 0 1 2 bat flies 2 2 1 2 bird penguin walks 2 2 2 2 bacteria coli swims -1 -1 -1 -1 Then: out = df.loc[(["mammal", "bird"], slice(None), "walks"), ((slice(None), "A"))] print(out) Prints: num_legs num_wings A A class animal locomotion mammal cat walks 4 1 dog walks 4 1 bird penguin walks 2 2
2
1
77,067,644
2023-9-8
https://stackoverflow.com/questions/77067644/jax-errors-unexpectedtracererror-only-when-using-jax-debug-breakpoint
My jax code runs fine but when I try to insert a breakpoint with jax.debug.breakpoint I get the error: jax.errors.UnexpectedTracerError. I would expect this error to show up also without setting a breakpoint. Is this intended behavior or is something weird happening? When using jax_checking_leaks none of the reported tracers seem to actually be leaked.
There is currently a bug in jax.debug.breakpoint that can lead to spurious tracer leaks in some situations: see https://github.com/google/jax/issues/16732. There's not any easy workaround at the moment, unfortunately, but hopefully the issue will be addressed soon.
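If all you need in the meantime is to inspect values (rather than stepping interactively), jax.debug.print works inside jitted code and, as far as I know, does not trigger the same leak -- a minimal sketch:
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    y = jnp.sin(x) * 2.0
    jax.debug.print("y = {}", y)   # prints the runtime value, even under jit
    return y

f(jnp.arange(3.0))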
2
2
77,067,011
2023-9-8
https://stackoverflow.com/questions/77067011/scipy-optimize-linprog-doesnt-return-minimal-solution-id-expect-it-to
I read that linprog from scipy returns minimal solutions and that one could get the maximal one by multiplying the objective function by -1. I read it here: https://realpython.com/linear-programming-python/ And I've tested the example they provided to see if I could get the minimal solution too -- I could.
Regarding the problem I'm trying to solve, I would expect the solutions to be:
opt_sol1.x = [0.61538462 0.38461538]
opt_sol2.x = [0.0, 1.0]
but I get the same result [0.61538462 0.38461538] in both cases. Why? My guess is that it relates to the values in my objective function being close to one another, but that is just a guess. And is there a way I can get the second solution I'm looking for?
from scipy.optimize import linprog

obj_fct1 = [0.5, 0.5]
obj_fct2 = [-0.5, -0.5]

lhs_ineq = [[1.5, 0.2]]
rhs_ineq = [1]

lhs_eq = [[1, 1]]
rhs_eq = [1]

bnds = [(0, 1), (0, 1)]

opt_sol1 = linprog(c=obj_fct1, A_ub=lhs_ineq, b_ub=rhs_ineq, A_eq=lhs_eq, b_eq=rhs_eq, bounds=bnds)
print(opt_sol1.x)
print("------------------------------------------")
opt_sol2 = linprog(c=obj_fct2, A_ub=lhs_ineq, b_ub=rhs_ineq, A_eq=lhs_eq, b_eq=rhs_eq, bounds=bnds)
print(opt_sol2.x)

>>> [0.61538462 0.38461538]
>>> ------------------------------------------
>>> [0.61538462 0.38461538]
The "issue" here is that there are an infinite number of minima and maxima for this problem, with the objective function equal to the same value for all optimal solutions (ignoring the sign flip for maximization). This can be seen by examining your equality constraint relative to your objective function and noting that they are linear combinations of each other. In the case of linear programs, the optimal solution lies on the boundary, either a line or vertex. If it is at a vertex and not a line, then there is a unique solution, but if it lies on a line, then there are infinite solutions. Because your objective function and equality constraint are linear combinations of each other, the consequence is that there are an infinite number of solutions and the minima and maxima are the same. In your case, your objective is f(x1, x2) = 0.5*x1 + 0.5*x2 or, equivalently, f(x1, x2) = 0.5*(x1 + x2). Your equality constraint says that x1 + x2 = 1, so we see that f(x1, x2) = 0.5 is the result. (If we were maximizing, we would change the sign of the objective function as you did and get that f(x1,x2) = -0.5 for all feasible solutions.) Considering there are an infinite number of solutions for both the maximization and minimization problems, why does the solver always return [0.61538462 0.38461538]? The default solver uses a form of the SIMPLEX algorithm, which always returns a vertex value (even when there is a line of solutions, a vertex value is still a solution, and this narrows down the number of possible solutions that need to be checked). In this case, there is a vertex when x1 + x2 = 1 and 1.5*x1 + 0.2*x2 = 1 intersect, which is at x1 = 8/13 and x2 = 5/13.
2
6
77,066,397
2023-9-8
https://stackoverflow.com/questions/77066397/duckdb-whats-the-difference-between-sql-and-execute-function
I am a newbie using the DuckDB library in Python, and while going through the docs I stumbled upon two functions to execute SQL instructions, namely execute() and sql(). What's the difference between the two? I am really scratching my head over this.
While sql and execute can be used to achieve the same results, they have slight differences which may impact which function you use. The sql function runs the query as-is. It will return a DuckDBPyRelation which allows "constructing relationships". duckdb.sql(query: str, alias: str = 'query_relation', connection: duckdb.DuckDBPyConnection = None) β†’ duckdb.DuckDBPyRelation Run a SQL query. If it is a SELECT statement, create a relation object from the given SQL query, otherwise run the query as-is. The execute function will also run queries, but can handle prepared statements that accepts parameters and returns the connection DuckDBPyConnection instead of a relationship. duckdb.execute(query: str, parameters: object = None, multiple_parameter_sets: bool = False, connection: duckdb.DuckDBPyConnection = None) β†’ duckdb.DuckDBPyConnection Execute the given SQL query, optionally using prepared statements with parameters set
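A small example of the practical difference (sketched from memory of the Python API, so treat it as illustrative rather than authoritative):
import duckdb

# sql(): runs the query as-is and returns a relation you can keep composing or fetch from
rel = duckdb.sql("SELECT 42 AS answer")
print(rel.fetchall())                                            # [(42,)]

# execute(): supports prepared statements with bound parameters and returns the connection
con = duckdb.connect()
print(con.execute("SELECT 41 + ? AS answer", [1]).fetchall())    # [(42,)]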
9
6
77,064,579
2023-9-8
https://stackoverflow.com/questions/77064579/module-numpy-has-no-attribute-no-nep50-warning
When load HuggingFaceEmbeddings, always shows error like below. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-27-04d1583103a5> in <cell line: 2>() 1 get_ipython().system('pip install --force-reinstall numpy==1.24.0') ----> 2 embeddings = HuggingFaceEmbeddings( 3 model_name="oshizo/sbert-jsnli-luke-japanese-base-lite" 4 ) 12 frames /usr/local/lib/python3.10/dist-packages/numpy/__init__.py in __getattr__(attr) 309 """ 310 try: --> 311 x = ones(2, dtype=float32) 312 if not abs(x.dot(x) - float32(2.0)) < 1e-5: 313 raise AssertionError() AttributeError: module 'numpy' has no attribute '_no_nep50_warning' I tried to downgrade numpy version to 1.25.2 -> 1.24.4 -> 1.24.2 -> 1.24.1 -> 1.24.0 But always same errors happen. and googles has no answer about this. If anyone know how to fixing it this problem... please tell me any information, Heres my pip installed source code. numpy versions 1.24.0 was trying to downgrade version last time. !pip install --force-reinstall numpy==1.24.0 from langchain.embeddings import HuggingFaceEmbeddings embeddings = HuggingFaceEmbeddings( model_name="intfloat/multilingual-e5-large" ) Heres my pip list below Package Version -------------------------------- --------------------- absl-py 1.4.0 accelerate 0.22.0 aiohttp 3.8.5 aiosignal 1.3.1 alabaster 0.7.13 albumentations 1.3.1 altair 4.2.2 annotated-types 0.5.0 anyio 3.7.1 appdirs 1.4.4 argon2-cffi 23.1.0 argon2-cffi-bindings 21.2.0 array-record 0.4.1 arviz 0.15.1 astropy 5.3.2 astunparse 1.6.3 async-timeout 4.0.3 attrs 23.1.0 audioread 3.0.0 autograd 1.6.2 Babel 2.12.1 backcall 0.2.0 beautifulsoup4 4.11.2 bleach 6.0.0 blinker 1.4 blis 0.7.10 blosc2 2.0.0 bokeh 3.2.2 boto3 1.28.43 botocore 1.31.43 bqplot 0.12.40 branca 0.6.0 build 1.0.0 CacheControl 0.13.1 cachetools 5.3.1 catalogue 2.0.9 certifi 2023.7.22 cffi 1.15.1 chardet 5.2.0 charset-normalizer 3.2.0 chex 0.1.7 click 8.1.7 click-plugins 1.1.1 cligj 0.7.2 cloudpickle 2.2.1 cmake 3.27.2 cmdstanpy 1.1.0 colorcet 3.0.1 colorlover 0.3.0 colour 0.1.5 community 1.0.0b1 confection 0.1.1 cons 0.4.6 contextlib2 21.6.0 contourpy 1.1.0 convertdate 2.4.0 cryptography 41.0.3 cufflinks 0.17.3 cupy-cuda11x 11.0.0 cvxopt 1.3.2 cvxpy 1.3.2 cycler 0.11.0 cymem 2.0.7 Cython 0.29.36 dask 2023.8.1 dataclasses-json 0.5.14 datascience 0.17.6 db-dtypes 1.1.1 dbus-python 1.2.18 debugpy 1.6.6 decorator 4.4.2 defusedxml 0.7.1 diskcache 5.6.3 distributed 2023.8.1 distro 1.7.0 dlib 19.24.2 dm-tree 0.1.8 docutils 0.18.1 dopamine-rl 4.0.6 duckdb 0.8.1 earthengine-api 0.1.367 easydict 1.10 ecos 2.0.12 editdistance 0.6.2 eerepr 0.0.4 en-core-web-sm 3.6.0 entrypoints 0.4 ephem 4.1.4 et-xmlfile 1.1.0 etils 1.4.1 etuples 0.3.9 exceptiongroup 1.1.3 faiss-gpu 1.7.2 fastai 2.7.12 fastcore 1.5.29 fastdownload 0.0.7 fastjsonschema 2.18.0 fastprogress 1.0.3 fastrlock 0.8.2 filelock 3.12.3 Fiona 1.9.4.post1 firebase-admin 5.3.0 Flask 2.2.5 flatbuffers 23.5.26 flax 0.7.2 folium 0.14.0 fonttools 4.42.1 frozendict 2.3.8 frozenlist 1.4.0 fsspec 2023.6.0 future 0.18.3 gast 0.4.0 gcsfs 2023.6.0 GDAL 3.4.3 gdown 4.6.6 geemap 0.25.0 gensim 4.3.2 geocoder 1.38.1 geographiclib 2.0 geopandas 0.13.2 geopy 2.3.0 gin-config 0.5.0 glob2 0.7 google 2.0.3 google-api-core 2.11.1 google-api-python-client 2.84.0 google-auth 2.17.3 google-auth-httplib2 0.1.0 google-auth-oauthlib 1.0.0 google-cloud-bigquery 3.10.0 google-cloud-bigquery-connection 1.12.1 google-cloud-bigquery-storage 2.22.0 google-cloud-core 2.3.3 google-cloud-datastore 
2.15.2 google-cloud-firestore 2.11.1 google-cloud-functions 1.13.2 google-cloud-language 2.9.1 google-cloud-storage 2.8.0 google-cloud-translate 3.11.3 google-colab 1.0.0 google-crc32c 1.5.0 google-pasta 0.2.0 google-resumable-media 2.5.0 googleapis-common-protos 1.60.0 googledrivedownloader 0.4 graphviz 0.20.1 greenlet 2.0.2 grpc-google-iam-v1 0.12.6 grpcio 1.57.0 grpcio-status 1.48.2 gspread 3.4.2 gspread-dataframe 3.3.1 gym 0.25.2 gym-notices 0.0.8 h5netcdf 1.2.0 h5py 3.9.0 holidays 0.32 holoviews 1.17.1 html5lib 1.1 httpimport 1.3.1 httplib2 0.22.0 huggingface-hub 0.16.4 humanize 4.7.0 hyperopt 0.2.7 idna 3.4 imageio 2.31.3 imageio-ffmpeg 0.4.8 imagesize 1.4.1 imbalanced-learn 0.10.1 imgaug 0.4.0 importlib-metadata 6.8.0 importlib-resources 6.0.1 imutils 0.5.4 inflect 7.0.0 iniconfig 2.0.0 intel-openmp 2023.2.0 ipyevents 2.0.2 ipyfilechooser 0.6.0 ipykernel 5.5.6 ipyleaflet 0.17.3 ipython 7.34.0 ipython-genutils 0.2.0 ipython-sql 0.5.0 ipytree 0.2.2 ipywidgets 7.7.1 itsdangerous 2.1.2 jax 0.4.14 jaxlib 0.4.14+cuda11.cudnn86 jeepney 0.7.1 jieba 0.42.1 Jinja2 3.1.2 jmespath 1.0.1 joblib 1.3.2 jsonpickle 3.0.2 jsonschema 4.19.0 jsonschema-specifications 2023.7.1 jupyter-client 6.1.12 jupyter-console 6.1.0 jupyter_core 5.3.1 jupyter-server 1.24.0 jupyterlab-pygments 0.2.2 jupyterlab-widgets 3.0.8 kaggle 1.5.16 keras 2.12.0 keyring 23.5.0 kiwisolver 1.4.5 langchain 0.0.284 langcodes 3.3.0 langsmith 0.0.33 launchpadlib 1.10.16 lazr.restfulclient 0.14.4 lazr.uri 1.0.6 lazy_loader 0.3 libclang 16.0.6 librosa 0.10.1 lightgbm 4.0.0 linkify-it-py 2.0.2 lit 16.0.6 llama-cpp-python 0.1.83 llvmlite 0.39.1 locket 1.0.0 logical-unification 0.4.6 LunarCalendar 0.0.9 lxml 4.9.3 Markdown 3.4.4 markdown-it-py 3.0.0 MarkupSafe 2.1.3 marshmallow 3.20.1 matplotlib 3.7.1 matplotlib-inline 0.1.6 matplotlib-venn 0.11.9 mdit-py-plugins 0.4.0 mdurl 0.1.2 miniKanren 1.0.3 missingno 0.5.2 mistune 0.8.4 mizani 0.9.3 mkl 2023.2.0 ml-dtypes 0.2.0 mlxtend 0.22.0 more-itertools 10.1.0 moviepy 1.0.3 mpmath 1.3.0 msgpack 1.0.5 multidict 6.0.4 multipledispatch 1.0.0 multitasking 0.0.11 murmurhash 1.0.9 music21 9.1.0 mypy-extensions 1.0.0 natsort 8.4.0 nbclassic 1.0.0 nbclient 0.8.0 nbconvert 6.5.4 nbformat 5.9.2 nest-asyncio 1.5.7 networkx 3.1 nibabel 4.0.2 nltk 3.8.1 notebook 6.5.5 notebook_shim 0.2.3 numba 0.56.4 numexpr 2.8.5 numpy 1.24.0 oauth2client 4.1.3 oauthlib 3.2.2 openai 0.28.0 opencv-contrib-python 4.8.0.76 opencv-python 4.8.0.76 opencv-python-headless 4.8.0.76 openpyxl 3.1.2 opt-einsum 3.3.0 optax 0.1.7 orbax-checkpoint 0.3.5 osqp 0.6.2.post8 packaging 23.1 pandas 1.5.3 pandas-datareader 0.10.0 pandas-gbq 0.17.9 pandocfilters 1.5.0 panel 1.2.2 param 1.13.0 parso 0.8.3 partd 1.4.0 pathlib 1.0.1 pathy 0.10.2 patsy 0.5.3 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 23.2.1 pip-tools 6.13.0 platformdirs 3.10.0 plotly 5.15.0 plotnine 0.12.3 pluggy 1.3.0 polars 0.17.3 pooch 1.7.0 portpicker 1.5.2 prefetch-generator 1.0.3 preshed 3.0.8 prettytable 3.8.0 proglog 0.1.10 progressbar2 4.2.0 prometheus-client 0.17.1 promise 2.3 prompt-toolkit 3.0.39 prophet 1.1.4 proto-plus 1.22.3 protobuf 3.20.3 psutil 5.9.5 psycopg2 2.9.7 ptyprocess 0.7.0 py-cpuinfo 9.0.0 py4j 0.10.9.7 pyarrow 9.0.0 pyasn1 0.5.0 pyasn1-modules 0.3.0 pycocotools 2.0.7 pycparser 2.21 pyct 0.5.0 pydantic 2.3.0 pydantic_core 2.6.3 pydata-google-auth 1.8.2 pydot 1.4.2 pydot-ng 2.0.0 pydotplus 2.0.2 PyDrive 1.3.1 PyDrive2 1.6.3 pyerfa 2.0.0.3 pygame 2.5.1 Pygments 2.16.1 PyGObject 3.42.1 PyJWT 2.3.0 pymc 5.7.2 PyMeeus 0.5.12 PyMuPDF 1.23.3 PyMuPDFb 1.23.3 
pymystem3 0.2.0 PyOpenGL 3.1.7 pyOpenSSL 23.2.0 pyparsing 3.1.1 pyperclip 1.8.2 pyproj 3.6.0 pyproject_hooks 1.0.0 pyshp 2.3.1 PySocks 1.7.1 pytensor 2.14.2 pytest 7.4.1 python-apt 0.0.0 python-box 7.1.1 python-dateutil 2.8.2 python-louvain 0.16 python-slugify 8.0.1 python-utils 3.7.0 pytz 2023.3.post1 pyviz_comms 3.0.0 PyWavelets 1.4.1 PyYAML 6.0.1 pyzmq 23.2.1 qdldl 0.1.7.post0 qudida 0.0.4 ratelim 0.1.6 referencing 0.30.2 regex 2023.6.3 requests 2.31.0 requests-oauthlib 1.3.1 requirements-parser 0.5.0 rich 13.5.2 rpds-py 0.10.2 rpy2 3.4.2 rsa 4.9 s3transfer 0.6.2 safetensors 0.3.3 scikit-image 0.19.3 scikit-learn 1.2.2 scipy 1.10.1 scooby 0.7.2 scs 3.2.3 seaborn 0.12.2 SecretStorage 3.3.1 Send2Trash 1.8.2 sentence-transformers 2.2.2 sentencepiece 0.1.99 setuptools 67.7.2 shapely 2.0.1 six 1.16.0 sklearn-pandas 2.2.0 smart-open 6.3.0 sniffio 1.3.0 snowballstemmer 2.2.0 sortedcontainers 2.4.0 soundfile 0.12.1 soupsieve 2.5 soxr 0.3.6 spacy 3.6.1 spacy-legacy 3.0.12 spacy-loggers 1.0.4 Sphinx 5.0.2 sphinxcontrib-applehelp 1.0.7 sphinxcontrib-devhelp 1.0.5 sphinxcontrib-htmlhelp 2.0.4 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-qthelp 1.0.6 sphinxcontrib-serializinghtml 1.1.9 SQLAlchemy 2.0.20 sqlparse 0.4.4 srsly 2.4.7 statsmodels 0.14.0 sympy 1.12 tables 3.8.0 tabulate 0.9.0 tbb 2021.10.0 tblib 2.0.0 tenacity 8.2.3 tensorboard 2.12.3 tensorboard-data-server 0.7.1 tensorflow 2.12.0 tensorflow-datasets 4.9.2 tensorflow-estimator 2.12.0 tensorflow-gcs-config 2.12.0 tensorflow-hub 0.14.0 tensorflow-io-gcs-filesystem 0.33.0 tensorflow-metadata 1.14.0 tensorflow-probability 0.20.1 tensorstore 0.1.41 termcolor 2.3.0 terminado 0.17.1 text-unidecode 1.3 textblob 0.17.1 tf-slim 1.1.0 thinc 8.1.12 threadpoolctl 3.2.0 tifffile 2023.8.30 tinycss2 1.2.1 tokenizers 0.13.3 toml 0.10.2 tomli 2.0.1 toolz 0.12.0 torch 2.0.1+cu118 torchaudio 2.0.2+cu118 torchdata 0.6.1 torchsummary 1.5.1 torchtext 0.15.2 torchvision 0.15.2+cu118 tornado 6.3.2 tqdm 4.66.1 traitlets 5.7.1 traittypes 0.2.1 transformers 4.33.1 triton 2.0.0 tweepy 4.13.0 typer 0.9.0 types-setuptools 68.1.0.1 typing_extensions 4.7.1 typing-inspect 0.9.0 tzlocal 5.0.1 uc-micro-py 1.0.2 uritemplate 4.1.1 urllib3 1.26.16 vega-datasets 0.9.0 wadllib 1.3.6 wasabi 1.1.2 wcwidth 0.2.6 webcolors 1.13 webencodings 0.5.1 websocket-client 1.6.2 Werkzeug 2.3.7 wheel 0.41.2 widgetsnbextension 3.6.5 wordcloud 1.9.2 wrapt 1.14.1 xarray 2023.7.0 xarray-einstats 0.6.0 xgboost 1.7.6 xlrd 2.0.1 xyzservices 2023.7.0 yarl 1.9.2 yellowbrick 1.5 yfinance 0.2.28 zict 3.0.0 zipp 3.16.2
Open a fresh Colab notebook and run each command.
# these are the minimum prerequisites
pip install langchain
pip install sentence-transformers
Code:
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
    model_name="intfloat/multilingual-e5-large"
)
3
3
77,063,818
2023-9-8
https://stackoverflow.com/questions/77063818/how-to-wait-for-user-input-for-5-seconds-in-python
I am trying to process the contents of a file. I need ideas on how to interact with the user and wait for a press of a key ("s") to write the current line into another file. Thank you in advance.
with open(srcFile) as f:
    line = f.readline().strip()
    # - do something with line
    # - part where I need help:
    #   if "s" is pressed, write the current line to a file
    #   if "s" is not pressed after 5 seconds, continue with the next line in srcFile
You can use the time and keyboard packages for this case:
import time
import keyboard

start_time = time.time()  # Start the timer
key_pressed = False
while True:
    if keyboard.is_pressed("s"):
        key_pressed = True
        break
    if time.time() - start_time >= 5:
        break

if key_pressed:
    pass  # write the current line to file
You might test with the above code to see the effect.
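If installing the keyboard package is not an option (on Linux it needs root), a standard-library alternative on Unix-like systems is to poll stdin with select -- a sketch, with the caveat that the user has to press Enter after typing "s":
import select
import sys

print("Press 's' then Enter within 5 seconds to keep this line...")
ready, _, _ = select.select([sys.stdin], [], [], 5)    # wait at most 5 seconds
if ready and sys.stdin.readline().strip() == "s":
    print("writing current line")    # write the current line to the output file here
else:
    print("skipping")                # timed out or a different key was entered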
2
2
77,062,348
2023-9-7
https://stackoverflow.com/questions/77062348/modulenotfounderror-no-module-named-pycaret-arules
I want to use the Association Rule Mining package from PyCaret. I installed the same using: pip install pycaret[full] However, when I try to import the arules module, I get the ModuleNotFoundError: >>> from pycaret.arules import * Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'pycaret.arules' What should I do to be able to use the association rule mining module?
Arules module was removed from Pycaret versions 3.x. If you really need to use Arules, then you need to downgrade your Pycaret to at least 2.3.10. I tested with Pycaret 2.3.5 and it works fine: pip install pycaret==2.3.5
2
2
77,061,365
2023-9-7
https://stackoverflow.com/questions/77061365/creating-a-list-of-dictionaries-from-two-nested-lists-with-repeated-keys
I have two key/value inputs, both in the form of nested lists. Any given key may have multiple values, as with K1 and K2 here: key_in = [['K1', 'K1', 'K2', 'K3'], ['K1', 'K2', 'K2', 'K3']] val_in = [['V1', 'V2', 'V3', 'V4'], ['V5', 'V6', 'V7', 'V8']] I would like to merge / consolidate these into a list of nested dicts: out = [{'K1': {'V1', 'V2'}, 'K2': {'V3'}, 'K3': {'V4'}}, {'K1': {'V5'}, 'K2': {'V6', 'V7'}, 'K3': {'V8'}}] The closest thing I know how to do is dict(zip(k, v)) for k, v in zip(key_in, val_in), but that doesn't work for one-to-many key:vals. In case this is something that could be done more efficiently in pandas, my ultimate goal is the creation of a dataframe like below. (I mention efficiency since my source data is many millions of rows.) K1 K2 K3 0 {V1, V2} {V3} {V4} 1 {V5} {V6, V7} {V8}
You can use a double zip-loop to feed the nested dicts with sets and form your list : from collections import defaultdict out = [] for ink, inv in zip(key_in, val_in): d = defaultdict(set) for k, v in zip(ink, inv): d[k].add(v) out.append(dict(d)) Another variant, as suggest by @mozway, you can use : out = [] for ink, inv in zip(key_in, val_in): d = {} for k, v in zip(ink, inv): d.setdefault(k, set()).add(v) out.append(d) Output : >>> print(out) [{'K1': {'V1', 'V2'}, 'K2': {'V3'}, 'K3': {'V4'}}, {'K1': {'V5'}, 'K2': {'V6', 'V7'}, 'K3': {'V8'}}] To make a DataFrame, you simply call the constructor : >>> print(pd.DataFrame(out)) K1 K2 K3 0 {V2, V1} {V3} {V4} 1 {V5} {V7, V6} {V8}
2
3
77,061,417
2023-9-7
https://stackoverflow.com/questions/77061417/how-to-make-long-format-using-one-column-has-a-list-of-items-in-pandas-python
I am working with a data frame that has one column, values, with a list of items within it. Below is data frame I have: | uniqId | DeptId | Date | values | | -------- | ------- | ---------- | ---------- | | 1234 | BKNG | 2023-09-05 | [VGM, FJK] | | 2534 | FINA | 2023-09-04 | [GTD, WEH] | | 3469 | ASKG | 2023-09-05 | [MUG, PIS] | And I want to achieve the output as: | values_1 | uniqId | DeptId | Date | values | | -------- | -------- | ------- | ---------- | ---------- | | VGM | 1234 | BKNG | 2023-09-05 | [VGM, FJK] | | FJK | 1234 | BKNG | 2023-09-05 | [VGM, FJK] | | GTD | 2534 | FINA | 2023-09-04 | [GTD, WEH] | | WEH | 2534 | FINA | 2023-09-04 | [GTD, WEH] | | MUG | 3469 | ASKG | 2023-09-05 | [MUG, PIS] | | PIS | 3469 | ASKG | 2023-09-05 | [MUG, PIS] | The original values column stays with the data frame and create new column called values_1, with each item from the list in Values listed as its own column with rest of the data columns as is. Can someone please help me with this? I know that pandas melt or pandas long function can be utilized but I was not sure how to apply to a column that contains list. https://www.geeksforgeeks.org/python-pandas-melt/ Thanks!
Try: cp = df["values"] df = df.explode("values") df = df.rename(columns={"values": "values_1"}).assign(values=cp) print(df) Prints: uniqId DeptId Date values_1 values 0 1234 BKNG 2023-09-05 VGM [VGM, FJK] 0 1234 BKNG 2023-09-05 FJK [VGM, FJK] 1 2534 FINA 2023-09-04 GTD [GTD, WEH] 1 2534 FINA 2023-09-04 WEH [GTD, WEH] 2 3469 ASKG 2023-09-05 MUG [MUG, PIS] 2 3469 ASKG 2023-09-05 PIS [MUG, PIS]
2
2
77,060,728
2023-9-7
https://stackoverflow.com/questions/77060728/sorting-a-list-in-specific-order-using-substrings
I have a list of some strings that looks like this: my_list = ['0123_abcd', '1234_bcde', '2345_cdef', '3456_defg', '4567_efgh'] I want to sort this list by using another list containing only substrings of the first list: ordering = ['3456', '2345', '0123'] Every Element which is in my_list but not in ordering shall be set to the end of the list. So my expected result would be: result_list = ['3456_defg', '2345_cdef', '0123_abcd', '1234_bcde', '4567_efgh'] My code so far: result_list = sorted(my_list, key=lambda e: (ordering.index(e),e) if e in ordering else (len(ordering),e)) This works fine, as long as my_list looks like this: my_list = ['0123', '1234', '2345', '3456', '4567']. If I add the underscore and some letters after the numbers in my_list it does not work. I don't get any error message but the ordering is wrong. So I would guess I have to use something like .split('_')[0] somewhere in my line of code. But result_list = sorted(my_list, key=lambda e: e.split("_")[0] (ordering.index(e),e) if e in ordering else (len(ordering),e)) also does not work. How do I get this list sorting right?
If the strings in ordering are unique, then this is a possible solution: ordering_dict = {s: i for i, s in enumerate(ordering)} sorted_list = sorted( my_list, key=lambda x: ordering_dict.get(x.split("_")[0], len(ordering_dict)), ) This has the advantage over other methods to not have quadratic complexity!
2
2
77,059,223
2023-9-7
https://stackoverflow.com/questions/77059223/opencv-get-framerate-and-frame-timestamp-from-live-webcam-stream
I'm struggling a bit trying to read/set the fps for my webcam and to read the timestamp for specific frames captured from my webcam. Specifically when I try to use vc.get(cv2.CAP_PROP_POS_MSEC), vc.get(cv2.CAP_PROP_FPS), vc.get(cv2.CAP_PROP_FRAME_COUNT) they return respectively -1, 0, -1. Clearly there's something that I'm missing. Can someone help? The code looks like this: import os import time import cv2 import numpy as np [...] # Create a new VideoCapture object vc = cv2.VideoCapture(0, cv2.CAP_DSHOW) vc.set(cv2.CAP_PROP_FRAME_WIDTH, 1280) vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 720) # Initialise variables to store current time difference as well as previous time call value previous = time.time() delta = 0 n = len(os.listdir("directory")) # Keep looping while True: timem = vc.get(cv2.CAP_PROP_POS_MSEC) fps = vc.get(cv2.CAP_PROP_FPS) total_frames = vc.get(cv2.CAP_PROP_FRAME_COUNT) print(timem, fps, total_frames) # Get the current time, increase delta and update the previous variable current = time.time() delta += current - previous previous = current # Check if 3 (or some other value) seconds passed if delta > 3: # Operations on image # Reset the time counter delta = 0 _, img = vc.read() [...] # press esc to exit if cv2.waitKey(20) == 27: break edit: if I remove cv2.CAP_DSHOW, it works, but then I cannot use CAP_PROP_FRAME_WIDTH
Apparently the OpenCV backend for cameras on your operating system (DSHOW) does not keep track of frame timestamps. After a read(), just use time.perf_counter() or a sibling function. It'll be close enough, unless you throttle the reading, in which case the frames would be stale. You could open an issue on OpenCV's github and request such a feature for DSHOW/MSMF. One would expect such a timestamp to represent the time the frame was taken, not the time it was finally read by the user program.
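A minimal sketch of that approach, assuming the capture setup from the question (the timestamp is taken right after read(), so it reflects when the frame reached Python rather than when the sensor captured it):
import time
import cv2

vc = cv2.VideoCapture(0, cv2.CAP_DSHOW)
vc.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

start = time.perf_counter()
frames = 0
while True:
    ok, img = vc.read()
    if not ok:
        break
    timestamp = time.perf_counter() - start  # seconds since capture started
    frames += 1
    fps_estimate = frames / timestamp if timestamp > 0 else 0.0
    print(f"t={timestamp:.3f}s  ~{fps_estimate:.1f} fps")
    if cv2.waitKey(20) == 27:
        break
vc.release()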
2
2
77,060,299
2023-9-7
https://stackoverflow.com/questions/77060299/join-elements-of-a-nested-list-based-on-condition
I have a nested array called element_text in the form of for example: [[1, 'the'], [1, 'quick brown'], [2, 'fox jumped'], [2, 'over'], [2, 'the'], [3, 'lazy goat']] And would like to concatenate the elements in the array and return a new array called page_text as so: [[1, 'the quick brown'], [2, 'fox jumped over the'], [3, 'lazy goat']] So, if the first number is the same, join the second text strings together with a space in between. I've tried: page_text = [] for i in element_text: #join the list of strings together if the page number is the same if i[0] == i[0]: text = " ".join(i[1]) page_text.append([i[0], text]) But this just returns the same array as what was there in the first place. Any help appreciated! Thanks, Carolina
Solution: You can use pandas by grouping the records by your number and joining all the strings together into a new column. import pandas as pd data = [[1, 'the'], [1, 'quick brown'], [2, 'fox jumped'], [2, 'over'], [2, 'the'], [3, 'lazy goat']] df = pd.DataFrame(data, columns=['num','text']) df['full_text'] = df.groupby(['num'])['text'].transform(lambda x : ' '.join(x)) df = df[['num','full_text']].drop_duplicates(subset='num') df.head() # num full_text #0 1 the quick brown #2 2 fox jumped over the #5 3 lazy goat
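If pulling in pandas feels heavy for this, a plain-Python sketch with itertools.groupby does the same thing; it assumes (as in the example) that rows with equal page numbers are adjacent in element_text:
from itertools import groupby

element_text = [[1, 'the'], [1, 'quick brown'], [2, 'fox jumped'],
                [2, 'over'], [2, 'the'], [3, 'lazy goat']]

page_text = [[num, ' '.join(text for _, text in group)]
             for num, group in groupby(element_text, key=lambda pair: pair[0])]

print(page_text)
# [[1, 'the quick brown'], [2, 'fox jumped over the'], [3, 'lazy goat']]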
3
3
77,059,938
2023-9-7
https://stackoverflow.com/questions/77059938/whats-the-logic-behind-cumsum-to-make-flags-compute-counts-and-form-groups
Without further ado, my input (s1) & expected-output (df) are below : #INPUT s1 = pd.Series(['a', np.nan, 'b', 'c', np.nan, np.nan, 'd', np.nan]).rename('col1') #EXPECTED-OUTPUT s2 = pd.Series([1, 2, 3, 3, 4, 4, 5, 6]).rename('col2') # flag the transition null>notnull or vice-versa s3 = pd.Series([0, 1, 0, 0, 2, 3, 0, 4]).rename('col3') # counter of the null values df = pd.concat([s1, s2, s3], axis=1) col1 col2 col3 0 a 1 0 1 NaN 2 1 2 b 3 0 3 c 3 0 4 NaN 4 2 5 NaN 4 3 6 d 5 0 7 NaN 6 4 I tried plenty of weird combinations of cumsum and masks but without any success. Likely that's because I don't have the basics of the logical thinking. What questions do I need to ask myself before starting to build the chain that will give me my series ? Any help would be greately appreciated, guys !
You can use isna with cumsum and where for "col3". For "col2" a classical ne+shift/cumsum: m = df['col1'].isna() # if the flag is different from the previous one, increment df['col2'] = m.ne(m.shift()).cumsum() # increment on each True, mask the False df['col3'] = m.cumsum().where(m, 0) Output: col1 col2 col3 0 a 1 0 1 NaN 2 1 2 b 3 0 3 c 3 0 4 NaN 4 2 5 NaN 4 3 6 d 5 0 7 NaN 6 4 Intermediates: col1 col2 col3 m m.shift() m.ne(m.shift()) m.cumsum() 0 a 1 0 False NaN True 0 1 NaN 2 1 True False True 1 2 b 3 0 False True True 1 3 c 3 0 False False False 1 4 NaN 4 2 True False True 2 5 NaN 4 3 True True False 3 6 d 5 0 False True True 3 7 NaN 6 4 True False True 4
2
2
77,049,666
2023-9-6
https://stackoverflow.com/questions/77049666/deploying-fastapi-in-azure
I am trying to deploy my application, which is built using Python and FastAPI for the backend and HTML for the frontend. Using the student login I created an app service and uploaded my code using GitHub. My project directory looks like:
Frontend/
|- file.html
|- x.css
FastAPI/
|- main.py
|- other.py
|- requirements.txt
In my .yml files, I changed the directory of the requirements.txt and the package in deploy to ./fastAPI. But the website URL shows the following message:
Hey, Python developers! Your app service is up and running. Time to take the next step and deploy your code.
And the GitHub workflow displays this error message:
Deployment Failed, Error: Failed to deploy web package to App Service. Conflict (CODE: 409)
I tried to use startup-command in the .yml file to start FastAPI; it didn't work and gave the following error message:
Deployment Failed, Error: startup-command is not a valid input for Windows web app or with publish-profile auth scheme.
I changed the configuration in the Azure portal to add WEBSITES_CONTAINER_STARTUP_COMMAND = uvicorn main:app --host 0.0.0.0 --port 80 to the application settings; that didn't work either.
I have deployed a simple FastAPI project with HTML and CSS as frontend:
My project structure:
- FastAPI
  - templates
    - index.html
  - static
    - style.css
  - main.py
  - requirements.txt
Create Azure App Service with Python as Runtime stack, Linux as OS and Consumption hosting Plan in Azure Portal:
Configure Deployment during the creation of the App Service. Click on Review+Create.
Once the App Service gets created, it will generate the workflow in your GitHub repository and start the deployment. Deployment status can be tracked in your GitHub repository=>Actions.
My workflow to deploy FastAPI:
name: Build and deploy Python app to Azure Web App - <web_app_Name>

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.10'

      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Install dependencies
        run: pip install -r requirements.txt

      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)

      - name: Upload artifact for deployment jobs
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: |
            .
            !venv/

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: python-app
          path: .

      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: '<web_app_Name>'
          slot-name: 'Production'
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_XXXXXXXXXXX }}
And deployed successfully:
You have to run the Startup command for FastAPI in App Service=>Settings=>Configuration:
gunicorn --bind=0.0.0.0 main:app
main is from the filename main.py and app is the object which I used in main.py.
References: Refer to my SO thread on a similar issue.
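Note that FastAPI is an ASGI application and gunicorn's default sync workers are WSGI-only, so if the plain gunicorn command above gives errors when requests come in, the commonly used variant pairs gunicorn with the uvicorn worker class (this assumes uvicorn is listed in requirements.txt and the entry point is the app object in main.py): gunicorn -w 2 -k uvicorn.workers.UvicornWorker --bind=0.0.0.0 main:app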
3
4
77,059,419
2023-9-7
https://stackoverflow.com/questions/77059419/split-up-column-value-into-empty-column-values-in-a-dataframe
I have a dataframe df:
columnA columnB columnC
A       A       10
A       B       NaN
A       C       20
B       A       30
B       C       NaN
A       D       NaN
D       C       15
How can I fill the NaN values so that the next non-NaN value is divided by the number of rows it covers (the NaN rows before it plus the row holding the value itself) and spread across those rows? So in my case the output would be:
columnA columnB columnC
A       A       10
A       B       10
A       C       10
B       A       30
B       C       5
A       D       5
D       C       5
Further explanation: in this case 20 is divided by 2, which gives 10, and 15 is divided by 3, which gives 5.
Another one-chained variation using pd.Series.shift: df['columnC'] = (df['columnC'].fillna(0).groupby(df['columnC'].notna().cumsum() .shift().fillna(0)).transform('mean')) columnA columnB columnC 0 A A 10.0 1 A B 10.0 2 A C 10.0 3 B A 30.0 4 B C 5.0 5 A D 5.0 6 D C 5.0
2
3
77,059,331
2023-9-7
https://stackoverflow.com/questions/77059331/y-parameters-to-z-parameters-in-python
Is there any function in python that converts y parameters to z parameters like matlab's y2z function? Here is matlab's y2z() function documentation.
No built-in Python function for Y-to-Z conversion like MATLAB's y2z. However, you can easily implement it: the Z-parameter matrix is simply the matrix inverse of the Y-parameter matrix (V = Z*I and I = Y*V, so Z = inv(Y)), which is what MATLAB's y2z computes.
MATLAB Code
Y = [1, 2; 3, 4];
Z = y2z(Y);

disp('Y Parameters:');
disp(Y);
disp('Z Parameters:');
disp(Z);
Python Code
You can use NumPy for the matrix operations.
import numpy as np

def y2z(Y):
    # Z-parameters are the matrix inverse of the Y-parameters,
    # for 2-port and n-port networks alike
    return np.linalg.inv(Y)

Y = np.array([[1, 2], [3, 4]], dtype=float)
Z = y2z(Y)

print("Y Parameters:")
print(Y)
print("Z Parameters:")
print(Z)
Run these snippets in their respective environments to compare. Both should yield the same Z-parameters.
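As a quick sanity check on the conversion (assuming the Y and Z arrays from the snippet above are still in scope), multiplying them should give the identity matrix, since Z-parameters are by definition the matrix inverse of Y-parameters:
print(np.allclose(Y @ Z, np.eye(2)))  # True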
2
2
77,058,813
2023-9-7
https://stackoverflow.com/questions/77058813/can-a-pytest-fixture-know-whether-a-test-has-passed-or-failed
I'm writing some tests in pytest, and I'd like to make the fixture do something only if the test passes (update a value in a DB). It doesn't seem like fixtures in general know about whether the tests they run pass or fail -- is there a way to make them? import pytest @pytest.fixture def my_fixture(request): ... yield [if test passes]: db.write(new_value) def test_foo(my_fixture): assert False Trying to catch an exception doesn't work as pytest has its own exception handling which handles things before we'd hit my own: @pytest.fixture def my_fixture(request): ... try: yield except Exception as e: raise e else: db.write('new_value') I've also tried inspecting the pytest request object, but it doesn't seem to have what I want I suspect doing things this way doesn't follow the philosophy of pytest, so I'll probably end up doing things a different way in reality, but I'm interested to know the answer nonetheless :)
You can introspect the test context by requesting the request object in the fixture. There you can get request.session.testsfailed @pytest.fixture def my_fixture(request): failed_count = request.session.testsfailed yield if request.session.testsfailed > failed_count: print("this test failed")
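An alternative sketch, following the pattern from the pytest docs on making test result information available in fixtures: attach the test report to the item from a conftest.py hook, then inspect it in the fixture (the fixture name my_fixture is kept from the question; the print stands in for the db.write call):
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # exposes item.rep_setup, item.rep_call and item.rep_teardown
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def my_fixture(request):
    yield
    if hasattr(request.node, "rep_call") and request.node.rep_call.passed:
        print("this test passed")  # e.g. db.write(new_value) from the question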
2
4
77,058,208
2023-9-7
https://stackoverflow.com/questions/77058208/pyopengl-flickering-points-and-lines
I'm trying to visualize 3D human keypoints in PyOpenGL, the code works fine if one human is present. more than one, the points and lines starts to flicker. while True: # Grab an image time_temp=time.time() def points(keypoints_3d): glEnable(GL_POINT_SMOOTH) glEnable(GL_BLEND) glEnable(GL_FRAMEBUFFER_SRGB) glPointSize(10) glBegin(GL_POINTS) glColor3d(1, 1, 1) for i in range(len(keypoints_3d)): glVertex3d(keypoints_3d[i][0]/100, keypoints_3d[i][1]/100, keypoints_3d[i][2]/100) #print("JNW",keypoints_3d[i][0]) glEnd() def lines(keypoints_3d): glEnable(GL_POINT_SMOOTH) glEnable(GL_BLEND) glEnable(GL_FRAMEBUFFER_SRGB) glPointSize(10) glBegin(GL_LINES) glColor3d(1, 1, 1) for i in range(len(keypoints_3d)): if(i<5): glVertex3d(keypoints_3d[i][0]/100, keypoints_3d[i][1]/100, keypoints_3d[i][2]/100) glVertex3d(keypoints_3d[i+1][0]/100, keypoints_3d[i+1][1]/100, keypoints_3d[i+1][2]/100) #print("JNW",keypoints_3d[i][0]) glEnd() for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() quit() glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) for body in bodies_list: points(keypoints_3d) lines(keypoints_3d) pygame.display.flip() pygame.time.wait(10) I'm sure there are better ways to visualize these points and lines in PyopenGl. Can someone guide me?
You need to update the display once after you have drawn all the geometry, instead of updating it after each geometry. So call pygame.display.flip() after the loop, but not in the loop: while True: # [...] # clear display glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) # draw all the geometry for body in bodies_list: points(keypoints_3d) lines(keypoints_3d) # update the display pygame.display.flip() pygame.time.wait(10)
2
2
77,056,035
2023-9-7
https://stackoverflow.com/questions/77056035/why-would-you-want-to-create-more-than-one-event-loops-in-asyncio
Why not just use the default one always? are there any usecases for creating multiple event loops?
I don't know the exact reason behind your question, but there are certainly use cases for multiple loops, for example to increase scalability. CPU-bound code interferes with the event loop of the process it runs in and reduces throughput, but it cannot disturb event loops running in other processes. So a common pattern is to run one event loop per process by combining asyncio with multiprocessing; of course you then have to think about locking and inter-process communication.
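A small sketch of that idea (assuming plain multiprocessing plus asyncio.run, so each worker process creates and runs its own independent event loop):
import asyncio
from multiprocessing import Process

async def serve(worker_id):
    # blocking/CPU-heavy work in one process cannot stall
    # the coroutines running in the other processes
    for i in range(3):
        await asyncio.sleep(0.1)
        print(f"worker {worker_id}: tick {i}")

def run_worker(worker_id):
    asyncio.run(serve(worker_id))  # fresh event loop per process

if __name__ == "__main__":
    procs = [Process(target=run_worker, args=(n,)) for n in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()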
2
1
77,056,341
2023-9-7
https://stackoverflow.com/questions/77056341/how-do-you-modify-styled-data-frame-in-pandas
I have this data frame: df Server Env. Model Percent_Utilized server123 Prod Cisco. 50 server567. Prod Cisco. 80 serverabc. Prod IBM. 100 serverdwc. Prod IBM. 45 servercc. Prod Hitachi. 25 Avg 60 server123Uat Uat Cisco. 40 server567u Uat Cisco. 30 serverabcu Uat IBM. 80 serverdwcu Uat IBM. 45 serverccu Uat Hitachi 15 Avg 42 I have style applied to this df as follows: def color(val): if val > 80: color = 'red' elif val > 50 and val <= 80: color = 'yellow' else: color = 'green' return 'background-color: %s' % color df_new = df.style.applymap(color, subset=["Percent_Utilized"]) I need to add % at the end of the numbers on Percent_Utilized columns: resulting data frame need to look something like this: df_new Server Env. Model Percent_Utilized server123 Prod Cisco. 50% server567. Prod Cisco. 80% serverabc. Prod IBM. 100% serverdwc. Prod IBM. 45% servercc. Prod Hitachi. 25% Avg 60% server123Uat Uat Cisco. 40% server567u Uat Cisco. 30% serverabcu Uat IBM. 80% serverdwcu Uat IBM. 45% serverccu Uat Hitachi 15% Avg 42% when I do this: df_new['Percent_Utilized'] = df_new['Percent_Utilized'].astype(str) + '%' I get this error: TypeError: 'Styler" object is not suscriptable.
df.style returns a pd.Styler object, not a pd.DataFrame object. So you cannot use .astype. You can use Styler.format like this: df_new = df.style.applymap(color, subset=["Percent_Utilized"])\ .format('{:.0f}%', subset=['Percent_Utilized'], na_rep='nan') You'd get the percent-formatted values with the background colors still applied; ignore the Avg rows, as they are not read properly by read_clipboard.
2
2
77,053,457
2023-9-6
https://stackoverflow.com/questions/77053457/how-could-i-extract-all-digits-and-strings-from-the-first-list-into-other-list
I have a first list called OGList that holds a list of strings, and I want to extract all the digits and all the letters from OGList into a list of strings and a list of digits. My Input: OGList = ['A10', 'BMW320i', 'Nissan NSX200', 'Benz 220c'] numlist = [] strlist = [] otherlist = [] for i in OGList: for x in i: if x.isalpha(): strlist.append(x) elif x.isdigit(): numlist.append(x) else: otherlist.append(x) print(numlist) print(strlist) My Output: ['1', '0', '3', '2', '0', '2', '0', '0', '2', '2', '0'] ['A', 'B', 'M', 'W', 'i', 'N', 'i', 's', 's', 'a', 'n', 'N', 'S', 'X', 'B', 'e', 'n', 'z', 'c'] But I want the characters from each item to stay together, as in my desired output. My desired output is: ['10', '320', '200', '220'] ['A', 'BMWi', 'NissanNSX', 'Benzc'] How can I fix my code?
Since you tagged the question with regex, here's an alternative approach using the same to separate the three groups: import re OGList = ['A10', 'BMW320i', 'Nissan NSX200', 'Benz 220c'] types = {"chars": "[a-zA-Z]", "nums": "[0-9]", "other": "[^a-zA-Z0-9]"} res = {k: [''.join(re.findall(types[k], s)) for s in OGList] for k in types} Afterwards, res is a dict with the different lists in it: { 'chars': ['A', 'BMWi', 'NissanNSX', 'Benzc'], 'nums': ['10', '320', '200', '220'], 'other': ['', '', ' ', ' '] } Note that there's not elif and else here, you have to make sure the patterns are mutually exclusive and covering all the cases, which negating ^ the other groups should do nicely.
2
5
77,049,524
2023-9-6
https://stackoverflow.com/questions/77049524/selenium-isnt-using-my-own-chrome-driver-i-set-and-is-using-the-default-one
So I'm trying to get the source of a webpage in Python, and for compatability reasons, I have to use Google Chrome 114 instead of the latest 116. I used a service to create it and downloaded my own version that should work, however it just seems to be completely ignoring it and using my system one. from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By chrome_service = Service("/Users/*/Downloads/chromedriver_mac64/chromedriver") # >:( driver = webdriver.Chrome(service=chrome_service) url = "https://google.com" driver.get(url) driver.implicitly_wait(10) website_source = driver.page_source print(website_source) driver.quit() Output: Traceback (most recent call last): File "/Users/*/Desktop/main.py", line 17, in <module> driver = webdriver.Chrome(service=chrome_service, options=chrome_options) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 84, in __init__ super().__init__( File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__ super().__init__( File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__ self.start_session(capabilities, browser_profile) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute self.error_handler.check_response(response) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114 Current browser version is 116.0.5845.140 with binary path /Applications/Google Chrome.app/Contents/MacOS/Google Chrome Stacktrace: 0 chromedriver 0x000000010f7436b8 chromedriver + 4937400 1 chromedriver 0x000000010f73ab73 chromedriver + 4901747 2 chromedriver 0x000000010f2f8616 chromedriver + 435734 3 chromedriver 0x000000010f32ad10 chromedriver + 642320 4 chromedriver 0x000000010f32618a chromedriver + 622986 5 chromedriver 0x000000010f32267c chromedriver + 607868 6 chromedriver 0x000000010f369a08 chromedriver + 899592 7 chromedriver 0x000000010f368ebf chromedriver + 896703 8 chromedriver 0x000000010f35fde3 chromedriver + 859619 9 chromedriver 0x000000010f32dd7f chromedriver + 654719 10 chromedriver 0x000000010f32f0de chromedriver + 659678 11 chromedriver 0x000000010f6ff2ad chromedriver + 4657837 12 chromedriver 0x000000010f704130 chromedriver + 4677936 13 chromedriver 0x000000010f70adef chromedriver + 4705775 14 chromedriver 0x000000010f70505a chromedriver + 4681818 15 chromedriver 0x000000010f6d792c chromedriver + 4495660 16 chromedriver 0x000000010f722838 chromedriver + 4802616 17 chromedriver 0x000000010f7229b7 chromedriver + 4802999 18 chromedriver 0x000000010f73399f chromedriver + 4872607 19 libsystem_pthread.dylib 0x00007ff8185474e1 _pthread_start + 125 20 libsystem_pthread.dylib 0x00007ff818542f6b thread_start + 15
You can use https://github.com/seleniumbase/SeleniumBase to mix any Chrome browser version with any chromedriver version. After pip install seleniumbase, you can run the following script with python to force a specific chromedriver version for the already-installed Chrome version: from seleniumbase import Driver driver = Driver(browser="chrome", driver_version="114") driver.get("https://google.com") driver.quit() Here's the output on the first run if you don't already have chromedriver 114 on your PATH: Warning: chromedriver update needed. Getting it now: *** chromedriver to download = 114.0.5735.90 (Legacy Version) Downloading chromedriver_mac_arm64.zip from: https://chromedriver.storage.googleapis.com/114.0.5735.90/chromedriver_mac_arm64.zip ... Download Complete! Extracting ['chromedriver'] from chromedriver_mac_arm64.zip ... Unzip Complete! The file [chromedriver] was saved to: ~/seleniumbase/drivers/chromedriver Making [chromedriver 114.0.5735.90] executable ... [chromedriver 114.0.5735.90] is now ready for use!
3
2
77,051,578
2023-9-6
https://stackoverflow.com/questions/77051578/convert-dataframe-of-dictionary-entries-to-dataframe-of-all-entries-based-on-exi
I have a pandas dataframe that consists of an id and an associated count of different encoded words. For instance: Original = pd.DataFrame(data=[[1,'1:2,2:3,3:1'],[2,'2:2,4:3']], columns=['id','words']) I have a dictionary that has the mapping to the actual words, for instance: WordDict = {1:'A',2:'B',3:'C',4:'D'} What I would like to do is create a new dataframe that maps the counts to columns for all possible words, so it would look something like: Final =pd.DataFrame(data=[[1,2,3,1,0],[2,0,2,0,3]], columns=['id','A','B','C','D']).set_index('id') I know I can split the 'words' column of the original into separate columns, and can create a dataframe from WordDict so that it has all possible columns, but could not figure out how to create the mapping.
You can use a regex, a list comprehension, and the DataFrame constructor: import re Final = pd.DataFrame([{WordDict.get(int(k), None): v for k,v in re.findall('([^:,]+):([^:,]+)', s)} for s in Original['words']], index=Original['id'] ).fillna(0).astype(int) Or with split: Final = pd.DataFrame([{WordDict.get(int(k), None): v for x in s.split(',') for k,v in [x.split(':')]} for s in Original['words']], index=Original['id'] ).fillna(0).astype(int) Or ast.literal_eval: from ast import literal_eval Final = pd.DataFrame([{WordDict.get(k, None): v for k,v in literal_eval(f'{{{s}}}').items()} for s in Original['words']], index=Original['id'] ).fillna(0, downcast='infer') Output: A B C D id 1 2 3 1 0 2 0 2 0 3
4
4