question_id: int64 (values 59.5M to 79.4M)
creation_date: string (lengths 8 to 10)
link: string (lengths 60 to 163)
question: string (lengths 53 to 28.9k)
accepted_answer: string (lengths 26 to 29.3k)
question_vote: int64 (values 1 to 410)
answer_vote: int64 (values -9 to 482)
76,228,417
2023-5-11
https://stackoverflow.com/questions/76228417/cannot-import-name-event-type-opened-from-watchdog-events
I'm trying to make a REST API (beginner here), but when I tried to initialize the server from this code: from flask import Flask app = Flask(__name__) if __name__=='__main__': app.run(debug=True, port=4000) I get this error in the prompt: from watchdog.events import EVENT_TYPE_OPENED ImportError: cannot import name 'EVENT_TYPE_OPENED' from 'watchdog.events' (C:\ ********* \Python\Python310\lib\site-packages\watchdog\events.py) I'm expecting something like this (Min 8:27): https://www.youtube.com/watch?v=GMppyAPbLYk&ab_channel=TechWithTim
Try to do the following: pip install --upgrade watchdog
20
42
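A quick way to confirm the upgrade worked is to print the installed watchdog version and retry the exact import that was failing; a minimal sketch (it only assumes the watchdog package is installed):

from importlib.metadata import version

print("watchdog version:", version("watchdog"))
try:
    from watchdog.events import EVENT_TYPE_OPENED  # the name Flask's reloader imports
    print("EVENT_TYPE_OPENED found; the reloader should start now.")
except ImportError:
    print("Still missing; re-run: pip install --upgrade watchdog")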
76,217,189
2023-5-10
https://stackoverflow.com/questions/76217189/create-xarray-dataset-with-observations-and-averages-that-has-combined-index
Suppose I have the following DataArray containing observations for different locations over time: import numpy as np import pandas as pd import xarray as xr np.random.seed(42) data = xr.DataArray( np.random.randint(1,100, (36, 3)), dims=("time", "location"), coords={ "time": pd.date_range("2022-01-01", periods=36, freq="10D"), "location": ["A", "B", "C"] }, name="observations" ) and now I calculate the monthly average and combine it with the observations into a dataset: monthly_avg = data.groupby("time.month").mean() data = data.to_dataset() data["average"] = monthly_avg which gives me a combined dataset. How can I set the indices correctly (if possible) so that when I run: data.sel(time="2022-01-01") I get a subset of the dataset for one time, all locations and one monthly average (which corresponds to the selected time)? At the moment, when I run this, all monthly averages are returned for that timestep. Conversely, when I run data.sel(month=1) I'd like a subset with only the timesteps that are in January.
To get the selection return what you want, I would first compute the monthly averages and repeat them to match the original time-dimension. Then I would create a multi-index, such that you can select either the specific date or the month. #setup test data import numpy as np import pandas as pd import xarray as xr np.random.seed(42) data = xr.DataArray( np.random.randint(1,100, (36, 3)), dims=("time", "location"), coords={ "time": pd.date_range("2022-01-01", periods=36, freq="10D"), "location": ["A", "B", "C"] }, name="observations" ) #compute monthly array and repeat with list comprehension data=data.to_dataset() monthly_avg = data.groupby("time.month").mean()['observations'].values data['average']=(('time','location'),np.array([monthly_avg[i-1,:] for i in data.time.dt.month])) #add month and create multiindex data['month']=data.time.dt.month data=data.set_index(day_month=['time','month']) You can then run the selection to get what you want. print(data.sel(time="2022-01-01")) <xarray.Dataset> Dimensions: (location: 3, month: 1) Coordinates: * location (location) <U1 'A' 'B' 'C' * month (month) int64 1 time <U10 '2022-01-01' Data variables: observations (month, location) int64 52 93 15 average (month, location) float64 70.5 82.25 33.75 print(data.sel(month=1)) <xarray.Dataset> Dimensions: (location: 3, time: 4) Coordinates: * location (location) <U1 'A' 'B' 'C' * time (time) datetime64[ns] 2022-01-01 2022-01-11 ... 2022-01-31 month int64 1 Data variables: observations (time, location) int64 52 93 15 72 61 21 83 87 75 75 88 24 average (time, location) float64 70.5 82.25 33.75 ... 70.5 82.25 33.7 This gives repeated values for the second command. Maybe there is a better way to set up the multi-index. You can have a look at the pandas multiindex documentation: https://pandas.pydata.org/docs/user_guide/advanced.html: or into stack/unstack in xarray: https://xarray.pydata.org/en/v0.7.2/reshaping.html#stack-and-unstack in case you haven't done so before.
3
2
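A simpler alternative sketch (not from the accepted answer, and reusing the data array defined in the question): keep the monthly averages on their own month dimension and look them up from the selected timestamp, instead of building a multi-index.

import numpy as np
import pandas as pd
import xarray as xr

np.random.seed(42)
data = xr.DataArray(
    np.random.randint(1, 100, (36, 3)),
    dims=("time", "location"),
    coords={"time": pd.date_range("2022-01-01", periods=36, freq="10D"),
            "location": ["A", "B", "C"]},
    name="observations",
)

ds = data.to_dataset()
ds["average"] = data.groupby("time.month").mean()  # stays on a separate "month" dimension

t = "2022-01-01"
obs = ds["observations"].sel(time=t)                            # one time, all locations
avg = ds["average"].sel(month=pd.Timestamp(t).month)            # the matching monthly average
jan = ds["observations"].where(ds.time.dt.month == 1, drop=True)  # only January timesteps
print(obs.values, avg.values)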
76,226,696
2023-5-11
https://stackoverflow.com/questions/76226696/fastapi-uvicorn-multithreading-how-to-make-web-app-to-work-with-many-reques
I'm new to Python development. (But I have doenet background) I do have a simple FastAPI application from fastapi import FastAPI import time import logging import asyncio import random app = FastAPI() r = random.randint(1, 100) logging.basicConfig(level="INFO", format='%(levelname)s | %(asctime)s | %(name)s | %(message)s') logging.info(f"Starting app {r}") @app.get("/") async def long_operation(): logging.info(f"Starting long operation {r}") await asyncio.sleep(1) time.sleep(4) # I know this is blocking and the endpoint marked as async, but I actually do have some blocking requests in my code. return r And I run the app using this comand: uvicorn "main:app" --workers 4 And the app starts 4 instances in different processes: INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started parent process [22112] INFO | 2023-05-11 12:32:43,544 | root | Starting app 17 INFO: Started server process [10180] INFO: Waiting for application startup. INFO: Application startup complete. INFO | 2023-05-11 12:32:43,579 | root | Starting app 58 INFO: Started server process [29592] INFO: Waiting for application startup. INFO: Application startup complete. INFO | 2023-05-11 12:32:43,587 | root | Starting app 12 INFO: Started server process [7296] INFO: Waiting for application startup. INFO: Application startup complete. INFO | 2023-05-11 12:32:43,605 | root | Starting app 29 INFO: Started server process [15208] INFO: Waiting for application startup. INFO: Application startup complete. Then I open the 3 browser tabs and start sending requests to the app as parallel as possible. And here is the log: INFO | 2023-05-11 12:32:50,770 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:32:55,774 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:00,772 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:05,770 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:10,790 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:15,779 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:20,799 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:25,814 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK INFO | 2023-05-11 12:33:30,856 | root | Starting long operation 29 INFO: 127.0.0.1:55031 - "GET / HTTP/1.1" 200 OK My observations: Only 1 process is working. Others do not handle requests (I have tried many times. It is always like that.) 4 different instances are created. My questions: Why only one process does work and others don't? If I want to have an in-memory cache. Can I achieve that? Can I run 1 process which can handle some amount of requests in parallel? Can this be somehow related to the fact that I do tests on Windows? UPDATE+SOLUTION: My real problem was the def/async def behavior (which I find very confusing). I was trying to solve the problem with a blocked thread using multiple workers which worked wired for my case as well (only 1 actually worked) and that's probably because I used a single browser with many tabs. Once I tested the service using JMeter it showed me that all workers were used. But the solution with multiple processes was not the right one for me. 
The better one was to try to unblock the single thread in a single process. At first, I used the following approach because I used an external library with SYNC IO function. However I have found an ASYNC variant of that function. So the problem was solved by using the correct library. Thank you all for your help.
Why only one process does work and others don't? I can't reproduce your observation. And in fact I don't know how you deduced that. If I change your logging format and add logging.basicConfig(level="INFO", format='%(process)d | %(levelname)s | %(asctime)s | %(name)s | %(message)s') (note the %(process)d which prints process' id) then I see in logs 19968 | INFO | 2023-05-11 12:45:53,297 | root | Starting long operation 35 21368 | INFO | 2023-05-11 12:45:56,112 | root | Starting long operation 90 5268 | INFO | 2023-05-11 12:45:56,626 | root | Starting long operation 3 22024 | INFO | 2023-05-11 12:45:57,032 | root | Starting long operation 19 5268 | INFO | 2023-05-11 12:45:57,416 | root | Starting long operation 3 22024 | INFO | 2023-05-11 12:45:57,992 | root | Starting long operation 19 after spawning multiple requests in parallel. Is it possible that you've incorrectly fired your requests? Not in parallel? Anyway yes, all workers are utilized. The exact way they are chosen is however an implementation detail. If I want to have in-memory cache. Can I acheave that? You mean shared between workers? Not really. You can do some cross-process communication (e.g. shared memory), but this is not simple to do and maintain. Generally we would use an in-memory cache per process. Unless you are limited by memory, in which case it becomes a problem, indeed. Can I run 1 process which can handle some amount of requests in parallel? I'm not sure I get your question. You can run uvicorn with --workers 1 if you want, no problem. Python's default async runtime is single threaded though, so you won't get true parallelism. But instead concurrency, similar to how JavaScript works. And therefore you need to be careful, you have to avoid blocking calls like time.sleep and use non-blocking calls like asyncio.sleep. Well, with async programming you always have to be careful when doing that, regardless of how many processes you spawn. Can this be somehow related to the fact that I do tests on Windows? No, this is unrelated to the operating system. This design is due to the major flaw of Python itself: it has GIL (Global Interpreter Lock) which makes threads a lot less useful compared to other runtimes like dotnet/C#. In Python true parallelism is achieved through subprocesses.
5
5
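To make the connection to the asker's own fix concrete, here is a minimal sketch of the two usual ways to keep a single worker responsive while calling blocking code; blocking_call is a placeholder for the sync library call mentioned in the question.

import asyncio
import time

from fastapi import FastAPI

app = FastAPI()

def blocking_call() -> int:
    time.sleep(4)   # stands in for a blocking library call
    return 42

@app.get("/sync")
def sync_endpoint():
    # plain `def`: FastAPI runs this in a thread pool, so the event loop stays free
    return blocking_call()

@app.get("/async")
async def async_endpoint():
    # keep `async def`, but push the blocking call onto a worker thread
    return await asyncio.to_thread(blocking_call)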
76,226,066
2023-5-11
https://stackoverflow.com/questions/76226066/columns-must-be-same-length-as-key-error-when-trying-split
The code below just runs fine with Python 3.8.10 but does not run in Python 3.10. Any idea what could be the problem? import pandas as pd import requests url = "https://coinmarketcap.com/new/" page = requests.get(url,headers={'User-Agent': 'Mozilla/5.0'}, timeout=1) pagedata = page.text usecols = ["Name", "Symbol", "1h", "24h", "MarketCap"] df = pd.read_html(page.text)[0] df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+", expand=True) df = (df.rename(columns={"Fully Diluted Market Cap": "MarketCap"})[usecols] .sort_values("24h", ascending=False, key=lambda ser: ser.str.replace("%", "").astype(float)) .replace(r"^\$", "", regex=True) ) numcols = df.columns[~df.columns.isin(['Name'])] df = df.head(5).to_markdown(index=True) print (df) Current Output: Traceback (most recent call last): df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+", expand=True) .... .... ValueError: Columns must be same length as key Correct Output: (Output in Python 3.8) | | Name | Symbol | 1h | 24h | MarketCap | |---:|:------------|:----------|:-------|:---------|:------------| | 3 | Shrekt |4HREK | 23.82% | 2536.51% | 342,357 | | 8 | BLAZE |TOKEN9BLZE | 1.07% | 106.71% | 3,828,088 | | 26 | Goner27 |GONER | 6.32% | 88.09% | 1,094,010 | | 14 | Party Hat15 |PHAT | 13.34% | 81.64% | 60,136 | | 29 | PepeChat |30PPC | 48.01% | 78.25% | 431,159 |
I think it has to do with one of the values (NOOT (BRC-20)4NOOT) found in the column Name. To handle this, we can try to split on the last number found in each row of this column. Replace this : df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+", expand=True) By this : df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+(?!.*\d)", expand=True) Regex [demo] Output : print(df) | | Name | Symbol | 1h | 24h | MarketCap | |---:|:------------|:---------|:-------|:---------|:------------| | 5 | Shrekt | HREK | 54.61% | 1124.57% | 159,013 | | 10 | BLAZE TOKEN | BLZE | 2.40% | 109.53% | 3,880,242 | | 8 | CMC DOGE | CMCDOGE | 12.93% | 102.76% | 169,492 | | 28 | Goner | GONER | 1.37% | 88.66% | 1,050,089 | | 4 | nomeme | NOMEME | 53.86% | 86.14% | 4,603,393 |
3
3
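A small self-contained sketch of why the lookahead matters: splitting on every digit run produces three pieces for names that themselves contain digits (so the two target columns cannot be filled), while splitting only on the last digit run always yields exactly two pieces.

import pandas as pd

s = pd.Series(["Shrekt4HREK", "NOOT (BRC-20)4NOOT"])

print(s.str.split(r"\d+", expand=True))          # 3 columns -> "Columns must be same length as key"
print(s.str.split(r"\d+(?!.*\d)", expand=True))  # always 2 columns -> assignment works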
76,217,229
2023-5-10
https://stackoverflow.com/questions/76217229/update-plot-in-tkinter-from-user-input-without-lag-using-threading
I'm currently coding my first tkinter GUI. I'm trying to make an interactive plot, using some scales so that the user can set the values of parameters affecting the plot. when I do this it starts lagging and my solution; working with threading is not working as intended. In my real code there are multiple plots so the application started lagging as shown here: from tkinter import * from tkinter import ttk import threading import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import time import numpy as np def create_window(root): frame1 = Frame(root) frame2 = Frame(root) frame1.grid(row=0,column=0) frame2.grid(row=0,column=1) figure1 = plt.Figure(figsize=(5,5)) figure1.set_tight_layout(True) ax1 = figure1.add_subplot(111) canvas1 = FigureCanvasTkAgg(figure1, frame1) canvas1.get_tk_widget().grid(column=1,row=1, padx=10, pady=10) m.trace_add('write', lambda var=None, index=None, mode=None: update_plot_tracer(ax1, canvas1)) m_scale = ttk.Scale(frame2, orient=VERTICAL, length=200, from_=100.0, to=0.0, variable=m) m_scale.grid(column=0, row=0) update_plot(ax1,canvas1) def update_plot(ax, canvas): #to make it lag time.sleep(0.1) x = np.arange(0,10,1) y = m.get() * x ax.clear() ax.plot(x,y) ax.set_ylim([0, 100]) canvas.draw() def update_plot_tracer(ax, canvas, var=None, index=None, mode=None): update_plot(ax, canvas) if __name__ == '__main__': root = Tk() m = DoubleVar(value = 10.0) create_window(root) root.mainloop() My solution to that was to use threading so that the rest of the user interface is still usable while the plot is plotting. The problem I am running into now is that when the user drags the scale a lot there would be multiple threads called which interfere so I tried to stop the program from updating multiple times at the same time as shown below: from tkinter import * from tkinter import ttk import threading import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg import time import numpy as np def create_window(root): frame1 = Frame(root) frame2 = Frame(root) frame1.grid(row=0,column=0) frame2.grid(row=0,column=1) figure1 = plt.Figure(figsize=(5,5)) figure1.set_tight_layout(True) ax1 = figure1.add_subplot(111) canvas1 = FigureCanvasTkAgg(figure1, frame1) canvas1.get_tk_widget().grid(column=1,row=1, padx=10, pady=10) m.trace_add('write', lambda var=None, index=None, mode=None: update_plot_tracer(ax1, canvas1)) m_scale = ttk.Scale(frame2, orient=VERTICAL, length=200, from_=100.0, to=0.0, variable=m) m_scale.grid(column=0, row=0) update_plot(ax1,canvas1) m_entry = ttk.Entry(frame2, textvariable = m) m_entry.grid(column=0, row=1) def update_plot(ax, canvas): x = np.arange(0,10,1) y = m.get() * x print('last used value of m', m.get()) ax.clear() ax.plot(x,y) ax.set_ylim([0, 100]) canvas.draw() #to make it lag time.sleep(0.2) def update_plot_tracer(ax, canvas, var=None, index=None, mode=None): global thread1#just to make this example work, in my real code i have it as a class variable... 
if thread1 is None or not thread1.is_alive(): thread1 = threading.Thread(target= lambda: update_plot(ax, canvas)) thread1.start() if __name__ == '__main__': thread1 = None root = Tk() m = DoubleVar(value = 10.0) create_window(root) root.mainloop() Now I have the next problem: when the thread is started but the user keeps moving the mouse, the value he/she sees is not the one used in the plot. I would therefore like to do something like adding the threads to a queue, but then I run into the problem that this queue could get very long and it would take some time to update to the newest state, even though all the threads between the currently running one and the newest added are useless. As far as I have understood it, once I have added a thread to the queue (from the queue library) it can't be removed. I have never worked with most of this, so any help is appreciated. Also, I want to keep it as interactive as possible, so updating only once the user lets go of the scale is not my preferred way of doing this. EDIT: The question is how do I create a queue of threads from which I can remove entries?
Even though Tkinter supports manipulating the GUI from a thread other than the one that created the Tcl interpreter (the root Tk instance), it's best to avoid that scenario because the implementation is not perfect and most GUI frameworks do not support it. I recommend doing the data processing work in a background thread and the plotting work in the main thread. Now, let's move to the main topic, the queue. You don't need to implement a queue of threads; you can just start a new thread when the existing thread is finished. Use after_idle() to schedule a function like this. Also, don't forget to join the thread. ... def update_plot_tracer(ax, canvas, var=None, index=None, mode=None): global thread1 if thread1 is None: def entry(): update_plot(ax, canvas) root.after_idle(on_end_thread) def on_end_thread(): # This will be run in the main thread. global thread1 if thread1.invalidated: root.after_idle(lambda: update_plot_tracer(ax, canvas)) thread1.join() thread1 = None thread1 = threading.Thread(target=entry) thread1.invalidated = False thread1.start() else: thread1.invalidated = True ...
3
2
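An alternative sketch, using a different technique than the threaded answer: debounce the trace callback with Tk's own after()/after_cancel(), so rapid scale movements collapse into a single redraw of the latest value and no background threads are needed. This reuses root and update_plot from the question's code.

pending = None  # id of the currently scheduled redraw, if any

def schedule_update(ax, canvas, delay_ms=100):
    global pending
    if pending is not None:
        root.after_cancel(pending)   # drop the stale, not-yet-run redraw
    pending = root.after(delay_ms, lambda: update_plot(ax, canvas))

# in create_window(), trace the variable with the debounced callback instead:
# m.trace_add('write', lambda *args: schedule_update(ax1, canvas1))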
76,225,668
2023-5-11
https://stackoverflow.com/questions/76225668/remove-duplicated-values-appear-in-two-columns-in-dataframe
I have table similar to this one: index name_1 path1 name_2 path2 0 Roy path/to/Roy Anne path/to/Anne 1 Anne path/to/Anne Roy path/to/Roy 2 Hari path/to/Hari Wili path/to/Wili 3 Wili path/to/Wili Hari path/to/Hari 4 Miko path/to/miko Lin path/to/lin 5 Miko path/to/miko Dan path/to/dan 6 Lin path/to/lin Miko path/to/miko 7 Lin path/to/lin Dan path/to/dan 8 Dan path/to/dan Miko path/to/miko 9 Dan path/to/dan Lin path/to/lin ... As you can see, the table kind of showing relationship between entities - Roi is with Anne, Wili with Hari, Lin with Dan and with Miko. The table is actually showing overlap data , meaning, Hari and wili for example, have the same document, and I would like to remove one of them not to have duplicated files. In order to do this, I would like to create new table that has only one name in relationship, so I can later create list of paths to remove. The result table will look like this : index name_1 path1 name_2 path2 0 Roy path/to/Roy Anne path/to/Anne 1 Hari path/to/Hari Wili path/to/Wili 2 Miko path/to/miko Lin path/to/lin 3 Miko path/to/miko Dan path/to/dan The idea is that I'll use the values of "path2" to remove files with this path, and will still have the files in path1. for that reason, this line: 4 Lin path/to/lin Dan path/to/dan is missing, as it will be removed using Miko... any ideas how to do this ? :) Edit: I have tried this based on this answer: df_2= df[~pd.DataFrame(np.sort(df.values,axis=1)).duplicated()] And it's true that I get less rows in my dataframe (it has 695 and I got now 402) , but, I still have the first lines like this: index name_1 path1 name_2 path2 0 Roy path/to/Roy Anne path/to/Anne 1 Anne path/to/Anne Roy path/to/Roy ... meaning I still get the same issue
You can use frozenset to detect duplicates: out = (df[~df[['name_1', 'name_2']].agg(frozenset, axis=1).duplicated()] .loc[lambda x: ~x['path2'].isin(x['path1'])]) # OR out = (df[~pd.DataFrame(np.sort(df.values,axis=1)).duplicated()] .query('~path1.isin(path2)')) Output: >>> out name_1 path1 name_2 path2 0 Roy path/to/Roy Anne path/to/Anne 2 Hari path/to/Hari Wili path/to/Wili 5 Miko path/to/miko Dan path/to/dan 7 Lin path/to/lin Dan path/to/dan
3
4
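A short follow-up sketch for the question's stated goal, assuming the out DataFrame produced by the answer: once there is a single row per pairing, the files to delete are simply the remaining path2 values.

paths_to_remove = out["path2"].unique().tolist()
print(paths_to_remove)   # e.g. ['path/to/Anne', 'path/to/Wili', 'path/to/dan']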
76,223,870
2023-5-11
https://stackoverflow.com/questions/76223870/parsing-xml-within-html-using-python
I have an HTML file which contains XML at the bottom of it and enclosed by comments, it looks like this: <!DOCTYPE html> <html> <head> *** </head> <body> <div class="panel panel-primary call__report-modal-panel"> <div class="panel-heading text-center custom-panel-heading"> <h2>Report</h2> </div> <div class="panel-body"> <div class="panel panel-default"> <div class="panel-heading"> <div class="panel-title">Info</div> </div> <div class="panel-body"> <table class="table table-bordered table-page-break-auto table-layout-fixed"> <tr> <td class="col-sm-4">ID</td> <td class="col-sm-8">1</td> </tr> </table> </div> </div> </body> </html> <!--<?xml version = "1.0" encoding="Windows-1252" standalone="yes"?> <ROOTTAG> <mytag> <headername>BASE</headername> <fieldname>NAME</fieldname> <val><![CDATA[Testcase]]></val> </mytag> <mytag> <headername>BASE</headername> <fieldname>AGE</fieldname> <val><![CDATA[5]]></val> </mytag> </ROOTTAG> --> Requirement is to parse the XML which is in comments in above HTML. So far I have tried to read the HTML file and pass it to a string and did following: with open('my_html.html', 'rb') as file: d = str(file.read()) d2 = d[d.index('<!--') + 4:d.index('-->')] d3 = "'''"+d2+"'''" this is returning XML piece of data in string d3 with 3 single qoutes. Then trying to read it via Etree: ET.fromstring(d3) but it is failing with following error: xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 1, column 2 need some help to basically: Read HTML take out snippet with XML piece which is commented at bottom of HTML take that string and pass to ET.fromString() function, but since this function takes string with triple qoutes, it is not formatting it properly and hence throwing the error
You are already on the right path. I put your HTML in a file and it works fine, as follows. import xml.etree.ElementTree as ET with open('extract_xml.html') as handle: content = handle.read() xml = content[content.index('<!--')+4: content.index('-->')] document = ET.fromstring(xml) for element in document.findall("./mytag"): for child in element: print(child, child.text)
4
2
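A short extension sketch of the answer's loop: collect the parsed values into a dict keyed by fieldname, which is usually what downstream code needs (document is the element tree built in the answer above).

fields = {
    tag.findtext("fieldname"): tag.findtext("val")
    for tag in document.findall("./mytag")
}
print(fields)   # {'NAME': 'Testcase', 'AGE': '5'}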
76,221,461
2023-5-10
https://stackoverflow.com/questions/76221461/how-to-generate-python-type-hints-in-generated-grpc-code
Can PEP compliant type hints be automatically added to generated source code, or dynamically created, for python and gRPC? Specifically in the basics tutorial in the client section for feature = stub.GetFeature(point) I would like my IDE to know and check that point is type Point in the *_pb2.py and feature is type Feature with an attribute location: Point. Thank you.
Type hints aren't yet (!?) available for Python gRPC, see Issue 29041. You can generate type hints for the Protobuf messages only, using the --pyi_out=${PWD} flag when running protoc directly, or indirectly with python3 -m grpc_tools.protoc.
3
6
76,221,322
2023-5-10
https://stackoverflow.com/questions/76221322/exec-fails-when-applied-to-a-code-with-a-new-type
I have a file multiply.py with the following contents: from typing import NamedTuple class Result(NamedTuple): product: int desc: str def multiply(a: int, b: int) -> Result: return Result( product=a * b, desc=f"muliplied {a} and {b}", ) x=4 y=5 print(multiply(x, y)) If I run it just like that, it of course yields the expected result: $ python multiply.py Result(product=20, desc='muliplied 4 and 5') However, I'm trying to run it with the exec function from main.py: from pathlib import Path gl, lo = {}, {} exec(Path("multiply.py").read_text(), gl, lo) and this time the output is disappointing: $ python main.py Traceback (most recent call last): File "main.py", line 4, in <module> exec(Path("multiply.py").read_text(), gl, lo) File "<string>", line 16, in <module> File "<string>", line 9, in multiply NameError: name 'Result' is not defined Why is that? Can't I create new types in the code executed by exec?
Since you are passing two different dictionaries for locals and globals, the relevant part of the docs applies: If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition. Class bodies are not an enclosing scope, so functions defined in the class body do not have access to variables defined in the class body. This is the reason that in a method definition you must use self. to refer to another method. To elaborate a little on what I mean about self, here's an example of actually just running the code you have in a class body, and seeing how it behaves: In [1]: class NS: ...: from typing import NamedTuple ...: ...: class Result(NamedTuple): ...: product: int ...: desc: str ...: ...: def multiply(a: int, b: int) -> Result: ...: return Result( ...: product=a * b, ...: desc=f"muliplied {a} and {b}", ...: ) ...: ...: In [2]: NS.multiply(1, 2) --------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In [2], line 1 ----> 1 NS.multiply(1, 2) Cell In [1], line 9, in NS.multiply(a, b) 8 def multiply(a: int, b: int) -> Result: ----> 9 return Result( 10 product=a * b, 11 desc=f"muliplied {a} and {b}", 12 ) NameError: name 'Result' is not defined In any case, exec is a function that is typically misused. There are times when it is perfectly fine, but often it is the case that when people reach for exec/eval there is a better way to do it in Python.
7
5
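A minimal sketch of the usual fix implied by the answer: give exec a single namespace dict (or pass the same dict as both globals and locals), so the file runs with module-like scoping and the function can see Result.

from pathlib import Path

ns = {}
exec(Path("multiply.py").read_text(), ns)   # prints Result(product=20, ...) as before
print(ns["multiply"](6, 7).product)         # 42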
76,222,274
2023-5-10
https://stackoverflow.com/questions/76222274/how-to-add-private-methods-to-pyo3-pymethods
I'd like to add hidden methods to a pyo3 class methods implementation that will be invisible to Python. Example: #[pyclass] pub struct SomeItem; #[pymethods] impl SomeItem { #[new] pub fn new() -> Self { // visible for python constructor SomeItem } pub fn method(&self) -> u8 { // visible for python method self.hidden_method() } #[pyo3(ignore)] // just for example fn hidden_method(&self) -> u8 { // invisible for python method (that must be able to return non-python type) 0 } }
What about using a separate impl block? #[pymethods] impl SomeItem { #[new] pub fn new() -> Self { // visible for python constructor SomeItem } pub fn method(&self) -> u8 { // visible for python method self.hidden_method() } } impl SomeItem { fn hidden_method(&self) -> u8 { 0 } }
3
3
76,222,022
2023-5-10
https://stackoverflow.com/questions/76222022/how-to-use-categorical-data-type-with-pyarrow-dtypes
I'm working with the arrow dtypes in pandas, and my dataframe has a variable that should be categorical, but I can't figure out how to transform it into the pyarrow data type for categorical data (dictionary). According to pandas (https://arrow.apache.org/docs/python/pandas.html#pandas-arrow-conversion), the arrow data type I should be using is dictionary. Usually, if you want pandas to use a pyarrow dtype you just add [pyarrow] to the name of the pyarrow type, for example dtype='string[pyarrow]'. I tried using dtype='dictionary[pyarrow]', but that yields the error: data type 'dictionary[pyarrow]' not understood. I also tried 'categorical[pyarrow]', 'category[pyarrow]', pyarrow.dictionary, and pyarrow.dictionary(pyarrow.int16(),pyarrow.string()), and they didn't work either. How can I use the dictionary dtype on a pandas series? pd.Series(['Chocolate','Candy','Waffles'], dtype='what_to_put_here????')
I believe pd.ArrowDtype is required: dtype=pd.ArrowDtype(pa.dictionary(pa.int16(), pa.string()))
6
6
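A self-contained sketch of the accepted suggestion, assuming a reasonably recent pandas 2.x and pyarrow:

import pandas as pd
import pyarrow as pa

s = pd.Series(
    ["Chocolate", "Candy", "Waffles"],
    dtype=pd.ArrowDtype(pa.dictionary(pa.int16(), pa.string())),
)
print(s.dtype)   # e.g. dictionary<values=string, indices=int16, ordered=0>[pyarrow]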
76,220,790
2023-5-10
https://stackoverflow.com/questions/76220790/in-network-programming-must-the-accepting-end-close-both-sockets
I raised a question earlier, and one of the answers involved another question. That is to say, in general, the accepting end should create two sockets to establish the accepting end. like this: Python from socket import * sockfd = socket(...) # ... client_sockfd, addr = sockfd.accept() # ... client_sockfd.close() sockfd.close() C int sockfd, client_sockfd; sockfd = socket(...); // ... client_sockfd = accept(sockfd, ...); // ... shutdown(client_sockfd, 2); shutdown(sockfd, 2); close(client_sockfd); close(sockfd); So can we skip the task of creating the client_sockfd variable? like this: Python sockfd = socket(...) # ... sockfd, addr = sockfd.accept() # ... sockfd.close() C int sockfd; struct sockaddr_in server, client; socklen_t client_size = sizeof(client); sockfd = socket(...); // ... sockfd = accept(sockfd, (struct sockaddr *)&client, &client_size); Or it could be like this: Python sockfd = socket(...) # ... client_sockfd, addr = sockfd.accept() sockfd.close() # ... client_sockfd.close() C int sockfd = socket(...); int client_sockfd; // ... client_sockfd = accept(sockfd, ...); shutdown(sockfd, 2); close(sockfd); // ... shutdown(client_sockfd, 2); close(client_sockfd); As shown in the above code, can we use only one socket to complete the accepting end of the entire network programming? Is there any problem with this? (At least I didn't have any problems writing the program like this myself)
So can we skip the task of creating the client_sockfd variable? For connection-oriented protocols such as TCP, no. The two sockets involved on the receiving side of such a service are of different kinds and serve different purposes, both required. The socket that (in C) you create via the socket() function and use with the accept() function is often called a "server socket". It's job is to wait for incoming connection requests, and when it accepts one, to prepare an endpoint for communication with that request's remote peer. The resulting endpoint is represented by the second (and third, and fourth ...) socket, obtained via successful calls to accept(). You use the server socket to accept connections, potentially having multiple active connections at the same time, and you use each of the others ("peer sockets", if you like) to communicate with a specific client. Closing a peer socket terminates the connection with the associated remote peer. Closing the server socket makes the system stop accepting new connections at its address / port. You propose this alternative: int sockfd; struct sockaddr_in server, client; socklen_t client_size = sizeof(client); sockfd = socket(...); // ... sockfd = accept(sockfd, (struct sockaddr *)&client, &client_size); That's not technically wrong, but as soon as you assign the result of accept() to variable sockfd, you no longer have access to the server socket (supposing you did not store its file descriptor in another variable, which would moot the question). This is a resource leak. Moreover, it prevents you from accepting any subsequent connections at the designated address / port. Until the process terminates (if then), you will not even be able to create a new server socket for that port, as the existing one will be in the way. All of this is different for datagram-oriented sockets -- there are no connections to accept(), and no inherent distinction between server sockets and peer sockets for such protocols. But based on the appearance of accept() calls in your example code, that's not what you are asking about.
3
2
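A minimal sketch of the conventional pattern the answer describes: the listening ("server") socket stays open for the lifetime of the service, and each accept() hands back a separate peer socket whose close ends only that one connection (the address and port here are arbitrary).

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen()

while True:
    peer, addr = srv.accept()   # a new socket for each client
    with peer:                  # closing `peer` ends only this connection
        data = peer.recv(1024)
        peer.sendall(data)      # `srv` is untouched and keeps accepting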
76,219,628
2023-5-10
https://stackoverflow.com/questions/76219628/how-to-find-the-no-of-nulls-in-every-column-in-a-polars-dataframe
In pandas, one can do: import pandas as pd d = {"foo":[1,2,3, None], "bar":[4,None, None, 6]} df_pandas = pd.DataFrame.from_dict(d) dict(df_pandas.isnull().sum()) [out]: {'foo': 1, 'bar': 2} In polars it's possible to do the same by looping through the columns: import polars as pl d = {"foo":[1,2,3, None], "bar":[4,None, None, 6]} df_polars = pl.from_dict(d) {col:df_polars[col].is_null().sum() for col in df_polars.columns} Looping through the columns in polars is particularly painful when using a LazyFrame, because the .collect() then has to be done in chunks to do the aggregation. Is there a way to find the number of nulls in every column of a polars dataframe without looping through each column?
Assuming you're not married to the output format, the idiomatic way to do it is... df.select(pl.all().is_null().sum()) However, if you really like the dict output, you can easily get it... df.select(pl.all().is_null().sum()).to_dicts()[0] The way this works is that inside the select we start with pl.all(), which means all of the columns, and then, much like in the pandas version, we apply is_null, which returns True/False. From that we chain sum, which turns the Trues into 1s and gives you the number of nulls in each column. There's also the dedicated null_count(), so you don't have to chain is_null().sum() (thanks to @jqurious for that tip).
7
8
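A small sketch for the LazyFrame case the question mentions: the same expression works before .collect(), so no per-column looping or chunked collection is needed.

import polars as pl

lf = pl.LazyFrame({"foo": [1, 2, 3, None], "bar": [4, None, None, 6]})
print(lf.select(pl.all().null_count()).collect().to_dicts()[0])
# {'foo': 1, 'bar': 2}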
76,218,118
2023-5-10
https://stackoverflow.com/questions/76218118/convert-matlab-to-python-for-reading-a-binary-file
I'm trying to convert a MATLAB script into Python. The script reads a binary file and reshapes it into a number of columns. The MATLAB script is: fid=fopen(binary_file,'rb'); [inpar,ic]=fread(fid,4,'int'); if (ic<4) ; idata=[];return;end nmagic=inpar(1); nh=inpar(2); nrpar=inpar(3); nipar=inpar(4); [rdata,ic]=fread(binary_file,[nh,nrpar],'float'); if (ic<nh*nrpar) ; return;end [idata,ic]=fread(fid,[nh,nipar],'int'); if (ic<nh*nipar) ; return;end The Python code I tried is: import numpy as np inpar = np.fromfile(fid, dtype=np.int32) nmagic, nh, nrpar, nipar = inpar rdata = np.fromfile(fid, dtype=np.float32, count=nh * nrpar).reshape(nh, nrpar) idata = np.fromfile(fid, dtype=np.int32, count=nh * nipar).reshape(nh, nipar) I don't exactly know how MATLAB is reshaping the data. Can anyone help me translate the code to achieve the same result in Python? A sample data file is given here
Let's try to read the data with open in 'rb' mode so it is read in binary mode. Then we will use np.fromfile to read from the file. import numpy as np with open(binary_file, 'rb') as fid: inpar = np.fromfile(fid, dtype=np.int32, count=4) if inpar.size < 4: idata = [] else: nmagic, nh, nrpar, nipar = inpar rdata = np.fromfile(fid, dtype=np.float32, count=nh * nrpar) if rdata.size < nh * nrpar: # handle error pass else: rdata = rdata.reshape(nh, nrpar) idata = np.fromfile(fid, dtype=np.int32, count=nh * nipar) if idata.size < nh * nipar: # handle error pass else: idata = idata.reshape(nh, nipar)
3
4
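One detail worth checking that the answer does not cover: MATLAB's fread fills an [nh, nrpar] matrix column by column, while NumPy's default reshape is row-major. If the reshaped values look scrambled, passing order='F' is probably what is needed; a tiny sketch of the difference:

import numpy as np

nh, nrpar = 3, 2
flat = np.arange(6, dtype=np.float32)          # stand-in for the np.fromfile result
print(flat.reshape(nh, nrpar))                 # row-major (NumPy default)
print(flat.reshape((nh, nrpar), order="F"))    # column-major, matching MATLAB's fread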
76,159,708
2023-5-3
https://stackoverflow.com/questions/76159708/how-to-disable-authentication-in-fastapi-based-on-environment
I have a FastAPI application for which I enable Authentication by injecting a dependency function. controller.py router = APIRouter( prefix="/v2/test", tags=["helloWorld"], dependencies=[Depends(api_key)], responses={404: {"description": "Not found"}}, ) Authorzation.py async def api_key(api_key_header: str = Security(api_key_header_auth)): if api_key_header != API_KEY: raise HTTPException( status_code=401, detail="Invalid API Key", ) This works fine. However, I would like to disable the authentication based on environment. For instance, I would want to keep entering the authentication key in localhost environment.
You could create a subclass of APIKeyHeader class and override the __call__() method to perform a check whether the request comes from a "safe" client, such as localhost or 127.0.0.1, using request.client.host, as explained here. If so, you could set the api_key to application's API_KEY value, which would be returned and used by the check_api_key() dependency function to validate the api_key. In case there were multiple API keys, one could perform a check on the client's hostname/IP in both the __call__() and the check_api_key() functions, and raise an exception only if the client's IP is not in the safe_clients list. Example from fastapi import FastAPI, Request, Depends, HTTPException from starlette.status import HTTP_403_FORBIDDEN from fastapi.security.api_key import APIKeyHeader from fastapi import Security from typing import Optional API_KEY = 'some-api-key' API_KEY_NAME = 'X-API-KEY' safe_clients = ['127.0.0.1'] class MyAPIKeyHeader(APIKeyHeader): async def __call__(self, request: Request) -> Optional[str]: if request.client.host in safe_clients: api_key = API_KEY else: api_key = request.headers.get(self.model.name) if not api_key: if self.auto_error: raise HTTPException( status_code=HTTP_403_FORBIDDEN, detail='Not authenticated' ) else: return None return api_key api_key_header_auth = MyAPIKeyHeader(name=API_KEY_NAME) async def check_api_key(request: Request, api_key: str = Security(api_key_header_auth)): if api_key != API_KEY: raise HTTPException(status_code=401, detail='Invalid API Key') app = FastAPI(dependencies=[Depends(check_api_key)]) @app.get('/') def main(request: Request): return request.client.host Example (UPDATED) The previous example could be simplified to the one below, which does not require overriding the APIKeyHeader class. You could instead set the auto_error flag to False, which would prevent APIKeyHeader from raising the pre-defined error in case the api_key is missing from the request, but rather let you handle it on your own in the check_api_key() function. from fastapi import FastAPI, Request, Security, Depends, HTTPException from fastapi.security.api_key import APIKeyHeader # List of valid API keys API_KEYS = [ 'z77xQYZWROmI4fY4', 'FXhO4i3bLA1WIsvR' ] API_KEY_NAME = 'X-API-KEY' safe_clients = ['127.0.0.1'] api_key_header = APIKeyHeader(name=API_KEY_NAME, auto_error=False) async def check_api_key(request: Request, api_key: str = Security(api_key_header)): if api_key not in API_KEYS and request.client.host not in safe_clients: raise HTTPException(status_code=401, detail='Invalid or missing API Key') app = FastAPI(dependencies=[Depends(check_api_key)]) @app.get('/') def main(request: Request): return request.client.host How to remove/hide the Authorize button from Swagger UI The example provided above will work as expected, that is, users whose their IP address is included in the safe_clients list won't be asked to provide an API key in order to issue requests to the API, regardless of the Authorize button being present in Swagger UI page when accessing the autodocs at /docs. If you still, however, would like to remove the Authorize button from the UI for safe_clients, you could have a custom middleware, as demonstrated here, in order to remove the securitySchemes component from the OpenAPI schema (in /openapi.json)—Swagger UI is actually based on OpenAPI Specification. This approach was inspired by the link mentioned earlier, as well as here and here. 
Please make sure to add the middleware after initializing your app in the example above (i.e., after app = FastAPI(dependencies=...)) from fastapi import Response # ... rest of the code is the same as above app = FastAPI(dependencies=[Depends(check_api_key)]) @app.middleware("http") async def remove_auth_btn(request: Request, call_next): response = await call_next(request) if request.url.path == '/openapi.json' and request.client.host in safe_clients: response_body = [section async for section in response.body_iterator] resp_str = response_body[0].decode() # convert "response_body" bytes into string resp_dict = json.loads(resp_str) # convert "resp_str" into dict del resp_dict['components']['securitySchemes'] # remove securitySchemes resp_str = json.dumps(resp_dict) # convert "resp_dict" back to str return Response(content=resp_str, status_code=response.status_code, media_type=response.media_type) return response
9
6
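A smaller variant sketch that follows the literal question (environment-based rather than client-IP-based); APP_ENV and its 'local' value are assumed names, not part of the original answer.

import os

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security.api_key import APIKeyHeader

API_KEY = "some-api-key"
AUTH_DISABLED = os.getenv("APP_ENV", "prod") == "local"   # hypothetical env var

api_key_header = APIKeyHeader(name="X-API-KEY", auto_error=False)

async def check_api_key(api_key: str = Security(api_key_header)):
    if AUTH_DISABLED:          # skip the check entirely in the local environment
        return
    if api_key != API_KEY:
        raise HTTPException(status_code=401, detail="Invalid API Key")

app = FastAPI(dependencies=[Depends(check_api_key)])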
76,189,734
2023-5-6
https://stackoverflow.com/questions/76189734/how-to-convert-struct-to-series-in-polars
I have a DataFrame with a struct inside after using pl.cum_fold(). How can the struct be converted to a normal series-based column? ┌───────────────────────────────────┐ │ price │ │ --- │ │ struct[2000] │ ╞═══════════════════════════════════╡ │ {null,null,null,null,30302.67187… │ └───────────────────────────────────┘ Is it possible with a polars expression or some kind of util method? The only way I have found is to convert to Python and then back to a DataFrame. df = pl.DataFrame(pl.Series('price', df[0, 0].values()))
There is transpose once the struct is unnested. This will be relatively slow but I don't think there is any other way: df = pl.DataFrame( {"price": {'test0' : None, 'test1' : 1, 'test2' : 2}}) df.unnest('price').transpose(column_names=['price']) shape: (3, 1) ┌───────┐ │ price │ │ --- │ │ f64 │ ╞═══════╡ │ null │ │ 1.0 │ │ 2.0 │ └───────┘
5
1
76,200,899
2023-5-8
https://stackoverflow.com/questions/76200899/polars-equivalent-of-pandas-groupby-applydrop-duplicates
I am new to polars and I wonder what is the equivalent of pandas groupby.apply(drop_duplicates) in polars. Here is the code snippet I need to translate : import pandas as pd GROUP = list('123231232121212321') OPERATION = list('AAABBABAAAABBABBBA') BATCH = list('777898897889878987') df_input = pd.DataFrame({'GROUP':GROUP, 'OPERATION':OPERATION, 'BATCH':BATCH}) df_output = df_input.groupby('GROUP').apply(lambda x: x.drop_duplicates()) input data desired output I tried the following, but, it does not output what I need import polars as pl GROUP = list('123231232121212321') OPERATION = list('AAABBABAAAABBABBBA') BATCH = list('777898897889878987') df_input = pl.DataFrame({'GROUP':GROUP, 'OPERATION':OPERATION, 'BATCH':BATCH}) df_output = df_input.group_by('GROUP').agg(pl.all().unique()) If I take only one Group, I get locally what I want : df_part = df_input.filter(pl.col('GROUP')=='2') df_part[['OPERATION', 'BATCH']].unique() Does somebody know how to do that ?
It looks like you want the first instance of each OPERATION, BATCH "pairing" per GROUP You can use pl.struct to create the "pairing" and then use is_first_distinct() as a Window function. (df.with_row_index() .filter(pl.struct("OPERATION", "BATCH").is_first_distinct().over("GROUP")) ) shape: (9, 4) ┌───────┬───────┬───────────┬───────┐ │ index ┆ GROUP ┆ OPERATION ┆ BATCH │ │ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ str ┆ str │ ╞═══════╪═══════╪═══════════╪═══════╡ │ 0 ┆ 1 ┆ A ┆ 7 │ │ 1 ┆ 2 ┆ A ┆ 7 │ │ 2 ┆ 3 ┆ A ┆ 7 │ │ 3 ┆ 2 ┆ B ┆ 8 │ │ 4 ┆ 3 ┆ B ┆ 9 │ │ 5 ┆ 1 ┆ A ┆ 8 │ │ 7 ┆ 3 ┆ A ┆ 9 │ │ 10 ┆ 2 ┆ A ┆ 8 │ │ 11 ┆ 1 ┆ B ┆ 9 │ └───────┴───────┴───────────┴───────┘ The with_row_index is just used here as a visual guide to help see the removed rows. (index 8, 9)
3
2
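A shorter equivalent sketch for this particular case: because drop_duplicates inside each group considers all columns anyway (including GROUP), deduplicating whole rows should give the same set of rows as the accepted expression.

import polars as pl

GROUP = list('123231232121212321')
OPERATION = list('AAABBABAAAABBABBBA')
BATCH = list('777898897889878987')
df_input = pl.DataFrame({'GROUP': GROUP, 'OPERATION': OPERATION, 'BATCH': BATCH})

df_output = df_input.unique(keep="first", maintain_order=True)
print(df_output)   # same 9 rows as the is_first_distinct() answer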
76,205,305
2023-5-9
https://stackoverflow.com/questions/76205305/updating-non-trivial-structures-in-polars-cells
Say I have this: >>> polars.DataFrame([[(1,2),(3,4)],[(5,6),(7,8)]], list('ab')) shape: (2, 2) ┌────────┬────────┐ │ a ┆ b │ │ --- ┆ --- │ │ object ┆ object │ ╞════════╪════════╡ │ (1, 2) ┆ (5, 6) │ │ (3, 4) ┆ (7, 8) │ └────────┴────────┘ How would I go with updating the 2nd element of the tuple only? E.g. what do I need to do to get this: ┌────────┬────────┐ │ a ┆ b │ │ --- ┆ --- │ │ object ┆ object │ ╞════════╪════════╡ │ (1, 9) ┆ (5, 9) │ │ (3, 9) ┆ (7, 9) │ └────────┴────────┘ where I've replaced the 2nd element of all tuples to 9. Note that I'm not necessarily married to using tuples, but I'd like some kind of structure (dict, list, ...) as I want elements to have two pieces of information per cell due to a downstream requirement.
If you choose dicts instead, Polars will turn them into Structs df = pl.DataFrame({ "a": [{"x": 1, "y": 2}, {"x": 3, "y": 4}], "b": [{"x": 5, "y": 6}, {"x": 7, "y": 8}] }) shape: (2, 2) ┌───────────┬───────────┐ │ a ┆ b │ │ --- ┆ --- │ │ struct[2] ┆ struct[2] │ ╞═══════════╪═══════════╡ │ {1,2} ┆ {5,6} │ │ {3,4} ┆ {7,8} │ └───────────┴───────────┘ You can then set the new value easily using .with_fields() df.with_columns( pl.all().struct.with_fields(y = 9) ) shape: (2, 2) ┌───────────┬───────────┐ │ a ┆ b │ │ --- ┆ --- │ │ struct[2] ┆ struct[2] │ ╞═══════════╪═══════════╡ │ {1,9} ┆ {5,9} │ │ {3,9} ┆ {7,9} │ └───────────┴───────────┘
3
1
76,188,316
2023-5-6
https://stackoverflow.com/questions/76188316/is-pythonpath-really-an-environment-variable
The docs for sys.path state the following: A list of strings that specifies the search path for modules. Initialized from the environment variable PYTHONPATH, plus an installation-dependent default. So my understanding here is that PYTHONPATH is an environment variable. Environment variables can be printed out in Powershell using the following command: PS> echo $ENV:VARIABLENAME However when I do $ENV:PYTHONPATH I get no output. If I try to access PYTHONPATH from a python terminal, I get a KeyError: >>> import os >>> os.environ["PYTHONPATH"] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python310\lib\os.py", line 679, in __getitem__ raise KeyError(key) from None KeyError: 'PYTHONPATH' However, I know PYTHONPATH is defined somewhere, because its value does appear when I use sys.path: >>> import sys >>> sys.path ['', 'C:\\Python310\\python310.zip', 'C:\\Python310\\DLLs', 'C:\\Python310\\lib', 'C:\\Python310', 'C:\\Users\\aa\\AppData\\Roaming\\Python\\Python310\\site-packages', 'C:\\Python310\\lib\\site-packages', 'C:\\Python310\\lib\\site-packages\\scons-4.4.0-py3.10.egg', 'C:\\Python310\\lib\\site-packages\\colorama-0.3.2-py3.10.egg', 'C:\\Python310\\lib\\site-packages\\win32', 'C:\\Python310\\lib\\site-packages\\win32\\lib', 'C:\\Python310\\lib\\site-packages\\Pythonwin'] If PYTHONPATH is truly an environment variable, why can't I access it using either Powershell or os in my Python interpreter?
The variable PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This variable is not set by default and is not needed for Python to work, because Python already knows where to find its standard libraries (sys.path). But if for any reason you need some custom Python libraries that you do not want to install, you can export PYTHONPATH=/path/to/my/modules/ to add /path/to/my/modules/ to the search path; Python will then know where to find those custom modules. And this time, if you print sys.path again, it will display the newly added directories.
6
12
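A quick demonstration sketch of the answer's point: PYTHONPATH exists only if you set it yourself, and anything placed in it shows up near the front of sys.path in a child interpreter (the directory name reuses the answer's example path).

import os
import subprocess
import sys

env = {**os.environ, "PYTHONPATH": "/path/to/my/modules"}
out = subprocess.run(
    [sys.executable, "-c", "import os, sys; print(os.environ['PYTHONPATH']); print(sys.path[:3])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout)   # the variable is set and the directory appears in sys.path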
76,189,891
2023-5-6
https://stackoverflow.com/questions/76189891/imagedraw-object-has-no-attribute-textbbox
I am working on a simple text mining project. When I tried to create a word-cloud I got this error: AttributeError: 'ImageDraw' object has no attribute 'textbbox' I have a dataset of News and their categories; to create a word-cloud I tried to preprocessing the text: import pandas as pd import numpy as np import pandas as pd import matplotlib.pyplot as plt from nltk.corpus import stopwords from nltk.stem import PorterStemmer from textblob import Word from wordcloud import WordCloud newsData = pd.read_csv("data.txt", sep= '\t', header=None, names=["Description", "Category", "Tags"],on_bad_lines='skip', engine='python' , encoding='utf-8') #print(newsData.head()) newsData['Description'] = newsData['Description'].apply(lambda x: " ".join(x.lower() for x in x.split())) newsData['Category'] = newsData['Category'].apply(lambda x: " ".join(x.lower() for x in x.split())) newsData['Tags'] = newsData['Tags'].apply(lambda x: " ".join(x.lower() for x in x.split())) # stopword filtering stop = stopwords.words('english') newsData['Description'] = newsData['Description'].apply(lambda x: " ".join (x for x in x.split() if x not in stop)) #stemming st = PorterStemmer() newsData['Description'] = newsData['Description'].apply(lambda x: " ".join ([st.stem(word) for word in x.split()])) newsData['Category'] = newsData['Category'].apply(lambda x: " ".join ([st.stem(word) for word in x.split()])) newsData['Tags'] = newsData['Tags'].apply(lambda x: " ".join ([st.stem(word) for word in x.split()])) #lemmatize newsData['Description'] = newsData['Description'].apply(lambda x: " ".join ([Word(word).lemmatize() for word in x.split()])) newsData['Category'] = newsData['Category'].apply(lambda x: " ".join ([Word(word).lemmatize() for word in x.split()])) newsData['Tags'] = newsData['Tags'].apply(lambda x: " ".join ([Word(word).lemmatize() for word in x.split()])) #print(newsData.head()) culture = newsData[newsData['Category'] == 'culture'].sample(n=200) health = newsData[newsData['Category'] == 'health'].sample(n=200) dataSample = pd.concat([culture, health],axis=0) culturesmpl = culture[culture['Category'] == 'culture'].sample(n=200) healthspml = health[health['Category'] == 'health'].sample(n=200) #print(dataSample.head()) cultureSTR = culturesmpl.Description.str.cat() healthSTR = healthspml.Description.str.cat() #print(spam_str) and then I tried to create wordcloud using WordCloud library wordcloud_culture = WordCloud(collocations= False, background_color='white' ).generate(cultureSTR) # Plot plt.imshow(wordcloud_culture, interpolation='bilinear') plt.axis('off') plt.show() but after running this code I got the error: File ~/anaconda3/lib/python3.9/site-packages/wordcloud/wordcloud.py:508 in generate_from_frequencies box_size = draw.textbbox((0, 0), word, font=transposed_font, anchor="lt") AttributeError: 'ImageDraw' object has no attribute 'textbbox' do you know how can I fix this?
History The ImageDraw.textsize() method was deprecated in PIL version 9.2.0 and completely removed beginning with version 10.0.0 on 2023-07-01. The ImageDraw.textbbox() method was introduced in version 8.0.0 as a more robust solution. Example If you are looking to simply replace one line of code, and you previously had text_width, text_height = ImageDraw.Draw(image).textsize(your_text, font=your_font) ..then you could instead use _, _, text_width, text_height = ImageDraw.Draw(image).textbbox((0, 0), your_text, font=your_font) Explanation textsize() outputs dimensions for the nominal width and height of the text as a tuple: (width, height). textbbox() outputs the x and y extents of the bounding box as a tuple: (left, top, right, bottom). Starting the line with _, _, is a way to discard the first two elements of the output tuple. Adding (0, 0) as the first argument in textbbox() tells it to anchor the bounding box at the origin. Avoid relying on outdated libraries, and explore reasons for this change and why textbbox() is a more robust method!
6
8
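A quick check sketch for the asker's specific situation: textbbox() only exists from Pillow 8.0.0 onward, so printing the installed version shows whether upgrading Pillow (or wordcloud) should make the error go away.

import PIL
from PIL import Image, ImageDraw

print("Pillow version:", PIL.__version__)
draw = ImageDraw.Draw(Image.new("RGB", (1, 1)))
print("textbbox available:", hasattr(draw, "textbbox"))   # True on Pillow >= 8.0.0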
76,187,256
2023-5-6
https://stackoverflow.com/questions/76187256/importerror-urllib3-v2-0-only-supports-openssl-1-1-1-currently-the-ssl-modu
After pip install openai, when I try to import openai, it shows this error: the 'ssl' module of urllib3 is compile with LibreSSL not OpenSSL I just followed a tutorial on a project about using API of OpenAI. But when I get to the first step which is the install and import OpenAI, I got stuck. And I tried to find the solution for this error but I found nothing. Here is the message after I try to import OpenAI: Python 3.9.6 (default, Mar 10 2023, 20:16:38) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import openai Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/yule/Library/Python/3.9/lib/python/site-packages/openai/__init__.py", line 19, in <module> from openai.api_resources import ( File "/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/__init__.py", line 1, in <module> from openai.api_resources.audio import Audio # noqa: F401 File "/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_resources/audio.py", line 4, in <module> from openai import api_requestor, util File "/Users/mic/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py", line 22, in <module> import requests File "/Users/mic/Library/Python/3.9/lib/python/site-packages/requests/__init__.py", line 43, in <module> import urllib3 File "/Users/mic/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py", line 38, in <module> raise ImportError( ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3. See: https://github.com/urllib3/urllib3/issues/2168 I tried to --upgrade the urllib3, but it is still not working. The result is: pip3 install --upgrade urllib3 Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: urllib3 in ./Library/Python/3.9/lib/python/site-packages (2.0.2)
The reason why the error message mentioned OpenSSL 1.1.1+ and LibreSSL 2.8.3 is that urllib3 v2.0 (the version you've installed) requires OpenSSL 1.1.1+ to work properly, as it relies on some new features of OpenSSL 1.1.1. The issue is that the 'ssl' module currently installed in your environment is compiled with LibreSSL 2.8.3, which is not compatible with urllib3 v2.0. To use urllib3 v2.0, you need an 'ssl' module compiled with OpenSSL 1.1.1 or later, for example by trying: brew install [email protected] Or you could use an older version of urllib3 that is compatible, such as urllib3 v1.26.6, which does not have a strict OpenSSL version requirement. You can force that version with this command: pip install urllib3==1.26.6
172
259
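A one-line diagnostic sketch: print which SSL library the running interpreter was built against; urllib3 2.x refuses to import unless this reports OpenSSL 1.1.1 or newer.

import ssl

print(ssl.OPENSSL_VERSION)   # e.g. 'LibreSSL 2.8.3' on the stock macOS Python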
76,212,816
2023-5-9
https://stackoverflow.com/questions/76212816/403-forbidden-tweepy-authentication-error-when-attempting-to-access-tweet-user
import tweepy import os api_key = os.getenv('TWI_API_KEY') api_key_secret = os.getenv('TWI_API_KEY_SECRET') bearer_token = os.getenv('TWI_BEARER_TOKEN') access_token = os.getenv('TWI_ACCESS_TOKEN') access_token_secret = os.getenv('TWI_ACCESS_TOKEN_SECRET') client = tweepy.Client(bearer_token=bearer_token, consumer_key=api_key, consumer_secret=api_key_secret, access_token=access_token, access_token_secret=access_token_secret) user_get = client.get_user(username='twitterusers') user_id = user_get.data.id print(user_id) Above, is the code I've written so far in my attempt to create a Twitter bot that replies to a specific user's most recent tweet, however I did not make it far as I receive the following error when attempting to use the Client.get_user() method to retrieve a user's Twitter ID: tweepy.errors.Forbidden: 403 Forbidden When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal. I went into the Twitter developer portal --> user authentication settings, and I made sure that the app permissions were set to 'Read and write'. I then tried regenerating all of my keys and tokens and made sure that there was no issue with the validity of the credentials themselves. I further confirmed it's not an issue with the keys or tokens when I found I was able to successfully tweet without receiving any errors using the following code. import tweepy import os api_key = os.getenv('TWI_API_KEY') api_key_secret = os.getenv('TWI_API_KEY_SECRET') bearer_token = os.getenv('TWI_BEARER_TOKEN') access_token = os.getenv('TWI_ACCESS_TOKEN') access_token_secret = os.getenv('TWI_ACCESS_TOKEN_SECRET') client = tweepy.Client(bearer_token=bearer_token, consumer_key=api_key, consumer_secret=api_key_secret, access_token=access_token, access_token_secret=access_token_secret) response = client.create_tweet(text='Testing!') print(response) I tried changing the credentials passed to tweepy.Client(), first using the bearer_token only. Passing the bearer_token exclusively, as I have seen done in some examples, I received the same 403 Error as mentioned above. Then I tried passing the consumer_key, consumer_secret, access_token, access_token_secret. In this case, I receive the following 401 error: tweepy.errors.Unauthorized: 401 Unauthorized Unauthorized In either case, I am unable to utilize Client.get_user()
The error is because of a Twitter policy change: in the free tier there is no way for us to pull tweets; we can only do that with a Basic subscription, which is now $100 per month. Details can be found here: https://developer.twitter.com/en/products/twitter-api
3
2
76,182,655
2023-5-5
https://stackoverflow.com/questions/76182655/trying-to-convert-a-date-to-another-format-gives-error
I have a program where the user selects a date from a datepicker, and I need to convert this date to another format. The original format is %d/%m/%Y and I need to convert it to %-d-%b-%Y. I made a small example of what happens: from datetime import datetime # Import tkinter library from tkinter import * from tkcalendar import Calendar, DateEntry win = Tk() win.geometry("750x250") win.title("Example") def convert(): date1 = cal.get() datetimeobject = datetime.strptime(date1, '%d/%m/%Y') print(date1) new_format = datetimeobject.strftime('%-d-%b-%Y') print(new_format) cal = DateEntry(win, width=16, background="gray61", foreground="white", bd=2, date_pattern='dd/mm/y') cal.pack(pady=20) btn = Button(win, command=convert, text='PRESS') btn.pack(pady=50) win.mainloop() This gives me the following error: File "---------\date.py", line 15, in convert new_format = datetimeobject.strftime('%-d-%b-%Y') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: Invalid format string
That gives an error because %-d only works on Unix machines (Linux, macOS). On Windows (or Cygwin), you have to use %#d instead.
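If the script has to run on both platforms, one portable sketch is to sidestep the platform-specific directive entirely and format the day number yourself; the example date string below is only an assumption about what cal.get() returns with date_pattern='dd/mm/y':
from datetime import datetime

date1 = "05/05/2023"                  # assumed output of cal.get()
d = datetime.strptime(date1, "%d/%m/%Y")

# Build the day part manually instead of using %-d (Unix) or %#d (Windows)
new_format = f"{d.day}-{d:%b-%Y}"
print(new_format)                     # 5-May-2023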
3
2
76,207,341
2023-5-9
https://stackoverflow.com/questions/76207341/how-to-get-first-n-chars-from-a-str-column-in-python-polars
What's the Polars alternative to pandas' data['ColumnA'].str[:2]? pl.col('ColumnA').str[:3] throws a TypeError: 'ExprStringNameSpace' object is not subscriptable error.
You can use str.slice. It takes two arguments, offset and length. offset specifies the start index and length specifies the length of the slice. If set to None (default), the slice is taken to the end of the string.
>>> import polars as pl
>>>
>>> df = pl.DataFrame({"animal": ["Crab", "cat and dog", "rab$bit", None]})
>>> df
shape: (4, 1)
┌─────────────┐
│ animal      │
│ ---         │
│ str         │
╞═════════════╡
│ Crab        │
│ cat and dog │
│ rab$bit     │
│ null        │
└─────────────┘
>>> df.with_columns(pl.col("animal").str.slice(0, 3).alias("sub_string"))
shape: (4, 2)
┌─────────────┬────────────┐
│ animal      ┆ sub_string │
│ ---         ┆ ---        │
│ str         ┆ str        │
╞═════════════╪════════════╡
│ Crab        ┆ Cra        │
│ cat and dog ┆ cat        │
│ rab$bit     ┆ rab        │
│ null        ┆ null       │
└─────────────┴────────────┘
6
10
76,200,452
2023-5-8
https://stackoverflow.com/questions/76200452/error-while-iterating-over-dataframe-columns-entries-attributeerror-series
Using pandas version 2, I get an error when calling iteritems. for event_id, region in column.iteritems(): pass The following error message appears: Traceback (most recent call last): File "/home/analyst/anaconda3/envs/outrigger_env/lib/python3.10/site- packages/outrigger/io/gtf.py", line 185, in exon_bedfiles for event_id, region in column.iteritems()) AttributeError: 'Series' object has no attribute 'iteritems'
iteritems was removed in 2.0.0 by GH45321. Removed deprecated Series.iteritems()... use obj.items instead. (from v2.0.0 release notes) You can use items wherever you used iteritems: for event_id, region in column.items(): pass
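The same rename applies to DataFrames, where iteritems() was removed in the same release; DataFrame.items() yields (column_name, Series) pairs. A small illustrative sketch (the sample frame is made up):
import pandas as pd

df = pd.DataFrame({"region": ["EU", "US"], "count": [3, 5]})
for name, col in df.items():
    print(name, list(col))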
23
44
76,195,972
2023-5-7
https://stackoverflow.com/questions/76195972/aspect-sentiment-analysis-using-hugging-face
I am new to transformers models and trying to extract aspect and sentiment for a sentence but having issues from transformers import AutoTokenizer, AutoModelForSequenceClassification model_name = "yangheng/deberta-v3-base-absa-v1.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) text = "The food was great but the service was terrible." inputs = tokenizer(text, return_tensors="pt") outputs = model(**inputs) I am able to get the tensor what I need is the output to extract the aspect and sentiment for the overall sentence I tried this however getting error sentiment_scores = outputs.logits.softmax(dim=1) aspect_scores = sentiment_scores[:, 1:-1] aspects = [tokenizer.decode([x]) for x in inputs["input_ids"].squeeze()][1:-1] sentiments = ['Positive' if score > 0.5 else 'Negative' for score in aspect_scores.squeeze()] for aspect, sentiment in zip(aspects, sentiments): print(f"{aspect}: {sentiment}") I am looking for below o/p or similar o/p I am unable to write the logic as to how extract aspect and sentiment text -The food was great but the service was terrible aspect- food ,sentiment positive aspect - service, sentiment negative or at overall level aspect - food, sentiment positive
The model you are trying to use predicts the sentiment for a given aspect based on a text. That means, it requires text and aspect to perform a prediction. It was not trained to extract aspects from a text. You could use a keyword extraction model to extract aspects (compare this SO answer). import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModelForSequenceClassification model_name = "yangheng/deberta-v3-base-absa-v1.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) aspects = ["food", "service"] text = "The food was great but the service was terrible." sentiment_aspect = {} for aspect in aspects: inputs = tokenizer(text, aspect, return_tensors="pt") with torch.inference_mode(): outputs = model(**inputs) scores = F.softmax(outputs.logits[0], dim=-1) label_id = torch.argmax(scores).item() sentiment_aspect[aspect] = (model.config.id2label[label_id], scores[label_id].item()) print(sentiment_aspect) Output: {'food': ('Positive', 0.9973154664039612), 'service': ('Negative', 0.9935430288314819)}
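If the aspects are not known in advance, one possible (and entirely optional) pre-processing step is to use noun chunks as candidate aspects and then score each candidate with the loop above; spaCy and the en_core_web_sm model are my own assumptions here, not part of the ABSA model:
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = "The food was great but the service was terrible."
# take the head noun of each noun chunk as a candidate aspect
candidate_aspects = {chunk.root.text.lower() for chunk in nlp(text).noun_chunks}
print(candidate_aspects)  # {'food', 'service'}
The resulting set can then be passed in as the aspects list in the code above.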
3
1
76,208,101
2023-5-9
https://stackoverflow.com/questions/76208101/is-there-a-way-to-change-the-html-textcontent-from-inside-the-pyodide-runpython
So for example I have a for loop inside my pyodide script that is inside my .html document. Is there a ways to change textContent of a div directly from pyodide for loop. In the example below, only the last value of the for loop (in my case 99) is then sent to "myDiv". Is it even possible to change the textContent directly from pyodide script? <head> <script src="https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js"></script> </head> <body> <div id="myDiv">Text that needs to change</div> <script> async function main() { let pyodide = await loadPyodide(); return pyodide; } let pyodideReadyPromise = main(); async function pythonChange() { let pyodide = await pyodideReadyPromise; pyodide.runPython(` from js import document print("started") for i in range(100): print(i) document.getElementById("myDiv").textContent = i print("finished") `) } pythonChange(); </script> </body>
The issue isn't that the textContent isn't being changed; the issue is that there's no opportunity for the screen to update to actually display the change. The solution is to use a coroutine.
Confirming that Changes Do Happen
To observe that the textContent is indeed changing, we can add a small Mutation Observer to the very beginning of your script tag. This will log to the browser dev console any changes to the observed DOM node (setting textContent replaces the div's child text node, so the observer listens for childList mutations):
function callback(mutationList, observer){
    mutationList.forEach(mutation => console.log(mutation.target.textContent))
}
const MO = new MutationObserver(callback)
MO.observe(document.getElementById("myDiv"), { childList: true })
With this added code, the console fills with started, 0, 1, ..., 98, 99, finished. So the textContent is, in fact, changing, which is good.
Slowing Things Down to Human Speed
You might think that the code is simply proceeding too fast for your eyes to see the numbers change, but that's not what's happening either. Let's slow things down by modifying your for loop:
for i in range(100):
    print(i)
    document.getElementById("myDiv").textContent = i
    for j in range(1_000_000):
        _ = 1
Now your loop has to "do a little useless work" before it advances to the next number. (You may need to change 1_000_000 to a larger or smaller number, depending on your system.) If you open the dev console, you'll see the numbers 0 to 99 appearing at a more measured pace. But the text on the page doesn't update until the Python code has finished. So what gives?
The Real Issue
The issue is that while updates to the DOM are synchronous (i.e. no further code will be executed until the DOM update is complete), updates to the screen are asynchronous. What's more, the entire call to runPython() is synchronous, so no updates to the screen will occur until runPython terminates.
Essentially, the call to runPython is a blocking call, and nothing else can happen on the page - screen updates and repainting, other JavaScript calls, etc - until runPython returns. This blog post gives a good high-level explanation of the interaction between synchronous code and visible changes on screen.
The Solution
So, if the screen can't update until our synchronous code call terminates, what can we do? Make our code asynchronous! By turning our code into a coroutine which occasionally yields back to the browser's event loop to do some work (i.e. update the screen), we can see the updates visibly as they happen.
Pyodide has a nifty utility for this in the form of the runPythonAsync function, which allows you to write async code without resorting to wrapping your code into a coroutine. Here's a description of this feature and its purpose from when it was used in PyScript.
Here's a final code sample, which would replace the entire call to pyodide.runPython in your original example. I've left the slowdown code in place so that the results are visible, but there's no need for it to be there in production.
pyodide.runPythonAsync(`
    from js import document
    from asyncio import sleep
    print("started")
    for i in range(100):
        print(i)
        document.getElementById("myDiv").textContent = i
        await sleep(0.01)
        for j in range(1_000_000):
            _ = 1
    print("finished")
`)
3
3
76,213,454
2023-5-9
https://stackoverflow.com/questions/76213454/hide-ultralytics-yolov8-model-predict-output-from-terminal
I have this output that was generated by model.predict():
0: 480x640 1 Hole, 234.1ms
Speed: 3.0ms preprocess, 234.1ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 640)
0: 480x640 1 Hole, 193.6ms
Speed: 3.0ms preprocess, 193.6ms inference, 3.5ms postprocess per image at shape (1, 3, 640, 640)
...
How do I hide this output in the terminal? I can't find the information in the official documentation: https://docs.ultralytics.com/modes/predict/#arguments
The Ultralytics docs are sadly not up to date, as of today. The right way of doing this is: from ultralytics import YOLO model = YOLO('yolov8m-seg.pt') results = model.predict(source='0', verbose=False) for result in results: masks = result.masks.masks print(masks.shape) Notice the verbose=False argument. This won't print the default output ... 0: 480x640 1 Hole, 193.6ms Speed: 3.0ms preprocess, 193.6ms inference, 3.5ms postprocess per image at shape (1, 3, 640, 640) ... In this case only: ... torch.Size([4, 480, 640]) ...
9
17
76,193,660
2023-5-7
https://stackoverflow.com/questions/76193660/in-python-can-i-return-a-child-instance-when-instantiating-its-parent
I have a zoo with animals, represented by objects. Historically, only the Animal class existed, with animal objects being created with e.g. x = Animal('Bello'), and typechecking done with isinstance(x, Animal). Recently, it has become important to distinguish between species. Animal has been made an ABC, and all animal objects are now instances of its subclasses such as Dog and Cat. This change allows me to create an animal object directly from one of the subclasses, e.g. with dog1 = Dog('Bello') in the code below. This is cheap, and I can use it as long as I know what kind of animal I'm dealing with. Typechecking isinstance(dog1, Animal) still works as before. However, for usibility and backwards compatibility, I also want to be able to call dog2 = Animal('Bello'), have it (from the input value) determine the species, and return a Dog instance - even if this is computationally more expensive. I need help with the second method. Here is my code: class Animal: def __new__(cls, name): if cls is not Animal: # avoiding recursion return super().__new__(cls) # Return one of the subclasses if name.lower() in ['bello', 'fido', 'bandit']: # expensive tests name = name.title() # expensive data correction return Dog(name) elif name.lower() in ['tiger', 'milo', 'felix']: # ... name = property(lambda self: self._name) present = lambda self: print(f"{self.name}, a {self.__class__.__name__}") # ... and (many) other methods that must be inherited class Dog(Animal): def __init__(self, name): self._name = f"Mr. {name}" # cheap data correction # ... and (few) other dog-specific methods class Cat(Animal): def __init__(self, name): self._name = f"Dutchess {name}" # cheap data correction # ... and (few) other cat-specific methods dog1 = Dog("Bello") dog1.present() # as expected, prints 'Mr. Bello, a Dog'. dog2 = Animal("BELLO") dog2.present() # unexpectedly, prints 'Mr. BELLO, a Dog'. Should be same. Remarks: In my use-case, the second creation method is by far the more important one. What I want to achieve is that calling Animal return a subclass, Dog in this case, initialized with manipulated arguments (name, in this case) So, I'm looking for a way to keep the basic structure of the code above, where the parent class can be called, but just always returns a child instance. Of course, this is a contrived example ;) Many thanks, let me know if more information is helpful. Suboptimal solutions factory function def create_animal(name) -> Animal: # Return one of the subclasses if name.lower() in ['bello', 'fido', 'bandit']: name = name.title() return Dog(name) elif name.lower() in ['tiger', 'milo', 'felix']: # ... class Animal: name = property(lambda self: self._name) present = lambda self: print(f"{self.name}, a {self.__class__.__name__}") # ... and (many) other methods that must be inherited class Dog(Animal): # ... This breaks backward compatibility by no longer allowing the creation of animals with a Animal() call. Typechecking is still possible I prefer the symmetry of being able to call a specific species, with Dog(), or use the more general Animal(), in the exact same way, which does not exist here. factory funcion, alternative Same as previous, but change the name of the Animal class to AnimalBase, and the name of the create_animal function to Animal. This fixes the previous problem, but breaks backward compatibility by no longer allowing typechecking with isinstance(dog1, Animal).
In the end, I went with a class decorator: # defining the decorator def dont_init_twice(Class): """Decorator for child classes of Animal, to allow Animal to return a child instance.""" original_init = Class.__init__ Class._initialized = False def wrapped_init(self, *args, **kwargs): if not self._initialized: object.__setattr__(self, "_initialized", True) #works also on frozen dataclasses original_init(self, *args, **kwargs) Class.__init__ = wrapped_init return Class # decorating the child classes @dont_init_twice class Dog: def __init__(self, name): self._name = f"Mr. {name}" @dont_init_twice class Cat: def __init__(self, name): self._name = f"Dutchess {name}" IMO, this is the cleanest and least invasive solution.
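A minimal sketch of how this decorator combines with the __new__ dispatch from the question, assuming the dont_init_twice decorator above is in scope; the name lists and the Cat fallback are illustrative only:
class Animal:
    def __new__(cls, name):
        if cls is not Animal:                      # subclass called directly
            return super().__new__(cls)
        if name.lower() in ('bello', 'fido', 'bandit'):
            return Dog(name.title())               # already runs Dog.__init__
        return Cat(name.title())

    present = lambda self: print(f"{self._name}, a {self.__class__.__name__}")

@dont_init_twice
class Dog(Animal):
    def __init__(self, name):
        self._name = f"Mr. {name}"

@dont_init_twice
class Cat(Animal):
    def __init__(self, name):
        self._name = f"Dutchess {name}"

Dog("Bello").present()      # Mr. Bello, a Dog
Animal("BELLO").present()   # Mr. Bello, a Dog (Dog.__init__ is not run a second time)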
4
0
76,199,653
2023-5-8
https://stackoverflow.com/questions/76199653/valueerror-run-not-supported-when-there-is-not-exactly-one-input-key-got-q
Getting the error while trying to run a langchain code. ValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents']. Traceback: File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script exec(code, module.__dict__) File "D:\Python Projects\POC\Radium\Ana\app.py", line 49, in <module> answer = question_chain.run(formatted_prompt) File "c:\users\aviparna.biswas\appdata\local\programs\python\python37\lib\site-packages\langchain\chains\base.py", line 106, in run f"`run` not supported when there is not exactly one input key, got ['question', 'documents']." My code is as follows. import os from apikey import apikey import streamlit as st from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain, SequentialChain #from langchain.memory import ConversationBufferMemory from docx import Document os.environ['OPENAI_API_KEY'] = apikey # App framework st.title('🦜🔗 Colab Ana Answering Bot..') prompt = st.text_input('Plug in your question here') # Upload multiple documents uploaded_files = st.file_uploader("Choose your documents (docx files)", accept_multiple_files=True, type=['docx']) document_text = "" # Read and combine Word documents def read_docx(file): doc = Document(file) full_text = [] for paragraph in doc.paragraphs: full_text.append(paragraph.text) return '\n'.join(full_text) for file in uploaded_files: document_text += read_docx(file) + "\n\n" with st.expander('Contextual Prompt'): st.write(document_text) # Prompt template question_template = PromptTemplate( input_variables=['question', 'documents'], template='Given the following documents: {documents}. Answer the question: {question}' ) # Llms llm = OpenAI(temperature=0.9) question_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer') # Show answer if there's a prompt and documents are uploaded if prompt and document_text: formatted_prompt = question_template.format(question=prompt, documents=document_text) answer = question_chain.run(formatted_prompt) st.write(answer['answer']) I have gone through the documentations and even then I am getting the same error. I have already seen demos where multiple prompts are being taken by langchain.
For a prompt with multiple inputs, use predict() instead of run(), or just call the chain directly. (Note: Requires Python 3.8+) prompt_template = "Tell me a {adjective} joke and make it include a {profession}" llm_chain = LLMChain( llm=OpenAI(temperature=0.5), prompt=PromptTemplate.from_template(prompt_template) ) # Option 1 llm_chain(inputs={"adjective": "corny", "profession": "plumber"}) # Option 2 llm_chain.predict(adjective="corny", profession="plumber") Also note that you only need to assign the PromptTemplate at the moment you're instantiating the LLMChain - after that you're just passing in the template variables - in your case, documents and question (instead of passing in the formatted template, as you have currently).
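Applied to the chain from the question (question_chain, prompt and document_text are the names used there; this is an untested sketch), the final Streamlit block becomes:
if prompt and document_text:
    # pass the template variables directly; no manual .format() needed
    result = question_chain({"question": prompt, "documents": document_text})
    st.write(result["answer"])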
4
4
76,208,544
2023-5-9
https://stackoverflow.com/questions/76208544/networkx-digraph-nodes-set-the-border-and-inner-color-and-format-text
I'm using Django, and in my view I generate a diagram of relations between parties. The problem is with the following: nx.set_node_attributes(G, fill_coloring, name="background") But this unfortunately doesn't work. I'm unable to find in the documentation how I can set it properly. What I want to achieve is to direct the coloring / formatting of each node based on masterdata related to it. nx.set_node_attributes(G, border_coloring, name="color") With the above I'm able to set the border color, but not the inner color. What am I overlooking? Ideally later on by adding information to the lines etc., but for now I want to color the nodes themselves in specific colors. Currently, it only colors the border. What am I overlooking / misunderstanding? def index(request): filename = f'{datetime.now():%Y-%m-%d_%H-%M-%S}.png' save_path = 'static/app/master/routing/' completeName = os.path.join(save_path, filename) print(completeName) G = nx.DiGraph() routings = Customer_Routing.objects.all() end_customers = [] border_coloring = {} fill_coloring = {} for e in routings: G.add_edge(e.customer_start.name, e.customer_end.name) if e.customer_end.end_customer == True: try: print(e.customer_end.customer_class.name) if e.customer_end.customer_class.name == "DIY": border_coloring[e.customer_end.name] = "#DAC85A" fill_coloring[e.customer_end.name] = "#FEF19C" except AttributeError: print(e.customer_end.name + " has no class.") ### Define the customer coloring nx.set_node_attributes(G, border_coloring, name="color") nx.set_node_attributes(G, fill_coloring, name="background") A = to_agraph(G) A.draw(completeName, prog='dot') template = loader.get_template('index.html') context = { 'img': completeName, 'routing': routings, } return HttpResponse(template.render(context, request)) EDIT: I found the answer, shortly after posting this. Graphviz needs the style to be set to "filled". Below solved this: nx.set_node_attributes(G, {e.customer_end.name: "filled"}, name="style")
Thanks @Paul Brodersen; below posted as accepted answer:
I found the answer shortly after posting this. Graphviz needs the style to be set to "filled". The line below solved it:
nx.set_node_attributes(G, {e.customer_end.name: "filled"}, name="style")
Additionally, I changed the model to add colors based on the segmentation. Now the for loop looks as follows:
nx.set_node_attributes(G, {e.customer_end.name: "filled"}, name="style")
border_coloring[e.customer_end.name] = e.customer_end.customer_segment.border_color
fill_coloring[e.customer_end.name] = e.customer_end.customer_segment.fill_color
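For reference, a minimal self-contained sketch of filled, colored Graphviz nodes via networkx (requires pygraphviz; the node names and hex colors are made up). style, color and fillcolor are the standard Graphviz node attributes, so fillcolor is what replaces the "background" name tried in the question:
import networkx as nx
from networkx.drawing.nx_agraph import to_agraph

G = nx.DiGraph()
G.add_edge("supplier", "DIY customer")

nx.set_node_attributes(G, {"DIY customer": "filled"}, name="style")
nx.set_node_attributes(G, {"DIY customer": "#DAC85A"}, name="color")      # border
nx.set_node_attributes(G, {"DIY customer": "#FEF19C"}, name="fillcolor")  # interior

A = to_agraph(G)
A.draw("routing.png", prog="dot")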
3
0
76,174,677
2023-5-4
https://stackoverflow.com/questions/76174677/correctly-displaying-tooltips-with-rounded-corners-with-pyqt5
I need to properly display a tooltip with rounded corners using PyQt5. The default behavior whenever using the border-radius: 8px; in the stylesheet of a QToolTip keeps the tooltip rendered as a rectangle, and the border-radius property only impacts the shape of the border, which is rather ugly. This question is following this question : How to make Qtooltip corner round in pyqt5 which was left partially unanswered. Based on this answer, I tried this code : from PyQt5 import QtCore, QtGui, QtWidgets import sip class ProxyStyle(QtWidgets.QProxyStyle): def styleHint(self, hint, opt=None, widget=None, returnData=None): if hint == self.SH_ToolTip_Mask and widget: if super().styleHint(hint, opt, widget, returnData): # the style already creates a mask return True returnData = sip.cast(returnData, QtWidgets.QStyleHintReturnMask) src = QtGui.QImage(widget.size(), QtGui.QImage.Format_ARGB32) src.fill(QtCore.Qt.transparent) widget.render(src) mask = QtGui.QRegion(QtGui.QBitmap.fromImage( src.createHeuristicMask())) if mask == QtGui.QRegion(opt.rect.adjusted(1, 1, -1, -1)): # if the stylesheet doesn't set a border radius, create one x, y, w, h = opt.rect.getRect() mask = QtGui.QRegion(x + 4, y, w - 8, h) mask += QtGui.QRegion(x, y + 4, w, h - 8) mask += QtGui.QRegion(x + 2, y + 1, w - 4, h - 2) mask += QtGui.QRegion(x + 1, y + 2, w - 2, h - 4) returnData.region = mask return 1 return super().styleHint(hint, opt, widget, returnData) app = QtWidgets.QApplication([]) app.setStyle(ProxyStyle()) palette = app.palette() app.setStyleSheet(''' QToolTip { color: black; background: white; border: 1px solid black; border-radius: 8px; } ''') test = QtWidgets.QPushButton('Hover me', toolTip='Tool tip') test.show() app.exec() However this does not work as expected : the produced tooltip does not change from the default one. After some debugging, I noticed that sizeHint() logic is actually never executed ; super().styleHint(hint, opt, widget, returnData) always returns True. How can I render a tooltip with rounded corners ? Edit : I am running Windows 11, and it seems that it is important (see answer)
It seems that by default, Windows 11 creates a mask for tooltips ; therefore if super().styleHint(hint, opt, widget, returnData): is always True. If we want to run the logic generating the mask, we have to remove that block. In order to render the tooltip, to get a mask to clip it later, we also have to prevent infinite recursion ; widget.render would call this styleHint which would in turn call widget.render. Therefore we have to add a recursive check. Lastly, due to the antialiased pixels of the border, the produced mask is a tiny bit too large. To fix that issue, we have to crop the mask of 1px in every direction, which can be done by downscaling the mask by 2 pixels in both directions and then translate the QRegion by 1px. Here is a working code (at least on Windows 11, with PyQt5): class ProxyStyle(QtWidgets.QProxyStyle): _recursive_check = False def styleHint(self, hint, opt=None, widget=None, returnData=None): if hint == self.SH_ToolTip_Mask and widget and not self._recursive_check: self._recursive_check = True returnData = sip.cast(returnData, QtWidgets.QStyleHintReturnMask) src = QtGui.QImage(widget.size(), QtGui.QImage.Format_ARGB32) src.fill(QtCore.Qt.transparent) widget.render(src) image = src.createHeuristicMask() bitmap = QtGui.QBitmap.fromImage(image) mask = QtGui.QRegion(bitmap) x, y, w, h = opt.rect.getRect() if mask == QtGui.QRegion(opt.rect.adjusted(1, 1, -1, -1)): mask = QtGui.QRegion(x + 4, y, w - 8, h) mask += QtGui.QRegion(x, y + 4, w, h - 8) mask += QtGui.QRegion(x + 2, y + 1, w - 4, h - 2) mask += QtGui.QRegion(x + 1, y + 2, w - 2, h - 4) image = image.scaled(image.width()-2, image.height()-2) bitmap = QtGui.QBitmap.fromImage(image) mask = QtGui.QRegion(bitmap).translated(1, 1) returnData.region = mask self._recursive_check = False return 1 return super().styleHint(hint, option=opt, widget=widget, returnData=returnData) app = QtWidgets.QApplication([]) app.setStyle(ProxyStyle()) palette = app.palette() app.setStyleSheet(''' QToolTip {{ color: {fgd}; background: {bgd}; border: 2px solid black; border-radius: 13px; }} '''.format( fgd=palette.color(palette.ToolTipText).name(), bgd=palette.color(palette.ToolTipBase).name(), border=palette.color(palette.ToolTipBase).darker(110).name() )) test = QtWidgets.QPushButton('Hover me', toolTip='Tool tip') test.show() app.exec()
3
3
76,180,798
2023-5-5
https://stackoverflow.com/questions/76180798/jenkins-job-failing-for-urllib3-valueerror-timeout-value-connect-was-object-o
As of May 4, 2023, at 16:00, I started seeing one of our Jenkins job failing with the following error: Traceback (most recent call last): File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 822, in get_info return json.loads(self.jenkins_open( File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 560, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 576, in jenkins_request self.maybe_add_crumb(req) File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 373, in maybe_add_crumb response = self.jenkins_open(requests.Request( File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 560, in jenkins_open return self.jenkins_request(req, add_crumb, resolve_auth).text File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 579, in jenkins_request self._request(req)) File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/jenkins/__init__.py", line 553, in _request return self._session.send(r, **_settings) File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/requests/adapters.py", line 483, in send timeout = TimeoutSauce(connect=timeout, read=timeout) File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/urllib3/util/timeout.py", line 119, in __init__ self._connect = self._validate_timeout(connect, "connect") File "/home/jenkins/agent/workspace/my-jenkins-job/.tox/appdev-ci-staging/lib/python3.9/site-packages/urllib3/util/timeout.py", line 156, in _validate_timeout raise ValueError( ValueError: Timeout value connect was <object object at 0x7efe5adb9aa0>, but it must be an int, float or None. As nothing had changed on my side in my configuration, it looks like an upstream issue. I was using requests Python library in my job and requests uses urllib3. How can we fix this?
A new version 2.0.2 of urllib3 was released on May 4, 2023, which can be seen here: urllib3 2.0.2 - Release history. As my job installs the Python libraries at the beginning of the job using pip in a virtual Python environment, it started installing the latest version of urllib3, which had some issues. So, it looks like an upstream issue. I fixed it by adding urllib3>=1.26.15,<2 to my requirements.txt file.
4
16
76,191,646
2023-5-6
https://stackoverflow.com/questions/76191646/how-to-annotate-grouped-bars-with-percent-for-each-index
I have created a bar plot with the following code. I would like to label the percentages adding up to ~100% for each user, as I've done in MSpaint for user1 and user2 below: import matplotlib.pyplot as plt import numpy as np import pandas as pd users = ['user1', 'user2', 'user3', 'user4', 'user5', 'user6', 'user7',\ 'user8', 'user9', 'user10', 'user11', 'user12'] NEG = [433, 1469, 1348, 2311, 522, 924, 54, 720, 317, 135, 388, 9] NEU = [2529, 4599, 4617, 4297, 1782, 2742, 61, 2640, 1031, 404, 1723, 76] POS = [611, 1149, 1262, 1378, 411, 382, 29, 513, 421, 101, 584, 49] data = {'Negative': NEG, 'Neutral': NEU, 'Positive': POS} df = pd.DataFrame(data, index=users) ax = df.plot(kind='bar', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue']) plt.tight_layout() plt.show() Following the advice from this thread, I've made the following attempts: Attempt 1 ax = df.plot(kind='bar', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue']) for p in ax.containers: ax.bar_label(p, fmt='%.1f%%', label_type='edge')` plt.tight_layout() plt.show() Attempt 2 ax = df.plot(kind='bar', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue']) for p in ax.patches: width = p.get_width() height = p.get_height() x, y = p.get_xy() ax.annotate(f'{height:.0%}', (x + width/2, y + height*1.02), ha='center') plt.tight_layout() plt.show() Attempt 3 ax = df.plot(kind='bar', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue']) for p in ax.containers: ax.bar_label(p, fmt='%.1f%%', label_type='edge') plt.tight_layout() plt.show()
Plot df, but create a dataframe of the percents, which can be used as custom labels for .bar_label. See How to add value labels on a bar chart for a thorough explanation, and additional examples, of .bar_label. # create a dataframe of percents percent = df.div(df.sum(axis=1), axis=0).mul(100).round(1) # plot ax = df.plot(kind='bar', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue'], width=0.9, figsize=(10, 8)) # add annotations for p in ax.containers: # get the current legend label, which is the column name label = p.get_label() # use the column name to access the correct labels from percent labels = percent[label].astype(str).add('%') # add the bar labels ax.bar_label(p, labels=labels, label_type='edge', rotation=90, fontsize=10, padding=3) # pad the spacing between the number and the edge of the figure ax.margins(y=0.1) plt.tight_layout() plt.show() The code for annotating a grouped horizontal bar chart, kind='barh', is exactly the same. # create a dataframe of percents percent = df.div(df.sum(axis=1), axis=0).mul(100).round(1) # plot ax = df.plot(kind='barh', ylabel='Number of Messages\nw/ <= 128 Characters',\ xlabel='Username', title='Discord Sentiment Analysis',\ color=['coral', 'khaki', 'skyblue'], width=0.9, figsize=(10, 8)) # add annotations for p in ax.containers: label = p.get_label() labels = percent[label].astype(str).add('%') ax.bar_label(p, labels=labels, label_type='edge', rotation=0, fontsize=10, padding=3) # pad the spacing between the number and the edge of the figure ax.margins(x=0.1) plt.tight_layout() plt.show() df Negative Neutral Positive user1 433 2529 611 user2 1469 4599 1149 user3 1348 4617 1262 user4 2311 4297 1378 user5 522 1782 411 user6 924 2742 382 user7 54 61 29 user8 720 2640 513 user9 317 1031 421 user10 135 404 101 user11 388 1723 584 user12 9 76 49 percent Negative Neutral Positive user1 12.1 70.8 17.1 user2 20.4 63.7 15.9 user3 18.7 63.9 17.5 user4 28.9 53.8 17.3 user5 19.2 65.6 15.1 user6 22.8 67.7 9.4 user7 37.5 42.4 20.1 user8 18.6 68.2 13.2 user9 17.9 58.3 23.8 user10 21.1 63.1 15.8 user11 14.4 63.9 21.7 user12 6.7 56.7 36.6
3
4
76,213,501
2023-5-9
https://stackoverflow.com/questions/76213501/python-packages-imported-in-editable-mode-cant-be-resolved-by-pylance-in-vscode
I have several Python packages developed locally, which I often use in VSCode projects. For these projects, I create a new virtualenv and install these packages with pip install -e ../path/to/package. This succeeds, and I can use these packages in the project. However, VSCode underlines the package's import line in yellow, with this error: Import "mypackage" could not be resolved Pylance(reportMissingImports) Again, mypackage works fine in the project, but VSCode reports that error, and I lose all autocomplete and type hint features when calling mypackage in the project. I ensured that the right Python interpreter is selected (the one from the project's virtualenv), but the error persists. The error and Pylance docs do not offer any other possible solutions. VSCode version: 1.78.0
Usually, when you choose the correct interpreter, Pylance should take effect immediately. You can try adding "python.analysis.extraPaths": ["path/to/your/package"] to your settings.json. You can also try clicking on the install prompt in VS Code to see which environment it would install the package into. I still believe the problem is caused by an incorrect interpreter selection.
22
9
76,201,333
2023-5-8
https://stackoverflow.com/questions/76201333/how-can-time-spent-in-asynchronous-generators-be-measured
I want to measure the time spent by a generator (the time it blocks the main loop). Let's say that I have the following two generators:
async def run():
    for i in range(5):
        await asyncio.sleep(0.2)
        yield i
    return


async def walk():
    for i in range(5):
        time.sleep(0.3)
        yield i
    return
I want to measure that run spent around 0.0s per iteration, while walk used at least 0.3s. I wanted to use something similar to this, but wasn't able to make it work for me.
Clarification: I want to exclude the time spent in any await section. If for any reason the coroutine is halted, then I don't want to take this time into account.
So - one thing (a bit though) is to measure times in regular co-routines - I got it going with a decorator. However, when you go a step ahead into async-generators, it is another beast - I am still trying to figure it out - it is a mix of public-exposed async-iterator methods (__anext__, asend, etc...) with traditional iterator methods in the objects returned, I could not yet figure out (I've just opened PEP 525 to see if I can make sense of it). As for regular co-routines, there is a gotcha: if you make an awaitable class (my decorator), asyncio will demand its __await__ method returns something with a __next__ method, which will be called. But native Python co-routines have no __next__: asyncio calls send() on those - so I had to make this "transpose" in order to be able to measure the time (in co-routines). import asyncio import time class ProfileCoro: def __init__(self, corofunc): self.corofunc = corofunc def __call__(self, *args, **kw): # WARNING: not parallel execution-safe: fix by # keeping "self.ellapsed" in a proper contextvar self.ellapsed = 0 self.coro = self.corofunc(*args, **kw) return self def __await__(self): return self def __iter__(self): return self def __next__(self): return self.send(None) def throw(self, exc): print(f"Arghh!, got an {exc}") self.coro.throw(exc) def send(self, arg): start = time.monotonic() try: result = self.coro.send(arg) except StopIteration: duration = time.monotonic() - start self.ellapsed += duration print(f"Ellapsed time in execution of {self.corofunc.__name__}: {self.ellapsed:.05f}s") raise duration = time.monotonic() - start self.ellapsed += duration return result def __repr__(self): return f"<ProfileCoro wrapper for {self.corofunc}>" @ProfileCoro async def run(): for i in range(5): await asyncio.sleep(0.2) # yield i return 1 @ProfileCoro async def walk(): for i in range(5): time.sleep(0.3) #yield i return 3 async def main(): await run() await walk() return asyncio.run(main()) To maybe be continued, if I can figure out how to wrap the async generators. (I think most existing profiling tools use the tooling available in the language for debuggers and tracing (enabled with sys.settrace() : everything is just "visible" in a callback for them, and no worries about wrapping all the inner the calls made by the async machinery and the asyncio loop) ... So, here is the code instrumented to also catch the time in async-generators. It will get the happy path - if there are complicated awaitable classes, implementing or making use of asend, athrow, this won't do - but for a simple async generator function plugget to an async for statement it now works: DISCLAIMER: there might be unused code, and even unused states in the code bellow - I went back and forth quite a bit to get it working (a lot of it because I had not attempted to the fact __anext__ had to be async itself). 
Nonetheless here it go: import asyncio import time from functools import wraps async def _a(): yield 1 async_generator_asend_type = type(_a().__anext__()) class ProfileCoro: def __init__(self, corofunc): self.corofunc = corofunc self.measures = 0 def measure_ellapsed(func): @wraps(func) def wrapper(self, *args, **kw): self.measures += 1 if self.measures > 1: try: return func(self, *args, **kw) finally: self.measures -= 1 start = time.monotonic() try: result = func(self, *args, **kw) except (StopIteration, StopAsyncIteration): self.ellapsed += time.monotonic() - start #self.print_ellapsed() raise finally: self.measures -= 1 self.ellapsed += time.monotonic() - start return result return wrapper def print_ellapsed(self): name = getattr(self.corofunc, "__name__", "inner_iterator") print(f"Ellapsed time in execution of {name}: {self.ellapsed:.05f}s") def __call__(self, *args, **kw): # WARNING: not parallel execution-safe: fix by # keeping "self.ellapsed" in a proper contextvar self.ellapsed = 0 self.measures = 0 if not isinstance(self.corofunc, async_generator_asend_type): self.coro = self.corofunc(*args, **kw) else: self.coro = self.corofunc return self def __await__(self): return self def __iter__(self): return self @measure_ellapsed def __next__(self): target = self.coro if hasattr(target, "__next__"): return target.__next__() elif hasattr(target, "send"): return target.send(None) async def athrow(self, exc): print(f"Arghh!, got an async-iter-mode {exc}") return await self.async_iter.athrow(exc) def throw(self, exc): print(f"Arghh!, got an {exc}") self.coro.throw(exc) @measure_ellapsed def send(self, arg): return self.coro.send(arg) def __aiter__(self): return self #async def asend(self, value): ... async def aclose(self): return await self.async_iter.close() def close(self): return self.async_iter.close() async def __anext__(self): if not hasattr(self, "async_iter"): self.async_iter = aiter(self.coro) self.inner_profiler = ProfileCoro(self.async_iter.__anext__()) #start = time.monotonic() try: result = await self.inner_profiler() except StopAsyncIteration: #self.print_ellapsed() raise finally: self.ellapsed += self.inner_profiler.ellapsed return result def __repr__(self): return f"<ProfileCoro wrapper for {self.corofunc}>" @ProfileCoro async def run(): for i in range(5): await asyncio.sleep(0.05) # yield i return 1 @ProfileCoro async def walk(): for i in range(5): time.sleep(0.05) #yield i return 3 @ProfileCoro async def irun(): for i in range(5): await asyncio.sleep(0.05) yield i @ProfileCoro async def iwalk(): for i in range(5): time.sleep(0.05) yield i async def main(): await run() run.print_ellapsed() await walk() walk.print_ellapsed() async for _ in irun(): print(".", end="", flush=True) irun.print_ellapsed() async for _ in iwalk(): pass iwalk.print_ellapsed() return asyncio.run(main())
3
2
76,196,220
2023-5-7
https://stackoverflow.com/questions/76196220/python-why-is-my-marginal-y-histogram-plot-changing-when-the-x-variable-is-chan
I am trying to use plotly to create a scatter plot where the x-variable can be selected from a drop-down menu. The scatter plot would also have marginal histogram plots. # Generate data and import libraries import pandas as pd import numpy as np import plotly.express as px import plotly.graph_objects as go # set the random seed np.random.seed(123) # generate arrays of random numbers a = np.random.rand(100) b = np.random.rand(100) c = np.random.rand(100) y = np.random.rand(100) # create a DataFrame with columns A, B, and C df_scatterplot = pd.DataFrame({'A': a, 'B': b, 'C': c, 'y':y}) df_scatterplot_nocc = df_scatterplot.loc[:, df_scatterplot.columns != 'y'] # print the DataFrame print(df_scatterplot) I have created a working script where the y-variable can be changed: # Plot scatter plot and marginal histograms fig = px.scatter(df_scatterplot, x='A', y='y', marginal_x="histogram", marginal_y="histogram") buttonlist = [] for col in df_scatterplot.columns: buttonlist.append( dict( args = [ {'y': [df_scatterplot[str(col)]]}, # update y variable {'yaxis.title.text': str(col)} # update y-axis title ], label=str(col), method='update' ) ) # Add dropdown menu fig.update_layout(updatemenus=[ go.layout.Updatemenu( buttons=buttonlist, x=0.75, xanchor="left", y=1.0, yanchor="top", ), ], ) fig.update_layout(autosize=False, width=1000, height=700,) fig.show() Output for varying y variable: As expected, only the marginal_y histogram is changing. The issue is when trying to modify my script so that the x-variable can be changed, resulting in the marginal_y histogram plot changing as well (it plots a marginal_x histogram for both x and y). It should not be changing. Modifying to: args = [ {'x': [df_scatterplot[str(col)]]}, # update x variable {'xaxis.title.text': str(col)} # update x-axis title ], Output for varying x variable now becomes: I have tried changing the method argument from update to restyle with no luck. Any help is appreciated; thank you.
You can fix this by specifying which traces to update, passing an array of trace indices as the third element of args for the update method:
args = [
    {'x': [df_scatterplot[str(col)]]},  # update x variable
    {'xaxis.title.text': str(col)},     # update x-axis title
    [0, 1]                              # update only traces 0 and 1 (preserve marginal_y)
],
The idea is to preserve the marginal_y histogram from the data update.
N.B. The plot ends up with 3 traces, indexed as follows:
0 the main scatter trace on subplot xy
1 the marginal_x histogram above the scatter on subplot x3y3
2 the marginal_y histogram on the right on subplot x2y2
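For completeness, this is the question's dropdown loop with only that change applied (variable names taken from the question's code):
buttonlist = []
for col in df_scatterplot.columns:
    buttonlist.append(
        dict(
            args=[
                {'x': [df_scatterplot[str(col)]]},  # update x variable
                {'xaxis.title.text': str(col)},     # update x-axis title
                [0, 1],                             # leave trace 2 (marginal_y) untouched
            ],
            label=str(col),
            method='update',
        )
    )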
3
2
76,209,122
2023-5-9
https://stackoverflow.com/questions/76209122/how-to-detect-a-windows-network-path
I am writing a script which runs a Windows executable file in a directory supplied by a user. If this directory is a network path, making it the current directory will look like it worked, but running any commands in this directory will fail:
>>> import os
>>> os.chdir(r'\\server\share\folder') # my actual name is different but similar
>>> os.getcwd()
'\\\\server\\share\\folder'
>>> os.system('echo %CD%')
'\\server\share\folder'
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
C:\Windows
0
I want to detect this situation in my script: if the user supplies a network path instead of a local path like c:\work\project42, give a meaningful error immediately instead of letting it fail later. How can I determine if a path is a Windows network path?
I agree that the simplest check is yourpathstring.startswith(r'\\'). However, this will be a false positive for any Linux path which starts with \\ (unlikely but possible) and a false negative when people use forward slashes: //server\share\folder.
It appears that the library function splitdrive was designed with network path support in mind. The network part is considered the "drive", so we just have to check whether it's a non-DOS drive:
import os

def is_network_path(path):
    drive = os.path.splitdrive(path)[0]
    if drive:
        return not drive.endswith(':')
    else:
        return False
Probably not worth the effort, but it "feels" more correct.
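A quick usage check of the function above (the results shown assume it runs on Windows, where os.path.splitdrive understands UNC paths; on POSIX it always returns an empty drive):
print(is_network_path(r"\\server\share\folder"))  # True
print(is_network_path("//server/share/folder"))   # True
print(is_network_path(r"C:\work\project42"))      # False
print(is_network_path("relative/path"))           # False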
3
4
76,197,174
2023-5-8
https://stackoverflow.com/questions/76197174/aws-elastic-beanstalk-cant-install-mysqlclient
I am going through the adding database to elastic beanstalk python documentation but following the steps leads to the following error 2023/05/07 14:06:36.847596 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr: error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [18 lines of output] /bin/sh: line 1: mysql_config: command not found /bin/sh: line 1: mariadb_config: command not found /bin/sh: line 1: mysql_config: command not found Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup.py", line 15, in <module> metadata, options = get_config() ^^^^^^^^^^^^ File "/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup_posix.py", line 70, in get_config libs = mysql_config("libs") ^^^^^^^^^^^^^^^^^^^^ File "/tmp/pip-install-7hpz1pvs/mysqlclient_f18de82744fa43e9b3b8706c3e581791/setup_posix.py", line 31, in mysql_config raise OSError("{} not found".format(_mysql_config_path)) OSError: mysql_config not found mysql_config --version mariadb_config --version mysql_config --libs [end of output] I did some digging and came to the conclusion that mysql or mariadb is not installed to the ec2 instance. So I started browsing the available packages for the Amazon Linux 2023 AMI since my ec2 is running that version and modified my config in .ebextensions folder to look like this: packages: yum: mysql-selinux: [] mariadb105: [] mariadb-connector-c: [] container_commands: 01_migrate: command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate" leader_only: true 02_createsuperuser: command: "source /var/app/venv/*/bin/activate && python3 manage.py createsuperuser --noinput --username aradipatrik2 --email [email protected]" leader_only: true 03_save_data_to_db: command: "source /var/app/venv/*/bin/activate && python3 manage.py save_data_to_db" leader_only: true option_settings: aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: env_info.settings The packages install successfully but I still get the same exact error. When I ssh into the ec2 device to install mysql or mariadb manually the terminal tells me: Changes made via SSH WILL BE LOST if the instance is replaced by auto-scaling So my question is: How can I install mysql or mariadb onto the ec2 instance when deploying the application using elastic beanstalk, so I can have mysqlclient==2.0.3 inside my requirements.txt My requirements.txt asgiref==3.6.0 branca==0.6.0 certifi==2022.12.7 charset-normalizer==3.1.0 Django==4.2.1 folium==0.14.0 idna==3.4 Jinja2==3.1.2 MarkupSafe==2.1.2 numpy==1.24.3 python-dotenv==1.0.0 requests==2.29.0 sqlparse==0.4.4 tzdata==2023.3 urllib3==1.26.15 mysqlclient==2.0.3 edit: Received feedback to install mariadb package with a name ending in -dev or -devel but there's no such package listed at the available packages list for the Amazon Linux 2023 AMI any further pointers would be appreciated!
After some trial and error, I ended up modifying my config file to start with this, which solved my issue:
packages:
  yum:
    mariadb105-devel: []
I found the package completely by chance. If someone knows how to discover these packages in the future, please do tell me.
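One way to discover candidate package names (a suggestion, not something verified in the original answer) is to query dnf directly on an Amazon Linux 2023 instance, for example over eb ssh; AL2023 uses dnf as its package manager, so a search along these lines should list the -devel packages:
# run on the AL2023 instance
sudo dnf search mariadb | grep -i devel
sudo dnf list available 'mariadb*devel*'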
3
9
76,208,396
2023-5-9
https://stackoverflow.com/questions/76208396/pyrebase4-error-cannot-import-name-gaecontrib
I have been trying to install pyrebase4 using pip install pyrebase4, but when run it throws the error below:
"C:\Users\ChodDungaTujheMadhadchod\anaconda3\envs\sam_upgraded\lib\site-packages\requests_toolbelt\adapters\appengine.py", line 42, in <module>
    from .._compat import gaecontrib
ImportError: cannot import name 'gaecontrib' from 'requests_toolbelt._compat'
As I see it, the error points to requests_toolbelt, but I cannot figure out how to fix it. I tried upgrading to the latest version, which is requests-toolbelt==1.0.0. So is there any way to fix this?
Okay, so what I found is that the latest requests-toolbelt 1.0.1 is currently throwing this issue. Downgrading it to the previous version, requests-toolbelt==0.10.1, fixes the issue.
7
12
76,202,488
2023-5-8
https://stackoverflow.com/questions/76202488/mockerfixture-has-no-attribute-assert-called-once-attr-defined
In module.py, I have: def func(x: int) -> str | None: if x > 9: return "OK" return None def main(x: int) -> str | None: return func(x) In test_module.py, I have: import pytest from pytest_mock import MockerFixture from module import main @pytest.fixture(name="func") def fixture_func(mocker: MockerFixture) -> MockerFixture: return mocker.patch("module.func", autospec=True) def test_main(func: MockerFixture) -> None: _ = main(11) func.assert_called_once() pytest says: 1 passed in 0.01s [100%] But mypy generates the following error: test_module.py:13:5: error: "MockerFixture" has no attribute "assert_called_once" [attr-defined] Found 1 error in 1 file What am I missing?
You have declared a wrong type for the func argument. The return type of mocker.patch is unittest.mock.MagicMock: import pytest from unittest.mock import MagicMock from pytest_mock import MockerFixture from module import main @pytest.fixture(name="func") def fixture_func(mocker: MockerFixture) -> MagicMock: return mocker.patch("module.func", autospec=True) def test_main(func: MagicMock) -> None: main(11) func.assert_called_once()
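As an optional extension, building only on the module.py and fixture shown above, the same MagicMock fixture also lets you control the return value and check the call arguments:
def test_main_return_value(func: MagicMock) -> None:
    func.return_value = "OK"
    assert main(11) == "OK"          # main just forwards to the patched func
    func.assert_called_once_with(11)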
3
4
76,180,770
2023-5-5
https://stackoverflow.com/questions/76180770/writing-integers-as-sum-of-kth-power-distinct-integers
Given an n integer (n >= 1), and a number k, return all possible ways to write n as kth power different integers. For instance, if n = 100 and k = 2: 100 = 1**2 + 3**2 + 4**2 + 5**2 + 7**2 = 6**2 + 8**2 = 10**2 or if k = 3: 100 = 1**3 + 2**3 + 3**3 + 4**3 So program(100,2) returns something like [(2, [1, 3, 4, 5, 7]), (2, [6, 8]), (2, [10])], and program(100,3) [(3, [1, 2, 3, 4])]. Everything works fine, as long as the input n is small, or k is "big" (>=3). My approach was to first get a list of all integers, whose kth power is <= n. def possible_powers(n,k): arr,i = [],1 while i**k <= n: arr.append(i) i += 1 return arr Then (and here's the mistake), I created all subsets of this list (as lists): def subsets(L): subsets = [[]] for i in L: subsets += [subset+[i] for subset in subsets] return subsets And finally I cycled through all these subsets, raising each element to the kth power and adding them together, selecting only the ones that add up to n. def kth_power_sum(arr,k): return sum([i**k for i in arr]) def solutions(n,k): return [(k,i) for i in subsets(possible_powers(n,k)) if kth_power_sum(i,k) == n] I know the problem is with the subset-creation, but I have no clue how to optimize this. Like, if I try solutions(1000,2), it creates a large set, which takes up more than 4GB memory. My guess would be to sieve out a few subsets, but that wouldn't help much, unless I have a very efficient sieve. Any help is greatly appreciated. If something isn't clear, or I made a mistake in posting this, please tell me.
If you implement this as a recursive generator, you won't need to store a large number of values (not even the results): def powersum(n,k,b=1): bk = b**k while bk < n: for bases in powersum(n-bk,k,b+1): yield [b]+bases b += 1 bk = b**k if bk == n : yield [b] print(*powersum(100,2)) # [1, 3, 4, 5, 7] [6, 8] [10] print(*powersum(100,3)) # [1, 2, 3, 4] print(sum(1 for _ in powersum(1000,2))) # 1269 solutions print(sum(1 for _ in powersum(2000,2))) # 27526 solutions (6 seconds) Note that this still has an exponential time complexity so it will be much slower for slightly larger values of n print(sum(1 for _ in powersum(2200,2))) # 44930 solutions (12 seconds) print(sum(1 for _ in powersum(2500,2))) # 91021 solutions (25 seconds) print(sum(1 for _ in powersum(2800,2))) # 175625 solutions (55 seconds) print(sum(1 for _ in powersum(3100,2))) # 325067 solutions (110 seconds) [EDIT] For future reference, here is Kelly Bundy's cached version which runs considerably faster. Posting it here in case his demo link gets broken: from functools import cache from time import time @cache def powersum(n,k,b=1): bk = b**k res = [] while bk < n: for bases in powersum(n-bk,k,b+1): res.append([b]+bases) b += 1 bk = b**k if bk == n : res.append([b]) return res Note: although the cache will consume some memory, it will be nowhere near the size the full powerset of all possible combinations.
3
5
76,197,265
2023-5-8
https://stackoverflow.com/questions/76197265/why-is-squaring-a-number-slower-than-multiplying-itself-by-itself
I was curious and decided to run this code in python: import time def timeit(function): strt = time.time() for _ in range(100_000_000): function() end = time.time() print(end-strt) @timeit def function1(): return 1 * 1 @timeit def function2(): return 1_000_000_000_000_000_000_000_000_000_000 * 1_000_000_000_000_000_000_000_000_000_000 @timeit def function3(): return 1_000_000_000_000_000_000_000_000_000_000 ** 2 Here are my results: 4.712368965148926 9.684480905532837 11.74640703201294 Why is the third function (squaring the number) slower than the second function (multiplying the number by itself)? What's going on in the computer, as I thought that doing exponents was simply multiplying a number by itself directly anyways?
Here's the implementation of int.__pow__, abridged to only show code that executes for this operation: /* pow(v, w, x) */ static PyObject * long_pow(PyObject *v, PyObject *w, PyObject *x) { PyLongObject *a, *b, *c; /* a,b,c = v,w,x */ int negativeOutput = 0; /* if x<0 return negative output */ PyLongObject *z = NULL; /* accumulated result */ Py_ssize_t i, j; /* counters */ PyLongObject *temp = NULL; PyLongObject *a2 = NULL; /* may temporarily hold a**2 % c */ PyLongObject *table[EXP_TABLE_LEN]; Py_ssize_t num_table_entries = 0; /* a, b, c = v, w, x */ CHECK_BINOP(v, w); a = (PyLongObject*)v; Py_INCREF(a); b = (PyLongObject*)w; Py_INCREF(b); if (PyLong_Check(x)) { /* Not executed */ } else if (x == Py_None) c = NULL; else { /* Not executed */ } if (Py_SIZE(b) < 0 && c == NULL) { /* Not executed */ } if (c) { /* Not executed */ } /* At this point a, b, and c are guaranteed non-negative UNLESS c is NULL, in which case a may be negative. */ z = (PyLongObject *)PyLong_FromLong(1L); if (z == NULL) ; /* Not executed */ /* Perform a modular reduction, X = X % c, but leave X alone if c * is NULL. */ #define REDUCE(X) \ do { \ if (c != NULL) { \ /* Not executed */ \ } \ } while(0) /* Multiply two values, then reduce the result: result = X*Y % c. If c is NULL, skip the mod. */ #define MULT(X, Y, result) \ do { \ temp = (PyLongObject *)long_mul(X, Y); \ if (temp == NULL) \ goto Error; \ Py_XDECREF(result); \ result = temp; \ temp = NULL; \ REDUCE(result); \ } while(0) i = Py_SIZE(b); digit bi = i ? b->ob_digit[i-1] : 0; digit bit; if (i <= 1 && bi <= 3) { /* aim for minimal overhead */ if (bi >= 2) { MULT(a, a, z); if (bi == 3) { /* Not executed */ } } else if (bi == 1) { /* Not executed */ } /* else bi is 0, and z==1 is correct */ } else if (i <= HUGE_EXP_CUTOFF / PyLong_SHIFT ) { /* Not executed */ } else { /* Not executed */ } if (negativeOutput && (Py_SIZE(z) != 0)) { /* Not executed */ } goto Done; Error: /* Not executed */ Done: for (i = 0; i < num_table_entries; ++i) Py_DECREF(table[i]); Py_DECREF(a); Py_DECREF(b); Py_XDECREF(c); Py_XDECREF(a2); Py_XDECREF(temp); return (PyObject *)z; } Even abridged like that, there's a lot of code checking various cases and manipulating references, because this function needs to be able to handle so much more than just a single multiplication. Buried in all that, inside the MULT macro, there's a single call to long_mul, the integer multiplication implementation: #define MULT(X, Y, result) \ do { \ temp = (PyLongObject *)long_mul(X, Y); \ ... When you multiply a number with itself with *, it gets to skip past everything I just posted and go straight to long_mul.
5
6
76,196,651
2023-5-7
https://stackoverflow.com/questions/76196651/how-to-sort-dataframe-result-based-on-column-values
import requests import pandas as pd url = "https://coinmarketcap.com/new/" page = requests.get(url,headers={'User-Agent': 'Mozilla/5.0'}, timeout=1) pagedata = page.text usecols = ["Name", "Price", "1h", "24h", "MarketCap", "Volume"]#, "Blockchain"] df = pd.read_html(pagedata)[0] #Checking table df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+", expand=True) df = df.rename(columns={"Fully Diluted Market Cap": "MarketCap"})[usecols] dfAsString = df.to_string(index=False) print(dfAsString) Current Code Output: (Truncated) Name Price 1h 24h MarketCap Volume 0 DollarPepe $0.02752 22.64% 336.25% $3 $456,913 1 Billy Token $0.00002822 41.69% 75.80% $1,958,942 $6,999,241 2 JEFF $0.1946 4.42% 226.18% $19,458,328 $19,744,583 3 PUG AI $0.00000001459 10.80% 15.84% $1,459,428 $239,454 4 FART COIN $0.0000004281 1.13% 42.13% $42,806,075 $46,604 [30 rows x 6 columns] How to produce the outpur sorted based on a particular column (24h)? -> Truncated Name Price 1h 24h MarketCap Volume 0 DollarPepe $0.02752 22.64% 336.25% $3 $456,913 2 JEFF $0.1946 4.42% 226.18% $19,458,328 $19,744,583 1 Billy Token $0.00002822 41.69% 75.80% $1,958,942 $6,999,241 4 FART COIN $0.0000004281 1.13% 42.13% $42,806,075 $46,604 3 PUG AI $0.00000001459 10.80% 15.84% $1,459,428 $239,454 [30 rows x 6 columns]
I would convert all the numeric columns in your df to numeric values; then you can sort them easily (you can always add back $ and % as required on display). numcols = df.columns[df.columns != 'Name'] df[numcols] = df[numcols].apply(lambda c:pd.to_numeric(c.str.replace(r'[^\d.]|(?<!\d)\.|\.(?!\d)', '', regex=True))) df = df.sort_values('24h', ascending=False) Output (for your sample data): Name Price 1h 24h MarketCap Volume 0 DollarPepe 2.752000e-02 22.64 336.25 3 456913 2 JEFF 1.946000e-01 4.42 226.18 19458328 19744583 1 Billy Token 2.822000e-05 41.69 75.80 1958942 6999241 4 FART COIN 4.281000e-07 1.13 42.13 42806075 46604 3 PUG AI 1.459000e-08 10.80 15.84 1459428 239454 Note the non-numeric character replacement is more complicated than just [^\d.] as would be implied by your sample data; this is because some of the other price values fetched from that page have ... in them (presumably because they are too small to represent). Ideally you need to figure out how to get those as exact values; otherwise they can only be approximated. Alternatively, you can sort on the values by converting them to floats before passing the series to sort_values: df = df.sort_values('24h', ascending=False, key=lambda v:v.str.replace('%', '').astype(float)) Output: Name Price 1h 24h MarketCap Volume 0 DollarPepe $0.02752 22.64% 336.25% $3 $456,913 2 JEFF $0.1946 4.42% 226.18% $19,458,328 $19,744,583 1 Billy Token $0.00002822 41.69% 75.80% $1,958,942 $6,999,241 4 FART COIN $0.0000004281 1.13% 42.13% $42,806,075 $46,604 3 PUG AI $0.00000001459 10.80% 15.84% $1,459,428 $239,454
3
2
76,194,973
2023-5-7
https://stackoverflow.com/questions/76194973/python-tkinter-cascaded-menu-command-not-executing
I have a problem, which I am trying to solve, and have reproduced it with the code below. The issue that I have, is that I can get the specified command to work from a main menu item, but when the same command is included in a cascade menu, it doesn't appear to execute. I'm not sure whether this is in anyway to do with my requirements, which are that I need to render a grid of buttons and attach a context menu to each. Here is some code which I contrived which demonstrate the issue: import tkinter as tk class App(tk.Tk): def __init__(self): super().__init__() self.title('Tkinter Validation Demo') self.create_widgets() @staticmethod def print_bg_color(button, button_id): colour = button.cget('bg') print(f'Button {button_id} colour is {colour}') @staticmethod def _context_menu(event: tk.Event = None, menu: tk.Menu = None): menu.tk_popup(event.x_root, event.y_root) def create_widgets(self): colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] for i in range(0, 7): button = tk.Button(master=self, bg=colors[i], width=10, height=10) button.grid(row=0, column=i) context_menu = tk.Menu(button, tearoff=False) # Add print option to main menu context_menu.add_command(label="Print colour", command=lambda btn=button, button_id=i: self.print_bg_color(button=btn, button_id=button_id)) sub_menu = tk.Menu(button, tearoff=False) # Add a print colour option on the sub menu sub_menu.add_command(label="Print colour", command=lambda btn=button, button_id=i: self.print_bg_color(button=btn, button_id=button_id)) context_menu.add_cascade(label='Cascade', menu=sub_menu) button.bind("<Button-3>", lambda event, menu=context_menu, button_id=i: self._context_menu(event, menu)) if __name__ == '__main__': app = App() app.mainloop() When the above code is run, it allows you to right click on any of the rendered buttons, and select the "Print color" from either the main context menu, or from a cascade option. The command bound to the event, simply obtains the colour of the button and prints it to the console. This works for the main context menu option, but the cascade menu entry does nothing, despite having the same command. Any suggestions gratefully received. Thanks. UPDATE: Having determined that this only seems to be happening on my Linux Mint environment running Python 3.8 (it works on my Windows 10 with Python 3.10), I ran the script suggested by Nordine in the comments: import tkinter from platform import python_version print(python_version()) root = tkinter.Tk() print(root.tk.call("info", "patchlevel")) The results showed as: 3.8.10 8.6.10 FURTHER UPDATE: I just upgraded to Python 3.10 on my Linux Mint machine, and it still isn't working :o/
The submenu in the code in the question is set up as an element of button:

sub_menu = tk.Menu(button, tearoff=False)

However, this causes the submenu to be displayed correctly but to not be clickable (reproduced on Ubuntu, Python 3.10.6, Tkinter 8.6). When you make the submenu an element of context_menu then it does work:

sub_menu = tk.Menu(context_menu, tearoff=False)
5
1
76,195,685
2023-5-7
https://stackoverflow.com/questions/76195685/how-to-replicate-value-based-on-distinct-column-values-from-a-different-df-pyspa
I have a df like: df1 = AA BB CC DD 1 X Y Z 2 M N O 3 P Q R I have another df like: df2 = BB CC DD G K O H L P I M Q I want to copy all the columns and rows of df2 for every distinct value of 'AA' column of df1 and get the resultant df as: df = AA BB CC DD 1 X Y Z 1 G K O 1 H L P 1 I M Q 2 M N O 2 G K O 2 H L P 2 I M Q 3 P Q R 3 G K O 3 H L P 3 I M Q What I am doing right now is: AAs = df1.select("AA").distinct().rdd.flatMap(lambda x: x).collect() out= [] for i in AAs: dff = df1.filter(col('AA')==i) temp_df = (df1.orderBy(rand()) .withColumn('AA', lit(i)) ) out.append(temp_df) df = reduce(DataFrame.unionAll, out) Which is taking extremely long time and failing the cluster as these are mock dataframes, actual dataframes are quite large in dimension. Any Pysparky way of doing it? Thanks in advance.
This would work: resultDf= df.select("AA")\ .crossJoin(df2)\ .union(df) # No Need to order the actual result, this is just for displaying this example. resultDf.orderBy("AA").show() Although, this would still be a huge operation and can be expensive on the cluster. Input DF1: +---+---+---+---+ | AA| BB| CC| DD| +---+---+---+---+ | 1| X| Y| Z| | 2| M| N| O| | 3| P| Q| R| +---+---+---+---+ DF2: +---+---+---+ | BB| CC| DD| +---+---+---+ | G| K| O| | H| L| P| | I| M| Q| +---+---+---+ Output: +---+---+---+---+ | AA| BB| CC| DD| +---+---+---+---+ | 1| G| K| O| | 1| X| Y| Z| | 1| I| M| Q| | 1| H| L| P| | 2| M| N| O| | 2| I| M| Q| | 2| H| L| P| | 2| G| K| O| | 3| P| Q| R| | 3| I| M| Q| | 3| H| L| P| | 3| G| K| O| +---+---+---+---+
3
2
76,195,463
2023-5-7
https://stackoverflow.com/questions/76195463/shorthand-index-to-get-a-cycle-of-a-numpy-array
I want to index all elements in a numpy array and also include the first index at the end. So if I have the array [2, 4, 6], I want to index the array such that the result is [2, 4, 6, 2].

import numpy as np

a = np.asarray([2,4,6])

# One solution is
cycle = np.append(a, a[0])

# Another solution is
cycle = a[[0, 1, 2, 0]]

# Instead of creating a list, can indexing type be combined?
cycle = a[:+0]
Another possible solution:

a = np.asarray([2,4,6])
cycle = np.take(a, np.arange(len(a)+1), mode='wrap')

Output:

[2 4 6 2]
3
2
76,174,236
2023-5-4
https://stackoverflow.com/questions/76174236/is-there-any-way-to-load-an-index-created-through-vectorstoreindexcreator-in-lan
I am experimenting with LangChain and its applications, but as a newbie, I could not understand how the embeddings and indexing really work together here. I know what these two are, but I can't figure out a way to use the index that I created and saved using persist_directory. I successfully saved the object created by VectorstoreIndexCreator using the following code:

index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"./custom_save_dir_path"}).from_loaders([loader])

but I cannot find a way to use the .pkl files created. How can I use these files in my chain to retrieve data?

Also, how does the billing in OpenAI work? If I cannot use any saved embeddings or index, will it re-embed all the data every time I run the code?

As a beginner, I am still learning my way around and any assistance would be greatly appreciated. Here is the full code:

from langchain.document_loaders import CSVLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "sk-xxx"

# Load the documents
loader = CSVLoader(file_path='data/data.csv')

# creates an object with vectorstoreindexcreator
index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"./custom_save_dir_path"}).from_loaders([loader])

# Create a question-answering chain using the index
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=index.vectorstore.as_retriever(), input_key="question")

# Pass a query to the chain
while True:
    query = input("query: ")
    response = chain({"question": query})
    print(response['result'])
By default, VectorstoreIndexCreator uses the vector database DuckDB, which is transient and keeps data in memory. If you want to persist data you have to use Chromadb, and you need to explicitly persist the data and load it when needed (for example, load the data when the db already exists, otherwise persist it). For more details about chromadb see: chroma

The LLM used in your case, OpenAI, is the one responsible for the creation of embeddings (i.e. the vectors that will be stored in the vector database). So whenever you process your data and store it in the vector store you will incur charges from OpenAI; if you load the vector store from the db you won't incur charges from OpenAI.
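To make the persist/load pattern concrete, here is a minimal sketch based on the 2023-era langchain API used in the question (Chroma as the vector store; argument names may differ in newer releases):

from langchain.document_loaders import CSVLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma

persist_dir = "./custom_save_dir_path"

# First run: build the index with Chroma as the backing store and persist it.
loader = CSVLoader(file_path='data/data.csv')
index = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    vectorstore_kwargs={"persist_directory": persist_dir},
).from_loaders([loader])
index.vectorstore.persist()

# Later runs: reopen the persisted store instead of re-embedding the CSV.
db = Chroma(persist_directory=persist_dir, embedding_function=OpenAIEmbeddings())
retriever = db.as_retriever()

The retriever at the end can be passed to RetrievalQA.from_chain_type exactly as in the question, so later runs reuse the stored embeddings instead of re-embedding the CSV.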
5
3
76,187,515
2023-5-6
https://stackoverflow.com/questions/76187515/how-to-parse-the-result-of-requests-get-to-produce-the-needed-data
I am trying to get some data from a url using the code below. I need help to produce the needed output. from bs4 import BeautifulSoup import requests page = requests.get("https://coinmarketcap.com/new/") pagedata = page.text print ("* ", pagedata) My output: (Truncated) <a href="/currencies/cheems-token/" class="cmc-link"><div data-sensors-click="true" class="sc-aef7b723-0 sc-8497df48-0 iULUNk"> <img class="coin-logo" src="https://s2.coinmarketcap.com/static/img/coins/64x64/24988.png" alt="Cheems logo"/>< div class="sc-aef7b723-0 sc-8497df48-1 iATetF name-area "> <p font-weight="semibold" color="text" font-size="1" data-sensors-click="true" class="sc-4984dd93-0 kKpPOn">Cheems</p> <div data-nosnippet="true" class="sc-8497df48-2 gtUrBF"><div class="sc-8497df48-3 erCSsg">1</div> <p color="text3" class="sc-4984dd93-0 iqdbQL coin-item-symbol" font-size="1" data-sensors-click="true">Cheems</p></div></div></div></a></td> <td style="text-align:end"><span>$0.00000005918</span></td><td style="text-align:end"><span class="sc-97d6d2ca-0 bQjSqS"> <span class="icon-Caret-down"></span>19.90%</span></td><td style="text-align:end"><span class="sc-97d6d2ca-0 cYiHal"> <span class="icon-Caret-up"></span>58.53%</span></td><td style="text-align:end"><div class="sc-abe52752-3 eSDaBL">--</div></td> <td style="text-align:end">$3,774,982</td><td style="text-align:end"> <div class="sc-abe52752-3 eSDaBL"><img src="https://s2.coinmarketcap.com/static/img/coins/64x64/24091.png"/>zkSync</div></td> <td style="text-align:end">3 hours ago</td><td><div class="sc-aef7b723-0 dDQUel"> Output Wanted: Name Symbol Price 1h 24h MarketCap Volume Blockchain Cheems Cheems $0.00000005918 19.90% 58.53% --- $3,774,982 zkSync CLIPS Clips $0.00005811 10.38% 9.44% $12,697,441 $45,436,152 Ethereum Pooh Inu POOH $0.0...07305 17.69% 87.04% $1,497,586 $332,306 BNB
With pandas, you can use read_html with a bit of post-processing : #pip install pandas import pandas as pd usecols = ["Name", "Symbol", "Price", "1h", "24h", "MarketCap", "Volume", "Blockchain"] df = pd.read_html("https://coinmarketcap.com/new/")[0] df[["Name", "Symbol"]] = df["Name"].str.split(r"\d+", expand=True) df = df.rename(columns={"Fully Diluted Market Cap": "MarketCap"})[usecols] Output : print(df) Name Symbol Price 1h 24h MarketCap Volume Blockchain 0 Cheems Cheems $0.00000006134 3.64% 64.31% -- $4,361,803 zkSync 1 Clips CLIPS $0.00005213 10.28% 18.75% $11,391,728 $43,346,358 Ethereum 2 Pooh Inu POOH $0.0...08057 10.29% 106.29% $1,651,664 $338,030 BNB .. ... ... ... ... ... ... ... ... 27 Mexican Pepe MEXPEPE $0.0000000617 19.14% 78.02% $61,704 $94,416 Ethereum 28 AIBabyDoge AIBABYDOGE $0.0...01419 7.50% 74.99% $595,903 $22,701 Ethereum 29 MonoLend MLD $0.143 1.38% 3.51% $285,937,514 $18,166 Polygon [30 rows x 8 columns]
3
2
76,187,178
2023-5-6
https://stackoverflow.com/questions/76187178/perform-a-python-split-on-a-pandas-dataframe
I have the following dataframe: import pandas as pd data = {'Test_Step_ID': ['9.1.1', '9.1.2', '9.1.3', '9.1.4'], 'Protocol_Name': ['A', 'B', 'C', 'D'], 'Req_ID': ['SRS_0081d', 'SRS_0079', 'SRS_0082SRS_0082a', 'SRS_0015SRS_0015cSRS_0015d'] } df = pd.DataFrame(data) I want to duplicate the rows based on the column "Req_ID" based on the "SRS" value keeping all other columns values same; hence I want 2 rows for the SRS_0082, SRS_0082a and then three rows for SRS_0015, SRS_0015c, SRS_0015d Can someone help me here? appreciate the help. Thanks in advance. [EDITED]: I want the result to look like this:
Split on the zero-width location between SRS and a preceding character using the '(?<=.)(?=SRS)' regex, and explode:

out = (df
 .assign(Req_ID=df['Req_ID'].str.split(r'(?<=.)(?=SRS)'))
 .explode('Req_ID')
)

Output:

  Test_Step_ID Protocol_Name     Req_ID
0        9.1.1             A  SRS_0081d
1        9.1.2             B   SRS_0079
2        9.1.3             C   SRS_0082
2        9.1.3             C  SRS_0082a
3        9.1.4             D   SRS_0015
3        9.1.4             D  SRS_0015c
3        9.1.4             D  SRS_0015d

Regex:

(?<=.)   # match any character before the split
(?=SRS)  # match "SRS" after the split

regex demo
4
3
76,167,901
2023-5-3
https://stackoverflow.com/questions/76167901/tkinter-modify-fill-option-when-using-tksvg
Thanks to this question, I discovered tksvg. I already know how to display an svg file: import tkinter as tk import tksvg window = tk.Tk() svg_image = tksvg.SvgImage(file="tests/orb.svg") label = tk.Label(image=svg_image) label.pack() window.mainloop() and also how to do the same using svg data/string: import tkinter as tk import tksvg svg_string = """ <svg aria-hidden="true" focusable="false" role="img" viewBox="0 0 24 24" class="" fill="none" stroke-width="2" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round"><g stroke-width="1.5px" stroke="#B8B8B8" fill="none"><path stroke="none" d="M0 0h24v24H0z" fill="none" stroke-width="1.5px"></path><line x1="4" y1="20" x2="7" y2="20" stroke="#B8B8B8" fill="none" stroke-width="1.5px"></line><line x1="14" y1="20" x2="21" y2="20" stroke="#B8B8B8" fill="none" stroke-width="1.5px"></line><line x1="6.9" y1="15" x2="13.8" y2="15" stroke="#B8B8B8" fill="none" stroke-width="1.5px"></line><line x1="10.2" y1="6.3" x2="16" y2="20" stroke="#B8B8B8" fill="none" stroke-width="1.5px"></line><polyline points="5 20 11 4 13 4 20 20" stroke="#B8B8B8" fill="none" stroke-width="1.5px"></polyline></g></svg> """ window = tk.Tk() svg_image = tksvg.SvgImage(data=svg_string) label = tk.Label(image=svg_image) label.pack() window.mainloop() but I'm wondering, how do I modify the fill option of the svg element (based on the svg data). Only way I thought of was to redraw/delete and create the svg element with a different color used in fill. Here is my attempt: import tkinter as tk import tksvg svg_string = '<svg viewBox="0 0 100 100"><circle cx="50" cy="50" r="40" fill="red"/></svg>' def change_color(event): global svg_string fill_color = "blue" if "red" in svg_string else "red" svg_string = '<svg viewBox="0 0 100 100"><circle cx="50" cy="50" r="40" fill="{}"/></svg>'.format(fill_color) canvas.delete(image_store[0]) svg_store.pop(0) svg_store.append(tksvg.SvgImage(data=svg_string)) image_store[0] = canvas.create_image(100, 100, image=svg_store[0]) window = tk.Tk() canvas = tk.Canvas(window, width=200, height=200) canvas.pack() image_store = [] svg_store = [] svg_store.append(tksvg.SvgImage(data=svg_string)) image_store.append(canvas.create_image(100, 100, image=svg_store[0])) canvas.tag_bind(image_store[0], "<Button-1>", change_color) window.mainloop() Here I tried to switch the color back and forth, so that when it is blue it becomes red, and when red, it becomes blue. But in this case, the color only seems to change once. I'm only trying to make a workaround since I don't think this is supported yet (feel free to correct me if I'm wrong, as I tried to look at the code, with my best attempt at understanding it at the official github repository). How can I do this? either using the above workaround (redraw element) or with a better alternative? I'm on Windows 10, Python version 3.8.10, Tkinter 8.6.9
In addition to the answer of "acw1668": creating a new TkImage is costly and you should just update your old image with the new data. Therefore it makes sense to use image.configure(data=data_string), example below:

def change_color(event):
    global svg_string
    fill_color = "blue" if "red" in svg_string else "red"
    svg_string = '<svg viewBox="0 0 100 100"><circle cx="50" cy="50" r="40" fill="{}"/></svg>'.format(fill_color)
    id_, img = image_store[0], svg_store[0]
    img.configure(data=svg_string)
    canvas.itemconfig(id_, image=img)
3
1
76,183,612
2023-5-5
https://stackoverflow.com/questions/76183612/delete-string-if-there-is-a-longer-string-that-starts-with-same-pattern
I have a list of values:

['27', '27.1', '27.1.a', '27.1.b', '27.3.d.28', '27.3.d.28.1', '27.3.d.28.2', ]

I want to keep only the longest values for each "parent" value. That is, delete 27 if there is a 27.something. Delete 27.1 if there is a 27.1.something, etc. In this case, the output should be

['27.1.a', '27.1.b', '27.3.d.28.1', '27.3.d.28.2', ]

Background info: this is a dataset in which 27 contains the sum of all 27.sth, so I only want to keep the most disaggregated data.

I know that I can find the longest string by

lis = ['27', '27.1', '27.1.a', '27.1.b', '27.3.d.28', '27.3.d.28.1', '27.3.d.28.2', ]
ls = max(len(x) for x in lis)
[x for x in lis if len(x) == ls]

But I don't know how to do this tree search. I guess something with regex could work but I have no idea where to start. Any hint is appreciated.

I also saw this idea which works well if I had only one level (one period), but I don't know how to apply it to my case.
Assuming the match is anchored on the left, a strategy might be to sort the strings, thus the strings will cluster by prefix, then only compare the successive pairs: def is_parent(p, target): return p.startswith(target) and len(p)>len(target) and p[len(target)] == '.' out = [] prev = '' for s in sorted(lis)+['']: if prev and not is_parent(s, prev): out.append(prev) prev = s print(out) Output: ['27.1.a', '27.1.b', '27.3.d.28.1', '27.3.d.28.2']
4
3
76,180,900
2023-5-5
https://stackoverflow.com/questions/76180900/how-to-make-a-custom-class-be-able-to-be-received-by-range
range() in Python seems to accept only an integer as a parameter. How to make range() accept a custom class as a parameter?

I've defined a class such as:

class MyInteger:
    def __init__(self, a:int):
        self._a = a

    def __int__(self):
        return self._a

And I tried:

n = MyInteger(5)
for i in range(n):
    print(i)

It always causes 'object cannot be interpreted as an integer' whether or not I defined __int__. Is there any way to solve this problem? range(n) cannot be changed to range(int(n)) or range(n.get_int()).
Implement __index__ to allow your class instances to be interpreted as an int:

class MyInteger:
    def __init__(self, a:int):
        self._a = a

    def __index__(self):
        return self._a

n = MyInteger(5)
for i in range(n):
    print(i)
4
6
76,177,406
2023-5-4
https://stackoverflow.com/questions/76177406/compute-m-columns-from-2m-columns-without-a-for-loop
I have a dataframe where some columns are name-paired (for each column ending with _x there is a corresponding column ending with _y) and others are not. For example: import pandas as pd import numpy as np colnames = [ 'foo', 'bar', 'baz', 'a_x', 'b_x', 'c_x', 'a_y', 'b_y', 'c_y', ] rng = np.random.default_rng(0) data = rng.random((20, len(colnames))) df = pd.DataFrame(data, columns=colnames) Assume I have two lists containing all the column names ending with _x, and all the column names ending with _y (it's easy to build such lists), of the same length m (remember that for each _x column there is one and only one corresponding _y column). I want to create m new columns with a simple formula: df['a_err'] = (df['a_x'] - df['a_y']) / df['a_y'] without hard-coding the column names, of course. It's easy to do so with a for loop, but I would like to know if it's possible to do the same without a loop, in the hope that it would be faster (the real dataframe is way bigger than this small example).
You can use groupby_apply with a custom function: func = lambda sr: (sr.iloc[:, 0] - sr.iloc[:, 1]) / sr.iloc[:, 1] # r'' stands for raw strings like f'' for formatted strings # Keep columns that end ($) with '_x' or '_y' (_[xy]) # Groupby column prefix (a_x -> a, b_x -> b, ..., c_y -> c) # Apply your formula on each group (a_x, a_y), (b_x, b_y), (c_x, c_y) err = (df.filter(regex=r'_[xy]$') .groupby(lambda x: x.split('_')[0], axis=1) .apply(func).add_suffix('_err')) # Append error columns to your original dataframe df = pd.concat([df, err], axis=1) Output: >>> df foo bar baz a_x b_x c_x a_y b_y c_y a_err b_err c_err 0 0.636962 0.269787 0.040974 0.016528 0.813270 0.912756 0.606636 0.729497 0.543625 -0.972755 0.114838 0.679017 1 0.935072 0.815854 0.002739 0.857404 0.033586 0.729655 0.175656 0.863179 0.541461 3.881166 -0.961091 0.347567 2 0.299712 0.422687 0.028320 0.124283 0.670624 0.647190 0.615385 0.383678 0.997210 -0.798040 0.747885 -0.351000 3 0.980835 0.685542 0.650459 0.688447 0.388921 0.135097 0.721488 0.525354 0.310242 -0.045796 -0.259697 -0.564545 4 0.485835 0.889488 0.934044 0.357795 0.571530 0.321869 0.594300 0.337911 0.391619 -0.397955 0.691361 -0.178106 5 0.890274 0.227158 0.623187 0.084015 0.832644 0.787098 0.239369 0.876484 0.058568 -0.649014 -0.050018 12.439042 6 0.336117 0.150279 0.450339 0.796324 0.230642 0.052021 0.404552 0.198513 0.090753 0.968411 0.161849 -0.426782 7 0.580332 0.298696 0.671995 0.199515 0.942113 0.365110 0.105495 0.629108 0.927155 0.891226 0.497538 -0.606204 8 0.440377 0.954590 0.499896 0.425229 0.620213 0.995097 0.948944 0.460045 0.757729 -0.551893 0.348158 0.313262 9 0.497423 0.529312 0.785786 0.414656 0.734484 0.711143 0.932060 0.114933 0.729015 -0.555119 5.390557 -0.024516 10 0.927424 0.967926 0.014706 0.863640 0.981195 0.957210 0.148764 0.972629 0.889936 4.805437 0.008807 0.075595 11 0.822374 0.479988 0.232373 0.801881 0.923530 0.266130 0.538934 0.442753 0.931017 0.487900 1.085882 -0.714151 12 0.040511 0.732006 0.614373 0.028365 0.719220 0.015992 0.757951 0.512759 0.929104 -0.962576 0.402648 -0.982788 13 0.066082 0.841317 0.066690 0.344310 0.430299 0.966062 0.562232 0.258865 0.241676 -0.387601 0.662254 2.997349 14 0.888118 0.225869 0.124555 0.288331 0.586123 0.554091 0.809711 0.560476 0.288421 -0.643909 0.045760 0.921116 15 0.412896 0.818121 0.626506 0.959078 0.369404 0.552612 0.593924 0.848291 0.145474 0.614815 -0.564531 2.798708 16 0.406510 0.909959 0.043067 0.822706 0.415384 0.829804 0.009955 0.365046 0.078630 81.646166 0.137895 9.553270 17 0.652615 0.273849 0.702652 0.943801 0.126817 0.864778 0.059464 0.380771 0.429774 14.871772 -0.666946 1.012170 18 0.488850 0.976462 0.775691 0.308857 0.269837 0.863120 0.881307 0.510707 0.344296 -0.649546 -0.471640 1.506915 19 0.994917 0.315944 0.182712 0.880098 0.812335 0.667889 0.958414 0.925715 0.748249 -0.081714 -0.122477 -0.107396 You can also use filter to split your columns: # Python > 3.8, walrus operator err = (df.filter(regex='_x$').values - (y := df.filter(regex='_y$'))) / y err.columns = err.columns.str.split('_').str[0] + '_err' df = pd.concat([df, err], axis=1)
3
1
76,172,523
2023-5-4
https://stackoverflow.com/questions/76172523/error-httpresponse-object-has-no-attribute-strict-when-verifying-id-token-wi
I have a Cloud Function in python 3.9 that calls this code:

firebase_admin.initialize_app()

def check_token(token, app_check_token):
    """
    :param app_check_token:
    :param token:
    :return:
    """
    try:
        app_token = app_check.verify_token(app_check_token)
        logging.info(f"App check token verified : {app_token}")
    except Exception as e:
        logging.error(f"Exception while decoding app check token : {e}")

    try:
        decoded_token = auth.verify_id_token(token)
        logging.info(f"verified token : {decoded_token}")
        if "uid" in decoded_token:
            return decoded_token["uid"]
        return ""
    except Exception as e:
        logging.error(f"check_token : {e}")
        return ""

And here is the log I have from Cloud Logging when I call my function with a valid Firebase ID token:

ERROR:root:check_token : 'HTTPResponse' object has no attribute 'strict'

What does it mean, please?

Note: I have 10 Cloud Functions, only one has this issue and there are no differences with the others... ChatGPT says it's an error with the Firebase authentication backend and to contact Firebase support, but since this happens only in one of my Cloud Functions I would like to understand if I am doing something wrong.

PS: If I run this Cloud Function locally everything is fine with functions-framework --target function_name --debug --port=8080 with the exact same code.
I had a similar issue and someone pointed me to this: https://github.com/psf/requests/issues/6437 and also for firebase specifically: https://github.com/firebase/firebase-admin-python/issues/699 Apparently there's been a breaking change to the library and one solution is to use urllib3 2.0.0 as a workaround which I am yet to try (Not sure how given I'm dependent on Firebase). You can see my question here: Firebase Authentication: 'HTTPResponse' object has no attribute 'strict', status: error
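Since the function reportedly works locally but fails when deployed, one low-risk diagnostic is to log the dependency versions the deployed runtime actually resolved, to confirm whether it picked up the combination affected by the breaking change. A small sketch (the package names are simply the ones implicated in the question and the linked issues):

import logging
from importlib.metadata import PackageNotFoundError, version

# Log the versions actually present in the deployed environment.
for pkg in ("urllib3", "requests", "firebase-admin"):
    try:
        logging.info("%s==%s", pkg, version(pkg))
    except PackageNotFoundError:
        logging.info("%s not installed", pkg)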
3
3
76,166,620
2023-5-3
https://stackoverflow.com/questions/76166620/generate-hierarchical-data-from-pandas-df-to-list
I have data in this form data = [ [2019, "July", 8, '1.2.0', 7.0, None, None, None], [2019, "July", 10, '1.2.0', 52.0, "Breaking", 6.0, 'Path Removed w/o Deprecation'], [2019, "July", 15, "0.1.0", 210.0, "Breaking", 57.0, 'Request Parameter Removed'], [2019, 'August', 20, '2.0.0', 100.0, "Breaking", None, None], [2019, 'August', 25, '2.0.0', 200.0, 'Non-breaking', None, None], ] The list goes in this hierarchy: Year, Month, Day, info_version, API_changes, type1, count, content I want to generate this hierarchical tree structure for the data: { "name": "2020", # this is year "children": [ { "name": "July", # this is month "children": [ { "name": "10", #this is day "children": [ { "name": "1.2.0", # this is info_version "value": 52, # this is value of API_changes(always a number) "children": [ { "name": "Breaking", # this is type1 column( it is string, it is either Nan or Breaking) "value": 6, # this is value of count "children": [ { "name": "Path Removed w/o Deprecation", #this is content column "value": 6 # this is value of count } ] } ] } ] } ] } ] } For all other months it continues in the same format.I do not wish to modify my data in any way whatsoever, this is how its supposed to be for my use case( graphical purposes). I am not sure how I could achieve this, any suggestions would be really grateful. This is in reference to this format for Sunburst graph in pyecharts
Assuming that headers are known and sorted in hierarchical with description of header that must be grouped order like so (see datetime doc for its usage): from datetime import datetime hierarchical_description = [ ([("name", "Year")], lambda d: int(d["name"])), ([("name", "Month")], lambda d: datetime.strptime(d["name"], "%B").month), ([("name", "Day")], None), ([("name", "info_version"), ("value", "API_changes")], None), ( [ ("name", "type1"), ("value", "count"), ], None, ), ([("name", "content"), ("value", "count")], None), ] And that the dataframe is loaded as follows: import pandas as pd data = [ [2019, "July", 8, "1.2.0", 7.0, None, None], [2019, "July", 10, "1.2.0", 52.0, "Breaking", 6.0, "Path Removed w/o Deprecation"], [2019, "July", 15, "0.1.0", 210.0, "Breaking", 57.0, "Request Parameter Removed"], [2019, "August", 20, "2.0.0", 100.0, "Breaking", None, None], [2019, "August", 25, "2.0.0", 200.0, "Non-breaking", None, None], ] hierarchical_order = [ "Year", "Month", "Day", "info_version", "API_changes", "type1", "count", "content", ] df = pd.DataFrame( data, columns=hierarchical_order, ) It is possible to create a recursive methods that goes hierarchically into the dataframe: def logical_and_df(df, conditions): if len(conditions) == 0: return df colname, value = conditions[0] return logical_and_df(df[df[colname] == value], conditions[1:]) def get_hierarchical_data(df, description): if len(description) == 0: return [] children = [] parent_description, sorting_function_key = description[0] for colvalues, subdf in df.groupby([colname for _, colname in parent_description]): attributes = { key: value for (key, _), value in zip(parent_description, colvalues) } grand_children = get_hierarchical_data( logical_and_df( subdf, [ (colname, value) for (_, colname), value in zip(parent_description, colvalues) ], ), description[1:], ) if len(grand_children) > 0: attributes["children"] = grand_children children.append(attributes) if sorting_function_key is None: return children return sorted(children, key=sorting_function_key) The method logical_and takes a dataframe and a list of condition. A condition is a pair where the left member is the column name and the right one is the value on that column. The recursive method get_hierarchical_data takes the hierarchical description as input. The description, is a list of tuple. Each tuple is composed by a list that indicates the name, value column and a optional sorting key method, that will be used to order the children list. The method returns the children where value / name are based on the first element in the description. If the description is empty, it returns an empty list of children. Otherwise, it uses groupby method from pandas to look for unique pairs (see this post). A name, value dictionary is created and concatenated with the recursive call of the method looking for children. The following lines help you printing the dictionary: import json print(json.dumps(get_hierarchical_data(df, hierarchical_description), indent=5)) Firstly posted version My first version was not specific to the problem with grouped column. I edited this post to this new version that should solve your issue.
4
4
76,175,067
2023-5-4
https://stackoverflow.com/questions/76175067/find-second-occurence-from-the-right-of-a-character-without-iteration
I have a Python string:

test = 'blablabla_foo_bar_baz'

I want the negative index of the second occurrence from the right of _: in this case, this would be -8, but even 13, the positive index, would be an acceptable answer, because it's not hard to compute the negative index from the positive index. How can I do that?

A GPT-4 solution uses rfind + iteration, but I was wondering if it could be possible to solve it in one shot, without iteration. Of course the answer should handle strings of arbitrary length, and it should match an arbitrary character (_ in this example).

EDIT: I need the negative index of the second occurrence from the right of _, because I want to strip away _bar_baz, which can be done easily with

test_prefix = test[:-8]

Thus, alternative solutions that allow me to get test_prefix without first finding the negative index of the second occurrence from the right of _, would also be ok. Using the length of the substring _bar_baz (8 in this case) is not ok because I don't know the length of this substring. Instead, computing the length of the substring from the second occurrence from the right of _ until the end of the string would be ok. It doesn't seem any different from computing the negative index of the second occurrence from the right of _, though.

EDIT2: a commenter says that it's impossible to solve this without iteration. I'm not sure they're right, but I'll show a (GPT-4-generated) example of what I don't want:

def find_second_last_occurrence(text, char='_'):
    last_occurrence = text.rfind(char)
    second_last_occurrence = text.rfind(char, 0, last_occurrence)
    return second_last_occurrence

string = "blablabla_foo_bar_baz"
result = find_second_last_occurrence(string, '_')
print(result)  # Output: 13

In this case, we had to call rfind twice. Is this really necessary? I don't think all possible solutions (including those based on re, rsplit, etc.) require calling a builtin twice.

Note that I didn't include any code checking for the occurrence of _ in the string, because I know for sure that my input string contains at least two occurrences of _.
I am unsure of the exact solution that is wanted, but if the goal is just to split the string, there is a much cleaner approach which can also give you the index it was cut at.

test = 'blablabla_foo_bar_baz'

result, *_ = test.rsplit('_', maxsplit=2)  # split from the right two times

print(result)  # blablabla_foo
print(len(result))  # 13 which is the index it was cut off at
print(len(test) - len(result))  # 8 i.e. the negative index
4
1
76,163,832
2023-5-3
https://stackoverflow.com/questions/76163832/pandas-on-spark-throwing-java-lang-stackoverflowerror
I am using pandas-on-spark in combination with regex to remove some abbreviations from a column in a dataframe. In pandas this all works fine, but I have the task to migrate this code to a production workload on our spark cluster, and therefore decided to use pandas-on-spark. However, I am running into a weird error. I'm using the following function to clean up the abbreviations (Somewhat simplified here for readability purposes, in reality abbreviations_dict has 61 abbreviations and patterns is a list with three regex patterns). import pyspark.pandas as pspd def resolve_abbreviations(job_list: pspd.Series) -> pspd.Series: """ The job titles contain a lot of abbreviations for common terms. We write them out to create a more standardized job title list. :param job_list: df.SchoneFunctie during processing steps :return: SchoneFunctie where abbreviations are written out in words """ abbreviations_dict = { "1e": "eerste", "1ste": "eerste", "2e": "tweede", "2de": "tweede", "3e": "derde", "3de": "derde", "ceo": "chief executive officer", "cfo": "chief financial officer", "coo": "chief operating officer", "cto": "chief technology officer", "sr": "senior", "tech": "technisch", "zw": "zelfstandig werkend" } #Create a list of abbreviations abbreviations_pob = list(abbreviations_dict.keys()) #For each abbreviation in this list for abb in abbreviations_pob: # define patterns to look for patterns = [fr'((?<=( ))|(?<=(^))|(?<=(\\))|(?<=(\())){abb}((?=( ))|(?=(\\))|(?=($))|(?=(\))))', fr'{abb}\.'] # actual recoding of abbreviations to written out form value_to_replace = abbreviations_dict[abb] for patt in patterns: job_list = job_list.str.replace(pat=fr'{patt}', repl=f'{value_to_replace} ', regex=True) return job_list When I then call the function with a pspd Series, and perform an action so the query plan is executed: df['SchoneFunctie'] = resolve_abbreviations(df['SchoneFunctie']) print(df.head(100)) it throws a java.lang.StackOverflowError. The stack trace is too long to paste here, I pasted a subset of it since it is a repeating one. 
23/05/05 09:53:14 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 4) (PC ID executor driver): java.lang.StackOverflowError at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2408) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at 
java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466) at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428) It goes on like this for quite a while, untill I get: 23/05/03 14:19:11 ERROR TaskSetManager: Task 0 in stage 4.0 failed 1 times; aborting job Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 194, in <module> File 
"C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12255, in __repr__ pdf = cast("DataFrame", self._get_or_create_repr_pandas_cache(max_display_count)) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12246, in _get_or_create_repr_pandas_cache self, "_repr_pandas_cache", {n: self.head(n + 1)._to_internal_pandas()} File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12241, in _to_internal_pandas return self._internal.to_pandas_frame File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\utils.py", line 588, in wrapped_lazy_property setattr(self, attr_name, fn(self)) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\internal.py", line 1056, in to_pandas_frame pdf = sdf.toPandas() File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\pandas\conversion.py", line 205, in toPandas pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\dataframe.py", line 817, in collect sock_info = self._jdf.collectToPython() File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__ return_value = get_return_value( File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\utils.py", line 190, in deco return f(*a, **kw) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\protocol.py", line 326, in get_return_value raise Py4JJavaError( py4j.protocol.Py4JJavaError---------------------------------------- Exception occurred during processing of request from ('127.0.0.1', 54483) Traceback (most recent call last): File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 316, in _handle_request_noblock self.process_request(request, client_address) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 347, in process_request self.finish_request(request, client_address) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 360, in finish_request self.RequestHandlerClass(request, client_address, self) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 747, in __init__ self.handle() File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 281, in handle poll(accum_updates) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 253, in poll if func(): File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 257, in accum_updates num_updates = read_int(self.rfile) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\serializers.py", line 593, in read_int length = stream.read(4) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socket.py", line 704, in readinto return self._sock.recv_into(b) ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host ---------------------------------------- ERROR:root:Exception while sending command. 
Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 194, in <module> File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12255, in __repr__ pdf = cast("DataFrame", self._get_or_create_repr_pandas_cache(max_display_count)) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12246, in _get_or_create_repr_pandas_cache self, "_repr_pandas_cache", {n: self.head(n + 1)._to_internal_pandas()} File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12241, in _to_internal_pandas return self._internal.to_pandas_frame File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\utils.py", line 588, in wrapped_lazy_property setattr(self, attr_name, fn(self)) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\internal.py", line 1056, in to_pandas_frame pdf = sdf.toPandas() File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\pandas\conversion.py", line 205, in toPandas pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\dataframe.py", line 817, in collect sock_info = self._jdf.collectToPython() File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__ return_value = get_return_value( File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\utils.py", line 190, in deco return f(*a, **kw) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\protocol.py", line 326, in get_return_value raise Py4JJavaError( py4j.protocol.Py4JJavaError: <unprintable Py4JJavaError object> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\clientserver.py", line 511, in send_command answer = smart_decode(self.stream.readline()[:-1]) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socket.py", line 704, in readinto return self._sock.recv_into(b) ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1038, in send_command response = connection.send_command(command) File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\clientserver.py", line 539, in send_command raise Py4JNetworkError( py4j.protocol.Py4JNetworkError: Error while sending or receiving : <exception str() failed> Some things i've tried / facts I think could be relevant: For now I am trying to run this locally. I am running it locally on a subset of 5000 rows of data, so that shouldn't be the problem. Perhaps increasing some kind of default config could still help. I think this has to do with the lazy evaluation in spark, and the DAG of spark getting too big because of the for-loops in the function. But I have no idea how to solve the problem. As per pyspark-on-pandas best practices documentation I have tried to implement checkpointing, but this is not available for pspd.Series, and converting my Series into a pspd.Dataframe makes the .apply(lambda ...) 
fail inside the resolve_abbreviations function. Any help would be greatly appreciated. Perhaps I am better off avoiding the pandas-on-spark API and transforming the code to regular pyspark, as the pandas-on-spark API apparently isn't mature enough yet to run pandas scripts "as is"? Or perhaps our code design is flawed by nature and there is another, more efficient way to achieve similar results?
Is it possible that your input data is deeply nested? This could contribute to the looping stack calls you can see in there. The first thing I would try is running with a larger stack size than you're doing now. I'm not sure what OS/java version you're running this on, so can't know what the default stack size is on your machine. Typically, though, it ranges in the order of magnitude of 100KB - 1024KB. Try running it with a stack size of 4MB. Inside of the JVM, this is done with the Xss parameter. You'll want to do this on the driver, with the spark.driver.extraJavaOptions config parameter. Something like this: from pyspark import SparkConf, SparkContext conf = (SparkConf() .setMaster("whateverMasterYouHave") .setAppName("MyApp") .set("spark.driver.extraJavaOptions", "-Xss4M")) sc = SparkContext.getOrCreate(conf = conf)
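If you are going through a SparkSession rather than building a SparkContext yourself (which is the usual entry point for pandas-on-Spark), the same JVM option can be passed via the session builder. A minimal sketch, assuming a local master; note that spark.driver.extraJavaOptions only takes effect if it is set before the first SparkSession/SparkContext is created in the process:
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("MyApp")
    # raise the driver JVM's thread stack size to 4 MB
    .config("spark.driver.extraJavaOptions", "-Xss4M")
    .getOrCreate()
)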
4
4
76,171,532
2023-5-4
https://stackoverflow.com/questions/76171532/spark-cdm-connector-in-databricks-java-lang-noclassdeffounderror-org-apache-sp
we are having compatibility issue with spark-cdm-connector, to give a little context I have a cdm data in ADLS which I’m trying to read into Databricks Databricks Runtime Version 12.1 (includes Apache Spark 3.3.1, Scala 2.12) , I have installed com.microsoft.azure:spark-cdm-connector:0.19.1 I ran this code: AccountName = "<AccountName>" container = "<container>" account_key = "<account_key>" Storage_Account = f"account={AccountName};key={account_key}" # Implicit write case from pyspark.sql.types import * from pyspark.sql import functions, Row from decimal import Decimal from datetime import datetime # Write a CDM entity with Parquet data files, entity definition is derived from the dataframe schema data = [ [1, "Alex", "Lai", "[email protected]", "Consultant", "Delivery", datetime.strptime("2018-07-03", '%Y-%m-%d'), datetime.now()], [2, "James", "Russel", "[email protected]", "Senior Consultant", "Delivery", datetime.strptime("2014-05-14", '%Y-%m-%d'), datetime.now()] ] schema = (StructType() .add(StructField("EmployeeId", StringType(), True)) .add(StructField("FirstName", StringType(), True)) .add(StructField("LastName", StringType(), True)) .add(StructField("EmailAddress", StringType(), True)) .add(StructField("Position", StringType(), True)) .add(StructField("Department", StringType(), True)) .add(StructField("HiringDate", DateType(), True)) .add(StructField("CreatedDateTime", TimestampType(), True)) ) df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema) display(df) # Creates the CDM manifest and adds the entity to it with parquet partitions # with both physical and logical entity definitions (df.write.format("com.microsoft.cdm") .option("storage", Storage_Account) .option("manifestPath", container + "<path to manifest.cdm.json file>") .option("entity", "Employee") .option("format", "parquet") .mode("overwrite") .save() ) and its throwing this error ERROR: java.lang.NoClassDefFoundError:org/apache/spark/sql/sources/v2/ReadSupport. 
Py4JJavaError Traceback (most recent call last) File <command-2314057479770273>:3 1 # Creates the CDM manifest and adds the entity to it with parquet partitions 2 # with both physical and logical entity definitions ----> 3 (df.write.format("com.microsoft.cdm") 4 .option("storage", Storage_Account) 5 .option("manifestPath", container + "<path to manifest.cdm.json file>") 6 .option("entity", "Employee") 7 .option("format", "parquet") 8 .mode("overwrite") 9 .save() 10 ) File /databricks/spark/python/pyspark/instrumentation_utils.py:48, in _wrap_function.<locals>.wrapper(*args, **kwargs) 46 start = time.perf_counter() 47 try: ---> 48 res = func(*args, **kwargs) 49 logger.log_success( 50 module_name, class_name, function_name, time.perf_counter() - start, signature 51 ) 52 return res File /databricks/spark/python/pyspark/sql/readwriter.py:1193, in DataFrameWriter.save(self, path, format, mode, partitionBy, **options) 1191 self.format(format) 1192 if path is None: -> 1193 self._jwrite.save() 1194 else: 1195 self._jwrite.save(path) File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -> 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File /databricks/spark/python/pyspark/sql/utils.py:209, in capture_sql_exception.<locals>.deco(*a, **kw) 207 def deco(*a: Any, **kw: Any) -> Any: 208 try: --> 209 return f(*a, **kw) 210 except Py4JJavaError as e: 211 converted = convert_exception(e.java_exception) File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --> 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( 331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n". 332 format(target_id, ".", name, value)) Py4JJavaError: An error occurred while calling o464.save. 
: java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/ReadSupport at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:757) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) at com.databricks.backend.daemon.driver.ClassLoaders$ReplWrappingClassLoader.loadClass(ClassLoaders.scala:65) at java.lang.ClassLoader.loadClass(ClassLoader.java:406) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:717) at scala.util.Try$.apply(Try.scala:213) at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:717) at scala.util.Failure.orElse(Try.scala:224) at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:717) at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:781) at org.apache.spark.sql.DataFrameWriter.lookupV2Provider(DataFrameWriter.scala:988) at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:293) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380) at py4j.Gateway.invoke(Gateway.java:306) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195) at py4j.ClientServerConnection.run(ClientServerConnection.java:115) at java.lang.Thread.run(Thread.java:750) Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.ReadSupport at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:419) at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:352) ... 36 more
You're using an incorrect version of the connector - 0.19.1 was built for Spark 3.1, but you are using Spark 3.3. You need to try the spark3.3-1.19.5 release, but it may not be available on Maven Central.
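Before picking a connector build, it can help to confirm exactly which Spark version the attached cluster runs, so the connector's spark3.x prefix matches it. A trivial check from a notebook cell (the printed value is only an example):
# in a Databricks notebook cell
print(spark.version)  # e.g. '3.3.1' on Databricks Runtime 12.1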
3
3
76,170,810
2023-5-4
https://stackoverflow.com/questions/76170810/tweepy-twitter-api-v2-retrieve-tweets-on-free-access
I'm trying to authenticate to twitter's new API (v2) using tweepy and retrieve tweets but encounter a strange error related to the authentication process. I'm currently using the free access to the API. Code sample : import tweepy # Authentification OAuth 1.0a User Context to retrieve my own data dict_twitter_api = { "consumer_key": "blah", "consumer_secret": "blah", "access_token": "blah", "access_token_secret": "blah" } client = tweepy.Client(**dict_twitter_api) # If you're working behind a corporate proxy, # client.session.proxies = { # "http": "my-corporate-proxy", # "https": "my-corporate-proxy", # } print(client.get_me()) # <-- this works well print(client.get_home_timeline()) Traceback result : > Forbidden: 403 Forbidden > When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal. I've checked my different tokens and confirmed that the OAuth 1.0a user context authentication should be working to retrieve my own timeline.
The returned exception is in fact not related to the authentication process. Twitter's free API access does not allow you to retrieve your own timeline, or even tweets at all. This is stated in the "About the Twitter API" documentation page; it also explains why the page on rate limits only details the 'Basic' access rates and bears no mention of the 'Free' access... Note: ultimately, this turns out to be a question/answer that is not really code-related; it may still prove useful because 1) this is not referenced enough as yet and 2) the current traceback would throw any developer onto the wrong path... Given the circumstances, I'll also share this resource, which might be more up to date/thorough.
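If you want a script to distinguish this access-tier restriction from a genuine credential problem, you can catch tweepy's specific exception classes. A small sketch, reusing the client object from the question:
import tweepy

try:
    print(client.get_home_timeline())
except tweepy.errors.Unauthorized as e:
    print("Credentials are wrong or expired:", e)
except tweepy.errors.Forbidden as e:
    print("Credentials are fine, but this endpoint is not included in your access tier:", e)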
10
10
76,166,230
2023-5-3
https://stackoverflow.com/questions/76166230/how-can-a-python-program-determine-which-core-it-is-running-on
I need to debug a Python 3 program that uses the multiprocessing module. I want to keep track of which cores (of a multi-core machine) are getting used and how. Q: I am looking for a way for the Python code to determine which core is running it. The closest I have found is to use the following: multiprocessing.current_process()._identity[0] - 1 Putting aside the fact that such code appears to be "going behind the API,"1 as far as I can tell, the code that initializes the _identity attribute makes no reference to the underlying hardware2, which I think is unsatisfactory. 1 For one thing, I can find no official documentation for the _identity attribute, as one would expect from the leading underscore in the name. 2 More specifically, this code evaluates something like next(_process_counter), where _process_counter is initially set to the value itertools.count(1), and uses the result as the basis for the _identity attribute's value.
This is absolutely going to be an OS-specific problem, but here's how to do it in Windows (source). As mentioned in the comments, the processor number will likely be changing all the time unless you specifically pin the task to a given core (outside the scope of this answer). from ctypes import * from ctypes import wintypes class PROCESSOR_NUMBER(Structure): _fields_ = [("Group", wintypes.WORD), ("Number", wintypes.BYTE), ("Reserved", wintypes.BYTE)] pn = PROCESSOR_NUMBER() windll.kernel32.GetCurrentProcessorNumberEx(byref(pn)) print(pn.Number) Edit: for Ubuntu 20.04.5 (WSL) from ctypes import * libc = CDLL("libc.so.6") print(libc.sched_getcpu())
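As a companion to the above, if you want the reported core to stay stable you can pin the process to one core. A minimal Linux-only sketch using the standard library (core 2 is just an example); on Windows, the third-party psutil package offers Process().cpu_affinity([...]) for the same purpose:
import os

os.sched_setaffinity(0, {2})    # pid 0 = the current process; restrict it to core 2
print(os.sched_getaffinity(0))  # -> {2}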
4
2
76,168,695
2023-5-4
https://stackoverflow.com/questions/76168695/a-memory-leak-in-a-simple-python-c-extension
I have some code similar to the one below. That code leaks, and I don't know why. The thing that leaks is a simple creation of a Python class' instance inside a C code. The function I use to check the leak is create_n_times that's defined below and just creates new Python instances and derefrences them in a loop. This is not an MWE per-se, but part of an example. To make it easier to understand, what the code does is: The Python code defines the dataclass and registers it into the C-extension using set_ip_settings_type. Then, a C-extension function create_n_times is called and that function creates and destroys n instances of the Python dataclass. Can anyone help? In Python: import c_api @dataclass class IpSettings: ip: str port: int dhcp: bool c_api.set_ip_settings_type(IpSettings) c_api.generate_n_times(100000) In C++ I have the following code that's compiled into a Python extension called c_api (it's a part of that library's definition): #include <Python.h> // ... Other functions including a "PyInit" function extern "C" { PyObject* ip_settings_type = NULL; PyObject* set_ip_settings_type(PyObject* tp) { Py_XDECREF(ip_settings_type); Py_INCREF(tp); ip_settings_type = tp; return Py_None; } PyObject* create_n_times(PyObject* n) { long n_ = PyLong_AsLong(n); for (int i = 0; i < n_ ++i) { PyObject* factory_object = ip_settings_type; PyObject* args = PyTuple_New(3); PyTuple_SetItem(args, 0, PyUnicode_FromString("123.123.123.123")); PyTuple_SetItem(args, 1, PyLong_FromUnsignedLong(1231)); PyTuple_SetItem(args, 2, Py_False); PyObject* obj = PyObject_CallObject(factory_object, args); Py_DECREF(obj); } return Py_None; } }
PyTuple_SetItem steals the reference to the supplied object, but Py_False is a singleton. When the args tuple is destroyed, the reference count of Py_False gets mangled. Use PyBool_FromLong(0) to create a new reference to Py_False, like the other two calls to PyTuple_SetItem. (see docs.python.org/3/c-api/bool.html)
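If you want to watch for this kind of problem from the Python side, sys.getrefcount makes a rough probe. A diagnostic sketch, assuming the c_api module from the question is importable (adjust the function names to whatever your extension actually exports):
import sys
import c_api

# register the dataclass first via c_api.set_ip_settings_type(IpSettings), as in the question
before = sys.getrefcount(False)
c_api.create_n_times(100_000)
after = sys.getrefcount(False)
# a noticeable drift between the two numbers means references to the
# Py_False singleton are being stolen or dropped somewhere in the extension
print(before, after)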
3
5
76,168,260
2023-5-3
https://stackoverflow.com/questions/76168260/regex-look-ahead-look-behind-when-the-look-behind-pattern-is-different
First off, my apologies for posting this ugly, long sample, but it's all I could muster. I'm trying to get the IPs for both the source of the malware and the host. My pattern works well with the host, but it breaks when I try to return the source IP because the pattern captured in the look behind portion of the log changes. So I'm stuck. logs = [ "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12446 devicePayloadId=8F003A0D28D9 rt=2023-05-03 00:09:25 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=MIM0002012 TMCMLogDetectedHost=MIM0002012 src=172.16.4.90 TMCMLogDetectedIP=172.16.4.90 cs3Label=SLF_DomainName cs3=Acme act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=1 dst=173.233.137.60 deviceProcessName=C:\\\\Program Files (x86)\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe dvchost=somedomain.manage.trendmicro.com", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12447 devicePayloadId=8F003A0D28D9 rt=2023-05-03 08:02:58 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=LENOVOM910Q TMCMLogDetectedHost=LENOVOM910Q src=10.10.110.69 TMCMLogDetectedIP=10.10.110.69 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=1 dst=192.243.61.227 deviceProcessName=C:\\\\Program Files (x86)\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12448 devicePayloadId=8F003A0D28D9 rt=2023-05-03 08:02:58 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=LENOVOM910Q TMCMLogDetectedHost=LENOVOM910Q src=10.10.110.69 TMCMLogDetectedIP=10.10.110.69 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=1 dst=173.233.137.36 deviceProcessName=C:\\\\Program Files (x86)\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12449 devicePayloadId=8F003A0D28D9 rt=2023-05-03 08:02:59 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=LENOVOM910Q TMCMLogDetectedHost=LENOVOM910Q src=10.10.110.69 TMCMLogDetectedIP=10.10.110.69 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=1 dst=192.243.59.13 deviceProcessName=C:\\\\Program Files (x86)\\\\Microsoft\\\\Edge\\\\Application\\\\msedge.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12450 devicePayloadId=8F003A0D28D9 rt=2023-05-03 09:42:15 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=DELLTELEC01 TMCMLogDetectedHost=DELLTELEC01 src=10.10.220.172 TMCMLogDetectedIP=10.10.220.172 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 
cn3Label=SLF_CCCA_DestinationFormat cn3=4 cs5Label=CnCDestinationURL cs5=somewebsite.com deviceProcessName=C:\\\\Program Files (x86)\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12451 devicePayloadId=8F003A0D28D9 rt=2023-05-03 09:42:16 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=DELLTELEC01 TMCMLogDetectedHost=DELLTELEC01 src=10.10.220.172 TMCMLogDetectedIP=10.10.220.172 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=4 cs5Label=CnCDestinationURL cs5=somewebsite.com deviceProcessName=C:\\\\Program Files (x86)\\\\Google\\\\Chrome\\\\Application\\\\chrome.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12452 devicePayloadId=8F003A0D28D9 rt=2023-05-03 09:42:19 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=DELLTELEC01 TMCMLogDetectedHost=DELLTELEC01 src=10.10.220.172 TMCMLogDetectedIP=10.10.220.172 cs3Label=SLF_DomainName cs3=Acme_Headquarter act=Block cn1Label=SLF_CCCA_RiskLevel cn1=3 cn2Label=SLF_CCCA_DetectionSource cn2=2 cn3Label=SLF_CCCA_DestinationFormat cn3=4 cs5Label=CnCDestinationURL cs5=somewebsite.com deviceProcessName=C:\\\\Windows\\\\System32\\\\svchost.exe dvchost=somedomain.manage.trendmicro.com ", "May 03 2023 19:30:30 abcde.manage.trendmicro.com CEF:0|Trend Micro|Apex Central|2019|CnC:Block|CnC Callback|3|deviceExternalId=12453 devicePayloadId=8F003A0D28D9 rt=2023-05-03 09:42:19 cat=1756 deviceFacility=Apex One cs2Label=EI_ProductVersion cs2=14.0 shost=DELLTELEC01 TMCMLogDetectedHost=DELLTELEC01" ] The host Ips are found in this section: And the code is: endpoint_ip_list = [re.sub('dst=','',re.search('(?<=src=).*?(?=\s+TMCMLogDetectedIP=)',log).group()) for log in logs] Output: ['172.16.4.90', '10.10.110.69', '10.10.110.69', '10.10.110.69', '10.10.220.172', '10.10.220.172', '10.10.220.172', '10.10.220.172'] The second part is the source IP (the source of the possible attack), which is found in this section: Sometimes the logs show a domain instead of an IP address depending on the policy. So, when I run the regex for the section highlighted in green,it obviously breaks. callback_ip_list = [re.sub('dst=','',re.search('(?<=dst=).*?(?=\s+deviceProcessName=)',log).group()) for log in logs] Output: callback_ip_list = [re.sub('dst=','',re.search('(?<=dst=).*?(?=\s+deviceProcessName=)',log).group()) for log in logs] AttributeError: 'NoneType' object has no attribute 'group' If you know of a way to capture both the IP and the domain in the same expression, it would be perfect, but I'm content with any fix for this tbh. Thanks for your help!
Use an alternation so the lookbehind can match either dst= or cs5= before deviceProcessName=. (?:(?<=dst=).*?|(?<=cs5=).*?)(?=\s+deviceProcessName=)
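Applied to the logs list from the question, the pattern can be compiled once and searched per line, with a None check so that a log containing neither field no longer raises AttributeError. A small sketch:
import re

callback_pattern = re.compile(r'(?:(?<=dst=).*?|(?<=cs5=).*?)(?=\s+deviceProcessName=)')

callback_list = []
for log in logs:
    match = callback_pattern.search(log)
    # keep None (or skip the line) when neither dst= nor cs5= is present
    callback_list.append(match.group() if match else None)

print(callback_list)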
3
2
76,166,628
2023-5-3
https://stackoverflow.com/questions/76166628/can-i-store-and-then-read-write-to-a-sqlite-database-using-azure-file-storage-fr
I have a Python flask app that I want to see if I can turn it into a product, and it just needs a simple storage like SQLite, but it needs to have read/write capabilities. I want to do this as cheap as possible for now because there's a lot of unknowns on if it would even be successful. Currently, I'm able to push it out to an Azure App Service via VSCode, and it deploys successfully. I have the existing SQLite database included in the Deployment, and it seems to work OK, however, when I restart the App Service, all the data gets reset, which I don't understand? I thought the data would persist since it's part of the app? I do have issues with the app freezing and I haven't figured out why. It all works perfectly when ran locally. Could I attach an Azure File storage to the App Service and store the SQlite database on there? Is that possible? I'd be using flask_sqlalchemy. EDIT I accepted the answer below because it did create the db, but it still doesn't work. When using the app, at some point I'll get the error: sqlite3.OperationalError: unable to open database file Even though the db still shows in the directory, and it worked fine before. I don't know what goes wrong. It's so unfortunate that SQLite seems to just not work with Azure App Services because it's necessary for young apps that are just trying to test the market. I just hope that Microsoft isn't doing this on purpose because I trust them and that would really burn me.
Regarding the issue of data reset when restarting the App Service, it is because the SQLite database is stored in the temporary storage of the App Service. When you restart the App Service, all data in the temporary storage is deleted. If you go to the section Development Tools of your Azure App Service, you can SSH and inspect the filesystem of your Web App. You will be warned about any data outside of '/home' is not persisted. Moreover, if you list the files of the current directory, you will find your app files, including your SQLite database inside a /tmp folder. Therefore, given your scenario, to keep it simple and cheap, my suggestion to avoid complexity by using other Azure Services to persist your database is to save your SQLite file inside /home/site/wwwroot folder as mentioned here. I built an basic flask example app to test this and it worked. Project structure │ app.py │ database.db │ requirements.txt ├───templates ├──────index.html ├───venv index.html <!DOCTYPE html> <html> <head> <title>Message Board</title> </head> <body> <h1>Message Board</h1> <form action="/add_message" method="post"> <input type="text" name="message"> <button type="submit">Submit</button> </form> <hr> <ul> {% for message in messages %} <li>{{ message['message'] }}</li> {% endfor %} </ul> </body> </html> app.py import os from flask import Flask, render_template, request, redirect, url_for import sqlite3 app = Flask(__name__) # create the database connection def get_db_connection(): if "AZURE" in os.environ: # running on Azure db_path = "/home/site/wwwroot/database.db" else: # running locally db_path = "database.db" conn = sqlite3.connect(db_path) conn.row_factory = sqlite3.Row return conn # create the table if it doesn't exist def create_table(): conn = get_db_connection() conn.execute( "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY AUTOINCREMENT, message TEXT)" ) conn.commit() conn.close() create_table() @app.route("/") def index(): conn = get_db_connection() messages = conn.execute("SELECT * FROM messages").fetchall() conn.close() return render_template("index.html", messages=messages) @app.route("/add_message", methods=["POST"]) def add_message(): message = request.form["message"] conn = get_db_connection() conn.execute("INSERT INTO messages (message) VALUES (?)", (message,)) conn.commit() conn.close() return redirect(url_for("index")) if __name__ == "__main__": app.run(debug=True) This code uses the current working directory on local development while it uses /home/site/wwwroot on Azure. This is possible since I added an App Setting called AZURE with any value at the Configuration section from Azure App Service. Later, you can safely stop or restart your Web App. The new database.db will be saved at a persistent location. And again, you can verify that your database.db was not deleted if you SSH into your machine following the steps I mentioned earlier. Regarding the app freezing issue, it's hard to say without more information about the app and the Azure App Service configuration. You might want to enable application logging and detailed error messages in the Azure App Service to get more information about what's causing the app to freeze. Finally, regarding PaaS database options in Azure, I'd suggest to go for the serverless compute tier of Azure SQL Database if you do not have too many requests and you don't mind about cold starts. Here is a useful YouTube video that talks about pricing.
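Since the question mentions flask_sqlalchemy rather than raw sqlite3, the same persistent-path idea translates roughly as follows (a sketch; the AZURE app setting is the flag introduced in the answer above):
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# keep the SQLite file under /home so it survives restarts on Azure App Service
db_path = "/home/site/wwwroot/database.db" if "AZURE" in os.environ else "database.db"
app.config["SQLALCHEMY_DATABASE_URI"] = f"sqlite:///{db_path}"

db = SQLAlchemy(app)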
3
3
76,166,558
2023-5-3
https://stackoverflow.com/questions/76166558/creating-a-list-comprehension-that-repeats-certain-values-in-a-list-for-x-times
I have a very specific use-case for creating a list comprehension and I am having a bit of trouble figuring out how to do it. I am sure there must be a method or function that can help me, but I guess I am not aware of it. Here is the scenario: Following code works as expected and generates all expected variations (8 in total) List_A = ["apples","oranges"] List_B = ["meat","chicken"] flexibility = range(2) list = [] for a in List_A: results = [(day, b, a) for day in flexibility for b in List_B] list.append(results) print (list) result [[(0, 'meat', 'apples'), (0, 'chicken', 'apples'), (1, 'meat', 'apples'), (1, 'chicken', 'apples')], [(0, 'meat', 'oranges'), (0, 'chicken', 'oranges'), (1, 'meat', 'oranges'), (1, 'chicken', 'oranges')]] Here is the complication. Let's assume I have a rate limit of 5 calls per second, and the list above creates 8 elements, hence I would go above the rate limit. The way I thought I could overcome this, was to create another variable to pass on to the function that will use a different accounts to make my api request. In theory, each account would make only 5 requests. Thus in my example above, the first 5 elements of the list would have account 0 and the rest 3 would use account 1. I tried using list comprehension to achieve this, but the result is not what I expected: no = [0,1,2] List_A = ["apples","oranges"] List_B = ["meat","chicken"] flexibility = range(2) list = [] for a in List_A: results = [(day, b, a,n) for day in flexibility for b in List_B for n in no] list.append(results) print (list) results [[(0, 'meat', 'apples', 0), (0, 'meat', 'apples', 1), (0, 'meat', 'apples', 2), (0, 'chicken', 'apples', 0), (0, 'chicken', 'apples', 1), (0, 'chicken', 'apples', 2), (1, 'meat', 'apples', 0), (1, 'meat', 'apples', 1), (1, 'meat', 'apples', 2), (1, 'chicken', 'apples', 0), (1, 'chicken', 'apples', 1), (1, 'chicken', 'apples', 2)], [(0, 'meat', 'oranges', 0), (0, 'meat', 'oranges', 1), (0, 'meat', 'oranges', 2), (0, 'chicken', 'oranges', 0), (0, 'chicken', 'oranges', 1), (0, 'chicken', 'oranges', 2), (1, 'meat', 'oranges', 0), (1, 'meat', 'oranges', 1), (1, 'meat', 'oranges', 2), (1, 'chicken', 'oranges', 0), (1, 'chicken', 'oranges', 1), (1, 'chicken', 'oranges', 2)]] I get many more elements in my list than what I want. I still want the original 8 but the first five would use account 0 and the last 3 account 1. The goal is to get this list: [(0, 'apples', 'meat',0), (0, 'apples', 'chicken',0), (0, 'oranges', 'meat',0), (0, 'oranges', 'chicken',0), (1, 'apples', 'meat',0), (1, 'apples', 'chicken',1), (1, 'oranges', 'meat',1), (1, 'oranges', 'chicken',1)] How can I achieve this? thanks
A concise way to do this combines itertools.product to give the combinations, with enumerate to give you a running counter. You can floor-divide that counter by 5 to give your "account number": import itertools List_A = ["apples", "oranges"] List_B = ["meat", "chicken"] flexibility = range(2) rate_limit = 5 res = [(n, a, b, i // rate_limit) for i, (n, a, b) in enumerate(itertools.product(flexibility, List_A, List_B))] print(res) which gives the output list you're looking for: [(0, 'apples', 'meat', 0), (0, 'apples', 'chicken', 0), (0, 'oranges', 'meat', 0), (0, 'oranges', 'chicken', 0), (1, 'apples', 'meat', 0), (1, 'apples', 'chicken', 1), (1, 'oranges', 'meat', 1), (1, 'oranges', 'chicken', 1)]
5
5
76,164,992
2023-5-3
https://stackoverflow.com/questions/76164992/creating-an-array-by-shifting-values
I have an interger d and boolean array, say M = array([[ True, False, False, True], [False, True, False, False], [False, False, False, False], [False, False, False, True]]) Now I want to create a new array from M with the following rule: Trues stay and the next d positions left of a True also become True. I came up with this: newarray = np.full_like(M,False) for r,row in enumerate(M[1:],1): for i,boo in enumerate(row): if boo: newarray[max(0,r-d):r,i] = True I have the feeling there is a more efficient way of doing this using numpy commands; probably using np.where. EDIT: e.g. d = 1, the result should be M = array([[ True, False, True, True], [ True, True, False, False], [False, False, False, False], [False, False, True, True]]) for d = 2, the result should be M = array([[ True, True, True, True], [ True, True, False, False], [False, False, False, False], [False, True, True, True]])
You can use a 2D convolution with scipy.signal.convolve2d: from scipy.signal import convolve2d d = 2 kernel = np.repeat([1, 1, 0], [d, 1, d])[None] # array([[1, 1, 1, 0, 0]]) out = convolve2d(M, kernel, mode='same') > 0 Output for d = 1: array([[ True, False, True, True], [ True, True, False, False], [False, False, False, False], [False, False, True, True]]) Output for d = 2: array([[ True, True, True, True], [ True, True, False, False], [False, False, False, False], [False, True, True, True]])
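If you would rather avoid the SciPy dependency, the same result can be obtained with plain NumPy by OR-ing left-shifted copies of the mask. A small sketch, assuming M has a boolean dtype:
import numpy as np

def spread_left(M: np.ndarray, d: int) -> np.ndarray:
    out = M.copy()
    for k in range(1, d + 1):
        # a True in column j also switches on column j - k
        out[:, :-k] |= M[:, k:]
    return out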
3
4
76,161,409
2023-5-3
https://stackoverflow.com/questions/76161409/adding-security-to-opc-ua-python-server-client-asyncua
I'm new to OPC UA and Python, but with the asyncua examples, I created the example that I need for a real project. Now I need to add security to the server and client, and for now, using a username and password is fine. Here is my functional code without security. If someone knows what functions I need to use to create two users, one with admin privileges and the other with none, please let me know. import asyncio from asyncua import Server, ua from asyncua.common.methods import uamethod from asyncua.common.structures104 import new_struct, new_struct_field @uamethod def func(parent, value): return value * 2 async def main(): server = Server() await server.init() server.set_endpoint("opc.tcp://localhost:4840/freeopcua/server/") uri = "http://examples.freeopcua.github.io" idx = await server.register_namespace(uri) myobj = await server.nodes.objects.add_object(idx, "MyObject") myvar = await myobj.add_variable(idx, "MyVariable", 0.0) await server.nodes.objects.add_method( ua.NodeId("ServerMethod", idx), ua.QualifiedName("ServerMethod", idx), func, [ua.VariantType.Int64], [ua.VariantType.Int64], ) struct = await new_struct(server, idx, "MyStruct", [ new_struct_field("FirstValue", ua.VariantType.Float, 0.0), new_struct_field("SecondValue", ua.VariantType.Float, 0.0), new_struct_field("ThirdValue", ua.VariantType.Float, 0.0), new_struct_field("FourthValue", ua.VariantType.Float, 0.0), new_struct_field("FifthValue", ua.VariantType.Float, 0.0), ]) custom_objs = await server.load_data_type_definitions() mystruct = await myobj.add_variable(idx, "my_struct", ua.Variant(ua.MyStruct(), ua.VariantType.ExtensionObject)) await mystruct.set_writable() await myvar.set_writable() print("Starting server!") async with server: while True: await asyncio.sleep(0.5) n_struct = await mystruct.get_value() var = await myvar.read_value() print ("\n%f\n%f\n%f\n%f\n%f\n%f" % (var, n_struct.FirstValue, n_struct.SecondValue, n_struct.ThirdValue, n_struct.FourthValue, n_struct.FifthValue)) try: loop = asyncio.get_running_loop() except RuntimeError: loop = None if loop and loop.is_running(): print('Async event loop already running. Adding coroutine to the event loop.') tsk = loop.create_task(main()) tsk.add_done_callback( lambda t: print(f'Task done with result={t.result()} << return val of main()')) else: print('Starting new event loop') result = asyncio.run(main(), debug=True) I tried to use the encryption example from Asyncua, but I can't make it work. So, I tried to read the functions of the main code from Asyncua server and do something by myself, but I only got errors. Thank you.
You have to create a class that implements get_user; User and UserRole come from asyncua.server.users: from asyncua.server.users import User, UserRole class CustomUserManager: def get_user(self, iserver, username=None, password=None, certificate=None): if username == "admin": if password == 'secret_admin_pw': return User(role=UserRole.Admin) elif username == "user": if password == 'secret_pw': return User(role=UserRole.User) return None To use the manager in your code: user_manager = CustomUserManager() server = Server(user_manager=user_manager) await server.init() server.set_endpoint("opc.tcp://localhost:4840/freeopcua/server/")
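On the client side, the username and password are set on the asyncua Client before connecting. A minimal sketch against the endpoint used above:
import asyncio
from asyncua import Client

async def main():
    client = Client("opc.tcp://localhost:4840/freeopcua/server/")
    client.set_user("admin")             # or "user" for the non-admin account
    client.set_password("secret_admin_pw")
    async with client:
        # read something simple to prove the authenticated session works
        print(await client.nodes.objects.read_browse_name())

asyncio.run(main())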
3
3
76,122,326
2023-4-27
https://stackoverflow.com/questions/76122326/how-to-return-separate-json-responses-using-fastapi
I am not sure if this is part of OpenAPI standard. I am trying to develop an API server to replace an existing one, which is not open source and vendor is gone. One particular challenge I am facing is it returns multiple JSON objects without enclosing them either in a list or array. For example, it returns the following 3 JSON objects as they are, in separate lines: {"items": 10} {"order": "shelf", "amount": 100} {"id": 100, "date": "2022-01-01", "status": "X"} Not in a list format () or in array []. For example, the code below returns all 3 objects in an array: from fastapi import FastAPI app = FastAPI() @app.get("/") def read_root(): data_1 = {"items": 10} data_2 = {"order": "shelf", "amount": 100} data_3 = {"id": 100, "date": "2022-01-01", "status": "X"} return data_1, data_2, data_3 Can anyone help me to get this done with FastAPI?
Option 1 You could return a custom Response directly, as demonstrated in this answer, as well as in Option 2 of this answer. Example from fastapi import FastAPI, Response import json app = FastAPI() def to_json(d): return json.dumps(d, default=str) @app.get('/') async def main(): data_1 = {'items': 10} data_2 = {'order': 'shelf', 'amount': 100} data_3 = {'id': 100, 'date': '2022-01-01', 'status': 'X'} json_str = '\n'.join([to_json(data_1), to_json(data_2), to_json(data_3)]) return Response(json_str, media_type='application/json') Option 2 You could use a StreamingResponse, as shown here and here. You might also find this and this helpful. If the generator function performs some blocking operations that would block the event loop, then you could define the gen() function below with a normal def instead of async def, and FastAPI will use iterate_in_threadpool() to run the generator in a separate thread that will then be awaited. Have a look at the linked answers above for more details. Example from fastapi import FastAPI from fastapi.responses import StreamingResponse import json app = FastAPI() @app.get('/') async def main(): data_1 = {'items': 10} data_2 = {'order': 'shelf', 'amount': 100} data_3 = {'id': 100, 'date': '2022-01-01', 'status': 'X'} async def gen(): for d in [data_1, data_2, data_3]: yield json.dumps(d, default=str) + '\n' return StreamingResponse(gen(), media_type='application/json') Option 3 As mentioned in the comments section above, one could also return a dictionary of dict (JSON) objects. However, using this solution, adding a line break between the objects would not be feasible. Example from fastapi import FastAPI, Response app = FastAPI() @app.get('/') async def main(): data_1 = {'items': 10} data_2 = {'order': 'shelf', 'amount': 100} data_3 = {'id': 100, 'date': '2022-01-01', 'status': 'X'} return {1: data_1, 2: data_2, 3: data_3} Note Although in Options 1 & 2 the media_type is set to application/json, the returned object would not be a valid JSON, as JSON strings do not allow real newlines (only escaped ones, i.e., \\n)—see this answer as well. Hence, in Swagger UI autodocs at /docs, you may come across the following message when testing the endpoint: can't parse JSON. Raw result:. If you would like to avoid getting that message, then you could set the media_type to text/plain instead (for Option 2, where StreamingResponse is used, you might want to disable MIME Sniffing when using text/plain; alternatively, you could use text/event-stream—see this answer for more details.).
4
3
76,129,550
2023-4-28
https://stackoverflow.com/questions/76129550/how-to-make-case-insensitive-choices-using-pythons-enum-and-fastapi
I have this application: import enum from typing import Annotated, Literal import uvicorn from fastapi import FastAPI, Query, Depends from pydantic import BaseModel app = FastAPI() class MyEnum(enum.Enum): ab = "ab" cd = "cd" class MyInput(BaseModel): q: Annotated[MyEnum, Query(...)] @app.get("/") def test(inp: MyInput = Depends()): return "Hello world" def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() curl http://127.0.0.1:8001/?q=ab or curl http://127.0.0.1:8001/?q=cd returns "Hello World" But any of these curl http://127.0.0.1:8001/?q=aB curl http://127.0.0.1:8001/?q=AB curl http://127.0.0.1:8001/?q=Cd etc returns 422Unprocessable Entity which makes sense. How can I make this validation case insensitive?
You could make case insensitive enum values by overriding the Enum's _missing_ method . As per the documentation, this classmethod—which by default does nothing—can be used to look up for values not found in cls; thus, allowing one to try and find the enum member by value. Note that one could extend from the str class when declaring the enumeration class (e.g., class MyEnum(str, Enum)), which would indicate that all members in the enum must have values of the specified type (e.g., str). This would also allow comparing a string to an enum member (using the equality operator ==), without having to use the .value attribute on the enum member (e.g., if member.lower() == value). Otherwise, if the enumeration class was declared as class MyEnum(Enum) (without str subclass), one would need to use the .value attribute on the enum member (e.g., if member.value.lower() == value) to safely compare the enum member to a string. Also, note that calling the lower() function on the enum member (i.e., member.lower()) would not be necessary, unless the enum member values of your class include uppercase (or a combination of uppercase and lowercase) letters as well (e.g., ab = 'aB', cd = 'Cd', etc.). Hence, for the example below, where only lowercase letters are used, you could avoid using it, and instead simply use if member == value to compare the enum member to a value; thus, saving you from calling the lower() funciton on every member in the class. Example 1 from enum import Enum class MyEnum(str, Enum): ab = 'ab' cd = 'cd' @classmethod def _missing_(cls, value): value = value.lower() for member in cls: if member.lower() == value: return member return None Generic Version (with FastAPI example) from fastapi import FastAPI from enum import Enum app = FastAPI() class CaseInsensitiveEnum(str, Enum): @classmethod def _missing_(cls, value): value = value.lower() for member in cls: if member.lower() == value: return member return None class MyEnum(CaseInsensitiveEnum): ab = 'aB' cd = 'Cd' @app.get("/") def main(q: MyEnum): return q In case you needed the Enum query parameter to be defined using Pydantic's BaseModel, you could then use the below (see this answer and this answer for more details): from fastapi import Query, Depends from pydantic import BaseModel ... class MyInput(BaseModel): q: MyEnum = Query(...) @app.get("/") def main(inp: MyInput = Depends()): return inp.q In both cases, the endpoint could be called as follows: http://127.0.0.1:8000/?q=ab http://127.0.0.1:8000/?q=aB http://127.0.0.1:8000/?q=cD http://127.0.0.1:8000/?q=CD ... Example 2 In Python 3.11+, one could instead use the newly introduced StrEnum, which allows using the auto() feature, resulting in the lower-cased version of the member's name as the value. from enum import StrEnum, auto class MyEnum(StrEnum): AB = auto() CD = auto() @classmethod def _missing_(cls, value): value = value.lower() for member in cls: if member == value: return member return None
11
20
76,130,370
2023-4-28
https://stackoverflow.com/questions/76130370/why-is-python-slower-inside-a-docker-container
The following small code snippet times how long adding a bunch of numbers takes. import gc from time import process_time_ns gc.disable() # disable garbage collection for func in [ process_time_ns, ]: pre = func() s = 0 for a in range(100000): for b in range(100): s += b print(f"sum: {s}") post = func() delta_s = (post - pre) / 1e9 # difference in seconds print(f"{func}: {delta_s}") To my surprise, this takes much longer when run inside a docker container (~1.6s) than it does when run directly on the host machine (~0.8s). After some digging, I found that some of docker's security features may cause slowdowns (https://betterprogramming.pub/faster-python-in-docker-d1a71a9b9917, https://pythonspeed.com/articles/docker-performance-overhead/). Indeed, adding the docker argument --privileged reduces it's runtime only ~0.9s. However, I'm still confused by this ~0.1s gap I'm observing, which doesn't show up in the article. I've set my cpu frequency to 3000MHz and fixed the python execution to run on core 0. Statistics of 30 measurements each: local docker --privileged docker avg 0.79917586 0.904496884 1.61980727 std 0.02433539 0.031948695 0.04034594 min 0.78087375 0.867265714 1.56995282 q1 0.78211388 0.880717119 1.58672566 q2 0.79006154 0.895180195 1.61322376 q3 0.80732969 0.916945585 1.64363027 max 0.89824817 1.012580084 1.72252714 For measurements, the following commands were used: local: taskset -c 0 python3 main.py docker --privileged: taskset -c 0 docker run --privileged --rm -w /data -v /home/slammer/Projects/timing-python-inside-docker:/data -it python:3 python main.py docker: taskset -c 0 docker run --rm -w /data -v /home/slammer/Projects/timing-python-inside-docker:/data -it python:3 python main.py What causes the remaining docker overhead? Can it be mitigated to achieve bare-metal-performance? Edit: Measurements were taken on a linux mint 20.3 host (kernel: x86_64 Linux 5.4.0-117-generic); docker version: 20.10.17
The slowdown seems to be caused not by docker, but by differences in the python binary. I copied the python packaged within the docker image python:3 to my host machine (copying docker's /usr/local to my hosts docker-python folder). Then I ran the same benchmark again on using this binary with the following command: LD_LIBRARY_PATH=docker-python/local/lib taskset -c 0 docker-python/local/bin/python3.10 main.py And voila, the measurements using this "dockerbinary" are the same (within measurement error) as those measured with "docker --privileged": local dockerbinary docker --privileged docker avg 0.79917586 0.89829016 0.904496884 1.61980727 std 0.02433539 0.03554546 0.031948695 0.04034594 min 0.78087375 0.86344007 0.867265714 1.56995282 q1 0.78211388 0.86950620 0.880717119 1.58672566 q2 0.79006154 0.88853465 0.895180195 1.61322376 q3 0.80732969 0.91612282 0.916945585 1.64363027 max 0.89824817 0.99477790 1.012580084 1.72252714 Mystery solved :) Now, what is the difference between these binaries? As far as I could tell, the binary shipped with docker is with debug_info, not stripped, while my local binary was only stripped. $ file `which python3.10` /usr/bin/python3.10: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=fb3f4369481251e6ba441382fd6d9ab47af0db29, for GNU/Linux 3.2.0, stripped $ file docker-python/local/bin/python3.10 docker-python/local/bin/python3.10: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=618b23f947f202224f4ea8e16375ac7bcad13c4f, for GNU/Linux 3.2.0, with debug_info, not stripped My guess is that the with debug_info compilation introduces this ~11% performance overhead. If this is correct, it prompts the next question "Why does the default docker image use this binary if it causes such a significant slowdown?". To that, I have no answer at the moment (also this guess may be entirely wrong). Crosslink: https://github.com/docker-library/python/issues/825
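A convenient way to compare how two CPython binaries were built (optimizations, LTO, debug flags) without reaching for file is to ask each interpreter itself. A small sketch to run under both binaries:
import sysconfig

# configure flags the interpreter was built with, e.g. --enable-optimizations, --with-lto
print(sysconfig.get_config_var("CONFIG_ARGS"))
# compiler flags; -g and -O levels show up here
print(sysconfig.get_config_var("OPT"))
print(sysconfig.get_config_var("PY_CFLAGS"))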
5
6
76,107,909
2023-4-26
https://stackoverflow.com/questions/76107909/when-should-i-use-asyncio-create-task
I am using Python 3.10 and I am a bit confused about asyncio.create_task. In the following example code, the functions are executed in coroutines whether or not I use asyncio.create_task. It seems that there is no difference. How can I determine when to use asyncio.create_task and what are the advantages of using asyncio.create_task compared to without it? import asyncio from asyncio import sleep async def process(index: int): await sleep(1) print('ok:', index) async def main1(): tasks = [] for item in range(10): tasks.append(asyncio.create_task(process(item))) await asyncio.gather(*tasks) async def main2(): tasks = [] for item in range(10): tasks.append(process(item)) # Without asyncio.create_task await asyncio.gather(*tasks) asyncio.run(main1()) asyncio.run(main2())
TL;DR It makes sense to use create_task, if you want to schedule the execution of that coroutine immediately, but not necessarily wait for it to finish, instead moving on to something else first. Explanation As has been pointed out in the comments already, asyncio.gather itself wraps the provided awaitables in tasks, which is why it is essentially redundant to call create_task on them beforehand in your simple example. From the gather docs: If any awaitable [...] is a coroutine, it is automatically scheduled as a Task. That being said, the two examples you constructed are not equivalent! When you call create_task, the Task is immediately scheduled for execution on the even loop. This means, if a context switch takes place after you called create_task for all your coroutines (as in your first example), any number of them may immediately start executing, without you having to await them explicitly. From the create_task docs: (my emphasis) Wrap the [...] coroutine into a Task and schedule its execution. By contrast, when you simply create the coroutines (as in your second example), they will not begin execution by themselves, unless you somehow schedule their execution (e.g. by simply awaiting them). You can see this in action, if you add any await (e.g. asyncio.sleep) between creation and the gather call and a few helpful print statements: from asyncio import create_task, gather, sleep, run async def process(index: int): await sleep(.5) print('ok:', index) async def create_tasks_then_gather(): tasks = [create_task(process(item)) for item in range(5)] print("tasks scheduled") await sleep(2) # <-- because of this `await` the tasks may begin to execute print("now gathering tasks") await gather(*tasks) print("gathered tasks") async def create_coroutines_then_gather(): coroutines = [process(item) for item in range(5)] print("coroutines created") await sleep(2) # <-- despite this, the coroutines will not begin execution print("now gathering coroutines") await gather(*coroutines) print("gathered coroutines") run(create_tasks_then_gather()) run(create_coroutines_then_gather()) Output: tasks scheduled ok: 0 ok: 1 ok: 2 ok: 3 ok: 4 now gathering tasks gathered tasks coroutines created now gathering coroutines ok: 0 ok: 1 ok: 2 ok: 3 ok: 4 gathered coroutines As you can see, in create_tasks_then_gather the process body was executed before the gather call, whereas in create_coroutines_then_gather it was executed only after. Therefore, whether or not using create_task is useful depends on the situation. If you only care about the coroutines being executed concurrently and awaited at that particular point in your code, there is no use in calling create_task. If you want to schedule them, but then move on to something else, while they may or may not do their thing in the background, it makes sense to use create_task. One important thing to remember however is that you can only ever be sure that the tasks you scheduled actually execute completely, if you at some point await them. This is why you still should await gather them (or equivalent) to actually wait for them to finish eventually.
14
16
76,145,761
2023-5-1
https://stackoverflow.com/questions/76145761/use-poetry-to-create-binary-distributable-with-pyinstaller-on-package
I think I'm missing something simple I have a python poetry application: name = "my-first-api" version = "0.1.0" description = "" readme = "README.md" packages = [{include = "application"}] [tool.poetry.scripts] start = "main:start" [tool.poetry.dependencies] python = ">=3.10,<3.12" pip= "23.0.1" setuptools="65.5.0" fastapi="0.89.1" uvicorn="0.20.0" [tool.poetry.group.dev.dependencies] pyinstaller = "^5.10.1" pytest = "^7.3.1" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" I can run this and build this using Poetry, however, I would like to be able to create the executable with a poetry script as well. Now I build it like this: poetry run pyinstaller main.py --collect-submodules application --onefile --name myapi I would like something like poetry package to automatically create this executable as well. How do I hook that up? Btw. ths does not work :( [tool.poetry.scripts] start = "main:start" builddist = "poetry run pyinstaller main.py --collect-submodules application --onefile --name myapi"
I have found a solution using the pyinstaller API. As you may know already, Poetry will only let you run 'scripts' if they are functions inside your package. Just like in your pyproject.toml, you map the start command to main:start, which is the start() function of your main.py module. Similarly, you can create a function in a module that triggers Pyinstaller and map that to a command that you can run as poetry run <commmand>. Assuming you have a module structure like this: my_package ├── my_package │ ├── __init__.py │ ├── pyinstaller.py │ └── main.py └── pyproject.toml 1. Create a file pyinstaller.py to call the Pyinstaller API The file should be inside your package structure, as shown in the diagram above. This is adapted from the Pyinstaller docs import PyInstaller.__main__ from pathlib import Path HERE = Path(__file__).parent.absolute() path_to_main = str(HERE / "main.py") def install(): PyInstaller.__main__.run([ path_to_main, '--onefile', '--windowed', # other pyinstaller options... ]) 2. Map the build command in pyproject.toml In the pyproject.toml file, add this [tool.poetry.scripts] build = "my_package.pyinstaller:install" 3. From the terminal, invoke the build command You must do so under the virtual environment that poetry creates: poetry run build 🎉 Profit
8
15
76,119,902
2023-4-27
https://stackoverflow.com/questions/76119902/how-to-create-a-custom-json-mapping-for-nested-dataclasses-with-sqlalchemy-2
I want to persist a python (data)class (i.e. Person) using imperative mapping in SQLAlchemy. One field of this class refers to another class (City). The second class is only a wrapper around two dicts and I want to store it in a denormalized way as a JSON column. My example classes look like this: @dataclass class Person: name: str city: City @dataclass class City: property_a: dict property_b: dict And in the database it should look like this: +--------+-----------------------------------------------------------------+ | name | city | +--------+-----------------------------------------------------------------+ | aaron | {property_a: {some_value: 1}, property_b: {another_value: 2}} | | bob | {property_a: {some_value: 10}, property_b: {another_value: 20}} | +--------+-----------------------------------------------------------------+ My table definition looks like this: person_table = Table( "persons", Column("id", Integer, primary_key=True, autoincrement=True), Column("name", String), Column("city", JSON) ) mapper_registry.map_imperatively( Person, person_table ) This fails (obviously) since "Object of type City is not JSON serializable". I need to provide custom (de)serialization methods to the mapper_registry which tell SqlAlchemy how to convert my City class into a nested dict and back. But I could not find out how to do this (and even if this is a good approach).
I found a solution. Whether this is an elegant way to solve this problem or not is for the reader to decide. My initial reasoning was not to pollute the models with I/O implementation details. Another approach would be to introduce a service layer and/or create additional dumb DTOs that can be translated to/from the model classes and contain the city fields as plain dicts/json. Two additional functions are needed to introduce custom de-/serialization: def serialize_json(o): match o: case City(): return json.dumps({"property_a": o.property_a, "property_b": o.property_b}) case _: return json.dumps(o) def deserialize_json(s): s_dict = json.loads(s) match s_dict: case {"property_a": property_a, "property_b": property_b}: return City(property_a=property_a, property_b=property_b) case _: return s_dict The match statement is not strictly necessary for this toy example, but keeps everything a little bit cleaner if more of those cases are needed. Now the functions can be passed to SQLAlchemy: from sqlalchemy import create_engine engine = create_engine( "sqlite:///:memory:", json_serializer=orm.serialize_json, json_deserializer=orm.deserialize_json )
3
0
76,147,221
2023-5-1
https://stackoverflow.com/questions/76147221/trying-to-fix-a-numpy-asscalar-deprecation-issue
While trying to update an old Python script I ran into the following error: module 'numpy' has no attribute 'asscalar'. Did you mean: 'isscalar'? Specifically: def calibrate(x, y, z): # H = numpy.array([x, y, z, -y**2, -z**2, numpy.ones([len(x), 1])]) H = numpy.array([x, y, z, -y**2, -z**2, numpy.ones([len(x)])]) H = numpy.transpose(H) w = x**2 (X, residues, rank, shape) = linalg.lstsq(H, w) OSx = X[0] / 2 OSy = X[1] / (2 * X[3]) OSz = X[2] / (2 * X[4]) A = X[5] + OSx**2 + X[3] * OSy**2 + X[4] * OSz**2 B = A / X[3] C = A / X[4] SCx = numpy.sqrt(A) SCy = numpy.sqrt(B) SCz = numpy.sqrt(C) # type conversion from numpy.float64 to standard python floats offsets = [OSx, OSy, OSz] scale = [SCx, SCy, SCz] offsets = map(numpy.asscalar, offsets) scale = map(numpy.asscalar, scale) return (offsets, scale) I found that asscalar has been deprecated since NumPy 1.16. I found one reference that said to use numpy.ndarray.item, but I have no clue how to do that. I did try this: offsets = map.item(offsets) scale = map.item( scale) but got this error: AttributeError: type object 'map' has no attribute 'item' How can I solve this?
Just replace numpy.asscalar with numpy.ndarray.item, that is, change offsets = map(numpy.asscalar, offsets) scale = map(numpy.asscalar, scale) to offsets = map(numpy.ndarray.item, offsets) scale = map(numpy.ndarray.item, scale)
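One caveat, depending on what linalg.lstsq hands back: the lists may contain NumPy scalars (numpy.float64) rather than 0-d arrays, and the unbound numpy.ndarray.item only accepts actual ndarrays. A version that works for either, sketched on the question's variables (also note that map() is lazy in Python 3, so wrap it in list() if you need real lists):
# works for both NumPy scalars and 0-d arrays, and returns plain Python floats
offsets = [float(v) for v in offsets]
scale = [float(v) for v in scale]

# or, keeping the map() style:
offsets = list(map(lambda v: v.item(), offsets))
scale = list(map(lambda v: v.item(), scale))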
4
2
76,138,267
2023-4-29
https://stackoverflow.com/questions/76138267/read-write-data-over-raspberry-pi-pico-usb-cable
How can I read/write data to Raspberry Pi Pico using Python/MicroPython over the USB connection?
Use Thonny to put MicroPython code on Raspberry Pi Pico. Save it as 'main.py'. Unplug Raspberry Pi Pico USB. Plug Raspberry Pi Pico USB back in (don't hold down the BOOTSEL button). Run the PC Python code to send and receive data between PC and Raspberry Pi Pico. Code for Raspberry Pi Pico: Read data from sys.stdin. Write data to sys.stdout. poll to check if data is in the buffer. import select import sys import time # Set up the poll object poll_obj = select.poll() poll_obj.register(sys.stdin, select.POLLIN) # Loop indefinitely while True: # Wait for input on stdin poll_results = poll_obj.poll(1) # the '1' is how long it will wait for a message before looping again (in milliseconds) if poll_results: # Read the data from stdin (read data coming from PC) data = sys.stdin.readline().strip() # Echo the data back to the PC over stdout sys.stdout.write("received data: " + data + "\r") else: # do something if no message received (like feed a watchdog timer) continue Code for PC: import serial def main(): s = serial.Serial(port="COM3", parity=serial.PARITY_EVEN, stopbits=serial.STOPBITS_ONE, timeout=1) s.flush() s.write("data\r".encode()) mes = s.read_until().strip() print(mes.decode()) if __name__ == "__main__": main() serial is PySerial.
6
8
76,116,626
2023-4-27
https://stackoverflow.com/questions/76116626/error-jupyter-is-not-recognized-as-an-internal-or-external-command-operable
I am trying to install Jupyter Notebook without installing Anaconda on my Windows. I have followed the steps in https://jupyter.org/install but seems not to work. I have tried to close & reopen the command prompt and restart the Windows but didn't work too. What did I miss? C:\Users\xxxxxx>pip install notebook Requirement already satisfied: notebook in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (6.5.4) Requirement already satisfied: jinja2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (3.1.2) Requirement already satisfied: tornado>=6.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (6.2) Requirement already satisfied: pyzmq>=17 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (25.0.2) Requirement already satisfied: argon2-cffi in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (21.3.0) Requirement already satisfied: traitlets>=4.2.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (5.9.0) Requirement already satisfied: jupyter-core>=4.6.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (5.3.0) Requirement already satisfied: jupyter-client>=5.3.4 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (8.1.0) Requirement already satisfied: ipython-genutils in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.2.0) Requirement already satisfied: nbformat in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (5.8.0) Requirement already satisfied: nbconvert>=5 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (7.3.1) Requirement already satisfied: nest-asyncio>=1.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (1.5.6) Requirement already satisfied: ipykernel in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (6.22.0) Requirement already satisfied: Send2Trash>=1.8.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (1.8.0) Requirement already satisfied: terminado>=0.8.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.17.1) Requirement already satisfied: prometheus-client in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.16.0) Requirement already satisfied: nbclassic>=0.4.7 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.5.5) Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-client>=5.3.4->notebook) (2.8.2) Requirement already satisfied: platformdirs>=2.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from jupyter-core>=4.6.1->notebook) (3.1.1) Requirement already satisfied: pywin32>=300 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from jupyter-core>=4.6.1->notebook) (305) Requirement already satisfied: jupyter-server>=1.8 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbclassic>=0.4.7->notebook) (2.5.0) Requirement already satisfied: notebook-shim>=0.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbclassic>=0.4.7->notebook) (0.2.3) 
Requirement already satisfied: beautifulsoup4 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (4.11.2) Requirement already satisfied: bleach in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (6.0.0) Requirement already satisfied: defusedxml in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.7.1) Requirement already satisfied: jupyterlab-pygments in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.2.2) Requirement already satisfied: markupsafe>=2.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (2.1.2) Requirement already satisfied: mistune<3,>=2.0.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (2.0.5) Requirement already satisfied: nbclient>=0.5.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.7.4) Requirement already satisfied: packaging in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (21.3) Requirement already satisfied: pandocfilters>=1.4.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (1.5.0) Requirement already satisfied: pygments>=2.4.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from nbconvert>=5->notebook) (2.14.0) Requirement already satisfied: tinycss2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (1.2.1) Requirement already satisfied: fastjsonschema in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbformat->notebook) (2.16.3) Requirement already satisfied: jsonschema>=2.6 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbformat->notebook) (4.17.3) Requirement already satisfied: pywinpty>=1.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from terminado>=0.8.3->notebook) (2.0.10) Requirement already satisfied: argon2-cffi-bindings in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from argon2-cffi->notebook) (21.2.0) Requirement already satisfied: comm>=0.1.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (0.1.2) Requirement already satisfied: debugpy>=1.6.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (1.6.6) Requirement already satisfied: ipython>=7.23.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (8.11.0) Requirement already satisfied: matplotlib-inline>=0.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (0.1.6) Requirement already satisfied: psutil in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (5.9.4) Requirement already satisfied: backcall in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.2.0) Requirement already satisfied: decorator in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from ipython>=7.23.1->ipykernel->notebook) (5.1.1) Requirement already satisfied: jedi>=0.16 in 
c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.18.2) Requirement already satisfied: pickleshare in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.7.5) Requirement already satisfied: prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (3.0.38) Requirement already satisfied: stack-data in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.6.2) Requirement already satisfied: colorama in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.4.5) Requirement already satisfied: attrs>=17.4.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (22.1.0) Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (0.19.3) Requirement already satisfied: anyio>=3.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (3.6.2) Requirement already satisfied: jupyter-events>=0.4.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.6.3) Requirement already satisfied: jupyter-server-terminals in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.4.4) Requirement already satisfied: websocket-client in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (1.5.1) Requirement already satisfied: six>=1.5 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from python-dateutil>=2.8.2->jupyter-client>=5.3.4->notebook) (1.16.0) Requirement already satisfied: cffi>=1.0.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from argon2-cffi-bindings->argon2-cffi->notebook) (1.15.1) Requirement already satisfied: soupsieve>1.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from beautifulsoup4->nbconvert>=5->notebook) (2.4) Requirement already satisfied: webencodings in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from bleach->nbconvert>=5->notebook) (0.5.1) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from packaging->nbconvert>=5->notebook) (3.0.9) Requirement already satisfied: idna>=2.8 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from anyio>=3.1.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (3.4) Requirement already satisfied: sniffio>=1.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from anyio>=3.1.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (1.3.0) Requirement already satisfied: pycparser in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi->notebook) (2.21) Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from 
jedi>=0.16->ipython>=7.23.1->ipykernel->notebook) (0.8.3) Requirement already satisfied: python-json-logger>=2.0.4 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (2.0.7) Requirement already satisfied: pyyaml>=5.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (6.0) Requirement already satisfied: rfc3339-validator in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.1.4) Requirement already satisfied: rfc3986-validator>=0.1.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.1.1) Requirement already satisfied: wcwidth in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30->ipython>=7.23.1->ipykernel->notebook) (0.2.6) Requirement already satisfied: executing>=1.2.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (1.2.0) Requirement already satisfied: asttokens>=2.1.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (2.2.1) Requirement already satisfied: pure-eval in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (0.2.2) Requirement already satisfied: fqdn in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.5.1) Requirement already satisfied: isoduration in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (20.11.0) Requirement already satisfied: jsonpointer>1.13 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (2.3) Requirement already satisfied: uri-template in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.2.0) Requirement already satisfied: webcolors>=1.11 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.13) Requirement already satisfied: arrow>=0.15.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from isoduration->jsonschema>=2.6->nbformat->notebook) (1.2.3) C:\Users\xxxxxx>jupyter notebook 'jupyter' is not recognized as an internal or external command, operable program or batch file.
You should add <your_python_root>\Scripts to the PATH environment variable. (Go to Environment Variables > System variables > Path > Edit.) Then the system knows where to find the jupyter command.
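If you're not sure where that Scripts folder lives, or you'd rather avoid editing PATH at all, you can go through the Python interpreter that the pip output above shows is already on PATH. This is only a sketch of the idea; the exact folder it prints depends on your install:

python -c "import sysconfig; print(sysconfig.get_path('scripts'))"
python -m notebook

The first command prints the directory that contains jupyter.exe (the one to add to PATH); the second launches the classic notebook through the module, which works even without the PATH change.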
3
1
76,137,512
2023-4-29
https://stackoverflow.com/questions/76137512/langchain-huggingface-cant-evaluate-model-with-two-different-inputs
I'm evaluating a LLM on Huggingface using Langchain and Python using this code: # https://github.com/hwchase17/langchain/blob/0e763677e4c334af80f2b542cb269f3786d8403f/docs/modules/models/llms/integrations/huggingface_hub.ipynb from langchain import HuggingFaceHub, LLMChain import os hugging_face_write = "MY_KEY" os.environ['HUGGINGFACEHUB_API_TOKEN'] = hugging_face_write from langchain import PromptTemplate, HuggingFaceHub, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":0, "max_length":64})) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" print(llm_chain.run(question)) I get the error ValueError Traceback (most recent call last) g:\Meine Ablage\python\lang_chain\langchain_huggingface_example.py in line 1 ----> 19 print(llm_chain.run(question)) File c:\Users\johan\.conda\envs\lang_chain\Lib\site-packages\langchain\chains\base.py:213, in Chain.run(self, *args, **kwargs) 211 if len(args) != 1: 212 raise ValueError("`run` supports only one positional argument.") --> 213 return self(args[0])[self.output_keys[0]] 215 if kwargs and not args: 216 return self(kwargs)[self.output_keys[0]] File c:\Users\johan\.conda\envs\lang_chain\Lib\site-packages\langchain\chains\base.py:116, in Chain.__call__(self, inputs, return_only_outputs) 114 except (KeyboardInterrupt, Exception) as e: 115 self.callback_manager.on_chain_error(e, verbose=self.verbose) --> 116 raise e 117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose) 118 return self.prep_outputs(inputs, outputs, return_only_outputs) File c:\Users\johan\.conda\envs\lang_chain\Lib\site-packages\langchain\chains\base.py:113, in Chain.__call__(self, inputs, return_only_outputs) 107 self.callback_manager.on_chain_start( 108 {"name": self.__class__.__name__}, 109 inputs, 110 verbose=self.verbose, 111 ) ... 106 if self.client.task == "text-generation": 107 # Text generation return includes the starter text. 108 text = response[0]["generated_text"][len(prompt) :] ValueError: Error raised by inference API: Model google/flan-t5-xl time out What am I doing incorrectly? I'm a newbie... Many thanks in advance, best regards from Paris, Jennie I ran my python script from above. After some waiting the shown error is given.
You need to upgrade your Hugging Face account to the Pro version to host the large model for inference. "google/flan-t5-base" works with the free account.
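For reference, a minimal sketch of the free-tier variant, keeping everything from the question the same (including the HUGGINGFACEHUB_API_TOKEN environment variable) and only swapping the repo_id:

from langchain import PromptTemplate, HuggingFaceHub, LLMChain

template = """Question: {question} Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(
    prompt=prompt,
    llm=HuggingFaceHub(
        repo_id="google/flan-t5-base",  # smaller checkpoint that the free Inference API will serve
        model_kwargs={"temperature": 0, "max_length": 64},
    ),
)

print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Beiber was born?"))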
3
0
76,122,294
2023-4-27
https://stackoverflow.com/questions/76122294/pyinstaller-adding-directory-inside-dist-as-data
I'm facing this issue while building a executable of my pyqt5 application. I've images folder which contain 6 images and I want to add this images directory in dist folder so structure will look like this dist > app > images but I'm not able to do that it copies all images instead of picking folder. I've tried adding like this in datas list eg. ('images','images') & ('images/*','images/*') Here is my app.spec look like # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['ui.py'], pathex=[], binaries=[], datas=[('logo.png', '.'), ('images', 'images')], hiddenimports=[], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, [], exclude_binaries=True, name='myapp', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) coll = COLLECT( exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, upx_exclude=[], name='MyApplication', ) I've run this command to update data pyinstaller app.spec What I'm missing let me know.
Strange! Updating the app.spec file and running pyinstaller app.spec did not work for me, and I don't know why; if anyone knows, please let me know. What I did: I deleted the dist and build folders and the old app.spec file, then ran this command and it worked: pyinstaller ui.py -n myapp --add-data "images:images" --add-data "logo.png:."
3
1
76,122,630
2023-4-27
https://stackoverflow.com/questions/76122630/how-to-ignore-environment-directory-when-using-python-ruff-linter-in-console
I was trying the ruff linter. I have a file structure like below: project_folder ├── env # Python environment [python -m venv env] │ ├── Include │ ├── Lib │ ├── Scripts │ ├── ... ├── __init__.py └── numbers.py I activated the environment inside project_folder and ran the command below: ruff check . but ruff also checked the env folder. How can I ignore the env folder, similar to the Linux command below? tree -I env
You can use the exclude option, ruff check --exclude=env .
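If you don't want to pass the flag on every run, ruff can also pick the exclusion up from the project configuration. A minimal sketch for pyproject.toml, assuming the env folder name from the question:

[tool.ruff]
extend-exclude = ["env"]

With that in place, a plain ruff check . skips the virtual environment.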
9
9
76,153,606
2023-5-2
https://stackoverflow.com/questions/76153606/dialogflow-doesnt-return-training-phrases
I am trying to get an overview of the training phrases per intent from Dialogflow in python. I have followed this example to generate the following code: from google.cloud import dialogflow_v2 # get_credentials is a custom function that loads the credentials credentials, project_id = get_credentials() client = dialogflow_v2.IntentsClient(credentials=credentials) request = dialogflow_v2.ListIntentsRequest( parent=f"projects/{project_id}/agent/environments/draft", ) page_result = client.list_intents(request=request) for intent in page_result: print("Intent name: ", intent.name) print("Intent display_name: ", intent.display_name) print("Training phrases: ", intent.training_phrases) The name and display name of the intent are printed as expected, however training phrases is always an empty list (for both the draft as the test environment). Any ideas on why I'm not seeing the training phrases that I can see in the console? EDIT After hkanjih's answer I've updated my code as follows: from google.cloud import dialogflow_v2 # get_credentials is a custom function that loads the credentials credentials, project_id = get_credentials() client = dialogflow_v2.IntentsClient(credentials=credentials) request = dialogflow_v2.ListIntentsRequest( parent=f"projects/{project_id}/agent/environments/draft", ) page_result = client.list_intents(request=request) for intent in page_result: print("Intent name: ", intent.name) # intent.name is equal to projects/{project_id}/agent/intents/{intent_id} intent_request = dialogflow_v2.GetIntentRequest( name=intent.name, ) intent = client.get_intent(request=intent_request) # printing intent name again just to check if it's the same (it is) print("Intent name: ", intent.name) print("Intent display_name: ", intent.display_name) print("Training phrases: ", intent.training_phrases) Unfortunately, for all intents: Training phrases: []
After searching in the documentation some more, I found this page. Therefore I added the intent_view parameter to the request as in the snippet below: from google.cloud import dialogflow_v2 # get_credentials is a custom function that loads the credentials credentials, project_id = get_credentials() client = dialogflow_v2.IntentsClient(credentials=credentials) request = dialogflow_v2.ListIntentsRequest( parent=f"projects/{project_id}/agent/environments/draft", intent_view="INTENT_VIEW_FULL", ) page_result = client.list_intents(request=request) for intent in page_result: print("Intent name: ", intent.name) print("Intent display_name: ", intent.display_name) print("Training phrases: ", intent.training_phrases) Note that in the request the parent f"projects/{project_id}/agent" (without the environment) gives the same result.
2
2
76,107,450
2023-4-26
https://stackoverflow.com/questions/76107450/flask-attributeerror-module-flask-json-has-no-attribute-jsonencoder
My flask app was working prior to upgrades. I was having trouble with sending email when there was a forgot-reset-password. To try and fix this I recently upgraded some modules for my flask app. The modules that I upgraded with current versions are: email-validator==2.0.0.post2 Flask==2.3.1 itsdangerous==2.1.2 The Traceback error that I am getting now is: Traceback (most recent call last): File "C:\Users\my_folder\sales\app.py", line 1, in <module> from product import app File "C:\Users\my_folder\sales\product\__init__.py", line 56, in <module> from product.agents.views import agents_bp File "C:\Users\my_folder\sales\product\agents\views.py", line 7, in <module> from product.agents.forms import RegistrationForm, LoginForm, UpdateAccountForm, ResetPasswordForm, RequestResetForm File "C:\Users\my_folder\sales\product\agents\forms.py", line 1, in <module> from flask_wtf import FlaskForm File "C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\__init__.py", line 4, in <module> from .recaptcha import Recaptcha File "C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\__init__.py", line 1, in <module> from .fields import RecaptchaField File "C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\fields.py", line 3, in <module> from . import widgets File "C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\widgets.py", line 6, in <module> JSONEncoder = json.JSONEncoder AttributeError: module 'flask.json' has no attribute 'JSONEncoder' How do I go about fixing this?
Python has had a built-in JSONEncoder since at least 3.2, making Flask's version redundant. So it's reasonable to remove it. If this were a module you controlled, you could just replace the line JSONEncoder = json.JSONEncoder with from json import JSONEncoder Since you don't control this library, though, you should look at which library is trying to import it; in your case that's flask_wtf. When you check PyPI for that library you'll see that there are a few recent releases, suggesting the first thing to try is updating your version of Flask-WTF.
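In concrete terms that usually just means upgrading the package, along these lines (the assumption being that a newer Flask-WTF release no longer touches flask.json.JSONEncoder):

pip install --upgrade flask-wtf

After that, rerun the app; if the traceback still points at flask_wtf/recaptcha/widgets.py, check which Flask-WTF version actually got installed with pip show flask-wtf.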
20
17
76,110,267
2023-4-26
https://stackoverflow.com/questions/76110267/firestore-warning-on-filtering-with-positional-arguments-how-to-use-filter-kw
Firestore started showing UserWarning: Detected filter using positional arguments. Prefer using the 'filter' keyword argument instead. when using query.where(field_path, op_string, value) while it's the method from the official docs https://cloud.google.com/firestore/docs/query-data/queries So how shall we use the 'filter' kwarg? Couldn't find docs or samples on that. UPDATE: there is an open issue on this on GitHub https://github.com/googleapis/python-firestore/issues/705 (with no reaction from Google folks) UPDATE2: so, basically, it should look like this from google.cloud.firestore import CollectionReference from google.cloud.firestore_v1.base_query import FieldFilter, BaseCompositeFilter ... conditions = [[field, operator, value], [field, operator, value], ...] query = CollectionReference(path1, path2, path3, ...) query = query.where(filter=BaseCompositeFilter('AND', [FieldFilter(*_c) for _c in conditions])) For a FieldFilter operator, the usual ==, !=, etc. work as specified here https://firebase.google.com/docs/firestore/query-data/queries#query_operators Instead of 'AND' you could also use 'OPERATOR_UNSPECIFIED', but I'm not sure if it does what I think it should https://firebase.google.com/docs/firestore/reference/rest/v1/StructuredQuery#Operator
Based on @Robert G's excellent answer you can get rid of this warning with a semi-small tweak. Instead of: query = collection_ref query = query.where(field, op, value) Do this: from google.cloud.firestore_v1.base_query import FieldFilter query = collection_ref query = query.where(filter=FieldFilter(field, op, value)) And then the warning disappears. Hope this helps!
28
35
76,125,621
2023-4-28
https://stackoverflow.com/questions/76125621/how-to-annotate-slice-type-in-python
I want to use slice[int] like list[int], but my IDE tells me that this usage is invalid. So how do I annotate the slice type if I want to constrain the type of its parameters?
Update (2023-05-12) I opened a PR to make slice generic. The change would introduce quite a few mypy errors in many existing projects. The good people at python/typeshed made it clear that until we have type variable defaults (proposed in PEP 696, but not yet accepted) and the slice type utilizes them, they will not accept the change to make it generic. I will try to keep up to date on the progress with PEP 696 and to implement the missing changes as soon as possible. Original answer As it stands right now, the built-in slice type is not generic, so it has no type parameters (unlike e.g. list). There has been a long-standing discussion about this (since 2015) and the relevant typeshed issue is still open. IMO it should definitely be made generic, but it is unclear if/when that will happen. slice arguments can be of any type (source). This means a statement like e.g. slice("foo", [-1], object()) will pass mypy --strict (even though it may be of questionable utility). That means there is currently no way to constrain a slice type variable in terms of its start/stop/step attribute types. And since you cannot even subclass slice, there is not even a way to easily make your own generic slice-like type.
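Until the type becomes generic, about the best you can do is annotate with the bare slice type and enforce anything finer-grained at runtime. A minimal sketch of what currently passes type checking:

def middle(items: list[int], s: slice) -> list[int]:
    # slice's start/stop/step cannot be constrained in the annotation yet,
    # so check them at runtime if it matters
    if not all(isinstance(v, int) or v is None for v in (s.start, s.stop, s.step)):
        raise TypeError("expected an int-based slice")
    return items[s]

middle([1, 2, 3, 4], slice(1, 3))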
3
3
76,149,834
2023-5-1
https://stackoverflow.com/questions/76149834/dicom-slice-thickness-in-python
I'm working on a project that requires slicing a 3D .stl file into a stack of DICOM slices, and I need to get the distance between the slices. I have figured out how to include the real .stl dimensions (in mm) in each DICOM file, using Autodesk Meshmixer; however, I can't seem to get the correct distance/thickness between slices. I use ImageJ to read the DICOM stacks, and the voxel depth (distance between slices, if I'm being correct) gets taken the same as the pixel dimensions in the X axis. I want to make the voxel depth to be equal to slice height in the Z axis As you can see in the last screenshot, the z values are arbitrary. This is a summary of the code I've done so far: import numpy as np import pyvista as pv import os import cv2 import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from PIL import Image import pydicom import datetime import time # Load .stl stl_file = pv.read(stl_path) # Dimensions of the 3d file xmin, xmax, ymin, ymax, zmin, zmax = stl_file.bounds # Number of slices num_slices = 128 slice_height = (zmax - zmin) / num_slices # Dimensions of the images in mm # (reescaled because the slice gets plotted inside a 396x392 rectangle in a 512x512 image) # (multiplied by 10 because .stl units are in cm) pixel_dist_x = (xmax - xmin)/396*10 pixel_dist_y = (ymax - ymin)/392*10 # Date and time datatime = datetime.datetime.now() #Loop for i in range(num_slices): # Here I process each slice as an image (img), and obtain an array arr = pil_to_ndarray(img) # DICOM conversion arr = arr.astype(np.float16) ds = pydicom.dataset.FileDataset(os.path.join(slice_folder, f'slice_{i}.dcm'), \ {}, file_meta=pydicom.dataset.FileMetaDataset()) # DICOM Metadata ds.PixelData = arr.tobytes() ds.SamplesPerPixel = 1 ds.PhotometricInterpretation = 'MONOCHROME2' ds.Rows, ds.Columns = arr.shape[:2] ds.BitsAllocated = 16 ds.BitsStored = 16 ds.HighBit = 15 ds.PixelRepresentation = 1 ds.RescaleIntercept = 1 # This slope is set in a way so that Hounsfield units are correct ds.RescaleSlope = -1000/15360 ds.WindowCenter = np.max(arr) / 2 ds.WindowWidth = np.max(arr) # Conversion from pixel size to real units ds.PixelSpacing = [pixel_dist_x, pixel_dist_y] # This is, I believe, the part I should adjust to get the proper # spacing between slices ds.ImagePositionPatient = [0.0, 0.0, 0.0] ds.ImageOrientationPatient = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0] # Other data ds.PatientName = 'Name' ds.PatientID = 'ID' ds.Modality = 'CT' ds.InstanceCreationDate = datatime.strftime('%Y%m%d') ds.InstanceCreationTime = datatime.strftime('%H%M%S.%f') # Save the DICOM file ds.save_as(os.path.join(slice_folder, f'slice_{i}.dcm'))
Considering what @blunova said, using the 'SliceThickness' attribute also worked for what I need, since (for this case) it is equivalent to treat the slices as thick slices with zero spacing between them or as infinitesimally thin slices with non-zero spacing between them. This way, the thickness is equal to the slice height, and the 'patient' position (initial slice position) is equal to the minimum z position: import numpy as np import pyvista as pv import os import cv2 import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt from PIL import Image import pydicom import datetime import time # Load .stl stl_file = pv.read(stl_path) # Dimensions of the 3d file xmin, xmax, ymin, ymax, zmin, zmax = stl_file.bounds # Number of slices num_slices = 128 # Distance between slices slice_height = (zmax - zmin) / num_slices #Loop for i in range(num_slices): # Image and DICOM processing... # Slice spacing/thickness ds.SliceThickness = slice_height ds.ImagePositionPatient = [0.0, 0.0, zmin]
2
1
76,152,720
2023-5-2
https://stackoverflow.com/questions/76152720/how-to-replace-certain-values-in-a-polars-series
I want to replace the inf values in a polars series with 0. I am using the polars Python library. This is my example code: import polars as pl example = pl.Series([1,2,float('inf'),4]) This is my desired output: output = pl.Series([1.0,2.0,0.0,4.0]) All similar questions regarding replacements concern polars DataFrames using the .when expression (e.g. Replace value by null in Polars), which does not seem to be available in a Series object: AttributeError: 'Series' object has no attribute 'when' Is this possible using polars expressions? EDIT: I found the following solution but it seems very convoluted: example.map_dict({float('inf'): 0 }, default= pl.first())
You can use Series.set: s = s.set(s == float('inf'), 0) although there's a note there: Use of this function is frequently an anti-pattern, as it can block optimisation (predicate pushdown, etc). Consider using pl.when(predicate).then(value).otherwise(self) instead. which suggests using a way longer: s = s.to_frame().select(polars.when(s == float('inf')).then(0).otherwise(s)).to_series() which may or may not be worth it depending on your use case.
4
4
76,130,589
2023-4-28
https://stackoverflow.com/questions/76130589/what-is-the-function-of-the-text-target-parameter-in-huggingfaces-autotokeni
I'm following the guide here: https://huggingface.co/docs/transformers/v4.28.1/tasks/summarization There is one line in the guide like this: labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) I don't understand the function of the text_target parameter. I tried the following code and the last two lines gave exactly the same results. from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('t5-small') text = "Weiter Verhandlung in Syrien." tokenizer(text_target=text, max_length=128, truncation=True) tokenizer(text, max_length=128, truncation=True) The docs just say text_target (str, List[str], List[List[str]], optional) — The sequence or batch of sequences to be encoded as target texts. I don't really understand. Are there situations when setting text_target will give a different result?
Sometimes it is necessary to look at the code: if text is None and text_target is None: raise ValueError("You need to specify either `text` or `text_target`.") if text is not None: # The context manager will send the inputs as normal texts and not text_target, but we shouldn't change the # input mode in this case. if not self._in_target_context_manager: self._switch_to_input_mode() encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) if text_target is not None: self._switch_to_target_mode() target_encodings = self._call_one(text=text_target, text_pair=text_pair_target, **all_kwargs) # Leave back tokenizer in input mode self._switch_to_input_mode() if text_target is None: return encodings elif text is None: return target_encodings else: encodings["labels"] = target_encodings["input_ids"] return encodings As you can see in the above snippet, both text and text_target are passed to self._call_one() to encode them (note that text_target is passed as the text parameter). That means the encoding of the same string as text or text_target will be identical as long as _switch_to_target_mode() doesn't do anything special. The conditions at the end of the function answer your question: When you only provide text you will retrieve the encoding of it. When you only provide text_target you will retrieve the encoding of it. When you provide text and text_target you will retrieve the encoding of text and the token ids of text_target as the value of the labels key. To be honest, I think the implementation is a bit unintuitive. I would expect that passing the text_target would return an object that only contains the labels key. I assume that they wanted to keep their output objects and the respective documentation simple and therefore went for this implementation. Or there is a model where it actually makes sense that I am unaware of.
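A small sketch that makes the three cases in the snippet above concrete (the comments describe the keys I would expect given the quoted logic; exact token ids depend on the checkpoint):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

src = "Weiter Verhandlung in Syrien."
tgt = "Further negotiations in Syria."

enc_src = tokenizer(src)                    # input_ids / attention_mask for the source
enc_tgt = tokenizer(text_target=tgt)        # same structure, encoded in "target mode"
enc_both = tokenizer(src, text_target=tgt)  # source encoding plus a "labels" key

print(enc_both["labels"] == enc_tgt["input_ids"])  # True for this tokenizer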
4
6
76,134,340
2023-4-29
https://stackoverflow.com/questions/76134340/including-additional-data-in-django-drf-serializer-response-that-doesnt-need-to
Our Django project sends GeoFeatureModelSerializer responses and we want to include an additional value in this response for JS to access. We figured out how to do this in serializers.py: from rest_framework_gis import serializers as gis_serializers from rest_framework import serializers as rest_serializers from core.models import Tablename class MarkerSerializer(gis_serializers.GeoFeatureModelSerializer): new_value = rest_serializers.SerializerMethodField('get_new_value') def get_new_value(self, foo): return True class Meta: fields = ("date", "new_value") geo_field = "geom" model = Tablename JS can get this value with geojson.features[0].properties.new_value where const geojson = await response.json(), but it's unnecessarily added with every entry. We'd like for it to be included only once so JS can access it with something like newResponse.new_value and existing functionality can continue getting the same data via newResponse.geojson or similar. How can we include a single additional value in this response? We thought maybe wrapping our serializer in another, but they seem to be asking a different thing we don't understand. Can we append this somehow? In the serializer can we do something like newResponse = {'new_value': new_value, 'geojson': geojson} somewhere? We've had a dig through the Django Rest Framework serializers docs and couldn't work it out, so perhaps we're missing something. Other SO threads seem to only ask about adding data for every entry. edit: we should have mentioned we're using viewsets.py, which looks like: class MarkerViewSet(viewsets.ReadOnlyModelViewSet): bbox_filter_field = "location" filter_backends = (filters.InBBoxFilter,) queryset = Marker.objects.all() serializer_class = MarkerSerializer
Figured it out. That answer applies if you're using views; for viewsets, see list() and include this in your viewset: def list(self, request, *args, **kwargs): queryset = self.filter_queryset(self.get_queryset()) serializer = self.get_serializer(queryset, many=True) # pagination handling from the default list() implementation; since we're overriding it, best to keep it page = self.paginate_queryset(queryset) if page is not None: serializer = self.get_serializer(page, many=True) return self.get_paginated_response(serializer.data) # adjust things as desired here like so, defining self.new_value elsewhere returned_object = {'new_value': self.new_value, 'geojson': serializer.data} return Response(returned_object)
3
1
76,158,147
2023-5-2
https://stackoverflow.com/questions/76158147/pandas-groupby-valueerror-cannot-subset-columns-with-a-tuple-with-more-than-o
I was updated my Pandas from I think it was 1.5.1 to 2.0.1. Any how I started getting an error on some code that works just fine before. df = df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index() Traceback (most recent call last): File "f:...\My_python_file.py", line 37, in df = df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index() File "C:\Users...\Local\Programs\Python\Python310\lib\site-packages\pandas\core\groupby\generic.py", line 1767, in getitem raise ValueError( ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead.
Versions before Pandas < 2.0.0 raises a FutureWarning if you don't use double brackets to select multiple columns FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead From Pandas >= 2.0.0, it raises a ValueError: ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead. For example: # Pandas < 2.0.0 # Missing [[ ... ]] --v --v >>> df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index() ... FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead. df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index() # Pandas >= 2.0.0 >>> df.groupby(df['date'].dt.date)['Lake', 'Canyon'].mean().reset_index() ... ValueError: Cannot subset columns with a tuple with more than one element. Use a list instead. Fix this using [[col1, col2, ...]]: >>> df.groupby(df['date'].dt.date)[['Lake', 'Canyon']].mean().reset_index() date Lake Canyon 0 2023-05-02 1.5 3.5 Minimal Reproducible Example: import pandas as pd df = pd.DataFrame({'date': ['2023-05-02 12:34:56', '2023-05-02 12:32:12'], 'Lake': [1, 2], 'Canyon': [3, 4]}) df['date'] = pd.to_datetime(df['date']) print(df) # Output date Lake Canyon 0 2023-05-02 12:34:56 1 3 1 2023-05-02 12:32:12 2 4
8
20
76,158,045
2023-5-2
https://stackoverflow.com/questions/76158045/split-column-list-into-rows-without-duplicating-data
I have a dataframe where the first column is a list. How can I iterate through the list and add a value to the relevant predefined column? workflow cost cam gdp ott pdl ['cam', 'gdp', 'ott'] $2,346 ['pdl', 'ott'] $1,200 should convert to: workflow cost cam gdp ott pdl ['cam', 'gdp', 'ott'] $2,346 782 782 782 ['pdl', 'ott'] $1,200 600 600 I can get the length of the list, but I can't work out how to iterate over the list in order to match it to a column heading. Basically the cost is simply split evenly among the processes in the list.
Another alternative: df1 = ( df.assign(cost= df["cost"].str.replace(r"\$|,", "", regex=True).astype("float") / df["workflow"].str.len() ) .explode("workflow") .pivot(columns="workflow", values="cost") ) df = pd.concat([df[["workflow", "cost"]], df1], axis=1) Result for the sample: workflow cost cam gdp ott pdl 0 [cam, gdp, ott] $2,346 782.0 782.0 782.0 NaN 1 [pdl, ott] $1,200 NaN NaN 600.0 600.0
7
4
76,155,063
2023-5-2
https://stackoverflow.com/questions/76155063/thread-joins-timeout-does-not-work-when-thread-is-computing-a-sum
In the following code: import threading def infinite_loop(): while True: pass def huge_sum(): return sum(range(2**100)) thread = threading.Thread(target=huge_sum) thread.start() thread.join(1) print("Done") I expect the script to print "Done" after one second since join() will timeout, but instead the script hangs. If you replace the call to huge_sum with infinite_loop, it works fine. The problem seems to be with the built-in sum() function. Is there a way I can reliably get something like join()'s timeout behavior no matter what the thread is doing? I don't mind wacky metaprogramming solutions, this is for a very niche application. However, for the most part I cannot modify the code being executed inside the thread (e.g. "use a loop instead of sum" is not a solution). Linux, Python 3.8.
Looking at builtin_sum_impl in bltinmodule.c, int and float are optimized to perform the sum without releasing the Global Interpreter Lock (GIL). Python only allows byte code to execute in a single thread at a time. Every so often the byte code executor will release the GIL so that other threads can run. C code can release the GIL, letting Python-level code run in parallel. But sum doesn't do that. It holds the GIL while summing int so that it doesn't have to worry about other threads changing the thing being summed while in progress. That also means that no Python code gets to run in any thread until it completes. One option is to do the work in a different process. For a forking operating system like Linux, you could use a separate process to sum. The child can read parent memory, so even if there is a large list in memory, it works. This adds the overhead of creating the child process, so it's not free. It's even better if you can generate the data in the child process, especially if you want to do multiple sums in parallel. In your range example, you would do just that. Alternately, if you have a large set of integers to work with, you could move to numpy arrays where some operations like array.sum release the GIL. It wouldn't work in your example because range(2**100) is too big, but I assume that's just an example. You could scale this to multiple processes running in parallel, perhaps using multiprocessing.Pool. The best way to do this depends on several factors. For instance, if you have a large set of data in the parent, you don't want to pass it as a parameter in .map() (or other pool methods) because Python has to copy the parameters to the workers. Instead you'd use a global variable that the worker just happens to know about.
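A minimal sketch of the separate-process route with the timeout behaviour the question was after; the range is shrunk to something finite so the worker can actually run, and the point is the get(timeout=...) call, which hands control back after one second even if the worker is still computing:

import multiprocessing as mp

def huge_sum(n):
    return sum(range(n))

if __name__ == "__main__":
    with mp.Pool(processes=1) as pool:
        result = pool.apply_async(huge_sum, (10**9,))
        try:
            print(result.get(timeout=1))  # waits at most 1 second
        except mp.TimeoutError:
            print("Done")  # main process regains control; leaving the with-block terminates the worker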
2
3
76,119,400
2023-4-27
https://stackoverflow.com/questions/76119400/pyspark-remove-duplicated-messages-within-a-24h-window-after-an-initial-new-valu
I have a dataframe with a status (integer) and a timestamp. Since I get a lot of "duplicated" status messages, I want to reduce the dataframe by removing any row which repeats a previous status within a 24h window after a "new" status, meaning: The first 24h window starts with the first message of a specific status. The next 24h window for that status starts with the next message that comes after that first 24h window (the windows are not back-to-back). Given the example: data = [(10, datetime.datetime.strptime("2022-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-01 04:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-01 23:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 05:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 06:00:00", "%Y-%m-%d %H:%M:%S")), (20, datetime.datetime.strptime("2022-01-01 03:00:00", "%Y-%m-%d %H:%M:%S")) ] myschema = StructType( [ StructField("status", IntegerType()), StructField("ts", TimestampType()) ] ) df = spark.createDataFrame(data=data, schema=myschema) The first 24h window for status 10 is from 2022-01-01 00:00:00 until 2022-01-02 00:00:00. The second 24h window for status 10 is from 2022-01-02 05:00:00 until 2022-01-03 05:00:00. The first 24h window for status 20 is from 2022-01-01 03:00:00 until 2022-01-02 03:00:00. As a result, I want to keep the messages: data = [(10, datetime.datetime.strptime("2022-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 05:00:00", "%Y-%m-%d %H:%M:%S")), (20, datetime.datetime.strptime("2022-01-01 03:00:00", "%Y-%m-%d %H:%M:%S")) ] I know how to do this in Python by looping and keeping track of the latest change and I think I need to use a Window function with partitionBy + orderBy, but I cannot figure out the details... any help is appreciated.
That's a nice question, I spent some time trying to figure it out. I have something like this: At first I am calculating the time diff in hours from the previous element for each row within the window Then I am aggregating the above values together as a new column for each row (first row has null, the next [4], the third [4, 19], etc.) Then I am using an accumulator to calculate cumSum with your condition, so it "resets" after 24h Then I am using this cumSum to assign an "inner window" Filtering at the end and I am done Probably this is not the best way to do it so comments appreciated :D Sample code: import datetime from pyspark.sql import Window import pyspark.sql.functions as F from pyspark.sql.types import StructType, StructField, IntegerType, TimestampType data = [ (10, datetime.datetime.strptime("2022-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-01 04:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-01 23:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 05:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 06:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 06:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-02 16:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-03 16:00:00", "%Y-%m-%d %H:%M:%S")), (10, datetime.datetime.strptime("2022-01-03 17:00:00", "%Y-%m-%d %H:%M:%S")), (20, datetime.datetime.strptime("2022-01-01 03:00:00", "%Y-%m-%d %H:%M:%S")) ] myschema = StructType( [StructField("status", IntegerType()), StructField("ts", TimestampType())] ) df = spark.createDataFrame(data=data, schema=myschema) window = Window.partitionBy("status").orderBy(["ts"]) df = df.withColumn( "timeDiffFromPrev", (((F.col("ts")).cast("long") - F.lag("ts").over(window).cast("long")) / 3600).cast("long"), ) df = df.withColumn('aggTimeDiffFromPrev', F.collect_list('timeDiffFromPrev').over(window)) # Zero the aggregate when the sum > 24h expr = "AGGREGATE(aggTimeDiffFromPrev, 0l, (accumulator, element) -> IF(accumulator < 24, accumulator + element, element))" df = df.select('status', 'ts', F.expr(expr).alias('cumsum')) # Assign window number based on cumSum - anything greater than 24 is the beginning of a new window df = df.withColumn( "window_number", F.sum( F.when((F.col("cumsum") > 24), F.lit(1)).otherwise(F.lit(0)) ).over(window), ) innerWindow = Window.partitionBy("status", "window_number").orderBy(["ts"]) # Keep only the first row in each window, drop the others df = df.withColumn("row_number", F.row_number().over(innerWindow)).filter( F.col("row_number") == F.lit(1) ).drop("timeDiffFromFirst", "window_number", "row_number", "cumSum") df.show() I added a few lines to the input just for tests; for my input dataset the results are as follows: +------+-------------------+ |status| ts| +------+-------------------+ | 10|2022-01-01 00:00:00| | 10|2022-01-02 05:00:00| | 10|2022-01-03 16:00:00| | 20|2022-01-01 03:00:00| +------+-------------------+
6
1
76,152,242
2023-5-2
https://stackoverflow.com/questions/76152242/vs-code-skips-breakpoints-after-import
When debugging my code, my breakpoints are ignored by Visual Studio Code once I use these lines of code: from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds from brainflow.data_filter import DataFilter, FilterTypes, DetrendOperations, AggOperations No error message is given. I edited my launch.json in accordance with this post: Debugger Not Stopping at Breakpoints in VS Code for Python And tried several others, but nothing solves my issue. Probably my import is wrong/has an error? (If anybody knows something brainflow specific, please let me know. First time using it so.. yeah...) Here is my minimal code example: #Bt_Connection #python -m pip install brainflow #pip install pybluez #python.exe -m pip install --upgrade pip import argparse import time import socket import os from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds from brainflow.data_filter import DataFilter, FilterTypes, DetrendOperations, AggOperations msg="X" print(msg) test= "test" print(test) And some screenshots since the error really only becomes visible in the IDE. (Video would be better, but you have to believe me that the breakpoints are not hit.) In the second picture the code only stops at the first line, since that is specified in my launch.json. But once I hit continue it just goes through the code without stopping and writes "X" and "test" in the terminal. launch.json for completeness' sake: { "version": "0.2.0", "configurations": [ { "name": "Python: Debug Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "stopOnEntry": true, "justMyCode": false } ] } Python version: 3.11.2 64 bit; the Visual Studio Code version was shown in a screenshot.
It has something to do with your Python version. With Python 3.10 you will get the desired effect. See the report on GitHub: https://github.com/microsoft/debugpy/issues/1284
3
1
76,151,211
2023-5-2
https://stackoverflow.com/questions/76151211/unable-to-import-python-dependencies-that-come-with-miniconda-in-a-docker-run
I am trying to run a python script inside a dockerized miniconda environment. The issue I am facing is that when I docker run interactively(-it) and run the script manually inside, it works great. But when I docker run non-interactively, the modules that come with miniconda installation like cryptography, lxml, are not found. My dockerfile: ARG REGISTRY=harbor-west.reg.com/ci ARG FROM_TAG=master FROM harbor-west.reg.com/base-os/ubuntu:20.04 USER root ENV CONDA_DIR $HOME/miniconda3 RUN apt update && \ DEBIAN_FRONTEND=noninteractive apt install -y \ python3 \ python3-pip \ wget RUN pip install --upgrade \ google-api-python-client \ grpcio \ matplotlib \ numpy \ opencv-python \ pandas \ scikit-learn RUN mkdir /abc #Download a conda package under /abc/bin - steps removed for simplicity #install miniconda RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_23.3.1-0-Linux-x86_64.sh RUN chmod 755 Miniconda3-py38_23.3.1-0-Linux-x86_64.sh RUN /bin/bash -c "./Miniconda3-py38_23.3.1-0-Linux-x86_64.sh -b" ENV PATH=$CONDA_DIR/bin:$PATH RUN /root/miniconda3/condabin/conda init WORKDIR /abc/bin CMD ["/bin/bash", "-c", "/abc/bin/start-prediction.sh"] #ENTRYPOINT ["/abc/bin/start-prediction.sh"] Output with non-interactive docker run(unexpected): Traceback (most recent call last): File "prediction_server.py", line 2, in <module> from abc.learn import prepare_data, SuperResolution File "/abc/bin/abc/__init__.py", line 3, in <module> from abc.auth.tools import LazyLoader File "/abc/bin/abc/auth/__init__.py", line 1, in <module> from .api import RegSession File "/abc/bin/abc/auth/api.py", line 30, in <module> from ._auth import ( File "/abc/bin/abc/auth/_auth/__init__.py", line 2, in <module> from ._pki import PKIAuth File "/abc/bin/abc/auth/_auth/_pki.py", line 4, in <module> from ..tools._lazy import LazyLoader File "/abc/bin/abc/auth/tools/__init__.py", line 1, in <module> from .certificate import pfx_to_pem File "/abc/bin/abc/auth/tools/certificate.py", line 6, in <module> import cryptography ModuleNotFoundError: No module named 'cryptography' Output with interactive docker run(as expected): (base)root@dcc788e0a8c5:/abc/bin# ./start-prediction.sh server listening on 0.0.0.0:50443 I tried echoing the PATH inside the container and looks ok: /root/miniconda3/bin:/root/miniconda3/condabin:/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin I'm not able to figure out what I'm missing. I would think if it works interactively, it should work non-interactively as well. Any pointers would be appreciated.
I struggled with the same issue. The key is that when you perform conda init, it adds some lines to your shell rc file, in your case .bashrc (source). Thus the solution is straightforward: source your .bashrc before running the script. Alternatively, there is another solution that does not rely on your rc file at all: you can change your SHELL so that it runs inside your conda environment. While using this shell, install the package requirements and run your code. SHELL ["conda", "run", "-n", "base", "/bin/bash", "-c"]
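For the Dockerfile in the question, a sketch of how that second approach could look; the environment name (base, the default for a bare Miniconda install) and the script path are taken from the question, and conda run is used for the final command too so the runtime process also starts inside the environment (--no-capture-output is assumed to be available in this conda version, so the service's stdout is not swallowed):

# every following RUN executes inside the conda base environment
SHELL ["conda", "run", "-n", "base", "/bin/bash", "-c"]

# start the service inside the same environment at container runtime
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "base", "/abc/bin/start-prediction.sh"]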
3
1
76,152,193
2023-5-2
https://stackoverflow.com/questions/76152193/find-indices-of-nan-elements-in-nested-lists-and-remove-them
names=[['Pat','Sam', np.nan, 'Tom', ''], ["Angela", np.nan, "James", ".", "Jackie"]] values=[[1, 9, 1, 2, 1], [1, 3, 1, 5, 10]] I have 2 lists: names and values. Each value goes with a name, i.e., Pat corresponds to the value 1 and Sam corresponds to the value 9. I would like to remove the nan from names and the corresponding values from values. That is, I want a new_names list that looks like this: [['Pat','Sam', 'Tom', ''], ["Angela", "James", ".", "Jackie"]] and a new_values list that looks like this: [[1, 9, 2, 1], [1, 1, 5, 10]] My attempt was to first find the indices of these nan entries: all_nan_idx = [] for idx, name in enumerate(names): if pd.isnull(name): all_nan_idx.append(idx) However, the above does not account for nested lists.
Simply this? import numpy as np import pandas as pd names=[['Pat','Sam', np.nan, 'Tom', ''], ["Angela", np.nan, "James", ".", "Jackie"]] values=[[1, 9, 1, 2, 1], [1, 3, 1, 5, 10]] new_names = [] new_values = [] for names_, values_ in zip(names, values): n = [] v = [] for name, value in zip(names_, values_): if not pd.isnull(name): n.append(name) v.append(value) new_names.append(n) new_values.append(v)
6
2
76,151,960
2023-5-2
https://stackoverflow.com/questions/76151960/how-do-you-get-regex-to-match-individual-words-only-when-the-line-starts-with-a
I'm trying to have RegEx match all the words in a dialogue that are said by a specific character. Every line is formatted as "[NAME]: [DIALOGUE]", so there's a consistent tag at the start of each line to check with, but I can't work out how to do that. For example, if I was looking for Romeo's dialogue in Romeo and Juliet, it would match each word in "Romeo: I love you Juliet", but wouldn't match anything in "Juliet: I love you Romeo". The only thing I've thought of as a possible solution is using lookbehind assertions, for which I have (?<=NAME:[.*])\w+, but that doesn't return any matches. Through some debugging and looking at the other answers, I've worked out that the issue is with adding the [.*], specifically the square brackets. This led me to (?<=^NAME:).*\w+, which almost worked, but it matched the entire line of dialogue instead of the individual words. After looking through the review questions when making this post, I came across this question, which had the code \Aframe.*width\s(?<width>\d+)\sheight\s(?<height>\d+)\z. I tried modifying it to be \ANAME:.*\w+\s(?<\w+>\d+)\s\z and then to \ANAME:.*\w+\s(?\w+\d+)\s\z, but both returned errors about the second \w+, citing "bad escape". I then looked at this question, which had the code (^@property|(?!^)\G)(.*? )\K([^-\n]\w+), but even the base code without any modification returned the same "bad escape" error.
Simply iterate over the lines and match all words if the line starts with the given character name:

import re

def get_character_words(character_name, dialogue):
    result = []
    for line in dialogue.splitlines():
        if line.startswith(character_name):
            # strip the leading "NAME: " tag, then collect the words of the speech
            result += re.findall(r'\w+', line[len(character_name) + 2:])
    return result

Try it:

text = '''
Romeo: Shall I hear more, or shall I speak at this?
Juliet: ’Tis but thy name that is my enemy.
Thou art thyself, though not a Montague.
What’s Montague? It is nor hand, nor foot,
Nor arm, nor face. O, be some other name
Belonging to a man. What’s in a name? That which we call a rose
Romeo: Call me but love, and I’ll be new baptized. Henceforth I never will be Romeo.
'''

print(get_character_words('Romeo', text))

'''
[
    'Shall', 'I', 'hear', 'more', 'or', 'shall', 'I', 'speak', 'at', 'this',
    'Call', 'me', 'but', 'love', 'and', 'I', 'll', 'be', 'new', 'baptized',
    'Henceforth', 'I', 'never', 'will', 'be', 'Romeo'
]
'''
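If you would rather stay with a single regex (closer to what the question was attempting), one possible approach, reusing the text variable above and assuming each speech sits on one line, is to anchor on the speaker tag in MULTILINE mode and then split the captured dialogue into words:

import re

romeo_lines = re.findall(r'^Romeo:\s*(.*)$', text, flags=re.MULTILINE)  # capture everything after the tag
romeo_words = [word for line in romeo_lines for word in re.findall(r'\w+', line)]
print(romeo_words)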
2
3
76,143,042
2023-4-30
https://stackoverflow.com/questions/76143042/is-there-an-interface-to-access-pyproject-toml-from-python
Is there an interface to access the information in pyproject.toml from Python? In particular, I'd like to access the dependencies. It doesn't seem hard to do toml.load("pyproject.toml")['project']['dependencies'] and then parse that list. But that would be rewriting logic that must already be available somewhere. Apparently there is importlib.metadata.requires(), which would be better than the line above, but that seems to give a similar list of strings that still requires manual parsing: >>> metadata.requires("scipy") ['numpy<1.27.0,>=1.19.5', 'pytest; extra == "test"', ... (As for the XY problem, I was thinking about writing a test that tries to install a package with the lowest versions of its dependencies to see whether the package still works.)
In both cases, whether project metadata (such as dependency requirements) is obtained from pyproject.toml's standardized [project] section or via importlib.metadata, the values should be the same. At least, as far as I know, the dependency requirements have the same format.

I would probably use the 3rd-party packaging library, in particular its packaging.requirements.Requirement(), to parse the dependency requirements. This library can handle other project metadata values as well; read the documentation. As far as I know, this is the library that pip itself uses internally (so it should always be reliable and up-to-date).

The issue with obtaining project metadata from pyproject.toml is that some fields can be declared dynamic, which means their value is not set in pyproject.toml. If you are fine with that, then the TOML parser from Python's standard library, tomllib, could be good enough.

You could also give pyproject-parser a try; it seems like a correct and reliable library.
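To make that concrete, here is a small, untested sketch combining the two libraries mentioned above; it assumes a static [project] table (i.e. dependencies is not listed under dynamic) and Python 3.11+ for tomllib (use the tomli backport otherwise):

import tomllib
from packaging.requirements import Requirement

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

for dep in pyproject["project"].get("dependencies", []):
    req = Requirement(dep)
    print(req.name, req.specifier, req.extras, req.marker)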
8
6
76,142,428
2023-4-30
https://stackoverflow.com/questions/76142428/unexpected-uint64-behaviour-0xffffffffffffffff-1-0
Consider the following brief numpy session showcasing uint64 data type

import numpy as np

a = np.zeros(1,np.uint64)
a
# array([0], dtype=uint64)

a[0] -= 1
a
# array([18446744073709551615], dtype=uint64)
# this is 0xffff ffff ffff ffff, as expected

a[0] -= 1
a
# array([0], dtype=uint64)
# what the heck?

I'm utterly confused by this last output. I would expect 0xFFFF'FFFF'FFFF'FFFE. What exactly is going on here?

My setup:

>>> sys.platform
'linux'
>>> sys.version
'3.10.5 (main, Jul 20 2022, 08:58:47) [GCC 7.5.0]'
>>> np.version.version
'1.23.1'
By default, NumPy converts Python int objects to numpy.int_, a signed integer dtype corresponding to C long. (This decision was made back in the early days when Python int also corresponded to C long.) There is no integer dtype big enough to hold all values of numpy.uint64 dtype and numpy.int_ dtype, so operations between numpy.uint64 scalars and Python int objects produce float64 results instead of integer results. (Operations between uint64 arrays and Python ints may behave differently, as the int is converted to a dtype based on its value in such operations, but a[0] is a scalar.) Your first subtraction produces a float64 with value -1, and your second subtraction produces a float64 with value 2**64 (since float64 doesn't have enough precision to perform the subtraction exactly). Both of these values are out of range for uint64 dtype, so converting back to uint64 for the assignment to a[0] produces undefined behavior (inherited from C - NumPy just uses a C cast). On your machine, this happened to produce wraparound behavior, so -1 wrapped around to 18446744073709551615 and 2**64 wrapped around to 0, but that's not a guarantee. You might see different behavior on other setups. People in the comments did see different behavior.
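A quick way to see the promotion happen (checked against the NumPy 1.x series used in the question; NumPy 2.x changed scalar promotion with NEP 50, so results there can differ):

import numpy as np

a = np.zeros(1, np.uint64)
x = a[0] - 1           # uint64 scalar minus Python int: computed in float64
print(type(x), x)      # <class 'numpy.float64'> -1.0

a[0] = x               # casting the out-of-range float back to uint64 is undefined behaviour
print(a)               # wrapped to 18446744073709551615 here, but that is not guaranteed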
46
41
76,144,065
2023-4-30
https://stackoverflow.com/questions/76144065/why-does-python-turtle-graphics-keep-crashing-stop-responding
Whenever I try to run my program, it draws the two turtles and then the window stops responding. What I was expecting is that, until one of the pieces touches the other (by me dragging it close), I would be able to drag both of them independently. What's happening though is that whenever I run the program, after drawing both of the turtles, the window stops responding. I don't get any errors; it just closes after freezing until I click the close button. I've looked at other people's posts where they had this problem, but they haven't had screen.mainloop() at the end and I do.

import turtle

captured_pieces = []

blue = turtle.Turtle()
black = turtle.Turtle()
screen = turtle.Screen()

blue.penup()
black.penup()
blue.shape('square')
black.shape('triangle')
blue.setpos(100,100)
black.setpos(-100,-100)
blue.color('blue')
black.color('black')

def bmove():
    black.ondrag(black.goto)
    if black.distance(blue) < 30:
        captured_pieces.append("BlC")
        print(captured_pieces)
        check()

def blmove():
    blue.ondrag(blue.goto)
    if blue.distance(black) < 30:
        captured_pieces.append("BC")
        print(captured_pieces)
        check()

def check():
    if "BlC" in captured_pieces:
        print("blue captured")

def check():
    if "BC" in captured_pieces:
        print("black captured")

while "BlC" not in captured_pieces and "BC" not in captured_pieces:
    bmove()
    blmove()

screen.mainloop()
The while loop does two things over and over: add drag handlers, and check distance. The loop doesn't call any turtle methods that cause turtle's rendering/event loop to run (for example, .forward() or .goto()), so the distance checks can't be true to break the loop. We want to get to mainloop() to enable user interaction, but we can't get there unless user interaction is allowed--we're stuck.

Avoid the loop (you generally don't want to use while in a real-time turtle program) and check for collisions inside the drag handlers rather than in the loop, then let mainloop() run the turtle rendering loop.

This reveals a new problem: the internal loop's "gliding" speed causes dragging to be very glitchy. You can use tracer(0) to disable the internal loop and turtle.update() to manually trigger repaints when you need them, eliminating any gliding behavior and giving you full control of rerenders.

Here's a simple example:

import turtle

turtle.tracer(0)

blue = turtle.Turtle()
black = turtle.Turtle()
screen = turtle.Screen()

blue.penup()
black.penup()
blue.shape("square")
black.shape("triangle")
blue.setpos(100, 100)
black.setpos(-100, -100)
blue.color("blue")
black.color("black")

def handle_drag(piece, other, x, y):
    piece.goto(x, y)
    turtle.update()
    if piece.distance(other) < 30:
        print("capture:", piece.color()[0], other.color()[0])

black.ondrag(lambda x, y: handle_drag(black, blue, x, y))
blue.ondrag(lambda x, y: handle_drag(blue, black, x, y))

turtle.update()
screen.mainloop()

Before long, you'll likely need custom classes (like a Piece class, which might be subclassed by various types of pieces with different characteristics) and data structures. The approach above with separate variables isn't scalable. But I'll leave it as is for now in the interest of keeping it scoped to your immediate problem.
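To illustrate the Piece class idea mentioned above, here is one possible untested sketch; the class name, constructor arguments, and capture logic are illustrative choices, not part of the original answer:

import turtle

turtle.tracer(0)
screen = turtle.Screen()
pieces = []

class Piece(turtle.Turtle):
    def __init__(self, shape, color, pos):
        super().__init__(shape=shape)
        self.penup()
        self.color(color)
        self.setpos(pos)
        self.ondrag(self.handle_drag)

    def handle_drag(self, x, y):
        self.goto(x, y)
        turtle.update()
        for other in pieces:
            if other is not self and self.distance(other) < 30:
                print("capture:", self.color()[0], other.color()[0])

pieces.append(Piece("square", "blue", (100, 100)))
pieces.append(Piece("triangle", "black", (-100, -100)))

turtle.update()
screen.mainloop()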
5
1
76,143,613
2023-4-30
https://stackoverflow.com/questions/76143613/trying-to-add-a-pydantic-model-to-a-set-gives-unhashable-error
I have the following code

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = "Jane Doe"

def add_user(user: User):
    a = set()
    a.add(user)
    return a

add_user(User(id=1))

When I run this, I get the following error:

TypeError: unhashable type: 'User'

Is there a way this issue can be solved?
You just need to implement a hash function for the class. You can easily do that by hashing the User's id or name and returning it as the hash, like this:

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = "Jane Doe"

    def __hash__(self) -> int:
        return self.name.__hash__()  # or self.id.__hash__()

def add_user(user: User):
    a = set()
    a.add(user)
    return a

add_user(User(id=1))
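As an alternative worth checking against your pydantic version: pydantic can generate __hash__ for you when the model is declared immutable, so you don't have to write the method by hand. A sketch in v1 syntax (matching the model in the question); in v2 the equivalent is model_config = ConfigDict(frozen=True):

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = "Jane Doe"

    class Config:
        frozen = True  # makes instances immutable and hashable

print(hash(User(id=1)))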
5
5
76,137,714
2023-4-29
https://stackoverflow.com/questions/76137714/trying-to-fit-a-gaussian-with-scipy
I have a diffractogram, and I need to fit a given peak to a Gaussian function. I'm trying to use curve_fit from scipy.optimize for that, but I'm getting some errors. First of all, because of my data, I've added a straight line to the equation to fit, so I have

f(x) = (1/(sigma*sqrt(2*pi))) * exp(-(x - mu)^2 / (2*sigma^2)) + m*x + b

When I try to fit this equation via

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, mu, sigma, m, b):
    return (1.0/(sigma*np.sqrt(2.0*np.pi)))*np.exp((-(x-mu)**2)/(2.0*sigma**2)) + m*x + b

#load the dataset
datadrx = np.genfromtxt('data.txt')
X = datadrx[:,0]
Y = datadrx[:,1]

mu0 = np.mean(Y)
sigma0 = np.std(Y)

popt, pcov = curve_fit(gauss, X, Y, p0 = [mu0,sigma0, 1.0, 1.0])

plt.title('Gaussian fit')
plt.plot(X, gauss(X,*popt))
plt.plot(X,Y)

I get this warning

/home/eariasp/miniconda3/envs/herrcomp/lib/python3.10/site-packages/scipy/optimize/_minpack_py.py:906: OptimizeWarning: Covariance of the parameters could not be estimated
  warnings.warn('Covariance of the parameters could not be estimated',

and a straight line as a fit. When I remove the m*x + b part I get a 'better' fit (I mean better because at least it's not just a straight line), but it's still far away from what I want.

So, my question is: what is wrong? I found that it's a good idea to define the guess parameters p0, so I did it, but it didn't work in my case.

In order to reproduce, you can use this dataset. Check that the values in the first column are close to one another; I don't know if that has something to do with it, like subtractive cancellation, taking e to a negative small power or something; I found something about that but really I didn't get it.
Yes, you need initial parameters; no, you're not calculating them correctly. The mean of Y is not the mean of your random variable, and the standard deviation of Y is not the standard deviation of your random variable.

The better your constraints and guesses are, the faster and more reliably your fit will converge. Thankfully, for a Gaussian such constraints and guesses are reasonably straightforward to calculate - and in fact the guess is so good that for some (not all) applications, fit isn't even needed.

import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit


def gauss(x: np.ndarray, a: float, mu: float, sigma: float, b: float) -> np.ndarray:
    return (a/sigma/np.sqrt(2*np.pi))*np.exp(-0.5 * ((x-mu)/sigma)**2) + b


X, Y = np.loadtxt('data.txt', delimiter=' ').T
xmin, xmax = X.min(), X.max()  # left and right bounds
i_max = Y.argmax()             # index of highest value - for guess, assumed to be Gaussian peak
ymax = Y[i_max]                # height of guessed peak
mu0 = X[i_max]                 # centre x position of guessed peak
b0 = Y[:20].mean()             # height of baseline guess

# https://en.wikipedia.org/wiki/Gaussian_function#Properties
# Index of first argument to be at least halfway up the estimated bell
i_half = np.argmax(Y >= (ymax + b0)/2)
# Guess sigma from the coordinates at i_half. This will work even if the point isn't at exactly
# half, and even if this point is a distant outlier the fit should still converge.
sigma0 = (mu0 - X[i_half]) / np.sqrt(2*np.log((ymax - b0)/(Y[i_half] - b0)))
a0 = (ymax - b0) * sigma0 * np.sqrt(2*np.pi)

p0 = a0, mu0, sigma0, b0
popt, _ = curve_fit(
    f=gauss, xdata=X, ydata=Y, p0=p0,
    bounds=(
        (1, xmin, 0, 0),
        (np.inf, xmax, xmax - xmin, ymax),
    ),
)
print('Guess:', np.array(p0))
print('Fit:  ', popt)

fig, ax = plt.subplots()
ax.set_title('Gaussian fit')
ax.scatter(X, Y, marker='+', label='experiment', color='orange')
ax.plot(X, gauss(X, *p0), label='guess', color='lightgrey')
ax.plot(X, gauss(X, *popt), label='fit')
ax.legend()
plt.show()

Guess: [2.43599321e+02 6.86947536e+01 2.23407054e-01 3.02000000e+02]
Fit:   [2.28882618e+02 6.86885140e+01 2.25575284e-01 3.02398123e+02]

Your data are also too noisy and don't have enough samples for m to have any meaning.
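If you also want uncertainty estimates for the fitted parameters (not covered above), a common follow-up is to keep the covariance matrix that curve_fit returns instead of discarding it; continuing from the snippet above, roughly:

popt, pcov = curve_fit(
    f=gauss, xdata=X, ydata=Y, p0=p0,
    bounds=((1, xmin, 0, 0), (np.inf, xmax, xmax - xmin, ymax)),
)
perr = np.sqrt(np.diag(pcov))  # 1-sigma standard errors for a, mu, sigma, b
for name, value, err in zip(('a', 'mu', 'sigma', 'b'), popt, perr):
    print(f'{name} = {value:.4g} +/- {err:.2g}')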
3
4