question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
76,140,839 | 2023-4-30 | https://stackoverflow.com/questions/76140839/regex-to-find-sequences-of-two-given-characters-that-alternate | I have written a small program to convert IPv6 addresses to ints and back, and I have managed to beat built-in ipaddress.IPv6Address in terms of performance. import re MAX_IPV6 = 2**128-1 DIGITS = set("0123456789abcdef") def parse_ipv6(ip: str) -> int: assert isinstance(ip, str) and len(ip) <= 39 segments = ip.lower().split(":") l, n, p, count, compressed = len(segments), 0, 7, 0, False last = l - 1 for i, s in enumerate(segments): assert count <= 8 and len(s) <= 4 and not set(s) - DIGITS if not s: if i in (0, last): continue assert not compressed p = l - i - 2 compressed = True else: n += int(s, 16) << p*16 p -= 1 count += 1 return n def to_ipv6(n: int, compress=False) -> str: assert isinstance(n, int) and 0 <= n <= MAX_IPV6 ip = '{:032_x}'.format(n).replace('_', ':') if not compress: return ip return re.sub('0{1,3}([\da-f]+)', '\\1', ip) I am currently trying to implement compression, namely find the longest run of two alternating characters 0 and :, and replace the first occurrence of it with ::. For example, given this address: 'abcd:0:ef12::a:0:0', parse_ipv6('abcd:0:ef12::a:0:0') gives this number: 228362408209208931942454293848746098688, but to_ipv6(parse_ipv6('abcd:0:ef12::a:0:0'), 1) gives this: 'abcd:0:ef12:0:0:a:0:0'. As you see the result is not properly compressed. In short I want a regex pattern to be used with re.findall to find sequences like [':0', ':0:', ':0:0:', '0:', '0:0:', '0:0:0:']. I have Google searched this, and found many questions on this site with similar phrasing, but none of them solves my problem. I have tried this regex: In [277]: ip Out[277]: 'abcd:0:ef12:0:0:a:0:0' In [278]: re.findall('((:0)+)|((0:)+)', ip) Out[278]: [(':0', ':0', '', ''), (':0:0', ':0', '', ''), (':0:0', ':0', '', '')] I was expecting [':0:', ':0:0:', ':0:0']. How to fix this? Using the correct regex I updated my code to this: EMPTY = re.compile(r':?\b(?:0\b:?)+') def to_ipv6(n: int, compress:bool=False) -> str: assert isinstance(n, int) and 0 <= n <= MAX_IPV6 ip = '{:039_x}'.format(n).replace('_', ':') if not compress: return ip ip = ':'.join(s.lstrip('0') if s != '0000' else '0' for s in ip.split(':')) longest = max(EMPTY.findall(ip), key=len, default='') if len(longest) > 2: ip = ip.replace(longest, '::', 1) return ip It correctly compresses the given example: In [334]: to_ipv6(228362408209208931942454293848746098688, True) Out[334]: 'abcd:0:ef12::a:0:0' I stopped using re.sub('0{1,3}([\da-f]+)', '\\1', ip) because somehow it took 8 microseconds and some to complete. The comprehension is faster. The original code in this question doesn't actually work, the parser correctly parses valid IPv6 addresses but fails to identify invalid IPv6 addresses, thus gives incorrect output when it should raise exceptions. And the formatter also doesn't work, it gives incorrect outputs as well, namely it can sometimes fail to choose the longest consecutive empty fields, which is wrong. And will raise exceptions when the address cannot be compressed. DO NOT USE MY CODE IN YOUR PRODUCTION CODE, it is buggy and not well-tested. That said, I have fixed every bug I can find and made the code raise exceptions for all invalid inputs I can think of, a long time ago. But I didn't bother update the code in this question. Because I am lazy and there might still be edge cases I didn't catch. 
And today this question received an upvote so long after it was posted, an upvote it didn't deserve. So I felt compelled to update the question and edit the code, I fixed the formatter and not the parser, because the parser became too long, and I meant to discourage its use (well I would keep using my code, but nobody else should use it). | See the Python manual -> re.findall If multiple groups are present, return a list of tuples of strings matching the groups. Non-capturing groups do not affect the form of the result. To match the desired sequences and prevent matching such as :00: a0: 0a: here an idea. res = re.findall(r":?\b(?:0\b:?)+", ip) See this demo at regex101 or a Python demo at tio.run It matches an optional : followed by a word boundary, followed by a repeated (?: non capturing group ) one or more times which contains 0 at a word boundary followed by an optional colon. If you don't want to match 0 without colon, another variant: (?::0\b)+:?|(?:\b0:)+(?:0\b)? | 3 | 3 |
76,139,355 | 2023-4-30 | https://stackoverflow.com/questions/76139355/flask-object-has-no-attribute-session-cookie-name | I keep getting the error 'Flask' object has no attribute 'session_cookie_name' right at initialization on an app that used to work. I built a smaller test app to test it and noticed that if I remove the line app.config["SESSION_TYPE"] = "filesystem" the error goes away. Problem is I need to use sessions in my Flask app and, since it is for testing, I would prefer to use the filesystem to store session data. here is my mini program for testing... I don't know what I am missing, this used to work fine and code examples online do it exactly this way but I keep getting the same error. This is the error I receive from flask import Flask, render_template,redirect, url_for, request, session from flask_session import Session app = Flask(__name__) app.config["SESSION_PERMANENT"] = False app.config["SESSION_TYPE"] = "filesystem" Session(app) @app.route("/") def index(): session["test"] = "test" return render_template ('index.html') This worked before. It is all of the sudden not working. I have tried installing a different version of Flask_Session. I have tried adding the "SESSION_COOKIE_NAME" to app.config before Session(app) even though I understand its default is session. Nothing so far has worked or changed the error message. | This error happens when the currently used version of Flask does not have the session_cookie_name attribute. This attribute was added in Flask 2.0, so if you are using an older version of Flask, you will need to upgrade to a newer version. Just upgrade Flask by executing the command pip install --upgrade Flask Note: If you are using a virtual environment, be sure to activate it before upgrading. | 5 | 1 |
76,139,696 | 2023-4-30 | https://stackoverflow.com/questions/76139696/how-to-get-list-of-possible-replacements-in-string-using-regex-in-python | I have the following strings: 4563_1_some_data The general pattern is r'\d{1,5}_[1-4]_some_data Note, that numbers before first underscore may be the same for different some_data So the question is: how to get all possible variants of replacement using regex? Desired output: [4563_1_some_data, 4563_2_some_data, 4563_3_some_data, 4563_4_some_data] My attempt: [re.sub(r'_\d_', r'_[1-4]_', row).group() for row in data] But it has result: 4563_[1-4]_some_data Seems like ordinary replacement. How should I activate pattern replacement and get the list? | You need to iterate over a range object to create your list. Create that range object: To do that you need a pattern to get the [1-4] part from your pattern. You'll need another pattern to replace the number in the actual data with the variable from range object. import re text = "4563_1_some_data" original_pattern = r"\d{1,5}_[1-4]_some_data" # A regex to get the `[1-4]` part. find_numbers_pattern = r"(?<=_)\[(\d)\-(\d)\](?=_)" # Get `1` and `4` to create a range object a, b = map(int, re.search(find_numbers_pattern, original_pattern).groups()) # Use `count=` to only change the first match. lst = [re.sub(r"(?<=_)\d(?=_)", str(i), text, count=1) for i in range(a, b + 1)] print(lst) output: ['4563_1_some_data', '4563_2_some_data', '4563_3_some_data', '4563_4_some_data'] (?<=_)\[(\d)\-(\d)\](?=_) explanation: (?<=_): A positive lookbehind assertion to match _. \[(\d)\-(\d)\]: to get the [1-4] for example. (?=_): A positive lookahead assertion to find the _. | 3 | 2 |
76,136,469 | 2023-4-29 | https://stackoverflow.com/questions/76136469/how-to-allow-hyphen-in-query-parameter-name-using-fastapi | I have a simple application below: from typing import Annotated import uvicorn from fastapi import FastAPI, Query, Depends from pydantic import BaseModel app = FastAPI() class Input(BaseModel): a: Annotated[str, Query(..., alias="your_name")] @app.get("/") def test(inp: Annotated[Input, Depends()]): return f"Hello {inp.a}" def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() curl "http://127.0.0.1:8001/?your_name=amin" returns "Hello amin" I now change the alias from your_name to your-name. from typing import Annotated import uvicorn from fastapi import FastAPI, Query, Depends from pydantic import BaseModel app = FastAPI() class Input(BaseModel): a: Annotated[str, Query(..., alias="your-name")] @app.get("/") def test(inp: Annotated[Input, Depends()]): return f"Hello {inp.a}" def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() Then curl "http://127.0.0.1:8001/?your-name=amin" returns: {"detail":[{"loc":["query","extra_data"],"msg":"field required","type":"value_error.missing"}]} However, hyphened alias in a simpler application is allowed. from typing import Annotated import uvicorn from fastapi import FastAPI, Query app = FastAPI() @app.get("/") def test(a: Annotated[str, Query(..., alias="your-name")]): return f"Hello {a}" def main(): uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001) if __name__ == "__main__": main() curl "http://127.0.0.1:8001/?your-name=amin" returns "Hello Amin" Is this a bug? what is the problem here? | Problem Overview As explained here by @JarroVGIT, this is a behaviour coming from Pydantic, not FastAPI. If you looked closely at the error you posted: {"detail":[{"loc":["query","extra_data"],"msg":"field required","type":"value_error.missing"}]} it talks about a missing value for a query parameter named extra_data—however, there is no such a parameter defined in your BaseModel. If, in fact, you used the Swagger UI autodocs at /docs, you would see that your endpoint is expecting a required parameter named extra_data. As noted by @JarroVGIT in the linked discussion above: Whenever you give a 'non-pythonic' alias (like with dashes/hyphens), the signature that inspect.signature() will get from that BaseModel subclass will be grouped in the kwargs extra_data. Hence the error about the missing value for extra_data query parameter. Solution 1 You could wrap the Query() in a Field(), as demonstrated in this answer. Example: from fastapi import Query, Depends from pydantic import BaseModel, Field class Input(BaseModel): a: str = Field(Query(..., alias="your-name")) @app.get("/") def main(i: Input = Depends()): pass Solution 2 As shown in this answer and this answer, you could use a separate dependency class, instead of a BaseModel. Example: from fastapi import Query, Depends from dataclasses import dataclass @dataclass class Input: a: str = Query(..., alias="your-name") @app.get("/") def main(i: Input = Depends()): pass Solution 3 Declare the query parameter directly in the endpoint, instead of using a Pydantic BaseModel. Example: from fastapi import Query @app.get("/") def main(a: str = Query(..., alias="your-name")): pass | 3 | 6 |
76,139,101 | 2023-4-30 | https://stackoverflow.com/questions/76139101/wordle-copy-having-trouble-with-duplicate-letters-python | I’m trying to make a copy of the game Wordle but I’m having trouble dealing with duplicate letters. Here’s my code and I explain my logic a bit below. # a portion of the code, the logical bit # def minigame(): # wordle global wordGuess delay_print([f'You have 6 chances to guess the 5 letter word. Heres the coloring guide: \n', colored('W', on_color='on_red') + ': The letter is wrong\n', colored('W', on_color='on_yellow') + ': The letter is right but in the wrong spot\n', colored('W', on_color='on_green') + ': The letter is right and in the correct spot\n'], 0.03) with open('valid-wordle-words.txt', 'r') as f: listWords = [line.strip() for line in f] with open('validWords.txt', 'r') as f: listValid = [line.strip() for line in f] wordGuess = random.choice(listWords) counter = 0 while counter <= 5: guess = input('\n> ') if len(guess) != 5: delay_print("Your guess should be 5 letters long.") else: if guess in listValid or guess in listWords: counter += 1 userGuess(guess, wordGuess) else: delay_print("Your guess is not valid.") if guess != wordGuess: delay_print('You lost :(. The word was {}\n'.format(wordGuess)) retry(minigame) def userGuess(guess: str, correct: str): if guess == correct: for x in correct: print(colored(x, on_color='on_green'), end='') delay_print("\nYou win!") exit() else: for index, (guess_letter, correct_letter) in enumerate(zip(guess, correct)): if guess.count(guess_letter) >= 2: numCor = 0 if guess_letter == correct_letter: print(colored(guess_letter, on_color='on_green'), end='') correct = correct.replace(guess_letter, '-', 1) numCor += 1 elif guess_letter in correct and numCor != guess.count(guess_letter): print(colored(guess_letter, on_color='on_yellow'), end='') correct = correct.replace(guess_letter, '-', 1) else: print(colored(guess_letter, on_color='on_red'), end='') else: if guess_letter == correct_letter: print(colored(guess_letter, on_color='on_green'), end='') correct = correct.replace(guess_letter, '-', 1) elif guess_letter in correct: print(colored(guess_letter, on_color='on_yellow'), end='') correct = correct.replace(guess_letter, '-', 1) elif guess_letter not in correct: print(colored(guess_letter, on_color='on_red'), end='') I explain the rules of Wordle (kinda) in the code, so go there if you aren’t aware of what the rules are. Anyways, here’s my logic. Let’s assume that the word I’m trying to guess is STOLE. Basically for each letter, I first check to see if there are two or more instances of the same letter in the guess (this is the problem of duplicates that I’ve been trying to solve, my logic is probably flawed though but this is what I came up with). I thought this would be a good idea since if there are two of the same letters in the guess like the T in LATTE, I would check the first T, giving it the green highlight, and then preventing the second T from getting a yellow highlight. Here’s the code in action: Here’s an instance of where it doesn’t seem to go as planned: <- the problem is with my guess trees. The code is saying that there are two E’s in the final word, but if you look at the end, OFTEN only has one E. Any help would be appreciated. Thank you! This code is my attempt at trying to account for duplicates. I tried integrating a counter but that doesn’t seem to work. I think the error lies in the fundamentals of my logic but I can’t pinpoint where exactly. 
| You dont handle the duplicate letters properly, the same letter can be counted multiple times as a correct letter in the wrong position. we can create a separate function to compare the guess and correct word and return the colored output for each letter. def userGuess(guess: str, correct: str): if guess == correct: for x in correct: print(colored(x, on_color='on_green'), end='') delay_print("\nYou win!") exit() else: output = compare_guess_and_correct(guess, correct) for letter_color in output: print(letter_color, end='') print() def compare_guess_and_correct(guess, correct): output = [] correct_matched_positions = [] yellow_count = {} for i, g in enumerate(guess): if g == correct[i]: output.append(colored(g, on_color='on_green')) correct_matched_positions.append(i) else: yellow_count[g] = yellow_count.get(g, 0) + 1 for i, g in enumerate(guess): if i not in correct_matched_positions and g in correct: if yellow_count[g] > 0: output.append(colored(g, on_color='on_yellow')) correct = correct.replace(g, '', 1) yellow_count[g] -= 1 else: output.append(colored(g, on_color='on_red')) elif i not in correct_matched_positions: output.append(colored(g, on_color='on_red')) return output | 3 | 1 |
76,138,438 | 2023-4-29 | https://stackoverflow.com/questions/76138438/incompatible-types-in-assignment-expression-has-type-liststr-variable-has | If I run this code through mypy: if __name__ == '__main__': list1: list[str] = [] list2: list[str | int] = [] list2 = list1 It gives me following error: error: Incompatible types in assignment (expression has type "List[str]", variable has type "List[Union[str, int]]") Why? Isn't a list[str] a subset of list[str | int]? If yes, then why can't I assign list1 which doesn't have wider range of possible types to list2? | Generic types can be classified as covariant, contravariant, or invariant. When u is a subtype of v, a generic type g is covariant when g[u] is a subtype of g[v], contravariant when g[v] is a subtype of g[u], or invariant when neither g[u] nor g[v] is a subtype of the other. list is invariant because lists are mutable, in which case it is not safe to substitute one concrete type of list for another. Immutable sequences are covariant: Sequence[str] is, indeed, a subtype of Sequence[str | int], so you could do the following: from collections.abc import Sequence if __name__ == '__main__': list1: Sequence[str] = [] list2: Sequence[str | int] = [] list2 = list1 The only thing you can do with a Sequence, regardless of whether it is a list, tuple, etc, is read from it, and anything reading from list2 will know it can't assume that any given element is a str or an int (though it will be one or the other, rather than, say, a float), so it's safe to provide a sequence of either type. | 3 | 4 |
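To see concretely what invariance is protecting against, here is a small illustrative sketch (not from the answer above). It actually runs, because Python ignores annotations at runtime; the point is what would break if the type checker accepted the assignment:

```python
list1: list[str] = ["a"]
list2: list[str | int] = list1   # mypy rejects this line; suppose it were allowed
list2.append(7)                  # perfectly legal for a list[str | int]
list1[-1].upper()                # AttributeError: list1 and list2 are the same object,
                                 # so list1 now "contains" an int
```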
76,138,324 | 2023-4-29 | https://stackoverflow.com/questions/76138324/create-a-new-dataframe-by-breaking-down-the-columns-data-of-an-old-dataframe | I have the below dataframe salesman north_access south_access east_access west_access A 1 0 0 1 B 0 1 1 1 C 1 0 1 1 I want to convert the above into the below format salesman direction access A north 1 A south 0 A east 0 A west 1 B north 0 B south 1 B east 1 B west 1 I tried exploring the split and transpose function but didnt get the expected results. Can someone please help with the code to make the above changes in python, thanks in advance. | Another solution (using pd.wide_to_long): df.columns = [f'access_{c.split("_")[0]}' if "_access" in c else c for c in df.columns] x = pd.wide_to_long( df, stubnames="access", suffix=r".*", i=["salesman"], j="direction", sep="_" ).reset_index() print(x) Prints: salesman direction access 0 A north 1 1 B north 0 2 C north 1 3 A south 0 4 B south 1 5 C south 0 6 A east 0 7 B east 1 8 C east 1 9 A west 1 10 B west 1 11 C west 1 | 3 | 5 |
76,113,195 | 2023-4-26 | https://stackoverflow.com/questions/76113195/custom-optimizer-in-pytorch-or-tensorflow-2-12-0 | I am trying to implement my custom optimizer in PyTorch or TensorFlow 2.12.0. With help of ChatGPT I always get code that have errors, what's more I can't find any useful examples. I would like to implement custom optimizer as: d1 contains sign of current derivatives d2 contains sign of previous derivatives step_size is 1.0 step_size is divided by 2.0 if sign of d1 != d2 In PyTorch I know that code has to look something like this: import torch.optim as optim class MyOpt(optim.Optimizer): def __init__(self, params, lr=1.0): defaults = dict(lr=lr, d1=None, d2=None) super(MyOpt, slef).__init__(params, defaults) def step(self): ??? Can anyone help me to code it ? | I did it. class MyOptimizer(optim.Optimizer): def __init__(self, params, lr=1.0): defaults = dict(lr=lr) super(MyOptimizer, self).__init__(params, defaults) self.lr = {} self.d2 = {} for group in self.param_groups: for param in group['params']: self.lr[param] = torch.ones_like(param.data) * lr self.d2[param] = torch.ones_like(param.data) def step(self, closure=None): loss = None if closure is not None: loss = closure() for group in self.param_groups: for param in group['params']: if param.grad is None: continue d1 = torch.sign(param.grad.data) t = torch.where(torch.sign(self.d2[param]) == d1, 1.0, 2.0) lr = self.lr[param] lr = lr / t self.lr[param] = lr param.data -= d1 * lr self.d2[param] = d1 return loss | 4 | 2 |
76,136,591 | 2023-4-29 | https://stackoverflow.com/questions/76136591/parse-a-remote-xml-gz-file-of-a-database-without-downloading | I need to parse a Pubchem database to search for certain clues on the pages of compounds (Toxicity codes, to be exact, they look like 'H300'), and then add their CIDs to the correspondent lists The Database is here https://ftp.ncbi.nih.gov/pubchem/Compound/CURRENT-Full/XML/ But the xml.gz files there are so big that they can't be unpacked on my computer So maybe there is a way to read this files directly on the server of a PubChem | One way I would approach this is to use curl and gunzip and maybe grep: Example: curl -ks https://ftp.ncbi.nih.gov/pubchem/Compound/CURRENT-Full/XML/Compound_000000001_000500000.xml.gz -o - | gunzip | grep someString This will stream down the file, and in realtime decompress it, which will allow you in realtime to grep for what you need | 3 | 2 |
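The same streaming idea can also be expressed in Python with only the standard library, if you would rather do the filtering from a script. This is a rough sketch: the file name is the one used in the curl example above, and "H300" stands in for whatever clue you are scanning for, as mentioned in the question.

```python
import gzip
import urllib.request

url = ("https://ftp.ncbi.nih.gov/pubchem/Compound/CURRENT-Full/XML/"
       "Compound_000000001_000500000.xml.gz")

with urllib.request.urlopen(url) as resp:        # streamed HTTP response
    with gzip.GzipFile(fileobj=resp) as gz:      # decompress on the fly, nothing saved to disk
        for raw_line in gz:                      # the whole file is never held in memory
            line = raw_line.decode("utf-8", errors="replace")
            if "H300" in line:                   # toxicity-code clue from the question
                print(line.strip())
```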
76,135,606 | 2023-4-29 | https://stackoverflow.com/questions/76135606/what-is-the-benefit-of-using-curried-currying-function-in-functional-programming | If you consider inner_multiply as an initializer of multiply, shouldn't you make them loosely coupled and DI the initializer (or any other way) especially if you require multiple initializers? Or am I misunderstanding the basic concept of curried function in FP? def inner_multiply(x): def multiply(y): return x * y return multiply def curried_multiply(x): return inner_multiply(x) multiply_by_3 = curried_multiply(3) result = multiply_by_3(5) print(result) # Output: 15 (3 * 5) | You can define an entirely generic curry function like this: def curry(f): return lambda x: lambda y: f(x, y) Assume, in order to reproduce the example in the OP, that you also define a multiply function like this: def multiply(x, y): return x * y You can now partially apply multiply using curry: >>> multiply_by_3 = curry(multiply)(3) >>> multiply_by_3(5) 15 This example immediately uses multiply_by_3, but the benefit is that you don't have to do that. Rather, you can partially apply the function in one place, pass that partially applied function around, and call it in an entirely different part of your code base. | 3 | 5 |
76,129,498 | 2023-4-28 | https://stackoverflow.com/questions/76129498/wordcloud-only-supported-for-truetype-fonts | I am trying to generate a word cloud using the WordCloud module in Python, however I see the following error whenever I call .generate Traceback (most recent call last): File "/mnt/6db3226b-5f96-4257-980d-bb8ec1dad8e7/test.py", line 4, in <module> wc.generate("foo bar foo bar hello world") File "/home/mjc/.local/lib/python3.10/site-packages/wordcloud/wordcloud.py", line 639, in generate return self.generate_from_text(text) File "/home/mjc/.local/lib/python3.10/site-packages/wordcloud/wordcloud.py", line 621, in generate_from_text self.generate_from_frequencies(words) File "/home/mjc/.local/lib/python3.10/site-packages/wordcloud/wordcloud.py", line 453, in generate_from_frequencies self.generate_from_frequencies(dict(frequencies[:2]), File "/home/mjc/.local/lib/python3.10/site-packages/wordcloud/wordcloud.py", line 508, in generate_from_frequencies box_size = draw.textbbox((0, 0), word, font=transposed_font, anchor="lt") File "/usr/lib/python3/dist-packages/PIL/ImageDraw.py", line 671, in textbbox raise ValueError("Only supported for TrueType fonts") ValueError: Only supported for TrueType fonts As it stands, I am trying to create a very simple example WordCloud import matplotlib.pyplot as plt from wordcloud import WordCloud wc = WordCloud(background_color="white", font_path="./arial.ttf", width=800, height=400) wc.generate("foo bar foo bar hello world") plt.axis("off") plt.imshow(wc) plt.savefig("test.png") plt.show() Where arial.ttf is downloaded from https://www.freefontspro.com/14454/arial.ttf and placed in the same directory as test.py. I am using Ubuntu 22.04 and Python 3.10.6. I was expecting to generate a word cloud from the input "foo bar foo bar hello world", however see the error ValueError: Only supported for TrueType fonts despite passing a ttf to the font_path argument. | I could not reproduce your error with Python 3.10 even after downloading the same font - though I am on macOS. 
The only thing I can imagine is that your PIL doesn't support TrueType fonts, so you can check with: python3 -m PIL Sample Output -------------------------------------------------------------------- Pillow 9.4.0 Python 3.10.0 (v3.10.0:b494f5935c, Oct 4 2021, 14:59:19) [Clang 12.0.5 (clang-1205.0.22.11)] -------------------------------------------------------------------- Python modules loaded from /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/PIL Binary modules loaded from /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/PIL -------------------------------------------------------------------- --- PIL CORE support ok, compiled for 9.4.0 --- TKINTER support ok, loaded 8.6 --- FREETYPE2 support ok, loaded 2.13.0 <--- HERE --- LITTLECMS2 support ok, loaded 2.14 --- WEBP support ok, loaded 1.3.0 --- WEBP Transparency support ok --- WEBPMUX support ok --- WEBP Animation support ok --- JPEG support ok, compiled for libjpeg-turbo 2.1.4 --- OPENJPEG (JPEG2000) support ok, loaded 2.5.0 --- ZLIB (PNG/ZIP) support ok, loaded 1.2.13 --- LIBTIFF support ok, loaded 4.5.0 --- RAQM (Bidirectional Text) support ok, loaded 0.9.0, fribidi 1.0.12, harfbuzz 7.1.0 *** LIBIMAGEQUANT (Quantization method) support not installed --- XCB (X protocol) support ok -------------------------------------------------------------------- BLP Extensions: .blp Features: open, save, encode -------------------------------------------------------------------- BMP image/bmp Extensions: .bmp Features: open, save -------------------------------------------------------------------- BUFR Extensions: .bufr Features: open, save -------------------------------------------------------------------- CUR Extensions: .cur Features: open -------------------------------------------------------------------- DCX Extensions: .dcx Features: open -------------------------------------------------------------------- DDS Extensions: .dds Features: open, save -------------------------------------------------------------------- DIB image/bmp Extensions: .dib Features: open, save -------------------------------------------------------------------- EPS application/postscript Extensions: .eps, .ps Features: open, save -------------------------------------------------------------------- FITS Extensions: .fit, .fits Features: open, save -------------------------------------------------------------------- FLI Extensions: .flc, .fli Features: open -------------------------------------------------------------------- FTEX Extensions: .ftc, .ftu Features: open -------------------------------------------------------------------- GBR Extensions: .gbr Features: open -------------------------------------------------------------------- GIF image/gif Extensions: .gif Features: open, save, save_all -------------------------------------------------------------------- GRIB Extensions: .grib Features: open, save -------------------------------------------------------------------- HDF5 Extensions: .h5, .hdf Features: open, save -------------------------------------------------------------------- ICNS image/icns Extensions: .icns Features: open, save -------------------------------------------------------------------- ICO image/x-icon Extensions: .ico Features: open, save -------------------------------------------------------------------- IM Extensions: .im Features: open, save -------------------------------------------------------------------- IMT Features: open 
-------------------------------------------------------------------- IPTC Extensions: .iim Features: open -------------------------------------------------------------------- JPEG image/jpeg Extensions: .jfif, .jpe, .jpeg, .jpg Features: open, save -------------------------------------------------------------------- JPEG2000 image/jp2 Extensions: .j2c, .j2k, .jp2, .jpc, .jpf, .jpx Features: open, save -------------------------------------------------------------------- MCIDAS Features: open -------------------------------------------------------------------- MPEG video/mpeg Extensions: .mpeg, .mpg Features: open -------------------------------------------------------------------- MSP Extensions: .msp Features: open, save, decode -------------------------------------------------------------------- PCD Extensions: .pcd Features: open -------------------------------------------------------------------- PCX image/x-pcx Extensions: .pcx Features: open, save -------------------------------------------------------------------- PIXAR Extensions: .pxr Features: open -------------------------------------------------------------------- PNG image/png Extensions: .apng, .png Features: open, save, save_all -------------------------------------------------------------------- PPM image/x-portable-anymap Extensions: .pbm, .pgm, .pnm, .ppm Features: open, save -------------------------------------------------------------------- PSD image/vnd.adobe.photoshop Extensions: .psd Features: open -------------------------------------------------------------------- SGI image/sgi Extensions: .bw, .rgb, .rgba, .sgi Features: open, save -------------------------------------------------------------------- SPIDER Features: open, save -------------------------------------------------------------------- SUN Extensions: .ras Features: open -------------------------------------------------------------------- TGA image/x-tga Extensions: .icb, .tga, .vda, .vst Features: open, save -------------------------------------------------------------------- TIFF image/tiff Extensions: .tif, .tiff Features: open, save, save_all -------------------------------------------------------------------- WEBP image/webp Extensions: .webp Features: open, save, save_all -------------------------------------------------------------------- WMF Extensions: .emf, .wmf Features: open, save -------------------------------------------------------------------- XBM image/xbm Extensions: .xbm Features: open, save -------------------------------------------------------------------- XPM image/xpm Extensions: .xpm Features: open -------------------------------------------------------------------- XVTHUMB Features: open -------------------------------------------------------------------- Or, more speedily - since we are only interested in "freetype" lines: python3 -m PIL | grep -i type Then you may need to install freetype with a command something like - not too sure of the exact package name: sudo apt install libfreetype6 | 10 | 3 |
76,124,622 | 2023-4-27 | https://stackoverflow.com/questions/76124622/override-new-of-a-class-which-extends-enum | A class Message extends Enum to add some logic. The two important parameters are verbose level and message string, with other optional messages (*args). Another class MessageError is a special form of the Message class in which verbose level is always zero, everything else is the same. The following code screams TypeError: TypeError: Enum.new() takes 2 positional arguments but 3 were given from enum import Enum class Message(Enum): verbose: int messages: list[str] def __new__(cls, verbose: int, message: str, *args): self = object.__new__(cls) self._value_ = message return self def __init__(self, verbose: int, message: str, *args): self.verbose = verbose self.messages = [self.value] + list(args) class MessageError(Message): def __new__(cls, message: str): return super().__new__(cls, 0, message) # <- problem is here! def __init__(self, message: str): return super().__init__(0, message) class Info(Message): I_001 = 2, 'This is an info' class Error(MessageError): E_001 = 'This is an error' I was expecting that super().__new__(cls, 0, message) would call __new__ from the Message class, but it seems that is not the case. What am I doing wrong here? | Enums are unusual (aka weird) in several regards, with creation being the biggest area. During enum class creation, after the members themselves have been created but before the class is returned, any existing __new__ is renamed to __new_member__1 and the __new__ from Enum itself is inserted into the class -- this is so calls like Info('This is an error') will return the existing member (or raise), and not create a new member. Your MessageError.__new__ should look like: def __new__(cls, message: str): return super().__new_member__(cls, 0, message) 1 In 3.11+ it's also saved as _new_member_ to follow the pattern of sunder names for enum-specific methods and attributes. Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 4 | 8 |
76,130,356 | 2023-4-28 | https://stackoverflow.com/questions/76130356/which-http-method-to-use-when-no-data-is-transfered-just-to-execute-code-in-an | I build an Flask API endpoint which when called will execute an action to process some data (the data are not send but come from the database). The result of the method(action) called is to update some fields in some database objects based on calculations performed on the objects themselves. Initially from what I found most people use POST requests, but post request is mainly designed for sending data in order to create a resource and in these cases no such thing takes place. I looked at all the possible HTTP request methods and no one fits the bill of just asynchronously executing some code in the server without sending and returning any data. Am I missing something, should I simply use POST or is there a better alternative? | PATCH seems to be the correct method for your use case since you're updating some fields of a resource, although POST should also be fine. You can use 204 status code (which means api was successfully executed but there's no content to send) if you're updating the fields before api return. Refer this: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204 But if you're updating the fields async after api return, you should send 202 status code (which means request was accepted but not yet processed): https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202 | 3 | 1 |
76,126,686 | 2023-4-28 | https://stackoverflow.com/questions/76126686/appending-a-single-row-to-a-dataframe-in-pandas-2-0 | I have a DF as below. I have added a new column which has Total of all the rows and a new row which has total of all columns: A B C D Total ------------------- 1 2 3 4 10 5 6 7 8 26 6 8 10 12 36 Now I need to add one more row for which the first element will be NaN and rest will be a column subtracted from the previous column in the Total row. A B C D Total 1 2 3 4 10 5 6 7 8 26 6 8 10 12 36 NaN 2 2 2 24 <--- new row Thanks | This would be one of the rare use cases for df.append, but in lieu of that you can extract the last row with iloc[-1] and diff the individual values, then combine this back with the original. Option 1 One method of doing this concatenation would be using pd.concat df2 = pd.concat([df, df.iloc[-1].diff().to_frame().T]) print (df2) A B C D Total 0 1.0 2.0 3.0 4.0 10.0 1 5.0 6.0 7.0 8.0 26.0 2 6.0 8.0 10.0 12.0 36.0 2 NaN 2.0 2.0 2.0 24.0 Where, df.iloc[-1].diff().to_frame().T # dataframe with 1 row A B C D Total 2 NaN 2.0 2.0 2.0 24.0 Option 2 An alternative using inplace assignment with loc: df.loc[len(df.index)] = df.iloc[-1].diff() print (df) A B C D Total 0 1.0 2.0 3.0 4.0 10.0 1 5.0 6.0 7.0 8.0 26.0 2 6.0 8.0 10.0 12.0 36.0 3 NaN 2.0 2.0 2.0 24.0 Where, df.iloc[-1].diff() # series A NaN B 2.0 C 2.0 D 2.0 Total 24.0 Name: 2, dtype: float64 Option 3 Here's an option that has a bit of fun with dictionaries and pd.DataFrame: pd.DataFrame([*df.to_dict('records'), df.iloc[-1].diff().to_dict()]) A B C D Total 0 1.00 2.00 3.00 4.00 10.00 1 5.00 6.00 7.00 8.00 26.00 2 6.00 8.00 10.00 12.00 36.00 3 NaN 2.00 2.00 2.00 24.00 Option 4 [deprecated] On older versions (pandas <= 1.4) I would have recommended using append like this: df2 = df.append(df.iloc[-1].diff(), ignore_index=True) | 5 | 5 |
76,124,814 | 2023-4-27 | https://stackoverflow.com/questions/76124814/how-to-take-the-cumulative-maximum-of-a-column-based-on-another-column | I have a DataFrame like this: import pandas as pd import numpy as np df = pd.DataFrame({ "realization_id": np.repeat([0, 1], 6), "sample_size": np.tile([0, 1, 2], 4), "num_obs": np.tile(np.repeat([25, 100], 3), 2), "accuracy": [0.8, 0.7, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7], "prob": [0.94, 0.96, 0.95, 0.98, 0.93, 0.92, 0.90, 0.92, 0.95, 0.9, 0.91, 0.92] }) df["accum_max_prob"] = df.groupby(["realization_id", "num_obs"])["prob"].cummax() And I want to know how to create a column with this output: df["desired_accuracy"] = [0.8, 0.7, 0.7, 0.6, 0.6, 0.6, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7] Each entry of desired_accuracy equals the accuracy value that corresponds to the row where the highest prob has been achieved so far by group (that is why I create accum_max_prob). So, for example: the first value is 0.8 because there is no data prior to that, but then the next one is 0.7 because the prob of the second row is greater than the first. The third value stays the same, because the third prob is lower than the second one, so it does not update desired_accuracy. For each pair (realization_id, num_obs) the criteria resets. How can I do it in a vectorized fashion using Pandas? | It looks like: df['desired_accuracy'] = df['accuracy'].mask(df['prob'].lt(df['accum_max_prob'])).ffill() Output: realization_id sample_size num_obs accuracy prob accum_max_prob desired_accuracy 0 0 0 25 0.8 0.94 0.94 0.8 1 0 1 25 0.7 0.96 0.96 0.7 2 0 2 25 0.8 0.95 0.96 0.7 3 0 0 100 0.6 0.98 0.98 0.6 4 0 1 100 0.7 0.93 0.98 0.6 5 0 2 100 0.5 0.92 0.98 0.6 6 1 0 25 0.6 0.90 0.90 0.6 7 1 1 25 0.7 0.92 0.92 0.7 8 1 2 25 0.8 0.95 0.95 0.8 9 1 0 100 0.7 0.90 0.90 0.7 10 1 1 100 0.9 0.91 0.91 0.9 11 1 2 100 0.7 0.92 0.92 0.7 | 2 | 4 |
76,124,270 | 2023-4-27 | https://stackoverflow.com/questions/76124270/classes-as-dictionary-keys-in-python | So, I've done a lot of research on this and saw various other links and conferred with the Python documentation, however, I'm still a bit unclear on this. Maybe the way I see classes in Python is a bit wrong. As I understand, keys in a dictionary must be immutable. However, classes can be keys in a dictionary because of their default hash implementation (I think?). Why is this the case since classes are mutable? For example, class C: def __init__(self): self.val = 15 self.array = [] c = C() D = {c: 15} c.val = 14 c.array.append(15) print(D[c]) Why is this okay? | Instances of your C class are actually hashable, it comes with a default implementation of __hash__ which pertains to the identity of that object: >>> hash(c) # calls c.__hash__() 306169194 This __hash__ implementation allows your instance to be used as a key in a dictionary. This explains "Why doesn't changing things in your class change the hash?" — because the identity/reference of the instance doesn't change even if its contents do. On older versions of python, this used to be exactly equal to the object's id, but from python 3 onwards, it seems to be some derivative of it. This post goes into the gory details on the hash implementation. Now let's say you wanted to prevent instances of your class from being used as a key... you could then do this (from the documentation): If a class that does not override __eq__() wishes to suppress hash support, it should include __hash__ = None in the class definition. class C: def __init__(self): self.val = 15 self.array = [] __hash__ = None # suppressing hash support c = C() And now you get a familiar sounding TypeError if you attempt to retrieve the hash of c: >>> hash(c) # TypeError: unhashable type: 'C' Naturally, this also implies you cannot use c as a dictionary key anymore (IOW trying to initialize {c: 15} would also throw the same TypeError since c is not hashable). | 3 | 5 |
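A small follow-up illustration of what identity-based hashing implies in practice, using a stripped-down class rather than the exact one from the question:

```python
class C:
    def __init__(self):
        self.val = 15

a, b = C(), C()
D = {a: "first"}

print(a == b)              # False: the default __eq__ compares identity
print(hash(a) == hash(b))  # almost certainly False: the hashes derive from identity
print(a in D)              # True
a.val = 99
print(a in D)              # still True: mutating contents doesn't change the hash
print(b in D)              # False: equal-looking contents don't make it the same key
```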
76,123,245 | 2023-4-27 | https://stackoverflow.com/questions/76123245/why-is-set-remove-so-slow-here | (Extracted from another question.) Removing this set's 200,000 elements one by one like this takes 30 seconds (Attempt This Online!): s = set(range(200000)) while s: for x in s: s.remove(x) break Why is that so slow? Removing a set element is supposed to be fast. | I think this is happening because you are removing the first element in the set every time. This creates a set which is increasingly empty one each iteration, so each time you create a new iterator and call __next__, it has to search further and further away. So, here is the source code for the iterator __next__ It has to find the next entry like this: while (i <= mask && (entry[i].key == NULL || entry[i].key == dummy)) i++; The the iterator __next__ works by finding the first non-empty, non-dummy value: So, say we have something like: entries = [null, 1, null, 2, null, 3, null, 4, null, 5] Then on each iteration of the while loop, you get: entries = [null, 1, null, 2, null, 3, null, 4, null, 5] entries = [null, DUMMY, null, 2, null, 3, null, 4, null, 5] entries = [null, DUMMY, null, DUMMY, null, 3, null, 4, null, 5] entries = [null, DUMMY, null, DUMMY, null, DUMMY, null, 4, null, 5] So each time, the iterator has to search further and further away from the beginning of the entires, since each iteration of the while loop removes the first one. Hence, the quadratic time behavior. | 4 | 7 |
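If you want to see the quadratic behaviour empirically rather than from the C source, here is a quick, machine-dependent timing sketch built on the question's own loop; doubling the set size should roughly quadruple the runtime:

```python
import time

for n in (50_000, 100_000, 200_000):
    s = set(range(n))
    start = time.perf_counter()
    while s:
        for x in s:
            s.remove(x)  # always removes the "first" remaining entry
            break
    print(f"{n:>7} elements: {time.perf_counter() - start:.2f}s")
```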
76,119,007 | 2023-4-27 | https://stackoverflow.com/questions/76119007/efficient-way-to-handle-group-of-ids-in-pandas | df = pd.DataFrame({ 'caseid': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'timestamp': [10, 20, 30, 10, 20, 30, 10, 20, 30] 'var1': [np.nan, np.nan, np.nan, 10, np.nan, 11, 12, 13, 14], 'var2': [2., 3., 4., np.nan, 5., 6., np.nan, np.nan, np.nan] }) I need to find the first (and last) valid timestamp for each variable per caseid. I.e. For var1, caseid 1 it would be None, for caseid 2 it would be 10 (last 30). And the same for each additional var column. Is there handle groups of ids without looping over caseid and doing a first_valid_index() on each column, since loops not most efficient when using pandas? | You can select the desired columns with filter (or manually), then replace the non-NA values with the timestamp (using mul and where), finally use a groupby.agg with first/last: m = df.filter(like='var').notna() out = (m.mul(df['timestamp'], axis=0).where(m) .groupby(df['caseid']).agg(['first', 'last']) ) Output: var1 var2 first last first last caseid 1 NaN NaN 10.0 30.0 2 10.0 30.0 20.0 30.0 3 10.0 30.0 NaN NaN Intermediate: m.mul(df['timestamp'], axis=0).where(m) var1 var2 0 NaN 10.0 1 NaN 20.0 2 NaN 30.0 3 10.0 NaN 4 NaN 20.0 5 30.0 30.0 6 10.0 NaN 7 20.0 NaN 8 30.0 NaN | 2 | 6 |
76,113,063 | 2023-4-26 | https://stackoverflow.com/questions/76113063/how-do-i-get-a-list-faster-with-an-index-relationship-between-two-large-size-lis | The question is given the following two lists. import numpy as np import random as rnd num = 100000 a = list(range(num)) b = [rnd.randint(0, num) for r in range(num)] Between two lists with a huge size(assuming that the reference list is a), a list indicating where the same element is located in the relative array(b) was created using the list(atob) comprehension method. atob = [np.abs(np.subtract(b, i)).argmin() for i in a] print(f"index a to b: {atob}") It doesn't take long when the list size is small. However, I realized that the process of obtaining the list atob is quite time consuming. Is there a way to get list atob faster? Or is there currently no way? (Edited after answering. The purpose of this revision is for future readers.) Many thanks for the replies! Code analysis was conducted based on the answers. check the output The comparison of results was performed with num = 20. import numpy as np import random as rnd import time # set lists num = 20 a = list(range(num)) # b = [rnd.randint(0, num) for r in range(num)] # Duplicate numbers occur among the elements in the list b = rnd.sample(range(0, num), num) print(f"list a: {a}") print(f"list b: {b}\n") # set array as same as lists arr_a = np.array(range(num)) arr_b = np.array(rnd.sample(range(0, num), num)) # --------------------------------------------------------- # # existing method ck_time = time.time() atob = [np.abs(np.subtract(b, i)).argmin() for i in a] print(f"index a to b (existed): {atob}, type: {type(atob)}") print(f"running time (existed): {time.time() - ck_time}\n") ck_time = time.time() # dankal444 method dk = {val: idx for idx, val in enumerate(b)} atob_dk = [dk.get(n) for n in a] # same as atob_dk = [d.get(n) for n in range(num)] print(f"index a to b (dankal): {atob_dk}, type: {type(atob_dk)}") print(f"running time (dankal): {time.time() - ck_time}") print(f"equal to exist method: {np.array_equal(atob, atob_dk)}\n") ck_time = time.time() # smp55 method comb = np.array([a, b]).transpose() atob_smp = comb[comb[:, 1].argsort()][:, 0] print(f"index a to b (smp55): {atob_smp}, type: {type(atob_smp)}") print(f"running time (smp55): {time.time() - ck_time}") print(f"equal to exist method: {np.array_equal(atob, atob_smp)}\n") ck_time = time.time() # Roman method from numba import jit @jit(nopython=True) def find_argmin(_a, _b): out = np.empty_like(_a) # allocating result array for i in range(_a.shape[0]): out[i] = np.abs(np.subtract(_b, _a[i])).argmin() return out atob_rom = find_argmin(arr_a, arr_b) print(f"index a to b (Roman): {atob_rom}, type: {type(atob_rom)}") print(f"running time (Roman): {time.time() - ck_time}") print(f"equal to exist method: {np.array_equal(atob, atob_rom)}\n") ck_time = time.time() # Alain method from bisect import bisect_left ub = {n:-i for i,n in enumerate(reversed(b),1-len(b))} # unique first pos sb = sorted(ub.items()) # sorted to bisect ib = (bisect_left(sb,(n,0)) for n in a) # index of >= val rb = ((sb[i-1],sb[min(i,len(sb)-1)]) for i in ib) # low and high pairs atob_ala = [ i if (abs(lo-n),i)<(abs(hi-n),j) else j # closest index for ((lo,i),(hi,j)),n in zip(rb,a) ] print(f"index a to b (Alain): {atob_ala}, type: {type(atob_ala)}") print(f"running time (Alain): {time.time() - ck_time}") print(f"equal to exist method: {np.array_equal(atob, atob_ala)}\n") ck_time = time.time() # ken method b_sorted, b_sort_indices = np.unique(b, 
return_index=True) def find_nearest(value): """Finds the nearest value from b.""" right_index = np.searchsorted(b_sorted[:-1], value) left_index = max(0, right_index - 1) right_delta = b_sorted[right_index] - value left_delta = value - b_sorted[left_index] if right_delta == left_delta: # This is only necessary to replicate the behavior of your original code. return min(b_sort_indices[left_index], b_sort_indices[right_index]) elif left_delta < right_delta: return b_sort_indices[left_index] else: return b_sort_indices[right_index] atob_ken = [find_nearest(ai) for ai in a] print(f"index a to b (ken): {atob_ken}, type: {type(atob_ken)}") print(f"running time (ken): {time.time() - ck_time}") print(f"equal to exist method: {np.array_equal(atob, atob_ken)}\n") ck_time = time.time() Above code result is list a: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] list b: [9, 12, 0, 2, 3, 15, 4, 16, 13, 6, 7, 18, 14, 10, 1, 8, 5, 17, 11, 19] index a to b (existed): [2, 14, 3, 4, 6, 16, 9, 10, 15, 0, 13, 18, 1, 8, 12, 5, 7, 17, 11, 19], type: <class 'list'> running time (existed): 0.00024008750915527344 index a to b (dankal): [2, 14, 3, 4, 6, 16, 9, 10, 15, 0, 13, 18, 1, 8, 12, 5, 7, 17, 11, 19], type: <class 'list'> running time (dankal): 1.5497207641601562e-05 equal to exist method: True index a to b (smp55): [ 2 14 3 4 6 16 9 10 15 0 13 18 1 8 12 5 7 17 11 19], type: <class 'numpy.ndarray'> running time (smp55): 0.00020551681518554688 equal to exist method: True index a to b (Roman): [17 11 1 6 16 14 9 4 8 3 5 12 7 2 19 15 18 13 0 10], type: <class 'numpy.ndarray'> running time (Roman): 0.5710980892181396 equal to exist method: False index a to b (Alain): [2, 14, 3, 4, 6, 16, 9, 10, 15, 0, 13, 18, 1, 8, 12, 5, 7, 17, 11, 19], type: <class 'list'> running time (Alain): 3.552436828613281e-05 equal to exist method: True index a to b (ken): [2, 14, 3, 4, 6, 16, 9, 10, 15, 0, 13, 18, 1, 8, 12, 5, 7, 17, 11, 19], type: <class 'list'> running time (ken): 0.00011754035949707031 equal to exist method: True Running time increasing list size If I run the code with num = 1000000 running time (dankal): 0.45094847679138184 running time (smp55): 0.36011743545532227 running time (Alain): 2.178112030029297 running time (ken): 2.663684368133545 (With Roman's method, it was difficult to check the time when the size was increased.) The memory point of view also needs to be checked, but first of all, the @smp55 method is the fastest way to obtain a list based on the required time in replies.(I'm sure there are other good ways.) Once again, thank you all for your attention and replies!!! (Subsequent replies and comments are also welcome. If anyone has a good idea, it would be nice to share!) | I can give a specific answer that is very fast based on the fact that your first list is simply an index. If you combine them in a 2D array, you can then sort by the second list, which will put the first list (indices of the second list) in the order of the result you want: import numpy as np import random as rnd num = 100000 a = list(range(num)) b = [rnd.randint(0, num) for r in range(num)] comb = np.array([a, b]).transpose() atob = comb[comb[:, 1].argsort()][:,0] Takes ~0.08 seconds. Now, the first item in atob is the index in b where the first item in a appears. Second item in atob is the index in b of the second item of a, etc. | 5 | 3 |
76,110,009 | 2023-4-26 | https://stackoverflow.com/questions/76110009/getitem-only-gets-called-if-iter-is-defined | I am subclassing a dict and would love some help understanding the behavior below (please) [Python version: 3.11.3]: class Xdict(dict): def __init__(self, d): super().__init__(d) self._x = {k: f"x{v}" for k, v in d.items()} def __getitem__(self, key): print("in __getitem__") return self._x[key] def __str__(self): return str(self._x) def __iter__(self): print("in __iter__") d = Xdict({"a": 1, "b": 2}) print(d) print(dict(d)) Produces this output: {'a': 'x1', 'b': 'x2'} in __getitem__ in __getitem__ {'a': 'x1', 'b': 'x2'} If I comment out the __iter__ method the output changes like so: {'a': 'x1', 'b': 'x2'} {'a': 1, 'b': 2} Obviously the __iter__ method is not getting called, however its presence is affecting the behaviour. I am just interested in why this happens. I am not looking for alternative solutions to prevent it. Thanks, Paul. | Python's internals often directly invoke the C-level implementations of built-in class functionality, even in cases where a subclass may have overridden that functionality, leading to a lot of weird bugs where method overrides aren't invoked where you'd expect them to be. This is the case for a lot of the dict implementation, but when dicts became order-preserving in Python 3.6, one of those bugs hit the standard library: when x is an OrderedDict, dict(x) would copy the underlying dict implementation's order, instead of the OrderedDict order (which is tracked separately). To fix this bug, they added a check to dict_merge, in the code where it decides whether to use the fast path: if (PyDict_Check(b) && (Py_TYPE(b)->tp_iter == (getiterfunc)dict_iter)) { dict_merge is the underlying routine responsible for copying the contents of another mapping into a dict. Previously, this line just said if (PyDict_Check(b)) {, which would use the fast path if the other mapping was any dict instance. Now, it also checks that the instance doesn't have an overridden __iter__. If the instance has an overridden __iter__, dict_merge will use the slow path, hence the difference you saw. However, the slow path doesn't actually use __iter__. It uses keys, which is why your code worked even though your __iter__ doesn't return an iterator. | 2 | 6 |
76,109,578 | 2023-4-26 | https://stackoverflow.com/questions/76109578/searching-a-large-dataframe-with-a-multiindex-slow | I have a large Pandas DataFrame (~800M rows), which I have indexed on a MultiIndex with two indices, an int and a date. I want to retrieve a subset of the DataFrame's rows based on a list of ints (about 10k) that I have. The ints match the first index of the multi-index. The multi-index is unique. The first thing I tried is to sort the index and then query it using loc: df = get_my_df() # 800M rows ids = [...] # 10k ints, sorted list df.set_index(["int_idx", "date_idx"], inplace=True, drop=False) df.sort_index(inplace=True) idx = pd.IndexSlice res = df.loc[idx[ids, :]] However this is painfully slow, and I stopped running the code after about an hour. Next thing I tried was to set only the first one as index. This is suboptimal for me because the index is not unique, and also later I'll need to to further filter by date: df.set_index("int_idx", inplace=True, drop=False) df.sort_index(inplace=True) idx = pd.IndexSlice res = df.loc[idx[ids, :]] To my surprise this was an improvement, but still very slow. I have two questions: How can I make my query faster? (Either using single index or multi-index) Why is a sorted multi-index still so slow? | It can be difficult to retrieve a subset of a DataFrame containing 800M rows. Here are some ideas to help your search go more quickly: Use .loc() with boolean indexing instead of pd.IndexSlice: Use boolean indexing with.loc() instead of pd.IndexSlice to slice your multi-index instead. This can assist Pandas in avoiding the costly practise of establishing a new index object for each slice when working with huge DataFrames. For example: res = df.loc[df.index.get_level_values('int_idx').isin(ids)] Avoid setting the index multiple times: It can be costly to set the index and sort the data numerous times. Try to just set the index once if you can, but try to avoid sorting it. For example: df.set_index(["int_idx", "date_idx"], inplace=True, drop=False) res = df[df.index.get_level_values('int_idx').isin(ids)] Use chunking or parallel processing: You might want to think about dividing your DataFrame into smaller parts, processing each one separately, and then concatenating the results if it is too big to store in memory. To speed up the query, you might also use parallel processing. Both of these tactics work well with the Dask library. In response to your second query, a sorted multi-index ought should be quicker than an unsorted one because it enables Pandas to utilise the quick search methods built into NumPy. However, if a huge DataFrame has numerous columns or the sorting order is complicated, sorting the data can be expensive. Generally speaking, sorting a DataFrame is an expensive process that should be avoided wherever possible. | 3 | 3 |
76,108,171 | 2023-4-26 | https://stackoverflow.com/questions/76108171/why-does-sqlalchemy-recommend-using-built-in-id-as-column-name | Using reserved keywords or built-in functions as variable/attribute names is commonly seen as bad practice. However, the SQLALchemy tutorial is full of exampled with attributes named id. Straight from the tutorial >>> class User(Base): ... __tablename__ = "user_account" ... ... id: Mapped[int] = mapped_column(primary_key=True) ... name: Mapped[str] = mapped_column(String(30)) ... fullname: Mapped[Optional[str]] ... ... addresses: Mapped[List["Address"]] = relationship( ... back_populates="user", cascade="all, delete-orphan" ... ) ... ... def __repr__(self) -> str: ... return f"User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})" Why is it not recommended to use id_ instead, as recommended at least for keywords in PEP 8? | Firstly, id is not a keyword; if it was one, you would get a syntax error (try replacing id there with e.g. pass). It is a global identifier for a built-in function. And while it is a bad practice to clobber over the default identifiers defined by Python, note that id: Mapped[int] = mapped_column(primary_key=True) inside a class definition defines a static attribute of that class, not a global variable. The global function id is not the same as the static attribute User.id (or self.id, as it is used later) — there is no conflict there. class Foo: id = 42 print(Foo.id) # => 42 print(id) # => <built-in function id> However, as you correctly note: id = "YOU SHOULDN'T DO THIS" print(id) # => YOU SHOULDN'T DO THIS # (and not <built-in function id>) Secondly, as to why id is used: SQLAlchemy maps classes to SQL tables with the snakecased class name, and attributes to SQL columns with the as-is attribute name (unless you specifically say id_: mapped_column('id', primary_key=True)), so id and id_ would be different columns in SQL. id (as opposed to id_) is the conventional name of the primary key column in SQL (if there is a single-column primary key). Some people would name the column user_id, but many dislike having user_id in table called users, since it should be obvious what users.id, and so users.user_id is redundant; the form user_id is generally reserved for foreign keys. The SQLAlchemy examples merely follow this SQL convention. | 4 | 6 |
76,105,218 | 2023-4-25 | https://stackoverflow.com/questions/76105218/why-does-tkinter-or-turtle-seem-to-be-missing-or-broken-shouldnt-it-be-part | I have seen many different things go wrong when trying to use the Tkinter standard library package, or its related functionality (turtle graphics using turtle and the built-in IDLE IDE), or with third-party libraries that have this as a dependency (e.g. displaying graphical windows with Matplotlib). It seems that even when there isn't a problem caused by shadowing the name of the standard library modules (this is a common problem for beginners trying to follow a tutorial and use turtle graphics - example 1; example 2; example 3; example 4), it commonly happens that the standard library Tkinter just doesn't work. This is a big problem because, again, a lot of beginners try to follow tutorials that use turtle graphics and blindly assume that the turtle standard library will be present. The error might be reported: As ModuleNotFoundError: No module named 'tkinter'; or an ImportError with the same message; or with different casing (I am aware that the name changed from Tkinter in 2.x to tkinter in 3.x; that is a different problem). Similarly, but referring to an internal _tkinter module, and displaying code with a comment that says "If this fails your Python may not be configured for Tk"; or with a custom error message that says "please install the python-tk package" or similar. As "No module named turtle" when trying to use turtle specifically, or one of the above errors. When trying to display a plot using Matplotlib; commonly, this will happen after trying to change the backend, which was set by default to avoid trying to use Tkinter. Why do problems like this occur, when Tkinter is documented as being part of the standard library? How can I add or repair the missing standard library functionality? Are there any special concerns for specific Python environments? See also: "UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure." when plotting figure with pyplot on Pycharm . It is possible to use other GUI backends with Matplotlib to display graphs; but if the TkAgg backend does not work, that is because of a missing or faulty Tkinter install. In Python 3.x, the name of the Tkinter standard library module was corrected from Tkinter to tkinter (i.e., all lowercase) in order to maintain consistent naming conventions. Please use Difference between tkinter and Tkinter to close duplicate questions caused by trying to use the old name in 3.x (or the new name in 2.x). This question is about cases where Tkinter is not actually available. If it isn't clear which case applies, please either offer both duplicate links, or close the question as "needs debugging details". | WARNING: Do not use pip to try to solve the problem The Pip package manager cannot help to solve the problem. No part of the Python standard library - including tkinter, turtle etc. - can be installed from PyPI. For security reasons, PyPI now blocks packages using names that match the standard library. There are many packages on PyPI that may look suitable, but are not. Most are simply wrappers that try to add a little functionality to the standard library Tkinter. Historically, one especially problematic package was turtle. Since 2017, it has been policy to block packages with names that match the standard library; but turtle was published in 2009 and never maintained. 
It was Python 2.x specific code that would break during installation on Python 3, that had nothing whatsoever to do with turtle graphics. Shortly before the original draft of this Q&A, I put in a request to get the turtle package delisted from PyPI. As of November 15, 2024 (I somehow didn't get a notification), the turtle package has been removed from PyPI and the name is blocked following current policy. Why some Python installations don't include Tkinter components There are several reasons why Tkinter might be missing, depending on the platform (although generally, the motivation is probably just to save space). When Python is installed on Windows using the official installer, there is an option to include or exclude Tcl/Tk support. Python installations that come pre-installed with Linux may exclude Tkinter, or various components, according to the distro maintainer's policy. For example, the Python that came with my copy of Linux included the turtle standard library, but not the underlying Tkinter package: >>> import turtle Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.8/turtle.py", line 107, in <module> import tkinter as TK ModuleNotFoundError: No module named 'tkinter' Other builds might not include the turtle module either. Python built from source might be missing Tkinter support because of deliberately chosen configuration options, or because dependencies were missing before starting compilation. Note that virtual environments will generally have the same Tkinter support as the Python installation they're based upon. However, adding Tkinter support to the base might not update the virtual environment. In this case, it will be necessary to re-create the virtual environment from scratch. It is not possible to add or remove Tkinter support for an individual virtual environment. This is because a virtual environment only differs from its base in terms of the site packages, and there is no site package for Tkinter (since, again, it is a standard library component that cannot be obtained using Pip). How to add Tkinter support, depending on the environment See also: Guide in the official Tk documentation: https://tkdocs.com/tutorial/install.html Some previous (now duplicate) versions of the question with some useful answers: Install tkinter for Python Tkinter: "Python may not be configured for Tk" ImportError: No module named 'Tkinter' Windows For Python installed using the official installer from python.org, use the operating system features to choose to "repair" the installation (or, if acceptable, uninstall and reinstall Python). This time, make sure to check the option to install the "tcl/tk and IDLE" optional feature. Some legacy setups may have had issues with conflicts between 32- and 64-bit versions of Python and Tcl/Tk. This should not cause problems on new setups. For the embeddable zip package, see Python embeddable zip: install Tkinter . Linux: Python that came with the Linux distribution If the Python that came with your Linux distribution doesn't include Tkinter, consider leaving that alone and installing a separate version of Python - just on general principle. However, in general, Tkinter support can be added to the system Python using the system package manager (not Pip). It will typically be necessary to use sudo (not included in examples here) to make such changes to the system Python. 
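A quick way to confirm whether a given interpreter already has working Tkinter support is simply to try importing it; the following check uses only the standard library and assumes nothing distro-specific:

```python
# Minimal check for Tkinter support in the current interpreter.
try:
    import tkinter     # the high-level package
    import _tkinter    # the C binding to Tcl/Tk
    print("Tkinter OK, Tk version:", tkinter.TkVersion)
except ImportError as exc:
    print("Tkinter support is missing:", exc)
```

Run it before and after installing the relevant package for your distribution: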
On Ubuntu and Debian based systems (including Pop!_OS, Ubuntu-based Mint): use apt-get install python3-tk, assuming the system Python is a 3.x version. For 2.x legacy systems, use apt-get install python-tk instead. In some cases it may be necessary to specify a minor version, like apt-get install python3.11-tk. In some cases, a custom exception message may say to install python-tk even though python3-tk should actually be installed instead. For Fedora, use dnf install python3-tkinter, as described by d-coder here. For Arch, use pacman -S tk, as described by Jabba here. For RHEL, use yum install python3-tkinter, as described by amzy-0 here. Linux: Python built from source The above packages can only add Tkinter support to the system python (the one installed in /usr/bin, which is used by the operating system to run essential scripts). They cannot add Tkinter support to a separate Python built from source. This is because, in addition to the actual Tcl/Tk library, using Tkinter in Python requires a per-installation "binding" library (referred to as _tkinter in the Python source code). System packages will not add this library to other Python installations. Therefore, install a development Tk package first (e.g. apt-get install tk-dev) and try rebuilding. See also: Unable to install tkinter with pyenv Pythons on MacOS (using pyenv) Install tkinter and python locally (in case sudo privileges are not available) Building Python and more on missing modules (more generally about rebuilding Python from source and filling in development dependencies) Brew (typically MacOS) Use brew install python-tk; if necessary, specify a Python version like brew install [email protected]. For non-system installations, it may be necessary to re-install and specify that Tkinter support should be included, like brew install python --with-tcl-tk. See also: Why does Python installed via Homebrew not include Tkinter Headless environments It's generally not possible to install Tkinter - or any other GUI toolkit - for headless server environments like PythonAnywhere or Amazon Linux EC2. The code will run on a remote server, so there is no monitor to display the GUI; while it would be possible in principle for the code to send commands back to the client that the client could then use to create a GUI, in general the server will have no knowledge of the client's environment. Making this work would require setting up some communication protocol ahead of time (such as X11). Virtual environments First, fix the installation that the virtual environment is based upon. If this doesn't resolve the problem, re-create the virtual environment (and reinstall everything that was installed in the old virtual environment). Unfortunately, there is not a clean way around this. It might be possible to patch around the problem by changing a bunch of symlinks, but this is not supported. If it is not possible to fix the base installation (for example, due to not having sudo rights on the system), consider installing a separate Python (for example, by compiling from source), ensuring that it is installed with Tkinter support, and creating virtual environments from that Python. Additional hints: TKinter in a Virtualenv Tkinter components Some users will find it useful to understand exactly what the Tkinter system contains. There are several components: The underlying Tcl/Tk library, written in C. Some systems may independently have a Tcl/Tk installation that is unusable from Python by default. 
The _tkinter implementation module, which is also written in C and which interfaces between Python and Tcl/Tk ("tkinter" means "Tk interface"). This is an implementation detail, and should not be imported directly in Python user code. (the C code for this dates all the way back to 1994!) The tkinter package itself, which provides wrappers for the lower-level _tkinter interface, as well as ttk (a separate interface for newer "themed" widgets). Higher-level components, such as IDLE and turtle. Any given installation could theoretically be missing any or all of these components. For system installations of Python on Linux and MacOS, the distro maintainer is responsible for making sure that the appropriate package (python3-tk or similar) installs whichever parts are missing by default, to the appropriate places. As explained to me on GitHub by Terry Jan Reedy: when the Windows installer is told to install Tcl/Tk, it will install a separate copy of that library (and the corresponding _tkinter and tkinter etc.) for that Python installation. On Linux, Tcl/Tk will normally come with Linux; packages like python3-tk will add a _tkinter that uses the system Tcl/Tk, and a tkinter package (which will naturally find and use the _tkinter implementation, using the normal import mechanism). Since the Tcl/Tk installation is thus "vendored" for Windows, the Tcl/Tk version will depend on the Python version. On Linux it will depend on the system, and it should be possible to use the system package manager to upgrade Tcl/Tk independently of Python. Note in particular that newer versions of Python might not be able to work with an outdated system Tcl/Tk. (To check the Tcl/Tk version for a working installation, see How to determine what version of python3 tkinter is installed on my linux machine? .) | 34 | 45 |
76,066,812 | 2023-4-20 | https://stackoverflow.com/questions/76066812/how-to-combine-columns-in-polars-horizontally | I'm trying to combine multiple columns into 1 column with python Polars. However I don't seem to find an (elegant) way to combine columns into a list. I only need to combine column b - e into 1 column. Column a needs to stay exactly as it is now. I've tried using map_elements to achieve this. Despite the fact that this isn't working its also slow and must likely not the best way to do this. Anyone who can help me out with how I can achieve this result? The dataframe I have: df = pl.from_repr(""" ┌─────┬─────┬─────┬─────┬─────┐ │ a ┆ b ┆ c ┆ d ┆ e │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞═════╪═════╪═════╪═════╪═════╡ │ 0.1 ┆ 1.1 ┆ 2.1 ┆ 3.1 ┆ 4.1 │ │ 0.2 ┆ 1.2 ┆ 2.2 ┆ 3.2 ┆ 4.2 │ │ 0.3 ┆ 1.3 ┆ 2.3 ┆ 3.3 ┆ 4.3 │ └─────┴─────┴─────┴─────┴─────┘ """) The result I need: shape: (3, 2) ┌─────┬──────────────────────┐ │ a ┆ value │ │ --- ┆ --- │ │ f64 ┆ list[f64] │ ╞═════╪══════════════════════╡ │ 0.1 ┆ [1.1, 2.1, 3.1, 4.1] │ │ 0.2 ┆ [1.2, 2.2, 3.2, 4.2] │ │ 0.3 ┆ [1.3, 2.3, 3.3, 4.3] │ └─────┴──────────────────────┘ | Use concat_list. pl.Config(fmt_table_cell_list_len=10, fmt_str_lengths=80) # increase repr defaults df.select('a', value=pl.concat_list(pl.exclude('a'))) shape: (3, 2) ┌─────┬──────────────────────┐ │ a ┆ value │ │ --- ┆ --- │ │ f64 ┆ list[f64] │ ╞═════╪══════════════════════╡ │ 0.1 ┆ [1.1, 2.1, 3.1, 4.1] │ │ 0.2 ┆ [1.2, 2.2, 3.2, 4.2] │ │ 0.3 ┆ [1.3, 2.3, 3.3, 4.3] │ └─────┴──────────────────────┘ | 3 | 8 |
76,052,153 | 2023-4-19 | https://stackoverflow.com/questions/76052153/how-to-fill-a-column-with-random-values-in-polars | I would like to know how to fill a column of a polars dataframe with random values. The idea is that I have a dataframe with a given number of columns, and I want to add a column to this dataframe which is filled with different random values (obtained from a random.random() function for example). This is what I tried for now: df = df.with_columns( pl.when((pl.col('Q') > 0)).then(random.random()).otherwise(pl.lit(1)).alias('Prob') ) With this method, the result that I obtain is a column filled with one random value i.e. all the rows have the same value. Is there a way to fill the column with different random values ? Thanks by advance. | You need a "column" of random numbers the same height as your dataframe? np.random.rand could be useful for this: df = pl.DataFrame({"foo": [1, 2, 3]}) df.with_columns(pl.lit(np.random.rand(df.height)).alias("prob")) shape: (3, 2) ┌─────┬──────────┐ │ foo ┆ prob │ │ --- ┆ --- │ │ i64 ┆ f64 │ ╞═════╪══════════╡ │ 1 ┆ 0.657389 │ │ 2 ┆ 0.616265 │ │ 3 ┆ 0.142611 │ └─────┴──────────┘ df.with_columns( pl.when(pl.col("foo") > 2).then(pl.lit(np.random.rand(df.height))) .alias("prob") ) shape: (3, 2) ┌─────┬──────────┐ │ foo ┆ prob │ │ --- ┆ --- │ │ i64 ┆ f64 │ ╞═════╪══════════╡ │ 1 ┆ null │ │ 2 ┆ null │ │ 3 ┆ 0.686551 │ └─────┴──────────┘ It may also be possible to do similar with expressions? e.g. with .int_range() and .sample() df.with_columns( (pl.int_range(1000).sample(pl.len(), with_replacement=True) / 1000) .alias("prob") ) shape: (3, 2) ┌─────┬───────┐ │ foo ┆ prob │ │ --- ┆ --- │ │ i64 ┆ f64 │ ╞═════╪═══════╡ │ 1 ┆ 0.288 │ │ 2 ┆ 0.962 │ │ 3 ┆ 0.734 │ └─────┴───────┘ | 7 | 8 |
76,083,485 | 2023-4-23 | https://stackoverflow.com/questions/76083485/shap-instances-that-have-more-than-one-dimension | I am very new to SHAP and I would like to give it a try, but I am having some difficulty. The model is already trained and seems to perform well. I then use the training data to test SHAP with. It looks like so: var_Braeburn var_Cripps Pink var_Dazzle var_Fuji var_Granny Smith \ 0 1 0 0 0 0 1 0 1 0 0 0 2 0 1 0 0 0 3 0 1 0 0 0 4 0 1 0 0 0 var_Other Variety var_Royal Gala (Tenroy) root_CG202 root_M793 \ 0 0 0 0 0 1 0 0 1 0 2 0 0 1 0 3 0 0 0 0 4 0 0 0 0 root_MM106 ... frt_BioRich Organic Compost_single \ 0 1 ... 0 1 0 ... 0 2 0 ... 0 3 1 ... 0 4 1 ... 0 frt_Biomin Boron_single frt_Biomin Zinc_single \ 0 0 1 1 0 0 2 0 0 3 0 0 4 0 0 frt_Fertco Brimstone90 sulphur_single frt_Fertco Guano _single \ 0 0 0 1 0 0 2 0 0 3 0 0 4 0 0 frt_Gro Mn_multiple frt_Gro Mn_single frt_Organic Mag Super_multiple \ 0 0 0 0 1 1 0 1 2 1 0 1 3 1 0 1 4 1 0 1 frt_Organic Mag Super_single frt_Other Fertiliser 0 0 0 1 0 0 2 0 0 3 0 0 4 0 0 I then do explainer = shap.Explainer(model) and shap_values = explainer(X_train) This runs without error and shap_values gives me this: .values = array([[[ 0.00775555, -0.00775555], [-0.03221035, 0.03221035], [-0.0027203 , 0.0027203 ], ..., [ 0.00259787, -0.00259787], [-0.00459262, 0.00459262], [-0.0303394 , 0.0303394 ]], [[-0.00068313, 0.00068313], [-0.03006355, 0.03006355], [-0.00245706, 0.00245706], ..., [-0.00418809, 0.00418809], [-0.00088372, 0.00088372], [-0.00030019, 0.00030019]], [[-0.00068313, 0.00068313], [-0.03006355, 0.03006355], [-0.00245706, 0.00245706], ..., [-0.00418809, 0.00418809], [-0.00088372, 0.00088372], [-0.00030019, 0.00030019]], ..., However, when I then run shap.plots.beeswarm(shap_values), I get the following error: ValueError: The beeswarm plot does not support plotting explanations with instances that have more than one dimension! What am I doing wrong here? | Try this: from shap import Explainer from shap.plots import beeswarm from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import load_breast_cancer X, y = load_breast_cancer(return_X_y=True, as_frame = True) model = RandomForestClassifier().fit(X, y) explainer = Explainer(model) sv = explainer(X) Then, because RF is a bit special, retrieve shap values just for class 1: beeswarm(sv[:,:,1]) EDIT: Somehow I came to my own post one year later after receiving: ValueError: The beeswarm plot does not support plotting explanations with instances that have more than one dimension! So, instead of stating "RF is special", I have to state: "With every model you receive this kind of error message, try reducing dimentionality like sv[:,:,class_index]" | 4 | 13 |
76,106,117 | 2023-4-25 | https://stackoverflow.com/questions/76106117/python-resolve-forwardref | I have a typing.ForwardRef object as a remnant from earlier generic programming shenanigans. At this point I know the class represented by the ForwardRef exists, but how can I retrieve this type? A fairly minimal example of what I am doing. Convoluted solution for this example, but for the actual use case it makes sense. class GenericClass(Generic[T]): def __init_subclass__(cls, /, **kwargs) -> None: # retrieve the type or forwardref of T orig_bases = [orig_base for orig_base in cls.__orig_bases__ if get_origin(orig_base) is GenericClass] assert len(orig_bases) == 1 orig_base = orig_bases[0] generic_types = get_args(orig_base) assert len(generic_types) == 1 cls.__type_t = generic_types[0] def use_type(self) -> T: thetype = resolve_forward_refs(self.__type_t) # to be implemented return thetype() class MyImpl(GenericClass["SomeType"]): pass | You can evaluate a ForwardRef by calling ForwardRef._evaluate(globals(), locals(), frozenset()) on it. This assumes that the type behind the forward ref is known in these namespaces, i.e. you have imported the type into the local or global scope of the module. if isinstance(myref, ForwardRef): myref_type = myref._evaluate(globals(), locals(), frozenset()) Alternative syntax: typing._eval_type(myref, globals(), locals()) Source: https://github.com/python/cpython/blob/v3.11.3/Lib/typing.py#L849 Edit: For Python versions <3.9, leave out the third parameter to _evaluate: if isinstance(myref, ForwardRef): myref_type = myref._evaluate(globals(), locals()) | 5 | 4 |
76,074,394 | 2023-4-21 | https://stackoverflow.com/questions/76074394/typeerror-multipolygon-object-is-not-iterable | I am trying to run the below script from plotly: https://plotly.com/python/county-choropleth/ I'm receiving the error code right out the gate: TypeError: 'MultiPolygon' object is not iterable I've looked up several posts where this is a similar issue, but I'm skeptical these are solutions for this particular issue. OPTION 2 seems like the more likely approach, but why would there be a workaround for simple coding that plotly is publishing? Seems I might be missing something in the way the code is written. OPTION 1: 'Polygon' object is not iterable- iPython Cookbook OPTION 2: Python: Iteration over Polygon in Dataframe from Shapefile to color cartopy map import plotly.figure_factory as ff import numpy as np import pandas as pd df_sample = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/minoritymajority.csv') df_sample_r = df_sample[df_sample['STNAME'] == 'Florida'] values = df_sample_r['TOT_POP'].tolist() fips = df_sample_r['FIPS'].tolist() endpts = list(np.mgrid[min(values):max(values):4j]) colorscale = ["#030512","#1d1d3b","#323268","#3d4b94","#3e6ab0", "#4989bc","#60a7c7","#85c5d3","#b7e0e4","#eafcfd"] fig = ff.create_choropleth( fips=fips, values=values, scope=['Florida'], show_state_data=True, colorscale=colorscale, binning_endpoints=endpts, round_legend_values=True, plot_bgcolor='rgb(229,229,229)', paper_bgcolor='rgb(229,229,229)', legend_title='Population by County', county_outline={'color': 'rgb(255,255,255)', 'width': 0.5}, exponent_format=True, ) fig.layout.template = None fig.show() | I had the same error message recently and came across this post. I think the key is the migration of shapely to 2.x. You need to use the ".geoms" attribute now to make multi-part geometries iterable: https://shapely.readthedocs.io/en/stable/migration.html This resolved my issue. | 6 | 1 |
76,100,975 | 2023-4-25 | https://stackoverflow.com/questions/76100975/yolov8-custom-save-directory-path | I'm currently working in a project in which I'm using Flask and Yolov8 together. When I run this code from ultralytics import YOLO model = YOLO("./yolov8n.pt") results = model.predict(source="../TEST/doggy.jpg", save=True, save_txt=True) the output will be saved in this default directory /run/detect/ like Ultralytics YOLOv8.0.9 Python-3.10.8 torch-2.0.0+cpu CPU Fusing layers... YOLOv8n summary: 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs Results saved to d:\runs\detect\predict4 1 labels saved to d:\runs\detect\predict4\labels and what I want is the predict directory number or the entire directory path in a variable. I tried capturing the path using sys.stdout methods but i want a direct solution. | You can change the directory where the results are saved (save_dir) by modifying two arguments in predict: project and name results = model.predict(source=xxx, save_txt = True, project="xxx", name="yyy") such that: save_dir=project/name | 3 | 7 |
76,070,711 | 2023-4-21 | https://stackoverflow.com/questions/76070711/how-to-make-a-window-scroll-when-the-turtle-hits-the-edge | I made this Python program that uses psutil and turtle to graph a computer's CPU usage, real time. My problem is, when the turtle hits the edge of the window it keeps going, out of view - but I want to make the window scroll right, so the turtle continues graphing the CPU usage while staying at the edge of the window. How can I get the turtle to stay in view? import turtle import psutil import time # HOW TO MAKE THE DOTS THAT WHEN YOU HOVER OVER THEM IT SHOWS THE PERCENT # HOW TO MAKE IT CONTINUE SCROLLING ONCE THE LINE HITS THE END # Set up the turtle screen = turtle.Screen() screen.setup(width=500, height=125) # Set the width to the actual width, -20% for a buffer width = screen.window_width()-(screen.window_width()/20) # Set the height to the actual height, -10% for a buffer height = screen.window_height()-(screen.window_height()/10) # Create a turtle t = turtle.Turtle() t.hideturtle() t.speed(0) t.penup() # Set x_pos to the width of the window/2 (on the left edge of the window) x_pos = -(width/2) # Set y_pos to the height of the window/2 (on the bottom of the window) y_pos = -(height/2) # Goto the bottom left corner t.goto(x_pos, y_pos) t.pendown() while True: # Get the CPU % cpu_percent = psutil.cpu_percent(interval=None) #Make the title of the Turtle screen the CPU % screen.title(f"CPU %: {cpu_percent}%") #Set y_pos as the bottom of the screen, +1% of the height of the screen for each CPU % y_pos = (-height/2)+((height/100)*cpu_percent) # Goto the point corresponding with the CPU % t.goto(x_pos, y_pos) # Make a dot t.dot(4, "Red") # Make add 5 to x_pos, so the next time it is farther to the left x_pos = x_pos+5 | You can apply a Flappy Bird design to this problem. In that game, it looks like the bird is flying forward, but the implementation is actually such that the pipes are moving from the right side of the screen to the left and the bird doesn't move at all on the x-axis. Applying this to your case, you can slowly move the dots in the graph to the left and keep the turtle at the same spot once it gets close to the right. The easiest way to implement this is with a queue data structure. The oldest CPU reading (or pipe) that falls off the left side of the screen is pruned (dequeued) and a new reading (or pipe) is added (enqueued) to the right side of the screen. Now, to achieve the illusion of movement, you can clear and redraw the whole screen on every frame. Achieving this in turtle involves disabling the internal update loop with tracer(0), then calling update() to draw a frame. You can use clear() to clear the drawings from the last frame (reset() is sometimes handy too). Finally, the while True: approach is a poor event loop that just tries to run as fast as possible with no regard for framerate. Use ontimer for a somewhat more consistent framerate. Here's a proof of concept. 
import psutil # 5.9.5 import turtle from collections import deque def tick(): cpu_percent = psutil.cpu_percent(interval=None) screen.title(f"CPU %: {cpu_percent}%") y_pos = -height / 2 + height / 100 * cpu_percent positions.append(y_pos) if len(positions) > width / step: positions.popleft() t.penup() t.clear() for i, y_pos in enumerate(positions): x_pos = width / -2 + i * step t.goto(x_pos, y_pos) t.pendown() t.dot(4, "red") turtle.update() screen.ontimer(tick, rough_fps) step = 5 rough_fps = 1000 // 30 turtle.tracer(0) screen = turtle.Screen() screen.setup(width=500, height=500) width = screen.window_width() height = screen.window_height() t = turtle.Turtle() t.hideturtle() positions = deque() screen.ontimer(tick, rough_fps) turtle.exitonclick() | 3 | 2 |
76,092,263 | 2023-4-24 | https://stackoverflow.com/questions/76092263/column-and-row-wise-logical-operations-on-polars-dataframe | In Pandas, one can perform boolean operations on boolean DataFrames with the all and any methods, providing an axis argument. For example: import pandas as pd data = dict(A=["a","b","?"], B=["d","?","f"]) pd_df = pd.DataFrame(data) For example, to get a boolean mask on columns containing the element "?": (pd_df == "?").any(axis=0) and to get a mask on rows: (pd_df == "?").any(axis=1) Also, to get a single boolean: (pd_df == "?").any().any() In comparison, in polars, the best I could come up with are the following: import polars as pl pl_df = pl.DataFrame(data) To get a mask on columns: (pl_df == "?").select(pl.all().any()) To get a mask on rows: pl_df.select( pl.concat_list(pl.all() == "?").alias("mask") ).select( pl.col("mask").list.eval(pl.element().any()).list.first() ) And to get a single boolean value: pl_df.select( pl.concat_list(pl.all() == "?").alias("mask") ).select( pl.col("mask").list.eval(pl.element().any()).list.first() )["mask"].any() The last two cases seem particularly verbose and convoluted for such a straightforward task, so I'm wondering whether there are shorter/faster equivalents? | I think one thing that might be making this more confusing is that you're not doing everything in the select context. In other words, don't do this: (pl_df == "?") The first thing we can do is pl_df.select(pl.all()=="?") shape: (3, 2) ┌───────┬───────┐ │ A ┆ B │ │ --- ┆ --- │ │ bool ┆ bool │ ╞═══════╪═══════╡ │ false ┆ false │ │ false ┆ true │ │ true ┆ false │ └───────┴───────┘ When we call pl.all() it means all of the columns. For each column we're converting its original value into a bool of whether or not it's equal to ? Now let's do this: pl_df.select((pl.all()=="?").any()) shape: (1, 2) ┌──────┬──────┐ │ A ┆ B │ │ --- ┆ --- │ │ bool ┆ bool │ ╞══════╪══════╡ │ true ┆ true │ └──────┴──────┘ This gives you the per column. All we did was add .any which tells it that if anything in the parenthesis that preceded it is true then return True. Now let's do pl_df.select(pl.any_horizontal(pl.all()=="?")) shape: (3, 1) ┌───────┐ │ any │ │ --- │ │ bool │ ╞═══════╡ │ false │ │ true │ │ true │ └───────┘ When we call pl.any_horizontal(...) then it is going to do that rowwise for whatever ... is. Lastly, if we put them together... pl_df.select(pl.any_horizontal(pl.all()=="?").any()) shape: (1, 1) ┌──────┐ │ any │ │ --- │ │ bool │ ╞══════╡ │ true │ └──────┘ then we get the single value indicating that somewhere in the dataframe is an item that is equal to "?" | 4 | 3 |
76,083,428 | 2023-4-23 | https://stackoverflow.com/questions/76083428/importerror-cannot-import-name-json-normalize-from-pandas-io-json | python 3.9.2-3 pandas 2.0.0 pandas-io 0.0.1 Error: from pandas.io.json import json_normalize ImportError: cannot import name 'json_normalize' from 'pandas.io.json' (/home/casaos/.local/lib/python3.9/site-packages/pandas/io/json/__init__.py) Apparently this was a problem early on in the pre 1x days of pandas, but seems to have resurfaced. Suggestions? I'm running a script which was functional previously, but migrating it to a new host. It errors out on the line: from pandas.io.json import json_normalize and throws the error ImportError: cannot import name 'json_normalize' from 'pandas.io.json' (/home/casaos/.local/lib/python3.9/site-packages/pandas/io/json/__init__.py) I've attempted to reinstall pandas ('install' option), remove and reinstall, and 'install --force-reinstall', all performed as root so that the base install of python3 has it installed as opposed to a single user | This was indeed the solution: simply removing the import line. I'd have liked an easy attribute to check for the installed pandas version, but I have to settle for a try/except to determine if the import is needed. pandas.io.json.json_normalize was deprecated and removed in the newest version. Use pandas.json_normalize. Also, the tutorial you were following is most probably severely outdated. You are on your own now. – Ξένη Γήινος Apr 23 at 7:48 | 10 | 10 |
76,065,035 | 2023-4-20 | https://stackoverflow.com/questions/76065035/yahoo-finance-v7-api-now-requiring-cookies-python | url = 'https://query2.finance.yahoo.com/v7/finance/quote?symbols=TSLA&fields=regularMarketPreviousClose&region=US&lang=en-US' headers = { 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36' } data = requests.get(url,headers=headers) prepost_data = data.json() It seems recently, Yahoo Finance changed their V7 API to require cookies for every request. Running the code above, I get the Invalid Crumb error {"finance":{"result":null,"error":{"code":"Unauthorized","description":"Invalid Crumb"}}} This issue seems to also be known in this Github repo: https://github.com/joshuaulrich/quantmod/issues/382 They seem to have a patch that works: https://github.com/joshuaulrich/quantmod/pull/383/commits But the code is all written in R... Anyone know how to translate this to Python? | This worked for me. Make an HTTP GET call to the URL https://fc.yahoo.com. Although this call results in a 404 error, we just need it to extract the set-cookie value from the response headers, which is then used in the subsequent calls. Now make an HTTP GET call to the URL https://query2.finance.yahoo.com/v1/test/getcrumb, including the obtained cookie from the previous response headers. This call will retrieve the crumb value. Replace [crumb-value] in the following URL and make an HTTP GET call with the cookie: https://query2.finance.yahoo.com/v7/finance/quote?symbols=TSLA&crumb=[crumb-value] Cache the cookie value and crumb value to skip the first two steps going forward | 9 | 21 |
76,073,605 | 2023-4-21 | https://stackoverflow.com/questions/76073605/add-py-typed-as-package-data-with-setuptools-in-pyproject-toml | From what I read, to make sure that the typing information of your code is distributed alongside your code for linters to read, the py.typed file should be part of your distribution. I find answers for how to add these to setup.py but it is not clear to me 1. whether it should be included in pyproject.toml (using setuptools), 2. if so, how it should be added. Scouring their github repository, it seems that this is not added automatically so the question remains how I should add it to my pyproject.toml. I found this general discussion about package_data but it includes reference to include_package_data and a MANIFEST.in and it gets confusing from there what should go where. Tl;dr: how should I include py.typed in pyproject.toml when using setuptools? | Yes, you should add py.typed to your package's source folder (same level as __init__.py). This informs type checkers, like mypy, that your package supports typing. See PEP 561. Here is an example of what is needed in pyproject.toml. Replace pkgname with the name of your package. [tool.setuptools.package-data] "pkgname" = ["py.typed"] It is worth noting that package discovery using: [tool.setuptools.packages.find] where = ["src"] seems to be required, as otherwise package-data wouldn't have any effect, as explained in the setuptools docs: https://setuptools.pypa.io/en/latest/userguide/datafiles.html#package-data | 15 | 18 |
76,072,664 | 2023-4-21 | https://stackoverflow.com/questions/76072664/convert-pyspark-dataframe-to-pandas-dataframe-fails-on-timestamp-column | I create my pyspark dataframe: from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField, BinaryType, ArrayType, StringType, TimestampType input_schema = StructType([ StructField("key", StringType()), StructField("headers", ArrayType( StructType([ StructField("key", StringType()), StructField("value", StringType()) ]) )), StructField("timestamp", TimestampType()) ]) input_data = [ ("key1", [{"key": "header1", "value": "value1"}], datetime(2023, 1, 1, 0, 0, 0)), ("key2", [{"key": "header2", "value": "value2"}], datetime(2023, 1, 1, 0, 0, 0)), ("key3", [{"key": "header3", "value": "value3"}], datetime(2023, 1, 1, 0, 0, 0)) ] df = spark.createDataFrame(input_data, input_schema) I want to use Pandas' assert_frame_equal(), so I want to convert my dataframe to a Pandas dataframe. df.toPandas() will throw TypeError: Casting to unit-less dtype 'datetime64' is not supported. Pass e.g. 'datetime64[ns]' instead. How can I successfully convert the "timestamp" column in order to not lose detail of the datetime value? I need them to remain to 2023-01-01 00:00:00 and not 2023-01-01. | I found the solution: from pyspark.sql.functions import date_format df = df.withColumn("timestamp", date_format("timestamp", "yyyy-MM-dd HH:mm:ss")).toPandas() Now I was able to use assert_frame_equal(df, test_df) successfully. It did not lose precision. | 12 | 12 |
76,087,301 | 2023-4-23 | https://stackoverflow.com/questions/76087301/how-to-change-a-text-color-of-a-code-cell-in-markdown | I need to change the text color of a code cell (namely a code cell, not just a text block) that is defined using "`" symbols in Markdown. All my attempts have failed and the color stays the same, which is black by default. I have tried to use the <font color="...">...<\font> attribute and also the <span style="color:...;">...<\span> attribute. None of them worked and the text is still the same as on the attached image. How can I solve this problem and finally change the color to whatever I want? Code cell whose text I want to change | If you want to add syntax highlighting to a programming language, it can be written like this: # For example, you want to use Python ```python print("the syntax is highlighted") ``` The result would be like this: print("the syntax is highlighted") EDIT: As far as I know, Markdown doesn't support color. So what you'll need is to wrap the code cell as HTML. Instead of this: `some beautiful text here` You would want to write this: <code>some beautiful text here</code> And then add the span tag to change the color: <code>some <span style='color:blue'>beautiful</span> text here</code> At this point, the word 'beautiful' will change to blue. It works on my VSCode Markdown viewer. But different apps will render Markdown differently. | 3 | 3 |
76,079,699 | 2023-4-22 | https://stackoverflow.com/questions/76079699/computing-a-norm-in-a-loop-slows-down-the-computation-with-dask | I was trying to implement a conjugate gradient algorithm using Dask (for didactic purposes) when I realized that the performance were way worst that a simple numpy implementation. After a few experiments, I have been able to reduce the problem to the following snippet: import numpy as np import dask.array as da from time import time def test_operator(f, test_vector, library=np): for n in (10, 20, 30): v = test_vector() start_time = time() for i in range(n): v = f(v) k = library.linalg.norm(v) try: k = k.compute() except AttributeError: pass print(k) end_time = time() print('Time for {} iterations: {}'.format(n, end_time - start_time)) print('NUMPY!') test_operator( lambda x: x + x, lambda: np.random.rand(4_000, 4_000) ) print('DASK!') test_operator( lambda x: x + x, lambda: da.from_array(np.random.rand(4_000, 4_000), chunks=(2_000, 2_000)), da ) In the code, I simply multiply by 2 a vector (this is what f does) and print its norm. When running with dask, each iteration slows down a little bit more. This problem does not happen if I do not compute k, the norm of v. Unfortunately, in my case, that k is the norm of the residual that I use to stop the conjugate gradient algorithm. How can I avoid this problem? And why does it happen? Thank you! | I think the code snippet is missusing lazy evaluation in dask, specifically the addition operation. Without optimization, the addition lambda x: x+x is complicating the execution graph, with the depth growing with counter, hence overheads. More precisely, for the counter value i we handle the graph of O(i) when computing the norm, so that the total runtime is O(n**2). Of course optimization is possible and desired, but I stop here as the example shared is synthetic. Below I demonstrate that the graph grows linearly with the counter. To see the quadratic complexity visually, consider the following cleaned version of the snippet in question import numpy as np import dask.array as da from time import time import matplotlib.pyplot as plt ns = (10, 20, 40, 50, 60) def test_operator(f, v, norm): out = [] for n in ns: start_time = time() for i in range(n): v = f(v) norm(v) end_time = time() out.append(end_time - start_time) return out out = test_operator( lambda x:x+x, np.random.rand(4_000, 4_000), norm = np.linalg.norm ) plt.scatter(ns,out,label='numpy') out = test_operator( lambda x:x+x, da.from_array(np.random.rand(4_000, 4_000), chunks=(2_000, 2_000)), norm = lambda v: da.linalg.norm(v).compute() ) plt.scatter(ns,out,label='dask') plt.legend() plt.show() | 6 | 2 |
76,099,140 | 2023-4-25 | https://stackoverflow.com/questions/76099140/hugging-face-transformers-bart-cuda-error-cublas-status-not-initialize | I'm trying to finetune the Facebook BART model, I'm following this article in order to classify text using my own dataset. And I'm using the Trainer object in order to train: training_args = TrainingArguments( output_dir=model_directory, # output directory num_train_epochs=1, # total number of training epochs - 3 per_device_train_batch_size=4, # batch size per device during training - 16 per_device_eval_batch_size=16, # batch size for evaluation - 64 warmup_steps=50, # number of warmup steps for learning rate scheduler - 500 weight_decay=0.01, # strength of weight decay logging_dir=model_logs, # directory for storing logs logging_steps=10, ) model = BartForSequenceClassification.from_pretrained("facebook/bart-large-mnli") # bart-large-mnli trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above compute_metrics=new_compute_metrics, # a function to compute the metrics train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) This is the tokenizer I used: from transformers import BartTokenizerFast tokenizer = BartTokenizerFast.from_pretrained('facebook/bart-large-mnli') But when I use trainer.train() I get the following: Printing the following: ***** Running training ***** Num examples = 172 Num Epochs = 1 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 11 Followed by this error: RuntimeError: Caught RuntimeError in replica 1 on device 1. Original Traceback (most recent call last): File "/databricks/python/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1496, in forward outputs = self.model( File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1222, in forward encoder_outputs = self.encoder( File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 846, in forward layer_outputs = encoder_layer( File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward hidden_states, attn_weights, _ = self.self_attn( File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 191, in forward query_states = self.q_proj(hidden_states) * self.scaling File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl 
return forward_call(*input, **kwargs) File "/databricks/python/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` I've searched this site and GitHub and hugging face forum but still didn't find anything that helped me fix this for me (I tried adding more memory, lowering batches and warmup, restarting, specifying CPU or GPU, and more, but none worked for me) Databricks Clusters: Runtime: 12.2 LTS ML (includes Apache Spark 3.3.2, GPU, Scala 2.12) Worker Type: Standard_NC24s_v3 with 4 GPUs, 2 to 10 workers, I think 16GB RAM and 448GB memory for the host Runtime: 12.1 ML (includes Apache Spark 3.3.1, Scala 2.12) Worker Type: Standard_L8s (Memory optimized), 2 to 10 workers, 64GB memory with 8 cores Update: With the second cluster, depending on the flag combination, I sometimes get the error IndexError: Target {i} is out of bounds where i change from time to time If you require any other information, comment and I'll add it up asap My dataset is holding private information but here is an image of how it's built: Updates: I also tried setting the: fp16=True gradient_checkpointing=True gradient_accumulation_steps=4 Flags but still had the same error when putting each separately and together Second cluster error (it get this error only sometimes, based on the flag combination): IndexError Traceback (most recent call last) File <command-2692616476221798>:1 ----> 1 trainer.train() File /databricks/python/lib/python3.9/site-packages/transformers/trainer.py:1527, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1522 self.model_wrapped = self.model 1524 inner_training_loop = find_executable_batch_size( 1525 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1526 ) -> 1527 return inner_training_loop( 1528 args=args, 1529 resume_from_checkpoint=resume_from_checkpoint, 1530 trial=trial, 1531 ignore_keys_for_eval=ignore_keys_for_eval, 1532 ) File /databricks/python/lib/python3.9/site-packages/transformers/trainer.py:1775, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1773 tr_loss_step = self.training_step(model, inputs) 1774 else: -> 1775 tr_loss_step = self.training_step(model, inputs) 1777 if ( 1778 args.logging_nan_inf_filter 1779 and not is_torch_tpu_available() 1780 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)) 1781 ): 1782 # if loss is nan or inf simply add the average of previous logged losses 1783 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged) File /databricks/python/lib/python3.9/site-packages/transformers/trainer.py:2523, in Trainer.training_step(self, model, inputs) 2520 return loss_mb.reduce_mean().detach().to(self.args.device) 2522 with self.compute_loss_context_manager(): -> 2523 loss = self.compute_loss(model, inputs) 2525 if self.args.n_gpu > 1: 2526 loss = loss.mean() # mean() to average on multi-gpu parallel training File /databricks/python/lib/python3.9/site-packages/transformers/trainer.py:2555, in Trainer.compute_loss(self, model, inputs, return_outputs) 2553 else: 2554 labels = None -> 2555 outputs = model(**inputs) 2556 # Save past state if it exists 2557 # TODO: this needs to be fixed and made cleaner later. 
2558 if self.args.past_index >= 0: File /databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs) 1186 # If we don't have any hooks, we want to skip the rest of the logic in 1187 # this function, and just call forward. 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1189 or _global_forward_hooks or _global_forward_pre_hooks): -> 1190 return forward_call(*input, **kwargs) 1191 # Do not call functions when jit is used 1192 full_backward_hooks, non_full_backward_hooks = [], [] File /databricks/python/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:1561, in BartForSequenceClassification.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1559 elif self.config.problem_type == "single_label_classification": 1560 loss_fct = CrossEntropyLoss() -> 1561 loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) 1562 elif self.config.problem_type == "multi_label_classification": 1563 loss_fct = BCEWithLogitsLoss() File /databricks/python/lib/python3.9/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs) 1186 # If we don't have any hooks, we want to skip the rest of the logic in 1187 # this function, and just call forward. 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1189 or _global_forward_hooks or _global_forward_pre_hooks): -> 1190 return forward_call(*input, **kwargs) 1191 # Do not call functions when jit is used 1192 full_backward_hooks, non_full_backward_hooks = [], [] File /databricks/python/lib/python3.9/site-packages/torch/nn/modules/loss.py:1174, in CrossEntropyLoss.forward(self, input, target) 1173 def forward(self, input: Tensor, target: Tensor) -> Tensor: -> 1174 return F.cross_entropy(input, target, weight=self.weight, 1175 ignore_index=self.ignore_index, reduction=self.reduction, 1176 label_smoothing=self.label_smoothing) File /databricks/python/lib/python3.9/site-packages/torch/nn/functional.py:3026, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 3024 if size_average is not None or reduce is not None: 3025 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 3026 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) IndexError: Target 11 is out of bounds. The number 11 changes from time to time. | I was able to reproduce your problem, here is how I solved it (on both of the clusters you provided). 
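In short, the adjusted calls look roughly like this (a sketch rather than your exact script; num_labels=16 and the output directory are assumptions you should adapt, and the individual flags are explained below):

```python
from transformers import BartForSequenceClassification, TrainingArguments

model = BartForSequenceClassification.from_pretrained(
    "facebook/bart-large-mnli",
    num_labels=16,                 # assumption: set this to your real number of labels
    ignore_mismatched_sizes=True,  # the pretrained classification head has a different size
)

training_args = TrainingArguments(
    output_dir="bart_output",      # placeholder; keep your own directories
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    warmup_steps=50,
    weight_decay=0.01,
    fp16=True,                     # memory optimization
    gradient_checkpointing=True,   # memory optimization
)
```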
In order to solve it I used (at the from_pretrained call): ignore_mismatched_sizes=True, because the model you use has fewer labels than what you have; and num_labels={}, where you insert the number of your labels (I used 16 just to be sure). It's based on both your errors, but mainly the second one; I suspect it was also the source of the second error on the GPU, please test and confirm it. I also used the following (at the TrainingArguments call, for memory optimizations): fp16=True and gradient_checkpointing=True. I tested it with up to num_train_epochs=12, per_device_train_batch_size=16, per_device_eval_batch_size=64, warmup_steps=500 and it worked just fine; hopefully it will help you get the desired results. You can look at the 2 final links I provided for more details about how to optimize speed and memory while training on both GPU and CPU. For reference: Huggingface Models (search for ignore_mismatched_sizes) Huggingface Configuration (search for num_labels) Fine-tuning with custom datasets Efficient Training on a Single GPU Efficient Training on CPU | 3 | 3 |
76,058,279 | 2023-4-19 | https://stackoverflow.com/questions/76058279/the-travelling-salesman-problem-using-genetic-algorithm | I was looking to learn about AI and found the traveling salesman problem very interesting. I also wanted to learn about genetic algorithms, so it was a fantastic combo. The task is to find the shortest distance traveling from id 1 to each location from the list once and returning to the starting location id 1 Restriction for the problem : The location id 1 must be the starting and the ending point The maximum distance allowed is distance <= 9000 Only max of 250000 fitness calculation is allowed Code : import numpy as np import random import operator import pandas as pd val10 = 0 val9 = 0 class Locations: def __init__(self, x, y): self.x = x self.y = y def dist(self, location): x_dist = abs(float(self.x) - float(location.x)) y_dist = abs(float(self.y) - float(location.y)) # √( (x2 − x1)^2 + (𝑦2 − 𝑦1)^2 ) dist = np.sqrt((x_dist ** 2) + (y_dist ** 2)) return dist def __repr__(self): return "(" + str(self.x) + "," + str(self.y) + ")" class Fitness: def __init__(self, route): self.r = route self.dist = 0 self.fit = 0.0 def route_dist(self): if self.dist == 0: path_dist = 0 for i in range(0, len(self.r)): from_location = self.r[i] to_location = None if i + 1 < len(self.r): to_location = self.r[i+1] else: to_location = self.r[0] path_dist += from_location.dist(to_location) self.dist = path_dist return self.dist def route_fittness(self): if self.fit == 0: self.fit = 1 / float(self.route_dist()) global val9 val9 = val9 + 1 return self.fit def generate_route(location_list): route = random.sample(location_list, len(location_list)) return route def gen_zero_population(size, location_list): population = [] for i in range(0, size): population.append(generate_route(location_list)) return population def determine_fit(population): result = {} for i in range(0, len(population)): result[i] = Fitness(population[i]).route_fittness() global val10 val10 = val10 + 1 return sorted(result.items(), key=operator.itemgetter(1), reverse=True) def fit_proportionate_selection(top_pop, elite_size): result = [] df = pd.DataFrame(np.array(top_pop), columns=["index", "Fitness"]) df['cumulative_sum'] = df.Fitness.cumsum() df['Sum'] = 100*df.cumulative_sum/df.Fitness.sum() for i in range(0, elite_size): result.append(top_pop[i][0]) for i in range(0, len(top_pop) - elite_size): select = 100*random.random() for i in range(0, len(top_pop)): if select <= df.iat[i, 3]: result.append(top_pop[i][0]) break return result def select_mating_pool(populatoin, f_p_s_result): mating_pool = [] for i in range(0, len(f_p_s_result)): index = f_p_s_result[i] mating_pool.append(populatoin[index]) return mating_pool def ordered_crossover(p1, p2): child, child_p1, child_p2 = ([] for i in range(3)) first_gene = int(random.random() * len(p1)) sec_gene = int(random.random() * len(p2)) start_gene = min(first_gene, sec_gene) end_gene = max(first_gene, sec_gene) for i in range(start_gene, end_gene): child_p1.append(p1[i]) child_p2 = [item for item in p2 if item not in child_p1] child = child_p1 + child_p2 return child def ordered_crossover_pop(mating_pool, elite_size): children = [] leng = (len(mating_pool) - (elite_size)) pool = random.sample(mating_pool, len(mating_pool)) for i in range(0, elite_size): children.append(mating_pool[i]) for i in range(0, leng): var = len(mating_pool)-i - 1 child = ordered_crossover(pool[i], pool[var]) children.append(child) return children def swap_mutation(one_location, mutation_rate): for i in 
range(len(one_location)): if (random.random() < mutation_rate): swap = int(random.random() * len(one_location)) location1 = one_location[i] location2 = one_location[swap] one_location[i] = location2 one_location[swap] = location1 return one_location def pop_mutation(population, mutation_rate): result = [] for i in range(0, len(population)): mutaded_res = swap_mutation(population[i], mutation_rate) result.append(mutaded_res) return result def next_gen(latest_gen, elite_size, mutation_rate): route_rank = determine_fit(latest_gen) selection = fit_proportionate_selection(route_rank, elite_size) mating_selection = select_mating_pool(latest_gen, selection) children = ordered_crossover_pop(mating_selection, elite_size) next_generation = pop_mutation(children, mutation_rate) return next_generation def generic_algor(population, pop_size, elite_size, mutation_rate, gen): pop = gen_zero_population(pop_size, population) print("Initial distance: " + str(1 / determine_fit(pop)[0][1])) for i in range(0, gen): pop = next_gen(pop, elite_size, mutation_rate) print("Final distance: " + str(1 / determine_fit(pop)[0][1])) best_route_index = determine_fit(pop)[0][0] best_route = pop[best_route_index] print(best_route) return best_route def read_file(fn): a = [] with open(fn) as f: [next(f) for _ in range(6)] for line in f: line = line.rstrip() if line == 'EOF': break ID, x, y = line.split() a.append(Locations(x=x, y=y)) return a location_list = read_file(r'path_of_the_file') population = location_list pop_size = 100 elite_size = 40 mutation_rate = 0.001 gen = 500 generic_algor(population, pop_size, elite_size, mutation_rate, gen) print(val10) print(val9) Location file with x and y coordinates : |Locations | |52 Locations | |Coordinates | 1 565.0 575.0 2 25.0 185.0 3 345.0 750.0 4 945.0 685.0 5 845.0 655.0 6 880.0 660.0 7 25.0 230.0 8 525.0 1000.0 9 580.0 1175.0 10 650.0 1130.0 11 1605.0 620.0 12 1220.0 580.0 13 1465.0 200.0 14 1530.0 5.0 15 845.0 680.0 16 725.0 370.0 17 145.0 665.0 18 415.0 635.0 19 510.0 875.0 20 560.0 365.0 21 300.0 465.0 22 520.0 585.0 23 480.0 415.0 24 835.0 625.0 25 975.0 580.0 26 1215.0 245.0 27 1320.0 315.0 28 1250.0 400.0 29 660.0 180.0 30 410.0 250.0 31 420.0 555.0 32 575.0 665.0 33 1150.0 1160.0 34 700.0 580.0 35 685.0 595.0 36 685.0 610.0 37 770.0 610.0 38 795.0 645.0 39 720.0 635.0 40 760.0 650.0 41 475.0 960.0 42 95.0 260.0 43 875.0 920.0 44 700.0 500.0 45 555.0 815.0 46 830.0 485.0 47 1170.0 65.0 48 830.0 610.0 49 605.0 625.0 50 595.0 360.0 51 1340.0 725.0 52 1740.0 245.0 EOF I have tried to tweak the value of the parameters but it has never gone below or be 9000 it is always around the upper 9500 What can I improve to get it to work for my location file? | Turns out everything is fine tweaking the distance function and the fit_proportionate_selection ensures a faster run time which makes testing the parameters faster. The other change is to have a fixed seed for the random() this way the results can be compared without much variant. 
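For example, one line near the top of the script is enough (the seed value itself is arbitrary):

```python
import random
random.seed(0)  # fixed seed so repeated GA runs are directly comparable
```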
import itertools as it def fit_proportionate_selection(top_pop, elite_size): indices, fitness = zip(*top_pop) cumulative_sum = list(it.accumulate(fitness)) total = cumulative_sum[-1] weights = [100*cs/total for cs in cumulative_sum] result = list(indices[:elite_size]) for i in range(len(top_pop) - elite_size): select = random.randrange(100) for i, weight in enumerate(weights): if select <= weight: result.append(top_pop[i][0]) break return result Taken from my review question: Code review of the same code. The parameters picked were from a comment on that review question: pop_size = 150, elite_size=50, mutation_rate=0.0005, gen=400 | 4 | 0 |
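For the fixed-seed point mentioned in the answer above, a one-line sketch (the seed value 42 is arbitrary, not from the original answer):

import random

random.seed(42)  # same pseudo-random sequence on every run, so parameter changes are comparable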
76,068,150 | 2023-4-20 | https://stackoverflow.com/questions/76068150/how-to-use-inspect-signature-to-check-that-a-function-needs-one-and-only-one-par | I want to validate (at runtime, this is not a typing question), that a function passed as an argument takes only 1 positional variable (basically that function will be called with a string as input and returns a truthy). Naively, this is what I had: def check(v_in : Callable): """check that the function can be called with 1 positional parameter supplied""" sig = signature(v_in) len_param = len(sig.parameters) if not len_param == 1: raise ValueError( f"expecting 1 parameter, of type `str` for {v_in}. got {len_param}" ) return v_in If I check the following function, it is OK, which is good: def f_ok1_1param(v : str): pass but the next fails, though the *args would receive 1 param just fine and **kwargs would be empty def f_ok4_vargs_kwargs(*args,**kwargs): pass from rich import inspect as rinspect try: check(f_ok1_1param) print("\n\npasses:") rinspect(f_ok1_1param, title=False,docs=False) except (Exception,) as e: print("\n\nfails:") rinspect(f_ok1_1param, title=False,docs=False) try: check(f_ok4_vargs_kwargs) print("\n\npasses:") rinspect(f_ok4_vargs_kwargs, title=False,docs=False) except (Exception,) as e: print("\n\nfails:") rinspect(f_ok4_vargs_kwargs, title=False,docs=False) which passes the first, and fails the second, instead of passing both: passes: ╭─────────── <function f_ok1_1param at 0x1013c0f40> ───────────╮ │ def f_ok1_1param(v: str): │ │ │ │ 37 attribute(s) not shown. Run inspect(inspect) for options. │ ╰──────────────────────────────────────────────────────────────╯ fails: ╭──────── <function f_ok4_vargs_kwargs at 0x101512200> ────────╮ │ def f_ok4_vargs_kwargs(*args, **kwargs): │ │ │ │ 37 attribute(s) not shown. Run inspect(inspect) for options. │ ╰──────────────────────────────────────────────────────────────╯ all the different combination of signatures are defined below: def f_ok1_1param(v : str): pass def f_ok2_1param(v): pass def f_ok3_vargs(*v): pass def f_ok4_p_vargs(p, *v): pass def f_ok4_vargs_kwargs(*args,**kwargs): pass def f_ok5_p_varg_kwarg(param,*args,**kwargs): pass def f_bad1_2params(p1, p2): pass def f_bad2_kwargs(**kwargs): pass def f_bad3_noparam(): pass Now, I did already check a bit more about the Parameters: rinspect(signature(f_ok4_vargs_kwargs).parameters["args"]) ╭─────────────────── <class 'inspect.Parameter'> ───────────────────╮ │ Represents a parameter in a function signature. │ │ │ │ ╭───────────────────────────────────────────────────────────────╮ │ │ │ <Parameter "*args"> │ │ │ ╰───────────────────────────────────────────────────────────────╯ │ │ │ │ KEYWORD_ONLY = <_ParameterKind.KEYWORD_ONLY: 3> │ │ kind = <_ParameterKind.VAR_POSITIONAL: 2> │ │ name = 'args' │ │ POSITIONAL_ONLY = <_ParameterKind.POSITIONAL_ONLY: 0> │ │ POSITIONAL_OR_KEYWORD = <_ParameterKind.POSITIONAL_OR_KEYWORD: 1> │ │ VAR_KEYWORD = <_ParameterKind.VAR_KEYWORD: 4> │ │ VAR_POSITIONAL = <_ParameterKind.VAR_POSITIONAL: 2> │ ╰───────────────────────────────────────────────────────────────────╯ I suppose checking Parameter.kind vs _ParameterKind enumeration of each parameter is how this needs to be approached, but I wonder if I am overthinking this or if something already exists to do this, either in inspect or if the typing protocol support can be used to do, but at runtime. Note, theoretically def f_ok_cuz_default(p, p2 = None): would also work, but let's ignore that for now. p.s. 
The motivation is providing a custom callback function in a validation framework. The call location is deep in the framework and that particular argument can also be a string (which gets converted to a regex). It can even be None. Easiest here is just to stick a def myfilter(*args,**kwargs): breakpoint. Or myfilter(foo). Then look at what you get from the framework and adjust body. It’s one thing to have exceptions in your function, another for the framework to accept it but then error before calling into it. So a quick “will this work when we call it?” is more user friendly. | I don't think your problem is trivial, at all. And I am not aware of any given implementation, so I followed your train of thoughts and came to the conclusion that in the end, the problem boils down to answering the following questions: Does the callable have any arguments, at all? If so, is the first argument positional? If so, are all other arguments optional? If you answer all questions with yes, then your function can be called with exactly one (positional) argument, otherwise not. In this context, positional means: a single positional-or-keyword argument, as in f(a), or a single positional-only argument, as in f(a, /), or a variable list of positional arguments, as in f(*args); optional means: a variable list of positional arguments, as in f(a, *args), or a variable mapping of keyword arguments, as in f(a, **kwargs), or an argument with a default value, as in f(a, b=None). Implementation You can implement the corresponding check as follows: from inspect import Parameter, signature from typing import Callable def _is_positional(param: Parameter) -> bool: return param.kind in [ Parameter.POSITIONAL_OR_KEYWORD, Parameter.POSITIONAL_ONLY, Parameter.VAR_POSITIONAL] def _is_optional(param: Parameter) -> bool: return (param.kind in [ Parameter.VAR_POSITIONAL, Parameter.VAR_KEYWORD] or param.default is not Parameter.empty) def has_one_positional_only(fct: Callable) -> bool: args = list(signature(fct).parameters.values()) return (len(args) > 0 and # 1. We have one or more args _is_positional(args[0]) and # 2. First is positional all(_is_optional(a) for a in args[1:])) # 3. 
Others are optional Test cases This will return the correct results for your test cases (which I extended a bit): def f_ok1_1param(v : str): pass def f_ok2_1param(v): pass def f_ok3_vargs(*v): pass def f_ok4_p_vargs(p, *v): pass def f_ok4_vargs_kwargs(*args, **kwargs): pass def f_ok5_p_varg_kwarg(param,*args,**kwargs): pass def f_ok6_pos_only(v, /): pass # also ok: explicitly positional only def f_ok7_defaults(p, d=None): pass # also ok: with default value def f_bad1_2params(p1, p2): pass def f_bad2_kwargs(**kwargs): pass def f_bad3_noparam(): pass def f_bad4_kwarg_after_args(*args, v): pass # also not ok: v after *args is keyword-only def f_bad5_kwarg_only(*, v): pass # also not ok: explicitly keyword-only print(has_one_positional_only(f_ok1_1param)) # True print(has_one_positional_only(f_ok2_1param)) # True print(has_one_positional_only(f_ok3_vargs)) # True print(has_one_positional_only(f_ok4_p_vargs)) # True print(has_one_positional_only(f_ok5_p_varg_kwarg)) # True print(has_one_positional_only(f_ok6_pos_only)) # True print(has_one_positional_only(f_ok7_defaults)) # True print(has_one_positional_only(f_bad1_2params)) # False print(has_one_positional_only(f_bad2_kwargs)) # False print(has_one_positional_only(f_bad3_noparam)) # False print(has_one_positional_only(f_bad4_kwarg_after_args)) # False print(has_one_positional_only(f_bad5_kwarg_only)) # False Two final thoughts: At first I thought that the 2nd question needs to be formulated more generally, namely: Is any argument positional? However, I did not come up with any combination where it is not the first argument that must be the positional one (according to the given definition of positional, that is), and I am quite confident it is not possible with current Python syntax rules. The solution for sure is not the most efficient one, given that with the knowledge about the parameters that you already checked, certain others are impossible to follow and thus actually don't need to be checked again (e.g. if the first optional argument was **kwargs then an optional *args cannot follow, any more). Update for clarification: The provided solution will "work" in the sense that it checks whether a given callable's signature is compatible with calling it with exactly one positional argument. What it cannot ensure is that one given positional argument satisfies the internal logic of the callable. For example, if f(*args) internally tries to access an element after the 1st element in args, it will fail if called with one argument only, even though the proposed check lets it pass. There is not much to do to check the internal logic from the outside this way, other than running the callable or inspecting its actual code. (Also, to my understanding, the latter is not asked for in the question.) | 3 | 2 |
76,103,041 | 2023-4-25 | https://stackoverflow.com/questions/76103041/how-to-change-background-color-of-the-title-of-legend | In the example below, how can I change the background color of "Title" to be red (with alpha=.5), not the font color of "Title"? import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([1,2,3,], [1,2,3], label='Series') legend = ax.legend(title='Title') # Access to the legend title box adapted from # https://stackoverflow.com/a/63570572 legend._legend_title_box._text.set_color('red') | You can change the color of the bounding box of the title text: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([1, 2, 3, ], [1, 2, 3], label='Series') legend = ax.legend(title='Title') legend._legend_title_box._text.set_bbox(dict(facecolor='red', alpha=0.3, lw=0)) plt.show() PS: For the red background to span the entire width of the legend frame, you could pad the title with whitespace, as proposed by Trenton Mckinney. E.g. legend = ax.legend(title=f'{"Title": ^15}', frameon=False) | 3 | 4 |
76,105,881 | 2023-4-25 | https://stackoverflow.com/questions/76105881/why-does-python-defaultdict-use-factory-method-pattern | Python has a defaultdict class which behaves like a dictionary, except that it outputs a default value when encountering a new key. This default value is created using the default_factory attribute, which is a function that takes no arguments. For example, d = defaultdict(int) creates a dictionary d whose default value is 0, since int() returns 0. Why is the default value stored as a function which takes no arguments, rather than storing the default value itself? I am assuming there is a reason behind this choice of implementation, but I don't see what it is. From a naive perspective, it seems like storing the default value would be just as effective and less convoluted. | Think about how this would work for mutable types. For example, d = defaultdict(list) creates a new list object for each unassigned key that is requested. If you used a single instance, then each key would be returning the same instance of that list. Appending to an instance returned by one key, e.g. d[1].append(2) would have the observable behavior of appending the value to every other key in the dict. That behavior in almost all cases would be undesirable. Yes, for primitive/immutable datatypes, like int, float, str, etc, a default value would be acceptable, but that does not cover a huge use-case for containers or other mutable structures where you would want the value for each key to be unique. | 3 | 6 |
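A quick sketch illustrating the point of the answer above: the factory gives every missing key its own fresh object, whereas a single stored default would be shared by all keys. The shared-list simulation is an illustration I made up, not part of defaultdict's API:

from collections import defaultdict

d = defaultdict(list)             # list() is called once per missing key
d["a"].append(1)
d["b"].append(2)
print(d)                          # defaultdict(<class 'list'>, {'a': [1], 'b': [2]})

shared = []                       # simulate "store the default value itself"
d2 = defaultdict(lambda: shared)  # every missing key returns the same list object
d2["a"].append(1)
d2["b"].append(2)
print(d2["a"] is d2["b"])         # True: appends through one key are visible through the other
print(d2["a"])                    # [1, 2]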
76,082,216 | 2023-4-22 | https://stackoverflow.com/questions/76082216/python-pip-error-externally-managed-environment-after-upgrading-to-ubuntu-23-0 | I'm running Ubuntu Server and upgraded it from 22.10 to 23.04. After the installation finished, I tried running one of my python scripts and one of my modules went missing. I tried to reinstall it with pip3 install playsound but I get this error: error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. As the error says, I tried to install playsound with sudo apt install python3-playsound but it's not available through apt. I also tried python3 -m pip install playsound and that gives the same error. I don't really know what else to do because I have never needed to mess with pip like this before. Any thoughts/ideas appreciated! | Figured it out, just needed to activate the virtual environment I made. | 7 | 2 |
76,099,798 | 2023-4-25 | https://stackoverflow.com/questions/76099798/print-all-python-logs-within-github-action | I have a GitHub Action that calls a python script: - name: Test Python Script run: | python python_script.py This runs fine and prints me the logs from within here: if __name__ == "__ main __": print('Calling function') func_call() print('End of script') However, I don't get the print statements from func_call itself: def func_call() : print('inside func_call') When I run from my own terminal, the output is: Calling function inside func_call End of script Yet when I run from a GitHub Action I only get: Calling function End of script Is there a way I can propagate all print statements to the GitHub Action logs? | The solution was relatively simple, just adding the -u flag: - name: Test Python Script run: | python -u python_script.py From man python: -u Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. | 5 | 4 |
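Not part of the accepted answer, but a related in-script alternative: instead of passing -u you can flush explicitly with print(..., flush=True), or set the standard PYTHONUNBUFFERED=1 environment variable on the job step. A minimal sketch (the sleep stands in for the real work):

import time

print("Calling function", flush=True)   # flushed immediately even when stdout is a pipe
time.sleep(2)                           # stand-in for func_call()
print("inside func_call", flush=True)
print("End of script", flush=True)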
76,101,455 | 2023-4-25 | https://stackoverflow.com/questions/76101455/reformat-bidirectional-bar-chart-to-match-example | I have generated this bar-chart Using this code: s = """level,margins_fluid,margins_vp Volume,0,0 1L*,0.718,0.690 2L,0.501,0.808 5L,0.181,0.920 MAP,0,0 64*,0.434,0.647 58,0.477,0.854 52,0.489,0.904 Exam,0,0 dry,0.668,0.713 euvolemic*,0.475,0.798 wet,0.262,0.893 History,0,0 COPD*,0.506,0.804 Kidney,0.441,0.778 HF,0.450,0.832 Case,0,0 1 (PIV),0.435,0.802 2 (CVC)*,0.497,0.809""" data = np.array([a.split(',') for a in s.split("\n")]) fluid_vp_1_2 = pd.DataFrame(data[1:], columns=data[0]) fluid_vp_1_2['margins_fluid'] = fluid_vp_1_2['margins_fluid'].apply(float) fluid_vp_1_2['margins_vp'] = fluid_vp_1_2['margins_vp'].apply(float) fluid_vp_1_2 variableNames = {'Volume', 'MAP', 'Exam', 'History', 'Case'} font_color = '#525252' hfont = {'fontname':'DejaVu Sans'} facecolor = '#eaeaf2' index = fluid_vp_1_2.index#['level'] column0 = fluid_vp_1_2['margins_fluid']*100 column1 = fluid_vp_1_2['margins_vp']*100 title0 = 'Fluids' title1 = 'Vasopressors' fig, axes = plt.subplots(figsize=(10,5), facecolor=facecolor, ncols=2, sharey=True) axes[0].barh(index, column0, align='center', color='dimgray', zorder=10) axes[0].set_title(title0, fontsize=18, pad=15, color='black', **hfont) axes[1].barh(index, column1, align='center', color='silver', zorder=10) axes[1].set_title(title1, fontsize=18, pad=15, color='black', **hfont) # If you have positive numbers and want to invert the x-axis of the left plot axes[0].invert_xaxis() # To show data from highest to lowest plt.gca().invert_yaxis() axes[0].set(xlim = [100,0]) axes[1].set(xlim = [0,100]) axes[0].yaxis.tick_right() axes[0].set_yticks(range(len(fluid_vp_1_2))) maxWordLength = fluid_vp_1_2['level'].apply(lambda x: len(x)).max() formattedyticklabels = [r'$\bf{'+f"{t}"+r'}$' if t in variableNames else t for t in fluid_vp_1_2['level']] axes[0].set_yticklabels(formattedyticklabels, ha='center', position=(1.12, 0)) axes[0].tick_params(right = False) axes[1].tick_params(left = False) fig.tight_layout() plt.savefig("fluid_vp_1_2.jpg") plt.show() However, I would like to modify this chart to more closely resemble the below example, where the y-axis labels are on the left-hand side, bi-directional bars are making contact in the center, white background, more vertical in shape (shrunken x-axis), add x-axis label (“adjusted proportion of respondents”), but I would still like to maintain the order of variables and the gaps in bars caused by the bolded header labels like Volume, MAP, etc. Any tips? | There is a some simplification/factorization you can deal with to make styling your plots easier. But you are basically almost there. 
Just set the tick labels and remove spaces between plots with fig.subplots_adjust(wspace=0) (you have to remove fig.tight_layout()): from io import StringIO import matplotlib.pyplot as plt import pandas as pd s = """level,margins_fluid,margins_vp Volume,0,0 1L*,0.718,0.690 2L,0.501,0.808 5L,0.181,0.920 MAP,0,0 64*,0.434,0.647 58,0.477,0.854 52,0.489,0.904 Exam,0,0 dry,0.668,0.713 euvolemic*,0.475,0.798 wet,0.262,0.893 History,0,0 COPD*,0.506,0.804 Kidney,0.441,0.778 HF,0.450,0.832 Case,0,0 1 (PIV),0.435,0.802 2 (CVC)*,0.497,0.809""" # building df directly with pandas fluid_vp_1_2 = pd.read_csv(StringIO(s)) fluid_vp_1_2['margins_fluid'] = fluid_vp_1_2['margins_fluid']*100 fluid_vp_1_2['margins_vp'] = fluid_vp_1_2['margins_vp']*100 # style parameters for all plots title_format = dict( fontsize=18, pad=15, color='black', fontname='DejaVu Sans' ) plot_params = dict( align='center', zorder=10, legend=None, width=0.9 ) grid_params = dict( zorder=0, axis='x' ) tick_params = dict( left=False, which='both' ) variableNames = {'Volume', 'MAP', 'Exam', 'History', 'Case'} fig, axes = plt.subplots(figsize=(8,10), ncols=2, sharey=True, facecolor='#eaeaf2') # removing spaces between plots fig.subplots_adjust(wspace=0) # plotting Fluids fluid_vp_1_2.plot.barh(y='margins_fluid', ax=axes[0], color='dimgray', **plot_params) axes[0].grid(**grid_params) axes[0].set_title('Fluids', **title_format) axes[0].tick_params(**tick_params) # plotting Vasopressors fluid_vp_1_2.plot.barh(y='margins_vp', ax=axes[1], color='silver', **plot_params) axes[1].grid(**grid_params) axes[1].set_title('Vasopressors', **title_format) axes[1].tick_params(**tick_params) # adjust axes axes[0].invert_xaxis() plt.gca().invert_yaxis() axes[0].set(xlim = [100,0]) axes[1].set(xlim = [0,100]) # adding y labels formattedyticklabels = [rf'$\bf{{{t}}}$' if t in variableNames else t for t in fluid_vp_1_2['level']] axes[0].set_yticklabels(formattedyticklabels) plt.show() Edit: you can get a "longer" plot by changing figsize. Output for figsize=(8,10): | 3 | 2 |
76,071,127 | 2023-4-21 | https://stackoverflow.com/questions/76071127/find-spline-knots-by-variable-in-python | When fitting a linear GAM model in python imposing n_splines=5, a piecewise-linear function is fitted: import statsmodels.api as sm from pygam import LinearGAM data = sm.datasets.get_rdataset('mtcars').data Y = data['mpg'] X = data.drop("mpg",axis=1) model = LinearGAM(spline_order=1,n_splines=5).fit(X, Y) By using .coef_ from the fitted model, the coefficients for every spline can be recovered for further analysis: model.coef_ However, how can we obtain the sections of each of the 5 splines for each variable? As an example, for the cyl variable we would fit the following splines: The 5 sections are determined by the knots, so in the plot we would see the variable limits for the computed betas (i.e. 4-5, 5-6, 6-7, 7-8). The only thing I find in the documentation is the method model.edge_knots, which is array-like of floats of length 2: the minimum and maximum domain of the spline function. In this example it corresponds for cyl to [4,8]. | Finally I have come up with a solution: I use the partial dependence to evaluate the fitted function and then look for its slope changes by taking differences of the first differences (here model_gam is the fitted LinearGAM and i is the index of the term of interest). import numpy as np XX = model_gam.generate_X_grid(term=i) pdep, confi = model_gam.partial_dependence(term=i, X=XX, width=0.95) first_diff = [float("{:.3f}".format(d)) for d in np.diff(pdep)] second_diff = abs(np.diff(first_diff)) values_list = XX[np.where(second_diff > 0)[0], i] This leads to this result, which is suboptimal, but it seems a good enough first approach. | 5 | 2 |
76,089,319 | 2023-4-24 | https://stackoverflow.com/questions/76089319/express-relation-between-enum-and-its-member-in-python-typing | How to type (in Python, e.g. for MyPy) a function that expects two parameters - an enum and one of its values/members? from enum import Enum from typing import TypeVar, Type class MyEnumA(Enum): A = 1 B = 2 class MyEnumB(Enum): A = 1 B = 2 TE = TypeVar('TE', bound=Enum) def myfunction(member: TE, e: Type[TE]) -> None: pass myfunction(MyEnumA.A, MyEnumA) # all right myfunction(MyEnumA.A, MyEnumB) # I expect mypy-error here but it passed print(type(MyEnumA.A)) # says: <enum 'MyEnumA'> print(type(MyEnumB.A)) # says: <enum 'MyEnumB'> print(f'{isinstance(MyEnumA.A, MyEnumA)=}') # says: isinstance(MyEnumA.A, MyEnumA)=True print(f'{isinstance(MyEnumA.A, MyEnumB)=}') # says: isinstance(MyEnumA.A, MyEnumB)=False reveal_type(MyEnumA) # mypy: Revealed type is "def (value: builtins.object) -> e.MyEnumA" reveal_type(MyEnumA.A) # mypy: Revealed type is "Literal[e.MyEnumA.A]?" I would like to understand Why MyPy does not state an error for the second call; and How to type myfunction so that MyPy detects an error there. More examples: myfunction(MyEnumA.A, MyEnumA) # should pass - member of enum myfunction(MyEnumB.A, MyEnumB) # should pass - member of enum myfunction(MyEnumA.A, MyEnumB) # should fail - member of other enum myfunction(MyEnumB.A, MyEnumA) # should fail - member of other enum | The key observation is what MyPy actually considers a class MyEnumA to be: reveal_type(MyEnumA) # mypy: Revealed type is "def (value: builtins.object) -> e.MyEnumA" It gives a clue to annotate myfunction in the following way, no matter how unobvious it is: P = ParamSpec('P') def myfunction(member: TE, e: Callable[P, TE]) -> None: That makes MyPy to find typing error in the second call of myfunction: error: Argument 2 to "myfunction" has incompatible type "Type[MyEnumA]"; expected "Callable[[object], MyEnumB]" [arg-type] | 8 | 1 |
76,087,617 | 2023-4-23 | https://stackoverflow.com/questions/76087617/how-to-search-with-regexp-within-blocks | I have a long file that is composed of many blocks like this: x=0 abc 20 def 76 ghi 11 x=1 def 45 ghi 32 x=2 jkl 56 mno 76 abc 134 . . . x=393 xyz 21 abc 9 x=394 ghi 43 def 166 xyz 25 I need the regular expression (for python) to get this output: return_list = [20,'x',134,...,9,'x'] (so matches for abc and x should be assigned to the same group) So I need to look in each block if abc is available, if it is match integer after abc, if not match the 'x' at the beginning. Block length (lines) are different for each block. I tried that regexp: (?ms)(?=x=\d).*?(?=\n\n) That matches all blocks separately. I now need the extension to search for abc in the individual blocks and return the x if not present. I have tried many things but since I am a beginner I can not find a solution. I hope someone can help me. | You can achieve the result you want with a single regex, by changing your regex to terminate at either abc = \d+ or \n\n or \Z (to allow a match of the last group at the end of the string - this is required for your sample data as it doesn't finish with \n\n). (?ms)(?=x=\d).*?(?:abc (\d+)|\n\n|\Z) If abc = \d+ is present in the block, the digits will be captured in group 1, otherwise it will be empty. You can then use this fact to substitute the value with an 'x': res = [m.group(1) or 'x' for m in re.finditer(r'(?ms)(?=x=\d).*?(?:abc (\d+)|\n\n|\Z)', data)] Output: ['20', 'x', '134', '9', 'x'] Note if you want integers rather than strings you can modify the code slightly: res = [int(m.group(1)) if m.group(1) else 'x' for m in re.finditer(r'(?ms)(?=x=\d).*?(?:abc (\d+)|\n\n|\Z)', data)] Output: [20, 'x', 134, 9, 'x'] | 3 | 4 |
76,086,714 | 2023-4-23 | https://stackoverflow.com/questions/76086714/can-i-set-a-variable-with-the-result-of-match | Setting a variable with a match can be done simply like this: Mode = 'fast' puts = '' match Mode: case "slow": puts = 'v2 k5' case "balanced": puts = 'v3 k5' case "fast": puts = 'v3 k7' But can you do something like this? Mode = 'fast' puts = match Mode: case "slow": 'v2 k5' case "balanced": 'v3 k5' case "fast": 'v3 k7' Currently it results in a syntax error. | Not as far as I know in python. In this case I wouldn't use pattern matching at all: puts = { "slow": "v2 k5", "balanced": "v3 k5", "fast": "v3 k7" }.get(Mode, default) Clearer, more declarative, and no repeating yourself. But of course you can't use clever pattern destructuring. | 8 | 13 |
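Not part of the accepted answer, but for completeness: since match is a statement rather than an expression, the usual workaround if you do want to keep it is to wrap it in a small function (Python 3.10+). Function and variable names here are made up:

def puts_for(mode: str) -> str:
    match mode:
        case "slow":
            return "v2 k5"
        case "balanced":
            return "v3 k5"
        case "fast":
            return "v3 k7"
        case _:
            return ""

puts = puts_for("fast")   # 'v3 k7'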
76,077,950 | 2023-4-22 | https://stackoverflow.com/questions/76077950/how-do-you-copy-a-dataframe-in-polars | In polars, what is the way to make a copy of a dataframe? In pandas it would be: df_copy = df.copy() But what is the syntax for polars? | That would be clone : df = pl.DataFrame( { "a": [1, 2, 3, 4], "b": [0.5, 4, 10, 13], "c": [True, True, False, True], } ) df_copy = df.clone() #cheap deepcopy/clone Output : print(df_copy) shape: (4, 3) ┌─────┬──────┬───────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ bool │ ╞═════╪══════╪═══════╡ │ 1 ┆ 0.5 ┆ true │ │ 2 ┆ 4.0 ┆ true │ │ 3 ┆ 10.0 ┆ false │ │ 4 ┆ 13.0 ┆ true │ └─────┴──────┴───────┘ | 7 | 11 |
76,067,104 | 2023-4-20 | https://stackoverflow.com/questions/76067104/using-vicuna-langchain-llama-index-for-creating-a-self-hosted-llm-model | I want to create a self hosted LLM model that will be able to have a context of my own custom data (Slack conversations for that matter). I've heard Vicuna is a great alternative to ChatGPT and so I made the below code: from llama_index import SimpleDirectoryReader, LangchainEmbedding, GPTListIndex, \ GPTSimpleVectorIndex, PromptHelper, LLMPredictor, Document, ServiceContext from langchain.embeddings.huggingface import HuggingFaceEmbeddings import torch from langchain.llms.base import LLM from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM !export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 class CustomLLM(LLM): model_name = "eachadea/vicuna-13b-1.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) pipeline = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0, model_kwargs={"torch_dtype":torch.bfloat16}) def _call(self, prompt, stop=None): return self.pipeline(prompt, max_length=9999)[0]["generated_text"] def _identifying_params(self): return {"name_of_model": self.model_name} def _llm_type(self): return "custom" llm_predictor = LLMPredictor(llm=CustomLLM()) But sadly I'm hitting the below error: OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB (GPU 0; 22.03 GiB total capacity; 21.65 GiB already allocated; 94.88 MiB free; 21.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Here's the output of !nvidia-smi (before running anything): Thu Apr 20 18:04:00 2023 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA A10G Off| 00000000:00:1E.0 Off | 0 | | 0% 23C P0 52W / 300W| 0MiB / 23028MiB | 18% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | No running processes found | +---------------------------------------------------------------------------------------+ Any idea how to modify my code to make it work? | length is too long, 9999 will consume huge amount of GPU RAM, especially using 13b model. try 7b model. And try using something like peft/bitsandbytes to reduce GPU RAM usage. set load_in_8bit=True is a good start. | 6 | 2 |
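A minimal sketch of the answer's suggestion (smaller checkpoint, 8-bit weights). The model id is an assumption and should be checked on the Hugging Face hub, and load_in_8bit requires the bitsandbytes and accelerate packages to be installed alongside transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "eachadea/vicuna-7b-1.1"  # assumed 7B checkpoint; verify the exact repo name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # let accelerate place the weights
    load_in_8bit=True,   # 8-bit weights via bitsandbytes, large VRAM saving vs fp16/fp32
)

# Vicuna is a causal LM, so use the text-generation task; no device=0 here because
# device placement is already handled by device_map="auto".
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
out = pipe("Hello, how are you?", max_new_tokens=64)
print(out[0]["generated_text"])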
76,073,794 | 2023-4-21 | https://stackoverflow.com/questions/76073794/difference-between-creating-indexes-in-django-using-db-index-true-and-class-meta | I was looking into how to create an index in Django. I came across two ways: class Student(models.Model): name = models.CharField(........, db_index=True) age = models.CharField(........, db_index=True) class Student(models.Model): name = models.CharField(........) age = models.CharField(........) class Meta: indexes = [models.Index(name="first_index", fields=["name", "age"],)] Can someone clarify what the difference between them is? Moreover, in the db_index=True method, is there a single index for all the fields with db_index=True? | The latter makes a combined index: a single index over the two fields, which makes a retrieval efficient when both fields are specified in the filter. Filtering on name alone can often still use that combined index, since name is its leading column, but filtering on age alone will (likely) not use it; it is when you specify both name and age that the combined index really pays off, because the database can then find the matching Student(s) directly. With only the two separate db_index=True indexes, the database will probably look in both indexes and then intersect the sets of records that appear in both. The two approaches are not mutually exclusive, however: you can make an index for name, one for age, and one for name and age combined with: class Student(models.Model): name = models.CharField(max_length=128, db_index=True) age = models.IntegerField(db_index=True) class Meta: indexes = [models.Index(name='name_age_index', fields=['name', 'age'],)] | 5 | 0 |
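A couple of hypothetical ORM calls to make the distinction above concrete, assuming the Student model from the answer; the field values are invented:

# Served well by the combined (name, age) index from the Meta.indexes example:
both = Student.objects.filter(name="Alice", age=20)

# With only the per-field db_index=True indexes, each of these lookups uses its own
# single-column index:
by_name = Student.objects.filter(name="Alice")
by_age = Student.objects.filter(age=20)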
76,064,194 | 2023-4-20 | https://stackoverflow.com/questions/76064194/store-users-credentials-in-python-package | I'm working on a Python package where I need to get credentials (email and password) from the user via CLI and they can enter their credentials just as follows. $ my_package auth --email [email protected] --password testpass123 My package is responsible for storing and using the provided credentials in the next calls (even if the system reboots). What is the best way of implementing this? Using environment variables? Online password managers? Keeping them in the user's $HOME directory? | In memory: These days most operating systems have memory protection, which prevents other processes from accessing the memory used by your process (you can specifically ask to share memory, though). So even a plain Python variable could do the job (just read it from sys.argv and store it in a variable), or an environment variable, but you need to make sure your code doesn't make insecure calls such as loading a pickle or shelve, or using exec/eval on data from an unknown source. While the process is running, a Python variable and an environment variable have basically the same security: anyone who can read the variable can also read the environment variable via os.environ. The difference is that if your script forks another process, the environment variables get copied to the child. In the file system: Since you mentioned "after a reboot", you need to store the credentials in the file system. There is a well-known package called keyring. It helps you store your credentials encrypted. If I'm not wrong, it uses the logged-in user's password to encrypt the data. It has a friendly interface to use. You can use the client's email as the username for setup. There is nothing wrong with online password managers, but if you have one locally and the system has set permissions properly, it saves you one extra request. | 3 | 2 |
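A short sketch of the keyring approach mentioned above; the service name and the example email/password are placeholders I made up:

import keyring

SERVICE_NAME = "my_package"  # hypothetical service identifier for this package

def save_credentials(email, password):
    # Stored by the OS keyring backend (Keychain, Windows Credential Locker, Secret Service, ...)
    keyring.set_password(SERVICE_NAME, email, password)

def load_password(email):
    # Returns the stored password, or None if nothing was saved for this email
    return keyring.get_password(SERVICE_NAME, email)

save_credentials("user@example.com", "testpass123")
print(load_password("user@example.com"))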
76,072,637 | 2023-4-21 | https://stackoverflow.com/questions/76072637/how-to-check-the-frequency-of-a-word-in-the-english-language | My problem: I want to check if the provided word is a common English word. I'm using pyenchant currently to see if a word is an actual word but I can't find a function in it that returns the frequency of a word/if it's a common word. Example code: import enchant eng_dict = enchant.Dict("en_US") words = ['hello', 'world', 'thisisntaword', 'anachronism'] good_words = [] for word in words: if eng_dict.check(word): # currently this checks if it's an english word, but I also want it to check it it's commonly used word good_words.append(word) print(good_words) What it returns as is: ['hello', 'world', 'anachronism']. What I want it to return:['hello', 'world'] because anachronism is obviously not a common word. Any solutions my problem? | You could use the Google Ngram API for this: url = "https://books.google.com/ngrams/json" query_params = { "content": <my_noun_phrase/string of noun phrases>, "year_start": 2017, "year_end": 2019, "corpus": 26, "smoothing": 1, "case_insensitive": True } response = requests.get(url=url, params=query_params) This API lets you access v3 of the Google ngram database, which is the most recent version available. Note, however, that the API is not officially documented, and since you run into rate limits quite easily, it's not production-proof. Alternative tools are PhraseFinder (https://phrasefinder.io/) and NGRAMS (https://ngrams.dev/). PhraseFinder is a wrapper around v2 of the Google ngram database; NGRAM is a wrapper around v3 of the same database. They are both free and can handle more traffic than the Google API. | 3 | 4 |
76,071,934 | 2023-4-21 | https://stackoverflow.com/questions/76071934/how-to-correctly-subtype-dict-so-that-mypy-recognizes-it-as-generic | I have a subclass of dict: class MyDict(dict): pass Later I use the definition: my_var: MyDict[str, int] = {'a': 1, 'b': 2} MyPy complains: error: "MyDict" expects no type arguments, but 2 given [type-arg] How can I define MyDict so that MyPy recognizes it as generic with two type arguments? I have tried deriving from typing.Dict and adding protocol MutableMapping, both to no avail. | As a general rule you can remember to always provide type arguments to generic types, regardless of context. (Remember: explicit is better than implicit.) This means in annotations (e.g. d: dict[str, int]) and when subclassing (e.g. class MyDict(dict[str, int]): ...). If you want to retain genericity in terms of some or all type parameters of the base class in your own subclass, then pass type variables in place of specific type arguments: from collections.abc import Hashable from typing import TypeVar K = TypeVar("K", bound=Hashable) V = TypeVar("V") class MyDict(dict[K, V]): ... class MyStrKeyDict(dict[str, V]): ... class MyIntValDict(dict[K, int]): ... x: MyDict[str, int] y: MyStrKeyDict[int] z: MyIntValDict[str] The reason MyPy told you it did not expect any type arguments for MyDict is that it implicitly assumed MyDict is a subclass of dict[Any, Any] because you did not specify the type arguments. Side note: It would still be wrong to now assign e.g. x = {"a": 1} as you did in your example because dict is not a subtype of MyDict. This is why it is usually preferable to subclass the ABC (in this case Mapping or MutableMapping) or use protocols. | 7 | 5 |
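A small follow-up sketch to the side note in the answer above (variable names are made up, and this has not been checked against every mypy version): constructing the subclass directly keeps the type checker happy, while assigning a plain dict literal does not:

from collections.abc import Hashable
from typing import TypeVar

K = TypeVar("K", bound=Hashable)
V = TypeVar("V")

class MyDict(dict[K, V]):
    ...

my_var: MyDict[str, int] = MyDict({"a": 1, "b": 2})  # OK: an actual MyDict instance
bad_var: MyDict[str, int] = {"a": 1, "b": 2}         # rejected: a plain dict is not a MyDict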
76,065,984 | 2023-4-20 | https://stackoverflow.com/questions/76065984/how-to-provide-type-hint-for-a-function-that-returns-an-protocol-subclass-in-pyt | If a function returns a subclass of a Protocol, what is the recommended type-hint for the return type of that function? The following is a simplified piece of code for representation from typing import Protocol, Type from abc import abstractmethod class Template(Protocol): @abstractmethod def display(self) -> None: ... class Concrete1(Template): def __init__(self, grade: int) -> None: self._grade = grade def display(self) -> None: print(f"Displaying {self._grade}") class Concrete2(Template): def __init__(self, name: str) -> None: self._name = name def display(self) -> None: print(f"Printing {self._name}") def give_class(type: int) -> Type[Template]: if type == 1: return Concrete1 else: return Concrete2 concrete_class = give_class(1) concrete_class(5) In the line concrete_class(5), Pylance informs Expected no arguments to "Template" constructor. | Protocols are not ABCs Let me start of by emphasizing that protocols were introduced specifically so you do not have to define a nominal subclass to create a subtype relation. That is why it is called structural subtyping. To quote PEP 544, the goal was allowing users to write [...] code without explicit base classes in the class definition. While you can subclass a protocol explicitly when defining a concrete class, that is not what they were designed for. Protocols are not abstract base classes. By using your protocol like an ABC, you are basically discarding everything that makes a protocol useful in the first place. Your error and possible solutions As to why you are getting that error, that is easily explained. Your Template protocol does not define its own __init__ method. When a variable is declared to be of type type[Template] (i.e. a class implementing the Template protocol) and you want to instantiate it, the type checker will see that Template does not define an __init__ and fall back to the object.__init__, which takes no arguments. Thus, providing an argument to the constructor is correctly marked as an error. Since you want to use your protocol not only to annotate pure instances that follow it, but also classes that you want to instantiate (i.e. type[Template]), you need to think about the __init__ method. If you want to express that for a class to implement your Template protocol it can have any constructor whatsoever, you should include such a permissive __init__ signature in the protocol, for example: class Template(Protocol): def __init__(self, *args: Any, **kwargs: Any) -> None: ... If you want to be more specific/restrictive, that is possible of course. You could for example declare that Template-compliant classes must take exactly one argument in their __init__, but it can be of any type: class Template(Protocol): def __init__(self, _arg: Any) -> None: ... Both of these solutions would work in your example. The latter however would restrict you from passing a keyword-argument to the constructor with any name other than _arg obviously. Proper structural subtyping To conclude, I would suggest you actually utilize the power of protocols properly to allow for structural subtyping and get rid of the explicit subclassing and abstractmethod decorators. If all you care about is a fairly general constructor and your display method, you can achieve that like this: from typing import Any, Protocol class Template(Protocol): def __init__(self, _arg: Any) -> None: ... 
def display(self) -> None: ... class Concrete1: def __init__(self, grade: int) -> None: self._grade = grade def display(self) -> None: print(f"Displaying {self._grade}") class Concrete2: def __init__(self, name: str) -> None: self._name = name def display(self) -> None: print(f"Printing {self._name}") def give_class(type_: int) -> type[Template]: if type_ == 1: return Concrete1 else: return Concrete2 concrete_class = give_class(1) concrete_class(5) This passes mypy --strict without errors (and should satisfy Pyright too). As you can see, both Concrete1 and Concrete2 are accepted as return values for give_class because they both follow the Template protocol. Proper use of ABCs There are of course still valid applications for abstract base classes. For example, if you wanted to define an actual implementation of a method in your base class that itself calls an abstract method, subclassing that explicitly (nominal subtyping) can make perfect sense. Example: from abc import ABC, abstractmethod from typing import Any class Template(ABC): @abstractmethod def __init__(self, _arg: Any) -> None: ... @abstractmethod def display(self) -> None: ... def call_display(self) -> None: self.display() class Concrete1(Template): def __init__(self, grade: int) -> None: self._grade = grade def display(self) -> None: print(f"Displaying {self._grade}") class Concrete2(Template): def __init__(self, name: str) -> None: self._name = name def display(self) -> None: print(f"Printing {self._name}") def give_class(type_: int) -> type[Template]: if type_ == 1: return Concrete1 else: return Concrete2 concrete_class = give_class(1) obj = concrete_class(5) obj.call_display() # Displaying 5 But that is a totally different use case. Here we have the benefit that Concrete1 and Concrete2 are nominal subclasses of Template, thus inherit call_display from it. Since they are nominal subclasses anyway, there is no need for Template to be a protocol. And all this is not to say that it is impossible to find applications, where it is useful for something to be both a protocol and an abstract base class. But such a use case should be properly justified and from the context of your question I really do not see any justification for it. | 3 | 3 |
76,070,545 | 2023-4-21 | https://stackoverflow.com/questions/76070545/fastapi-difference-between-json-dumps-and-jsonresponse | I am exploring FastAPI, and got it working on my Docker Desktop on Windows. Here's my main.py which is deployed successfully in Docker: #main.py import fastapi import json from fastapi.responses import JSONResponse app = fastapi.FastAPI() @app.get('/api/get_weights1') async def get_weights1(): weights = {'aa': 10, 'bb': 20} return json.dumps(weights) @app.get('/api/get_weights2') async def get_weights2(): weights = {'aa': 10, 'bb': 20} return JSONResponse(content=weights, status_code=200) And I have a simple python file get_weights.py to make requests to those 2 APIs: #get_weights.py import requests import json resp = requests.get('http://127.0.0.1:8000/api/get_weights1') print('ok', resp.status_code) if resp.status_code == 200: print(resp.json()) resp = requests.get('http://127.0.0.1:8000/api/get_weights2') print('ok', resp.status_code) if resp.status_code == 200: print(resp.json()) I get the same responses from the 2 APIs, output: ok 200 {"aa": 10, "bb": 20} ok 200 {'aa': 10, 'bb': 20} The response seems the same whether I use json.dumps() or JSONResponse(). I've read the FastAPI documentation on JSONResponse, but I still have the questions below: May I know if there is any difference between the 2 methods? If there is a difference, which method is recommended (and why?)? | In FastAPI you can create a response in 3 different ways (from most concise to most flexible): 1 return dict # Or model or ... In the docs you linked we can see that FastAPI will automatically stringify this dict and wrap it in a JSONResponse. This way is the most concise and covers the majority of use cases. 2 However, sometimes you have to return custom headers (for example REMOTE-USER=username) or a different status code (maybe 201 - Created or 202 - Accepted). In this case you need to use JSONResponse. return JSONResponse(content=dict) # Here we need to have a dict. The problem is that if we don't have a simple dict, we have to use jsonable_encoder(some_model) # -> dict to get it. So it's a tad more verbose. For the available options check the Starlette documentation, since FastAPI just re-exports it. A more complex example: return JSONResponse(content=jsonable_encoder(some_model), status_code=201, headers={"REMOTE-USER": username}) 3 Finally, you do not need to return JSON - you can also return csv, html or any other type of file. In this case we have to use Response and specify the media_type. Again, see the Starlette docs. return Response('Hello, world!', media_type='text/plain') In short Note that the FastAPI documentation states: When you return a Response directly its data is not validated, converted (serialized), nor documented automatically. So we see what the difference is: the first method has good integration with other FastAPI functionality, so it should always be preferred. Use the second option only if you need to provide custom headers or status codes. Finally, use the third option only if you want to return something that is not JSON. | 3 | 3 |
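To make the three options above concrete, here is a small self-contained sketch; the endpoint paths and the X-Demo header name are invented for illustration:

from fastapi import FastAPI, Response
from fastapi.encoders import jsonable_encoder
from fastapi.responses import JSONResponse

app = FastAPI()

@app.get("/option1")
async def option1():
    # FastAPI serializes the dict and wraps it in a JSONResponse for you
    return {"aa": 10, "bb": 20}

@app.get("/option2")
async def option2():
    # Full control over status code and headers
    payload = jsonable_encoder({"aa": 10, "bb": 20})
    return JSONResponse(content=payload, status_code=201, headers={"X-Demo": "1"})

@app.get("/option3")
async def option3():
    # Non-JSON body: set the media type yourself
    return Response("aa,bb\n10,20\n", media_type="text/csv")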
76,069,958 | 2023-4-21 | https://stackoverflow.com/questions/76069958/how-to-change-the-format-of-a-date-in-python-to-the-spanish-format | I have a small problem with my date code. I would like the date to appear in Spanish add the date in Spanish. The date is currently formatted like this: 2004/03/30 But I want the formatting to be like this: 04/21/2023 What I'm trying to do is display any date in Spanish, as follows: Viernes, 02 De Junio De 2023 Is there any way to do this? The code I am using is the following: from datetime import datetime, timedelta s = '2004/03/30' date = datetime.strptime(s, "%B") modified_date = date + timedelta(days=1) print(datetime.strftime(modified_date, "%B")) | You can use the babel library to localize the date to Spanish. pip install babel The code would look like this: from datetime import datetime from babel.dates import format_date s = '2004/03/30' date = datetime.strptime(s, "%Y/%m/%d") # Spanish locale spanish_date = format_date(date, "EEEE, dd 'de' MMMM 'de' yyyy", locale='es') print(spanish_date) Output: martes, 30 de marzo de 2004 | 3 | 3 |
76,069,849 | 2023-4-21 | https://stackoverflow.com/questions/76069849/matplotlib-and-axes3d-give-a-blank-picture | I want to draw a 3D picture with these data using matplotlib and Axes3D import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D data = np.array([[4244.95, 4151.69, 2157.41, 829.769, 426.253, 215.655], [8263.14, 4282.98, 2024.68, 1014.6, 504.515, 250.906], [8658.01, 4339.53, 2173.56, 1087.65, 544.069, 544.073]]) x = np.array([1, 2, 4, 8, 16, 32]) y = np.array([2, 4, 8]) x, y = np.meshgrid(x, y) z = data fig = plt.figure() ax = Axes3D(fig) ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='rainbow') ax.set_xlabel('Stride') ax.set_ylabel('Bitwidth') ax.set_zlabel('Performance') plt.show() In my computer, it gives a blank picture. But I run this code in other two computers, one is correct, the other one is blank. matplotlib: 3.7.1 numpy: 1.24.2 I have tried in windows11 and wsl ubuntu-20.04, but still the blank pictures. | replace this line ax = Axes3D(fig) with ax = fig.add_subplot(projection='3d') you do this in order to alert Matplotlib that we're using 3d data. | 3 | 10 |
76,069,337 | 2023-4-21 | https://stackoverflow.com/questions/76069337/how-to-create-a-function-which-applies-lag-to-a-numpy-array-in-python | Here are 3 numpy arrays with some numbers: import numpy as np arr_a = np.array([[0.1, 0.2, 0.3, 0.4], [0.2, 0.3, 0.4, 0.5], [0.3, 0.4, 0.5, 0.6], [0.4, 0.5, 0.6, 0.7], [0.5, 0.6, 0.7, 0.8], [0.6, 0.7, 0.8, 0.9]]) arr_b = np.array([[0.15, 0.25, 0.35, 0.45], [0.35, 0.45, 0.55, 0.65], [0.55, 0.65, 0.75, 0.85], [0.75, 0.85, 0.95, 1.05], [0.95, 1.05, 1.15, 1.25], [1.15, 1.25, 1.35, 1.45]]) arr_c = np.array([[0.3, 0.6, 0.9, 1.2], [0.6, 0.9, 1.2, 1.5], [0.9, 1.2, 1.5, 1.8], [1.2, 1.5, 1.8, 2.1], [1.5, 1.8, 2.1, 2.4], [1.8, 2.1, 2.4, 2.7]]) Each array has a shape of (6, 4). Where rows=person and columns=time(seconds), consider each row to represent a unique person and each column to represent acceleration associated with that person for that particular moment in time. I would like to create a function called calc_acc_change which computes the change in acceleration according to a given lag value. I would like this function to take in an array and a lag value (default=2) which satisfies the following formula: acc_change(t) = acc(t) - acc(t- lag), where t=time. And I would like the output to remain an array. I've started my function as follow, but I don't know how to complete it: def calc_acc_change(array, lag=2): ... return acc_change To test the function works, please input arr_a, arr_b and arr_c into the function and print the output. Any help is much appreciated :) | Slice your input array based on the given lag and then subtract the sliced arrays. import numpy as np def calc_acc_change(array, lag=2): acc_change = np.zeros_like(array) acc_change[:, lag:] = array[:, lag:] - array[:, :-lag] return acc_change arr_a = np.array([[0.1, 0.2, 0.3, 0.4], [0.2, 0.3, 0.4, 0.5], [0.3, 0.4, 0.5, 0.6], [0.4, 0.5, 0.6, 0.7], [0.5, 0.6, 0.7, 0.8], [0.6, 0.7, 0.8, 0.9]]) arr_b = np.array([[0.15, 0.25, 0.35, 0.45], [0.35, 0.45, 0.55, 0.65], [0.55, 0.65, 0.75, 0.85], [0.75, 0.85, 0.95, 1.05], [0.95, 1.05, 1.15, 1.25], [1.15, 1.25, 1.35, 1.45]]) arr_c = np.array([[0.3, 0.6, 0.9, 1.2], [0.6, 0.9, 1.2, 1.5], [0.9, 1.2, 1.5, 1.8], [1.2, 1.5, 1.8, 2.1], [1.5, 1.8, 2.1, 2.4], [1.8, 2.1, 2.4, 2.7]]) print("Array A:\n", calc_acc_change(arr_a)) print("Array B:\n", calc_acc_change(arr_b)) print("Array C:\n", calc_acc_change(arr_c)) | 3 | 3 |
76,059,511 | 2023-4-19 | https://stackoverflow.com/questions/76059511/how-would-you-properly-use-the-python-c-api-in-a-c-shared-library | Apologies in advance for the extremely long question--it has been a journey... so thanks in advance for your patience. TL;DR: python throws a segfault when using this set up: [numpy code] <-- [python/C api] <-- [C-implemented .so library] <-- [py driver using ctypes' cdll] Note the py driver is just for testing the .so library, the lib is ultimately intended to be used in another 3rd party app. Background Our team has some python code that we are trying to integrate into another 3rd party application, and the application in question expects symbol implementations in the form of a shared object file (.so library). To accomplish this, I am trying to make a shared object library in C that leverages the Python/C API. I have been running into some issues getting things to run as expected, so I've tried to distill the problem down to an "as-simple-as-possible" example: I am making a basic shared object library written in C, using the Python/C API, and in particular this library's methods have a numpy dependency (this will be important later). The lib is ultimately intended to be used by a "runtime application" (via dynamically loading symbols, for example with dlfcn.h) that is "pre-existing" (I don't have control over the compilation/linking etc. of said program). I am running in to what seems like a linking issue, but may be an issue with either how the so is compiled and/or how I am using/configuring python via the C API. I know this is possible to do because of some existing use cases I've found about on the internets, but I cannot seem to zero in on where my setup is failing. Setup Let's say I am on ubuntu 20.04 with python3.8-dev installed system-wide (in this example I am actually working in a docker container, so if it ends up possibly a system setup issue I am happy to provide the dockerfile as well). Further, I have a virtual environment set up in e.g. ~/venv, it is activated, and numpy is installed--I can confirm this with, for example, (venv) user@4189d31a5bbe:~$ which python && python -c "import numpy; print(numpy)" /home/user/venv/bin/python <module 'numpy' from '/home/user/venv/lib/python3.8/site-packages/numpy/__init__.py'> Library Files I have the following header/source files: mylibwithpy.h: #ifndef __MYLIBWITHPY__ #define __MYLIBWITHPY__ #include <stdio.h> #include <Python.h> void someFunctionWithPython(); #endif mylibwithpy.c: All the function someFunctionWithPython does is check if python is initialized, initializes if it isn't, and then tries to import numpy. 
#include "mylibwithpy.h" void someFunctionWithPython() { if (!Py_IsInitialized()) { printf("Initializing python...\n"); Py_Initialize(); } else { printf("python alread initialized.\n"); } printf("importing numpy...\n"); PyObject* numpy = PyImport_ImportModule("numpy"); if (numpy == NULL) { printf("Warning: error during import:\n"); PyErr_Print(); Py_Finalize(); exit(1); } return; } The library *.so file is compiled via these Makefile targets: mylibwithpy.o: gcc -L/usr/lib/x86_64-linux-gnu -I/usr/include/python3.8 -Wall -c mylibwithpy.c -o $@ -lpython3.8 mylibwithpy.so: mylibwithpy.o gcc -L/usr/lib/x86_64-linux-gnu -Wall -fPIC -shared -Wl,-soname,$@ -o $@ mylibwithpy.o -lpython3.8 At this point, ldd seems to check out as so far "ok": (venv) user@4189d31a5bbe:~$ ldd mylibwithpy.so linux-vdso.so.1 (0x00007ffc36ba6000) libpython3.8.so.1.0 => /lib/x86_64-linux-gnu/libpython3.8.so.1.0 (0x00007f295c5ea000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f295c3f8000) libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f295c3ca000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f295c3ae000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f295c38b000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f295c385000) libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f295c37e000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f295c22f000) /lib64/ld-linux-x86-64.so.2 (0x00007f295cb4f000) Example runtime This is where things start going wrong... First, let's try just a basic python driver, using python's ctypes library. driver.py from ctypes import cdll if __name__ == "__main__": print("opening mylibwithpy.so..."); my_so = cdll.LoadLibrary("mylibwithpy.so") print(".so object: ", my_so) print(".so object's 'someFunctionWithPython': ", my_so.someFunctionWithPython) print("calling someFunctionWithPython..."); my_so.someFunctionWithPython() This basic script will result in a (Segmentation fault) error at the point that numpy is trying to be imported in the lib function: (venv) user@4189d31a5bbe:~$ LD_LIBRARY_PATH=. python driver.py opening mylibwithpy.so... .so object: <CDLL 'mylibwithpy.so', handle 19f43b0 at 0x7fd873b72610> .so object's 'someFunctionWithPython': <_FuncPtr object at 0x7fd873ac91c0> calling someFunctionWithPython... python alread initialized. importing numpy... Segmentation fault (core dumped) Ok, so I'm not really sure how to even start to debug this guy, so let's try again with an equivalent driver in C: driver.c: #include <dlfcn.h> #include <stdio.h> #include <stdlib.h> int main() { printf("opening mylibwithpy.so...\n"); void* mylibwithpy_so = dlopen("mylibwithpy.so", RTLD_LAZY); if (mylibwithpy_so == NULL){ printf("an error occurred during loading mylibwithpy.so: \n%s\n", dlerror()); exit(1); } void (*soFunc)(); soFunc = dlsym(mylibwithpy_so, "someFunctionWithPython"); if (soFunc == NULL){ printf("an error occurred during loading symbol someFunctionWithPython: \n%s\n", dlerror()); exit(1); } soFunc(); return 0; } Compiling this program via: gcc -L/usr/lib/x86_64-linux-gnu -Wall driver.c -o cdriver -ldl And running this driver results in an interestingly-much-more verbose error being reported: (venv) user@4189d31a5bbe:~$ LD_LIBRARY_PATH=. ./cdriver opening mylibwithpy.so... Initializing python... importing numpy... Warning: error during import: Traceback (most recent call last): File "/home/user/venv/lib/python3.8/site-packages/numpy/core/__init__.py", line 23, in <module> from . 
import multiarray File "/home/user/venv/lib/python3.8/site-packages/numpy/core/multiarray.py", line 10, in <module> from . import overrides File "/home/user/venv/lib/python3.8/site-packages/numpy/core/overrides.py", line 6, in <module> from numpy.core._multiarray_umath import ( ImportError: /home/user/venv/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyObject_SelfIter During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/user/venv/lib/python3.8/site-packages/numpy/__init__.py", line 141, in <module> from . import core File "/home/user/venv/lib/python3.8/site-packages/numpy/core/__init__.py", line 49, in <module> raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.8 from "/home/user/venv/bin/python3" * The NumPy version is: "1.24.2" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: /home/user/venv/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyObject_SelfIter AHA!(...?) According to this, it seems numpy has some shared objects of its own, but is somehow missing some symbols (PyObject_SelfIter, to be precise--side note, this symbol is listed in the Python/C API "stable ABI contents"): /home/user/venv/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so: undefined symbol: PyObject_SelfIter (Another side note: the listed numpy docs reference https://numpy.org/devdocs/user/troubleshooting-importerror.html does not seem very applicable to the situation or the error I'm getting, but a more discerning eye may find something helpful that I overlooked...) 
Also, a quick ldd check shows that indeed libpython is not among the dynamically linked libraries: (venv) user@4189d31a5bbe:~$ ldd /home/user/venv/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-x86_64-linux-gnu.so linux-vdso.so.1 (0x00007ffed7df2000) libopenblas64_p-r0-15028c96.3.21.so => /home/user/venv/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libopenblas64_p-r0-15028c96.3.21.so (0x00007f98a1ae6000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f98a198f000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f98a196c000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f98a177a000) /lib64/ld-linux-x86-64.so.2 (0x00007f98a3f29000) libgfortran-040039e1.so.5.0.0 => /home/user/venv/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libgfortran-040039e1.so.5.0.0 (0x00007f98a12ed000) libquadmath-96973f99.so.0.0.0 => /home/user/venv/lib/python3.8/site-packages/numpy/core/../../numpy.libs/libquadmath-96973f99.so.0.0.0 (0x00007f98a10ae000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f98a1092000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f98a1077000) Testing the linker issue hypothesis Since we're getting an undefined symbol error printed out in the cdriver program, I can try to force-link libpython to the cdriver via: gcc -L/usr/lib/x86_64-linux-gnu -Wall driver.c -o cdriver -ldl -Wl,--no-as-needed -lpython3.8 and lo and behold this time the program completes without error: (venv) user@4189d31a5bbe:~$ LD_LIBRARY_PATH=. ./cdriver opening mylibwithpy.so... Initializing python... importing numpy... (venv) user@4189d31a5bbe:~$ Note that in the real build I will not have access to compiling/linking the runtime program, so this check is NOT a solution but seems to help diagnose the problem? SO... down to the actual questions What is needed to, for example, get the python driver to work as expected? Why do numpy's internal shared objects files not link to libpython3.8.so? What am I missing?! :sob: I'm hoping I'm just missing some small but crucial step in compiling the .so or configuring python. EDIT: I am able to consistently recreate the issue using a very simple docker container: Dockerfile: FROM ubuntu:20.04 RUN apt-get update; \ DEBIAN_FRONTEND=noninteractive apt-get install -y \ build-essential \ vim \ python3.8-dev \ python3.8-venv RUN useradd --create-home --shell /bin/bash user USER user WORKDIR /home/user RUN python3 -m venv venv RUN /bin/bash -c "source venv/bin/activate && pip install numpy" Makefile: all: mylibwithpy.so cdriver clean: rm mylibwithpy.o mylibwithpy.so cdriver cdriver: gcc -L/usr/lib/x86_64-linux-gnu -Wall driver.c -o $@ -ldl mylibwithpy.o: gcc -L/usr/lib/x86_64-linux-gnu -I/usr/include/python3.8 -Wall -c mylibwithpy.c -o $@ -lpython3.8 mylibwithpy.so: mylibwithpy.o gcc -L/usr/lib/x86_64-linux-gnu -Wall -fPIC -shared -Wl,-soname,$@ -o $@ mylibwithpy.o -lpython3.8 Then executing with the same command as above LD_LIBRARY_PATH=. ./cdriver | When using functions exported by cdll.LoadLibrary, you're releasing the Global Interpreter Lock (GIL) as you enter the method. If you want to call python code, you need to re-acquire the lock. e.g. void someFunctionWithPython() { ... 
PyGILState_STATE state = PyGILState_Ensure(); printf("importing numpy...\n"); PyObject* numpy = PyImport_ImportModule("numpy"); if (numpy == NULL) { printf("Warning: error during import:\n"); PyErr_Print(); Py_Finalize(); PyGILState_Release(state); exit(1); } PyObject* repr = PyObject_Repr(numpy); PyObject* str = PyUnicode_AsEncodedString(repr, "utf-8", "~E~"); const char *bytes = PyBytes_AS_STRING(str); printf("REPR: %s\n", bytes); Py_XDECREF(repr); Py_XDECREF(str); PyGILState_Release(state); return; } $ gcc $(python3.9-config --includes --ldflags --embed) -shared -o mylibwithpy.so mylibwithpy.c $ LD_LIBRARY_PATH=. python driver.py opening mylibwithpy.so... .so object: <CDLL 'mylibwithpy.so', handle 1749f50 at 0x7fb603702fa0> .so object's 'someFunctionWithPython': <_FuncPtr object at 0x7fb603679040> calling someFunctionWithPython... python alread initialized. importing numpy... REPR: <module 'numpy' from '/home/me/test/.venv/lib/python3.9/site-packages/numpy/__init__.py'> Also if you look at PyDLL it says: Instances of this class behave like CDLL instances, except that the Python GIL is not released during the function call, and after the function execution the Python error flag is checked. If the error flag is set, a Python exception is raised. So if you use PyDLL for your driver then you wouldn't need to re-acquire the lock in the C code: from ctypes import PyDLL if __name__ == "__main__": print("opening mylibwithpy.so..."); my_so = PyDLL("mylibwithpy.so") print(".so object: ", my_so) print(".so object's 'someFunctionWithPython': ", my_so.someFunctionWithPython) print("calling someFunctionWithPython..."); my_so.someFunctionWithPython() UPDATE Why do numpy's internal shared objects files not link to libpython3.8.so? I believe numpy is setup this way because it expects to be called by the python interpreter where libpython will already be loaded and have the symbols made available. That said, we can make the python libraries available for when mylibwithpy calls the import of numpy by using RTLD_GLOBAL. The symbols defined by this shared object will be made available for symbol resolution of subsequently loaded shared objects. The update to your code is simple: void* mylibwithpy_so = dlopen("mylibwithpy.so", RTLD_LAZY | RTLD_GLOBAL); Now all of the python libraries will be included because they are a dependency of mylibwithpy, meaning they will be available by the time that numpy loads its own shared libraries. Alternatively, you could choose to load just libpythonX.Y.so with RTLD_GLOBAL to prior to loading mylibwithpy.so to minimize the amount symbols made globally available. printf("opening libpython3.9.so...\n"); void* libpython3_so = dlopen("libpython3.9.so", RTLD_LAZY | RTLD_GLOBAL); if (libpython3_so == NULL){ printf("an error occurred during loading libpython3.9.so: \n%s\n", dlerror()); exit(1); } printf("opening mylibwithpy.so...\n"); void* mylibwithpy_so = dlopen("mylibwithpy.so", RTLD_LAZY); if (mylibwithpy_so == NULL){ printf("an error occurred during loading mylibwithpy.so: \n%s\n", dlerror()); exit(1); } Docker setup I used to recreate and test: FROM ubuntu:20.04 ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update && apt-get install -y \ build-essential \ python3.9-dev \ python3.9-venv RUN mkdir /workspace WORKDIR /workspace RUN python3.9 -m venv .venv RUN .venv/bin/python -m pip install numpy COPY . 
/workspace RUN gcc -o mylibwithpy.so mylibwithpy.c -fPIC -shared \ $(python3.9-config --includes --ldflags --embed --cflags) RUN gcc -o cdriver driver.c -L/usr/lib/x86_64-linux-gnu -Wall -ldl ENV LD_LIBRARY_PATH=/workspace # Then run: . .venv/bin/activate && ./cdriver | 3 | 3 |
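A note on trying this from Python: both remedies described in the answer above can also be exercised straight from ctypes. This is a minimal sketch, assuming the mylibwithpy.so built in the question sits in the current directory and that the interpreter's shared library is named libpython3.8.so.1.0 (adjust the soname for your platform and Python version).

import ctypes

# Preload libpython with RTLD_GLOBAL so that numpy's C extensions, loaded
# later by the embedded import, can resolve symbols like PyObject_SelfIter.
ctypes.CDLL("libpython3.8.so.1.0", mode=ctypes.RTLD_GLOBAL)

# PyDLL (unlike cdll/CDLL) keeps the GIL held while the exported C function
# runs, which is what the embedded Python calls inside it require.
my_so = ctypes.PyDLL("./mylibwithpy.so")
my_so.someFunctionWithPython()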
76,068,163 | 2023-4-20 | https://stackoverflow.com/questions/76068163/is-it-possible-to-get-list-of-parameters-passed-into-a-constructor | Let's say I have this code snippet: class A: def __init__(self, p1=1, p2=None, p3="test"): self.p1 = p1 self.p2 = p2 self.p3 = p3 a1 = A() a2 = A(p1=2, p3="blah") What I want to do is to somehow retrieve ONLY the named parameters that are passed into the constructor, not including the optional parameters with default values. To clarify, this does NOT mean to use inspect to return the list of all parameters. So for example for a1 this would be {} since the constructor is called with no parameters, and for a2 this would be {"p1": 2, "p3": "blah"} since the constructor is called with p1 and p3. I've tried googling this and it seems that this isn't really possible, or at least I haven't found a way to do this. inspect does not work because I want to do this for each individual instantiation. using locals() within the constructor also doesn't work because it includes the optional parameters as well even if they aren't explicitly passed in. | You can intercept the arguments in __new__ like this: class A: def __init__(self, p1=1, p2=None, p3="test"): self.p1 = p1 self.p2 = p2 self.p3 = p3 def __new__(cls, *args, **kwargs): print(f"{kwargs = }") return super().__new__(cls) >>> a1 = A() kwargs = {} >>> a2 = A(p1=2, p3="blah") kwargs = {'p1': 2, 'p3': 'blah'} | 5 | 4 |
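If you also want to keep the explicitly passed arguments around after construction, the interception can store them; a small sketch (the _passed_kwargs attribute name is my own invention, not anything standard):

class A:
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        obj._passed_kwargs = dict(kwargs)  # only what the caller actually spelled out
        return obj

    def __init__(self, p1=1, p2=None, p3="test"):
        self.p1 = p1
        self.p2 = p2
        self.p3 = p3

a1 = A()
a2 = A(p1=2, p3="blah")
print(a1._passed_kwargs)  # {}
print(a2._passed_kwargs)  # {'p1': 2, 'p3': 'blah'}

Note this only captures keyword arguments; positional arguments would land in args and need their own handling if you allow them.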
76,059,875 | 2023-4-20 | https://stackoverflow.com/questions/76059875/add-annotation-between-line-gap-in-plotly | I have a graph like this: Instead of the days being on top of the symbol, I was wondering if there is a way I can add this annotation between the lines? From one dot to another. I apologize if in case, this is a possible duplicate. This is my expected output: This is my current code for the annotation: fig.add_annotation( go.layout.Annotation( x=row["order_date"], y=row["sales"], text=f"{row['time_diff']} days", showarrow=False, align='center', yanchor='auto', yshift=10, textangle=-0 ) Any suggestions on how can I do this? | What you can do is create a new dataframe for your annotations by ordering your original df by the order_date, then take the average datetime and average sales between consecutive rows to determine where to place the annotations. To obtain the text, you can take the difference in time between consecutive orders. Your dataframe for annotations would look something like this: time_diff sales order_date 0 5 days 5.0 2019-08-27 12:00:00 1 11 days 5.0 2019-09-04 12:00:00 2 0 days 7.5 2019-09-10 00:00:00 3 0 days 15.0 2019-09-10 00:00:00 4 5 days 20.0 2019-09-12 12:00:00 5 0 days 17.5 2019-09-15 00:00:00 6 139 days 15.0 2019-11-23 12:00:00 7 NaT NaN NaT And here is a figure I made using some sample data similar to yours: import pandas as pd import plotly.express as px import plotly.graph_objects as go ## create sample data and figure similar to yours df = pd.DataFrame({ 'order_date': ['2019-08-25 00:00:00','2019-08-30','2019-09-10','2019-09-10','2019-09-10','2019-09-15','2019-09-15','2020-02-01'], 'sales': [5,5,5,10,20,20,15,15], }) df['order_date'] = pd.to_datetime(df['order_date']) fig = px.line(df, x='order_date', y='sales', markers=True, ) fig.update_traces(marker_color='blue', line_color='darkgrey') ## between consecutive points: ## get the difference in time, and the average sales df_diff = pd.DataFrame({ 'time_diff': df['order_date'].diff(), 'sales': (df['sales'] + df['sales'].shift(-1)) / 2, }) df_diff['order_date'] = df['order_date'] + (df_diff['time_diff'] / 2).shift(-1) df_diff['time_diff'] = df_diff['time_diff'].shift(-1) y_axis_range = 1.25*(df.sales.max() - df.sales.min()) fig.add_trace( go.Scatter( x=df_diff["order_date"], y=df_diff["sales"] + 0.01*y_axis_range, text=df_diff["time_diff"].astype(str), mode='text', showlegend=False, ) ) fig.show() | 3 | 3 |
76,057,261 | 2023-4-19 | https://stackoverflow.com/questions/76057261/why-does-react-to-flask-call-fail-with-cors-despite-flask-cors-being-included | I need to read data from flask into react. React: const data = { name: 'John', email: '[email protected]' }; axios.post('http://127.0.0.1:5000/api/data', data) .then(response => { console.log(response); }) .catch(error => { console.log(error); }); Flask: from flask import Flask, request, jsonify from flask_cors import CORS app = Flask(__name__) CORS(app, origins='http://localhost:3000') @app.route('/api/data', methods=['POST']) def process_data(): data = request.get_json() # process the data response_data = {'message': 'Success'} return jsonify(response_data), 200 if __name__ == '__main__': app.run(debug=True) And I get this error in the browser: Access to XMLHttpRequest at 'http://127.0.0.1:5000/api/data' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. ShortPage.jsx:24 AxiosError {message: 'Network Error', name: 'AxiosError', code: 'ERR_NETWORK', config: {…}, request: XMLHttpRequest, …} How can I fix this? | The problem is that you are limiting the path /api/data to the HTTP method POST @app.route('/api/data', methods=['POST']) CORS requires the client to make an OPTIONS 'pre-flight' call on that path to find out which origins are allowed (and other stuff like headers). Because your code only permits POST, the OPTIONS call is declined and the CORS mechanism fails. The fix is to add OPTIONS to the list of permitted methods on the route. @app.route('/api/data', methods=['POST', 'OPTIONS']) | 4 | 5 |
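For reference, here is roughly what the corrected Flask side might look like once OPTIONS is permitted; this is just the question's app with the methods list changed, not a new recipe:

from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app, origins='http://localhost:3000')

# OPTIONS is allowed so the browser's CORS preflight request can succeed.
@app.route('/api/data', methods=['POST', 'OPTIONS'])
def process_data():
    data = request.get_json()
    # process the data
    return jsonify({'message': 'Success'}), 200

if __name__ == '__main__':
    app.run(debug=True)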
76,062,184 | 2023-4-20 | https://stackoverflow.com/questions/76062184/remove-last-character-if-it-is-number-from-data-frames-that-are-stored-as-a-valu | I wrote the code as follows to remove last character of Data Frames columns if it is number. Data Frames are stored as a values of dictionary in python and their name as a key of dictionary. my code doesn't produce the right result, how should I develop it? for key,val in df.items(): if key != "name1": for col in val.columns: m = re.search(r'\d+$', col) if m is not None: col = col[:-1] | You can simplify your code by just replacing a single digit at the end of a column name with an empty string. This will not affect any values that don't end with a digit: for key, val in df.items(): if key != "name1": val.columns = val.columns.str.replace(r'\d$', '', regex=True) | 3 | 3 |
76,060,546 | 2023-4-20 | https://stackoverflow.com/questions/76060546/what-allows-nan-to-work-with-the-python-list-inclusion-operator | Pretty much anyone who works with IEEE floating-point values has run into NaN, or "not a number", at some point. Famously, NaN is not equal to itself. >>> x = float('nan') >>> x == x False Now, I had come to terms with this, but there's a strange behavior I'm struggling to wrap my head around. Namely, >>> x in [x] True I had always assumed that list.__contains__ was written something like def __contains__(self, element): for x in self: if element == x: return True return False i.e., it used __eq__ on the relevant data type internally. And indeed it does. If I define a custom class with an __eq__ method of my own design, then I can verify that Python does in fact call __eq__ when doing the inclusion check. But then how can there exist a value x (NaN in our case) such that x == x is false but x in [x] is true? We can observe the same behavior with a custom __eq__ as well. class Example: def __eq__(self, other): return False x = Example() print(x == x) # False print(x in [x]) # True | According to the docs it first uses the is operator to check for equality, and since x is x is True, x in [x] is also True: For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y). Note that identity (is) is different from equality (==). Also note that not all NaN values are represented by the same object, so if you try your test with two different NaN objects: >>> float('nan') in [float('nan')] False you'll see different results. | 5 | 5 |
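The documented membership rule is easy to reproduce in pure Python, which makes the NaN behaviour visible; this is only an illustration of the rule, not how list is actually implemented in C:

def contains(seq, item):
    # Identity short-circuits equality, exactly as the docs describe.
    return any(item is e or item == e for e in seq)

x = float('nan')
print(contains([x], x))              # True: same object, so `is` succeeds
print(contains([float('nan')], x))   # False: different objects, and NaN != NaN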
76,059,187 | 2023-4-19 | https://stackoverflow.com/questions/76059187/how-do-i-indicate-that-the-value-of-an-enum-is-an-unstable-implementation-detai | The official Enum HOWTO has this example: class Planet(Enum): MERCURY = (3.303e+23, 2.4397e6) VENUS = (4.869e+24, 6.0518e6) EARTH = (5.976e+24, 6.37814e6) MARS = (6.421e+23, 3.3972e6) JUPITER = (1.9e+27, 7.1492e7) SATURN = (5.688e+26, 6.0268e7) URANUS = (8.686e+25, 2.5559e7) NEPTUNE = (1.024e+26, 2.4746e7) def __init__(self, mass, radius): self.mass = mass # in kilograms self.radius = radius # in meters @property def surface_gravity(self): # universal gravitational constant (m3 kg-1 s-2) G = 6.67300E-11 return G * self.mass / (self.radius * self.radius) >>> Planet.EARTH.value (5.976e+24, 6378140.0) >>> Planet.EARTH.surface_gravity 9.802652743337129 Suppose I'm doing something like this, and I want to treat the .values, i.e. the tuples like (3.303e+23, 2.4397e6), as unstable implementation details of the Planet API. I don't want my API consumers to ever rely on them. Instead, I want them to use the properties that I explicitly expose myself, like .surface_gravity. Is there a conventional way to indicate this? I'm currently just adding a note in the docstring like this: class Planet(Enum): """.value is an implementation detail. Use .surface_gravity instead.""" But that seems too easy to miss. If it were a normal class, I would just make it ._value instead of .value. But here, .value is added automatically because I subclassed from Enum, and I don't see a way to override that. | The exact value of any enum is nearly always an implementation detail; the reason it's exposed at all is that sometimes it's useful to be able to access it. There are several ways to de-emphasize its presence: change the repr() of that enum class def __repr__(self): return '<%s.%s>' % (self.__class__.__name__, self._name_) have a custom __new__ to make the value obviously "wrong" to treat as surface gravity (or whatever) (probably combined with the __repr__ above): def __new__(cls, mass, radius): member = object.__new__(cls) member._value_ = len(cls._member_names_) member.mass = mass member.radius = radius return member you could combine with a dataclass (which automatically updates the repr): @dataclass class PlanetData: mass: float radius: float class Planet(PlanetData, Enum): MERCURY = (3.303e+23, 2.4397e6) VENUS = (4.869e+24, 6.0518e6) # etc >>> Planet.VENUS <Planet.VENUS: mass=4.869e+24, radius=6051800.0> Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 3 | 2 |
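Putting the __repr__ and custom __new__ suggestions together, a runnable sketch; the ordinal stored in _value_ is deliberately arbitrary, which is the point:

from enum import Enum

class Planet(Enum):
    MERCURY = (3.303e+23, 2.4397e6)
    EARTH = (5.976e+24, 6.37814e6)

    def __new__(cls, mass, radius):
        member = object.__new__(cls)
        member._value_ = len(cls._member_names_)  # opaque ordinal (0, 1, ...), not the data tuple
        member.mass = mass
        member.radius = radius
        return member

    def __repr__(self):
        return '<%s.%s>' % (self.__class__.__name__, self._name_)

print(repr(Planet.EARTH))   # <Planet.EARTH>
print(Planet.EARTH.value)   # 1, clearly nothing worth relying on
print(Planet.EARTH.mass)    # 5.976e+24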
76,056,906 | 2023-4-19 | https://stackoverflow.com/questions/76056906/secure-route-in-fastapi | I have a Flask-based backend REST API and I want to migrate to FastAPI. However, I am not sure how to implement secure routes and create access tokens in FastAPI. In Flask, we have methods from the flask_jwt_extended library such as: @jwt_required() decorator for secure routes create_access_token() function for creating JWT tokens. Does FastAPI have a similar feature or capability, or how can I implement this in FastAPI? Thank you in advance. Here is an example implementation of secure route and create access token in Flask: import hashlib import traceback from datetime import timedelta from http import HTTPStatus from flask import Flask, jsonify, request from flask_jwt_extended import JWTManager, jwt_required, get_jwt_identity, create_access_token app = Flask(__name__) jwt = JWTManager(app) app.config["JWT_SECRET_KEY"] = "very-secret1234567890" app.config["JWT_ACCESS_TOKEN_EXPIRES"] = timedelta(minutes=15) app.config["JWT_REFRESH_TOKEN_EXPIRES"] = timedelta(days=30) host = "localhost" port = 5000 test_password = "test_password" db = [ { "username": "test_user", "email": "test_email.gmail.com", "password": hashlib.sha256(test_password.encode()).hexdigest() } ] @app.route('/login', methods=['POST']) def login(): try: json_data = request.get_json() email = json_data.get("email") password = json_data.get("password") if not email or not password: response = jsonify(error="'email' and 'password' are required") return response, HTTPStatus.BAD_REQUEST # Check if email exists in DB user_result = [user for user in db if user["email"].lower() == email.lower()] # Check if the password is correct encoded_password = hashlib.sha256(password.encode()).hexdigest() if not user_result or user_result[0]["password"] != encoded_password: response = jsonify(error="Wrong credentials") return response, HTTPStatus.BAD_REQUEST user = user_result[0] # Generate JWT token and return it access_token = create_access_token(identity=user["username"]) response = jsonify(username=user["username"], token=access_token) return response, HTTPStatus.OK except Exception as e: print(f"Error: {e}") print(traceback.format_exc()) response = jsonify(result={"error": "Server error"}) return response, HTTPStatus.INTERNAL_SERVER_ERROR @app.route('/secured_page', methods=['GET']) @jwt_required() def __create_participant(): try: response = jsonify(message="You are logged in as {}".format(get_jwt_identity())) return response, HTTPStatus.OK except Exception as e: print(f"Error: {e}") print(traceback.format_exc()) response = jsonify(result={"error": "Server error"}) return response, HTTPStatus.INTERNAL_SERVER_ERROR if __name__ == '__main__': app.run(host=host, port=port, debug=True) | You can use libs like python-jose for JWT functionalities, but you have to implement jwt_required and create_access_token by yourself, for example create_access_token from jose import JWTError, jwt from datetime import datetime, timedelta def create_access_token(data: dict): to_encode = data.copy() expire = datetime.utcnow() + timedelta(minutes=15) to_encode.update({"exp": expire}) encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM) return encoded_jwt jwt_required (instead of a decorator, you create a function and set it as a dependency of your route) from fastapi import Depends, HTTPException, status from fastapi.security import OAuth2PasswordBearer from jose import JWTError, jwt oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token") async def 
jwt_required(token: str = Depends(oauth2_scheme)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM]) username: str = payload.get("sub") if username is None: raise credentials_exception except JWTError: raise credentials_exception user = get_user(username=username) if user is None: raise credentials_exception @app.get('/secured_page', dependencies=[Depends(jwt_required)]) def __create_participant(): ... | 4 | 1 |
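To show how those helpers might be wired into actual routes, a sketch: create_access_token, jwt_required, SECRET_KEY and ALGORITHM are the pieces defined in the answer above, while check_credentials is a placeholder for your own user lookup.

from fastapi import FastAPI, Depends, HTTPException, status
from fastapi.security import OAuth2PasswordRequestForm

app = FastAPI()

def check_credentials(username: str, password: str) -> bool:
    return password == "test_password"  # placeholder only, replace with a real check

@app.post("/token")
async def login(form_data: OAuth2PasswordRequestForm = Depends()):
    if not check_credentials(form_data.username, form_data.password):
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST,
                            detail="Wrong credentials")
    return {"access_token": create_access_token({"sub": form_data.username}),
            "token_type": "bearer"}

@app.get("/secured_page", dependencies=[Depends(jwt_required)])
async def secured_page():
    return {"message": "You are logged in"}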
76,055,891 | 2023-4-19 | https://stackoverflow.com/questions/76055891/fastapi-background-task-takes-up-to-100-times-longer-to-execute-than-calling-fun | I have simple fastAPI endpoint deployed on Google Cloud Run. I wrote the Workflow class myself. When the Workflow instance is executed, some steps happen, e.g., the files are processed and the result are put in a vectorstore database. Usually, this takes a few seconds per file like this: from .workflow import Workflow ... @app.post('/execute_workflow_directly') async def execute_workflow_directly(request: Request) ... # get files from request object workflow = Workflow.get_simple_workflow(files=files) workflow.execute() return JSONResponse(status_code=200, content={'message': 'Successfully processed files'}) Now, if many files are involved, this might take a while, and I don't want to let the caller of the endpoint wait, so I want to run the workflow execution in the background like this: from .workflow import Workflow from fastapi import BackgroundTasks ... def run_workflow_in_background(workflow: Workflow): workflow.execute() @app.post('/execute_workflow_in_background') async def execute_workflow_in_background(request: Request, background_tasks: BackgroundTasks): ... # get files from request object workflow = Workflow.get_simple_workflow(files=files) background_tasks.add_task(run_workflow_in_background, workflow) return JSONResponse(status_code=202, content={'message': 'File processing started'}) Testing this with still only one file, I already run into a problem: Locally, it works fine, but when I deploy it to my Google Cloud Run service, execution time goes through the roof: In one example, background execution it took almost ~500s until I saw the result in the database, compared to ~5s when executing the workflow directly. I already tried to increase the number of CPU cores to 4 and subsequently the number of gunicorn workers to 4 as well. Not sure if that makes much sense, but it did not decrease the execution times. Can I solve this problem by allocating more resources to Google Cloud run somehow or is my approach flawed and I'm doing something wrong or should already switch to something more sophisticated like Celery? Edit (not really relevant to the problem I had, see accepted answer): I read the accepted answer to this question and it helped clarify some things, but doesn't really answer my question why there is such a big difference in execution time between running directly vs. as a background task. Both versions call the CPU-intensive workflow.execute() asynchronously if I'm not mistaken. I can't really change the endpoint's definition to def, because I am awaiting other code inside. I tried changing the background function to async def run_workflow_in_background(workflow: Workflow): await run_in_threadpool(workflow.execute) and async def run_workflow_in_background(workflow: Workflow): loop = asyncio.get_running_loop() with concurrent.futures.ThreadPoolExecutor() as pool: res = await loop.run_in_executor(pool, workflow.execute) and async def run_workflow_in_background(workflow: Workflow): res = await asyncio.to_thread(workflow.execute) and async def run_workflow_in_background(workflow: Workflow): loop = asyncio.get_running_loop() with concurrent.futures.ProcessPoolExecutor() as pool: res = await loop.run_in_executor(pool, workflow.execute) as suggested and it didn't help. I tried increasing the number of workers as suggested and it didn't help. 
I guess I will look into switching to Celery, but still eager to understand why it works so slowly with fastAPI background tasks. | With Cloud Function, like Cloud Run, the CPU is allocated (and billed) only when a request is processed. A request is considered being processed between the reception of the request and the sending of the response. The rest of the time, the CPU is throttled (below 5%). That's being said, look back to your functions. The fastest one get the data, process the data, and send the response. The CPU is allocated full time during the processing. The slowest one get the data, run a task in background (multi thread, fork or whatever) and send the response immediately. After the response sent, the CPU is throttled, and the processing begin. Of course, it is very slow, you are out of the CPU allocation boundaries. To solve that, you can use Cloud Run with the option, CPU Always allocated (or no-cpu-throttling with the GCLOUD command line). There is no option with Cloud Functions | 5 | 4 |
76,056,936 | 2023-4-19 | https://stackoverflow.com/questions/76056936/mysql-data-folder-keeps-growing-even-after-drop-table | I'm using MySQL (8.0.33-winx64, for Windows) with Python 3 and mysql.connector package. Initially my mysql-8.0.33-winx64\data folder was rather small: < 100 MB. Then after a few tests of CREATE TABLE..., INSERT... and DROP TABLE..., I notice that even after I totally drop the tables, the data folder keeps growing: #innodb_redo folder seems to stay at max 100 MB #innodb_temp seems to be small binlog.000001: this one seems to be the culprit: it keeps growing even if I drop tables! How to clean this data store after I drop tables, to avoid unused disk space with MySQL? Is it possible directly from Python3 mysql.connector API? Or from a SQL command to be execute (I already tried "OPTIMIZE" without success)? Or do I need to use an OS function manually (os.remove(...))? Note: the config file seems to be in mysql-8.0.33-winx64\data\auto.cnf in the portable Windows version (non-used as a service, but started with mysqld --console) (no default config file is created after a first run of the server, we can create it in mysql-8.0.33-winx64\my.cnf) | You can disable the binary log, but only by setting disable_log_bin in your my.cnf file and restarting the MySQL Server. (See Disable MySQL binary logging with log_bin variable ) You can't change binary logging dynamically. See https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin You can make the binary log automatically expire old logs as it rolls over to a new binlog file. This helps to limit the overall storage. See https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_expire_logs_seconds You do need to understand what the binary log is used for before you decide to disable it. You might need it! The binary log is commonly used for three or four things: Replication Point-in-time recovery Change Data Capture (CDC) tools A poor form of change auditing. A real audit log is better, but some sites don't have the audit log plugin installed. can you edit to clarify the difference between undo log vs. bin log? The binary log is for logging logical changes to your data. Nothing is written to the binary log until you COMMIT a transaction. It is not used for rollback, because by definition anything in the binary log has been committed. The binary log applies to all storage engines in MySQL. The undo log is only for the InnoDB storage engine. As you make changes to data during a transaction, the old version of the data is added to the undo log (this is also called the rollback segment in some documentation). So if you ROLLBACK, InnoDB can restore the original data. If you COMMIT, then the contents of the undo log for that transaction is discarded. Notes: setting disable_log_bin in my.cf and restarting the MySQL server won't delete old binlogs. if you set disable_log_bin and restart the server first, and then do FLUSH LOGS; PURGE BINARY LOGS BEFORE NOW();, it won't delete old binlogs you have to do FLUSH LOGS; PURGE BINARY LOGS BEFORE NOW(); first, and only then edit the config mysql-8.0.33-winx64\my.cnf to include: [mysqld] disable_log_bin Then the old logs are deleted, and no new binlog will be created. | 3 | 5 |
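Since the question uses mysql.connector, the purge suggested above can also be issued from Python; a sketch with made-up connection settings, and keep in mind it permanently deletes binlogs you might still want for replication or point-in-time recovery:

import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", port=3306,
                               user="root", password="your_password")
cur = conn.cursor()
cur.execute("FLUSH LOGS")
cur.execute("PURGE BINARY LOGS BEFORE NOW()")
cur.close()
conn.close()

The account needs administrative privileges for these statements.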
76,056,289 | 2023-4-19 | https://stackoverflow.com/questions/76056289/using-pydantic-parent-attribute-to-validate-child | Is it possible to use a containing object's attribute during the validation of a child object in a pydantic model? Given the json data: # example.json { "multiplier": 5, "field_1": { "value": 1 }, "field_2": { "value": 2 } } and the corresponding Pydantic model: # example.py from pydantic import BaseModel, validator class Item(BaseModel): value: int class Container(BaseModel): multiplier: int field_1: Item field_2: Item is it possible to use the Container object's multiplier attribute during validation of the Item values? For instance, I'd like to do something like this to Item at runtime: class Item(BaseModel): value: int @validator("value") @classmethod def validate_value(cls, value): return value # * multiplier # <--- can I get access to Container's multiplier here? but I cannot determine if it possible to get access to the Container.multiplier value in a case like this? In my actual use case, the nesting is much, much deeper and so I would prefer not have the validator up at the Container level as access becomes fairly complicated, but I also do not want to duplicate the multiplier value down at the Item level? Is there any way to pass parameters up and down the object hierarchy within a model of this sort? | Simplest approach is to perform validation on the parent instead of the child: from pydantic import BaseModel, validator class Item(BaseModel): value: int class Container(BaseModel): multiplier: int field_1: Item field_2: Item @validator("field_1", "field_2") def validate_value(cls, v, values): """Validate each item""" m = values["multiplier"] If you want to define the validator on the child, you can create a function and then call the validation function from the parent: class Item(BaseModel): value: int @classmethod def validate_value(cls, v, **kwargs): """Validate each item""" m = kwargs.get("multiplier") if m * v.value < 10: raise ValueError return v class Container(BaseModel): multiplier: int field_1: Item field_2: Item @validator("field_1", "field_2", pre=False) def validate_items(cls, v, values): return Item.validate_value(v, **values) Another alternative is to pass the multiplier as a private model attribute to the children, then the children can use the pydantic validation function, but you'll still need to assign dynamically to the children: from pydantic import BaseModel, Field, validator class Item(BaseModel): multiplier: int # exclude from parent serialization, workaround for validation value: int @validator("value", pre=False) def validate_value(cls, v, values): """Validate each item""" m = values["multiplier"] if m * v < 10: raise ValueError return v class Container(BaseModel): multiplier: int field_1: Item = Field(..., exclude={'multiplier'}) field_2: Item = Field(..., exclude={'multiplier'}) @validator("field_1", "field_2", pre=True) def validate_items(cls, v, values): # Construct from a value, another workaround if isinstance(v, int): return Item(value=v, multiplier=values["multiplier"]) elif isinstance(v, Item): return Item(value=v.value, multiplier=values["multiplier"]) if __name__ == '__main__': c = Container( multiplier=1, field_1=11, field_2=22 ) | 5 | 3 |
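The first snippet in the answer stops right before the actual check; for completeness, here is a runnable version of that parent-level approach, where the multiplier * value >= 10 rule is only an illustrative assumption (same pydantic v1 validator style as the answer):

from pydantic import BaseModel, validator

class Item(BaseModel):
    value: int

class Container(BaseModel):
    multiplier: int
    field_1: Item
    field_2: Item

    @validator("field_1", "field_2")
    def validate_value(cls, v, values):
        # `values` holds the fields validated so far, so multiplier is available here.
        if "multiplier" in values and values["multiplier"] * v.value < 10:
            raise ValueError("multiplier * value must be at least 10")
        return v

Container(multiplier=5, field_1={"value": 2}, field_2={"value": 3})    # passes
# Container(multiplier=1, field_1={"value": 2}, field_2={"value": 3})  # raises ValidationError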
76,057,225 | 2023-4-19 | https://stackoverflow.com/questions/76057225/replace-a-part-of-string-in-pandas-data-column-replace-doesnt-work | I have been trying to clean my data column by taking a part of the text out. Unfortunately cannot get my head around it. I tried using the .replace method in pandas series, but that did not seem to have worked df['Salary Estimate'].str.replace(' (Glassdoor est.)', '',regex=True) 0 $53K-$91K (Glassdoor est.) 1 $63K-$112K (Glassdoor est.) 2 $80K-$90K (Glassdoor est.) 3 $56K-$97K (Glassdoor est.) 4 $86K-$143K (Glassdoor est.) ... 922 -1 925 -1 928 $59K-$125K (Glassdoor est.) 945 $80K-$142K (Glassdoor est.) 948 $62K-$113K (Glassdoor est.) Name: Salary Estimate, Length: 600, dtype: object What I expected was 0 $53K-$91K 1 $63K-$112K 2 $80K-$90K 3 $56K-$97K 4 $86K-$143K ... 922 -1 925 -1 928 $59K-$125K 945 $80K-$142K 948 $62K-$113K Name: Salary Estimate, Length: 600, dtype: object` | If you enable regex, you have to escape regex symbol like (, ) or .: import re >>> df['Salary Estimate'].str.replace(re.escape(r' (Glassdoor est.)'), '',regex=True) 0 $53K-$91K 1 $63K-$112K 2 $80K-$90K 3 $56K-$97K 4 $86K-$143K Name: Salary Estimate, dtype: object # Or without import re module >>> df['Salary Estimate'].str.replace(r' \(Glassdoor est\.\)', '',regex=True) 0 $53K-$91K 1 $63K-$112K 2 $80K-$90K 3 $56K-$97K 4 $86K-$143K Name: Salary Estimate, dtype: object You can also extract numbers: >>> df['Salary Estimate'].str.extract(r'\$(?P<min>\d+)K-\$(?P<max>\d+)K') min max 0 53 91 1 63 112 2 80 90 3 56 97 4 86 143 | 3 | 3 |
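Because the suffix here is one fixed string rather than a pattern, another option is to switch regex matching off entirely, which sidesteps the escaping question; a small sketch with a couple of made-up rows:

import pandas as pd

df = pd.DataFrame({'Salary Estimate': ['$53K-$91K (Glassdoor est.)', '-1']})
# With regex=False the pattern is treated as a literal string, so no escaping is needed.
df['Salary Estimate'] = df['Salary Estimate'].str.replace(' (Glassdoor est.)', '', regex=False)
print(df['Salary Estimate'].tolist())  # ['$53K-$91K', '-1']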
76,054,155 | 2023-4-19 | https://stackoverflow.com/questions/76054155/django-count-in-annotate-and-subquery-are-much-slower-than-simply-iterating-over | I'm trying to count how many documents per project (among other things) have been registered, Registered also holds other values like DateTime and User (just a disclosure they are unrelated to my issue). Iterating through a list and querying ~54 times ends up being much faster than Annotate or Subquery (13 vs 27 seconds) I have 4 tables which go like this Project -> Batch -> Document -> Registered. Right now we have 54 Projects, 20478 Batches, 3231095 Documents and 312498 Registered. The timings listed below are for the entire view, not just the bits represented, but you can have an idea on their impact... projects = Project.objects.filter(completion_date__isnull=True) # "Fast", takes around 13.5 seconds to evaluate for project in list(projects.values('id')): count = Document.objects.filter(batch__project_id=project['id], registered__isnull=False) # Both methods below are much slower, taking around 27 seconds to evaluate # Simple annotation projects = projects.annotate(count=Count('batches__documents', filter=Q(batches__documents__registered__isnull=False), distinct=True)) # An attempt to simulate the first list iteration query without multiple db hits, ends up exactly as slow as annotate subq_registered = Document.objects.filter(batch__project_id=OuterRef('id'), registered__isnull=False).annotate(count=Func(F('id'), function='Count')).values('count') projects = projects.annotate(count=Subquery(subq_registered)) I expected Subquery to be faster than my first method, as it would mimic it, but its just as slow as simple count + annotation. Is there something I can do to speed any of my methods up? perhaps adding a Project foreign key to Document | I think you can use GROUP BY functionality here like this: Document.objects.filter(batch__project__completion_date__isnull=True).values('batch__project').annotate(doc_count= Count('pk')).values('batch__project','doc_count') More information on documentation Regarding the performance issue, there is no straight answer on why the performance is slow. Because (at least) the simple annotation makes single DB call, where others make more DB hits. There could be an issue with Database itself, maybe it has indexing issues (which needs debugging). I would recommend looking into this documentation regarding how to debug and optimize database related issues. | 3 | 3 |
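As a usage sketch of the grouped query, the counts can be pulled into a per-project dictionary in a single round trip; the model and field names are taken from the question, and the registered filter mirrors the original per-project loop, so adjust to your actual models:

from django.db.models import Count

rows = (
    Document.objects
    .filter(batch__project__completion_date__isnull=True,
            registered__isnull=False)
    .values('batch__project')
    # distinct guards against duplicate document rows introduced by the join
    .annotate(doc_count=Count('pk', distinct=True))
)
# {project_id: number_of_registered_documents}
counts_by_project = {row['batch__project']: row['doc_count'] for row in rows}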
76,054,181 | 2023-4-19 | https://stackoverflow.com/questions/76054181/cleaning-a-large-csv-file-efficiently | Problem: I have a CSV file with a large amount of data, and I need to perform some data cleaning and filtering operations on it using Python. For instance, the CSV file contains a column with dates in the format "YYYY-MM-DD", but some of the entries have incorrect formatting or missing values. I need to clean up these entries so that they are all in the correct format and remove any rows that have missing dates. How can I clean and filter a large CSV file using Python with the shortest runtime possible? import csv # Read the CSV file into a list of dictionaries data = [] with open('data.csv') as csvfile: reader = csv.DictReader(csvfile) for row in reader: data.append(row) # Loop through the data and clean up the date column for i in range(len(data)): if 'date' in data[i]: date = data[i]['date'] if date: try: year, month, day = date.split('-') year = int(year) month = int(month) day = int(day) if year < 1000 or year > 9999 or month < 1 or month > 12 or day < 1 or day > 31: raise ValueError('Invalid date format') data[i]['date'] = f'{year}-{month:02d}-{day:02d}' except ValueError: del data[i]['date'] # Loop through the data and remove rows with missing dates clean_data = [] for row in data: if 'date' in row and row['date']: clean_data.append(row) | I would recommend to use the pandas module which enables to handle csv data efficiently in Python. For instance, the following code solves your problem: import pandas as pd # Read the CSV file into a pandas dataframe df = pd.read_csv('data.csv') # Clean up the date column df['date'] = pd.to_datetime(df['date'], errors='coerce') # Remove rows with missing dates df.dropna(subset=['date'], inplace=True) Please note that this does not anylonger check whether your date is in format "YYYY-MM-DD" but in any format which is suitable for dates. Since this is more flexible, it might be an advantage. Otherwise, you can simply modify the code to your needs. | 4 | 2 |
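If the file is too large to process comfortably in one go, the same cleaning can be streamed in chunks; a sketch where the 'date' column name, the strict YYYY-MM-DD format and the chunk size are all assumptions to adapt:

import pandas as pd

chunks = []
for chunk in pd.read_csv('data.csv', chunksize=100_000):
    # Anything not matching YYYY-MM-DD becomes NaT and is then dropped.
    chunk['date'] = pd.to_datetime(chunk['date'], format='%Y-%m-%d', errors='coerce')
    chunks.append(chunk.dropna(subset=['date']))

clean = pd.concat(chunks, ignore_index=True)
clean.to_csv('clean_data.csv', index=False)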
76,050,901 | 2023-4-19 | https://stackoverflow.com/questions/76050901/haystack-save-inmemorydocumentstore-and-load-it-in-retriever-later-to-save-embe | I am using InMemory Document Store and an Embedding retriever for the Q/A pipeline. from haystack.document_stores import InMemoryDocumentStore document_store = InMemoryDocumentStore(embedding_dim =768,use_bm25=True) document_store.write_documents(docs_processed) from haystack.nodes import EmbeddingRetriever retriever_model_path ='downloaded_models\local\my_local_multi-qa-mpnet-base-dot-v1' retriever = EmbeddingRetriever(document_store=document_store, embedding_model=retriever_model_path, use_gpu=True) document_store.update_embeddings(retriever=retriever) As the embedding takes a while, I want to load the embeddings and later use them again in the retriever. (in rest API side). I don't want to use ElasticSearch or Faiss. How can I achieve this using In Memory Store? I tried to use Pickle, but there is no way to store the embeddings. Again, in the embedding retriever, there is no load function. I tried to do the following: with open("document_store_res.pkl", "wb") as f: pickle.dump(document_store.get_all_documents(), f) And in the rest API, I am trying to load the document store : def reader_retriever(): # Load the pickled model with open(os.path.join(settings.BASE_DIR,'\downloaded_models\document_store_res.pkl'), 'rb') as f: document_store_new = pickle.load(f) retriever_model_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_multi-qa-mpnet-base-dot-v1') retriever = EmbeddingRetriever(document_store=document_store_new, embedding_model=retriever_model_path, use_gpu=True) document_store_new.update_embeddings(retriever=retriever, batch_size=100) farm_reader_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_bert-large-uncased-whole-word-masking-squad2') reader = FARMReader(model_name_or_path=farm_reader_path, use_gpu=True) return reader, retriever | InMemoryDocumentStore: features and limitations From Haystack docs: Use the InMemoryDocumentStore, if you are just giving Haystack a quick try on a small sample and are working in a restricted environment that complicates running Elasticsearch or other databases. Slow retrieval on larger datasets. No Approximate Nearest Neighbours (ANN). Not recommended for production. Possible lightweight alternatives To overcome the limitations of InMemoryDocumentStore, if you don't want to use FAISS or ElasticSearch, you could also consider adopting Qdrant which can run smoothly and lightly on Haystack. Pickling InMemoryDocumentStore As you can see, I do not recommend this solution. 
In any case, I would pickle the document store (which also contains the embeddings): with open("document_store_res.pkl", "wb") as f: pickle.dump(document_store, f) In the REST API, you can change your method as follows: def reader_retriever(): # Load the pickled model with open(os.path.join(settings.BASE_DIR,'\downloaded_models\document_store_res.pkl'), 'rb') as f: document_store_new = pickle.load(f) retriever_model_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_multi-qa-mpnet-base-dot-v1') retriever = EmbeddingRetriever(document_store=document_store_new, embedding_model=retriever_model_path, use_gpu=True) ### DO NOT UPDATE THE EMBEDDINGS, AS THEY HAVE ALREADY BEEN CALCULATED farm_reader_path = os.path.join(settings.BASE_DIR, '\downloaded_models\my_local_bert-large-uncased-whole-word-masking-squad2') reader = FARMReader(model_name_or_path=farm_reader_path, use_gpu=True) return reader, retriever | 4 | 3 |
76,051,250 | 2023-4-19 | https://stackoverflow.com/questions/76051250/how-to-remove-unused-packages-installed-with-pip-in-python | I have Python project and have installed hundreds of packages using pip over time. However, I have since improved my code and suspect that some of these packages are no longer in use. I want to remove all of the unused packages. I am already familiar with the pip uninstall package-name command and have seen suggestions to use pip-autoremove package-name, but it's not considered to be the optimal option for my case as there are hundreds of unused packages, and I do not want to manually uninstall each one. Additionally, I do not want to accidentally remove packages that are still being used in my project. I am wondering if there is a better solution, or a way to look for python imports inside files and detect for unused. Any latest or alternative approaches would be highly appreciated. Thank you. Here's my requirements.txt with thousands of packages: | Dependency management in Python is a painful topic. Check out this excellent overview: https://chriswarrick.com/blog/2023/01/15/how-to-improve-python-packaging/ I highly recommend you check out Poetry or Hatch for dependency and packaging management. In general you should at least always use virtual environment. This allows you to completely delete it and create new, if you need. You have your requirements.txt so you can do python -m venv <venv> where <venv> is the name you want to give to your virtual environment. It creates a directory where you will install the packages. Activating the virtual environment goes like this: source <venv>/bin/activate which executes a shell script. Among other things it defines a command deactivate which you can call if you wish to exit from the virtual environment. With the activated venv everything you install will be installed there. | 3 | 2 |
76,050,130 | 2023-4-19 | https://stackoverflow.com/questions/76050130/copy-the-schema-from-one-dataframe-to-another | I have a Spark data frame (df1) with a particular schema, and I have another dataframe with the same columns, but different schema. I know how to do it column by column, but since I have a large set of columns, it would be quite lengthy. To keep the schema consistent across dataframes, I was wondering if I could be able to apply one schema to another data frame or creating a function that do the job. Here is an example: df1 # root # |-- A: date (nullable = true) # |-- B: integer (nullable = true) # |-- C: string (nullable = true) df2 # root # |-- A: string (nullable = true) # |-- B: string (nullable = true) # |-- C: string (nullable = true)` I want to copy apply the schema of df1 to df2. I tried this approach for one column. Given that I have a large number of columns, it would be quite a lengthy way to do it. df2 = df2.withColumn("B", df2["B"].cast('int')) | Yes, its possible dynamically with dataframe.schema.fields df2.select(*[(col(x.name).cast(x.dataType)) for x in df1.schema.fields]) Example: from pyspark.sql.functions import * df1 = spark.createDataFrame([('2022-02-02',2,'a')],['A','B','C']).withColumn("A",to_date(col("A"))) print("df1 Schema") df1.printSchema() #df1 Schema #root # |-- A: date (nullable = true) # |-- B: long (nullable = true) # |-- C: string (nullable = true) df2 = spark.createDataFrame([('2022-02-02','2','a')],['A','B','C']) print("df2 Schema") df2.printSchema() #df2 Schema #root # |-- A: string (nullable = true) # |-- B: string (nullable = true) # |-- C: string (nullable = true) # #casting the df2 columns by getting df1 schema using select clause df3 = df2.select(*[(col(x.name).cast(x.dataType)) for x in df1.schema.fields]) df3.show(10,False) print("df3 Schema") df3.printSchema() #+----------+---+---+ #|A |B |C | #+----------+---+---+ #|2022-02-02|2 |a | #+----------+---+---+ #df3 Schema #root # |-- A: date (nullable = true) # |-- B: long (nullable = true) # |-- C: string (nullable = true) In this example I have df1 defined with Integer,date,long types. df2 is defined with string type. df3 is defined by using df2 as source data and attached df1 schema. | 6 | 5 |
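If this needs to be applied to several dataframes, the select/cast one-liner is easy to wrap in a small helper; a sketch that assumes both dataframes share the same column names, as in the question:

from pyspark.sql import DataFrame
from pyspark.sql.functions import col

def apply_schema(target: DataFrame, reference: DataFrame) -> DataFrame:
    # Cast every column of `target` to the dtype of the same-named column in `reference`.
    return target.select(*[col(f.name).cast(f.dataType) for f in reference.schema.fields])

df2 = apply_schema(df2, df1)
df2.printSchema()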
76,046,269 | 2023-4-18 | https://stackoverflow.com/questions/76046269/how-to-align-annotation-to-the-edge-of-whole-figure-in-plotly | I know how to align annotation respective to the plotting area (using xref="paper"). I want to place the annotation to right bottom corner of the whole figure. Absolute or relative offset does not work, since the width of the margin outside the plotting area is changing among several figures, depending on their content. Is there a way how to do it? Thanks a ton. Edit: Example code import plotly.express as px fig = px.scatter(x=[1, 2, 3], y=[1, 2, 3], title="Try panning or zooming!") fig.add_annotation(text="Absolutely-positioned annotation", xref="paper", yref="paper", x=0.3, y=0.3, showarrow=False) fig.show() This annotation is positioned respectively to the origin of the plot, not respectively to the plot edges. | So the thing is that you technically can't automatically calculate and position annotations at a perfect spot in the corner of the figure for every plot (outside of the plot, but inside the figure). The problem is that to do so, you would need to be able to get the dimensions of every plot, which could be undefined when you do not specify a width or height (autosizing will automatically calculate dimensions when the plot is shown). And even if you could do this, when you resize the window/page the annotation's location will change, and no longer be in the corner as you wanted it to be. Refer here for a proper explanation. However, you could work around this by embedding the plotly plot inside an app/frame, and add the text to that frame instead of the plot itself. Here is a solution to a similar question asked by someone else from before that uses the jinja2 module's Template object to embed the plot (though it creates HTML code which you will then need to separately render). Using the same method, you could implement a frame into the example plot like so: import plotly.express as px from jinja2 import Template fig = px.scatter(x=[1, 2, 3], y=[1, 2, 3], title="Try panning or zooming!") # Your annotation code and figure showing ''' fig.add_annotation(text="Absolutely-positioned annotation", xref="paper", yref="paper", x=1, y=-0.1, showarrow=False) fig.show() ''' version_number = 'Absolutely-positioned annotation' #Your text to be displayed fig_html = {"fig":fig.to_html(full_html=False, include_plotlyjs='cdn')} #Convert your plot to HTML to embed # HTML Frame to embed the figure and text frame = """<!DOCTYPE html> <html> <body> {{ fig }} <p style="font-size:15px; position:fixed; bottom:0; right:0;">{{ version_number }}</p> </body> </html>""" j2_template = Template(frame) #Create the Template object for our frame plot = j2_template.render(fig_html,version_number=version_number) #Embed the plot and text into the frame At this point you have yourself HTML code with your plot and the text you desire. To render it, use any module capable of rendering HTML code, or write it to an HTML file and open that instead. Example: from flask import Flask, render_template_string app = Flask(__name__) @app.route('/') def home(): return render_template_string(plot) #variable containing HTML from previous code if __name__ == '__main__': app.run(debug=True) | 4 | 2 |
75,998,227 | 2023-4-12 | https://stackoverflow.com/questions/75998227/how-to-define-query-parameters-using-pydantic-model-in-fastapi | I am trying to have an endpoint like /services?status=New status is going to be either New or Old Here is my code: from fastapi import APIRouter, Depends from pydantic import BaseModel from enum import Enum router = APIRouter() class ServiceStatusEnum(str, Enum): new = "New" old = "Old" class ServiceStatusQueryParam(BaseModel): status: ServiceStatusEnum @router.get("/services") def get_services( status: ServiceStatusQueryParam = Query(..., title="Services", description="my desc"), ): pass #my code for handling this route..... The result is that I get an error that seems to be relevant to this issue here The error says AssertionError: Param: status can only be a request body, using Body() Then I found another solution explained here. So, my code will be like this: from fastapi import APIRouter, Depends from pydantic import BaseModel from enum import Enum router = APIRouter() class ServiceStatusEnum(str, Enum): new = "New" old = "Old" class ServicesQueryParam(BaseModel): status: ServiceStatusEnum @router.get("/services") def get_services( q: ServicesQueryParam = Depends(), ): pass #my code for handling this route..... It is working (and I don't understand why) - but the question is how and where do I add the description and title? | To create a Pydantic model and use it to define query parameters, you would need to use Depends() along with the parameter in your endpoint. To add description, title, etc., to query parameters, you could wrap the Query() in a Field(). It should also be noted that one could use the Literal type instead of Enum, as described here and here. Additionally, if one would like to define a List field inside a Pydantic model and use it as a query parameter, they would either need to implement this in a separate dependency class, as demonstrated here and here, or again wrap the Query() in a Field(), as shown below. Moreover, to perform validation on query parameters inside a Pydnatic model, one could do this as usual using Pydantic's @validator, as demonstrated here, as well as here and here. Note that in this case, where the BaseModel is used for query parameters, raising ValueError would cause an Internal Server Error. Hence, you should instead raise an HTTPException when a validation fails, or use a custom exception handler, in order to handle ValueError exceptions, as shown in Option 2 of this answer. Besides @validator, one could also have additional validations for Query parameters, as described in FastAPI's documentation (see Query class implementation as well). As a side note, regarding defining optional parameters, the example below uses the Optional type hint (accompanied by None as the default value in Query) from the typing module; however, you may also would like to have a look at this answer and this answer, which describe all the available ways on how to do that. 
Working Example from fastapi import FastAPI, Depends, Query, HTTPException from pydantic import BaseModel, Field, validator from typing import List, Optional, Literal from enum import Enum app = FastAPI() class Status(str, Enum): new = 'New' old = 'Old' class ServiceStatus(BaseModel): status: Optional[Status] = Field (Query(None, description='Select service status')) msg: Optional[str] = Field (Query(None, description='Type something')) choice: Literal['a', 'b', 'c', 'd'] = Field (Query(..., description='Choose something')) comments: List[str] = Field (Query(..., description='Add some comments')) @validator('choice') def check_choice(cls, v): if v == 'b': raise HTTPException(status_code=422, detail='Wrong choice') return v @app.get('/status') def main(status: ServiceStatus = Depends()): return status Update (regarding @validator) Please note that in Pydantic V2, @validator has been deprecated and replaced by @field_validator. Please have a look at this answer for more details and examples. | 9 | 23 |
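One more note, hedged because it depends on your version: recent FastAPI releases (around 0.115 and later) added first-class support for Pydantic models as query parameters via Annotated[..., Query()], which removes the need for the Depends() workaround; a sketch to check against the docs of the version you actually run:

from typing import Annotated, Optional
from enum import Enum
from fastapi import FastAPI, Query
from pydantic import BaseModel

class Status(str, Enum):
    new = 'New'
    old = 'Old'

class ServiceFilter(BaseModel):
    status: Optional[Status] = None
    msg: Optional[str] = None

app = FastAPI()

@app.get('/services')
def get_services(q: Annotated[ServiceFilter, Query()]):
    return q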
76,035,847 | 2023-4-17 | https://stackoverflow.com/questions/76035847/polars-python-limits-number-of-printed-table-output-rows | Does anyone know why polars (or maybe my pycharm setup or python debugger) limits the number of rows in the output? This drives me nuts. Here is the polars code i am running but I do suspect its not polars specific as there isnt much out there on google (and chatgpt said its info is too old haha). import polars as pl df = pl.scan_parquet('/path/to/file.parquet') result_df =( df .filter(pl.col("condition_category") == 'unknown') .groupby("type") .agg( [ pl.col("type").count().alias("counts"), ] ) ).collect() print(result_df) | Looks like the following will work. Thanks to @wayoshi for sharing this. I will say that the defaults are way too conservative! with pl.Config(tbl_rows=1000): print(result_df) or throw this at the top of your script if you prefer to not manage contexts. import polars as pl # Configure Polars cfg = pl.Config() cfg.set_tbl_rows(2000) | 6 | 9 |
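If you would rather not pick a number at all, set_tbl_rows also accepts a negative value, which Polars treats as "no limit"; worth double-checking against your Polars version:

import polars as pl

pl.Config.set_tbl_rows(-1)  # negative value: print all rows
print(result_df)            # result_df being the frame from the question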
76,000,750 | 2023-4-13 | https://stackoverflow.com/questions/76000750/pandas-problem-when-assigning-value-using-loc | So what is happening is the values in column B are becoming NaN. How would I fix this so that it does not override other values? import pandas as pd import numpy as np # %% # df=pd.read_csv('testing/example.csv') data = { 'Name' : ['Abby', 'Bob', 'Chris'], 'Active' : ['Y', 'Y', 'N'], 'A' : [89, 92, np.nan], 'B' : ['eye', 'hand', np.nan], 'C' : ['right', 'left', 'right'] } df = pd.DataFrame(data) df.loc[((df['Active'] =='N') & (df['A'].isna())), ['A', 'B']] = [99, df['C']] df What I want the results to be is: Name Active A B C Abby Y 89.0 eye right Bob Y 92.0 hand left Chris N 99 right right | You can implement this by creating a boolean mask using the condition: Where the 'Active' column is 'N' and the 'A' column has missing values (np.nan). We can then use this mask to filter rows in the DataFrame. First, we will replace the values in column 'A' with 99 for rows where the mask condition is True. Then, replace the values in column 'B' with the corresponding values from column 'C' where the mask condition is True. Following is the modified code: import pandas as pd import numpy as np data = { 'Name' : ['Abby', 'Bob', 'Chris'], 'Active' : ['Y', 'Y', 'N'], 'A' : [89, 92, np.nan], 'B' : ['eye', 'hand', np.nan], 'C' : ['right', 'left', 'right'] } df = pd.DataFrame(data) mask = (df['Active'] =='N') & (df['A'].isna()) df.loc[mask, 'A'] = 99 df.loc[mask, 'B'] = df.loc[mask, 'C'] print(df) Now, the DataFrame will be updated correctly. Following is the output: Name Active A B C 0 Abby Y 89.0 eye right 1 Bob Y 92.0 hand left 2 Chris N 99.0 right right | 3 | 3 |
76,005,401 | 2023-4-13 | https://stackoverflow.com/questions/76005401/cant-install-xmlsec-via-pip | I'm getting the following when running pip install xmlsec in macOS Big Sur 11.3.1: Building wheels for collected packages: xmlsec Building wheel for xmlsec (PEP 517) ... error ERROR: Command errored out with exit status 1: command: /Users/davidmasip/.pyenv/versions/3.9.9/bin/python3.9 /Users/davidmasip/.pyenv/versions/3.9.9/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/tmpm51b1yso cwd: /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d Complete output (65 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-11.3-x86_64-cpython-39 creating build/lib.macosx-11.3-x86_64-cpython-39/xmlsec copying src/xmlsec/py.typed -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec copying src/xmlsec/tree.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec copying src/xmlsec/__init__.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec copying src/xmlsec/constants.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec copying src/xmlsec/template.pyi -> build/lib.macosx-11.3-x86_64-cpython-39/xmlsec running build_ext building 'xmlsec' extension creating build/temp.macosx-11.3-x86_64-cpython-39 creating build/temp.macosx-11.3-x86_64-cpython-39/private creating build/temp.macosx-11.3-x86_64-cpython-39/private/var creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d creating build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -D__XMLSEC_FUNCTION__=__func__ -DXMLSEC_NO_FTP=1 -DXMLSEC_NO_MD5=1 -DXMLSEC_NO_GOST=1 -DXMLSEC_NO_GOST2012=1 -DXMLSEC_NO_CRYPTO_DYNAMIC_LOADING=1 -DXMLSEC_CRYPTO_OPENSSL=1 -DMODULE_NAME=xmlsec -DMODULE_VERSION=1.3.13 -I/usr/local/Cellar/libxmlsec1/1.3.0/include/xmlsec1 -I/usr/local/opt/[email protected]/include -I/usr/local/opt/[email protected]/include/openssl -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libxml -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libxslt 
-I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/libexslt -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/extlibs -I/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-build-env-gsnsoluq/overlay/lib/python3.9/site-packages/lxml/includes/__pycache__ -I/Users/davidmasip/.pyenv/versions/3.9.9/include/python3.9 -c /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c -o build/temp.macosx-11.3-x86_64-cpython-39/private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.o -g -std=c99 -fPIC -fno-strict-aliasing -Wno-error=declaration-after-statement -Werror=implicit-function-declaration -Os /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:319:5: error: use of undeclared identifier 'xmlSecSoap11Ns' PYXMLSEC_ADD_NS_CONSTANT(Soap11Ns, "SOAP11"); ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT' tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \ ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:19:19: note: expanded from macro 'JOIN' #define JOIN(X,Y) DO_JOIN1(X,Y) ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:20:23: note: expanded from macro 'DO_JOIN1' #define DO_JOIN1(X,Y) DO_JOIN2(X,Y) ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:21:23: note: expanded from macro 'DO_JOIN2' #define DO_JOIN2(X,Y) X##Y ^ <scratch space>:23:1: note: expanded from here xmlSecSoap11Ns ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:320:5: error: use of undeclared identifier 'xmlSecSoap12Ns'; did you mean 'xmlSecXPath2Ns'? PYXMLSEC_ADD_NS_CONSTANT(Soap12Ns, "SOAP12"); ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT' tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \ ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:19:19: note: expanded from macro 'JOIN' #define JOIN(X,Y) DO_JOIN1(X,Y) ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:20:23: note: expanded from macro 'DO_JOIN1' #define DO_JOIN1(X,Y) DO_JOIN2(X,Y) ^ /private/var/folders/ff/3y2196b13bq0nbm3_ms25nyh0000gp/T/pip-install-qm2a1dud/xmlsec_cd7a81ea26444cc4b8ae24acd3ec379d/src/common.h:21:23: note: expanded from macro 'DO_JOIN2' #define DO_JOIN2(X,Y) X##Y ^ <scratch space>:25:1: note: expanded from here xmlSecSoap12Ns ^ /usr/local/Cellar/libxmlsec1/1.3.0/include/xmlsec1/xmlsec/strings.h:34:33: note: 'xmlSecXPath2Ns' declared here XMLSEC_EXPORT_VAR const xmlChar xmlSecXPath2Ns[]; ^ 2 errors generated. 
error: command '/usr/bin/clang' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for xmlsec Failed to build xmlsec ERROR: Could not build wheels for xmlsec which use PEP 517 and cannot be installed directly WARNING: You are using pip version 21.2.4; however, version 23.0.1 is available. You should consider upgrading via the '/Users/davidmasip/.pyenv/versions/3.9.9/bin/python3.9 -m pip install --upgrade pip' command. I've also run before: brew install libxml2 libxmlsec1 pkg-config xz And I get: Warning: libxml2 2.10.4 is already installed and up-to-date. To reinstall 2.10.4, run: brew reinstall libxml2 Warning: libxmlsec1 1.3.0 is already installed and up-to-date. To reinstall 1.3.0, run: brew reinstall libxmlsec1 Warning: pkg-config 0.29.2_3 is already installed and up-to-date. To reinstall 0.29.2_3, run: brew reinstall pkg-config Warning: xz 5.4.2 is already installed and up-to-date. To reinstall 5.4.2, run: brew reinstall xz how can I install xmlsec in macOS? | EDIT This GitHub comment is the simplest fix. https://github.com/xmlsec/python-xmlsec/issues/254#issuecomment-1612005910 When libxmlsec1 was bumped to v1.3 it broke pip install xmlsec. https://github.com/xmlsec/python-xmlsec/issues/254 I am using a Macbook with Apple Silicon (M1 / M2) and struggled for a while to find this. I managed to workaround it via the Homebrew formula hack in this comment by @dpritchett: https://raw.githubusercontent.com/Homebrew/homebrew-core/7f35e6ede954326a10949891af2dba47bbe1fc17/Formula/libxmlsec1.rb Specific workaround steps: brew edit libxmlsec1. An editor opens up, full of the contents of the latest downloaded xmlsec formula from GitHub or wherever they come from i paste in the contents of the last pre-1.3.0 brew formula and hit save Install that local formula: brew install /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/libxmlsec1.rb pip install xmlsec ~~EDIT~~ See comments below and comments on GitHub issue for additional info about the workaround that depend on your version of homebrew. https://github.com/xmlsec/python-xmlsec/issues/254#issuecomment-1522646438 | 15 | 34 |
76,038,966 | 2023-4-17 | https://stackoverflow.com/questions/76038966/type-hinting-pandas-dataframe-content-and-columns | I am writing a function that returns a Pandas DataFrame object. I would like to have a type hint that specifies which columns this DataFrame contains, besides just specifying in the docstring, to make it easier for the end user to read the data. Is there a way to type hint DataFrame content like this? Ideally, this would integrate well with tools like Visual Studio Code and PyCharm when editing Python files and Jupyter Notebooks. An example function: def generate_data(bunch, of, inputs) -> pd.DataFrame: """Massages the input to a nice and easy DataFrame. :return: DataFrame with columns a(int), b(float), c(string), d(us dollars as float) """ | The most powerful project for strong typing of pandas DataFrame as of now (Apr 2023) is pandera. Unfortunately, what it offers is quite limited and far from what we might have wanted. Here is an example of how you can use pandera in your case†: import pandas as pd import pandera as pa from pandera.typing import DataFrame class MySchema(pa.DataFrameModel): a: int b: float c: str = pa.Field(nullable=True) # For example, allow None values d: float # US dollars class OtherSchema(pa.DataFrameModel): year: int = pa.Field(ge=1900, le=2050) def generate_data() -> DataFrame[MySchema]: df = pd.DataFrame({ "a": [1, 2, 3], "b": [10.0, 20.0, 30.0], "c": ["A", "B", "C"], "d": [0.1, 0.2, 0.3], }) # Runtime verification here, throws on schema mismatch strongly_typed_df = DataFrame[MySchema](df) return strongly_typed_df def transform(input: DataFrame[MySchema]) -> DataFrame[OtherSchema]: # This demonstrates that you can use strongly # typed column names from the schema df = input.filter(items=[MySchema.a]).rename( columns={MySchema.a: OtherSchema.year} ) return DataFrame[OtherSchema](df) # This will throw on range validation! df1 = generate_data() df2 = transform(df1) transform(df2) # mypy prints error here - incompatible type! You can see mypy producing static type check error on the last line: Discussion of advantages and limitations With pandera we get – Clear and readable (dataclass style) DataFrame schema definitions and ability to use them as type hints. Run-time schema verification. Schema can define even more constraints than just types (see year in the example below and pandera docs for more). Experimental support for static type checking by mypy. What we still miss – Full static type checking for column level verification. Any IDE support for column name auto-completion. Inline syntax for schema declaration, we have to explicitly define each schema as separate class before using it. More examples Pandera docs - https://pandera.readthedocs.io/en/stable/dataframe_models.html Similar question - Type hints for a pandas DataFrame with mixed dtypes Other typing projects pandas-stubs is an active project providing type declarations for the pandas public API which is richer than type stubs included in pandas itself. But it doesn't provide any facilities for column level schemas. There are quite a few outdated libraries related to this and pandas typing in general - dataenforce, data-science-types, python-type-stubs † pandera provides two different APIs that seem to be equally powerful - object-based API and class-based API. I demonstrate the later here. | 14 | 18 |
76,028,283 | 2023-4-16 | https://stackoverflow.com/questions/76028283/missing-the-lzma-lib | Does the following message mean that Python has not been installed completely? If yes, do I have to install the 'lzma' extension? ModuleNotFoundError: No module named '_lzma' Warning: The Python lzma extension was not compiled. Missing the lzma lib? Installed Python-3.11.3 to /Users/admin/.pyenv/versions/3.11.3 Thank you! | Per Chris' comment, the solution is to install the OS-specific dependencies and then use pyenv to install a Python version. | 25 | 12
76,038,513 | 2023-4-17 | https://stackoverflow.com/questions/76038513/isort-black-strange-behavior | When applying isort 5.12.0 in pre-commit within a python file, it re-orders the imports in bad. In bitbucket pipelines, the same code orders correctly. This correct code: from dagster import build_init_resource_context from module1 import setting from module1.resources.apple import AppleConnector as apple_connector from module1.resources.apple import apple_resource from module1.samples.apple import ( apple_schema as apple_schemas, ) from jsonschema import validate Gets re-ordered this way: from dagster import build_init_resource_context from module1.resources.apple import AppleConnector as apple_connector from module1.resources.apple import apple_resource from module1.samples.apple import ( apple_schema as apple_schemas, ) from jsonschema import validate from module1 import setting ¿Why is it happening? | In .pre-commit-config.yaml isort configuration, I had to add module1 as a first-party package. That avoided different interpretations in different repositories. Then pre-commit worked consistently in all the pipelines. - repo: https://github.com/PyCQA/isort rev: 5.12.0 hooks: - id: isort args: ["--profile", "black", "--filter-files", "--project", "module1"] | 3 | 0 |
76,043,689 | 2023-4-18 | https://stackoverflow.com/questions/76043689/pkg-resources-is-deprecated-as-an-api | When I try to install from a .tar.gz package, while making warnings into errors: python -W error -m pip install /some/path/nspace.pkga-0.1.0.tar.gz I get this error: ERROR: Exception: Traceback (most recent call last): File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 169, in exc_logging_wrapper status = run_func(*args) ^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 248, in wrapper return func(self, options, args) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/commands/install.py", line 324, in run session = self.get_default_session(options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 98, in get_default_session self._session = self.enter_context(self._build_session(options)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 125, in _build_session session = PipSession( ^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/network/session.py", line 342, in __init__ self.headers["User-Agent"] = user_agent() ^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/network/session.py", line 175, in user_agent setuptools_dist = get_default_environment().get_distribution("setuptools") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py", line 188, in get_distribution return next(matches, None) ^^^^^^^^^^^^^^^^^^^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py", line 183, in <genexpr> matches = ( ^ File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/base.py", line 612, in iter_all_distributions for dist in self._iter_distributions(): File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py", line 176, in _iter_distributions for dist in finder.find_eggs(location): File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py", line 144, in find_eggs yield from self._find_eggs_in_dir(location) File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_internal/metadata/importlib/_envs.py", line 111, in _find_eggs_in_dir from pip._vendor.pkg_resources import find_distributions File "/opt/util/nspace1/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py", line 121, in <module> warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning) DeprecationWarning: pkg_resources is deprecated as an API pip seems to have vendored in a deprecated package. The pip code responsible is in pip/_internal/metadata/importlib/_envs.py in class Environment: def _iter_distributions(self) -> Iterator[BaseDistribution]: finder = _DistributionFinder() for location in self._paths: yield from finder.find(location) for dist in finder.find_eggs(location): # _emit_egg_deprecation(dist.location) # TODO: Enable this. yield dist # This must go last because that's how pkg_resources tie-breaks. yield from finder.find_linked(location) If I comment out the nested for loop (doing the find_eggs) thinks work fine: I get no error and a working package installed. 
How do I monkeypatch that Environment instance from my setup.py file? This is Python 3.11.3 (so it should be using importlib.metadata and not pkg_resources) on macOS, pip==23.1, setuptools==67.6.1 Background: I am just trying this out on an example package; the reason for this is a bug report for my ruamel.yaml package, where it is built in such a less forgiving environment. I could of course say don't use -W error, but I would rather solve this by not invoking the offending, unused code in the first place | There is a related discussion on pip's ticket tracker. It seems like this issue has been solved in pip 23.1.1: "Revert pkg_resources (via setuptools) back to 65.6.3". And pip 23.1.2 seems to vendor the new setuptools (and pkg_resources) as expected but without the deprecation warnings (see also this message). | 27 | 19
76,018,799 | 2023-4-14 | https://stackoverflow.com/questions/76018799/how-to-clear-user-input-before-python-exits | If a program exits before all of its input has been consumed, then the remaining input will be sent to the shell. Here's an example: import sys for line in sys.stdin: sys.exit() Try running the example and copy-paste this multi-line input: foo bar baz The result will look like this: ❯ python example.py foo bar baz% ❯ bar zsh: command not found: bar ❯ baz In this case, the program exited after consuming foo, so bar was automatically sent to the shell. How can I clear the remaining input before Python exits? The for loop in the example represents complicated logic, so I'm looking for a solution that doesn't modify the for loop. I tried registering an atexit handler to clear the input: import sys, atexit def clear_stdin(): sys.stdin.read() atexit.register(clear_stdin) for line in sys.stdin: sys.exit() This solution does clear the input and prevent it from being sent to the shell, but it unfortunately causes the program to hang until the user enters a blank line. I'd like to clear the remaining input without pausing the program. | On Unix systems you can use os.set_blocking before reading the input. If the user pastes a large amount of text, then the terminal may be holding on to additional input that hasn't been sent to stdin yet. It may not be possible for the program to see this, so the following code includes a sleep to allow for the terminal to provide more input. This code also includes several guards to prevent errors: import sys, atexit, os from time import sleep def clear_stdin(): if sys.__stdin__ and sys.__stdin__.isatty() and hasattr(os, 'set_blocking'): try: os.set_blocking(sys.__stdin__.fileno(), False) except OSError: return try: while sys.__stdin__.read(): sleep(0.1) except TypeError: return atexit.register(clear_stdin) for line in sys.stdin: sys.exit() This will prevent the remaining input from being sent to the shell while still allowing the program to exit immediately: ❯ python example.py foo bar baz% ❯ baz If the last line does not end with a newline character then it won't be consumed, but it won't be executed by the shell either. | 4 | 2 |
76,019,272 | 2023-4-14 | https://stackoverflow.com/questions/76019272/error-when-i-open-spyder-pylsp-1-7-2-1-8-0-1-7-1-nok | I tried to upgrade conda and Spyder, but I encountered an issue. I tried: conda update anaconda conda install spyder=5.4.3 pip install pylsp-mypy I used option 1 in the Spyder terminal first and then I used it in the conda prompt; I don't know if that caused the error. How can I solve this issue? Edit (temporary solution): I used conda install --revision 0, and this gave me a fresh Anaconda installation, with the drawback of needing to install every package again. However, all environments created remain intact. I don't know what happened, but when I run commands 1 and 2 in the prompt again, I get the same error. I don't know if it is a version issue. | Please use this instead: conda install -c conda-forge python-lsp-server | 12 | 14
76,031,802 | 2023-4-17 | https://stackoverflow.com/questions/76031802/python-kubernetes-client-equivalent-of-kubectl-api-resources-namespaced-false | Via the CLI, I can use kubectl api-resources --namespaced=false to list all available cluster-scoped resources in a cluster. I am writing a custom operator with the Python Kubernetes Client API, however I can't seem to find anything in the API that allows me to do this. The closest I have found is the following code, which was included as an example in the repository: from kubernetes import client, config def main(): # Configs can be set in Configuration class directly or using helper # utility. If no argument provided, the config will be loaded from # default location. config.load_kube_config() print("Supported APIs (* is preferred version):") print("%-40s %s" % ("core", ",".join(client.CoreApi().get_api_versions().versions))) for api in client.ApisApi().get_api_versions().groups: versions = [] for v in api.versions: name = "" if v.version == api.preferred_version.version and len( api.versions) > 1: name += "*" name += v.version versions.append(name) print("%-40s %s" % (api.name, ",".join(versions))) if __name__ == '__main__': main() Unfortunately, client.ApisApi() doesn't have a get_api_resources() option. Does anybody know of a way that I can list all the api-resources? | I managed to solve the issue. For anyone wondering, the Python client API unfortunately does not have an equivalent function to kubectl api-resources, or kubectl api-resources --namespaced=false. I was able to create my own solution to this using the API, which you can see below. To get similar output to kubectl api-resources --namespaced=false, you would just call getNonNamespacedApiResources. Note that this python program runs inside of a Pod associated with a ServiceAccount, which gives it the necessary permissions. 
def query_api(api_group): SERVICE_TOKEN_FILENAME = "/var/run/secrets/kubernetes.io/serviceaccount/token" SERVICE_CERT_FILENAME = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" KUBERNETES_HOST = "https://%s:%s" % (os.getenv("KUBERNETES_SERVICE_HOST"), os.getenv("KUBERNETES_SERVICE_PORT")) if not os.path.isfile(SERVICE_TOKEN_FILENAME): raise client.rest.ApiException("Service token file does not exists.") with open(SERVICE_TOKEN_FILENAME) as f: token = f.read() if not token: raise client.rest.ApiException("Token file exists but empty.") # set token bearer in header headers = {"Authorization": "Bearer " + token.strip('\n')} if not os.path.isfile(SERVICE_CERT_FILENAME): raise client.rest.ApiException("Service certification file does not exists.") with open(SERVICE_CERT_FILENAME) as f: if not f.read(): raise client.rest.ApiException("Cert file exists but empty.") # query the API response = requests.get( KUBERNETES_HOST + "/apis/" + api_group, headers=headers, verify=config.incluster_config.SERVICE_CERT_FILENAME ).json() return response def extractClusterScopedResources(api_group): api_response_json = query_api(api_group) scoped_resources = [] for obj in api_response_json['resources']: if not obj['namespaced'] and "storageVersionHash" in obj: scoped_resources.append(obj['name']) return scoped_resources def getNonNamespacedApiResources(): apis = [] # handle kubernetes/apis/ for api in client.ApisApi().get_api_versions().groups: for v in api.versions: if v.version == api.preferred_version.version: apis.append(api.name + "/" + v.version) continue api_resources = [] for api_group in apis: api_resources.extend(extractClusterScopedResources(api_group)) # handle kubernetes/api/v1 for api in client.CoreV1Api().get_api_resources().resources: if not api.namespaced and api.storage_version_hash: api_resources.append(api.name) return api_resources | 3 | 4 |
76,040,523 | 2023-4-18 | https://stackoverflow.com/questions/76040523/auto-gpt-command-evaluate-code-returned-error-the-model-gpt-4-does-not-exis | I'm working with Auto-GPT and I got this error: Command evaluate_code returned: Error: The model: `gpt-4` does not exist and it seems like it can't go any further. What should I do? | Update your .env file in your repo so that both of the following are set to gpt-3.5-turbo: SMART_LLM_MODEL=gpt-3.5-turbo FAST_LLM_MODEL=gpt-3.5-turbo This issue is happening because you do not have API access to GPT-4. | 5 | 11
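A quick way to confirm whether your OpenAI API key actually has GPT-4 access is to list the models the key can use. The sketch below assumes the openai Python package with the 0.x-style API that Auto-GPT used at the time; the key value is a placeholder.

import openai

openai.api_key = "sk-..."  # placeholder: your own API key

# List the models this key can call and check for gpt-4 access
available = [m.id for m in openai.Model.list().data]
print("gpt-4 access:", any(name.startswith("gpt-4") for name in available))
print("gpt-3.5-turbo access:", any(name.startswith("gpt-3.5-turbo") for name in available))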
76,046,538 | 2023-4-18 | https://stackoverflow.com/questions/76046538/is-there-a-way-to-check-if-a-dataframe-is-contained-within-another | I am using pandas to check whether two dataframes are contained within each other. The method .isin() is only helpful (i.e., returns True) when labels match (ref: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html), but I want to go further and include cases where the labels don't match. Example: df1: +----+----+----+----+----+ | 3 | 4 | 5 | 6 | 7 | +----+----+----+----+----+ | 11 | 13 | 10 | 15 | 12 | +----+----+----+----+----+ | 8 | 2 | 9 | 0 | 1 | +----+----+----+----+----+ | 14 | 23 | 31 | 21 | 19 | +----+----+----+----+----+ df2: +----+----+ | 13 | 10 | +----+----+ | 2 | 9 | +----+----+ I want the output to be True since df2 is inside df1. Any ideas how to do that using pandas? | You can use numpy's sliding_window_view: from numpy.lib.stride_tricks import sliding_window_view as swv (swv(df1, df2.shape)==df2.to_numpy()).all((-2, -1)).any() Output: True Intermediate: (swv(df1, df2.shape)==df2.to_numpy()).all((-2, -1)) array([[False, False, False, False], [False, True, False, False], # df2 is found in position 1,1 [False, False, False, False]]) using a partial match Example 1: ≥ 75% of matches: from numpy.lib.stride_tricks import sliding_window_view as swv ((swv(df1, df2.shape)==df2.to_numpy()).mean((-2, -1))>=0.75).any() Example 2: ≥ 3 matches: from numpy.lib.stride_tricks import sliding_window_view as swv ((swv(df1, df2.shape)==df2.to_numpy()).sum((-2, -1))>=3).any() Alternative input: df2 = pd.DataFrame([[13, 10], [2, 8]]) | 4 | 4
76,034,389 | 2023-4-17 | https://stackoverflow.com/questions/76034389/google-analytics-is-not-working-on-streamlit-application | I have built a web application using streamlit and hosted it on the Google Cloud Platform (App Engine). The URL is something like https://xxx-11111.uc.r.appspot.com/ which is given for the Stream URL. I enabled Google Analytics 2 days back but apparently, it is not set up correctly. It was given that I need to add in the head tag. This is the code where I added the Google Analytics tag... What is wrong?? def page_header(): st.set_page_config(page_title="xx", page_icon="images/logo.png") header = st.container() with header: # Add banner image logo = Image.open("images/logo.png") st.image(logo, width=300) # Add Google Analytics code to the header ga_code = """ <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-xxxxxx"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-xxxxxx'); </script> """ st.markdown(ga_code, unsafe_allow_html=True) # Define the main function to run the app def main(): # Render the page header page_header() ..... if __name__ == "__main__": main() | One way to implement Google Analytics into your GAE Streamlit app would be to add the GA global site tag JS code to the default /site-packages/streamlit/static/index.html file NOTE: This script should be run prior to running the streamlit server from bs4 import BeautifulSoup import shutil import pathlib import logging import streamlit as st def add_analytics_tag(): # replace G-XXXXXXXXXX to your web app's ID analytics_js = """ <!-- Global site tag (gtag.js) - Google Analytics --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-XXXXXXXXXX'); </script> <div id="G-XXXXXXXXXX"></div> """ analytics_id = "G-XXXXXXXXXX" # Identify html path of streamlit index_path = pathlib.Path(st.__file__).parent / "static" / "index.html" logging.info(f'editing {index_path}') soup = BeautifulSoup(index_path.read_text(), features="html.parser") if not soup.find(id=analytics_id): # if id not found within html file bck_index = index_path.with_suffix('.bck') if bck_index.exists(): shutil.copy(bck_index, index_path) # backup recovery else: shutil.copy(index_path, bck_index) # save backup html = str(soup) new_html = html.replace('<head>', '<head>\n' + analytics_js) index_path.write_text(new_html) # insert analytics tag at top of head | 6 | 4 |
76,012,502 | 2023-4-14 | https://stackoverflow.com/questions/76012502/cant-reach-restapi-fastapi-from-my-flutter-web-cross-origin-request-blocked | I have a Linux server. On it I have two Docker containers. In the first one I am deploying my Flutter web app and in the other one I am running my REST API with FastAPI(). I put both Docker containers in the same network, so the communication should work. I also set origins with origins = ['*'] (wildcard). I reverse proxy my Flutter web app with nginx from the Linux server. I also include *.crt and *.key with nginx for my Flutter web app. Now, obviously, since my Flutter web app uses https, I can't make http calls. When I try to make a call with https, I get the error (from catch): "XMLHttpRequest error", and in the browser console I get: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://172.21.0.2:8070/. (Reason: CORS request did not succeed). Status code: (null). (172.21.0.2 is the IP of the Docker container and 8070 is the port the REST API is running on.) I am new to the REST API world. I normally develop only frontend. But I wanted to give it a try. So I'm sorry if I expressed some things wrong. I have been searching for days but can't find a solution to my problem. I would be grateful for any help! (If I missed some information or you need more, feel free to write in the comments, I will update the question immediately!) Thank you! | I solved my problem. If anyone faces the same problem in the future, here is the answer. My mistake was to call the server with http/https while in the same (Docker) network. So I changed: final Uri tokenUri = Uri.https(urlList[index]['url']!, ''); to final Uri tokenUri = Uri.parse('${urlList[index]['url']!}/'); One week of searching for such an easy solution. | 3 | 4
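For reference, the usual way to express that origins = ['*'] setting on the FastAPI side is with CORSMiddleware. This is only a hedged sketch of that configuration, not the fix the author ultimately used (which was to keep requests same-origin behind the nginx proxy):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow requests from any origin; tighten this list for production use
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)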
76,045,605 | 2023-4-18 | https://stackoverflow.com/questions/76045605/using-a-custom-trained-huggingface-tokenizer | I’ve trained a custom tokenizer using a custom dataset using this code that’s on the documentation. Is there a method for me to add this tokenizer to the hub and to use it as the other tokenizers by calling the AutoTokenizer.from_pretrained() function? If I can’t do that how can I use the tokenizer to train a custom model from scratch? Thanks for your help!!! Here's the code below: from tokenizers import Tokenizer from tokenizers.models import BPE tokenizer = Tokenizer(BPE(unk_token="[UNK]")) from tokenizers.trainers import BpeTrainer trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) from tokenizers.pre_tokenizers import Whitespace tokenizer.pre_tokenizer = Whitespace() folder = 'dataset_unicode' files = [f"/content/drive/MyDrive/{folder}/{split}.txt" for split in ["test", "train", "valid"]] tokenizer.train(files, trainer) from tokenizers.processors import TemplateProcessing tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[ ("[CLS]", tokenizer.token_to_id("[CLS]")), ("[SEP]", tokenizer.token_to_id("[SEP]")), ], ) # I've tried saving it like this but it doesn't work as I expect it: tokenizer.save("data/tokenizer-custom.json") | The AutoTokenizer expects a few files in the directory: awesometokenizer/ tokenizer_config.json special_tokens_map.json tokenizer.json But the default tokenizer.Tokenizer.save() function only saves the vocab file in awesometokenizer/tokenizer.json, open up the json file and compare the ['model']['vocab'] keys to your json from data/tokenizer-custom.json. The simplest way to let AutoTokenizer load .from_pretrained is to follow the answer that @cronoik posted in the comment, using PreTrainedTokenizerFast, i.e. adding a few lines to your existing code: from tokenizers import Tokenizer from tokenizers.models import BPE from tokenizers.trainers import BpeTrainer from tokenizers.pre_tokenizers import Whitespace from tokenizers.processors import TemplateProcessing from transformers import PreTrainedTokenizerFast # <---- Add this line. trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) tokenizer = Tokenizer(BPE(unk_token="[UNK]")) tokenizer.pre_tokenizer = Whitespace() files = ["big.txt"] # e.g. training with https://norvig.com/big.txt tokenizer.train(files, trainer) tokenizer.post_processor = TemplateProcessing( single="[CLS] $A [SEP]", pair="[CLS] $A [SEP] $B:1 [SEP]:1", special_tokens=[ ("[CLS]", tokenizer.token_to_id("[CLS]")), ("[SEP]", tokenizer.token_to_id("[SEP]")), ], ) # Add these lines: # | # | # V awesome_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) awesome_tokenizer.save_pretrained("awesome_tokenizer") Then you can load the trained tokenizer: from transformers import AutoTokenizer auto_loaded_tokenizer = AutoTokenizer.from_pretrained( "awesome_tokenizer", local_files_only=True ) Note: tokenizers though can be pip installed, is a library in Rust with Python bindings | 4 | 4 |
76,039,364 | 2023-4-17 | https://stackoverflow.com/questions/76039364/extract-raw-sql-from-sqlalchemy-with-replaced-parameters | Given a SQL query such as query = """ select some_col from tbl where some_col > :value """ I'm executing this with SQLAlchemy using connection.execute(sa.text(query), {'value' : 5}) Though this does what's expected, I would like to be able to get the raw SQL with the parameters replaced. Meaning I would like to be able to get select some_column from tbl where some_column > 5 I've tried to echo the SQL using: engine = sa.create_engine( '<CONNECTION STRING>', echo=True, ) But this didn't replace the parameters. If there's no way to do this in SQLAlchemy but there is a way using something like psycopg2 (as long as the :value syntax doesn't change), then that would be of interest. | The SQLAlchemy documentation for Rendering Bound Parameters Inline explains how we can use literal_binds: query = "select some_col from tbl where some_col > :value" print( sa.text(query) .bindparams(value=5) .compile(compile_kwargs={"literal_binds": True}) ) # select some_col from tbl where some_col > 5 | 3 | 5
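On the psycopg2 side of the question, a hedged sketch: psycopg2 cursors expose mogrify(), which returns the query after parameter binding. Note that it uses %(name)s placeholders rather than SQLAlchemy's :name style, so the query text would need that small change; the connection string is a placeholder.

import psycopg2

conn = psycopg2.connect("<CONNECTION STRING>")  # placeholder DSN
cur = conn.cursor()

query = "select some_col from tbl where some_col > %(value)s"

# mogrify returns the exact bytes psycopg2 would send to the server
print(cur.mogrify(query, {"value": 5}).decode())
# select some_col from tbl where some_col > 5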
76,047,250 | 2023-4-18 | https://stackoverflow.com/questions/76047250/what-else-goes-in-a-python-src-folder-according-to-actual-or-de-facto-standards | When using src layout rather than flat layout in a Python project, is anything other than the project module expected to live in the src folder? My understanding is that if I added mypkg2 under src in the layout below, and published the result to PyPI, anyone who did a pip install would be able to import mypkg and import mypkg2 (which might be surprising). Am I missing something? project_root_directory ├── pyproject.toml # AND/OR setup.cfg, setup.py ├── ... └── src/ └── mypkg/ ├── __init__.py ├── ... ├── module.py ├── subpkg1/ │ ├── __init__.py │ ├── ... │ └── module1.py └── subpkg2/ ├── __init__.py ├── ... └── module2.py Sample layout from https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#src-layout I haven't been able to find an example of a project with anything else present, nor an explicit instruction not to put anything else in there. I am looking for a PEP or packaging document that answers this question. | In a correct src-layout project, the only things that should be present in the src directory are the top-level import packages and top-level import modules (sub-packages are in the sub-directories of the top-level packages, obviously). By convention each distribution package contains one and only one top-level import package (or top-level import module). And I guess that this convention is the "de facto standard" that is asked for in the question. In the case of the example shown in the question, if the project had both src/mypkg and src/mypkg2 directories, then the project would have 2 top-level import packages, which is unconventional (but possible). Famous example of project not following this convention is setuptools, it has 2 top-level imports: setuptools and pkg_resources. I do not know of any authoritative documentation that clarifies this specific point of the src-layout, the closest I can find is "src layout vs flat layout" on the Python Packaging User Guide. | 7 | 7 |
75,992,698 | 2023-4-12 | https://stackoverflow.com/questions/75992698/how-do-i-click-on-clickable-element-with-selenium-in-shadow-root-closed | An "Agree with the terms" button appears on https://www.sreality.cz/hledani/prodej/domy I am trying to go through that with a .click() using Selenium and Python. The button element is: <button data-testid="button-agree" type="button" class="scmp-btn scmp-btn--default w-button--footer sm:scmp-ml-sm md:scmp-ml-md lg:scmp-ml-dialog">Souhlasím</button> My approach is: driver = webdriver.Chrome() driver.implicitly_wait(20) driver.get("https://www.sreality.cz/hledani/prodej/domy") button = driver.find_element_by_css_selector("button[data-testid='button-agree']") button.click() Any idea what to change to make it work? Thanks! :) | Check the below working workaround solution: driver = webdriver.Chrome() driver.implicitly_wait(10) driver.get("https://www.sreality.cz/hledani/prodej/domy") driver.maximize_window() # Below line creates instance of ActionChains class action = ActionChains(driver) # Below line locates and stores an element which is outside the shadow-root element_outside_shadow = driver.find_element(By.XPATH, "//div[@class='szn-cmp-dialog-container']") # Below 2 lines clicks on the browser at an offset of co-ordinates x=5 and y=5 action.move_to_element_with_offset(element_outside_shadow, 5, 5) action.click() # Below 2 lines presses TAB key 9 times so that pointer moves to "Souhlasím" button and presses ENTER key once action.send_keys(Keys.TAB * 9).perform() action.send_keys(Keys.ENTER).perform() Imports required: from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By NOTE: This is a workaround solution, as there is no direct selenium solution to handle it. | 3 | 3 |
76,043,784 | 2023-4-18 | https://stackoverflow.com/questions/76043784/how-to-add-pydantic-serialization-deserialization-support-for-a-foreign-third-p | Consider a third-party class that doesn't support pydantic serialization, and you're not under control of the source code of that class, i.e., you cannot make it inherit from BaseModel. Let's assume the entire state of that class can be constructed and obtained via its public interface (but the class may have private fields). So in theory we'd be able to write serialization/deserialization functions of that class based on its public interface. Is it somehow possible to use such a class inside a pydantic BaseModel? I.e., the goal would be to somehow arrive at: class MySerializableClass(BaseModel): foreign_class_instance: ForeignClass How could we add serialization/deserialization functions to properly support the foreign_class_instance field? As a concrete let's take for instance a Numpy array as an example of a foreign class that we want to support. By serializing/deserializing based on the "public interface" I mean: For serialization we can use the public interface of np.array to get its data including meta data like dtype and shape. The output of the serialization function could be something like {"dtype": "int", "shape": [3], "data": [1, 2, 3]} (or any other composition of JSON-serializable data). The serialization function would have a signature like serialize(x: np.ndarray) -> object (for lack of the JsonData type). The deserialization function would get this serialized representation, and would construct the np.ndarray instance again on the public interface of np.ndarray, which typically means using its constructor. The signature would be the inverse: deserialize(o: object) -> np.ndarray. My question is: Assuming I can implement these two serialize and deserialize functions just fine like in this example, how can I integrate them into pydantic so that serializing/deserializing a BaseModel implicitly makes use of the two functions. | In general, the process of defining custom field types is described in this section of the documentation. The bare minimum is a method called __get_validators__ returning an iterator of validation methods, the last of which should return an instance of that custom type. That means with third-party classes you will almost certainly need to subclass (or monkey-patch) them. If the type you are dealing with has no built-in JSON serialization support (types like str or list for example are supported out of the box), you also need to sepcify how instances of that type are supposed to be dumped to JSON. This can be done selectively, when calling the json method or configured model-wide via the json_encoders dictionary. The example of the Numpy ndarray is slightly more involved because we need to define a sensible basic constructor for our subclass and decide what type of data we want to support during parsing/validation. 
A very simple setup might look like this: from __future__ import annotations from collections.abc import Callable, Iterator, Mapping from typing import Any import numpy as np from numpy.typing import ArrayLike from pydantic import BaseModel class SerializableNDArray(np.ndarray[Any, Any]): def __new__(cls, array_like: ArrayLike) -> SerializableNDArray: return np.asarray(array_like).view(cls) @classmethod def __get_validators__(cls) -> Iterator[Callable[..., Any]]: yield cls.validate @classmethod def validate(cls, v: Any) -> SerializableNDArray: if isinstance(v, Mapping): return cls(np.array(**v)) return cls(v) def json_dump(self) -> dict[str, Any]: return {"object": self.tolist(), "dtype": str(self.dtype)} class Model(BaseModel): a: SerializableNDArray class Config: json_encoders = { SerializableNDArray: SerializableNDArray.json_dump } Here is a little demo: def main() -> None: test_json_1 = '{"a": [1.0, 2.0, 3.0]}' test_json_2 = '{"a": {"object": [[1, 2], [3, 4]], "dtype": "float32"}}' obj1 = Model.parse_raw(test_json_1) obj2 = Model.parse_raw(test_json_2) print(obj1.json(indent=4)) print(obj2.json(indent=4)) if __name__ == "__main__": main() Output: { "a": { "object": [ 1.0, 2.0, 3.0 ], "dtype": "float64" } } { "a": { "object": [ [ 1.0, 2.0 ], [ 3.0, 4.0 ] ], "dtype": "float32" } } Notice that I decided to make the JSON representation of our custom array type always be an object with the keys object and dtype instead of simply an array of numbers because this allows us to preserve more information (in this example the data type) and the validator accepts a mapping that can be unpacked into the numpy.array function. (So those keys need to be a subset of the parameter names of that function.) I left out the shape deliberately here because it is not a parameter of numpy.array. The shape is inferred from the way the "array-like" object passed to that function is structured (list of lists for example). If the data to validate is not some kind of mapping, the validator will always to try to coerce the value directly via our constructor, i.e. via numpy.asarray. That should give type errors that are nice enough, when invalid data is parsed. Obviously you could design this very differently. You could use a different constructor or a totally different validator that allows you to e.g. pass data that can be unpacked into the numpy.arange function instead (there you could use shape for example). Or you could decide that dumping it simply as an array of numbers is sufficient for your needs. This is just an example, but I think it illustrates the approach. | 4 | 4 |
76,044,951 | 2023-4-18 | https://stackoverflow.com/questions/76044951/how-to-handle-errors-correctly-in-django | I have a question. I know that it is possible to do global exception handling in Python, but how do I do it for all views? Namely, I am interested in DetailView, but others will also come in handy. I understand that you need to give your own examples of solutions, but could you give an example of how this can be done, because I have no idea at all how to do it. I want to do global error handling so that it can be done like this try: current_profile = Profile.objects.get(user_connected_id=current_user_object) except ObjectDoesNotExist SomeERROR....: logger.info('Object user_connected_id not found....') raise ObjectDoesNotExist And if I don't handle the error that appears, then I want it to be handled in the global handler. And if it's not difficult: what should I do with errors in this handler, other than logging? I need the DetailView handler most of all | The methodology here involves a couple of things. First, within those views, you want to make sure you are catching those errors within an except block as your code does. Then you should return an HttpResponseRedirect object redirecting the user to the error page or another page to which you would like the user to return. The next thing you can do is return a particular view for all errors of a type on the site when they hit urls.py. For example, to redirect all 404 errors, add the following code to the base site urls.py handler404 = 'App.views.error_404_view' Then, in the view simply return the custom error page you'd like or perform a new HttpResponseRedirect def error_404_view(request, exception): return render(request, 'error_404.html') | 3 | 2
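Tying this to the DetailView case from the question, here is a hedged sketch (model and field names taken from the question; the view name, template, and lookup value are assumptions) that converts the missing-object case into an Http404 so the site-wide handler404 view from the answer takes over:

import logging

from django.http import Http404
from django.views.generic import DetailView

from .models import Profile  # model name taken from the question

logger = logging.getLogger(__name__)


class ProfileDetailView(DetailView):
    model = Profile
    template_name = "profile_detail.html"  # hypothetical template

    def get_object(self, queryset=None):
        try:
            # field name from the question; lookup value is an assumption
            return Profile.objects.get(user_connected_id=self.request.user.id)
        except Profile.DoesNotExist:
            logger.info("Object user_connected_id not found")
            # Http404 is routed to handler404 (App.views.error_404_view) when DEBUG is False
            raise Http404("Profile not found")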
76,045,518 | 2023-4-18 | https://stackoverflow.com/questions/76045518/finding-the-average-of-differences-between-each-row | Good morning, everyone. I'm trying to try to find the differences between two rows. I'm trying to put a formula together, but I feel like I'm making things too complicated when an easier answer may be available, and my code's not perfect. Here's my example dataset. cols = ['Name', 'Math', 'Science', 'English', "History"] data = [['Tom', 100, 93, 95, 92], ['Nick', 89, 75, 82, 57], ['Julie', 99, 89, 76, 88], ['Sarah', 79, 78, 94, 88]] df = pd.DataFrame(data, columns=cols) df The output is this: My current (and non-working) formula is: students = ['Tom', 'Nick', 'Julie', 'Sarah'] differences = [] def student_diff(student): for col in df.columns[1:]: for classmate in students: differences.append(abs(student[col] - classmate[col])) print (student, differences.mean()) student_diff('Tom') The error is: TypeError: string indices must be integers All in all, I was hoping the output would be something like (for example with Tom): Nick 19.25 | # student to find difference student = 'Tom' # create your mask where the name is the student name mask = df['Name'].eq(student) # concat you masks together and set the index data = pd.concat([df[mask], df[~mask]]).set_index('Name') # get the mean from the differnce abs(data.iloc[0, :] - data.iloc[1:, :]).mean(axis=1) Name Nick 19.25 Julie 7.00 Sarah 10.25 dtype: float64 or if you did want a function def student_diff(df: pd.DataFrame, student: str) -> pd.Series: # create your mask where the name is the student name mask = df['Name'].eq(student) # concat you masks together and set the index data = pd.concat([df[mask], df[~mask]]).set_index('Name') # get the mean from the differnce return abs(data.iloc[0, :] - data.iloc[1:, :]).mean(axis=1) student_diff(df=df, student='Nick') Name Tom 19.25 Julie 15.25 Sarah 14.00 dtype: float64 | 3 | 4 |