question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
74,972,261 | 2022-12-31 | https://stackoverflow.com/questions/74972261/multiple-lists-in-a-for-loop-numpy-shape-command-requiring-list-entry | I created the following for loop for ItemTemplate in ItemTemplates: x = 0 self.needle_images.append(cv.imread(ItemTemplate, cv.IMREAD_UNCHANGED)) self.needle_widths.append(self.needle_images[x].shape[1]) self.needle_heights.append(self.needle_images[x].shape[0]) x = x+1 I originally tried to write the for loop like this: for ItemTemplate in ItemTemplates: self.needle_images.append(cv.imread(ItemTemplate, cv.IMREAD_UNCHANGED)) self.needle_widths.append(self.needle_images.shape[1]) self.needle_heights.append(self.needle_images.shape[0]) I was assuming I didn't need to add the list entries to this code and perhaps there is a better way to do this but my Python skills are very young. The top example is fine and my code runs ok with it, but I am looking to see if there was a better way to accomplish this task. | The last two lines of the loop are always using the last image appended to needle_images. The index of the last item in a list is -1. for ItemTemplate in ItemTemplates: self.needle_images.append(cv.imread(ItemTemplate, cv.IMREAD_UNCHANGED)) self.needle_widths.append(self.needle_images[-1].shape[1]) self.needle_heights.append(self.needle_images[-1].shape[0]) Or just bite the bullet and assign the image to a name at the top of the loop. for ItemTemplate in ItemTemplates: temp = cv.imread(ItemTemplate, cv.IMREAD_UNCHANGED) self.needle_images.append(temp) self.needle_widths.append(temp.shape[1]) self.needle_heights.append(temp.shape[0]) Or even... for ItemTemplate in ItemTemplates: temp = cv.imread(ItemTemplate, cv.IMREAD_UNCHANGED) w,h = temp.shape self.needle_images.append(temp) self.needle_widths.append(w) self.needle_heights.append(h) | 3 | 1 |
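The answer's last variant unpacks `temp.shape` into two values, which only works for single-channel images, since `cv.imread` returns arrays shaped `(height, width[, channels])`. Below is a minimal standalone sketch of the same idea that handles both cases; the function name and return layout are illustrative, not taken from the original post:

```python
import cv2 as cv

def load_needles(paths):
    """Read each template once and collect the images together with their sizes."""
    images, widths, heights = [], [], []
    for path in paths:
        img = cv.imread(path, cv.IMREAD_UNCHANGED)
        h, w = img.shape[:2]  # shape is (height, width) or (height, width, channels)
        images.append(img)
        widths.append(w)
        heights.append(h)
    return images, widths, heights
```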
74,971,657 | 2022-12-31 | https://stackoverflow.com/questions/74971657/split-every-occurrence-of-key-value-pairs-in-a-string-where-the-value-include-on | I have a situation where user can enter commands with optional key value pairs and value may contain spaces .. here are 4 - different form user input where key and value are separated with = sign and values have space: "cmd=create-folder name=SelfServe - Test ride" "cmd=create-folder name=SelfServe - Test ride server=prd" "cmd=create-folder name=cert - Test ride server=dev site=Service" "cmd=create-folder name=cert - Test ride server=dev site=Service permission=locked" Requirement: I am trying to parse this string and split into a dictionary based on the key and value present on a string . If user enter First form of Statement, that wold produce a dictionary like : query_dict = { 'cmd' : 'create-folder', 'name' : 'selfserve - Test ride' } if user enter second form of statement that would produce /add the additional key /value pair query_dict = { 'cmd' : 'create-folder', 'name' : 'selfserve - Test ride', 'server' : 'prd' } if user enter third form of statement that would produce query_dict ={ 'cmd' : 'create-folder', 'name' : 'cert - Test ride', 'server' : 'dev', 'site': 'Service' } forth form produce the dictionary with key/value split like below query_dict ={ 'cmd' : 'create-folder', 'name' : 'cert - Test ride', 'server' : 'dev', 'site': 'Service', 'permission' : 'locked' } -idea is to parse a string where key and value are separated with = symbol and where the values can have one or more space and extract the matching key /value pair . I tried multiple methods to match but unable to figure out a single generic regular expression pattern which can match/extract any string where we have this kind of pattern Appreciate your help. i tried several pattern map based different possible user input but that is not a scalable approach . example : i created three pattern to match three variety of user input but it would be nice if i can have one generic pattern that can match any combination of key=values in a string (i am hard coding the key in the pattern which is not ideal '(cmd=create-folder).*(name=.*).*' , '(cmd=create-pfolder).*(name=.*).*(server=.*).*', '(cmd=create-pfolder).*(name=.*).*(server=.*).*(site=.*)' | I would suggest using split, and then zip to feed the dict constructor: def get_dict(s): parts = re.split(r"\s*(\w+)=", s) return dict(zip(parts[1::2], parts[2::2])) Example runs: print(get_dict("cmd=create-folder name=SelfServe - Test ride")) print(get_dict("cmd=create-folder name=SelfServe - Test ride server=prd")) print(get_dict("cmd=create-folder name=cert - Test ride server=dev site=Service")) print(get_dict("cmd=create-folder name=cert - Test ride server=dev site=Service permission=locked")) Outputs: {'cmd': 'create-folder', 'name': 'SelfServe - Test ride'} {'cmd': 'create-folder', 'name': 'SelfServe - Test ride', 'server': 'prd'} {'cmd': 'create-folder', 'name': 'cert - Test ride', 'server': 'dev', 'site': 'Service'} {'cmd': 'create-folder', 'name': 'cert - Test ride', 'server': 'dev', 'site': 'Service', 'permission': 'locked'} Explanation Using this input as example: "cmd=create-folder name=SelfServe - Test ride" The split regex identifies these parts: "cmd=create-folder name=SelfServe - Test ride" ^^^^ ^^^^^^^^^ The strings that are not matched by it will end up a results, so we have these: "", "create-folder", "SelfServe - Test ride" The first string is empty, because it is what precedes the first match. 
Now, as the regex has a capture group, the string that is captured by that group, is also returned in the result list, at odd indices. So parts ends up like this: ["", "cmd", "create-folder", "name", "SelfServe - Test ride"] The keys we are interested in, occur at odd indices. We can get those with parts[1::2], where 1 is the starting index, and 2 is the step. The corresponding values for those keys occur at even indices, ignoring the empty string at index 0. So we get those with parts[2::2]. With the call to zip, we pair those keys and values together as we want them. Finally, the dict constructor can take an argument with key/value pairs, which is exactly what that zip call provides. | 3 | 4 |
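The same result can also be reached in a single pass with `re.findall`, matching each key and then lazily everything up to the next ` key=` or the end of the string. This is a sketch of an equivalent approach, not the answer's own code:

```python
import re

def get_dict(s):
    # (\w+)=        -> the key
    # (.*?)         -> the value, as short as possible
    # (?=\s+\w+=|$) -> stop right before the next " key=" or at the end of the string
    return dict(re.findall(r"(\w+)=(.*?)(?=\s+\w+=|$)", s))

print(get_dict("cmd=create-folder name=SelfServe - Test ride server=prd"))
# {'cmd': 'create-folder', 'name': 'SelfServe - Test ride', 'server': 'prd'}
```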
74,968,761 | 2022-12-31 | https://stackoverflow.com/questions/74968761/pip3-cant-download-the-latest-tflite-runtime | The current version of tflite-runtime is 2.11.0: https://pypi.org/project/tflite-runtime/ Here is a testing for downloading the tflite-runtime to the tmp folder: mkdir -p /tmp/test cd /tmp/test echo "tflite-runtime == 2.11.0" > ./test.txt pip3 download -r ./test.txt Here is the error: ERROR: Could not find a version that satisfies the requirement tflite-runtime==2.11.0 (from versions: none) ERROR: No matching distribution found for tflite-runtime==2.11.0 Here is the pip3 version: # pip3 --version pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10) What's wrong in the above pip3 download? Why can't it find the latest version? And how to fix? | tflite-runtime 2.11.0 released packages: https://pypi.org/project/tflite-runtime/2.11.0/#files Python 3.7, 3.8 and 3.9. Only Linux, different Intel and ARM 64-bit architectures. No Python 3.10 and no source code. Use Python 3.9 if you don't want to compile from sources. | 7 | 3 |
74,966,861 | 2022-12-31 | https://stackoverflow.com/questions/74966861/activating-python-virtual-environment-on-windows-11 | I am trying to create a venv virtual enviroment for Python in Window's command prompt. I created the enviroment; however, I am having difficulties using it because when I run the "activate" command it is not working. I think the problem is related to that the virtual enviroment does not have a scripts file like other window machines do, but rahter a bin file which has the activate script. When I run the activate command with the bin in the file directory I still get an error. enter image description here I have been trying to solve the problem for the past 4-5 hours and am completely stuck. I tried destroying and reconstructing the virtual enviroment, I tried using different extensions (.bat, .exe, .ps1, and just \activate), and tried using powershell. Please let me know if you have any ideas what I am doing wrong! | Open a command prompt terminal by either searching command prompt in the Windows search bar, or press the Windows Key + R and enter cmd. Create the virtual environment in a desired directory using the following command: python -m venv env This will create a new folder called env inside the directory where you executed the command. You can activate the created virtual environment by running the following command in the same directory where you executed the last command: cd env/Scripts && activate && cd ../../ I hope this helps. | 6 | 9 |
74,967,657 | 2022-12-31 | https://stackoverflow.com/questions/74967657/userwarning-the-grad-attribute-of-a-tensor-that-is-not-a-leaf-tensor-is-being | import torch from torch.autograd import Variable x = Variable(torch.FloatTensor([11.2]), requires_grad=True) y = 2 * x print(x) print(y) print(x.data) print(y.data) print(x.grad_fn) print(y.grad_fn) y.backward() # Calculates the gradients print(x.grad) print(y.grad) Error: C:\Users\donhu\AppData\Local\Temp\ipykernel_9572\106071707.py:2: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten\src\ATen/core/TensorBody.h:485.) print(y.grad) Source code https://github.com/donhuvy/Deep-learning-with-PyTorch-video/blob/master/1.5.variables.ipynb How to fix? | Call y.retain_grad() before calling y.backward(). The reason is because by default PyTorch only populate .grad for leaf variables (variables that aren't results of operations), which is x in your example. To ensure .grad is also populated for non-leaf variables like y, you need to call their .retain_grad() method. Also worth noting that it's a warning rather than an error. | 3 | 6 |
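Putting the answer's suggestion back into the original snippet gives a minimal runnable version; note that `Variable` is no longer needed in current PyTorch, a plain tensor with `requires_grad=True` behaves the same way:

```python
import torch

x = torch.tensor([11.2], requires_grad=True)  # leaf tensor
y = 2 * x                                     # non-leaf tensor

y.retain_grad()   # ask autograd to also populate y.grad
y.backward()

print(x.grad)  # tensor([2.])  (dy/dx = 2)
print(y.grad)  # tensor([1.])  (dy/dy = 1)
```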
74,964,779 | 2022-12-30 | https://stackoverflow.com/questions/74964779/how-do-i-read-extended-annotations-from-annotated | PEP 593 added extended annotations via Annotated. But neither the PEP nor the documentation for Annotated describes how we're supposed to access the underlying annotations. How are we supposed to read the extra annotations stored in the Annotated object? from typing import Annotated class Foo: bar: Annotated[int, "save"] = 5 hints = get_type_hints(Foo, include_extras=True) # {'bar': typing.Annotated[int, 'save']} # Get list of data in Annotated object ??? hints["bar"].get_list_of_annotations() | To avoid accessing dunder names, which are reserved by python (such access should be avoided whenever possible): [...] Any use of __*__ names, in any context, that does not follow explicitly documented use, is subject to breakage without warning. You should use typing.get_args helper. It is actually smarter than getting __args__ attribute because of additional expansion step, see source code for details. It is a public API, so this should be preferred to manual dunder attribute examination. from typing import Annotated, get_type_hints, get_args class Foo: bar: Annotated[int, "save"] = 5 hints = get_type_hints(Foo, include_extras=True) annotations = get_args(hints['bar']) print(annotations) # (<class 'int'>, 'save') | 3 | 4 |
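Since `get_args` returns the underlying type first and the extra annotations after it, one way to split them apart, building on the answer above, is:

```python
from typing import Annotated, get_args, get_type_hints

class Foo:
    bar: Annotated[int, "save"] = 5

hints = get_type_hints(Foo, include_extras=True)
base_type, *metadata = get_args(hints["bar"])
print(base_type)  # <class 'int'>
print(metadata)   # ['save']
```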
74,964,472 | 2022-12-30 | https://stackoverflow.com/questions/74964472/sum-columns-in-numpy-2d-array | I have a 2D NumPy array V: import numpy as np np.random.seed(10) V = np.random.randint(-10, 10, size=(6,8)) This gives V as: [[ -1 -6 5 -10 7 6 7 -2] [ -1 -10 0 -2 -6 9 6 -6] [ 5 1 1 -9 -2 -6 4 7] [ 9 3 -5 3 9 3 2 -9] [ -6 8 3 1 0 -1 5 8] [ 6 -3 1 7 4 -3 1 -9]] Now, I have 2 lists, r1 and r2, containing column indices as follows: r1 = [1, 2, 5] r2 = [3, 4, 7] What I want is to add the columns of V based on the indices pair (r1, r2) and store it in column indices r1. That is, for this case, add columns (1, 3), (2, 4) and (5, 7) and store them respectively in columns 1, 2 and 5 of V. It can be done easily like this: V[:, 1] = V[:, [1,3]].sum(axis=1) V[:, 2] = V[:, [2,4]].sum(axis=1) V[:, 5] = V[:, [5,7]].sum(axis=1) which gives V as: [[ -1 -16 12 -10 7 4 7 -2] [ -1 -12 -6 -2 -6 3 6 -6] [ 5 -8 -1 -9 -2 1 4 7] [ 9 6 4 3 9 -6 2 -9] [ -6 9 3 1 0 7 5 8] [ 6 4 5 7 4 -12 1 -9]] My concern is that is there a way we can do it without loops? Thanks in advance :) | Just add V[:, r2] at V[:, r2], like below: V[:, r1] += V[:, r2] print(V) Output [[ -1 -16 12 -10 7 4 7 -2] [ -1 -12 -6 -2 -6 3 6 -6] [ 5 -8 -1 -9 -2 1 4 7] [ 9 6 4 3 9 -6 2 -9] [ -6 9 3 1 0 7 5 8] [ 6 4 5 7 4 -12 1 -9]] | 5 | 3 |
74,963,990 | 2022-12-30 | https://stackoverflow.com/questions/74963990/different-behavior-for-dict-and-list-in-match-case-check-if-it-is-empty-or-not | I know there are other ways to check if dict is empty or not using match/case (for example, dict(data) if len(data) == 0), but I can't understand why python give different answers for list and dict types while we check for emptiness data = [1, 2] match data: case []: print("empty") case [1, 2]: print("1, 2") case _: print("other") # 1, 2 data = {1: 1, 2: 2} match data: case {}: print("empty") case {1: 1, 2: 2}: print("1, 2") case _: print("other") # empty | There are different patterns used on the match case Python implementation. When a list or tuple is used, the pattern used is the one called Sequence Patterns. Here's an example found on the PEP: match collection: case 1, [x, *others]: print("Got 1 and a nested sequence") case (1, x): print(f"Got 1 and {x}") So let's say that we set collection as [1, 2] The output will be: Got 1 and 2 If we set is as [1, [2, 3]] the output will be: Got 1 and a nested sequence This happens because the Sequence Pattern, tries to match a given sequence on the list. So in the case you passed, it is going to try to match exactly [1, 2]. On the other hand, when a dict is used, the Mapping Patterns is the rule. If you execute this code, you will understand better. config1 = {"router": "CNC", "network": "lan"} config2 = {"router": "MAC"} config3 = {"router": None} config4 = {"network": "lan"} configs = [config1, config2, config3, config4] for config in configs: match config: case {"router": "MAC"}: print("Configuring MAC router") case {"router": "CNC"}: print("Configuring CNC router") case {"router": None} | {}: print("No router available, please check config again") The output is going to be this: Configuring CNC router Configuring MAC router No router avaiable, please cheg config again No router avaiable, please cheg config again So, as you can see, the code actually does not compare using == or is is actually only look on the configs, by the keys defined on the case. So if all the keys and values for the case dict can be found on the config dict, they are going to enter on that case. This happens because the idea is that you do not wanna compare the dict object itself, but check if some keys are matching with what you expecte to execute a certain action. Basically all the keys that are present on the config dict but are not on the cases dicts is going to be ignored (not relevant for the match case) When you put a case with an empty dict, you have no key, values, so any dict passed on the match will be accepted. More information about this subject can be found on the official PEP: https://peps.python.org/pep-0622/#sequence-patterns | 4 | 2 |
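A practical consequence of the mapping-pattern behaviour described above: putting the more specific dict pattern first makes the original example print "1, 2", and an explicit guard is what actually tests for an empty dict. A short sketch:

```python
data = {1: 1, 2: 2}
match data:
    case {1: 1, 2: 2}:   # specific mapping pattern first
        print("1, 2")
    case d if not d:     # a guard is needed to really test for an empty dict
        print("empty")
    case _:
        print("other")
# prints: 1, 2
```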
74,963,651 | 2022-12-30 | https://stackoverflow.com/questions/74963651/how-do-i-get-find-a-word-that-contains-some-specific-letter-and-one-in-particula | Hello everyone and thank you in advance, I am trying to get a all the words in the following list except for "motiu" and "diomar" using regex and python: amfora difamador difamar dimorf dofi fada far farao farda fiar fiord fira firar firma for motiu diomar The word must not contain a letter outside the list [diomarf], but it must contain an "f" I don't know much about regex...I have tried with some, they are getting more complex but I haven't got the solution yet. Some of the expressions I have tried with are: > (?:.*f)(?:.*[diomarf]) > (?:.*[diomarf])(?:.*f) > (?:((?:f)+)(?:[diomarf])*) > (?:((?:[diomarf])+)(?:f)*) > (?:((?:[diomarf])*)((?:f)+)) > (?:(((?:f)+)((?:[diomarf])*))) > (?:((?:f)+((?:[diomarf])*))) The expression with which I think I got the closest result is: (?:(((?:f)+)((?:[diomarf])*))) But it only checks from the first f of the word, for example, for "dimorf" I am only getting the last "f" | ^f[diomarf]*$|^[diomarf]*f[diomarf]*$|^[diomarf]*f$ demo Explanation: ^f[diomarf]*$ : A string either starts with f and then has any number of characters from this list [diomarf] (including 0!) OR ^[diomarf]*f[diomarf]*$ the string has f somewhere in the middle OR ^[diomarf]*f$ f at the end The previous solution I proposed fails when disallowed characters are added to the end of the string (for example diomarfg). Old solution for reference: (?=[diomarf]).*f.* demo here Explanation: (?=[diomarf]) - use positive lookahead to assert that at any point in the string one of the allowed letters is matched. .*f.* - make sure that the letter f is somewhere in the string. | 3 | 2 |
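The three alternations can be collapsed into one pattern: a lookahead demanding at least one `f`, plus a body that only allows the permitted letters. A sketch of that equivalent, more compact check applied to the word list from the question:

```python
import re

words = ["amfora", "difamador", "difamar", "dimorf", "dofi", "fada", "far",
         "farao", "farda", "fiar", "fiord", "fira", "firar", "firma", "for",
         "motiu", "diomar"]

pattern = re.compile(r"(?=.*f)[diomarf]+")   # at least one 'f', only the letters d, i, o, m, a, r, f
matches = [w for w in words if pattern.fullmatch(w)]
print(matches)   # every word except 'motiu' and 'diomar'
```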
74,962,787 | 2022-12-30 | https://stackoverflow.com/questions/74962787/sqlalchemy-module-not-found-despite-definitely-being-installed-with-pipenv | I'm learning to use FastAPI, psycopg2 and SQLAlchemy with python, which has been working fine. Now for some reason whenever I run my web app, the SQLAlchemy module cannot be found. I am running this in a Pipenv, with python 3.11.1 and SQLAlchemy 1.4.45, and running pip freeze shows SQLAlchemy is definitely installed, and my source is definitely my pipenv environment, the same from which I'm running my fastAPI server. I have tried uninstalling and reinstalling SQLAlchemy with Pipenv, and when I run python in interactive mode, it is the expected python version and I'm able to import SQLAlchemy and check sqalalchemy.version . Any ideas why it's saying it can't import when I run FastAPI? Code from my models.py module being imported into main.py: from sqlalchemy import Column, Integer, String, Boolean from app.database import Base class Post(Base): __tablename__ = "posts" id = Column(Integer, primary_key=True, nullable=False) title = Column(String, nullable=False) content = Column(String, nullable=False) published = Column(Boolean, default=True) # timestamp = Column(TIMESTAMP, default=now()) main.py: from fastapi import FastAPI, Response, status, HTTPException, Depends from pydantic import BaseModel import psycopg2 from psycopg2.extras import RealDictCursor import time from app import models from sqlalchemy.orm import Session from app.database import engine, SessionLocal models.Base.metadata.create_all(bind=engine) # FastAPI initialisation app = FastAPI() # function to initialise SQlAlchemy DB session dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() # psycopg2 DB connection initialisation while True: try: conn = psycopg2.connect(host="localhost", dbname="fastapi", user="postgres", password="*********", cursor_factory=RealDictCursor) cursor = conn.cursor() print('Database connection successful.') break except Exception as error: print("Connecting to database failed.") print("Error: ", error) print("Reconnecting after 2 seconds") time.sleep(2) # this class defines the expected fields for the posts extending the BaseModel class # from Pydantic for input validation and exception handling ==> a "schema" class Post(BaseModel): title: str content: str published: bool = True # this list holds posts, with 2 hard coded for testing purposes my_posts = [{"title": "title of post 1", "content": "content of post 1", "id": 1}, {"title": "title of post 2", "content": "content of post 2", "id": 2}] # this small function simply finds posts by id by iterating though the my_posts list def find_post(find_id): for post in my_posts: if post["id"] == find_id: return post def find_index(find_id): try: index = my_posts.index(find_post(find_id)) except: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Post of id: {find_id} not found.") return index # these decorated functions act as routes for FastAPI. # the decorator is used to define the HTTP request verb (e.g. get, post, delete, patch, put), # as well as API endpoints within the app (e.g. "/" is root), # and default HTTP status codes. @app.get("/") async def root(): return {"message": "Hello World"} # "CRUD" (Create, Read, Update, Delete) says to use same endpoint # but with different HTTP request verbs for the different request types. # (e.g. using "/posts" for all four CRUD operations, but using POST, GET, PUT/PATCH, DELETE respectively.) 
@app.get("/posts") def get_data(): cursor.execute("SELECT * FROM posts") posts = cursor.fetchall() print(posts) return {"data": posts} @app.post("/posts", status_code=status.HTTP_201_CREATED) def create_posts(post: Post): cursor.execute("INSERT INTO posts (title, content, published) VALUES (%s, %s, %s) RETURNING *", (post.title, post.content, post.published)) new_post = cursor.fetchone() conn.commit() return {"created post": new_post} @app.delete("/posts/{id}") def delete_post(id: int): cursor.execute("DELETE FROM posts * WHERE id = %s RETURNING *", str(id)) deleted_post = cursor.fetchone() conn.commit() if deleted_post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} not found.") else: print("deleted post:", deleted_post) return Response(status_code=status.HTTP_204_NO_CONTENT) @app.get("/posts/{id}") def get_post(id: int): cursor.execute("SELECT * FROM posts WHERE id = %s", str(id)) post = cursor.fetchone() if post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} was not found.") return {"post_detail": post} @app.put("/posts/{id}") def update_post(id: int, put: Post): cursor.execute("UPDATE posts SET title = %s, content = %s, published= %s WHERE id = %s RETURNING *", (put.title, put.content, put.published, str(id))) updated_post = cursor.fetchone() if updated_post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} was not found.") return {"updated_post_detail": updated_post} @app.get("/sqlalchemy") def test_posts(db: Session = Depends(get_db)): return {"status": "success"} ERROR LOG: louisgreenhalgh@MacBook-Pro ξ° ~/PycharmProjects/FASTAPI ξ° uvicorn app.main:app --reload ξ² β ξ² FASTAPI-3Pf2tu2f INFO: Will watch for changes in these directories: ['/Users/louisgreenhalgh/PycharmProjects/FASTAPI'] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [32662] using WatchFiles Process SpawnProcess-1: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started target(sockets=sockets) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/server.py", line 60, in run return asyncio.run(self.serve(sockets=sockets)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/server.py", line 67, in serve config.load() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/config.py", line 477, in load self.loaded_app = import_from_string(self.app) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 24, in import_from_string raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string module = importlib.import_module(module_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1206, in _gcd_import File "<frozen importlib._bootstrap>", line 1178, in _find_and_load File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/Users/louisgreenhalgh/PycharmProjects/FASTAPI/app/main.py", line 6, in <module> from app import models File "/Users/louisgreenhalgh/PycharmProjects/FASTAPI/app/models.py", line 1, in <module> from sqlalchemy import Column, Integer, String, Boolean ModuleNotFoundError: No module named 'sqlalchemy' Pipenv Graph Output: fastapi==0.88.0 - pydantic [required: >=1.6.2,<2.0.0,!=1.8.1,!=1.8,!=1.7.3,!=1.7.2,!=1.7.1,!=1.7, installed: 1.10.4] - typing-extensions [required: >=4.2.0, installed: 4.4.0] - starlette [required: ==0.22.0, installed: 0.22.0] - anyio [required: >=3.4.0,<5, installed: 3.6.2] - idna [required: >=2.8, installed: 3.4] - sniffio [required: >=1.1, installed: 1.3.0] greenlet==2.0.1 psycopg2-binary==2.9.5 SQLAlchemy==1.4.45 | Most likely issue is that the uvicorn executable is not present in the same python (v)env. When a python process starts, it looks into the location of the binary (uvicorn in this case), determines the python base location (either the same folder as the binary is in, or one above), and finally adds the appropriate site_packages location based on that base location. So in your case, try pip(env) install uvicorn into the same virtual environment | 3 | 2 |
74,957,732 | 2022-12-30 | https://stackoverflow.com/questions/74957732/how-can-i-order-dates-and-show-only-monthyear-on-the-x-axis-in-matplotlib | I would like to improve my bitcoin dataset but I found that the date is not sorted in the right way and want to show only the month and year. How can I do it? data = Bitcoin_Historical['Price'] Date1 = Bitcoin_Historical['Date'] train1 = Bitcoin_Historical[['Date','Price']] #Setting the Date as Index train2 = train1.set_index('Date') train2.sort_index(inplace=True) cols = ['Price'] train2 = train2[cols].apply(lambda x: pd.to_numeric(x.astype(str) .str.replace(',',''), errors='coerce')) print (type(train2)) print (train2.head()) plt.figure(figsize=(15, 5)) plt.plot(train2) plt.xlabel('Date', fontsize=12) plt.xlim(0,20) plt.ylabel('Price', fontsize=12) plt.title("Closing price distribution of bitcoin", fontsize=15) plt.gcf().autofmt_xdate() plt.show() The result shows picture below: It's not ordered and shows all dates. I would like to order by month+year and show only the month name+year. How can that be done? Example of Data: Thank you | I've made the following edits to your code: converted the column Date column as datetime type cleaned up the Price column and converting to float removed the line plt.xlim(0,20) which is causing the output to display 1970 used alternative way to plot, so that the x-axis can be formatted to get monthly tick marks, more info here Please try the code below: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates pd.options.mode.chained_assignment = None Bitcoin_Historical = pd.read_csv('data.csv') train1 = Bitcoin_Historical[['Date','Price']] train1['Date'] = pd.to_datetime(train1['Date'], infer_datetime_format=True, errors='coerce') train1['Price'] = train1['Price'].str.replace(',','').str.replace(' ','').astype(float) train2 = train1.set_index('Date') #Setting the Date as Index train2.sort_index(inplace=True) print (type(train2)) print (train2.head()) ax = train2.plot(figsize=(15, 5)) ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%b')) plt.xlabel('Date', fontsize=12) plt.ylabel('Price', fontsize=12) plt.title("Closing price distribution of bitcoin", fontsize=15) plt.show() Output | 4 | 2 |
74,959,199 | 2022-12-30 | https://stackoverflow.com/questions/74959199/add-new-value-to-specific-position-in-json-using-python | I have a JSON file and need to update with new key value pair. cuurent json: [{'Name': 'AMAZON', 'Type': 'Web', 'eventTimeEpoch': 1611667194}] I need to add a location parameter and update it as "USA".But when try to update it with below code it append to location parameter with value to end. Like below. [{'Name': 'AMAZON', 'Type': 'Web', 'eventTimeEpoch': 1611667194, 'location': 'USA'}] How can I add this location parameter after the Name. Expected output: [{'Name': 'AMAZON', 'location': 'USA', 'Type': 'Web', 'eventTimeEpoch': 1611667194 }] Current code: filename='test.json' jsonFile = open(filename, "r") # Open the JSON file for reading data = json.load(jsonFile) jsonFile.close() data[0]["location"] = "USA" data | First off, your current JSON file is not correctly formatted. JSON needs double quotes not single quotes. While the order of the inserted key-value pairs is guaranteed to be preserved in Python 3.7 above, you don't have any option to "insert" to specific location inside a dictionary.(like list for example.) And people usually don't count on the order of the keys when working with JSON files. You get your values by "keys" anyway. With that being said, you can do something like: import json with open("test.json") as f: data = json.load(f) print(data) new_d = {"Name": data[0].pop("Name"), "location": "USA", **data[0]} print(new_d) This way we created a new dictionary with the desired order. Since "location" key is near to the start of the items, we pop first key-value pair, then insert the "location" key, then unpack the rest with ** operator. | 3 | 4 |
74,958,496 | 2022-12-30 | https://stackoverflow.com/questions/74958496/how-to-transpose-a-list-of-lists | Let ll be a list of lists, and tt a tuple of tuples Input: ll = [["a1","a2"],["b1","b2"],["c1","c2"]] Desired output: tt = (("a1","b1","c1"),("a2","b2","c2")) I have managed to solve it for a list of two-element lists, meaning that the internal list only contained two elements each. def list_of_list_to_tuple_of_tuple(ll): first_elements = [i[0] for i in ll] second_elements = [i[1] for i in ll] new_list = [] new_list.append(tuple(first_elements)) new_list.append(tuple(second_elements)) return tuple(new_list) ll = [["a1","a2"],["b1","b2"],["c1","c2"]] list_of_list_to_tuple_of_tuple(ll) Now, the questions are: Is there any other method to easily accomplish what I have done? Is there any method to easily generalize this algorithm if we have a list of 3 internal lists and each internal list containing n elements? For example: Input: ll = [["a1","a2","a3",..."an"],["b1","b2","b3",..."bn"],["c1","c2","c3",..."cn"]] Desired Output: tt = (("a1","b1","c1"),("a2","b2","c2"),("a3","b3","c3"),...,("an","bn","cn")) | Try this one-liner - tuple(zip(*l)) Example 1 l = [["a1","a2"], ["b1","b2"], ["c1","c2"]] tuple(zip(*l)) (('a1', 'b1', 'c1'), ('a2', 'b2', 'c2')) Example 2 l2 = [["a1","a2","a3","an"], ["b1","b2","b3","bn"], ["c1","c2","c3","cn"]] tuple(zip(*l2)) (('a1', 'b1', 'c1'), ('a2', 'b2', 'c2'), ('a3', 'b3', 'c3'), ('an', 'bn', 'cn')) EXPLANATION The unpacking operator allows you to unpack the list into the sublists, and passes them as individual parameters to zip, as it expects the same. The zip combines the first, second, third ... nth respective elements of each sublist into n tuples object The tuple converts this zip object converts the overall zip object to a tuple. Bonus Intuitively, this operation resembles taking a transpose of a matrix. This can be seen easily if you convert your list of lists to a numpy array and then take a transpose. import numpy as np l = [["a1","a2"],["b1","b2"],["c1","c2"]] arr = np.array(l) transpose = arr.T transpose array([['a1', 'b1', 'c1'], ['a2', 'b2', 'c2']], dtype='<U2') | 3 | 9 |
74,955,725 | 2022-12-29 | https://stackoverflow.com/questions/74955725/getting-the-generic-arguments-of-a-subclass | I have a generic base class and I want to be able to inspect the provided type for it. My approach was using typing.get_args which works like so: from typing import Generic, Tuple, TypeVarTuple, get_args T = TypeVarTuple("T") class Base(Generic[*T]): values: Tuple[*T] Example = Base[int, str] print(get_args(Example)) # (<class 'int'>, <class 'str'>) But when I'm inheriting the class, I'm getting an empty list of parameters like so: class Example2(Base[int, str]): pass print(get_args(Example2)) # () What I actually need is to know what types are expected for the values property. I might have the wrong approach but I've also tried to use typing.get_type_hints which seems to just return Tuple[*T] as the type. So how can I get the typed parameters? Edit: I need to know the types of the class, not the object. | Use get_args with __orig_bases__: print(get_args(Example2.__orig_bases__[0])) # prints "(<class 'int'>, <class 'str'>)" For convenience, you can store the generic type parameters in the __init_subclass__ hook: from typing import Generic, TypeVarTuple, get_args T = TypeVarTuple("T") class Base(Generic[*T]): values: tuple[*T] type_T: tuple[type, ...] def __init_subclass__(cls) -> None: cls.type_T = get_args(cls.__orig_bases__[0]) # type: ignore class Example2(Base[int, str]): pass print(Example2.type_T) # prints "(<class 'int'>, <class 'str'>)" | 4 | 5 |
74,946,632 | 2022-12-29 | https://stackoverflow.com/questions/74946632/jax-jit-compatible-sparse-matrix-slicing | I have a boolean sparse matrix that I represent with row indices and column indices of True values. import numpy as np import jax from jax import numpy as jnp N = 10000 M = 1000 X = np.random.randint(0, 100, size=(N, M)) == 0 # data setup rows, cols = np.where(X == True) rows = jax.device_put(rows) cols = jax.device_put(cols) I want to get a column slice of the matrix like X[:, 3], but just from rows indices and column indices. I managed to do that by using jnp.isin like below, but the problem is that this is not JIT compatible because of the data-dependent shaped array rows[cols == m]. def not_jit_compatible_slice(rows, cols, m): return jnp.isin(jnp.arange(N), rows[cols == m]) I could make it JIT compatible by using jnp.where in the three-argument form, but this operation is much slower than the previous one. def jit_compatible_but_slow_slice(rows, cols, m): return jnp.isin(jnp.arange(N), jnp.where(cols == m, rows, -1)) Is there any fast and JIT compatible solution to acheive the same output? | You can do a bit better than the first answer by using the mode argument of set() to drop out-of-bound indices, eliminating the final slice: out = jnp.zeros(N, bool).at[jnp.where(cols==3, rows, N)].set(True, mode='drop') | 3 | 3 |
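Wrapping the accepted one-liner so it can itself be jitted is a natural next step; this sketch assumes the number of rows is passed as a static argument, and the helper name and signature are illustrative:

```python
from functools import partial

import jax
import jax.numpy as jnp

@partial(jax.jit, static_argnames="n_rows")
def column_slice(rows, cols, m, n_rows):
    # indices equal to n_rows are out of bounds and silently dropped by mode="drop"
    return jnp.zeros(n_rows, bool).at[jnp.where(cols == m, rows, n_rows)].set(True, mode="drop")
```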
74,941,717 | 2022-12-28 | https://stackoverflow.com/questions/74941717/what-would-a-python-list-nested-parser-look-like-in-pyparsing | I would like to understand how to use pyparsing to parse something like a nested Python list. This is a question to understand pyparsing. Solutions that circumvent the problem because the list of the example might look like JSON or Python itself should not prevent the usage of pyparsing. So before people start throwing json and literal_eval at me let's consider a string and result that looks like this: Input: {1,2,3,{4,5}} Expected Output (Python list): [1,2,3,[4,5]] I currently have this code but the output does not parse the nested list import pyparsing print( pyparsing.delimited_list( pyparsing.Word(pyparsing.nums) | pyparsing.nested_expr("{", "}") ) .parse_string("{1,2,3,{4,5}}") .as_list() ) # [['1,2,3,', ['4,5']]] There is pretty much the same question here already but this one was circumvented by using json parsing: Python parse comma seperated nested brackets using Pyparsing | Thanks to the answer from Xiddoc I was able to slightly adjust the answer to also work when the expression starts with a list (no idea why the solution with nested_expr does not work) import pyparsing as pp expr = pp.Forward() group_start, group_end = map(pp.Suppress, r"{}") number = pp.Word(pp.nums).setParseAction(lambda s, l, t: int(t[0])) nested_list = pp.Group(group_start + expr[...] + group_end) expr <<= pp.delimited_list(number | nested_list) print(expr.parse_string(r"{{1},2,3,{4,5}}", parse_all=True).as_list()[0]) # [[1],2,3,[4,5]] | 3 | 0 |
74,949,455 | 2022-12-29 | https://stackoverflow.com/questions/74949455/combine-date-and-time-inputs-in-streamlit-with-dataframe-time-column | I have a df that has a column 'Time' in seconds: Time 1 2 3 4 I want the user to input the date with a timestamp (eg format: 25/09/2022 12:30:00). Then, I need to add a new column 'DateTime' which combines the user input datetime with my 'Time' column. The 'DateTime' column should look like this: DateTime 25/09/2022 12:30:01 25/09/2022 12:30:02 25/09/2022 12:30:03 25/09/2022 12:30:04 I managed to do this in python, where the user input is on the terminal, however, I would like to have this in Streamlit. From the documentation, there is currently no possibility to input date with a timestamp in Streamlit, unless you enter them separately, as follows: start_date = st.date_input('Enter start date', value=datetime.datetime(2019,7,6)) start_time = st.time_input('Enter start time', datetime.time(8, 45)) So, this gives the user the possibility to enter the date and time separately, however I don't know how to derive my 'DateTime' column and add it to the df. Appreciate any advice on how to accomplish this. | You can use the .combine() function from pandas to combine your start_date with start_time, after you have accomplished that. Make a new df named DateTime and loop through your Time df to concatenate the seconds to DateTime, after the concatenation, format the DateTime. You can then drop Time column after haven looped through it. Example: # Your df that contains "Time" column df = pd.DataFrame({"Time":[1, 2, 3, 4]}) start_date = st.date_input('Enter start date', value=datetime.datetime(2019,7,6)) start_time = st.time_input('Enter start time', datetime.time(8, 45)) start_datetime = datetime.datetime.combine(start_date, start_time) df["DateTime"] = [start_datetime + datetime.timedelta(seconds=time) for time in df["Time"]] df["DateTime"] = [date.strftime("%d/%m/%Y %H:%M:%S") for date in df["DateTime"]] df = df.drop(columns=["Time"]) st.dataframe(df) Output: DateTime 0 06/07/2019 08:45:01 1 06/07/2019 08:45:02 2 06/07/2019 08:45:03 3 06/07/2019 08:45:04 | 3 | 2 |
74,948,525 | 2022-12-29 | https://stackoverflow.com/questions/74948525/futurewarning-save-is-not-part-of-the-public-api-in-python | I am using Python to convert Pandas df to .xlsx (in Plotly-Dash app.). All working well so far but with this warning tho: "FutureWarning: save is not part of the public API, usage can give unexpected results and will be removed in a future version" How should I modify the code below in order to keep its functionality and stability in future? Thanks! writer = pd.ExcelWriter("File.xlsx", engine = "xlsxwriter") workbook = writer.book df.to_excel(writer, sheet_name = 'Sheet', index = False) writer.save() | just replace save with close. writer = pd.ExcelWriter("File.xlsx", engine = "xlsxwriter") workbook = writer.book df.to_excel(writer, sheet_name = 'Sheet', index = False) writer.close() | 23 | 35 |
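Another option that sidesteps the deprecation entirely is to use the writer as a context manager, so neither `save()` nor `close()` has to be called explicitly. A small sketch with a throwaway DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

with pd.ExcelWriter("File.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="Sheet", index=False)
# the file is written when the with-block exits
```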
74,945,819 | 2022-12-28 | https://stackoverflow.com/questions/74945819/draw-shapes-on-top-of-networkx-graph | Given an existing networkx graph import networkx as nx import numpy as np np.random.seed(123) graph = nx.erdos_renyi_graph(5, 0.3, seed=123, directed=True) nx.draw_networkx(graph) or import networkx as nx G = nx.path_graph(4) nx.spring_layout(G) nx.draw_networkx(G) how can you draw a red circle on top of (in the same position as) one of the nodes, like the node labeled 1? | To be able to draw a networkx graph, each node needs to be assigned a position. By default, nx.spring_layout() is used to calculate positions when calling nx.draw_networkx(), but these positions aren't stored. They are recalculated each time the function is drawn, except when the positions are explicitly added as a parameter. Therefore, you can calculate these positions beforehand, and then use these to plot circles: import matplotlib.pyplot as plt from matplotlib.colors import to_rgba import networkx as nx import numpy as np np.random.seed(123) graph = nx.erdos_renyi_graph(5, 0.3, seed=123, directed=True) pos = nx.spring_layout(graph) nx.draw_networkx(graph, pos=pos) ax = plt.gca() for node_id, color in zip([1, 4], ['crimson', 'limegreen']): ax.add_patch(plt.Circle(pos[node_id], 0.15, facecolor=to_rgba(color, alpha=0.2), edgecolor=color)) ax.set_aspect('equal', 'datalim') # equal aspect ratio is needed to show circles undistorted plt.show() | 4 | 5 |
74,944,224 | 2022-12-28 | https://stackoverflow.com/questions/74944224/add-a-value-to-a-list-of-paired-values | I have an array that has pairs of numbers representing row, col values in a model domain. I am trying to add the layer value to have a list of lay, row, col. I have an array rowcol: array([(25, 65), (25, 66), (25, 67), (25, 68), (26, 65), (26, 66), (26, 67), (26, 68), (26, 69), (27, 66), (27, 67), (27, 68), (27, 69), (28, 67), (28, 68)], dtype=object) and I want to add an 8 to each pair so it looks like array([(8, 25, 65), (8, 25, 66), (8, 25, 67), (8, 25, 68), (8, 26, 65), (8, 26, 66), (8, 26, 67), (8, 26, 68), (8. 26, 69), (8, 27, 66), (8, 27, 67), (8, 27, 68), (8, 27, 69), (8, 28, 67), (8, 28, 68)], dtype=object) I created a new array (layer) that was the same length as rowcol and zipped the 2 with: layrowcol = list(zip(layer, rowcol)) and ended up with: [(8, (25, 65)), (8, (25, 66)), (8, (25, 67)), (8, (25, 68)), (8, (26, 65)), (8, (26, 66)), (8, (26, 67)), (8, (26, 68)), (8, (26, 69)), (8, (27, 66)), (8, (27, 67)), (8, (27, 68)), (8, (27, 69)), (8, (28, 67)), (8, (28, 68))] So it sort of worked and yet didn't quite. Is there a way to combine them and leave out the unwanted parentheses or some better way to add the layer value to each pair without using zip(). Any help is appreciated. | You can use numpy.insert. >>> import numpy as np >>> a = np.array([(25, 65), (25, 66), (25, 67), (25, 68), (26, 65), (26, 66),(26, 67), (26, 68), (26, 69), (27, 66), (27, 67), (27, 68),(27, 69), (28, 67), (28, 68)], dtype=object) >>> b = np.insert(a, 0, 8, axis=1) Output: array([[8, 25, 65], [8, 25, 66], [8, 25, 67], [8, 25, 68], [8, 26, 65], [8, 26, 66], [8, 26, 67], [8, 26, 68], [8, 26, 69], [8, 27, 66], [8, 27, 67], [8, 27, 68], [8, 27, 69], [8, 28, 67], [8, 28, 68]], dtype=object) If you want back to the list of tuples. >>> list(map(tuple, b)) [(8, 25, 65), (8, 25, 66), (8, 25, 67), (8, 25, 68), (8, 26, 65), (8, 26, 66), (8, 26, 67), (8, 26, 68), (8, 26, 69), (8, 27, 66), (8, 27, 67), (8, 27, 68), (8, 27, 69), (8, 28, 67), (8, 28, 68)] | 3 | 2 |
74,943,259 | 2022-12-28 | https://stackoverflow.com/questions/74943259/minmax-scaling-on-numpy-array-multiple-dimensions | How to minmax normalize in the most efficient way, a XD-numpy array in "columns" of each 2D matrix of the array. For example with a 3D-array : a = np.array([[[ 0, 10], [ 20, 30]], [[ 40, 50], [ 60, 70]], [[ 80, 90], [100, 110]]]) into the normalized array : b = np.array([[[0., 0.], [1., 1.]], [[0., 0.], [1., 1.]], [[0., 0.], [1., 1.]]]) | With sklearn.preprocessing.minmax_scale + numpy.apply_along_axis single applying: from sklearn.preprocessing import minmax_scale a = np.array([[[0, 10], [20, 30]], [[40, 50], [60, 70]], [[80, 90], [100, 110]]]) a_scaled = np.apply_along_axis(minmax_scale, 1, a) # a_scaled [[[0. 0.] [1. 1.]] [[0. 0.] [1. 1.]] [[0. 0.] [1. 1.]]] | 3 | 2 |
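The same column-wise scaling can be written directly in NumPy without sklearn, by taking the per-column min and max of each 2D matrix (axis 1) with `keepdims`. A sketch that assumes no column is constant (otherwise the division hits 0/0):

```python
import numpy as np

a = np.array([[[0, 10], [20, 30]],
              [[40, 50], [60, 70]],
              [[80, 90], [100, 110]]], dtype=float)

mn = a.min(axis=1, keepdims=True)   # per-matrix, per-column minimum
mx = a.max(axis=1, keepdims=True)   # per-matrix, per-column maximum
b = (a - mn) / (mx - mn)
print(b)  # each 2x2 block becomes [[0., 0.], [1., 1.]], matching the expected output
```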
74,941,669 | 2022-12-28 | https://stackoverflow.com/questions/74941669/how-to-interpret-the-output-of-statsmodels-model-summary-for-multivariate-line | I'm using the statsmodels library to check for the impact of confounding variables on a dependent variable by performing multivariate linear regression: model = ols(f'{metric}_diff ~ {" + ".join(confounding_variable_names)}', data=df).fit() This is how my data looks like (pasted only 2 rows): Age Sex Experience using a gamepad (1-4) Experience using a VR headset (1-4) Experience using hand tracking (1-3) Experience using controllers in VR (1-3) Glasses ID_1 ID_2 Method_1 Method_2 ID_controller ID_handTracking CorrectGestureCounter_controller CorrectGestureCounter_handTracking IncorrectGestureCounter_controller IncorrectGestureCounter_handTracking IDs ID_K_1_3 25 Female 4 3 1 2 Yes K_1 K_3 controller handTracking K_1 K_3 21 34 5 2 ID_K_4_5 19 Male 4 2 1 2 Yes K_4 K_5 controller handTracking K_4 K_5 21 36 14 17 When I execute model.summary() I get output like this: OLS Regression Results ====================================================================================== Dep. Variable: CorrectGestureCounter_diff R-squared: 0.477 Model: OLS Adj. R-squared: 0.249 Method: Least Squares F-statistic: 2.088 Date: Wed, 28 Dec 2022 Prob (F-statistic): 0.105 Time: 15:29:41 Log-Likelihood: -73.565 No. Observations: 24 AIC: 163.1 Df Residuals: 16 BIC: 172.6 Df Model: 7 Covariance Type: nonrobust ========================================================================================================== coef std err t P>|t| [0.025 0.975] ---------------------------------------------------------------------------------------------------------- Intercept -24.6404 9.326 -2.642 0.018 -44.410 -4.871 Sex[T.Male] -7.3225 3.170 -2.310 0.035 -14.043 -0.602 Glasses[T.Yes] -2.4210 2.995 -0.808 0.431 -8.771 3.929 Age 0.2957 0.183 1.613 0.126 -0.093 0.684 Experience_using_a_gamepad_1_4 1.8810 1.853 1.015 0.325 -2.047 5.809 Experience_using_a_VR_headset_1_4 0.9559 3.213 0.297 0.770 -5.856 7.768 Experience_using_hand_tracking_1_3 -2.4689 3.633 -0.680 0.506 -10.170 5.232 Experience_using_controllers_in_VR_1_3 2.3592 4.840 0.487 0.633 -7.902 12.620 ============================================================================== Omnibus: 0.621 Durbin-Watson: 2.566 Prob(Omnibus): 0.733 Jarque-Bera (JB): 0.702 Skew: -0.277 Prob(JB): 0.704 Kurtosis: 2.371 Cond. No. 205. ============================================================================== What do the [T.Male] or [T.Yes] next to Sex and Glasses mean? How should I interpret this? Also why is Intercept added next to my variables? Should I care about it in the context of confounding variables? | This is more of a stats question but I'll do my best to help. A multivariate regression is of the form: Where, Y, B, and, U are vectors associated with the dependent variable, coefficients, and error terms respectively. X then is the design matrix that houses all of your predictor variables. Such as Age, Glasses, etc. Onto your question of the intercept, the above equation can be written as: Thus from this, we can determine that "beta naught" is an intercept that does not depend on any of your predictor variables that is to say that just like in y=mx+b basic slope formula-speak, that beta naught term is the intercept that your regression is showing. Meaning that if all other terms are zero, your response variable would start at -24.6404. 
This is sort of the base value of your regression, meaning this term is added to each and every prediction. As for the other variables i.e. glasses and sex..you basically have what is called a "dummy variable" that is to say: Where I(t) is an indicator function indicating if that's true or false, so your x vectors corresponding to Age and Sex are binary vectors. Thus in your example, Male (T.male) is encoded as a 1, and having glasses (T.Yes) is also a 1. Thus female and no glasses is a zero. Thus the interpretation is, if you are a male and wear glasses, add -7.3225 and -2.4210 respectively, else add nothing (because anything times zero is zero). Hope that helped! I can't say much about your specific use case because I don't know exactly what statistical questions you have but this is at least a quick crash course in understanding the output of your regression. | 3 | 5 |
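Written out with the reported coefficients (rounded to two decimals), the fitted equation for CorrectGestureCounter_diff that the answer is describing is, with $\mathbb{1}[\cdot]$ denoting the 0/1 dummy for the bracketed condition:

$$
\widehat{\text{diff}} = -24.64 \;-\; 7.32\,\mathbb{1}[\text{Sex}=\text{Male}] \;-\; 2.42\,\mathbb{1}[\text{Glasses}=\text{Yes}] \;+\; 0.30\,\text{Age} \;+\; 1.88\,\text{Gamepad} \;+\; 0.96\,\text{VR headset} \;-\; 2.47\,\text{Hand tracking} \;+\; 2.36\,\text{Controllers in VR}
$$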
74,940,964 | 2022-12-28 | https://stackoverflow.com/questions/74940964/how-to-extend-sqlalchemy-base-class-with-a-static-method | I have multiple classes similar to the following: class Weather(Base): __tablename__ = "Weather" id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) temperature = Column(Integer) humidity = Column(Integer) wind_speed = Column(Float) wind_direction = Column(String) I want to add a method df() that returns me the Pandas dataframe of that table. I know I can write it like this: class Weather(Base): __tablename__ = "Weather" id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) temperature = Column(Integer) humidity = Column(Integer) wind_speed = Column(Float) wind_direction = Column(String) @staticmethod def df(): with engine.connect() as conn: return pd.read_sql_table(Weather.__tablename__ , conn) But I want to implement this for every table. I guess if I can extend the Base class with this method I should be able to implement it once and use it in every class. Everything I have tried has failed because I do not have access to __tablename__ attribute. SOLUTION I ended up with a mix of both answers. I have used the first method proposed by @snakecharmerb (it allows to introduce the change without modifying the rest of the code) with the @classmethod proposed by @RomanPerekhrest (which is the bit I was missing). class MyBase: __tablename__ = None @classmethod def df(cls): with engine.connect() as conn: return pd.read_sql_table(cls.__tablename__ , conn) Base = declarative_base(cls=MyBase) | Declare an auxiliary class (say DfBase) with classmethod df(cls) having the desired behavior. Then each derived class will access its __tablename__ attribute seamlessly via cls object which refers to the derived class itself. class DfBase: __tablename__ = None @classmethod def df(cls): with engine.connect() as conn: return pd.read_sql_table(cls.__tablename__ , conn) class Weather(Base, DfBase): __tablename__ = "Weather" ... | 3 | 2 |
74,941,247 | 2022-12-28 | https://stackoverflow.com/questions/74941247/mypy-using-other-pyhton-version-as-in-venv-positional-only-parameters-are-only | MyPy thinks it has to check for Python <3.8 when instead it should use 3.10 As you can see, Python3.10 is active (myvenv) gitpod /workspace/myfolder (mybranch) $ python --version Python 3.10.7 however mypy think its <3.8? (myvenv) gitpod /workspace/myfolder (mybranch) $ mypy -p my_folder_with_code /workspace/.pyenv_mirror/poetry/virtualenvs/myenv/lib/python3.10/site-packages/numpy/__init__.pyi:641: error: Positional-only parameters are only supported in Python 3.8 and greater Found 1 error in 1 file (errors prevented further checking) even mypy --python-version 3.10 -p my_folder_with_code produces the same error This happens only in this platform (gitpod). On other devices it runs fine (so no error in code) I googled around but did found what i'm looking for... can somebody help? | Ok, i found it out miself. Apparently it was a bug in mypy. Update MyPy and it works again pip install -U mypy Or in my case poetry update mypy | 5 | 6 |
74,940,265 | 2022-12-28 | https://stackoverflow.com/questions/74940265/apply-custom-function-on-all-columns-increase-efficiency | I apply this function def calculate_recency_for_one_column(column: pd.Series) -> int: """Returns the inverse position of the last non-zero value in a pd.Series of numerics. If the last value is non-zero, returns 1. If all values are non-zero, returns 0.""" non_zero_values_of_col = column[column.astype(bool)] if non_zero_values_of_col.empty: return 0 return len(column) - non_zero_values_of_col.index[-1] to all columns of this example dataframe df = pd.DataFrame(np.random.binomial(n=1, p=0.001, size=[1000000]).reshape((1000,1000))) by using df.apply(lambda column: calculate_recency_for_one_column(column),axis=0) The result is: 0 436 1 0 2 624 3 0 ... 996 155 997 715 998 442 999 163 Length: 1000, dtype: int64 Everything works fine, but my programm has to do this operation often, so I need a more efficient alternative. Does anybody have an idea how to make this faster? I think calculate_recency_for_one_column() is efficient enough and the df.apply() has the most potential for improvement. Here a as benchmark (100 reps): >> timeit.timeit(lambda: df.apply(lambda column: calculate_recency_for_one_column(column),axis=0), number=100) 14.700050864834338 Update Mustafa's answer: >> timeit.timeit(lambda: pd.Series(np.where(df.eq(0).all(), 0, len(df) - df[::-1].idxmax())), number=100) 0.8847485752776265 padu's answer: >> timeit.timeit(lambda: df.apply(calculate_recency_for_one_column_numpy, raw=True, axis=0), number=100) 0.8892530500888824 | You can treat columns not as Series objects but as numpy arrays. To do this, simply specify the raw=True parameter in the apply method. also need to slightly change the original function. import time import numpy as np import pandas as pd def calculate_recency_for_one_column(column: np.ndarray) -> int: """Returns the inverse position of the last non-zero value in a np.ndarray of numerics. If the last value is non-zero, returns 1. If all values are non-zero, returns 0.""" non_zero_values_of_col = np.nonzero(column)[0] if not non_zero_values_of_col.any(): return 0 return len(column) - non_zero_values_of_col[-1] df = pd.DataFrame(np.random.binomial(n=1, p=0.001, size=[1000000]).reshape((1000,1000))) start = time.perf_counter() res = df.apply(calculate_recency_for_one_column, raw=True) print(f'time took {time.perf_counter() - start:.3f} s.') Out: 0.005 s. | 3 | 3 |
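The same computation can also be done in one shot on the underlying array: reverse the rows, take `argmax` of the boolean matrix per column (which gives the distance from the end to the last non-zero value), and add 1. A sketch of that idea, reusing the df setup from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.binomial(n=1, p=0.001, size=[1000000]).reshape((1000, 1000)))

arr = df.to_numpy().astype(bool)
rev_argmax = arr[::-1].argmax(axis=0)   # rows counted from the end up to the last non-zero value
recency = pd.Series(np.where(arr.any(axis=0), rev_argmax + 1, 0), index=df.columns)
```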
74,939,164 | 2022-12-28 | https://stackoverflow.com/questions/74939164/i-want-to-validate-password-for-user-input-in-fastapi-python | i need a password validation in fastapi python, in this when user signup and create a password and passowrd are too sort not capital letter, special character etc. than fastapi give validation error i make a password validation code in python but i don't know how to use in fastapi def validate_password(s): l, u, p, d = 0, 0, 0, 0 capitalalphabets="ABCDEFGHIJKLMNOPQRSTUVWXYZ" smallalphabets="abcdefghijklmnopqrstuvwxyz" specialchar=""" ~`!@#$%^&*()_-+={[}]|\:;"'<,>.?/ """ digits="0123456789" if (len(s) >= 8): for i in s: # counting lowercase alphabets if (i in smallalphabets): l+=1 # counting uppercase alphabets if (i in capitalalphabets): u+=1 # counting digits if (i in digits): d+=1 # counting the mentioned special characters if(i in specialchar): p+=1 if (l>=1 and u>=1 and p>=1 and d>=1 and l+p+u+d==len(s)): print("Valid Password") else: print("Invalid Password") s = input("Enter the password: ") validate_password(s) | You can import validator from Pydantic and fill it by your field name of your schema (in this case "password"). Usage in your schema file: from pydantic import BaseModel, validator class User(BaseModel): password: str @validator("password") def validate_password(cls, password, **kwargs): # Put your validations here return password For this problem, a better solution is using regex for password validation and using regex in your Pydantic schema. Example of strong password regex validation: from pydantic import BaseModel, Field password_regex = "((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[\W]).{8,64})" class User(BaseModel): password: str = Field(..., regex=password_regex) | 3 | 6 |
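Filling in the "# Put your validations here" placeholder with rules along the lines of the original script (the exact character classes are an approximation), each failure raised as a ValueError so FastAPI turns it into a 422 response — a sketch using the same pydantic-v1-style validator as the answer:

```python
import re

from pydantic import BaseModel, validator


class User(BaseModel):
    password: str

    @validator("password")
    def validate_password(cls, password):
        if len(password) < 8:
            raise ValueError("password must be at least 8 characters long")
        if not re.search(r"[a-z]", password):
            raise ValueError("password must contain a lowercase letter")
        if not re.search(r"[A-Z]", password):
            raise ValueError("password must contain an uppercase letter")
        if not re.search(r"\d", password):
            raise ValueError("password must contain a digit")
        if not re.search(r"[^\w\s]", password):
            raise ValueError("password must contain a special character")
        return password
```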
74,938,890 | 2022-12-28 | https://stackoverflow.com/questions/74938890/transform-n-columns-into-rows | i am looking for a way to disaggregate data from a single row in a pandas df. My data looks like this edit: n stands for an unspecified number, e.g. in my working dataset I have 8 plots giving me 8 x 2 = 16 columns I would like to transform. data = { 'key':['k1', 'k2'], 'plot_name_1':['name', 'name'], 'plot_area_1':[1,2], 'plot_name_2':['name', 'name'], 'plot_area_2':[1,2], 'plot_name_n':['name', 'name'], 'plot_area_n':[1,2] } df = pd.DataFrame(data) And I would like to end up here, adding an extra column to identify the plot number: data = { 'key':['k1','k1','k1', 'k2', 'k2', 'k2'], 'plot_number':['1', '2', 'n','1', '2', 'n'], 'plot_name':['name', 'name','name', 'name','name', 'name'], 'plot_area':[1,2,1,2,1,2], } df = pd.DataFrame(data) | pd.wide_to_long can do this: In [160]: pd.wide_to_long(df, stubnames=["plot_name", "plot_area"], i="key", j="plot_number", sep="_", suffix=r"(?:\d+|n)").reset_index() Out[160]: key plot_number plot_name plot_area 0 k1 1 name 1 1 k2 1 name 2 2 k1 2 name 1 3 k2 2 name 2 4 k1 n name 1 5 k2 n name 2 where "stubnames" are the common column prefix names to capture "sep" is the separator after those prefixes "_" in your case "suffix" is what's expected after the separator one or more digits, or literal "n" in your case; if "n" was symbolic, you can have suffix=r"\d+" there "i" argument is the "index" (i.e., the identifier variables) "j" signifies the name under which the suffixes are gathered. As an aside, we need to paranthesise the regex when multiple suffixes are possible because of the way pandas uses the suffix under the hood when constructing the regex: regex = rf"^{re.escape(stub)}{re.escape(sep)}{suffix}$" We see that suffix is interpolated directly, and an alternator (i.e., |) in it will see the left side as, e.g., not only \d+ but also what comes from stub & sep, too. | 3 | 5 |
74,936,196 | 2022-12-28 | https://stackoverflow.com/questions/74936196/how-should-i-organize-my-path-operations-in-fastapi | I am creating an application with FastAPI and so far it goes like this: But I'm having a problem with the endpoints. The /api/items/filter route has two query parameters: name and category. However, it gives me the impression that it is being taken as if it were api/items/{user_id}/filter, since when I do the validation in the documentation it throws me an error saying that I have not passed a value for user_id. (Also, previously it asked me to be authenticated (the only route that needed authentication was api/items/{user_id}. The problems are fixed when I define this endpoint first as shown below: Why is this happening? Is there a concept that I am not clear? | Ordering your endpoints matters! Endpoints are matched in order they are declared in your FastAPI object. Let say you have only two endpoints, in this order: api/items/{user_id} api/items/filter In this order, when you request endpoint api/items/user_a, your request will be routed to (1) api/items/{user_id}. However, if you request api/items/filter, this also will be routed to (1) api/items/{user_id}! That is because filter is a match for {user_id}, and since this endpoint is evaluated before the second endpoint is evaluated for a match, the second endpoint is not evaluated at all. That is also why you are asked for authorization; you think you are requesting endpoint 2, but your request is actually routed to endpoint 1, with path parameter {user_id} = "filter". So, ordering your endpoints is important, and it is just where in your application you are defining them. See here in the docs. | 4 | 4 |
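A minimal sketch of the fix implied by the answer — declare the fixed-path route before the parameterised one (route paths mirror the question; the handler bodies are placeholders):

```python
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

# Declared first, so a request to /api/items/filter is matched here.
@app.get("/api/items/filter")
def filter_items(name: Optional[str] = None, category: Optional[str] = None):
    return {"name": name, "category": category}

# Declared second: only reached when the path segment is not "filter".
# If this route came first, /api/items/filter would arrive here with user_id="filter".
@app.get("/api/items/{user_id}")
def get_items_for_user(user_id: str):
    return {"user_id": user_id}
```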
74,877,580 | 2022-12-21 | https://stackoverflow.com/questions/74877580/discover-missing-module-using-command-line-dll-load-failed-error | On Windows, when we try to import a .pyd file, and a DLL that the .pyd depends on cannot be found, we get this traceback: Traceback (most recent call last): ... ImportError: DLL load failed: The specified module could not be found. When this happens, often one has to resort to a graphical tool like Dependencies to figure out what is the name of the missing module. How can I obtain the missing module name via the command-line? Context: often we get this error in CI, and it would be easier to login via SSH to find out the missing module name, rather than having to log via GUI. | First, let's chose a concrete example: NumPy's _multiarray_umath*.pyd (from Python 3.9 (pc064)). Note that I'll be reusing this console: [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q074877580]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Scripts\python.exe" Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> >>> os.getpid() 12788 >>> >>> from numpy.core import _multiarray_umath as _mu >>> >>> _mu <module 'numpy.core._multiarray_umath' from 'e:\\Work\\Dev\\VEnvs\\py_pc064_03.09_test0\\lib\\site-packages\\numpy\\core\\_multiarray_umath.cp39-win_amd64.pyd'> >>> ^Z [prompt]> [prompt]> :: Backup %PATH% [prompt]> set _PATH=%PATH% To make things as generic as possible, that .pyd depends on a custom .dll (OpenBLAS): Here's a snapshot of the above (Python) process: Notice where the dependent .dll was loaded from (2 rows below our (selected) .pyd). Now, back to the question: there are a bunch of tools that can do that. But it's important to mention that no matter what tool you use, will (most likely) depend on the PATH environment variable contents (in the 1st image, the dependent .dll (and others) was not found). Check [MS.Learn]: Dynamic-Link Library Search Order for more details about .dlls. As a note, since (some) tools generate a lot of output, I'll be filtering it out (using commands like FindStr (Grep)), only showing the relevant parts, in order to avoid filling the answer with junk. 1. [GitHub]: lucasg/Dependencies Besides the GUI application that you mentioned (DependenciesGui.exe), there's a command line tool as well next to it: Dependencies.exe: [prompt]> [prompt]> :: Restore %PATH% [prompt]> set PATH=%_PATH% [prompt]> [prompt]> "f:\Install\pc064\LucasG\DependencyWalkerPolitistTexan\Version\Dependencies.exe" -h Dependencies.exe : command line tool for dumping dependencies and various utilities. Usage : Dependencies.exe [OPTIONS] <FILE> Options : -h -help : display this help -json : activate json output. -cache : load and use binary cache in order to prevent dll file locking. -depth : limit recursion depth when analysing loaded modules or dependency chain. Default value is infinite. -apisets : dump the system's ApiSet schema (api set dll -> host dll) -apisetsdll : dump the ApiSet schema from apisetschema <FILE> (api set dll -> host dll) -knowndll : dump all the system's known dlls (x86 and x64) -manifest : dump <FILE> embedded manifest, if it exists. -sxsentries : dump all of <FILE>'s sxs dependencies. 
-imports : dump <FILE> imports -exports : dump <FILE> exports -modules : dump <FILE> resolved modules -chain : dump <FILE> whole dependency chain [prompt]> [prompt]> "f:\Install\pc064\LucasG\DependencyWalkerPolitistTexan\Version\Dependencies.exe" -modules "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd" | findstr "libopenblas" [NOT_FOUND] libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll : [prompt]> [prompt]> set PATH=%_PATH%;e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\.libs [prompt]> [prompt]> "f:\Install\pc064\LucasG\DependencyWalkerPolitistTexan\Version\Dependencies.exe" -modules "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd" | findstr "libopenblas" [Environment] libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll : e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\.libs\libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll Side note - as seen in [SO]: How to run a fortran skript with ctypes? (@CristiFati's answer), sometimes (for some reason unknown to me) it doesn't show the exports (the GUI, at least). 2. Dependency Walker Although it's no longer maintained, it's a very nice tool and before Dependencies it was the best I could find. I also used it for [SO]: How to build a DLL version of libjpeg 9b? (@CristiFati's answer) (somewhere at the end). The drawback is that it's spitting the output in a file, so an additional step is required: [prompt]> [prompt]> :: Restore %PATH% [prompt]> set PATH=%_PATH% [prompt]> [prompt]> dir /b [prompt]> [prompt]> :: Help not available in console (/? will open GUI) [prompt]> [prompt]> "c:\Install\pc064\Depends\DependencyWalkerPolitistTexan\Version\depends.exe" /c /ot:_mu0.txt "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd" [prompt]> type _mu0.txt | findstr -i "libopenblas" [ ? ] LIBOPENBLAS.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.GFORTRAN-WIN_AMD64.DLL [ ? ] LIBOPENBLAS.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.GFORTRAN-WIN_AMD64.DLL Error opening file. The system cannot find the file specified (2). [prompt]> [prompt]> set PATH=%_PATH%;e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\.libs [prompt]> [prompt]> "c:\Install\pc064\Depends\DependencyWalkerPolitistTexan\Version\depends.exe" /c /ot:_mu1.txt "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd" [prompt]> type _mu1.txt | findstr -i "libopenblas" [ 6] e:\work\dev\venvs\py_pc064_03.09_test0\lib\site-packages\numpy\.libs\LIBOPENBLAS.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.GFORTRAN-WIN_AMD64.DLL [ 6] e:\work\dev\venvs\py_pc064_03.09_test0\lib\site-packages\numpy\.libs\LIBOPENBLAS.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.GFORTRAN-WIN_AMD64.DLL 2022/11/30 14:57 2022/11/20 00:44 35,695,412 A 0x0220BC27 0x0220BC27 x64 Console None 0x00000000622C0000 Unknown 0x01E88000 Not Loaded N/A N/A 0.0 2.30 4.0 5.2 3. [MS.Learn]: DUMPBIN Reference Part of VStudio. 
I am only listing it as a reference, because it can display a .dll dependents, but not whether they can be loaded (and if yes, where from): [prompt]> [prompt]> :: Restore %PATH% [prompt]> set PATH=%_PATH% [prompt]> [prompt]> "c:\Install\pc032\Microsoft\VisualStudioCommunity\2019\VC\Auxiliary\Build\vcvarsall.bat" x64 > nul [prompt]> [prompt]> dumpbin /DEPENDENTS "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd" Microsoft (R) COFF/PE Dumper Version 14.29.30147.0 Copyright (C) Microsoft Corporation. All rights reserved. Dump of file e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Lib\site-packages\numpy\core\_multiarray_umath.cp39-win_amd64.pyd File Type: DLL Image has the following dependencies: libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll python39.dll KERNEL32.dll VCRUNTIME140.dll api-ms-win-crt-math-l1-1-0.dll api-ms-win-crt-heap-l1-1-0.dll api-ms-win-crt-stdio-l1-1-0.dll api-ms-win-crt-string-l1-1-0.dll api-ms-win-crt-environment-l1-1-0.dll api-ms-win-crt-runtime-l1-1-0.dll api-ms-win-crt-convert-l1-1-0.dll api-ms-win-crt-time-l1-1-0.dll api-ms-win-crt-utility-l1-1-0.dll api-ms-win-crt-locale-l1-1-0.dll Summary 40000 .data 18000 .pdata 64000 .rdata 3000 .reloc 1000 .rsrc 1F3000 .text 4. Nix emulators Invoke [Man7]: LDD(1). I guess this can be a favorite, since it's leaning towards Nix world (where these kind of things are easier) and you also mentioned SSH connection. I'll be exemplifying on MSYS2, but same thing is achievable from others (Cygwin, maybe MinGW, ...). [cfati@cfati-5510-0:/e/Work/Dev/StackOverflow/q074877580]> ~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit prompt]> [064bit prompt]> uname -a MSYS_NT-10.0-19045 cfati-5510-0 3.4.3-dirty.x86_64 2022-12-19 20:20 UTC x86_64 Msys [064bit prompt]> [064bit prompt]> _PATH="${PATH}" [064bit prompt]> [064bit prompt]> ls "/e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/core/_multiarray_umath.cp39-win_amd64.pyd" /e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/core/_multiarray_umath.cp39-win_amd64.pyd [064bit prompt]> [064bit prompt]> ldd "/e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/core/_multiarray_umath.cp39-win_amd64.pyd" ldd: /e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/core/_multiarray_umath.cp39-win_amd64.pyd: Bad file descriptor [064bit prompt]> [064bit prompt]> # Change extension [064bit prompt]> cp "/e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/core/_multiarray_umath.cp39-win_amd64.pyd" ./_mu.dll [064bit prompt]> ls _mu.dll _mu0.txt _mu1.txt [064bit prompt]> file _mu.dll _mu.dll: PE32+ executable (DLL) (GUI) x86-64, for MS Windows [064bit prompt]> [064bit prompt]> ldd _mu.dll ntdll.dll => /c/WINDOWS/SYSTEM32/ntdll.dll (0x7ff8ba930000) KERNEL32.DLL => /c/WINDOWS/System32/KERNEL32.DLL (0x7ff8ba320000) KERNELBASE.dll => /c/WINDOWS/System32/KERNELBASE.dll (0x7ff8b8070000) msvcrt.dll => /c/WINDOWS/System32/msvcrt.dll (0x7ff8b8f40000) _mu.dll => /e/Work/Dev/StackOverflow/q074877580/_mu.dll (0x7ff86ab50000) ucrtbase.dll => /c/Windows/System32/ucrtbase.dll (0x7ff8b8840000) vcruntime140.dll => /c/Windows/System32/vcruntime140.dll (0x7ff8a0980000) libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll => not found python39.dll => not found api-ms-win-crt-math-l1-1-0.dll => not found api-ms-win-crt-heap-l1-1-0.dll => not found api-ms-win-crt-stdio-l1-1-0.dll => not found api-ms-win-crt-string-l1-1-0.dll => not found 
api-ms-win-crt-environment-l1-1-0.dll => not found api-ms-win-crt-runtime-l1-1-0.dll => not found api-ms-win-crt-convert-l1-1-0.dll => not found api-ms-win-crt-time-l1-1-0.dll => not found api-ms-win-crt-utility-l1-1-0.dll => not found api-ms-win-crt-locale-l1-1-0.dll => not found [064bit prompt]> [064bit prompt]> PATH="${_PATH}:/e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/.libs" [064bit prompt]> [064bit prompt]> ldd _mu.dll ntdll.dll => /c/WINDOWS/SYSTEM32/ntdll.dll (0x7ff8ba930000) KERNEL32.DLL => /c/WINDOWS/System32/KERNEL32.DLL (0x7ff8ba320000) KERNELBASE.dll => /c/WINDOWS/System32/KERNELBASE.dll (0x7ff8b8070000) msvcrt.dll => /c/WINDOWS/System32/msvcrt.dll (0x7ff8b8f40000) _mu.dll => /e/Work/Dev/StackOverflow/q074877580/_mu.dll (0x7ff86ab50000) ucrtbase.dll => /c/Windows/System32/ucrtbase.dll (0x7ff8b8840000) vcruntime140.dll => /c/Windows/System32/vcruntime140.dll (0x7ff8a0980000) libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll => /e/Work/Dev/VEnvs/py_pc064_03.09_test0/Lib/site-packages/numpy/.libs/libopenblas.FB5AE2TYXYH2IJRDKGDGQ3XBKLKTF43H.gfortran-win_amd64.dll (0x622c0000) user32.dll => /c/Windows/System32/user32.dll (0x7ff8b92f0000) win32u.dll => /c/Windows/System32/win32u.dll (0x7ff8b8940000) gdi32.dll => /c/Windows/System32/gdi32.dll (0x7ff8b94a0000) gdi32full.dll => /c/Windows/System32/gdi32full.dll (0x7ff8b8400000) msvcp_win.dll => /c/Windows/System32/msvcp_win.dll (0x7ff8b8610000) python39.dll => not found api-ms-win-crt-math-l1-1-0.dll => not found api-ms-win-crt-heap-l1-1-0.dll => not found api-ms-win-crt-stdio-l1-1-0.dll => not found api-ms-win-crt-string-l1-1-0.dll => not found api-ms-win-crt-environment-l1-1-0.dll => not found api-ms-win-crt-runtime-l1-1-0.dll => not found api-ms-win-crt-convert-l1-1-0.dll => not found api-ms-win-crt-time-l1-1-0.dll => not found api-ms-win-crt-utility-l1-1-0.dll => not found api-ms-win-crt-locale-l1-1-0.dll => not found Of course, there can be more tools that I am not aware of (or if .dlls coming from .NET are involved). Related (more or less): [SO]: Can't import dll module in Python (@CristiFati's answer) [SO]: Python Ctypes - loading dll throws OSError: [WinError 193] %1 is not a valid Win32 application (@CristiFati's answer) [SO]: Load a DLL with dependencies in Python (@CristiFati's answer) [SO]: How to check for DLL dependency? [SO]: C DLL loads in C++ program, not in python Ctypes (@MarkTolonen's answer) [SO]: ImportError: DLL load failed while importing _ctypes : The specified module could not be found (@CristiFati's answer) [SO]: Can't get FontForge to import as a module in a custom Python script (@CristiFati's answer) | 7 | 4 |
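Once one of the tools above has revealed which directory the missing DLL lives in, the usual Python-side remedy (Python 3.8+, Windows) is os.add_dll_directory before importing the extension. A hedged sketch — the directory and module name below are placeholders, not values from the answer:

```python
import importlib
import os
import sys

# Placeholders -- substitute the real folder and extension-module name.
DLL_DIR = r"C:\path\to\folder\containing\the\missing\dll"
MODULE_NAME = "my_extension"

if sys.platform == "win32" and os.path.isdir(DLL_DIR):
    # Since Python 3.8, PATH is no longer searched for an extension module's
    # dependent DLLs; directories must be registered explicitly.
    os.add_dll_directory(DLL_DIR)

module = importlib.import_module(MODULE_NAME)
print(module)
```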
74,904,389 | 2022-12-23 | https://stackoverflow.com/questions/74904389/how-to-check-if-pyspark-dataframe-is-empty-quickly | I'm trying to check if my pyspark dataframe is empty and I have tried different ways to do that, like: df.count() == 0 df.rdd.isEmpty() df.first().isEmpty() But all this solutions are to slow, taking up to 2 minutes to run. How can I quicly check if my pyspark dataframe is empty or not? Do anyone have a solution for that? Thank you in advance! | The best way to check if your dataframe is empty or not after reading a table or at any point in time is by using limit(1) first which will reduce the number of rows to only 1 and will increase the speed of all the operation that you are going to do for dataframe checks. df.limit(1).count() == 0 df.limit(1).rdd.isEmpty() df.limit(1).take() If you are just doing a data dependency check on a table and just want to know if table has data or not, it's always best to just apply limit 1 while reading from table itself for e.g. df = spark.sql("select * from <table> limit 1") With that being said about the efficiency of checking dataframe is empty or not, now coming on to which is the fastest way of doing it is using .rdd.isEmpty() compared to count(), first() or take(1) Also if you see the backend implementation of first() and take(1) it's completely implemented on top of collect() which is mostly costly and should only be used when extremely necessary. The below-mentioned time is based on reading a parquet file with 2390491 records and having 138 columns. >>> df.count() 2390491 >>> len(df.columns) 138 Note: These are the time taken after applying .limit(1) to the dataframe for checking whether the dataframe is empty or not. Also lastly using, df.rdd.isEmpty() took the least amount of time 29ms after reducing the number of rows to 1. Hope that helps..!! :) UPDATE: If you are using Spark >= 3.3, now you can directly use, df.isEmpty() This is the fastest of all for checking an empty data frame in Spark >= 3.3 | 4 | 3 |
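A small helper condensing the answer's recommendation — use the built-in DataFrame.isEmpty on Spark >= 3.3 and fall back to limit(1) plus rdd.isEmpty() otherwise (the toy DataFrame only exists to make the sketch runnable):

```python
from pyspark.sql import SparkSession

def dataframe_is_empty(df) -> bool:
    """Cheap emptiness check for a Spark DataFrame."""
    if hasattr(df, "isEmpty"):            # built-in since Spark 3.3
        return df.isEmpty()
    # Older versions: reduce to at most one row first, then check the RDD.
    return df.limit(1).rdd.isEmpty()

if __name__ == "__main__":
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(1, "a")], ["id", "value"])
    print(dataframe_is_empty(df))                    # False
    print(dataframe_is_empty(df.where("id > 99")))   # True
```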
74,895,640 | 2022-12-23 | https://stackoverflow.com/questions/74895640/how-to-do-regression-simple-linear-for-example-in-polars-select-or-groupby-con | I am using polars in place of pandas. I am quite amazed by the speed and lazy computation/evaluation. Right now, there are a lot of methods on lazy dataframe, but they can only drive me so far. So, I am wondering what is the best way to use polars in combination with other tools to achieve more complicated operations, such as regression/model fitting. To be more specific, I will give an example involving linear regression. Assume I have a polars dataframe with columns day, y, x1 and x2, and I want to generate a series, which is the residual of regressing y on x1 and x2 group by day. I have included the code example as follows and how it can be solved using pandas and statsmodels. How can I get the same result with the most efficiency using idiomatic polars? import pandas as pd import statsmodels.api as sm def regress_resid(df, yvar, xvars): result = sm.OLS(df[yvar], sm.add_constant(df[xvars])).fit() return result.resid df = pd.DataFrame( { "day": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], "y": [1, 6, 3, 2, 8, 4, 5, 2, 7, 3], "x1": [1, 8, 2, 3, 5, 2, 1, 2, 7, 3], "x2": [8, 5, 3, 6, 3, 7, 3, 2, 9, 1], } ) df.groupby("day").apply(regress_resid, "y", ["x1", "x2"]) # day # 1 0 0.772431 # 1 -0.689233 # 2 -1.167210 # 3 -0.827896 # 4 1.911909 # 2 5 -0.851691 # 6 1.719451 # 7 -1.167727 # 8 0.354871 # 9 -0.054905 Thanks for your help. | If you want to pass multiple columns to a function, you have to pack them into a Struct as polars expression always map from Series -> Series. Because polars does not use numpy memory which statsmodels does, you must convert the polars types to_numpy. This is often free in case of 1D structures. Finally, the function should not return a numpy array, but a polars Series instead, so we convert the result. import polars as pl from functools import partial import statsmodels.api as sm def regress_resid(s: pl.Series, yvar: str, xvars: list[str]) -> pl.Series: df = s.struct.unnest() yvar = df[yvar].to_numpy() xvars = df[xvars].to_numpy() result = sm.OLS(yvar, sm.add_constant(xvars)).fit() return pl.Series(result.resid) df = pl.DataFrame( { "day": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], "y": [1, 6, 3, 2, 8, 4, 5, 2, 7, 3], "x1": [1, 8, 2, 3, 5, 2, 1, 2, 7, 3], "x2": [8, 5, 3, 6, 3, 7, 3, 2, 9, 1], } ) (df.group_by("day") .agg( pl.struct("y", "x1", "x2").map_elements(partial(regress_resid, yvar="y", xvars=["x1", "x2"])) ) ) | 11 | 9 |
74,902,695 | 2022-12-23 | https://stackoverflow.com/questions/74902695/multiple-aggregations-on-multiple-columns-in-python-polars | Checking out how to implement binning with Python polars, I can easily calculate aggregates for individual columns: import polars as pl import numpy as np t, v = np.arange(0, 100, 2), np.arange(0, 100, 2) df = pl.DataFrame({"t": t, "v0": v, "v1": v}) df = df.with_columns((pl.datetime(2022,10,30) + pl.duration(seconds=df["t"])).alias("datetime")).drop("t") df.group_by_dynamic("datetime", every="10s").agg(pl.col("v0").mean()) shape: (10, 2) βββββββββββββββββββββββ¬βββββββ β datetime β v0 β β --- β --- β β datetime[ΞΌs] β f64 β βββββββββββββββββββββββͺβββββββ‘ β 2022-10-30 00:00:00 β 4.0 β β 2022-10-30 00:00:10 β 14.0 β β 2022-10-30 00:00:20 β 24.0 β β 2022-10-30 00:00:30 β 34.0 β β ... β ... β or calculate multiple aggregations like df.group_by_dynamic("datetime", every="10s").agg( pl.col("v0").mean().alias("v0_binmean"), pl.col("v0").count().alias("v0_bincount") ) βββββββββββββββββββββββ¬βββββββββββββ¬ββββββββββββββ β datetime β v0_binmean β v0_bincount β β --- β --- β --- β β datetime[ΞΌs] β f64 β u32 β βββββββββββββββββββββββͺβββββββββββββͺββββββββββββββ‘ β 2022-10-30 00:00:00 β 4.0 β 5 β β 2022-10-30 00:00:10 β 14.0 β 5 β β 2022-10-30 00:00:20 β 24.0 β 5 β β 2022-10-30 00:00:30 β 34.0 β 5 β β ... β ... β ... β or calculate one aggregation for multiple columns like cols = [c for c in df.columns if "datetime" not in c] df.group_by_dynamic("datetime", every="10s").agg( pl.col(f"{c}").mean().alias(f"{c}_binmean") for c in cols ) βββββββββββββββββββββββ¬βββββββββββββ¬βββββββββββββ β datetime β v0_binmean β v1_binmean β β --- β --- β --- β β datetime[ΞΌs] β f64 β f64 β βββββββββββββββββββββββͺβββββββββββββͺβββββββββββββ‘ β 2022-10-30 00:00:00 β 4.0 β 4.0 β β 2022-10-30 00:00:10 β 14.0 β 14.0 β β 2022-10-30 00:00:20 β 24.0 β 24.0 β β 2022-10-30 00:00:30 β 34.0 β 34.0 β β ... β ... β ... β However, combining both approaches fails! df.group_by_dynamic("datetime", every="10s").agg( [ pl.col(f"{c}").mean().alias(f"{c}_binmean"), pl.col(f"{c}").count().alias(f"{c}_bincount") ] for c in cols ) DuplicateError: column with name 'literal' has more than one occurrences Is there a "polarustic" approach to calculate multiple statistical parameters for multiple (all) columns of the dataframe in one go? related, pandas-specific: Python pandas groupby aggregate on multiple columns | There are various ways of selecting multiple columns "at once" in polars: df.select(pl.all()).columns # ['v0', 'v1', 'datetime'] df.select(pl.col("v0", "v1")).columns # by name(s) # ['v0', 'v1'] df.select(pl.exclude("datetime")).columns # by exclusion # ['v0', 'v1'] The output column names can be controlled using the .name.* methods e.g. 
name.suffix() df.select(pl.exclude("datetime").mean().name.suffix("_binmean")) shape: (1, 2) ββββββββββββββ¬βββββββββββββ β v0_binmean β v1_binmean β β --- β --- β β f64 β f64 β ββββββββββββββͺβββββββββββββ‘ β 49.0 β 49.0 β ββββββββββββββ΄βββββββββββββ As such, we can rewrite your example using: df.group_by_dynamic("datetime", every="10s").agg( pl.exclude("datetime").mean().name.suffix("_binmean"), pl.exclude("datetime").count().name.suffix("_bincount") ) shape: (10, 5) βββββββββββββββββββββββ¬βββββββββββββ¬βββββββββββββ¬ββββββββββββββ¬ββββββββββββββ β datetime β v0_binmean β v1_binmean β v0_bincount β v1_bincount β β --- β --- β --- β --- β --- β β datetime[ΞΌs] β f64 β f64 β u32 β u32 β βββββββββββββββββββββββͺβββββββββββββͺβββββββββββββͺββββββββββββββͺββββββββββββββ‘ β 2022-10-30 00:00:00 β 4.0 β 4.0 β 5 β 5 β β 2022-10-30 00:00:10 β 14.0 β 14.0 β 5 β 5 β β 2022-10-30 00:00:20 β 24.0 β 24.0 β 5 β 5 β β 2022-10-30 00:00:30 β 34.0 β 34.0 β 5 β 5 β β 2022-10-30 00:00:40 β 44.0 β 44.0 β 5 β 5 β β 2022-10-30 00:00:50 β 54.0 β 54.0 β 5 β 5 β β 2022-10-30 00:01:00 β 64.0 β 64.0 β 5 β 5 β β 2022-10-30 00:01:10 β 74.0 β 74.0 β 5 β 5 β β 2022-10-30 00:01:20 β 84.0 β 84.0 β 5 β 5 β β 2022-10-30 00:01:30 β 94.0 β 94.0 β 5 β 5 β βββββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββ΄ββββββββββββββ΄ββββββββββββββ | 3 | 3 |
74,917,772 | 2022-12-26 | https://stackoverflow.com/questions/74917772/how-to-make-an-empty-tensor-in-pytorch | In python, we can make an empty list easily by doing a = []. I want to do a similar thing but with Pytorch tensors. If you want to know why I need that, I want to get all of the data inside a given dataloader (to create another customer dataloader). Having an empty tensor can help me gather all of the data inside a tensor using a for-loop. This is a sudo code for it. all_data_tensor = # An empty tensor for data in dataloader: all_data_tensor = torch.cat((all_data_tensor, data), 0) Is there any way to do this? | We can do this using torch.empty. But notice torch.empty needs dimensions and we should give 0 to the first dimension to have an empty tensor. The code will be like this: # suppose the data generated by the dataloader has the size of (batch, 25) all_data_tensor = torch.empty((0, 25), dtype=torch.float32) # first dimension should be zero. for data in dataloader: all_data_tensor = torch.cat((all_data_tensor, data), 0) | 9 | 8 |
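A related pattern worth noting (not part of the accepted answer): because torch.cat re-allocates on every iteration, collecting the batches in a Python list and concatenating once at the end is usually cheaper for large loaders. Sketch with a dummy dataloader producing (batch, 25) tensors:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataloader standing in for the real one.
dataset = TensorDataset(torch.randn(100, 25))
dataloader = DataLoader(dataset, batch_size=10)

batches = []
for (data,) in dataloader:
    batches.append(data)          # just collect references, no copying yet

all_data_tensor = torch.cat(batches, dim=0)   # single allocation at the end
print(all_data_tensor.shape)                  # torch.Size([100, 25])
```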
74,917,051 | 2022-12-26 | https://stackoverflow.com/questions/74917051/tensorflow-error-on-macbook-m1-pro-notfounderror-graph-execution-error | I've installed Tensorflow on a Macbook Pro M1 Max Pro by first using Anaconda to install the dependencies: conda install -c apple tensorflow-deps Then after, I install the Tensorflow distribution that is specific for the M1 architecture and additionally a toolkit that works with the Metal GPUs: pip install tensorflow-metal tensorflow-macos I then write a very simple feedforward architecture with some dummy training and validation data to see if I can execute a training session: from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import Adam from tensorflow.keras import layers import numpy as np model = Sequential([layers.Input((3, 1)), layers.LSTM(64), layers.Dense(32, activation='relu'), layers.Dense(32, activation='relu'), layers.Dense(1)]) model.compile(loss='mse', optimizer=Adam(learning_rate=0.001), metrics=['mean_absolute_error']) X_train = np.random.rand(100,3) y_train = np.random.rand(100) X_val = np.random.rand(100,3) y_val = np.random.rand(100) model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100) When I execute this, I get a slew of errors, the origin being a NotFoundError: Graph execution error. I assume this has something to do with the computational graph of the network that Tensorflow is setting up for me, based on my Sequential definition specified before compilation and training: File ~/test.py:20 18 X_val = np.random.rand(100,3) 19 y_val = np.random.rand(100) ---> 20 model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100) File ~/anaconda3/envs/cv/lib/python3.8/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) 67 filtered_tb = _process_traceback_frames(e.__traceback__) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb File ~/anaconda3/envs/cv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 50 try: 51 ctx.ensure_initialized() ---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, 53 inputs, attrs, num_outputs) 54 except core._NotOkStatusException as e: 55 if name is not None: NotFoundError: Graph execution error: Detected at node 'StatefulPartitionedCall_7' defined at (most recent call last): File "/Users/rphan/anaconda3/envs/cv/bin/ipython", line 8, in <module> sys.exit(start_ipython()) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/__init__.py", line 123, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/traitlets/config/application.py", line 1041, in launch_instance app.start() File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/ipapp.py", line 318, in start self.shell.mainloop() File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 685, in mainloop self.interact() File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/terminal/interactiveshell.py", line 678, in interact self.run_cell(code, store_history=True) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2940, in run_cell result = self._run_cell( File 
"/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2995, in _run_cell return runner(coro) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner coro.send(None) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3194, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3373, in run_ast_nodes if await self.run_code(code, result, async_=asy): File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-0ed839f9b556>", line 1, in <module> get_ipython().run_line_magic('run', 'test.py') File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2364, in run_line_magic result = fn(*args, **kwargs) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/magics/execution.py", line 829, in run run() File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/magics/execution.py", line 814, in run runner(filename, prog_ns, prog_ns, File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2797, in safe_execfile py3compat.execfile( File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/IPython/utils/py3compat.py", line 55, in execfile exec(compiler(f.read(), fname, "exec"), glob, loc) File "/Users/rphan/test.py", line 20, in <module> model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler return fn(*args, **kwargs) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1650, in fit tmp_logs = self.train_function(iterator) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function return step_function(self, iterator) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step outputs = model.train_step(data) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/engine/training.py", line 1027, in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize self.apply_gradients(grads_and_vars) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients return super().apply_gradients(grads_and_vars, name=name) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 634, in apply_gradients iteration = self._internal_apply_gradients(grads_and_vars) File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1166, in _internal_apply_gradients return 
tf.__internal__.distribute.interim.maybe_merge_call( File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1216, in _distributed_apply_gradients_fn distribution.extended.update( File "/Users/rphan/anaconda3/envs/cv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1211, in apply_grad_to_update_var return self._update_step_xla(grad, var, id(self._var_key(var))) Node: 'StatefulPartitionedCall_7' could not find registered platform with id: 0x1056be9e0 [[{{node StatefulPartitionedCall_7}}]] [Op:__inference_train_function_4146] I have no further insight on what this Graph execution error means. Has someone seen these errors before? This seems to be a very simple network and I can't seem to understand why the training doesn't execute. | After extensive searching, this is due to the dependencies with Anaconda compared to the Tensorflow version installed via pip: conda list | grep tensorflow tensorflow-deps 2.9.0 0 apple tensorflow-estimator 2.11.0 pypi_0 pypi tensorflow-macos 2.11.0 pypi_0 pypi tensorflow-metal 0.7.0 pypi_0 pypi The version of Tensorflow I have installed does not match the Tensorflow dependencies, hence the error. The solution was to downgrade to the same version as the dependencies as well as downgrade tensorflow-metal: pip install tensorflow-metal==0.5.0 pip install tensorflow-macos==2.9.0 Consulting the tensorflow-metal plugin documentation, tensorflow-macos 2.9.0 is the last known working version to successfully interface with tensorflow-metal, and the documentation here lists that version 0.5.0 is the highest that is supported. This resulted in downgrading both tensorflow-macos to 2.9.0 and tensorflow-metal to 0.5.0. Once I did this and ran the sample code, the training was successful. | 3 | 11 |
74,886,164 | 2022-12-22 | https://stackoverflow.com/questions/74886164/pycharm-doesnt-recognize-packages-with-remote-interpreter | TL;DR - This is a PyCharm remote interpreter question. Remote libraries are not properly synced, and PyCharm is unable to index properly when using remote interpreter. Everything runs fine. Following is the entire (currently unsuccessful) debug process See update section for a narrowing down of the problem I am using a virtual environment created with python -m venv venv, then pointing to it as I always have using ssh interpreter. The exact same happens with conda as well. After configuring the interpreter, many of the installed packages are marked red by PyCharm, not giving auto complete, and not knowing these packages. Here is the requirements.txt file, which is used with pip install -r requirements.txt --index https:<our_internal_pypi_server> --extra-index-url <some_external_pypi_server> algo_api>=2.5.0 algo_flows>=2.4.0 DateTime==4.7 fastapi==0.88.0 imagesize==1.4.1 numpy==1.23.1 opencv_python==4.6.0.66 overrides==6.1.0 pydantic==1.9.0 pymongo==4.1.1 pytest==7.1.2 pytorch_lightning==1.6.4 PyYAML==6.0 scikit_learn==1.1.3 setuptools==59.5.0 tinytree==0.2.1 #torch==1.10.2+cu113 #torchvision==0.11.3+cu113 tqdm==4.64.0 uv_build_utils==1.4.0 uv_python_utils>=1.11.1 allegroai pymongo[srv] Here is pip freeze absl-py==1.3.0 aggdraw==1.3.15 aiohttp==3.8.3 aiosignal==1.3.1 albumentations==1.3.0 algo-api==2.5.0 algo-flows==2.4.0 allegroai==3.6.1 altair==4.2.0 amqp==5.1.1 anomalib==0.3.2 antlr4-python3-runtime==4.9.3 anyio==3.6.2 astunparse==1.6.3 async-timeout==4.0.2 attrs==20.3.0 bcrypt==4.0.1 bleach==5.0.1 boto3==1.26.34 botocore==1.29.34 cachetools==5.2.0 certifi==2022.12.7 cffi==1.15.1 charset-normalizer==2.1.1 clearml==1.8.3 click==8.1.3 commonmark==0.9.1 contourpy==1.0.6 cpu-cores==0.1.3 cryptography==38.0.4 cycler==0.11.0 DateTime==4.7 decorator==5.1.1 deepmerge==1.1.0 dnspython==2.2.1 docker-pycreds==0.4.0 docopt==0.6.2 docutils==0.19 dotsi==0.0.3 efficientnet==1.0.0 einops==0.6.0 entrypoints==0.4 fastapi==0.88.0 ffmpy==0.3.0 fire==0.5.0 Flask==2.2.2 flatbuffers==1.12 focal-loss==0.0.7 fonttools==4.38.0 frozenlist==1.3.3 fsspec==2022.11.0 furl==2.1.3 future==0.18.2 gast==0.4.0 gitdb==4.0.10 GitPython==3.1.29 google-auth==2.15.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 gradio==3.15.0 grpcio==1.51.1 gunicorn==20.1.0 h11==0.14.0 h5py==3.7.0 httpcore==0.16.3 httpx==0.23.1 humanfriendly==9.2 idna==3.4 image-classifiers==1.0.0 imageio==2.23.0 imagesize==1.4.1 imgaug==0.4.0 importlib-metadata==5.2.0 importlib-resources==5.10.1 imutils==0.5.4 inflection==0.5.1 iniconfig==1.1.1 itsdangerous==2.1.2 jaraco.classes==3.2.3 jeepney==0.8.0 Jinja2==3.1.2 jmespath==1.0.1 joblib==1.2.0 jsonschema==3.2.0 keras==2.9.0 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.2 keyring==23.13.1 kiwisolver==1.4.4 kmeans1d==0.3.1 kornia==0.6.8 libclang==14.0.6 linkify-it-py==1.0.3 luqum==0.11.0 Markdown==3.4.1 markdown-it-py==2.1.0 MarkupSafe==2.1.1 maskrcnn-benchmark==1.1.2+cu113 matplotlib==3.6.2 mdit-py-plugins==0.3.3 mdurl==0.1.2 ml-distillery==1.0.1 more-itertools==9.0.0 multidict==6.0.3 networkx==2.8.8 numpy==1.23.1 oauthlib==3.2.2 omegaconf==2.3.0 opencv-python==4.6.0.66 opencv-python-headless==4.6.0.66 opt-einsum==3.3.0 orderedmultidict==1.0.1 orjson==3.8.3 overrides==6.1.0 packaging==22.0 pandas==1.5.2 paramiko==2.12.0 pathlib==1.0.1 pathlib2==2.3.7.post1 pathtools==0.1.2 pika==1.3.1 Pillow==9.3.0 pkginfo==1.9.2 pluggy==1.0.0 ply==3.11 promise==2.3 protobuf==3.19.6 
psd-tools==1.9.23 psutil==5.9.4 py==1.11.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyclipper==1.3.0.post4 pycocotools==2.0.6 pycparser==2.21 pycpd==2.0.0 pycryptodome==3.16.0 pydantic==1.9.0 pyDeprecate==0.3.2 pydub==0.25.1 pygit2==1.11.1 Pygments==2.13.0 pyhumps==3.8.0 PyJWT==2.4.0 pymongo==4.1.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyrsistent==0.19.2 pytest==7.1.2 python-dateutil==2.8.2 python-multipart==0.0.5 pytorch-lightning==1.6.4 pytz==2022.7 PyWavelets==1.4.1 PyYAML==6.0 qudida==0.0.4 readme-renderer==37.3 requests==2.28.1 requests-oauthlib==1.3.1 requests-toolbelt==0.10.1 rfc3986==1.5.0 rich==12.6.0 rsa==4.9 s3transfer==0.6.0 scikit-image==0.19.3 scikit-learn==1.1.3 scipy==1.9.3 SecretStorage==3.3.3 segmentation-models==1.0.1 sentry-sdk==1.12.1 setproctitle==1.3.2 shapely==2.0.0 shortuuid==1.0.11 six==1.16.0 sklearn==0.0.post1 smmap==5.0.0 sniffio==1.3.0 starlette==0.22.0 tensorboard==2.9.1 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.1 tensorflow==2.9.1 tensorflow-estimator==2.9.0 tensorflow-io-gcs-filesystem==0.29.0 termcolor==2.1.1 threadpoolctl==3.1.0 tifffile==2022.10.10 timm==0.5.4 tinytree==0.2.1 tomli==2.0.1 toolz==0.12.0 torch==1.10.2+cu113 torchmetrics==0.9.0 torchtext==0.11.2 torchvision==0.11.3+cu113 tqdm==4.64.0 twine==4.0.2 typing-utils==0.1.0 typing_extensions==4.4.0 uc-micro-py==1.0.1 urllib3==1.26.13 uv-build-utils==1.4.0 uv-envyaml==2.0.1 uv-python-serving==2.0.1 uv-python-utils==1.12.0 uvicorn==0.20.0 uvrabbit==1.4.1 validators==0.20.0 vine==5.0.0 wandb==0.12.17 webencodings==0.5.1 websockets==10.4 Werkzeug==2.2.2 windshield-grid-localisation==1.0.0.dev5 wrapt==1.14.1 yacs==0.1.8 yarl==1.8.2 zipp==3.11.0 zope.interface==5.5.2 The following minimal test program import pytest import uv_python_utils from importlib_metadata import version as version_query from pkg_resources import parse_version import requests installed_pytest_version = parse_version(version_query('pytest')) installed_uv_python_utils_version = parse_version(version_query('uv_python_utils')) installed_importlib_metadata_version = parse_version(version_query('importlib_metadata')) print(installed_pytest_version) print(installed_uv_python_utils_version) print(installed_importlib_metadata_version) runs with output 7.1.2 1.12.0 5.2.0 but in the IDE, it looks like this: Here is the support ticket for JetBrains (not sure if visible for everyone or not). They were not able to help yet. They offered, and I have done all of the following which did not help: Delete ~/.pycharm_helpers on remote Go to Help | Find Action... and search for "Registry...". In the registry, search for python.use.targets.api and disable it. Reconfigure your project interpreter. 
They looked in "the logs" (not sure which log), coming from Help --> "Collect Logs and Diagnostic Data", and saw the following at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:92) 2022-12-15 11:14:42,932 [ 478638] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\site-packages__1.zip to 1a4 2022-12-15 11:14:42,986 [ 478692] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\.state.json to 1a4 2022-12-15 11:14:43,077 [ 478783] WARN - net.schmizz.sshj.xfer.FileSystemFile - Could not set permissions for C:\Users\noam.s\AppData\Local\JetBrains\PyCharm2022.3\remote_sources\-2115534621\.\python3.8.zip to 1a4 I could not find any permission irregularities though. I also tried to purge everything from Pycharm from both local and remote, and reinstall, and this persists. Uninstall PyCharm, resinstall an older version that works for a colleague (works on the same remote in the same directory for the colleague, so the problem is local) Delete .idea Delete C:\Users\noam.s\AppData\Roaming\JetBrains Obviously I tried invalidate caches & restart. The libraries just don't get downloaded to the External Libraries [See update below], as shown in the Project menu, which doesn't agree with pip freeze In the venv case: In the conda case, the downloaded remote libraries don't even agree with the Pycharm interpreter screen! This really makes it hard for me to work and I am not able to find any workaround. Any ideas? Update - The problem occurs when Pycharm tries to unpack from skeletons.zip. I found a workaround to avoid the "reds": Open the Remote Libraries in explorer Delete that folder. Manually extract the folder from skeletons.zip Reindex pycharm This gave the folowing warnings: ! Attempting to correct the invalid file or folder name ! Renaming C:\Users\noam.s\AppData\Local\Temp\Rar$DRa30340.29792\756417188\uvrabbit\aux.py to C:\Users\noam.s\AppData\Local\Temp\Rar$DRa30340.29792\756417188\uvrabbit\_aux.py but allowed me to start working. This is not a valid solution in my opinion though, as it required manual handling, rather then let the IDE do it's one job. Why does this happen? How to fix it? How to avoid it? | The problem was a file in one of the packages (uvrabbit) which is named aux.py. Naming files aux in Windows is forbidden. Good to know. This made the auto-unpacking crash, and thus the indexing failed. It also made it so git clone fails [which is how I found it eventually]. When manually unpacking, Winrar renamed aux.py to _aux.py, thus bypassing most of the problems. Changing the file name and updating the package to a version without a file named aux solved it. So to answer directly Because one of the packages had a file named aux.py Rename the file [from a linux computer!] Remember not to name files aux in linux, so as to not break Windows. [Or don't use Windows]. | 12 | 4 |
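A hedged, stdlib-only sketch for catching this class of problem early: scan a source tree (from Linux/macOS) for base names Windows refuses to create, per Microsoft's reserved-name list (CON, PRN, AUX, NUL, COM1-COM9, LPT1-LPT9):

```python
import os
import sys

RESERVED = {"CON", "PRN", "AUX", "NUL",
            *(f"COM{i}" for i in range(1, 10)),
            *(f"LPT{i}" for i in range(1, 10))}

def find_windows_reserved_names(root: str):
    """Yield paths whose base name (before any extension) Windows cannot create."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            stem = name.split(".", 1)[0].upper()
            if stem in RESERVED:
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for bad_path in find_windows_reserved_names(root):
        print("reserved name:", bad_path)
```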
74,885,225 | 2022-12-22 | https://stackoverflow.com/questions/74885225/cast-features-to-classlabel | I have a dataset with type dictionary which I converted to Dataset: ds = datasets.Dataset.from_dict(bio_dict) The shape now is: Dataset({ features: ['id', 'text', 'ner_tags', 'input_ids', 'attention_mask', 'label'], num_rows: 8805 }) When I use the train_test_split function of Datasets I receive the following error: train_testvalid = ds.train_test_split(test_size=0.5, shuffle=True, stratify_by_column="label") ValueError: Stratifying by column is only supported for ClassLabel column, and column label is Sequence. How can I change the type to ClassLabel so that stratify works? | You should apply the following class_encode_column function: ds = ds.class_encode_column("label") | 8 | 9 |
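Putting the answer together with the split from the question, here is a minimal end-to-end sketch using a toy single-label column (the same two calls apply to the real dataset once label holds one class per row):

```python
from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["a", "b", "c", "d", "e", "f", "g", "h"],
    "label": [0, 1, 0, 1, 0, 1, 0, 1],
})

# Cast the plain integer column to a ClassLabel feature...
ds = ds.class_encode_column("label")

# ...after which stratified splitting works.
splits = ds.train_test_split(test_size=0.5, shuffle=True, seed=0,
                             stratify_by_column="label")
print(splits)
```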
74,923,308 | 2022-12-26 | https://stackoverflow.com/questions/74923308/how-can-i-keep-poetry-and-commitizen-versions-synced | I have a pyproject.toml with [tool.poetry] name = "my-project" version = "0.1.0" [tool.commitizen] name = "cz_conventional_commits" version = "0.1.0" I add a new feature and commit with commit message feat: add parameter for new feature That's one commit. Then I call commitizen bump Commitizen will recognize a minor version increase, update my pyproject.toml, and commit again with the updated pyproject.toml and a tag 0.2.0. That's a second commit. But now my pyproject.toml is "out of whack" (assuming I want my build version in sync with my git tags). [tool.poetry] name = "my-project" version = "0.1.0" [tool.commitizen] name = "cz_conventional_commits" version = "0.2.0" I'm two commits in, one tagged, and things still aren't quite right. Is there workflow to keep everything aligned? | refer to support-for-pep621 and version_files you can add "pyproject.toml:^version" to pyproject.toml: [tool.commitizen] version_files = [ "pyproject.toml:^version" ] | 7 | 6 |
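If you want a belt-and-braces guard against the two versions drifting apart again (for example, if the version_files hook is ever misconfigured), a small check like this could run in CI; it assumes the pyproject.toml layout shown in the question and Python 3.11+ for tomllib:

```python
import tomllib  # stdlib in Python 3.11+; use the third-party "tomli" package on older versions

with open("pyproject.toml", "rb") as fh:
    pyproject = tomllib.load(fh)

poetry_version = pyproject["tool"]["poetry"]["version"]
cz_version = pyproject["tool"]["commitizen"]["version"]

if poetry_version != cz_version:
    raise SystemExit(f"version drift: poetry={poetry_version} commitizen={cz_version}")
print(f"versions in sync: {poetry_version}")
```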
74,892,481 | 2022-12-22 | https://stackoverflow.com/questions/74892481/how-to-authenticate-a-github-actions-workflow-as-a-github-app-so-it-can-trigger | By default (when using the default secrets.GITHUB_TOKEN) GitHub Actions workflows can't trigger other workflows. So for example if a workflow sends a pull request to a repo that has a CI workflow that normally runs the tests on pull requests, the CI workflow won't run for a pull request that was sent by another workflow. There are probably lots of other GitHub API actions that a workflow authenticating with the default secrets.GITHUB_TOKEN can't take either. How can I authenticate my workflow runs as a GitHub App, so that they can trigger other workfows and take any other actions that I grant the GitHub App permissions for? Why not just use a personal access token? The GitHub docs linked above recommend authenticating workflows using a personal access token (PAT) to allow them to trigger other workflows, but PATs have some downsides: You probably don't want your workflow to authenticate as any human user's account because any pull requests, issues, etc created by the workflow will appear to have been created by that human rather than appearing to be automated. The PAT would also become a very sensitive secret because it would grant access to all repos that the human user's account has access to. You could create a machine user account to own the PAT. But if you grant the machine user access to all repos in your organization then the PAT again becomes a very sensitive secret. You can add the machine user as a collaborator on only the individual repos that you need, but this is inconvenient because you'll always need to add the user to each new repo that you want it to have access to. Classic PATs have only broad-grained permissions. The recently-introduced fine-grained PATs don't work with GitHub CLI (which is the easiest way to send PRs, open issues, etc from workflows) and there's no ETA for when support will be added. GitHub Apps offer the best balance of convenience and security for authenticating workflows: apps can have fine-grained permissions and they can be installed only in individual repos or in all of a user or organization's repos (including automatically installing the app in new repos when they're created). Apps also get a nice page where you can type in some docs (example), the app's avatar and username on PRs, issues, etc link to this page. Apps are also clearly labelled as "bot" on any PRs, issues, etc that they create. This third-party documentation is a good summary of the different ways of authenticating workflows and their pros and cons. I don't want to use a third-party GitHub Action There are guides out there on the internet that will tell you how to authenticate a workflow as an app but they all tell you to use third-party actions (from the marketplace) to do the necessary token exchange with the GitHub API. I don't want to do this because it requires sending my app's private key to a third-party action. I'd rather write (or copy-paste) my own code to do the token exchange. | Links: Working demo repo Workflow that sends PRs Example PR sent by the workflow (notice that the ci.yml workflow was automatically triggered on the PR) Workflow run that sent the PR To create a GitHub App and a workflow that authenticates as that app and sends PRs that can trigger other workflows: Create a GitHub App in your user or organization account. 
Take note of your app's App ID which is shown on your app's settings page, you'll need it later. Grant your app the necessary permissions. To send pull requests an app will probably need: Read and write access for the Contents permission Read and write access for the Pull requests permission Read and write access for the Workflows permission if you intend for it to send pull requests that change workflow files Generate a private key for your app. Store a copy of the private key in a GitHub Actions secret named MY_GITHUB_APP_PRIVATE_KEY. This can either be a repo-level secret in the repo that will contain the workflow that you're going to write, or it can be a user- or organization-level secret in which case the repo that will contain the workflow will need access to the secret. Install your GitHub app in your repo or user or organization account. Take note of the Installation ID. This is the 8-digit number that is at the end of the URL of the installation's page in the settings of the repo, user or organization where you installed the app. You'll need this later. A workflow run that wants to authenticate as your app needs to get a temporary installation access token each time it runs, and use that token to authenticate itself. This is called authenticating as an installation in GitHub's docs and they give a code example in Ruby. The steps are: Generate a JSON Web Token (JWT) with an iat ("issued at" time) 60s in the past, an exp (expiration time) 10m in the future, and your App ID as the iss (issuer), and sign the token using your private key and the RS256 algorithm. Make an HTTP POST request to https://api.github.com/app/installations/:installation_id/access_tokens (replacing :installation_id with your installation ID) with the signed JWT in an Authorization: Bearer <SIGNED_JWT> header. The temporary installation access token will be in the GitHub API's JSON response. Here's a Python script that implements this token exchange using PyJWT and requests (you'll need to install pyjwt with the cryptography dependency: pip install pyjwt[crypto]). from argparse import ArgumentParser from datetime import datetime, timedelta, timezone import jwt import requests def get_token(app_id, private_key, installation_id): payload = { "iat": datetime.now(tz=timezone.utc) - timedelta(seconds=60), "exp": datetime.now(tz=timezone.utc) + timedelta(minutes=10), "iss": app_id, } encoded_jwt = jwt.encode(payload, private_key, algorithm="RS256") response = requests.post( f"https://api.github.com/app/installations/{installation_id}/access_tokens", headers={ "Accept": "application/vnd.github+json", "Authorization": f"Bearer {encoded_jwt}", }, timeout=60, ) return response.json()["token"] def cli(): parser = ArgumentParser() parser.add_argument("--app-id", required=True) parser.add_argument("--private-key", required=True) parser.add_argument("--installation-id", required=True) args = parser.parse_args() token = get_token(args.app_id, args.private_key, args.installation_id) print(token) if __name__ == "__main__": cli() https://github.com/hypothesis/gha-token is a version of the above code as an installable Python package. To install it with pipx and get a token: $ pipx install git+https://github.com/hypothesis/gha-token.git $ gha-token --app-id $APP_ID --installation-id $INSTALLATION_ID --private-key $PRIVATE_KEY ghs_xyz*** You can write a workflow that uses gha-token to get a token and authenticate any API requests or GitHub CLI calls made by the workflow. 
The workflow below will: Install Python 3.11 and gha-token Call gha-token to get an installation access token using the App ID, Installation ID, and MY_GITHUB_APP_PRIVATE_KEY GitHub secret that you created earlier Make an automated change and commit it Use the access token to authenticate git push and gh pr create (GitHub CLI) commands to push a branch and open a PR Copy the workflow below to a file named .github/workflows/send-pr.yml in your repo. Replace <APP_ID> with your App ID and replace <INSTALLATION_ID> with your Installation ID: name: Send a Pull Request on: workflow_dispatch: jobs: my_job: name: Send a Pull Request runs-on: ubuntu-latest steps: - name: Install Python uses: actions/setup-python@v4 with: python-version: "3.11" - name: Install gha-token run: python3.11 -m pip install "git+https://github.com/hypothesis/gha-token.git" - name: Get GitHub token id: github_token run: echo GITHUB_TOKEN=$(gha-token --app-id <APP_ID> --installation-id <INSTALLATION_ID> --private-key "$PRIVATE_KEY") >> $GITHUB_OUTPUT env: PRIVATE_KEY: ${{ secrets.MY_GITHUB_APP_PRIVATE_KEY }} - name: Checkout repo uses: actions/checkout@v3 with: token: ${{ steps.github_token.outputs.GITHUB_TOKEN }} - name: Make some automated changes run: echo "Automated changes" >> README.md - name: Configure git run: | git config --global user.name "send-pr.yml workflow" git config --global user.email "<>" - name: Switch to a branch run: git switch --force-create my-branch main - name: Commit the changes run: git commit README.md -m "Automated commit" - name: Push branch run: git push --force origin my-branch:my-branch - name: Open PR run: gh pr create --fill env: GITHUB_TOKEN: ${{ steps.github_token.outputs.GITHUB_TOKEN }} | 4 | 5 |
74,904,221 | 2022-12-23 | https://stackoverflow.com/questions/74904221/how-to-switch-page-on-button-click-using-streamlit | I have made separate functions for each page, but i want to change page to file upload when I click button on welcome_page.py. Found a switch_page function but I don't think I understand how it works. import streamlit as st from login import check_password from file_upload import file_upload from welcome import welcome_page from PIL import Image from numpy import asarray from streamlit_extras.switch_page_button import switch_page def file_upload(): datafile = st.file_uploader("Upload JPG",type=['jpg']) if datafile is not None: numpydata = asarray(datafile) print(type(numpydata)) st.image(datafile) return True return False if check_password(): if welcome_page(): switch_page('file_upload') fileDetails = file_upload() if fileDetails: print("file uploaded") | Unfortunately, there isn't a built-in way to switch pages programmatically in a Streamlit multipage app. There is an open feature request here if you'd like to upvote it. | 4 | 2 |
74,892,927 | 2022-12-22 | https://stackoverflow.com/questions/74892927/seaborn-lineplot-typeerror-ufunc-isfinite-not-supported-for-the-input-types | I am getting the following error when trying to plot a lineplot with seaborn. TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' Minimal example reproducing error: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd dataset = { "x": [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3], "y": [1.0, 2.3, 4.5, 1.2, 3.4, 5.3, 1.1, 2.4, 3.6, 1.1, 3.3, 5.3], "id": ["a", "a", "a", "b", "b", "b", "a", "a", "a", "b", "b", "b"], "seed": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], } df = pd.DataFrame(data=dataset) print(df.dtypes) g = sns.lineplot(data=df, x="x", y="y", hue="id", errorbar="sd") plt.show() plt.close() I have tried checking the datatypes of all inputs and Dataframe columns, and changing "id" to be an integer type (even though that is not my goal) and the error persists. | Numpy 1.24.0 has a bug that causes an exception within several seaborn functions. The bug has been fixed and numpy has released a new version to address it. The solution is to install numpy 1.24.1 the same way that you installed numpy 1.24.0. | 9 | 12 |
74,894,318 | 2022-12-22 | https://stackoverflow.com/questions/74894318/how-can-i-match-a-pattern-and-then-everything-upto-that-pattern-again-so-matc | Context I have the following paragraph: text = """ ΧΧΧΧΧΧ "Χ‘ - ΧΧΧΧͺ ΧΧΧ Χ‘Χͺ ΧΧ"Χ - ΧΧΧ ΧΧΧ©ΧΧΧ ΧΧͺ"Χ - ΧΧͺΧΧ§ΧΧ Χ ΧΧΧ§Χ Χ ΧΧΧ"Χ¨ - ΧΧΧ©ΧΧΧΧͺ ΧΧ¨ΧΧΧ ΧΧΧͺ Χ"Χ - Χ' ΧΧΧΧ§ΧΧΧ ΧΧͺΧΧ' - ΧΧͺΧΧΧΧ ΧΧΧ "Χ - ΧΧΧ ΧΧΧ¨ ΧΧ’ΧΧ Χ"Χ - Χ' ΧΧΧ§ΧΧ ΧΧΧ"Χ - ΧΧΧΧ¨ ΧΧ ΧΧΧ©ΧΧ΄Χͺ - ΧΧΧ©Χ ΧΧͺΧΧ¨Χ Χ"Χ - ΧΧ¨Χ ΧΧΧ / ΧΧΧ ΧΧΧΧ ΧΧΧͺ"Χ - ΧΧΧΧΧΧ ΧͺΧΧΧΧΧ """ this paragraph is combined with Hebrew words and their acronyms. A word contains quotation marks ("). So for example, some words would be: [ 'ΧΧΧΧΧΧ "Χ‘', 'ΧΧ"Χ', 'ΧΧͺ"Χ' ] Now, I'm able to match all the words with this regex: (\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b) Question But how can I also match all the corresponding acronyms as a separate group? (the acronyms are what's not matched, so not the green in the picture). Example acronyms are: [ 'ΧΧΧΧͺ ΧΧΧ Χ‘Χͺ', 'ΧΧΧ ΧΧΧ©ΧΧΧ', 'ΧΧͺΧΧ§ΧΧ Χ ΧΧΧ§Χ Χ' ] Expected output The expected output should be a dictionary with the Words as keys and the Acronyms as values: { 'ΧΧΧΧΧΧ Χ‘': 'ΧΧΧΧͺ ΧΧΧ Χ‘Χͺ', 'ΧΧ"Χ': 'ΧΧΧ ΧΧΧ©ΧΧΧ', 'ΧΧͺ"Χ': 'ΧΧͺΧΧ§ΧΧ Χ ΧΧΧ§Χ Χ' } My attempt What I tried was to match all the words (as above picture): (\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b) and then match everything until the pattern appears again with .*\1, so the entire regex would be: (\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b).*\1 But as you can see, that doesn't work: How can I match the words and acronyms to compose a dictionary with the words/acronyms? Note When you print the output, it might be printed in Left-to-right order. But it should really be from Right to left. So if you want to print from right to left, see this answer: right-to-left languages in Python | You can do something like this: import re pattern = r'(\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b)\s*-\s*([^"]+)(\s|$)' text = """ΧΧΧΧΧΧ "Χ‘ - ΧΧΧΧͺ ΧΧΧ Χ‘Χͺ ΧΧ"Χ - ΧΧΧ ΧΧΧ©ΧΧΧ ΧΧͺ"Χ - ΧΧͺΧΧ§ΧΧ Χ ΧΧΧ§Χ Χ""" for word, acronym, _ in re.findall(pattern, text): print(word + ' == ' + acronym) which outputs ΧΧΧΧΧΧ "Χ‘ == ΧΧΧΧͺ ΧΧΧ Χ‘Χͺ ΧΧ"Χ == ΧΧΧ ΧΧΧ©ΧΧΧ ΧΧͺ"Χ == ΧΧͺΧΧ§ΧΧ Χ ΧΧΧ§Χ Χ Let's take a closer look how I built the regex pattern. Here's the pattern from your question that matches words: (\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b) This part will match the delimiter between a word and it's acronym: \s*-\s* (spaces then dash then spaces) This part will match anything except for double quote: ([^"]+) Finally, not to match the beginning of the next word let's match space/EOL in the end: (\s|$). Concatenate all the parts above and you'll get my pattern: (\b[\u05D0-\u05EA]*\"\b[\u05D0-\u05EA]*\b)\s*-\s*([^"]+)(\s|$) re.findall() will return a list of tuples, one tuple for one match. Each tuple will contain strings matching the groups (the stuff within parenthesis) in the same order that groups appear in the pattern. So we need group number 0 (word) and group number 1 (acronym) to build our dict. Group number 2 is not needed. | 3 | 1 |
74,871,172 | 2022-12-21 | https://stackoverflow.com/questions/74871172/python-how-to-speed-up-this-function-and-make-it-more-scalable | I have the following function which accepts an indicator matrix of shape (20,000 x 20,000). And I have to run the function 20,000 x 20,000 = 400,000,000 times. Note that the indicator_Matrix has to be in the form of a pandas dataframe when passed as parameter into the function, as my actual problem's dataframe has timeIndex and integer columns but I have simplified this a bit for the sake of understanding the problem. Pandas Implementation indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000])) def operations(indicator_Matrix): s = indicator_Matrix.sum(axis=1) d = indicator_Matrix.div(s,axis=0) res = d[d>0].mean(axis=0) return res.iloc[-1] I tried to improve it by using numpy but it is still taking ages to run. I also tried concurrent.future.ThreadPoolExecutor but it still take a long time to run and not much improvement from list comprehension. Numpy Implementation indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000])) def operations(indicator_Matrix): s = indicator_Matrix.to_numpy().sum(axis=1) d = (indicator_Matrix.to_numpy().T / s).T d = pd.DataFrame(d, index = indicator_Matrix.index, columns = indicator_Matrix.columns) res = d[d>0].mean(axis=0) return res.iloc[-1] output = [operations(indicator_Matrix) for i in range(0,20000**2)] Note that the reason I convert d to a dataframe again is because I need to obtain the column means and retain only the last column mean using .iloc[-1]. d[d>0].mean(axis=0) return column means, i.e. 2478 1.0 0 1.0 Update: I am still stuck in this problem. I wonder if using gpu packages like cudf and CuPy on my local desktop would make any difference. | Assuming the answer of @CrazyChucky is correct, one can implement a faster parallel Numba implementation. The idea is to use plain loops and care about reading data the contiguous way. Reading data contiguously is important so to make the computation cache-friendly/memory-efficient. Here is an implementation: import numba as nb @nb.njit(['(int_[:,:],)', '(int_[:,::1],)', '(int_[::1,:],)'], parallel=True) def compute_fastest(matrix): n, m = matrix.shape sum_by_row = np.zeros(n, matrix.dtype) is_row_major = matrix.strides[0] >= matrix.strides[1] if is_row_major: for i in nb.prange(n): s = 0 for j in range(m): s += matrix[i, j] sum_by_row[i] = s else: for chunk_id in nb.prange(0, (n+63)//64): start = chunk_id * 64 end = min(start+64, n) for j in range(m): for i2 in range(start, end): sum_by_row[i2] += matrix[i2, j] count = 0 s = 0.0 for i in range(n): value = matrix[i, -1] / sum_by_row[i] if value > 0: s += value count += 1 return s / count # output = [compute_fastest(indicator_Matrix.to_numpy()) for i in range(0,20000**2)] Pandas dataframes can contain both row-major and column-major arrays. Regarding the memory layout, it is better to iterate over the rows or the column. This is why there is two implementations of the sum based on is_row_major. There is also 3 Numba signatures: one for row-major contiguous arrays, one for columns-major contiguous arrays and one for non-contiguous arrays. Numba will compile the 3 function variants and automatically pick the best one at runtime. The JIT-compiler of Numba can generate a faster implementation (eg. using SIMD instructions) when the input 2D array is known to be contiguous. Experimental Results This computation is about 14.5 times faster than operations_simpler on my i5-9600KF processor (6 cores). 
It still takes a lot of time but the computation is memory-bound and nearly optimal on my machine: it is bounded by the main-memory which has to be read: On a 2000x2000 dataframe with 32-bit integers: - operations: 86.310 ms/iter - operations_simpler: 5.450 ms/iter - compute_fastest: 0.375 ms/iter - optimal: 0.345-0.370 ms/iter If you want to get a faster code, then you need to use more compact data types. For example, a uint8 data type is large enough to contain the values 0 and 1, and it is 4 times smaller in memory on Windows. This means the code can be up to 4 time faster in this case. The smaller the data type, the faster the program. One could even try to compact 8 columns in 1 using bit tweaks though it is generally significantly slower using Numba unless you have a lot of available cores. Notes & Discussion The above code works only with uniformly-typed columns. If this is not the case, you can split the dataframe in multiple groups and convert each column group to Numpy array so to then call the Numba function (modified to support groups). Note the @CrazyChucky code has a similar issue: a dataframe column with mixed datatypes converted to a Numpy array results in an object-based Numpy array which is very inefficient (especially a row-major Numpy array). Note that using a GPU will not make the computation faster unless the input dataframe is already stored in the GPU memory. Indeed, CPU-GPU data transfers are more expensive than just reading the RAM (due to the interconnect overhead which is generally a quite slow PCI one). Note that the GPU memory is quite limited compared to the CPU. If the target dataframe(s) do not need to be transferred, then using cudf is relatively simple and should give a small speed up. For a faster code, one need to implement a fast CUDA code but this is clearly far from being easy for dataframes with mixed dataype. In the end, the resulting speed up should be main_ram_throughput / gpu_ram_througput assuming there is no data transfer. Note that this factor is generally 5-12. Note also that CUDA and cudf require a Nvidia GPU. Finally, reducing the input data size or just the amount of computation is certainly the best solution (as indicated in the comment by @zvone) since it is very computationally intensive. | 4 | 8 |
74,922,884 | 2022-12-26 | https://stackoverflow.com/questions/74922884/dynamically-change-tray-icon-with-pystray | I want the tray icon to change according to the value of p_out. Specifically depending on its value, I want it to get a different color. Here's the code import pystray import ping3 while True: p_out = ping3.ping("google.com", unit="ms") if p_out == 0: img = white elif p_out >= 999: img = red else: print(f'\n{p_out:4.0f}', end='') if p_out <= 50: img = green elif p_out <= 60: img = yellow elif p_out < 100: img = orange elif p_out >= 100: img = red icon = pystray.Icon(" ", img) icon.run() I tried to 'reset' the pystray icon on every loop but it didn't work. The icon only changes when I stop and rerun the script. | As correctly stated in the comments, providing code that cannot run does not help the community members to assist you. The code references variables named white, red, green, yellow and orange, but these variables have not been defined or assigned values. Despite all this, the dynamic update of the tray icon is certainly something that can be useful to others. Therefore, below you may find your code with the necessary corrections applied. import ping3 import pystray import threading # Import the threading module for creating threads from PIL import Image # Import the Image module from PIL for creating images def update_icon(): while True: ping_output = ping3.ping("google.com", unit="ms") print(f'\n{ping_output:4.0f}', end='') if ping_output == 0: img = Image.new("RGB", (32, 32), (255, 255, 255)) # 32x32px, white elif 0 < ping_output <= 50: img = Image.new("RGB", (32, 32), (0, 255, 0)) # 32x32px, green elif 50 < ping_output <= 60: img = Image.new("RGB", (32, 32), (255, 255, 0)) # 32x32px, yellow elif 60 < ping_output < 100: img = Image.new("RGB", (32, 32), (255, 165, 0)) # 32x32px, orange elif ping_output >= 100: img = Image.new("RGB", (32, 32), (255, 0, 0)) # 32x32px, red icon.icon = img if __name__ == "__main__": icon = pystray.Icon("ping") icon.icon = Image.new("RGB", (32, 32), (255, 255, 255)) # 32x32px, white # Create a new thread to run the update_icon() function thread = threading.Thread(target=update_icon) # Start the thread thread.start() icon.run() NOTES: In order to use the Image module, you must first install Pillow, with a package manager like pip, using a command like this: pip install pillow. The purpose of creating a new thread (to run the update_icon() function) is to allow the tray icon to continue updating in the background, without blocking the main thread of execution. While 32x32px is a common size for icons, it is not necessarily the only size that can be used. Using a while True loop to run a continuous task can consume a large amount of system resources, such as CPU and memory, which can impact the overall performance of your program. However, I will not suggest anything about it as it is not relevant to the present question. | 5 | 9 |
74,893,922 | 2022-12-22 | https://stackoverflow.com/questions/74893922/sublime-text-4-folding-python-functions-with-a-line-break | I'm running Sublime Text Build 4143. Given a function with parameters that spill over the 80 character limit like so: def func(parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6): """ """ print("hello world") return ST will show a PEP8 E501: line too long warning and highlight the line (which is fine): But I can fold this function appropriately: If I modify it to avoid the PEP8 warning: I can no longer fold it: This changed in the last update I think, because I used to be able to fold these functions without issues. How can I get around ths? | This is not a "fair" answer, but may be helpful. If you use black-like line folding (or anything similar, but with one important property: the closing parenthesis should be on its own line and be indented to the same level as def, see below), then folding works even better: def func( parameter_1, parameter_2, parameter_3, parameter_4, parameter_5, parameter_6, ): """ """ print("hello world") return Now you have three arrows: the first folds function arguments, the second folds all function body and the third folds only the docstring. You can wrap parameters in any way you like, if closing parenthesis remains in place. I personally prefer this style, and it can be auto-formatted with black. Your solution is PEP8-compatible, but ST doesn't like it. It folds to the next line with the same level of indentation (so I'm very surprised that it worked before). This line-wrapping style is especially cute if you use type hinting: every argument appears on its own line together with type, and return type is written on the last line - still separate). This problem also arises in languages with goto construct and labels, which can be indented to the same level as function body - wrapping dies as well. | 3 | 1 |
74,925,822 | 2022-12-27 | https://stackoverflow.com/questions/74925822/how-to-shorten-the-command-when-filtering-data-frame-in-python | In Python, a common way to filter a data frame is like this df.loc[(df['field 1'] == 'a') & (df['field 2'] == 'b'), 'field 3']... When the df name is long, or when there are more filter conditions (only two in the above), the above line naturally becomes long. Moreover, it is a bit tedious to have to type out the df name for each condition. In R or SQL, we don't really need to do that. So, my question is whether there is a way to shorten the above line in Python. For example, is there a way that I don't have to write down the df name in each condition? Thanks. | You can use the query method as follows, which has an SQL-like syntax: df.query('field_1 == "a" and field_2 == "b" and field_3 > 3.2 ') Here are the docs for it: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html | 3 | 3 |
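To make the comparison concrete, here is a small sketch of my own (the column names, data, and the threshold variable are made up for illustration) showing the .loc spelling next to the .query spelling; note that query can reference local variables with the @ prefix:

```python
import pandas as pd

df = pd.DataFrame({"field_1": ["a", "a", "b"],
                   "field_2": ["b", "c", "b"],
                   "field_3": [4.0, 4.5, 5.1]})

threshold = 3.2

# .loc repeats the frame name once per condition
long_way = df.loc[(df["field_1"] == "a") & (df["field_2"] == "b") & (df["field_3"] > threshold), "field_3"]

# .query refers to columns by bare name and to local variables with @
short_way = df.query("field_1 == 'a' and field_2 == 'b' and field_3 > @threshold")["field_3"]

print(long_way.equals(short_way))  # True
```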
74,910,247 | 2022-12-24 | https://stackoverflow.com/questions/74910247/python-multiprocessing-sharing-variables-between-processes | I'm trying to write a multiprocessing program which shares one or more variables (values or matrix) between the child processes. In my current test program I'm trying to spawn two processes, each sharing the num variable. Each adds 1 to the variable and then prints. However, whenever I try to run the program it tells me a TypeError has occurred, saying 'Synchronized' object is not iterable. How can I get this to work? The code is shown below: import multiprocessing import os import time def info(title): print(title) print('module name:', __name__) print('parent process:', os.getppid()) print('process id:', os.getpid()) def f(num): while True: time.sleep(1.5) num = num + 1 print("process 1: %i \n" % num) if num > 50: break def f2(num): while True: num= num + 1 print("process 2: %i \n" % num) time.sleep(1.9) if num > 50: break if __name__ == '__main__': data = multiprocessing.Value('i',0) p = multiprocessing.Process(target=f, args=(data)) j = multiprocessing.Process(target=f2, args=(data)) p.start() j.start() "Synchronized' object is not iterable results from running the program when it tries to create the first process: p = multiprocessing.Process(target=f, args=(data)) I'm not sure whether using a queue would work as I'd eventually like to have a program which has a looping process and another which occasionally grabs the most recent result returned from the looping process. | You have several issues with your code: When you create a multiprocessing.Value instance (num in your case), you must use the value attribute of that instance to read or write actual value of that shared variable. Incrementing such an instance, even if you replace num.value = num.value + 1 with num.value += 1 as I have done, is not an atomic operation. It is equivalent to temp = num.value; temp += 1; num.value = temp, which means that this incrementing must take place under control of the lock provided for such synchronized instances to ensure that the value is truly incremeneted. Otherwise, two processes may read the same value and increment it to the same final value and you will have only incremented the value once instead of twice. The args parameter to the multiprocessing.Process initializer should be an iterable where each element of the iterable becomes an argument to your worker functions f and f2. When you specify args=(data), the parentheses has no effect and is equivalent to simply specifying args=data and data is not an iterable. You needed to have args=(data,). Note the comma (',') such that (data,) is now a tuple (i.e. an iterable) containing the single element data instead of a parenthesized expression. import multiprocessing import os import time def info(title): print(title) print('module name:', __name__) print('parent process:', os.getppid()) print('process id:', os.getpid()) def f(num): while True: time.sleep(1.5) with num.get_lock(): num.value += 1 print("process 1: %i \n" % num.value) if num.value > 50: break def f2(num): while True: with num.get_lock(): num.value += 1 print("process 2: %i \n" % num.value) time.sleep(1.9) if num.value > 50: break if __name__ == '__main__': data = multiprocessing.Value('i',0) p = multiprocessing.Process(target=f, args=(data,)) j = multiprocessing.Process(target=f2, args=(data,)) p.start() j.start() p.join() j.join() print(data.value) Prints: ... 
process 2: 47 process 1: 48 process 2: 49 process 1: 50 process 2: 51 process 1: 52 52 But note that in the above code, each process acquires and hold the lock for a very long time shutting out any other process from acquiring the lock. We can minimize how long the lock is held in the following way (and so the program will complete sooner): import multiprocessing import os import time def info(title): print(title) print('module name:', __name__) print('parent process:', os.getppid()) print('process id:', os.getpid()) def f(num): while True: time.sleep(1.5) with num.get_lock(): num.value += 1 saved_value = num.value print("process 1: %i \n" % saved_value) if saved_value > 50: break def f2(num): while True: with num.get_lock(): num.value += 1 saved_value = num.value print("process 2: %i \n" % saved_value) time.sleep(1.9) if saved_value > 50: break if __name__ == '__main__': data = multiprocessing.Value('i',0) p = multiprocessing.Process(target=f, args=(data,)) j = multiprocessing.Process(target=f2, args=(data,)) p.start() j.start() p.join() j.join() | 4 | 5 |
74,930,893 | 2022-12-27 | https://stackoverflow.com/questions/74930893/how-to-modify-in-channel-of-the-firstly-layer-cnn-in-the-timm-model | everyone. I hope to train a CV model in the timm library on my dataset. Due to the shape of the input data is (batch_size, 15, 224, 224), I need to modify the "in_channel" of the first CNN layer of different CV models. I try different methods but still fail. Could you help me solve this problem? Thanks! import torch import torch.nn as nn import timm class FrequencyModel(nn.Module): def __init__( self, in_channels = 6, output = 9, model_name = 'resnet200d', pretrained = False ): super(FrequencyModel, self).__init__() self.in_channels = in_channels self.output = output self.model_name = model_name self.pretrained = pretrained self.m = timm.create_model(self.model_name, pretrained=self.pretrained, num_classes=output) for layer in self.m.modules(): if(isinstance(layer,nn.Conv2d)): layer.in_channels = self.in_channels break def forward(self,x): out=self.m(x) return out if __name__ == "__main__": x = torch.randn((8, 15, 224, 224)) model=FrequencyModel( in_channels = 15, output = 9, model_name = 'resnet200d', pretrained = False ) print(model) print(model(x).shape) The error is: RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[8, 15, 224, 224] to have 3 channels, but got 15 channels instead I hope I can test different CV model easily but not adjust it one by one. | Do you want to change the first conv2d layer of resnet200d? Let me point out a few things that went wrong here. Changing only the value of in_channels does not change the shape of the weight. resnet200d is composed of sequential layers and has a conv2d layer in them. So, you cannot access conv2d with a for statement like the code above. Use the apply method for a recursive approach. If you want to actually change the layer, use ._modules['module name']. If you access a layer with m.modules(), the layer does not change because deepcopy occurs. Thus, get both name and module using model.named_modules() So you probably want to change it like this: import torch import torch.nn as nn import timm x = torch.randn((8, 15, 224, 224)) m = timm.create_model('resnet200d', pretrained=False, num_classes=9) m._modules['conv1']._modules['0'] = nn.Conv2d(15, 32, 3, stride=2, padding=1, bias=False) print(m) print(model(x).shape) More generally, you can change like this: change_first_layer function that changes the in_channels of the first conv2d layer to 15 for all models. import torch import torch.nn as nn import timm def change_first_layer(m): for name, child in m.named_children(): if isinstance(child, nn.Conv2d): kwargs = { 'out_channels': child.out_channels, 'kernel_size': child.kernel_size, 'stride': child.stride, 'padding': child.padding, 'bias': False if child.bias == None else True } m._modules[name] = nn.Conv2d(15, **kwargs) return True else: if(change_first_layer(child)): return True return False x = torch.randn((8, 15, 224, 224)) m = timm.create_model('resnet200d', pretrained=False, num_classes=9) change_first_layer(m) print(m) print(m(x).shape) | 4 | 1 |
74,930,922 | 2022-12-27 | https://stackoverflow.com/questions/74930922/how-to-load-a-custom-julia-package-in-python-using-pythons-juliacall | I already know How to import Julia packages into Python. However, now I have created my own simple Julia package with the following command: using Pkg;Pkg.generate("MyPack");Pkg.activate("MyPack");Pkg.add("StatsBase") where the file MyPack/src/MyPack.jl has the following contents: module MyPack using StatsBase function f1(x, y) return 3x + y end g(x) = StatsBase.std(x) export f1 end Now I would like to load this Julia package in Python via juliacall and call f1 and g functions. I have already run pip3 install juliacall from command line. How do I call the above functions from Python? | You need to run the following code to load the MyPack package from Python via juliacall from juliacall import Main as jl from juliacall import Pkg as jlPkg jlPkg.activate("MyPack") # relative path to the folder where `MyPack/Project.toml` should be used here jl.seval("using MyPack") Now you can use the function (note that calls to non exported functions require package name): >>> jl.f1(4,7) 19 >>> jl.f1([4,5,6],[7,8,9]).to_numpy() array([19, 23, 27], dtype=object) >>> jl.MyPack.g(numpy.arange(0,3)) 1.0 Note another option for calling Julia from Python that seems to be more difficult to configure as of today is the pip install julia package which is described here: I have a high-performant function written in Julia, how can I use it from Python? | 11 | 10 |
74,934,472 | 2022-12-27 | https://stackoverflow.com/questions/74934472/use-on-bad-lines-to-write-invalid-rows-from-pandas-read-csv-to-a-file | I have a CSV file in which I am using Python to parse. I found that some rows in the file have different number of columns. 001;Snow,Jon;19801201 002;Crom,Jake;19920103 003; ;Wise,Frank;19880303 <-- Invalid row 004;Wiseau,Tommy;4324;1323;2323 <-- Invalid row I would like to write these invalid rows into a separate text file. I used this line of code to read from the file. df = pd.read_csv('names.csv', header=None,sep=';') One solution I found here was to skip the problematic rows using the following code: data = pd.read_csv('file1.csv', on_bad_lines='skip') I could change from 'skip' to 'warn', which will give the row number of the problematic row and skip the row. But this will return warning messages and not the row itself. | Since pandas 1.4.0 allows callable for on_bad_lines parameter - that allows you to apply a more sophisticated handling of bad lines. New in version 1.4.0: callable, function with signature (bad_line: list[str]) -> list[str] | None that will process a single bad line. bad_line is a list of strings split by the sep. If the function returns None, the bad line will be ignored. If the function returns a new list of strings with more elements than expected, a ParserWarning will be emitted while dropping extra elements. Only supported when engine="python" So you can pass a custom function which will write an encountered bad line into specific file and return None (to skip that line on dataframe generation). from functools import partial def write_bad_line(line, fp, sep=','): fp.write(sep.join(line) + '\n') return None # return None to skip the line while processing bad_lines_fp = open('bad_lines.csv', 'a') df = pd.read_csv('test.csv', header=None, sep=';', engine='python', on_bad_lines=partial(write_bad_line, sep=';', fp=bad_lines_fp)) bad_lines_fp.close() print(df) The output of the dataframe: 0 1 2 0 1 Snow,Jon 19801201 1 2 Crom,Jake 19920103 The contents of bad_lines.csv (via cat command): $ cat bad_lines.csv 003; ;Wise,Frank;19880303 004;Wiseau,Tommy;4324;1323;2323 | 3 | 10 |
74,922,314 | 2022-12-26 | https://stackoverflow.com/questions/74922314/yield-from-vs-yield-in-for-loop | My understanding of yield from is that it is similar to yielding every item from an iterable. Yet, I observe the different behavior in the following example. I have Class1 class Class1: def __init__(self, gen): self.gen = gen def __iter__(self): for el in self.gen: yield el and Class2 that different only in replacing yield in for loop with yield from class Class2: def __init__(self, gen): self.gen = gen def __iter__(self): yield from self.gen The code below reads the first element from an instance of a given class and then reads the rest in a for loop: a = Class1((i for i in range(3))) print(next(iter(a))) for el in iter(a): print(el) This produces different outputs for Class1 and Class2. For Class1 the output is 0 1 2 and for Class2 the output is 0 Live demo What is the mechanism behind yield from that produces different behavior? | What Happened? When you use next(iter(instance_of_Class2)), iter() calls .close() on the inner generator when it (the iterator, not the generator!) goes out of scope (and is deleted), while with Class1, iter() only closes its instance >>> g = (i for i in range(3)) >>> b = Class2(g) >>> i = iter(b) # hold iterator open >>> next(i) 0 >>> next(i) 1 >>> del(i) # closes g >>> next(iter(b)) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration This behavior is described in PEP 342 in two parts the new .close() method (well, new to Python 2.5) from the Specification Summary Add support to ensure that close() is called when a generator iterator is garbage-collected. What happens is a little clearer (if perhaps surprising) when multiple generator delegations occur; only the generator being delegated is closed when its wrapping iter is deleted >>> g1 = (a for a in range(10)) >>> g2 = (a for a in range(10, 20)) >>> def test3(): ... yield from g1 ... yield from g2 ... >>> next(test3()) 0 >>> next(test3()) 10 >>> next(test3()) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration Fixing Class2 What options are there to make Class2 behave more the way you expect? Notably, other strategies, though they don't have the visually pleasing sugar of yield from or some of its potential benefits gives you a way to interact with the values, which seems like a primary benefit avoid creating a structure like this at all ("just don't do that!") if you don't interact with the generator and don't intend to keep a reference to the iterator, why bother wrapping it at all? (see above comment about interacting) create the iterator yourself internally (this may be what you expected) >>> class Class3: ... def __init__(self, gen): ... self.iterator = iter(gen) ... ... def __iter__(self): ... return self.iterator ... >>> c = Class3((i for i in range(3))) >>> next(iter(c)) 0 >>> next(iter(c)) 1 make the whole class a "proper" Generator while testing this, it plausibly highlights some iter() inconsistency - see comments below (ie. why isn't e closed?) also an opportunity to pass multiple generators with itertools.chain.from_iterable >>> class Class5(collections.abc.Generator): ... def __init__(self, gen): ... self.gen = gen ... def send(self, value): ... return next(self.gen) ... def throw(self, value): ... raise StopIteration ... def close(self): # optional, but more complete ... self.gen.close() ... >>> e = Class5((i for i in range(10))) >>> next(e) # NOTE iter is not necessary! 
0 >>> next(e) 1 >>> next(iter(e)) # but still works 2 >>> next(iter(e)) # doesn't close e?? (should it?) 3 >>> e.close() >>> next(e) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.9/_collections_abc.py", line 330, in __next__ return self.send(None) File "<stdin>", line 5, in send StopIteration Hunting the Mystery A better clue is that if you directly try again, next(iter(instance)) raises StopIteration, indicating the generator is permanently closed (either through exhaustion or .close()), and why iterating over it with a for loop yields no more values >>> a = Class1((i for i in range(3))) >>> next(iter(a)) 0 >>> next(iter(a)) 1 >>> b = Class2((i for i in range(3))) >>> next(iter(b)) 0 >>> next(iter(b)) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration However, if we name the iterator, it works as expected >>> b = Class2((i for i in range(3))) >>> i = iter(b) >>> next(i) 0 >>> next(i) 1 >>> j = iter(b) >>> next(j) 2 >>> next(i) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration To me, this suggests that when the iterator doesn't have a name, it calls .close() when it goes out of scope >>> def gen_test(iterable): ... yield from iterable ... >>> g = gen_test((i for i in range(3))) >>> next(iter(g)) 0 >>> g.close() >>> next(iter(g)) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration Disassembling the result, we find the internals are a little different >>> a = Class1((i for i in range(3))) >>> dis.dis(a.__iter__) 6 0 LOAD_FAST 0 (self) 2 LOAD_ATTR 0 (gen) 4 GET_ITER >> 6 FOR_ITER 10 (to 18) 8 STORE_FAST 1 (el) 7 10 LOAD_FAST 1 (el) 12 YIELD_VALUE 14 POP_TOP 16 JUMP_ABSOLUTE 6 >> 18 LOAD_CONST 0 (None) 20 RETURN_VALUE >>> b = Class2((i for i in range(3))) >>> dis.dis(b.__iter__) 6 0 LOAD_FAST 0 (self) 2 LOAD_ATTR 0 (gen) 4 GET_YIELD_FROM_ITER 6 LOAD_CONST 0 (None) 8 10 POP_TOP 12 LOAD_CONST 0 (None) 14 RETURN_VALUE Notably, the yield from version has GET_YIELD_FROM_ITER If TOS is a generator iterator or coroutine object it is left as is. Otherwise, implements TOS = iter(TOS). (subtly, YIELD_FROM keyword appears to be removed in 3.11) So if the given iterable (to the class) is a generator iterator, it'll be handed off directly, giving the result we (might) expect Extras Passing an iterator which isn't a generator (iter() creates a new iterator each time in both cases) >>> a = Class1([i for i in range(3)]) >>> next(iter(a)) 0 >>> next(iter(a)) 0 >>> b = Class2([i for i in range(3)]) >>> next(iter(b)) 0 >>> next(iter(b)) 0 Expressly closing Class1's internal generator >>> g = (i for i in range(3)) >>> a = Class1(g) >>> next(iter(a)) 0 >>> next(iter(a)) 1 >>> a.gen.close() >>> next(iter(a)) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration generator is only closed by iter when deleted if instance is popped >>> g = (i for i in range(10)) >>> b = Class2(g) >>> i = iter(b) >>> next(i) 0 >>> j = iter(b) >>> del(j) # next() not called on j >>> next(i) 1 >>> j = iter(b) >>> next(j) 2 >>> del(j) # generator closed >>> next(i) # now fails, despite range(10) above Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration | 32 | 27 |
74,933,956 | 2022-12-27 | https://stackoverflow.com/questions/74933956/laplace-correction-with-conditions-for-smoothing | I have data (user_data) that represents the number of examples in each class (here we have 5 classes). For example, in the first row, 16 represents 16 samples in class 1 for user 1, 15 means there are 15 samples belonging to class 2 for user 1, etc. user_data = np.array([ [16, 15, 14, 10, 0], [0, 13, 6, 15, 21], [12, 29, 1, 12, 1], [0, 0, 0, 0, 55]]) I used the following method to smooth all these frequencies to avoid issues with extreme values (0 or 1) by using Laplace smoothing where k=2. Output: array([[0.29824561, 0.28070175, 0.26315789, 0.19298246, 0.01754386], [0.01754386, 0.24561404, 0.12280702, 0.28070175, 0.38596491], [0.22807018, 0.52631579, 0.03508772, 0.22807018, 0.03508772], [0.01754386, 0.01754386, 0.01754386, 0.01754386, 0.98245614]]) But I want to smooth only the extreme values (0 or 1) in this data. | I have noticed that your smoothing approach will cause the probabilities to sum to more than 1, so you will need to clip or normalize the values later on: probs = user_data/55 alpha = (user_data+1)/(55+2) extreme_values_mask = (probs == 0) | (probs == 1) probs[extreme_values_mask] = alpha[extreme_values_mask] Result: array([[0.29090909, 0.27272727, 0.25454545, 0.18181818, 0.01754386], [0.01754386, 0.23636364, 0.10909091, 0.27272727, 0.38181818], [0.21818182, 0.52727273, 0.01818182, 0.21818182, 0.01818182], [0.01754386, 0.01754386, 0.01754386, 0.01754386, 0.98245614]]) Extension: # Scale by sum. probs /= probs.sum(1).reshape((-1, 1)) print(probs) print(probs.sum(1)) [[0.28589342 0.26802508 0.25015674 0.17868339 0.01724138] [0.01724138 0.2322884 0.10721003 0.26802508 0.37523511] [0.21818182 0.52727273 0.01818182 0.21818182 0.01818182] [0.01666667 0.01666667 0.01666667 0.01666667 0.93333333]] [1. 1. 1. 1.] | 3 | 1 |
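A slightly more compact variant of the same conditional replacement (my own addition, equivalent to the boolean-mask assignment in the answer above) uses np.where:

```python
import numpy as np

user_data = np.array([[16, 15, 14, 10, 0],
                      [0, 13, 6, 15, 21],
                      [12, 29, 1, 12, 1],
                      [0, 0, 0, 0, 55]])

probs = user_data / 55
alpha = (user_data + 1) / (55 + 2)

# Replace only the extreme entries (probability 0 or 1) with their smoothed values.
smoothed = np.where((probs == 0) | (probs == 1), alpha, probs)
```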
74,933,637 | 2022-12-27 | https://stackoverflow.com/questions/74933637/order-of-operations-python | The order of operations in my code seems off... numbers=[7, 6, 4] result = 1 for num in numbers: result *= num - 3 print(result) In this code, I would expect the following to occur... result=1 result = 1 * 7 - 3 = 7 - 3 = 4 result = 4 * 6 - 3 = 24 - 3 = 21 result = 21 * 4 - 3 = 84 - 3 = 81 HOWEVER, running the program outputs result = 1 result = 1 * 7 - 3 = 1 * 4 = 4 result = 4 * 6 - 3 = 4 * 3 = 12 result = 12 * 4 - 3 = 12 * 1 = 12 Why with the *= operator is the order of operations altered? My understanding is that it does not have any special properties; it merely saves space, so instead of writing result = result * num - 3 we get result *= num - 3, and yet for some reason (result = result * num - 3) != (result *= num - 3) | Why with the *= operator is the order of operations altered? The *= is not an operator, it's a delimiter. Check the '2. Lexical analysis' chapter of 'The Python Language Reference': The following tokens are operators: + - * ** / // % @ << >> & | ^ ~ := < > <= >= == != The following tokens serve as delimiters in the grammar: ( ) [ ] { } , : . ; @ = -> += -= *= /= //= %= @= &= |= ^= >>= <<= **= The period can also occur in floating-point and imaginary literals. A sequence of three periods has a special meaning as an ellipsis literal. The second half of the list, the augmented assignment operators, serve lexically as delimiters, but also perform an operation. Moreover, in your code you have an augmented assignment statement, as per '7.2.1. Augmented assignment statements': Augmented assignment is the combination, in a single statement, of a binary operation and an assignment statement: [...] An augmented assignment evaluates the target (which, unlike normal assignment statements, cannot be an unpacking) and the expression list, performs the binary operation specific to the type of assignment on the two operands, and assigns the result to the original target. The target is only evaluated once. An augmented assignment expression like x += 1 can be rewritten as x = x + 1 to achieve a similar, but not exactly equal effect. In the augmented version, x is only evaluated once. Also, when possible, the actual operation is performed in-place, meaning that rather than creating a new object and assigning that to the target, the old object is modified instead. Unlike normal assignments, augmented assignments evaluate the left-hand side before evaluating the right-hand side. For example, a[i] += f(x) first looks-up a[i], then it evaluates f(x) and performs the addition, and lastly, it writes the result back to a[i]. The '6.16. Evaluation order' chapter says: Python evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side. You can also check the '6.17. Operator precedence' chapter, but it's not about statements. | 3 | 1 |
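To make the evaluation order concrete, here is a short demonstration I added (it is not part of the original answer): the whole right-hand side num - 3 is evaluated before the multiplication, so the augmented statement behaves like the explicitly parenthesized form:

```python
numbers = [7, 6, 4]

result = 1
for num in numbers:
    result *= num - 3            # the right-hand side (num - 3) is evaluated first
    print(result)                # prints 4, 12, 12

result = 1
for num in numbers:
    result = result * (num - 3)  # explicit equivalent of the augmented statement
    print(result)                # also prints 4, 12, 12
```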
74,930,714 | 2022-12-27 | https://stackoverflow.com/questions/74930714/why-async-function-binds-name-incorrectly-from-the-outer-scope | Here is code that generates async functions by iterating in a for loop. I expected these closures to respect the names from the outer scope. import asyncio coroutines = [] for param in (1, 3, 5, 7, 9): async def coro(): print(param ** 2) # await some_other_function() coroutines.append(coro) # The below code is not async, I made a poor mistake. # Michael Szczesny's answer fixes this. for coro in coroutines: asyncio.run(coro()) While I was expecting the results to be 1, 9, 25, 49, 81 after running, here is the actual output: 81 81 81 81 81 Unexpectedly here, the name param has taken the same value each time. Can you explain why this happens and how I can achieve the task of creating lots of async functions in a for loop with names binding correctly? | The included code does not run asynchronously, nor does it use coro as a closure. Functions are evaluated at runtime in Python. An asynchronous solution would look like this import asyncio def create_task(param): async def coro(): await asyncio.sleep(1) # make async execution noticeable print(param ** 2) return coro # return closure capturing param async def main(): await asyncio.gather(*[create_task(param)() for param in (1,3,5,7,9)]) asyncio.run(main()) Output after 1 second 1 9 25 49 81 | 3 | 2 |
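Another common workaround (my addition, not taken from the accepted answer) is to freeze the loop variable as a default argument, since default values are evaluated when the function is defined rather than when it is called:

```python
import asyncio

coroutines = []
for param in (1, 3, 5, 7, 9):
    # param=param captures the current loop value at definition time,
    # so each coroutine keeps its own copy instead of the final value 9.
    async def coro(param=param):
        print(param ** 2)
    coroutines.append(coro)

async def main():
    await asyncio.gather(*(coro() for coro in coroutines))

asyncio.run(main())  # prints 1, 9, 25, 49, 81
```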
74,899,215 | 2022-12-23 | https://stackoverflow.com/questions/74899215/how-to-increase-the-number-of-vertices | I need a parametric form for a matplotlib.path.Path. So I used the .vertices attribute, and it works fine except that the number of points given is too low for the use I want. Here is a code to illustrate : import numpy as np from matplotlib import pyplot as plt import matplotlib.patches as mpat fig, ax = plt.subplots() ax.set(xlim=(-6, 6), ylim=(-6, 6)) # generate a circular path circle = mpat.Arc((0, 0), 10, 10, theta1=20, theta2=220, color='green') path = circle.get_transform().transform_path(circle.get_path()).cleaned().vertices[:-3] # get_path is not enough because of the transformation, so we apply to the path the same transformation as the circle has got from the identity circle ax.add_patch(circle) # plot path vertices plt.scatter(x=path[:, 0], y=path[:, 1], color='red', s=2) shape = len(path) plt.show() How to increase the number of points (red) to fetch better the path (green)? Or, how can I increase the len of path? Thanks in advance! | It's not ideal, but you can just loop over many shorter arcs, e.g.,: import numpy as np from matplotlib import pyplot as plt import matplotlib.patches as mpat fig, ax = plt.subplots() ax.set(xlim=(-6, 6), ylim=(-6, 6)) # generate a circular path dr = 30 # change this to control the number of points path = np.empty((0, 2)) for i in range(20, 221, dr): circle = mpat.Arc((0, 0), 10, 10, theta1=i, theta2=(i + dr), color='green') tmppath = circle.get_transform().transform_path(circle.get_path()).cleaned().vertices[:-1] path = np.vstack((path, tmppath[:, 0:2])) ax.add_patch(circle) # plot path vertices plt.scatter(x=path[:, 0], y=path[:, 1], color='red', s=2) plt.show() Update Another option is to hack the Arc class, so that during initialisation, the Path.arc takes in the n keyword. E.g., from matplotlib.patches import Path, Arc class NewArc(Arc): def __init__(self, xy, width, height, angle=0.0, theta1=0.0, theta2=360.0, n=30, **kwargs): """ An Arc class with the n keyword argument """ fill = kwargs.setdefault('fill', False) if fill: raise ValueError("Arc objects can not be filled") super().__init__(xy, width, height, angle=angle, **kwargs) self.theta1 = theta1 self.theta2 = theta2 (self._theta1, self._theta2, self._stretched_width, self._stretched_height) = self._theta_stretch() # add in the n keyword here self._path = Path.arc(self._theta1, self._theta2, n=n) The original code (without the for loop) can then be used and the number of points increased, e.g., from matplotlib import pyplot as plt fig, ax = plt.subplots() ax.set(xlim=(-6, 6), ylim=(-6, 6)) # generate a circular path using NewArc and the n keyword circle = NewArc((0, 0), 10, 10, theta1=20, theta2=220, n=40, color='green') path = circle.get_transform().transform_path(circle.get_path()).cleaned().vertices[:-3] ax.add_patch(circle) # plot path vertices ax.scatter(x=path[:, 0], y=path[:, 1], color='red', s=2) fig.show() | 4 | 1 |
74,928,049 | 2022-12-27 | https://stackoverflow.com/questions/74928049/how-to-multiply-all-columns-with-each-other | I have a pandas dataframe and I want to add to it new features, like this: Say I have features X_1,X_2,X_3 and X_4, then I want to add X_1 * X_2, X_1 * X_3, X_1 * X_4, and similarly X_2 * X_3, X_2 * X_4 and X_3 * X_4. I want to add them, not replace the original features. How do I do that? | for c1, c2 in combinations(df.columns, r=2): df[f"{c1} * {c2}"] = df[c1] * df[c2] you can take every r = 2 combination of the columns, multiply them and assign. Example run: In [66]: df Out[66]: x1 y1 x2 y2 0 20 5 22 10 1 25 8 27 2 In [67]: from itertools import combinations In [68]: for c1, c2 in combinations(df.columns, r=2): ...: df[f"{c1} * {c2}"] = df[c1] * df[c2] ...: In [69]: df Out[69]: x1 y1 x2 y2 x1 * y1 x1 * x2 x1 * y2 y1 * x2 y1 * y2 x2 * y2 0 20 5 22 10 100 440 200 110 50 220 1 25 8 27 2 200 675 50 216 16 54 Another way via sklearn.preprocessing.PolynomialFeatures: In [74]: df Out[74]: x1 y1 x2 y2 0 20 5 22 10 1 25 8 27 2 In [75]: from sklearn.preprocessing import PolynomialFeatures In [76]: poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False) In [77]: poly.fit_transform(df) Out[77]: array([[ 20., 5., 22., 10., 100., 440., 200., 110., 50., 220.], [ 25., 8., 27., 2., 200., 675., 50., 216., 16., 54.]]) In [78]: new_columns = df.columns.tolist() + [*map(" * ".join, combinations(df.columns, r=2))] In [79]: df = pd.DataFrame(poly.fit_transform(df), columns=new_columns) In [80]: df Out[80]: x1 y1 x2 y2 x1 * y1 x1 * x2 x1 * y2 y1 * x2 y1 * y2 x2 * y2 0 20.0 5.0 22.0 10.0 100.0 440.0 200.0 110.0 50.0 220.0 1 25.0 8.0 27.0 2.0 200.0 675.0 50.0 216.0 16.0 54.0 | 3 | 1 |
74,921,764 | 2022-12-26 | https://stackoverflow.com/questions/74921764/how-to-display-plotly-dash-on-python-to-a-website | I have created many interactive graphs on Plotly Dash (using Python) that are working perfectly on the local port. But I would like to integrate this into other products on a website. Is this possible? What would be the best way to display exactly what I'm seeing on my local port? I have read that we may use <iframe> for this, but I don't have expertise to say if this is the best solution | This is a somewhat open-ended question, but I would recommend following @Federico Tartarini's excellent YouTube guide here that will allow you to deploy your plotly-dash app to Google Cloud. His video will provide more exact instructions than I'll be able to type in this answer, but I'll summarize the main points: You'll put your app.py file inside of a dash project with this structure: Dash project/ ├─ venv/ │ ├─ Dockerfile/ │ ├─ app.py/ │ └─ requirements.txt/ ├─ assets/ └─ data/ Then you'll run commands to deploy your project (including the dash app) to Google Cloud, and Google Cloud will follow instructions inside the Dockerfile and requirements.txt to create a virtual environment that can run your app.py file. Once the app is deployed to the cloud, you'll be able to access the app and embed it on your webpage. This is how I deployed my trigonometric interpolation plotly-dash app to Google Cloud and embedded it on the projects section of my personal webpage. | 3 | 1 |
74,925,007 | 2022-12-27 | https://stackoverflow.com/questions/74925007/python-package-exists-on-pypi-but-cant-install-it-via-pip | The package PyAudioWPatch is shown as available on PyPI with a big old green check mark. https://pypi.org/project/PyAudioWPatch/ However, when I try to install it, I am getting the following error: % pip install PyAudioWPatch ERROR: Could not find a version that satisfies the requirement PyAudioWPatch (from versions: none) ERROR: No matching distribution found for PyAudioWPatch For context: % python -V; pip -V Python 3.9.13 pip 22.3.1 from /Users/petertoth/Documents/Desktop_record_sum/py/venv/lib/python3.9/site-packages/pip (python 3.9) Why is this the case? | The project only has wheels for Windows, and your system is not Windows, hence the error. I assume that is by design, because it declares itself to be a PortAudio fork with WASAPI loopback support. As WASAPI is a Windows thing, it does not make sense to install it on a non-Windows system. IMHO, you'd better install the original project. If you really want this one, then you should try to get it from GitHub. | 4 | 7 |
74,882,136 | 2022-12-21 | https://stackoverflow.com/questions/74882136/memory-efficient-dot-product-between-a-sparse-matrix-and-a-non-sparse-numpy-matr | I have gone through similar questions that has been asked before (for example [1] [2]). However, none of them completely relevant for my problem. I am trying to calculate a dot product between two large matrices and I have some memory constraint that I have to meet. I have a numpy sparse matrix, which is a shape of (10000,600000). For example, from scipy import sparse as sps x = sps.random(m=10000, n=600000, density=0.1).toarray() The second numpy matrix is of size (600000, 256), which consists of only (-1, 1). import numpy as np y = np.random.choice([-1,1], size=(600000, 256)) I need dot product of x and y at lowest possible memory required. Speed is not the primary concern. Here is what I have tried so far: Scipy Sparse Format: Naturally, I converted the numpy sparse matrix to scipy csr_matrix. However, task is still getting killed due to memory issue. There is no error, I just get killed on the terminal. from scipy import sparse as sps sparse_x = sps.csr_matrix(x, copy=False) z = sparse_x.dot(y) # killed Decreasing dtype precision + Scipy Sparse Format: from scipy import sparse as sps x = x.astype("float16", copy=False) y = y.astype("int8", copy=False) sparse_x = sps.csr_matrix(x, copy=False) z = sparse_x.dot(y) # Increases the memory requirement for some reason and dies np.einsum Not sure if it helps/works with sparse matrix. Found something interesting in this answer. However, following doesn't help either: z = np.einsum('ij,jk->ik', x, y) # similar memory requirement as the scipy sparse dot Suggestions? If you have any suggestions to improve any of these. Please let me know. Further, I am thinking in the following directions: It would be great If I can get rid of dot product itself somehow. My second matrix (i.e. y is randomly generated and it just has [-1, 1]. I am hoping if there is way I could take advantage of its features. May be diving dot product into several small dot product and then, aggregate. | TL;DR: SciPy consumes significantly more memory than strictly needed for this due to temporary arrays, type promotion, and due to an inefficient usage. It is also not very fast. The setup can be optimized so to use less memory and Numba can be used to perform the computation efficiently (both for the memory usage and time). I have a numpy sparse matrix, which is a shape of (10000,600000). For example, x = sps.random(m=10000, n=600000, density=0.1).toarray() sps.random(...) is a COO sparse matrix so it is relatively compact in memory. Using .toarray() on it causes the sparse matrix to be converted to a huge dense matrix. Indeed, this resulting dense matrix (x) takes 10000*600000*8 = 44.7 GiB (since the default type for floating-point numbers is 64-bit wide). This can likely cause memory issue. On some machines with a swap memory or a large RAM (eg. 64 GiB), the program can be much slower and allocating only few GiB of RAM can cause the process to be kill if the memory is closed to be saturated (eg. due to OOM killer on Linux). Note that Windows compresses data in RAM when the remaining memory is pretty limited (ie. ZRAM). This methods works well on arrays having a lot of zeros. However, when additional arrays are allocated later and the dense array is read back from RAM, the OS needs to uncompress data from the RAM and it needs more space. 
If the rest of the RAM content cannot be compressed so well during the decompression, an out of memory is likely to occur. Naturally, I converted the numpy sparse matrix to scipy csr_matrix CSR matrices are encoded as 3 1D arrays: a data array containing the non-zero value of the source matrix; a column index array referencing the location of the non-zero items of the current row; a row index referencing the offset of the first item in the two first arrays. In practice, the 2 last array are 32-bit integer arrays and likely 64-bit integers on Linux. Since your input matrix has 10% of non-zero values, this means data should take 10000*600000*8*0.1 = 4.5 GiB, column should take also the same size on Linux (since they both contains 64-bit values) and 2.2 GiB on Windows, row should take 10000*8 = 78 KiB on Linux and be even smaller on Windows. Thus, the CSR matrix should take about 9 GiB overall on Linux and 6.7 GiB on Windows. This is still pretty big. Using copy=False is not a good idea since the CSR matrix will reference the huge initial array (AFAIK data will reference x). This means x needs to be kept in memory and so the resulting memory spec is about 44.7 GiB + 2.2~4.5 GiB = 46.9~49.2 GiB. This is not to mention matrix multiplication involving sparse matrices are generally slower (unless the sparsity factor is very small). It is better to copy the array content so to only keep the non-zero values with copy=True. x.astype("float16", copy=False) This is not possible to convert an array to another one with a different data-type without copying it especially if the size of each item is different in memory. Even if it would be, this make no sense to do that since the goal is to reduce the size of the input so to create a new array and not to keep the initial one. # Increases the memory requirement for some reason and dies There are 2 reasons for this. Firstly, Scipy does not support the float16 data-type yet for sparse matrices. You can see this by checking sparse_x.dtype: it is float32 when x is float16. This is because the float16 data-type is not supported natively on most platforms (x86-64 processors mainly support the conversion from/to this type). As the result, the CSR matrix data part is twice bigger than what it could be. Secondly, Numpy does not internally support binary operations on arrays with different data-types. Numpy first converts the inputs of binary operations so they matches. To do this, it follows semantics rules. This is called type promotion. SciPy generally works the same way (especially since it often makes use of Numpy internally). This means the int8 array will likely implicitly converted in a float32 array in your case, that is, a 4 times bigger array in memory. We need to delve into the implementation of SciPy to understand what is really happening. Under the hood Let's understand what is going on internally in SciPy when we do the matrix multiplication. As pointed out by @hpaulj, sparse_x.dot(y) calls _mul_multivector which creates the output array and calls csr_matvecs. The last is a C++ wrapped function calling a template function instantiated by the macro SPTOOLS_CSR_DEFINE_TEMPLATE. This macro is provided to the other macro SPTOOLS_FOR_EACH_INDEX_DATA_TYPE_COMBINATION responsible for generating all the possible instances from a list of predefined supported types. The implementation of csr_matvecs is available here. Based on this code, we can see the float16 data-type is not supported by SciPy (at least for sparse matrices). 
We can also see that self and other must have the same type when calling the C++ function csr_matvecs (the type promotion is certainly done before calling the C++ function using in a wrapping code like this one). We can also see that the C++ implementation is not particularly optimized and it is also simple. One can easily write a code having the same logic and the same performance using Numba or Cython so to support your specific compact input type. We can finally see than nothing is allocated in the C++ computing part (only in the Python code and certainly in the C++ wrapper). Memory efficient implementation First of all, here is a code to set the array in a compact way: from scipy import sparse as sps import numpy as np sparse_x = sps.random(m=10_000, n=600_000, density=0.1, dtype=np.float32) sparse_x = sps.csr_matrix(sparse_x) # Efficient conversion y = np.random.randint(0, 2, size=(600_000, 256), dtype=np.int8) np.subtract(np.multiply(y, 2, out=y), 1, out=y) # In-place modification One simple pure-Numpy solution to mitigate the overhead of temporary arrays is to compute the matrix multiplication by chunk. Here is an example computing the matrix band by band: chunk_count = 4 m, n = sparse_x.shape p, q = y.shape assert n == p result = np.empty((m, q), dtype=np.float16) for i in range(chunk_count): start, end = m*i//chunk_count, m*(i+1)//chunk_count result[start:end,:] = sparse_x[start:end,:] @ y This is <5% slower on my machine and it takes less memory since the temporary arrays of only 1 band of the output matrix are created at a time. If you still have memory issues using this code, then please check if the output matrix can actually fit in RAM using: for l in result: l[:] = np.random.rand(l.size). Indeed, creating an array does not mean the memory space is reserved in RAM (see this post). A more memory efficient solution and faster one is to use Numba or Cython so to do what Scipy does manually without creating any temporary array. The bad news is that Numba does not support the float16 data-type yet so float32 needs to be used instead. Here is an implementation: import numba as nb # Only for CSR sparse matrices @nb.njit(['(float32[::1], int32[::1], int32[::1], int8[:,::1])', '(float32[::1], int64[::1], int64[::1], int8[:,::1])'], fastmath=True, parallel=True) def sparse_compute(x_data, x_cols, x_rows, y): result = np.empty((x_rows.size-1, y.shape[1]), dtype=np.float32) for i in nb.prange(x_rows.size-1): line = np.zeros(y.shape[1], dtype=np.float32) for j in range(x_rows[i], x_rows[i+1]): factor = x_data[j] y_line = y[x_cols[j],:] for k in range(line.size): line[k] += y_line[k] * factor for k in range(line.size): result[i, k] = line[k] return result z = sparse_compute(sparse_x.data, sparse_x.indices, sparse_x.indptr, y) This is significantly faster than the previous solution and it also consumes far less memory. Indeed, it consumes only 10 MiB to compute the result (that is 50-200 times less than the initial solution on my machine), and it is about 3.5 times faster than SciPy on a 10 year old mobile processor with only 2 core (i7-3520M)! Generalization on LIL sparse matrices The above Numba code is only for CSR matrices. LIL matrices are not efficient. They are designed for building matrices quickly, but inefficient for computations. The doc say to convert them to CSR/CSC matrices (once created) to do computations. LIL matrices are internally 2 Numpy array of CPython list objects. 
Lists are inefficient and cannot be much optimized by Numba/Cython/C because of how CPython lists are designed. They also use a lot of memory (COO is certainly better for that). Indeed, on mainstream 64-bit platforms and using CPython, each list item is an object and a CPython object typically takes 32 bytes, not to mention the 8 byte taken by the reference in the CPython list. 2 objects are needed per non-zero value, so 80 byte per non-zero value. As a result, the resulting LIL input matrix is pretty huge. Since Numba does not really support CPython lists, lists can be converted to Numpy arrays on the fly so to be able to perform a relatively fast memory-efficient computation. Fortunately, this conversion is not so expensive on large matrices. Here is an implementation: @nb.njit('(float32[::1], int32[::1], int8[:,::1], float32[::1])', fastmath=True) def nb_compute_lil_row(row_data, row_idx, y, result): assert row_data.size == row_idx.size assert result.size == y.shape[1] for i in range(row_data.size): factor = row_data[i] y_line = y[row_idx[i],:] for k in range(y_line.size): result[k] += y_line[k] * factor def nb_compute_lil(sparse_x, y): result = np.zeros((sparse_x.shape[0], y.shape[1]), dtype=np.float32) for i in range(len(sparse_x.rows)): row_data = np.fromiter(sparse_x.data[i], np.float32) row_idx = np.fromiter(sparse_x.rows[i], np.int32) nb_compute_lil_row(row_data, row_idx, y, result[i]) return result z = nb_compute_lil(sparse_x, y) This code is twice slower than Scipy on my machine but it consumes less memory. The worst performance is due to Numba failing to generate a fast SIMD code in this specific case (due to missed optimizations of the internal LLVM-JIT compiler). | 5 | 4 |
74,924,478 | 2022-12-26 | https://stackoverflow.com/questions/74924478/python-eval-fails-to-regonize-numpy-and-math-symbols-if-used-together-with-a-d | I have the following formula that I would like to evaluate: import math import numpy as np formula = 'np.e**x + math.erf(x) + np.pi + math.erf(u)' I can easily then evaluate the formula for given float values of x and u and eval() recognizes math.erf, np.pi and np.e. For example: x=1.0; u=0.3; eval(formula) yields 7.03. But, I want x and u to be arrays. I tried to use eval with a dictionary following this post.: var = {'x':np.array([1,1]),'u':np.array([0.1,0.2])} eval(formula, var) which yields error messages, 'np' and 'math' are not defined, which was not the case above when eval was used without a dict. The same error messages are also obtained when 'x' and 'u' are set to floats instead of array with the var dictionary. Also, there are no problems if the dictionary is used with a 'formula' without np. and math., e.g. " formula = 'x + u'. Does anybody have an idea how can I evaluate formula containing np.e, math.erf, etc., when x and u are arrays? P.S. I am using Python 3.8. | Passing var as globals to eval() means the global names np and math are not available to the expression (formula). Simply pass var as locals instead. var = {'x': 1.0, 'u': 0.3} eval(formula, None, var) # -> 7.031202034457681 Note that this code doesn't work with arrays since the math module only works on scalars. | 3 | 4 |
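If x and u really need to be arrays, one possible sketch (my own suggestion: it rewrites the formula to use scipy.special.erf, which is vectorized, instead of math.erf, which only accepts scalars) is:

```python
import numpy as np
from scipy import special

# Vectorized variant of the formula: special.erf works element-wise on arrays.
formula = 'np.e**x + special.erf(x) + np.pi + special.erf(u)'

var = {'x': np.array([1.0, 1.0]), 'u': np.array([0.1, 0.2])}
# Passing var as locals keeps the module-level names np and special visible.
print(eval(formula, None, var))
```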
74,920,178 | 2022-12-26 | https://stackoverflow.com/questions/74920178/minmaxscaler-for-a-number-of-columns-in-a-pandas-dataframe | I want to apply MinmaxScaler on a number of pandas DataFrame 'together'. Meaning that I want the scaler to perform on all data in those columns, not separately on each column. My DataFrame has 20 columns. I want to apply the scaler on 12 of the columns at the same time. I have already read this. But it does not solve my problem since it acts on each column separately. | IIUC, you want the sklearn scaler to fit and transform multiple columns with the same criteria (in this case min and max definitions). Here is one way you can do this - You can save the initial shape of the columns and then transform the numpy array of those columns into a 1D array from a 2D array. Next you can fit your scaler and transform this 1D array Finally you can use the old shape to reshape the array back into the n columns you need and save them The advantage of this approach is that this works with any of the sklearn scalers you need to use, MinMaxScaler, StandardScaler etc. import pandas as pd from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21], 'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) cols = ['A','B'] old_shape = dfTest[cols].shape #(5,2) dfTest[cols] = scaler.fit_transform(dfTest[cols].to_numpy().reshape(-1,1)).reshape(old_shape) print(dfTest) A B C 0 0.000000 0.884188 big 1 0.756853 0.926301 small 2 0.764303 0.956992 big 3 0.817143 0.995530 small 4 0.766885 1.000000 small | 3 | 1 |
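One hedged follow-up to the answer above (my own sketch, reusing the same illustrative data): because a single scaler was fitted on the flattened values, the same reshape trick also undoes the scaling with inverse_transform.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({'A': [14.0, 90.2, 90.95, 96.27, 91.21],
                   'B': [103.02, 107.26, 110.35, 114.23, 114.68]})
cols = ['A', 'B']
scaler = MinMaxScaler()

shape = df[cols].shape
df[cols] = scaler.fit_transform(df[cols].to_numpy().reshape(-1, 1)).reshape(shape)
df[cols] = scaler.inverse_transform(df[cols].to_numpy().reshape(-1, 1)).reshape(shape)
print(df)   # back to the original values (up to floating-point rounding)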
74,895,750 | 2022-12-23 | https://stackoverflow.com/questions/74895750/should-i-use-poetry-in-production-dockerfile | I have a web app built with a framework like FastAPI or Django, and my project uses Poetry to manage the dependencies. I didn't find any topic similar to this. The question is: should I install poetry in my production dockerfile and install the dependencies using the poetry, or should I export the requirements.txt and just use pip inside my docker image? Actually, I am exporting the requirements.txt to the project's root before deploy the app and just using it inside the docker image. My motivation is that I don't need the "complexity" of using poetry inside a dockerfile, since the requirements.txt is already generated by the poetry and use it inside the image will generate a new step into docker build that can impact the build speed. However, I have seen much dockerfiles with poetry installation, what makes me think that I am doing a bad use of the tool. | There's no need to use poetry in production. To understand this we should look back to what the original reason poetry exists. There are basically two main reasons for poetry:- To manage python venv for us - in the past people use different range of tools, from home grown script to something like virtualenvwrapper to automatically manage the virtual env. To help us publishing packages to PyPI Reason no. 2 not really a concern for this question so let just look at reason no. 1. Why we need something like poetry in dev? It because dev environment could be different between developers. My venv could be in /home/kamal/.venv while John probably want to be fancy and place his virtualenv in /home/john/.local/venv. When writing notes on how to setup and run your project, how would you write the notes to cater the difference between me and John? We probably use some placeholder such as /path/to/your/venv. Using poetry, we don't have to worry about this. Just write in the notes that you should run the command as:- poetry run python manage.py runserver ... Poetry take care of all the differences. But in production, we don't have this problem. Our app in production will be in single place, let say in /app. When writing notes on how to run command on production, we can just write:- /app/.venv/bin/myapp manage collectstatic ... Below is a sample Dockerfile we use to deploy our app using docker:- FROM python:3.10-buster as py-build # [Optional] Uncomment this section to install additional OS packages. RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ && apt-get -y install --no-install-recommends netcat util-linux \ vim bash-completion yamllint postgresql-client RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python3 - COPY . /app WORKDIR /app ENV PATH=/opt/poetry/bin:$PATH RUN poetry config virtualenvs.in-project true && poetry install FROM node:14.20.0 as js-build COPY . /app WORKDIR /app RUN npm install && npm run production FROM python:3.10-slim-buster EXPOSE 8000 COPY --from=py-build /app /app COPY --from=js-build /app/static /app/static WORKDIR /app CMD /app/.venv/bin/run We use multistage build where in the build stage, we still use poetry to install all the dependecies but in the final stage, we just copy /app which would also include .venv virtualenv folder. | 7 | 13 |
74,916,685 | 2022-12-26 | https://stackoverflow.com/questions/74916685/installing-greenlet-with-pip-fails-on-macos-13-1 | I am trying to install greenlet in a virtualenv on my mac. pip install greenlet This renders the following output: Collecting greenlet Using cached greenlet-2.0.1.tar.gz (163 kB) Preparing metadata (setup.py) ... done Building wheels for collected packages: greenlet Building wheel for greenlet (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py bdist_wheel did not run successfully. β exit code: 1 β°β> [98 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.9-universal2-cpython-39 creating build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_version.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_weakref.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_gc.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/leakcheck.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_generator.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_greenlet_trash.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_throw.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_tracing.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_cpp.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_contextvars.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_greenlet.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_extension_interface.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_generator_nested.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_stack_saved.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_leaks.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests running egg_info writing src/greenlet.egg-info/PKG-INFO writing dependency_links to src/greenlet.egg-info/dependency_links.txt writing requirements to src/greenlet.egg-info/requires.txt writing top-level names to src/greenlet.egg-info/top_level.txt reading manifest file 'src/greenlet.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files found matching 'benchmarks/*.json' no previously-included directories found matching 'docs/_build' warning: no files found matching '*.py' under directory 'appveyor' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files 
matching '*.so' found anywhere in distribution warning: no previously-included files matching '.coverage' found anywhere in distribution adding license file 'LICENSE' adding license file 'LICENSE.PSF' adding license file 'AUTHORS' writing manifest file 'src/greenlet.egg-info/SOURCES.txt' copying src/greenlet/greenlet.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_allocator.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_compiler_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_cpython_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_exceptions.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_greenlet.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_internal.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_refs.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_slp_switch.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_state.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_state_dict_cleanup.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_support.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/slp_platformselect.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/platform/setup_switch_x64_masm.cmd -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_aarch64_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_alpha_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_amd64_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm32_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm32_ios.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_csky_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_m68k_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_mips_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc64_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc64_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying 
src/greenlet/platform/switch_ppc_macosx.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_riscv_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_s390_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_sparc_sun_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x32_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x86_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x86_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/tests/_test_extension.c -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/_test_extension_cpp.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests running build_ext building 'greenlet._greenlet' extension creating build/temp.macosx-10.9-universal2-cpython-39 creating build/temp.macosx-10.9-universal2-cpython-39/src creating build/temp.macosx-10.9-universal2-cpython-39/src/greenlet clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -I/Users/alexanderk/Library/CloudStorage/Dropbox/Mac/Documents/Development/Python-Dev/testCHat/env/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c src/greenlet/greenlet.cpp -o build/temp.macosx-10.9-universal2-cpython-39/src/greenlet/greenlet.o --std=gnu++11 src/greenlet/greenlet.cpp:16:10: fatal error: 'Python.h' file not found #include <Python.h> ^~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for greenlet Running setup.py clean for greenlet Failed to build greenlet Installing collected packages: greenlet Running setup.py install for greenlet ... error error: subprocess-exited-with-error Γ Running setup.py install for greenlet did not run successfully. β exit code: 1 β°β> [100 lines of output] running install /Users/alexanderk/Library/CloudStorage/Dropbox/Mac/Documents/Development/Python-Dev/testCHat/env/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. 
warnings.warn( running build running build_py creating build creating build/lib.macosx-10.9-universal2-cpython-39 creating build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform creating build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_version.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_weakref.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_gc.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/leakcheck.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_generator.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_greenlet_trash.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_throw.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_tracing.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_cpp.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_contextvars.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_greenlet.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_extension_interface.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_generator_nested.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_stack_saved.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/test_leaks.py -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests running egg_info writing src/greenlet.egg-info/PKG-INFO writing dependency_links to src/greenlet.egg-info/dependency_links.txt writing requirements to src/greenlet.egg-info/requires.txt writing top-level names to src/greenlet.egg-info/top_level.txt reading manifest file 'src/greenlet.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files found matching 'benchmarks/*.json' no previously-included directories found matching 'docs/_build' warning: no files found matching '*.py' under directory 'appveyor' warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '.coverage' found anywhere in distribution adding license file 'LICENSE' adding license file 'LICENSE.PSF' adding license file 'AUTHORS' writing manifest file 'src/greenlet.egg-info/SOURCES.txt' copying src/greenlet/greenlet.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_allocator.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying 
src/greenlet/greenlet_compiler_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_cpython_compat.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_exceptions.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_greenlet.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_internal.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_refs.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_slp_switch.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_state.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_state_dict_cleanup.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/greenlet_thread_support.hpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/slp_platformselect.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet copying src/greenlet/platform/setup_switch_x64_masm.cmd -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_aarch64_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_alpha_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_amd64_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm32_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm32_ios.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_arm64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_csky_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_m68k_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_mips_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc64_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc64_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_aix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_linux.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_macosx.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_ppc_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_riscv_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_s390_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_sparc_sun_gcc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying 
src/greenlet/platform/switch_x32_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_masm.asm -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_masm.obj -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x64_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x86_msvc.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/platform/switch_x86_unix.h -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/platform copying src/greenlet/tests/_test_extension.c -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests copying src/greenlet/tests/_test_extension_cpp.cpp -> build/lib.macosx-10.9-universal2-cpython-39/greenlet/tests running build_ext building 'greenlet._greenlet' extension creating build/temp.macosx-10.9-universal2-cpython-39 creating build/temp.macosx-10.9-universal2-cpython-39/src creating build/temp.macosx-10.9-universal2-cpython-39/src/greenlet clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -arch arm64 -arch x86_64 -Werror=implicit-function-declaration -I/Users/alexanderk/Library/CloudStorage/Dropbox/Mac/Documents/Development/Python-Dev/testCHat/env/include -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/Headers -c src/greenlet/greenlet.cpp -o build/temp.macosx-10.9-universal2-cpython-39/src/greenlet/greenlet.o --std=gnu++11 src/greenlet/greenlet.cpp:16:10: fatal error: 'Python.h' file not found #include <Python.h> ^~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure Γ Encountered error while trying to install package. β°β> greenlet note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. I have read a multitude of similar problems and solutionsbut none of them seem to work. My setuptools are up to date. My wheel is up to date. My python version is 3.9.6 My mac is up to date. I have XCode commandline tools installed. Please help me adress this error so I can install greenlet and other python packages (some packages install fine, others give the same error) | Solution: Turns out my whole pip/python-versions etc. mess was causing the issue. I followed this wonderfully written article and everything is working now: https://gist.github.com/MuhsinFatih/ee0154199803babb449b5bb98d3475f7 | 3 | 0 |
74,917,035 | 2022-12-26 | https://stackoverflow.com/questions/74917035/dont-truncate-columns-output | I am setting the options like this pd.options.display.max_columns = None When I try to print the DataFrame, I get truncated columns: <class 'pandas.core.frame.DataFrame'> Index(['contractSymbol', 'strike', 'currency', 'lastPrice', 'change', 'volume', 'bid', 'ask', 'contractSize', 'lastTradeDate', 'impliedVolatility', 'inTheMoney', 'openInterest', 'percentChange'], dtype='object') contractSymbol strike currency \ symbol expiration optionType TSLA 2022-12-30 calls TSLA221230C00050000 50.00 USD calls TSLA221230C00065000 65.00 USD How do I show all columns in one row? | I think you need to set the display size to a larger value. According to the documentation, display.max_columns defines the behavior taken when max_cols is exceeded. I'm not sure if this is referring to the number of columns, or the width of all columns. In either case, setting display.width to a larger value: pd.options.display.width = 120 would probably fix your issue. The default is 80 characters, which is about what you have there before the value is written to a newline. If your editor is in a terminal window that you can resize, you could also try doing that. | 3 | 2 |
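A small hedged sketch of the suggested settings (the width value is arbitrary), scoped with option_context so the change does not leak into the rest of the session:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(3, 14), columns=[f'col_{i}' for i in range(14)])

with pd.option_context('display.max_columns', None, 'display.width', 200):
    print(df)   # all 14 columns fit on one wide row instead of wrapping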
74,913,169 | 2022-12-25 | https://stackoverflow.com/questions/74913169/how-can-i-make-pdf2image-work-with-pdfs-that-have-paths-containing-chinese-chara | Following this question, I tried to run the following code to convert PDF with a path that contains Chinese characters to images: from pdf2image import convert_from_path images = convert_from_path('path with Chinese character in it/some Chinese character.pdf', 500) # save images I got this error message: PDFPageCountError: Unable to get page count. I/O Error: Couldn't open file 'path with Chinese character in it/??????.pdf': No such file or directory. in which all Chinese characters are replaced with "?". The issue is caused solely by the Chinese characters in the directory since the program worked as intended after I ensured that the path contains no Chinese characters. In pdf2image.py, I tried to alter the function pdfinfo_from_path, that out.decode("utf8", "ignore") is changed to e.g. out.decode("utf32", "ignore"), which also does not work. Not sure whether it is relevant: according to the aforementioned answer, I also need to install poppler. But my code also worked when the directory does not contain any Chinese characters. In addition, running this code conda install -c conda-forge poppler (from the answer above) never ends after several hours of waiting. | You could use the convert_from_bytes to avoid the issue: from pdf2image import convert_from_bytes with open('chinese_filename.pdf', 'rb') as f: images = convert_from_bytes(f.read(), 500) | 3 | 2 |
74,904,866 | 2022-12-24 | https://stackoverflow.com/questions/74904866/repeated-categorical-x-axis-labels-in-matplotlib | I have a simple question: why are my x-axis labels repeated? Here's an MWE: X-Axis Labels MWE a = { # DATA -- 'CATEGORY': (VALUE, ERROR) 'Cats': (1, 0.105), 'Dogs': (2, 0.023), 'Pigs': (2.6, 0.134) } compositions = list(a.keys()) # MAKE INTO LIST a_vals = [i[0] for i in a.values()] # EXTRACT VALUES a_errors = [i[1] for i in a.values()] # EXTRACT ERRORS fig = plt.figure(figsize=(8, 6)) # DICTATE FIGURE SIZE bax = brokenaxes(ylims=((0,1.5), (1.7, 3)), hspace = 0.05) # BREAK AXES bax.plot(compositions, a_vals, marker = 'o') # PLOT DATA for i in range(0, len(a_errors)): # PLOT ALL ERROR BARS bax.errorbar(i, a_vals[i], yerr = a_errors[i], capsize = 5, fmt = 'red') # FORMAT ERROR BAR Here's stuff I tried: Manually setting x-axis tick marks using xticks Converting strings to floats using np.asarray(x, float) Reducing # ticks using pyplot.locator_params(nbins=3) | You can use bax.locator_params(axis='x', nbins=len(compositions)) to reduce the number of x-ticks so that it matches the length of compositions. More on locator_params() method, which controls the behavior of major tick locators: https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.locator_params.html import matplotlib.pyplot as plt from brokenaxes import brokenaxes a = { # DATA -- 'CATEGORY': (VALUE, ERROR) 'Cats': (1, 0.105), 'Dogs': (2, 0.023), 'Pigs': (2.6, 0.134) } compositions = list(a.keys()) # MAKE INTO LIST a_vals = [i[0] for i in a.values()] # EXTRACT VALUES a_errors = [i[1] for i in a.values()] # EXTRACT ERRORS fig = plt.figure(figsize=(8, 6)) # DICTATE FIGURE SIZE bax = brokenaxes(ylims=((0, 1.5), (1.7, 3)), hspace=0.05) # BREAK AXES bax.plot(compositions, a_vals, marker='o') # PLOT DATA for i in range(0, len(a_errors)): # PLOT ALL ERROR BARS bax.errorbar(i, a_vals[i], yerr=a_errors[i], capsize=5, fmt='red') # FORMAT ERROR BAR bax.locator_params(axis='x', nbins=len(compositions)) plt.show() Result: | 3 | 2 |
74,910,864 | 2022-12-24 | https://stackoverflow.com/questions/74910864/why-is-the-included-in-removeprefix-definition | In PEP-616, the specification for removeprefix() includes this code block: def removeprefix(self: str, prefix: str, /) -> str: if self.startswith(prefix): return self[len(prefix):] else: return self[:] Why does the last line say return self[:], instead of just return self? | [:] is an old idiom for copying sequences. Nowadays, we use the idiomatic .copy for lists; there isn't normally a good reason to copy strings, since they are supposed to be immutable, so the str class doesn't provide such a method. Furthermore, due to string interning, [:] may well return the same instance anyway. So, why include it in code like this? Because str can be subclassed. The clue is in the subsequent text: When the arguments are instances of str subclasses, the methods should behave as though those arguments were first coerced to base str objects, and the return value should always be a base str. Suppose we had a user-defined subclass: class MyString(str): ... Notice what happens when we slice an instance to copy it: >>> type(MyString('xyz')[:]) <class 'str'> In the example implementation, therefore, the [:] ensures that an instance of the base str type will be returned, conforming to the text specification. | 4 | 7 |
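A quick demonstration of the point above (my own sketch, reusing the PEP's sample implementation from the question): slicing a str subclass yields a plain str, so the [:] branch keeps the return type consistent.

def removeprefix(self: str, prefix: str, /) -> str:
    if self.startswith(prefix):
        return self[len(prefix):]
    else:
        return self[:]

class MyString(str):
    pass

s = MyString('xyz')
print(type(removeprefix(s, 'x')))   # <class 'str'> via the slice in the if-branch
print(type(removeprefix(s, 'q')))   # <class 'str'> thanks to the [:] copy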
74,910,159 | 2022-12-24 | https://stackoverflow.com/questions/74910159/sort-a-pandas-dataset-by-date | I have a CSV dataset like this. import pandas as pd from io import StringIO data=""" Date| Link April 1, 2009, 12:00 PM| 4 March 27, 2009, 12:00 PM| 8 April 29, 2009, 12:00 PM| 15 May 12, 2009, 12:00 PM| 9 June 9, 2009, 12:00 PM| 11 July 3, 2009, 12:00 PM| 329 June 16, 2009, 12:00 PM| 12 September 26, 2009, 12:00 PM| 48 October 4, 2009, 12:00 PM| 49 August 15, 2009, 12:00 PM| 10 November 30, 2009, 12:00 PM| 29 December 23, 2009, 12:00 PM| 68 April 1, 2009, 12:00 PM| 4 May 12, 2010, 12:00 PM| 9 September 26, 2012, 12:00 PM| 48 """ df = pd.read_csv(StringIO(data), delimiter='|') Now I want to sort the dataset by the Date index; 1st day of calendar will need to appear first. For this, I have tried using df.sort_values(by = 'Date') but unfortunately, it gives me the sorted index using alphabetical order. How can I sort this data set by as like in calendar? | You can convert the Date column to datetime objects, and then use .sort_values: df["Date"] = pd.to_datetime(df["Date"]) df.sort_values("Date") This outputs: Date Link 1 2009-03-27 12:00:00 8 0 2009-04-01 12:00:00 4 12 2009-04-01 12:00:00 4 2 2009-04-29 12:00:00 15 3 2009-05-12 12:00:00 9 4 2009-06-09 12:00:00 11 6 2009-06-16 12:00:00 12 5 2009-07-03 12:00:00 329 9 2009-08-15 12:00:00 10 7 2009-09-26 12:00:00 48 8 2009-10-04 12:00:00 49 10 2009-11-30 12:00:00 29 11 2009-12-23 12:00:00 68 13 2010-05-12 12:00:00 9 14 2012-09-26 12:00:00 48 | 3 | 2 |
74,909,057 | 2022-12-24 | https://stackoverflow.com/questions/74909057/if-dataframe-column-has-specific-words-alter-value | I have a dataframe, example: df = [{'id': 1, 'text': 'text contains ok words'}, {'id':2, 'text':'text contains word apple'}, {'id':3, 'text':'text contains words ok'}] Example: keywords = ['apple', 'orange', 'lime'] And I want to check every row of the 'text' column to see if it contains any word from my keywords; if so, I want to alter that text value to: 'disconsider this case' I've tried to tokenize the column but then I'm not able to use the function I created to check, here is the example: df = pd.DataFrame(df) def remove_keywords(inpt): keywords = ['apple', 'orange', 'lime'] if any(x in inpt for x in keywords): return 'disconsider this case' else: return inpt df['text'] = df['text'].apply(remove_keywords) df df['text'] = df.apply(lambda row: nltk.word_tokenize(row['text']), axis=1) for word in df['text']: if 'apple' in df['text']: return 'disconsider this case' Any help appreciated. Thanks!! | This worked for me using pandas and a loop: import pandas as pd keywords=['apple', 'orange', 'lime'] df = pd.DataFrame([{'id': 1, 'text': 'text contains ok words'}, {'id':2, 'text':'text contains word apple'}, {'id':3, 'text':'text contains words ok'}]) print(df) for i in range(len(df)): if any(word in df.iat[i,1] for word in keywords): df.iat[i,1]='disconsider this case' print(df) | 3 | 3 |
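A hedged alternative to the accepted loop (my own sketch, not part of the answer): the same check can be vectorized with str.contains and a boolean mask.

import pandas as pd

keywords = ['apple', 'orange', 'lime']
df = pd.DataFrame([{'id': 1, 'text': 'text contains ok words'},
                   {'id': 2, 'text': 'text contains word apple'},
                   {'id': 3, 'text': 'text contains words ok'}])

# Build one regex alternation from the keywords and flag matching rows
mask = df['text'].str.contains('|'.join(keywords), case=False)
df.loc[mask, 'text'] = 'disconsider this case'
print(df)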
74,891,109 | 2022-12-22 | https://stackoverflow.com/questions/74891109/write-import-call-same-python-module-multiple-times-runs-outdated-code | import sys import os import time MODULE_NAME = "mycode" def write_module(version): with open(MODULE_NAME+".py", "w") as f: f.write("def fun():"+os.linesep) f.write(" print('Code version:',"+str(version)+")") for i in range(5): # WRITE A PYTHON FILE AUTOMATICALLY write_module(i) # IMPORT IT if MODULE_NAME in sys.modules: del sys.modules[MODULE_NAME] # time.sleep(1) # <------------------------ WHY IS IT MANDATORY ???? module = __import__(MODULE_NAME) fun = module.fun # CALL IT fun() Produces: Code version: 0 Code version: 0 Code version: 0 Code version: 0 Code version: 0 I expected: Code version: 0 Code version: 1 Code version: 2 Code version: 3 Code version: 4 I am developing Python code that writes Python code automatically. Python import instructions do not work as I expected; it looks like an asynchronous callback. I don't know why adding the line time.sleep(1) corrects the error. | The compiled bytecode is cached in the __pycache__ directory. _validate_timestamp_pyc validates the cache against the source's last-modified time (the timestamp recorded in the .pyc has one-second resolution, so without time.sleep(1) the rewritten source looks unchanged) and the source size (which is also identical here, since only one digit changes). You can remove the pyc file before deleting the module from sys.modules. if MODULE_NAME in sys.modules: os.remove(sys.modules[MODULE_NAME].__cached__) # Add this del sys.modules[MODULE_NAME] | 3 | 3 |
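A hedged alternative to removing the pyc by hand (my suggestion, not the accepted answer's): telling the interpreter not to write bytecode files at all should also avoid the stale cache, since each __import__ then compiles the freshly written source.

import sys

sys.dont_write_bytecode = True   # set before the first import of the module

# ...then run the write_module()/__import__ loop from the question unchanged.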
74,901,599 | 2022-12-23 | https://stackoverflow.com/questions/74901599/django-where-do-you-store-non-django-py-files-in-your-app | I have three python files (two that scrape data and 1 with my functions) I'm using in my app. Each file is about 150 lines, I don't want to include this code in views.py to keep my views as clean and readable as possible. Is there a best practice to keep things tidy? A separate folder inside the app with all external non-django .py files (similar to 'Templates' for .html or 'Static' for .css)? Any suggestions are appreciated. A Google search didn't yield any results. | You could create a separate app-- it depends on how sectioned off you want things. I prefer to just include the non-django .py files in the app that you expect to use them in the most. And then import the relevant functions in views.py from there. Sample of how the import would look in views.py for a function (called function) from a non-django .py file called operations.py located in the same app as the view: from .operations import function | 3 | 1 |
74,899,785 | 2022-12-23 | https://stackoverflow.com/questions/74899785/psycopg2-errors-activesqltransaction-create-database-cannot-run-inside-a-transa | I am trying to create a Django app that creates a new database for every user when he/she signs up. I am going with this approach due to some reason. I have tried many ways using management commands and even Celery. But I am still getting the same error. 2022-12-23 07:16:07.410 UTC [49] STATEMENT: CREATE DATABASE tenant_asdadsad [2022-12-23 07:16:07,415: ERROR/ForkPoolWorker-4] Task user.utils.create_database[089b0bc0-0b5f-4199-8cf3-bc336acc7624] raised unexpected: ActiveSqlTransaction('CREATE DATABASE cannot run inside a transaction block\n') Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task R = retval = fun(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__ return self.run(*args, **kwargs) File "/app/user/utils.py", line 45, in create_database cursor.execute(f'CREATE DATABASE tenant_{tenant_id}') psycopg2.errors.ActiveSqlTransaction: CREATE DATABASE cannot run inside a transaction block This is my task @shared_task def create_database(tenant_id): conn = psycopg2.connect(database="mydb", user="dbuser", password="mypass", host="db") cursor = conn.cursor() transaction.set_autocommit(True) cursor.execute(f'CREATE DATABASE tenant_{tenant_id}') cursor.execute(f'GRANT ALL PRIVILEGES ON DATABASE tenant_{tenant_id} TO dbuser') cursor.close() conn.close() I have tried several ways but I always get the same error This is my API call def create(self, request, *args, **kwargs): serializer_class = mySerializer(data=request.data) if serializer_class.is_valid(): validated_data = serializer_class.validated_data or = validated_data["org"] or = Org.objects.create(**org) create_database.delay(str(or.id)) return Response(create_user(validated_data)) | Use the autocommit property of the connection: from psycopg2 import sql def create_database(tenant_id): conn = psycopg2.connect(database="mydb", user="dbuser", password="mypass", host="db") cursor = conn.cursor() conn.autocommit = True #! # transaction.set_autocommit(True) #? dbname = sql.Identifier(f'tenant_{tenant_id}') create_cmd = sql.SQL('CREATE DATABASE {}').format(dbname) grant_cmd = sql.SQL('GRANT ALL PRIVILEGES ON DATABASE {} TO dbuser').format(dbname) cursor.execute(create_cmd) cursor.execute(grant_cmd) cursor.close() conn.close() Read in the docs about connection.autocommit. Note also the use of the SQL string composition to avoid SQL injection. | 3 | 10 |
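An equivalent hedged variant of the accepted fix (the tenant name below is made up): psycopg2 also exposes autocommit through set_isolation_level, and composing the identifier with psycopg2.sql avoids manual string formatting.

import psycopg2
from psycopg2 import extensions, sql

conn = psycopg2.connect(database="mydb", user="dbuser", password="mypass", host="db")
conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)

with conn.cursor() as cursor:
    cursor.execute(sql.SQL("CREATE DATABASE {}").format(sql.Identifier("tenant_demo")))
conn.close()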
74,901,020 | 2022-12-23 | https://stackoverflow.com/questions/74901020/python-typing-mypy-errors-with-overload-overlap-when-signatures-are-different | The following code appears to generate two mypy errors: Overloaded function signatures 1 and 3 overlap with incompatible return types and Overloaded function signatures 2 and 3 overlap with incompatible return types; but all overloads have different signatures - Literal[True], Literal[False] and None do not overlap. @overload def func_a(*, a: Literal[False] = ...) -> str: ... @overload def func_a(*, a: None = ...) -> str: ... @overload def func_a(*, a: Literal[True] = ...) -> int: ... def func_a(*, a: Optional[bool] = None) -> str | int: if a: return 1 return "foo" var1 = func_a() # str correctly discovered by VSCode Pylance var2 = func_a(a=False) # str correctly discovered by VSCode Pylance var3 = func_a(a=True) # int correctly discovered by VSCode Pylance Why does Mypy think they overlap and how could I go about fixing this? Mypy version: 0.991 Python version: 3.11.1 | The problem is that by writing = ... default values for every overload, you've marked the parameter as optional in every overload. A plain func_a() call matches every single overload of your function. You need to resolve that, so func_a() only matches one overload. Here's one way: @overload def func_a(*, a: Literal[False]) -> str: ... @overload def func_a(*, a: Literal[True]) -> int: ... @overload def func_a(*, a: None = None) -> str: ... Here, only the None overload marks the parameter as optional, so func_a() only matches that overload. Alternatively, you could make the no-argument version its own overload: @overload def func_a(*, a: Literal[False]) -> str: ... @overload def func_a(*, a: Literal[True]) -> int: ... @overload def func_a(*, a: None) -> str: ... @overload def func_a() -> str: ... | 4 | 9 |
74,899,987 | 2022-12-23 | https://stackoverflow.com/questions/74899987/create-a-list-of-lists-of-lists-in-a-single-line | This code creates a list of 25 lists of 25 lists: vals = [] for i in range(25): vals.append([]) for j in range(25): vals[i].append([]) How could I translate this code to a single line instead of using 5 lines in Python? | You can use list_comprehension. res = [[[] for _ in range(25)] for _ in range(25)] To check that result is the same, we can use numpy.ndarray.shape. >>> import numpy as np >>> np.asarray(vals).shape (25, 25, 0) >>> np.asarray(res).shape (25, 25, 0) | 3 | 9 |
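A side note worth adding here (my own sketch): the shorter-looking [[[]] * 25] * 25 is not equivalent, because * copies references, so every cell ends up sharing the same inner list.

res = [[[] for _ in range(25)] for _ in range(25)]   # independent inner lists
bad = [[[]] * 25] * 25                               # shared references

bad[0][0].append(1)
print(len(bad[3][7]))   # 1 -- every cell points at the same inner list
res[0][0].append(1)
print(len(res[3][7]))   # 0 -- the comprehension keeps them independent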
74,895,292 | 2022-12-23 | https://stackoverflow.com/questions/74895292/why-do-breakpoints-in-my-function-with-a-yield-do-not-break | I am writing Python in VSCode. I have a file with two functions, and a second file that calls those functions. When I put breakpoints in one of the functions, the breakpoint does not hit. Here is test2.py that has the two functions... def func1(): for i in range(10): yield i def func2(): for i in range(10): print(i) Here is test1.py, from which the debugger is being launched... import test2 test2.func1() test2.func2() When I put a breakpoint on the for loop in both func1 and func2, then run the debugger from test1.py, the breakpoint for func1 is never hit, but the breakpoint in func2 is hit. Why is this? | You never actually advanced the generator: calling func1() set up a generator instance, but didn't begin executing its body. If you want to exhaust the generator, iterate the result of calling func1, e.g. replace: test2.func1() with: # Extracts and prints all items for x in test2.func1(): print(x) # Or to extract and print a single item: print(next(test2.func1())) | 3 | 2 |
74,894,148 | 2022-12-22 | https://stackoverflow.com/questions/74894148/aligning-two-images-with-manual-homography | I'm creating an application that uses manual calibration to align two images. I'm trying to align them almost pixel perfectly, so I'm not relying on automatic calibration as it did not work the best for this scenario. I'm doing it manually by choosing pixels. However, the result is not what I hoped for, and I do not know where I'm making a mistake. I feel like the computed points should place the image precisely on top of the other one, but for some reason, it does not. What am I doing wrong? Results of homography: [[ 7.43200521e-01 -1.79170744e-02 -1.76782990e+02] [ 1.00046389e-02 7.84106136e-01 -3.22549155e+01] [ 5.10695284e-05 -8.48641135e-05 1.00000000e+00]] Manually picked points: RGB: [[ 277 708] [1108 654] [ 632 545] [ 922 439] [ 874 403] [ 398 376] [ 409 645] [ 445 593] [ 693 342] [ 739 244] [ 505 234] [ 408 275] [ 915 162] [1094 126] [ 483 115] [ 951 366] [ 517 355]] Thermal: [[ 8 549] [634 491] [282 397] [496 318] [461 289] [113 269] [122 479] [148 438] [325 236] [360 162] [194 156] [121 188] [484 106] [621 67] [178 62] [515 261] [203 253]] def manual_calibration(self, rgb: cv2.UMat, thermal: cv2.UMat) -> Tuple[cv2.UMat, Tuple[int, int, int, int]]: rgb_gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY) thermal_gray = cv2.cvtColor(thermal, cv2.COLOR_BGR2GRAY) h_rgb, w_rgb = rgb_gray.shape h_th, w_th = thermal_gray.shape thermal_gray = cv2.copyMakeBorder(thermal_gray, 0, h_rgb - h_th, 0, 0, cv2.BORDER_CONSTANT, value=[0, 0, 0]) merged = cv2.hconcat((rgb_gray, thermal_gray)) self.merged = cv2.cvtColor(merged, cv2.COLOR_GRAY2RGB) def point_validation(ix, iy): if ix > w_rgb: ix -= w_rgb return ix, iy self.points_left = np.array([]) self.points_right = np.array([]) self.label = True def select_point(event, x, y, flags, param): if event == cv2.EVENT_LBUTTONDOWN: # captures left button double-click ix, iy = x, y cv2.circle(img=self.merged, center=(x,y), radius=5, color=(0,255,0),thickness=-1) ix, iy = point_validation(ix, iy) pt = np.array([ix, iy]) if self.label: # self.points_left = np.vstack((self.points_left, pt)) self.points_left = np.vstack((self.points_left, pt)) if self.points_left.size else pt self.label = False else: # self.points_right = np.vstack((self.points_right, pt)) self.points_right = np.vstack((self.points_right, pt)) if self.points_right.size else pt self.label = True print(ix, iy) cv2.namedWindow('calibration') cv2.setMouseCallback('calibration', select_point) while True: cv2.imshow("calibration", self.merged) if cv2.waitKey(20) & 0xFF == 27: break cv2.destroyAllWindows() print(self.points_left) print(self.points_right) ### EDIT NEW POINT VALIDATION rgb_gray_check = rgb_gray thermal_gray_check = thermal_gray for point in self.points_left: cv2.circle(img=rgb_gray_check, center=point, radius=5, color=(0,255,0),thickness=-1) for point in self.points_right: cv2.circle(img=thermal_gray_check, center=point, radius=5, color=(0,255,0),thickness=-1) cv2.imshow('rgb', rgb_gray_check) cv2.imshow('thermal', thermal_gray_check) cv2.waitKey(0) ### EDIT NEW POINT VALIDATION # Compute homography # 0 - a regular method using all the points # CV_RANSAC - RANSAC-based robust method # CV_LMEDS - Least-Median robust method matrix, mask = cv2.findHomography(self.points_left, self.points_right, 0) print(matrix) # matrix[0][3] += (w_th/2) # matrix[1][3] += (h_th/2) warp_src = cv2.warpPerspective(thermal, matrix, (rgb.shape[1], rgb.shape[0])) alpha = 0.5 beta = (1.0 - alpha) 
dst_warp_blended = cv2.addWeighted(rgb, alpha, warp_src, beta, 0.0) cv2.imshow('Blended destination and warped image', dst_warp_blended) cv2.waitKey(0) Source images: My result: | So I figured it out! I accidentally flipped parameters in the findHomography function. So it should be matrix, mask = cv2.findHomography(self.points_right, self.points_left, 0) And, of course, delete the offset for the homography matrix. | 6 | 5 |
74,893,354 | 2022-12-22 | https://stackoverflow.com/questions/74893354/is-literal-ellipsis-really-valid-as-paramspec-last-argument | Quote from Python docs for Concatenate: The last parameter to Concatenate must be a ParamSpec or ellipsis (...). I know what ParamSpec is, but the ellipsis here drives me mad. It is not accepted by mypy: from typing import Callable, ParamSpec, Concatenate, TypeVar, Generic _P = ParamSpec('_P') _T = TypeVar('_T') class Test(Generic[_P, _T]): fn: Callable[Concatenate[_P, ...], _T] E: Unexpected "..." [misc] E: The last parameter to Concatenate needs to be a ParamSpec [valid-type] and is not explained anywhere in docs. PEP612 doesn't mention it. Is it just a mistake, appeared as a result of mixing Callable and Concatenate together? This issue is somewhat related and shows syntax with ellipsis literal in Concatenate: The specification should be extended to allow either Concatenate[int, str, ...], or [int, str, ...], or some other syntax. But this clearly targets "future syntax". Note: I'm aware of meaning of ellipsis as Callable argument, this question is specifically about Concatenate. | According to PEP-612's grammar, the ellipsis is not permitted in the Concatenate expression: We now augment that with two new options: a parameter specification variable (Callable[P, int]) or a concatenation on a parameter specification variable (Callable[Concatenate[int, P], int]). callable ::= Callable "[" parameters_expression, type_expression "]" parameters_expression ::= | "..." | "[" [ type_expression ("," type_expression)* ] "]" | parameter_specification_variable | concatenate "[" type_expression ("," type_expression)* "," parameter_specification_variable "]" where parameter_specification_variable is a typing.ParamSpec variable, declared in the manner as defined above, and concatenate is typing.Concatenate. However, the support for ellipsis as the last argument for Concatenate was introduced in April 2022 as part of Python 3.11. No type checker seems to handle this new case though. | 3 | 3 |
74,893,662 | 2022-12-22 | https://stackoverflow.com/questions/74893662/transpose-pandas-df-based-on-value-data-type | I have pandas DataFrame A. I am struggling transforming this into my desired format, see DataFrame B. I tried pivot or melt but I am not sure how I could make it conditional (string values to FIELD_STR_VALUE, numeric values to FIELD_NUM_VALUE). I was hoping you could point me the right direction. A: Input DataFrame |FIELD_A |FIELD_B |FIELD_C |FIELD_D | |--------|--------|--------|--------| |123123 |8 |a |23423 | |123124 |7 |c |6464 | |123144 |99 |x |234 | B: Desired output DataFrame |ID |FIELD_A |FIELD_NAME |FIELD_STR_VALUE |FIELD_NUM_VALUE | |---|--------|-----------|----------------|----------------| |1 |123123 |B | |8 | |2 |123123 |C |a | | |3 |123123 |D | |23423 | |4 |123124 |B | |7 | |5 |123124 |C |c | | |6 |123124 |D | |6464 | |7 |123144 |B | |99 | |8 |123144 |C |x | | |9 |123144 |D | |234 | | You can use: # dic = {np.int64: 'NUM', object: 'STR'} (df.set_index('FIELD_A') .pipe(lambda d: d.set_axis(pd.MultiIndex.from_arrays( [d.columns, d.dtypes], # or for custom NAMES #[d.columns, d.dtypes.map(dic)], names=['FIELD_NAME', None]), axis=1) ) .stack(0).add_prefix('FIELD_').add_suffix('_VALUE') .reset_index() ) NB. if you really want STR/NUM, map those strings from the dtypes (see comments in code). Output: FIELD_A FIELD_NAME FIELD_int64_VALUE FIELD_object_VALUE 0 123123 FIELD_B 8.0 NaN 1 123123 FIELD_C NaN a 2 123123 FIELD_D 23423.0 NaN 3 123124 FIELD_B 7.0 NaN 4 123124 FIELD_C NaN c 5 123124 FIELD_D 6464.0 NaN 6 123144 FIELD_B 99.0 NaN 7 123144 FIELD_C NaN x 8 123144 FIELD_D 234.0 NaN | 6 | 5 |
74,889,280 | 2022-12-22 | https://stackoverflow.com/questions/74889280/combine-and-fill-a-pandas-dataframe-with-the-single-row-of-another | If I have two dataframes: df1: df1 = pd.DataFrame({'A':[10,20,15,30,45], 'B':[17,33,23,10,12]}) A B 0 10 17 1 20 33 2 15 23 3 30 10 4 45 12 df2: df2 = pd.DataFrame({'C':['cat'], 'D':['dog'], 'E':['emu'], 'F':['frog'], 'G':['goat'], 'H':['horse'], 'I':['iguana']}) C D E F G H I 0 cat dog emu frog goat horse iguana How do I combine the two dataframes and fill df1 whereby each row is a replicate of df2 ? Here is what I have so far. The code works as intended, but if I were to have hundreds of columns, then I would anticipate there would be a much easier way than my current method: Current Code: df1 = df1.assign(C = lambda x: df2.C[0], D = lambda x: df2.D[0], E = lambda x: df2.E[0], F = lambda x: df2.F[0], G = lambda x: df2.G[0], H = lambda x: df2.H[0], I = lambda x: df2.I[0]) Expected output: A B C D E F G H I 0 10 17 cat dog emu frog goat horse iguana 1 20 33 cat dog emu frog goat horse iguana 2 15 23 cat dog emu frog goat horse iguana 3 30 10 cat dog emu frog goat horse iguana 4 45 12 cat dog emu frog goat horse iguana | Use DataFrame.assign with Series by first row: df = df1.assign(**df2.iloc[0]) print (df) A B C D E F G H I 0 10 17 cat dog emu frog goat horse iguana 1 20 33 cat dog emu frog goat horse iguana 2 15 23 cat dog emu frog goat horse iguana 3 30 10 cat dog emu frog goat horse iguana 4 45 12 cat dog emu frog goat horse iguana | 4 | 4 |
74,887,536 | 2022-12-22 | https://stackoverflow.com/questions/74887536/how-to-generate-powers-of-10-with-list-comprehension-or-numpy-functions | I am generating this series of numbers using a for loop [1.e-03 1.e-04 1.e-05 1.e-06 1.e-07 1.e-08 1.e-09 1.e-10 1.e-11 1.e-12] This is the for loop: alphas = np.zeros(10) alphas[0] = 0.001 for i in range(1,10): alphas[i] = alphas[i-1] * 0.1 My heart of hearts tells me this is not "pythonic", but my brain can't come up a list comprehension to build this. I've tried numpy.linspace, arange, etc, but can't quite land where I need to. I wrote the for loop in 60 seconds, but am trying every time I write a for loop to think about how I could do it with a list comprehension. | Think about what you want the range on: it's powers of 10 [10**x for x in range(-3, -13, -1)] Or 10.0**np.arange(-3, -13, -1) # or 10.0**-np.arange(3, 13) The numpy example uses a float 10.0 because in numpy, Integers to negative integer powers are not allowed Try it online | 3 | 4 |
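One more option not mentioned in the answer (my addition): np.logspace builds the same sequence directly from the first and last exponents.

import numpy as np

alphas = np.logspace(-3, -12, num=10)
print(alphas)   # [1.e-03 1.e-04 ... 1.e-12]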
74,884,921 | 2022-12-22 | https://stackoverflow.com/questions/74884921/how-to-add-attributes-to-a-enum-strenum | I have an enum.StrEnum, for which I want to add attributes to the elements. For example: class Fruit(enum.StrEnum): APPLE = ("Apple", { "color": "red" }) BANANA = ("Banana", { "color": "yellow" }) >>> str(Fruit.APPLE) "Apple" >>> Fruit.APPLE.color "red" How can I accomplish this? (I'm running Python 3.11.0.) This question is not a duplicate of this one, which asks about the original enum.Enum. | The answer is much the same as in the other question (but too long to leave in a comment there): from enum import StrEnum class Fruit(StrEnum): # def __new__(cls, value, color): member = str.__new__(cls, value) member._value_ = value member.color = color return member # APPLE = "Apple", "red" BANANA = "Banana", "yellow" If you really want to use a dict for the attributes you can, but you run the risk of forgetting an attribute and then one or more of the members will be missing it. | 5 | 7 |
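For completeness, a hedged sketch of the dict-based variant the answer alludes to (my own code, following the same __new__ pattern): each member unpacks its attribute dict, at the cost of having to keep the dicts consistent yourself.

from enum import StrEnum   # Python 3.11+

class Fruit(StrEnum):
    def __new__(cls, value, attrs):
        member = str.__new__(cls, value)
        member._value_ = value
        for name, attr_value in attrs.items():   # e.g. {"color": "red"}
            setattr(member, name, attr_value)
        return member

    APPLE = "Apple", {"color": "red"}
    BANANA = "Banana", {"color": "yellow"}

print(str(Fruit.APPLE), Fruit.APPLE.color)   # Apple red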
74,879,617 | 2022-12-21 | https://stackoverflow.com/questions/74879617/vs-code-suggestions-do-not-display-documentation | My VS Code suggestions will not display any documentation next to the suggestions. What settings do I need to change? I have editor > suggest > show inline details active but it is not doing it. Here is mine: [screenshot] I want it to look like this: [screenshot] I am using Python. I tried searching through the settings. I tried looking it up on the internet. I tried looking at these GitHub issues: https://github.com/microsoft/vscode/issues/18582 https://github.com/microsoft/vscode/pull/25812 https://github.com/microsoft/vscode/issues/26282 | It seems that the space is too small, and that setting has been deprecated. You can use the following alternative: press "Ctrl + Space" when IntelliSense pops up: [screenshot] This acts as a toggle, so you only need to use the command once, and the documentation panel will keep appearing for future suggestions. | 5 | 5 |
74,881,801 | 2022-12-21 | https://stackoverflow.com/questions/74881801/how-to-move-a-patch-along-a-path | I am trying to animate a patch.Rectangle object using matplotlib. I want the said object to move along a path.Arc. A roundabout way to do this would be (approximately) : import numpy as np from matplotlib import pyplot as plt from matplotlib import animation import matplotlib.patches as mpat fig, ax = plt.subplots() ax.set(xlim=(0, 10), ylim=(0, 10)) # generate the patch patch = mpat.Rectangle((5, 5), 1, 4) patch.rotation_point = 'center' # generate the path to follow path_to_follow = mpat.Arc((5, 5), 2, 2) ax.add_patch(path_to_follow) def init(): patch.set(x=5, y=5) ax.add_patch(patch) return patch, def animate(i, ax): new_x = 5 + np.sin(np.radians(i)) - 0.5 # parametric form for the circle new_y = 5 + np.cos(np.radians(i)) - 2 patch.set(x=new_x, y=new_y, angle=90-i) return patch, anim = animation.FuncAnimation(fig, animate, init_func=init, fargs=[ax], frames=360, interval=10, blit=True) plt.show() The rectangle follows a circle, but a parametric one. Would it be possible to make it follow any path? In other words, I would like to know if there are other simpler methods to do this (make my patch follow my path, here a circle), and if that could be generalized to other path. Thanks in advance ! I searched into the matplotlib doc for a methods which gives the parametric form for a given path (but apparently there is not), or for a methods which directly move a patch along a path (obviously, there was not). | Here is one way to use matplotlib.path.Path to generate a path, whose vertices can be obtained using the method cleaned, to move a patch along it. I have tried to showcase how blue and red colored Rectangles can be moved along a (blue) linear path and a (red) circular path, respectively: import numpy as np from matplotlib import pyplot as plt from matplotlib import animation, path import matplotlib.patches as mpat fig, ax = plt.subplots() ax.set(xlim=(0, 10), ylim=(0, 10)) # generate a linear path path1 = np.column_stack((np.arange(500)/50, np.arange(500)/50)) # generate a circular path circle = path.Path.circle(center=(5, 5), radius=1) path2 = circle.cleaned().vertices[:-3] # create patches patch1 = mpat.Rectangle((0, 0), 1, 3) patch2 = mpat.Rectangle((0, 0), 1, 3, color='red', fill=None) # plot path vertices plt.scatter(x=path1[:, 0], y=path1[:, 1], s=2) plt.scatter(x=path2[:, 0], y=path2[:, 1], color='red', s=2) def init(): patch1.set(x=0, y=0) patch2.set(x=5, y=6) ax.add_patch(patch1) ax.add_patch(patch2) return [patch1, patch2] def animate(i, ax): j = i % 500 # path1 has shape (500, 2) k = (i % 16) # path2 has shape (16, 2) patch1.set(x=path1[j][0], y=path1[j][1], angle=-j) patch2.set(x=path2[k][0], y=path2[k][1], angle=-k) return [patch1, patch2] anim = animation.FuncAnimation(fig, animate, init_func=init, fargs=[ax], frames=360, interval=100, blit=True) plt.show() | 3 | 1 |
74,882,096 | 2022-12-21 | https://stackoverflow.com/questions/74882096/automatically-add-decorator-to-all-inherited-methods | I want in class B to automatically add the decorator _preCheck to all methods that have been inherited from class A. In the example b.double(5) is correctly called with the wrapper. I want to avoid to manually re-declare (override) the inherited methods in B but instead, automatically decorate them, so that on the call to b.add(1,2) also _preCheck wrapper is called. Side note: I need to have a reference to the Instance of B in the wrapper (in my example via self) I want to avoid editing the Base class A. If possible, I want to encapsulate the decoration mechanism and initialization of it in the derived class B class A(object): def __init__(self, name): self.name = name def add(self, a, b): return a + b class B(A): def __init__(self, name, foo): super().__init__(name) self.foo = foo def _preCheck(func): @wraps(func) def wrapper(self, *args, **kwargs) : print("preProcess", self.name) return func(self, *args, **kwargs) return wrapper @_preCheck def double(self, i): return i * 2 b = B('myInst', 'bar') print(b.double(5)) print(b.add(1,2)) Based on How can I decorate all inherited methods in a subclass I thought a possible solutions might be to ad the following snippet into B's init method: for attr_name in A.__dict__: attr = getattr(self, attr_name) if callable(attr): setattr(self, attr_name, self._preCheck(attr)) However, I get the following error. I suspect the 2nd argument comes from the 'self'. . TypeError: _preCheck() takes 1 positional argument but 2 were given There exist solutions to similar problems where they either initialize the subclasses from within the base class : Add decorator to a method from inherited class? Apply a python decorator to all inheriting classes | Decorators need to be added the class itself not the instance: from functools import wraps class A(object): def __init__(self, name): self.name = name def add(self, a, b): return a + b class B(A): def __init__(self, name, foo): super().__init__(name) self.foo = foo def _preCheck(func): @wraps(func) def wrapper(self, *args, **kwargs) : print("preProcess", self.name) return func(self, *args, **kwargs) return wrapper @_preCheck def double(self, i): return i * 2 for attr_name in A.__dict__: if attr_name.startswith('__'): # skip magic methods continue print(f"Decorating: {attr_name}") attr = getattr(A, attr_name) if callable(attr): setattr(A, attr_name, B._preCheck(attr)) b = B('myInst', 'bar') print(b.double(5)) print(b.add(1,2)) Out: Decorating: add preProcess myInst 10 preProcess myInst 3 | 3 | 3 |
74,879,897 | 2022-12-21 | https://stackoverflow.com/questions/74879897/numpy-isin-for-multi-dimmensions | I have a big array of integers and second array of arrays. I want to create a boolean mask for the first array based on data from the second array of arrays. Preferably I would use the numpy.isin but it clearly states in it's documentation: The values against which to test each value of element. This argument is flattened if it is an array or array_like. See notes for behavior with non-array-like parameters. Do you maybe know some performant way of doing this instead of list comprehension? So for example having those arrays: a = np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]) b = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) I would like to have result like: np.array([ [True, True, False, False, False, False, False, False, False, False], [False, False, True, True, False, False, False, False, False, False], [False, False, False, False, True, True, False, False, False, False], [False, False, False, False, False, False, True, True, False, False], [False, False, False, False, False, False, False, False, True, True] ]) | Try numpy.apply_along_axis to work with numpy.isin: np.apply_along_axis(lambda x: np.isin(a, x), axis=1, arr=b) returns array([[[ True, True, False, False, False, False, False, False, False, False]], [[False, False, True, True, False, False, False, False, False, False]], [[False, False, False, False, True, True, False, False, False, False]], [[False, False, False, False, False, False, True, True, False, False]], [[False, False, False, False, False, False, False, False, True, True]]]) I will update with an edit comparing the runtime with a list comp EDIT: Whelp, I tested the runtime, and wouldn't you know, listcomp is faster timeit.timeit("[np.isin(a,x) for x in b]",number=10000, globals=globals()) 0.37380070000654086 vs timeit.timeit("np.apply_along_axis(lambda x: np.isin(a, x), axis=1, arr=b) ",number=10000, globals=globals()) 0.6078917000122601 the other answer to this post by @mozway is much faster: timeit.timeit("(a == b[...,None]).any(-2)",number=100, globals=globals()) 0.007107900004484691 and should probably be accepted. | 3 | 4 |
74,878,253 | 2022-12-21 | https://stackoverflow.com/questions/74878253/how-to-filter-3d-array-with-a-2d-mask | I have a (m,n,3) array data and I want to filter its values with a (m,n) mask to receive a (x,3) output array. The code below works, but how can I replace the for loop with a more efficient alternative? import numpy as np data = np.array([ [[11, 12, 13], [14, 15, 16], [17, 18, 19]], [[21, 22, 13], [24, 25, 26], [27, 28, 29]], [[31, 32, 33], [34, 35, 36], [37, 38, 39]], ]) mask = np.array([ [False, False, True], [False, True, False], [True, True, False], ]) output = [] for i in range(len(mask)): for j in range(len(mask[i])): if mask[i][j] == True: output.append(data[i][j]) output = np.array(output) The expected output is np.array([[17, 18, 19], [24, 25, 26], [31, 32, 33], [34, 35, 36]]) | import numpy as np data = np.array([ [[11, 12, 13], [14, 15, 16], [17, 18, 19]], [[21, 22, 13], [24, 25, 26], [27, 28, 29]], [[31, 32, 33], [34, 35, 36], [37, 38, 39]], ]) mask = np.array([ [False, False, True], [False, True, False], [True, True, False], ]) output = data[mask] | 3 | 2 |
74,877,323 | 2022-12-21 | https://stackoverflow.com/questions/74877323/how-to-extract-two-values-from-dict-in-python | I'm using python3 and and i have data set. That contains the following data. I'm trying to get the desire value from this data list. I have tried many ways but unable to figure out how to do that. slots_data = [ { "id":551, "user_id":1, "time":"199322002", "expire":"199322002" }, { "id":552, "user_id":1, "time":"199322002", "expire":"199322002" }, { "id":525, "user_id":3, "time":"199322002", "expire":"199322002" }, { "id":524, "user_id":3, "time":"199322002", "expire":"199322002" }, { "id":553, "user_id":1, "time":"199322002", "expire":"199322002" }, { "id":550, "user_id":2, "time":"199322002", "expire":"199322002" } ] # Desired output # [ # {"user_id":1,"slots_ids":[551,552,553]} # {"user_id":2,"slots_ids":[550]} # {"user_id":3,"slots_ids":[524,525]} # ] I have tried in the following way and obviously this is not correct. I couldn't figure out the solution of this problem : final_list = [] for item in slots_data: obj = obj.dict() obj = { "user_id":item["user_id"], "slot_ids":item["id"] } final_list.append(obj) print(set(final_list)) | Lots of good answers here. If I was doing this, I would base my answer on setdefault and/or collections.defaultdict that can be used in a similar way. I think the defaultdict version is very readable but if you are not already importing collections you can do without it. Given your data: slots_data = [ { "id":551, "user_id":1, "time":"199322002", "expire":"199322002" }, { "id":552, "user_id":1, "time":"199322002", "expire":"199322002" }, #.... ] You can reshape it into your desired output via: ## ------------------- ## get the value for the key user_id if it exists ## if it does not, set the value for that key to a default ## use the value to append the current id to the sub-list ## ------------------- reshaped = {} for slot in slots_data: user_id = slot["user_id"] id = slot["id"] reshaped.setdefault(user_id, []).append(id) ## ------------------- ## ------------------- ## take a second pass to finish the shaping in a sorted manner ## ------------------- reshaped = [ { "user_id": user_id, "slots_ids": sorted(reshaped[user_id]) } for user_id in sorted(reshaped) ] ## ------------------- print(reshaped) That will give you: [ {'user_id': 1, 'slots_ids': [551, 552, 553]}, {'user_id': 2, 'slots_ids': [550]}, {'user_id': 3, 'slots_ids': [524, 525]} ] | 4 | 2 |
74,873,524 | 2022-12-21 | https://stackoverflow.com/questions/74873524/selenium-unable-to-locate-button-on-cookie-popup | I am trying to parse this website using selenium but I fail to find the buttons of the cookie popup which I need to confirm to proceed. I know that I first need to load the page and then wait for the cookie popup to appear, although that should be well handled by the sleep function. The html of the buttons inside the popup looks like this (retrieved via firefox F12). I am trying to simply click the "OK" button: <button role="button" data-testid="uc-customize-button" style="border: 1px solid rgb(0, 139, 2);" class="sc-gsDKAQ hWcdhQ">Einstellungen oder ablehnen</button> <div class="sc-bBHxTw foBPAO"></div> <button role="button" data-testid="uc-accept-all-button" style="border: 1px solid rgb(0, 139, 2);" class="sc-gsDKAQ fILFKg">OK</button> I have tried to simply find the buttons via the xpath of the "OK" button: from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By # adapting: https://stackoverflow.com/questions/64032271/handling-accept-cookies-popup-with-selenium-in-python BASE_URL = "https://www.immowelt.at/suche/wohnungen/mieten" driver = webdriver.Firefox() driver.get(BASE_URL) sleep(10) # cannot locate by XPATH driver.find_element( By.XPATH, "/div/div/div[2]/div/div[2]/div/div/div/button[2]" ).click() # weird thing is, I also cannot retrieve any of the buttons (although I know they exist) buttons = driver.find_elements(By.TAG_NAME, "button") Any ideas why I can't find the button? | By inspecting the HTML we can see that the button is inside a shadow root. So in order to be able to interact with the button we must first select the parent of the shadow root, which is the div element with id=usercentrics-root, and then we can select the button and click it: driver.execute_script('''return document.querySelector("#usercentrics-root").shadowRoot.querySelector("button[data-testid='uc-accept-all-button']")''').click() Alternatively, a more human readable code: shadow_parent = driver.find_element(By.CSS_SELECTOR, '#usercentrics-root') outer = driver.execute_script('return arguments[0].shadowRoot', shadow_parent) outer.find_element(By.CSS_SELECTOR, "button[data-testid='uc-accept-all-button']").click() | 4 | 4 |
74,818,677 | 2022-12-15 | https://stackoverflow.com/questions/74818677/problem-to-install-pyproject-toml-dependencies-with-pip | I have an old project created with poetry. The pyproject.toml create by poetry is the following: [tool.poetry] name = "Dota2Learning" version = "0.3.0" description = "Statistics and Machine Learning for your Dota2 Games." license = "MIT" readme = "README.md" homepage = "Coming soon..." repository = "https://github.com/drigols/dota2learning/" documentation = "Coming soon..." include = ["CHANGELOG.md"] authors = [ "drigols <[email protected]>", ] maintainers = [ "drigols <[email protected]>", ] keywords = [ "dota2", "statistics", "machine Learning", "deep learning", ] [tool.poetry.scripts] dota2learning = "dota2learning.cli.main:app" [tool.poetry.dependencies] python = "^3.10" requests = "^2.27.1" typer = {extras = ["all"], version = "^0.4.1"} install = "^1.3.5" SQLAlchemy = "^1.4.39" PyMySQL = "^1.0.2" cryptography = "^37.0.4" pydantic = "^1.9.1" rich = "^12.5.1" fastapi = "^0.79.0" uvicorn = "^0.18.2" [tool.poetry.dev-dependencies] black = {extras = ["jupyter"], version = "^22.3.0"} pre-commit = "^2.19.0" flake8 = "^4.0.1" reorder-python-imports = "^3.1.0" pyupgrade = "^2.34.0" coverage = "^6.4.1" [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.urls] "Bug Tracker" = "https://github.com/drigols/dota2learning/issues" If a run "pip install ." it's worked for me, however, now I want to follow the new approach without poetry to manage the project and dependencies. Then I created a new pyproject.toml (manually): [project] name = "Dota2Learning" version = "2.0.0" description = "Statistics and Machine Learning for your Dota2 Games." license = "MIT" readme = "README.md" homepage = "" requires-python = ">=3.10" repository = "https://github.com/drigols/dota2learning/" documentation = "" include = ["CHANGELOG.md"] authors = [ "drigols <[email protected]>", ] maintainers = [ "drigols <[email protected]>", ] keywords = [ "dota2", "statistics", "machine Learning", "deep learning", ] dependencies = [ "requests>=2.27.1", "typer>=0.4.1", "SQLAlchemy>=1.4.39", "PyMySQL>=1.0.2", "cryptography>=37.0.4", "pydantic>=1.9.1", "rich>=12.5.1", "fastapi>=0.79.0", "uvicorn>=0.18.2", ] [project.optional-dependencies] # Dev dependencies. dev = [ "black>=22.3.0", "pre-commit>=2.19.0", "flake8>=4.0.1", "reorder-python-imports>=3.1.0", "pyupgrade>=2.34.0", ] # Testing dependencies. test = [ "coverage>=6.4.1", ] # Docs dependencies. doc = [] [project.scripts] dota2learning = "dota2learning.cli.main:app" [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' The problem now is that the pip command "pip install ." don't work: Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. β exit code: 1 β°β> [89 lines of output] configuration error: `project.license` must be valid exactly by one definition (2 matches found): - keys: 'file': {type: string} required: ['file'] - keys: 'text': {type: string} required: ['text'] DESCRIPTION: `Project license <https://www.python.org/dev/peps/pep-0621/#license>`_. 
GIVEN VALUE: "MIT" OFFENDING RULE: 'oneOf' DEFINITION: { "oneOf": [ { "properties": { "file": { "type": "string", "$$description": [ "Relative path to the file (UTF-8) which contains the license for the", "project." ] } }, "required": [ "file" ] }, { "properties": { "text": { "type": "string", "$$description": [ "The license of the project whose meaning is that of the", "`License field from the core metadata", "<https://packaging.python.org/specifications/core-metadata/#license>`_." ] } }, "required": [ "text" ] } ] } Traceback (most recent call last): File "/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module> main() File "/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/home/drigols/Workspace/dota2learning/environment/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires self.run_setup() File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 484, in run_setup super(_BuildMetaLegacyBackend, File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 335, in run_setup exec(code, locals()) File "<string>", line 1, in <module> File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup return distutils.core.setup(**attrs) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 159, in setup dist.parse_config_files() File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/dist.py", line 867, in parse_config_files pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py", line 62, in apply_configuration config = read_configuration(filepath, True, ignore_option_errors, dist) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py", line 126, in read_configuration validate(subset, filepath) File "/tmp/pip-build-env-up7_m__d/overlay/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py", line 51, in validate raise ValueError(f"{error}\n{summary}") from None ValueError: invalid pyproject.toml config: `project.license`. configuration error: `project.license` must be valid exactly by one definition (2 matches found): - keys: 'file': {type: string} required: ['file'] - keys: 'text': {type: string} required: ['text'] [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ Getting requirements to build wheel did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
It's like pip doesn't know how to install the dependencies from pyproject.toml unlike poetry's approach and I don't understand why. THE PROBLEM WAS SOLVED! A edited the pyproject.toml to: [build-system] # Minimum requirements for the build system to execute. requires = ["setuptools", "wheel"] # PEP 508 specifications. build-backend = "setuptools.build_meta" # Ignore flat-layout. [tool.setuptools] py-modules = [] [project] name = "Dota2Learning" version = "2.0.0" description = "Statistics and Machine Learning for your Dota2 Games." license = {file = "LICENSE.md"} readme = "README.md" requires-python = ">=3.10.0" authors = [ { name = "Rodrigo Leite", email = "[email protected]" }, ] maintainers = [ { name = "Rodrigo Leite", email = "[email protected]" }, ] keywords = [ "dota2", "statistics", "machine Learning", "deep learning", ] dependencies = [ "requests>=2.27.1", "typer>=0.4.1", "SQLAlchemy>=1.4.39", "PyMySQL>=1.0.2", "cryptography>=37.0.4", "pydantic>=1.9.1", "rich>=12.5.1", "fastapi>=0.79.0", "uvicorn>=0.18.2", ] [project.optional-dependencies] # Dev dependencies. dev = [ "black>=22.3.0", "pre-commit>=2.19.0", "flake8>=4.0.1", "reorder-python-imports>=3.1.0", "pyupgrade>=2.34.0", ] # Test dependencies. test = [ "coverage>=6.4.1", ] # Doc dependencies. doc = [] [project.scripts] # dota2learning = "dota2learning.cli.main:app" [tool.black] line-length = 79 include = '\.pyi?$' # All Python files exclude = ''' /( \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' | Multiple issues: 1. As the error message clearly states, there is an issue with the license key under the [project] section. Its value should be a table. As of December 2024, it seems like this should be fine. See specification. 2. The new pyproject.toml file that you are showing us is missing the [build-system] section. If this section is absent, then the build front-ends (such as pip), will assume that the build back-end of this project is setuptools (see PEP 517 and PEP 518), and not poetry/poetry-core which is probably what you wanted. 3. As of today, Poetry is not compatible with this new [project] section for pyproject.toml files. Poetry has not implemented the PEP 621 changes yet, so that would not work anyway, unless the project changes its build back-end from Poetry to another build back-end. | 7 | 4 |
74,798,626 | 2022-12-14 | https://stackoverflow.com/questions/74798626/why-is-loginf-inf-j-equal-to-inf-0-785398-j-in-c-python-numpy | I've been finding a strange behaviour of log functions in C++ and numpy about the behaviour of log function handling complex infinite numbers. Specifically, log(inf + inf * 1j) equals (inf + 0.785398j) when I expect it to be (inf + nan * 1j). When taking the log of a complex number, the real part is the log of the absolute value of the input and the imaginary part is the phase of the input. Returning 0.785398 as the imaginary part of log(inf + inf * 1j) means it assumes the infs in the real and the imaginary part have the same length. This assumption does not seem to be consistent with other calculation, for example, inf - inf == nan, inf / inf == nan which assumes 2 infs do not necessarily have the same values. Why is the assumption for log(inf + inf * 1j) different? Reproducing C++ code: #include <complex> #include <limits> #include <iostream> int main() { double inf = std::numeric_limits<double>::infinity(); std::complex<double> b(inf, inf); std::complex<double> c = std::log(b); std::cout << c << "\n"; } Reproducing Python code (numpy): import numpy as np a = complex(float('inf'), float('inf')) print(np.log(a)) EDIT: Thank you for everyone who's involved in the discussion about the historical reason and the mathematical reason. All of you turn this naive question into a really interesting discussion. The provided answers are all of high quality and I wish I can accept more than 1 answers. However, I've decided to accept @simon's answer as it explains in more detail the mathematical reason and provided a link to the document explaining the logic (although I can't fully understand it). | See Edit 2 at the bottom of the answer for a mathematical motivation (or rather, at least, the reference to one). The value of 0.785398 (actually pi/4) is consistent with at least some other functions: as you said, the imaginary part of the logarithm of a complex number is identical with the phase angle of the number. This can be reformulated to a question of its own: what is the phase angle of inf + j * inf? We can calculate the phase angle of a complex number z by atan2(Im(z), Re(z)). With the given number, this boils down to calculating atan2(inf, inf), which is also 0.785398 (or pi/4), both for Numpy and C/C++. So now a similar question could be asked: why is atan2(inf, inf) == 0.785398? I do not have an answer to the latter (except for "the C/C++ specifications say so", as others already answered), I only have a guess: as atan2(y, x) == atan(y / x) for x > 0, probably someone made the decision in this context to not interpret inf / inf as "undefined" but instead as "a very large number divided by the same very large number". The result of this ratio would be 1, and atan(1) == pi/4 by the mathematical definition of atan. Probably this is not a satisfying answer, but at least I could hopefully show that the log definition in the given edge case is not completely inconsistent with similar edge cases of related function definitions. Edit: As I said, consistent with some other functions: it is also consistent with np.angle(complex(np.inf, np.inf)) == 0.785398, for example. Edit 2: Looking at the source code of an actual atan2 implementation brought up the following code comment: note that the non obvious cases are y and x both infinite or both zero. for more information, see Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit, by W. 
Kahan I dug up the referenced document, you can find a copy here. In Chapter 8 of this reference, called "Complex zeros and infinities", William Kahan (who is both mathematician and computer scientist and, according to Wikipedia, the "Father of Floating Point") covers the zero and infinity edge cases of complex numbers and arrives at pi/4 for feeding inf + j * inf into the arg function (arg being the function that calculates the phase angle of a complex number, just like np.angle above). You will find this result on page 17 in the linked PDF. I am not mathematician enough for being able to summarize Kahan's rationale (which is to say: I don't really understand it), but maybe someone else can. | 55 | 39 |
74,800,989 | 2022-12-14 | https://stackoverflow.com/questions/74800989/how-to-add-a-duration-to-datetime-in-python-polars | Update: This was fixed by pull/5837 shape: (3, 3) ┌─────────────────────┬─────────┬──────────────┐ │ dt ┆ seconds ┆ duration0 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ duration[μs] │ ╞═════════════════════╪═════════╪══════════════╡ │ 2022-12-14 00:00:00 ┆ 1.0 ┆ 1µs │ │ 2022-12-14 00:00:00 ┆ 2.2 ┆ 2µs │ │ 2022-12-14 00:00:00 ┆ 2.4 ┆ 2µs │ └─────────────────────┴─────────┴──────────────┘ I want to add a duration in seconds to a date/time. My data looks like import polars as pl df = pl.DataFrame( { "dt": [ "2022-12-14T00:00:00", "2022-12-14T00:00:00", "2022-12-14T00:00:00", ], "seconds": [ 1.0, 2.2, 2.4, ], } ) df = df.with_columns(pl.col("dt").cast(pl.Datetime)) Now my naive attempt was to convert the float column to duration type to be able to add it to the datetime column (as I would do in pandas). df = df.with_columns(pl.col("seconds").cast(pl.Duration).alias("duration0")) print(df.head()) ┌─────────────────────┬─────────┬──────────────┐ │ dt ┆ seconds ┆ duration0 │ │ --- ┆ --- ┆ --- │ │ datetime[μs] ┆ f64 ┆ duration[μs] │ ╞═════════════════════╪═════════╪══════════════╡ │ 2022-12-14 00:00:00 ┆ 1.0 ┆ 0µs │ │ 2022-12-14 00:00:00 ┆ 2.2 ┆ 0µs │ │ 2022-12-14 00:00:00 ┆ 2.4 ┆ 0µs │ └─────────────────────┴─────────┴──────────────┘ ...gives the correct data type, however the values are all zero. The documentation is kind of sparse on the topic, any better options? | Update: The values being zero is a repr formatting issue that has been fixed with this commit. pl.duration() can be used in this way: df.with_columns( pl.col("dt").str.to_datetime() + pl.duration(nanoseconds=pl.col("seconds") * 1e9) ) shape: (3, 2) ┌──────────────────────────┬─────────┐ │ dt ┆ seconds │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞══════════════════════════╪═════════╡ │ 2022-12-14 00:00:01 ┆ 1.0 │ │ 2022-12-14 00:00:02.200 ┆ 2.2 │ │ 2022-12-14 00:00:02.400 ┆ 2.4 │ └──────────────────────────┴─────────┘ | 4 | 4
74,814,175 | 2022-12-15 | https://stackoverflow.com/questions/74814175/replace-value-by-null-in-polars | Given a Polars DataFrame, is there a way to replace a particular value by "null"? For example, if there's a sentinel value like "_UNKNOWN" and I want to make it truly missing in the dataframe instead. | Update: Expr.replace() has also since been added to Polars. df.with_columns(pl.col(pl.String).replace("_UNKNOWN", None)) shape: (4, 3) ┌──────┬──────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞══════╪══════╪═════╡ │ a ┆ null ┆ 1 │ │ b ┆ d ┆ 2 │ │ null ┆ e ┆ 3 │ │ c ┆ f ┆ 4 │ └──────┴──────┴─────┘ You can use .when().then().otherwise() pl.col(pl.String) is used to select all "string columns". df = pl.DataFrame({ "A": ["a", "b", "_UNKNOWN", "c"], "B": ["_UNKNOWN", "d", "e", "f"], "C": [1, 2, 3, 4] }) df.with_columns( pl.when(pl.col(pl.String) == "_UNKNOWN") .then(None) .otherwise(pl.col(pl.String)) # keep original value .name.keep() ) | 3 | 8
74,854,903 | 2022-12-19 | https://stackoverflow.com/questions/74854903/not-required-in-pydantics-base-models | Im trying to accept data from an API and then validate the response structure with a Pydantic base model. However, I have the case where sometimes some fields will not come included in the response, while sometimes they do. The problem is, when I try to validate the structure, Pydantic starts complaining about those fields being "missing" even though they can be missing sometimes. I really don't understand how to define a field as "missible". The docs mention that a field that is just defined as a name and a type is considered this way, but I haven't had any luck This is a simple example of what I'm trying to accomplish # Response: {a: 1, b: "abc", c: ["a", "b", "c"]} response: dict = json.loads(request_response) # Pydantic Base Model from pydantic import BaseModel class Model(BaseModel): a: int b: str c: List[str] d: float # Validating Model(**response) # Return: ValidationError - Missing "d" field How do I make it so that "d" doesnt cause the validation to throw an error? I have tried to switch "d" to d: Optional[float] and d: Optional[float] = 0.0, but nothing works. Thanks! | Pydantic v2 Either a model has a field or it does not. In a sense, a field is always required to have a value on a fully initialized model instance. It is just that a field may have a default value that will be assigned to it, if no value was explicitly provided during initialization. (see Basic Model Usage in the docs) The question for you is ultimately: What value should be assigned to field d, if it is not set explicitly? Should it be None or should it be some default float value (like e.g. 0.) or something else? Whatever you choose, you must specify that default value in the model definition and remember to annotate the field with the correct type. If you choose a default float like 0. for instance, your type remains the same and you just define the field as d: float = 0.. If you want the default to be of a different type like None, you will need to change the definition to d: float | None = None. For the sake of completeness, you may also define a default factory instead of a static value to have the actual value be calculated during initialization. Here is a short demo: from pydantic import BaseModel, Field, ValidationError class Model(BaseModel): a: int b: str c: list[str] d: float | None = None # equivalent: `d: typing.Optional[float] = None` e: float = 0. f: float = Field(default_factory=lambda: 420.69) if __name__ == '__main__': instance = Model.model_validate({ "a": 1, "b": "abc", "c": ["a", "b", "c"], }) print(instance.model_dump_json(indent=4)) try: Model.model_validate({ "a": 1, "b": "abc", "c": ["a", "b", "c"], "d": None, # fine "e": None, # error "f": None, # error }) except ValidationError as e: print(e.json(indent=4)) Output: { "a": 1, "b": "abc", "c": [ "a", "b", "c" ], "d": null, "e": 0.0, "f": 420.69 } [ { "type": "float_type", "loc": [ "e" ], "msg": "Input should be a valid number", "input": null, "url": "https://errors.pydantic.dev/2.7/v/float_type" }, { "type": "float_type", "loc": [ "f" ], "msg": "Input should be a valid number", "input": null, "url": "https://errors.pydantic.dev/2.7/v/float_type" } ] [Old answer] Pydantic v1 As @python_user said, both your suggestions work. Admittedly, the behavior of typing.Optional for Pydantic fields is poorly documented. Perhaps because it is assumed to be obvious. 
I personally don't find it obvious because Optional[T] is just equivalent to Union[T, None] (or T | None in the new notation). Annotating a field with any other union of types, while omitting a default value will result in the field being required. But if you annotate with a union that includes None, the field automatically receives the None default value. Kind of inconsistent, but that is the way it is. Update (2024-04-20): As @Michael mentioned in the comments, with the release of Pydantic v2, the maintainers addressed this exact inconsistency (and arguably "fixed" it). The change is explained in the documentation section on Required Fields. Quote: (emphasis mine) In Pydantic V1, fields annotated with Optional or Any would be given an implicit default of None even if no default was explicitly specified. This behavior has changed in Pydantic V2, and there are no longer any type annotations that will result in a field having an implicit default value. See my Pydantic v2 answer above for an example. However, the question is ultimately what you want your model fields to be. What value should be assigned to field d, if it is not set explicitly? Should it be None or should it be some default float value? (like e.g. 0.) If None is fine, then you can use Optional[float] or float | None, and you don't need to specify the default value. (Specifying Optional[float] = None is equivalent.) If you want any other default value, you'll need to specify it accordingly, e.g. d: float = 0.0; but in that case None would be an invalid value for that field. from pydantic import BaseModel, ValidationError class Model(BaseModel): a: int b: str c: list[str] d: float | None # or typing.Optional[float] e: float = 0. if __name__ == '__main__': print(Model.parse_obj({ "a": 1, "b": "abc", "c": ["a", "b", "c"], }), "\n") try: Model.parse_obj({ "a": 1, "b": "abc", "c": ["a", "b", "c"], "e": None, }) except ValidationError as e: print(e) Output: a=1 b='abc' c=['a', 'b', 'c'] d=None e=0.0 1 validation error for Model e none is not an allowed value (type=type_error.none.not_allowed) | 6 | 11 |
74,851,128 | 2022-12-19 | https://stackoverflow.com/questions/74851128/language-detection-for-short-user-generated-string | I need to detect the language of text sent in chat, and I am faced with 2 problems: the length of the message, and the errors that may be in it and the noise (emoji etc...) For the noise, I clean the message and that works fine, but the length of the message is a problem. For example, if a user writes "hi", Fasttext detects the language as Dutch text, but Google Translate detects it as English. And most likely it is a message in English. I am trying to train my own Fasttext model, but how can I adjust the model to have better results with short strings? Do I need to train the model with the dictionary of a lot of languages to get a better result? I use Fasttext because it's the most accurate language detector. Here is an example of the problem with Fasttext: # wget https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin import fasttext text = "Hi" pretrained_lang_model = "lid.176.bin" model = fasttext.load_model(pretrained_lang_model) predictions = model.predict(text, k=2) print(predictions) # (('__label__de', '__label__en'), array([0.51606238, 0.31865335])) | I have found a way to have better results. If you sum all probabilities of all languages on different detectors like fastText and lingua, and add a dictionary-based detection for short texts, you can have very good results (for my task, I also made a fastText model trained on my data). | 3 | 1
74,822,543 | 2022-12-16 | https://stackoverflow.com/questions/74822543/colab-libtorch-cuda-cu-so-cannot-open-shared-object-file-no-such-file-or-dire | I'm trying to use the python package aitextgen in google Colab so I can fine-tune GPT. First, when I installed the last version of this package I had this error when importing it. Unable to import name '_TPU_AVAILABLE' from 'pytorch_lightning.utilities' Though with the help of the solutions given in this question I could pass this error by downgrading my packages like this: !pip3 install -q aitextgen==0.5.2 !pip3 install -q torchtext==0.10.0 !pip3 install -q torchmetrics==0.6.0 !pip3 install -q pytorch-lightning==1.4.0rc0 But now I'm facing this error when importing the aitextgen package and colab will crash! /usr/local/lib/python3.8/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory warn(f"Failed to load image Python extension: {e}") Keep in mind that the error is in importing the package and there is not a bug in my code. To be more clear I have this error when I just import aitextgen like this: import aitextgen How can I deal with this error? | It seems that it is due to your CUDA version (it can be the cuDNN version too) not matching the supported version by tf, torch, or jax. As of Aug 2023, If your CUDA or cuDNN versions are +12, try downgrading them. You can find your CUDA version with nvcc --version and your cuDNN version via apt list --installed | grep cudnn. And you can downgrade your cuDNN with this (there could be other methods too): sudo apt-get install libcudnn8=8.8.1.3-1+cuda11.8 sudo apt-get install libcudnn8-dev=8.8.1.3-1+cuda11.8 | 7 | 1 |
74,826,436 | 2022-12-16 | https://stackoverflow.com/questions/74826436/importerror-cannot-import-name-ugettext-from-django-utils-translation | I installed djangorestframework as shown below: pip install djangorestframework -jwt Then, I used rest_framework_jwt.views as shown below: from rest_framework_jwt.views import ( obtain_jwt_token, refresh_jwt_token, verify_jwt_token ) ... path('auth-jwt/', obtain_jwt_token), path('auth-jwt-refresh/',refresh_jwt_token), path('auth-jwt-verify/', verify_jwt_token), ... But, I got the error below: ImportError: cannot import name 'ugettext' from 'django.utils.translation' So, how can I solve the error? | Upgrading djangorestframework-jwt will solve the error: pip install djangorestframework-jwt --upgrade | 4 | -1 |
74,829,469 | 2022-12-16 | https://stackoverflow.com/questions/74829469/polars-native-way-to-convert-unix-timestamp-to-date | I'm working with some data frames that contain Unix epochs in ms, and would like to display the entire timestamp series as a date. Unfortunately, the docs did not help me find a polars native way to do this, and I'm reaching out here. Solutions on how to do this in Python and also in Rust would brighten my mind and day. With pandas, for example, such things were possible: pd.to_datetime(pd_df.timestamp, unit="ms") # or to convert the whole col pd_df.timestamp = pd.to_datetime(pd_df.timestamp, unit="ms") I could loop over the whole thing and do something like I'm doing here for a single entry in each row. datetime.utcfromtimestamp(pl_df["timestamp"][0] / 1000).strftime("%Y-%m-%d") If I were to do this in Rust, I would then use something like chrono to convert the ts to a date. But I don't think looping over each row is a good solution. For now, the best way I have found is to convert pd_df = pl_df.to_pandas() and do it in pandas. | Adapting the answer of @jqurious. Polars has a dedicated from_epoch function for this: (pl.DataFrame({"timestamp": [1397392146866, 1671225446800]}) .with_columns( pl.from_epoch("timestamp", time_unit="ms") ) ) shape: (2, 1) ┌─────────────────────────┐ │ timestamp │ │ --- │ │ datetime[ms] │ ╞═════════════════════════╡ │ 2014-04-13 12:29:06.866 │ ├─────────────────────────┤ │ 2022-12-16 21:17:26.800 │ └─────────────────────────┘ | 5 | 15
74,797,663 | 2022-12-14 | https://stackoverflow.com/questions/74797663/convert-timedelta-to-milliseconds-python | I have the following time: time = datetime.timedelta(days=1, hours=4, minutes=5, seconds=33, milliseconds=623) Is it possible, to convert the time in milliseconds? Like this: 101133623.0 | I found a possibility to solve the problem import datetime time = datetime.timedelta(days=1, hours=4, minutes=5, seconds=33, milliseconds=623) result = time.total_seconds()*1000 print(result) | 12 | 11 |
74,836,151 | 2022-12-17 | https://stackoverflow.com/questions/74836151/nothing-provides-cuda-needed-by-tensorflow-2-10-0-cuda112py310he87a039-0 | I'm using mambaforge on WSL2 Ubuntu 22.04 with systemd enabled. I'm trying to install TensorFlow 2.10 with CUDA enabled, by using the command: mamba install tensorflow And the command nvidia-smi -q from WSL2 gives: ==============NVSMI LOG============== Timestamp : Sat Dec 17 23:22:43 2022 Driver Version : 527.56 CUDA Version : 12.0 Attached GPUs : 1 GPU 00000000:01:00.0 Product Name : NVIDIA GeForce RTX 3070 Laptop GPU Product Brand : GeForce Product Architecture : Ampere Display Mode : Disabled Display Active : Disabled Persistence Mode : Enabled MIG Mode Current : N/A Pending : N/A Accounting Mode : Disabled Accounting Mode Buffer Size : 4000 Driver Model Current : WDDM Pending : WDDM Serial Number : N/A GPU UUID : GPU-f03a575d-7930-47f3-4965-290b89514ae7 Minor Number : N/A VBIOS Version : 94.04.3f.00.d7 MultiGPU Board : No Board ID : 0x100 Board Part Number : N/A GPU Part Number : 249D-750-A1 Module ID : 1 Inforom Version Image Version : G001.0000.03.03 OEM Object : 2.0 ECC Object : N/A Power Management Object : N/A GPU Operation Mode Current : N/A Pending : N/A GSP Firmware Version : N/A GPU Virtualization Mode Virtualization Mode : None Host VGPU Mode : N/A IBMNPU Relaxed Ordering Mode : N/A PCI Bus : 0x01 Device : 0x00 Domain : 0x0000 Device Id : 0x249D10DE Bus Id : 00000000:01:00.0 Sub System Id : 0x118C1043 GPU Link Info PCIe Generation Max : 3 Current : 3 Device Current : 3 Device Max : 4 Host Max : 3 Link Width Max : 16x Current : 8x Bridge Chip Type : N/A Firmware : N/A Replays Since Reset : 0 Replay Number Rollovers : 0 Tx Throughput : 0 KB/s Rx Throughput : 0 KB/s Atomic Caps Inbound : N/A Atomic Caps Outbound : N/A Fan Speed : N/A Performance State : P8 Clocks Throttle Reasons Idle : Active Applications Clocks Setting : Not Active SW Power Cap : Not Active HW Slowdown : Not Active HW Thermal Slowdown : Not Active HW Power Brake Slowdown : Not Active Sync Boost : Not Active SW Thermal Slowdown : Not Active Display Clock Setting : Not Active FB Memory Usage Total : 8192 MiB Reserved : 159 MiB Used : 12 MiB Free : 8020 MiB BAR1 Memory Usage Total : 8192 MiB Used : 1 MiB Free : 8191 MiB Compute Mode : Default Utilization Gpu : 0 % Memory : 0 % Encoder : 0 % Decoder : 0 % Encoder Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 FBC Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 Ecc Mode Current : N/A Pending : N/A ECC Errors Volatile SRAM Correctable : N/A SRAM Uncorrectable : N/A DRAM Correctable : N/A DRAM Uncorrectable : N/A Aggregate SRAM Correctable : N/A SRAM Uncorrectable : N/A DRAM Correctable : N/A DRAM Uncorrectable : N/A Retired Pages Single Bit ECC : N/A Double Bit ECC : N/A Pending Page Blacklist : N/A Remapped Rows : N/A Temperature GPU Current Temp : 46 C GPU Shutdown Temp : 101 C GPU Slowdown Temp : 98 C GPU Max Operating Temp : 87 C GPU Target Temperature : N/A Memory Current Temp : N/A Memory Max Operating Temp : N/A Power Readings Power Management : Supported Power Draw : 12.08 W Power Limit : 4294967.50 W Default Power Limit : 80.00 W Enforced Power Limit : 100.00 W Min Power Limit : 1.00 W Max Power Limit : 100.00 W Clocks Graphics : 210 MHz SM : 210 MHz Memory : 405 MHz Video : 555 MHz Applications Clocks Graphics : N/A Memory : N/A Default Applications Clocks Graphics : N/A Memory : N/A Deferred Clocks Memory : N/A Max Clocks Graphics : 2100 MHz SM : 2100 MHz Memory : 6001 MHz Video : 
1950 MHz Max Customer Boost Clocks Graphics : N/A Clock Policy Auto Boost : N/A Auto Boost Default : N/A Voltage Graphics : 637.500 mV Fabric State : N/A Status : N/A Processes GPU instance ID : N/A Compute instance ID : N/A Process ID : 24 Type : G Name : /Xwayland Used GPU Memory : Not available in WDDM driver model And my other environment works as expected: ⬢ [Systemd] ❯ mamba activate tf ~ via π
tf via π 774MiB/19GiB | 0B/5GiB ⬢ [Systemd] ❯ python Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:45:29) [GCC 10.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf 2022-12-17 23:25:13.867166: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Then, it tries to install package version cuda112py39h9333c2f_1, which uses Python 3.9, but I want Python 3.10. Whenever I try to install the version for 3.10, it shows the error: Could not solve for environment specs Encountered problems while solving: - nothing provides __cuda needed by tensorflow-2.10.0-cuda112py310he87a039_0 The environment can't be solved, aborting the operation Why is this error occurring and how can I solve it? | I ran into this today and found a solution that works (after also seeing your GitHub post). Long story short, you need to use CONDA_OVERRIDE_CUDA to make this work as described in this conda-forge blog post. For example, with CUDA 11.8 and mamba, use: CONDA_OVERRIDE_CUDA="11.8" mamba install tensorflow -c conda-forge For CUDA 11.8 and conda, it would be: CONDA_OVERRIDE_CUDA="11.8" conda install tensorflow -c conda-forge Depending on your setup, you may also want to install cudatoolkit as well, e.g., CONDA_OVERRIDE_CUDA="11.8" mamba install tensorflow cudatoolkit -c conda-forge Edit: fixed the command as per the helpful comment! | 5 | 13
74,818,160 | 2022-12-15 | https://stackoverflow.com/questions/74818160/huge-margin-when-using-matplotlib-supxlabel | I am trying to add a common label in a matplotlib's subplots, but I am having some troubles. I am using python 3.10 and matplotlib 3.5.1 There is a minimal working example illustrating the problem: import matplotlib.pyplot as plt fig, axs = plt.subplots(3, 2, figsize=(8, 12), sharex=True, sharey=True) fig.supxlabel('Example of supxlabel') fig.supylabel('Example of supylabel') fig.subplots_adjust(wspace=0, hspace=0) plt.savefig('test.pdf', bbox_inches='tight', pad_inches=0) This code generates the following figure: Note the huge ugly margins above 'Example of supxlabel' and right to 'Example of supylabel'. I tried to use the option constrained_layout=True, along with fig.set_constrained_layout_pads, but it didn't solve my problem. I know that the problem can be solved using the option x, y, va and ha of supxlabel and supylabel, but I have many figures to generate and cannot realistically find and set the values of these options manually. | Edit: simplest is plt.subplots(layout='constrained'), works very nicely for all sorts of manipulations; below is if full control is desired, which 'constrained' (from what I can tell) won't permit. My current workaround is y = np.sqrt(height) / 100 * 2.2, which seems to work well at least for height from 5 to 25. If using subplots_adjust(bottom=), a constant adjustment factor seems to work, like y -= .04. I've opened an Issue. width = 10 height = 20 y = np.sqrt(height) / 100 * 2.2 fig, _ = plt.subplots(23, 1, figsize=(width, height)) fig.supxlabel("supxlabel", y=y) | 5 | 5 |
74,857,446 | 2022-12-20 | https://stackoverflow.com/questions/74857446/how-to-specify-python-version-range-in-environment-yml-file | Does it make sense to specify range of allowed Python versions in environment.yml file? I got this idea while reading the Google's Biq Query documentation Supported Python Versions Python >= 3.7, < 3.11 If this makes sense then what is the right syntax to specify the range in the environment.yml file? | Recommendation: Prefer exact version, not range While there is nothing logically incorrect with specifying a version range for Python, it has the downside of defining a large solution space, which can lead to slow solving. For practical environments, I would recommend specifying the version for python through the minor version, e.g., python=3.9. Note that this behavior mostly pertains to central packages that define variants of other packages, like python, r-base, or cudatoolkit. For most other packages, the impact is not as drastic. Benchmarking Here is a simple environment for basic data analysis, with and without the Python version specified. so-py39.yaml name: so-py39 channels: - conda-forge - nodefaults dependencies: - python ==3.9 - ipykernel - numba - pandas - scikit-learn - scipy so-py3x.yaml name: so-py3x channels: - conda-forge - nodefaults dependencies: - python >=3.7,<4.0 - ipykernel - numba - pandas - scikit-learn - scipy Conda First, we can use the regular conda command. Command command time conda env create -dqn foo -f [file] Results Timing creating environment: so-py39.yaml 20.51 real 19.57 user 0.94 sys 20.73 real 19.84 user 0.97 sys 19.43 real 18.66 user 0.95 sys 19.22 real 18.36 user 0.92 sys 19.34 real 18.48 user 0.94 sys 19.08 real 18.16 user 0.94 sys Timing creating environment: so-py3x.yaml 30.53 real 29.56 user 1.00 sys 29.21 real 28.21 user 1.08 sys 31.13 real 29.77 user 1.07 sys 29.93 real 28.46 user 0.99 sys 30.53 real 29.43 user 0.98 sys 28.60 real 27.68 user 1.03 sys That is, solving for the environment with a range takes ~10s (~50%) longer. Mamba We can also test solving the environment with Mamba. Command command time mamba env create -dqn foo -f [file] Results Timing creating environment: so-py39.yaml 3.30 real 2.79 user 0.49 sys 3.36 real 2.84 user 0.51 sys 3.25 real 2.74 user 0.49 sys 3.34 real 2.82 user 0.51 sys 3.29 real 2.78 user 0.51 sys 3.24 real 2.74 user 0.48 sys Timing creating environment: so-py3x.yaml 3.27 real 2.79 user 0.47 sys 3.26 real 2.78 user 0.46 sys 3.33 real 2.83 user 0.48 sys 3.28 real 2.79 user 0.47 sys 3.31 real 2.81 user 0.49 sys 3.29 real 2.81 user 0.47 sys This indicates that when using Mamba the difference in solve time is neglible. | 7 | 7 |