Dataset columns:
question_id (int64): 59.5M to 79.4M
creation_date (string): length 8 to 10
link (string): length 60 to 163
question (string): length 53 to 28.9k
accepted_answer (string): length 26 to 29.3k
question_vote (int64): 1 to 410
answer_vote (int64): -9 to 482
75,093,819
2023-1-12
https://stackoverflow.com/questions/75093819/common-lisp-equivalent-of-pythons-itertools-starmap
Python's itertools module has what is called starmap. Given a collection of collections and a function, it applies the function to each collection strictly inside the collection, using the elements of said internal collection as arguments to the function. For example, from itertools import starmap NestedList = [(1, 2), (3, 4), (5, 6), (0, 0), (1, 1), (2, 2)] list(starmap(lambda x, y:x + y, NestedList)) returns the list containing 3, 7, 11, 0, 2, and 4. I refuse to believe that Python was the first to come up with this concept, but I'm drawing a blank when I try to think of what it was called in older languages. Does any analogous functionality exist in Common Lisp? I feel certain that it does, but I cannot name it.
Use a combination of mapcar and apply: (defun starmap (f list) (mapcar (lambda (x) (apply f x)) list)) Or loop with the keyword collect: (defun starmap (f list) (loop for x in list collect (apply f x))) Examples: > (starmap (lambda (x y) (+ x y)) '((1 2) (3 4) (5 6) (0 0) (1 1) (2 2))) (3 7 11 0 2 4) > (starmap #'expt '((1 2) (3 4) (5 6) (0 0) (1 1) (2 2))) (1 81 15625 1 1 4)
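For the fixed-arity case in the question, loop can also destructure each inner list directly, so no helper function is needed (a small sketch in the same spirit as the answer, not something the answer itself showed):
(loop for (x y) in '((1 2) (3 4) (5 6) (0 0) (1 1) (2 2))
      collect (+ x y))
;; => (3 7 11 0 2 4)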
7
7
75,085,270
2023-1-11
https://stackoverflow.com/questions/75085270/cv2-aruco-charucoboard-create-not-found-in-opencv-4-7-0
I have installed opencv-python-4.7.0.68 and opencv-contrib-python-4.7.0.68 The code below gives me the following error: AttributeError: module 'cv2.aruco' has no attribute 'CharucoBoard_create' Sample code: import cv2 aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50) board = cv2.aruco.CharucoBoard_create(11, 8, 0.015, 0.011, aruco_dict)
This is due to a change that happened in release 4.7.0, when the Aruco code was moved from contrib to the main repository. The constructor cv2.aruco.CharucoBoard_create has been renamed to cv2.aruco.CharucoBoard and its parameter list has changed slightly -- instead of the first two integer parameters squaresX and squaresY, you should pass in a single tuple with two values, representing the size. (Note: The documentation seems to be missing the Python constructor's signature. Bug report has been filed.) So, your code should look like: import cv2 aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50) board = cv2.aruco.CharucoBoard((11, 8), 0.015, 0.011, aruco_dict)
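If you also need to render the board image, note (as an assumption about the 4.7 API, worth verifying against your installed version) that the drawing helper was reorganized in the same move; on 4.7+ the board object exposes generateImage:
import cv2
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard((11, 8), 0.015, 0.011, aruco_dict)
board_img = board.generateImage((1100, 800))  # output size in pixels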
8
9
75,085,575
2023-1-11
https://stackoverflow.com/questions/75085575/importerror-cannot-import-name-build-py-2to3-from-distutils-command-build-py
I tried to install bipwallet through pip but it says there is no 'build_py_2to3' in distutils Defaulting to user installation because normal site-packages is not writeable Collecting bipwallet ... Collecting protobuf==3.0.0a3 Using cached protobuf-3.0.0a3.tar.gz (88 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-q8v8yny3/protobuf_3f1a8b67130540ab9c93af7fe765918c/setup.py", line 29, in <module> from distutils.command.build_py import build_py_2to3 as _build_py ImportError: cannot import name 'build_py_2to3' from 'distutils.command.build_py' (/home/orkhan/.local/lib/python3.11/site-packages/setuptools/_distutils/command/build_py.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I tried to search in Google but it did not help. I also tried pip install --upgrade distutils, thinking maybe it's just an older version. P.S. my python version is 3.11
It seems as though bipwallet or one of its dependencies (protobuf-3.0.0a3?) wants to use whatever version of setuptools is available rather than pinning a specific version. setuptools v58.0.0 has a breaking change, first included in Python 3.10, in which build_py_2to3 was removed. You have a couple of options: Find the offending library and edit its setup.py to indicate that it should use setuptools<=57.5.0, then retry. Downgrade your Python installation to 3.9 to get a local version of setuptools prior to the breaking change. Here are a couple of other related posts/links to the issue you're seeing: https://github.com/mhammond/pywin32/issues/1813 https://bytemeta.vip/repo/StanfordVL/iGibson/issues/227 https://github.com/mobinmbn/bipwallet_fix
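A third option, sketched under the assumption that the build is using your user-local setuptools (which the traceback path under ~/.local/lib/python3.11/site-packages suggests): temporarily downgrade setuptools in the installing environment and retry. It may still fail for other reasons given how old protobuf-3.0.0a3 is:
pip install "setuptools<58"
pip install bipwallet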
5
5
75,084,387
2023-1-11
https://stackoverflow.com/questions/75084387/how-to-sort-columns-in-a-dataframe-such-that-the-values-in-the-first-row-are-fro
I have the following dataframe: Audi Hyundai Kia Mercedes Tesla VW Volvo 2019 0.25 nan nan 0.5 nan nan 0.25 2020 nan 0.125 nan 0.375 0.125 0.125 0.25 2021 nan nan 0.25 0.5 nan 0.25 nan I want to rearrange the columns such that the first row is sorted from largest to smallest. So the order of the columns should be Mercedes, Audi/Volvo, the rest. I tried df.sort_values() so many times, but I always get errors. The most common error is about the usage of by.
You can reorder the columns based on the sorted order of the first row: out = df[df.iloc[0].sort_values(ascending=False).index] print(out) # Output Mercedes Audi Volvo Hyundai Kia Tesla VW 2019 0.500 0.25 0.25 NaN NaN NaN NaN 2020 0.375 NaN 0.25 0.125 NaN 0.125 0.125 2021 0.500 NaN NaN NaN 0.25 NaN 0.250
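If you would rather make sort_values itself work (which is what the question was attempting), an equivalent form is to sort along the column axis by the label of the first row; a small sketch, assuming the year labels form the index:
out = df.sort_values(by=df.index[0], axis=1, ascending=False)
NaN values go to the end by default (na_position='last'), which matches the output shown above.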
3
5
75,078,242
2023-1-11
https://stackoverflow.com/questions/75078242/how-to-generate-a-png-image-in-pil-and-display-it-in-jinja2-template-using-fasta
I have a FastAPI endpoint that is generating PIL images. I want to then send the resulting image as a stream to a Jinja2 TemplateResponse. This is a simplified version of what I am doing: import io from PIL import Image @api.get("/test_image", status_code=status.HTTP_200_OK) def test_image(request: Request): '''test displaying an image from a stream. ''' test_img = Image.new('RGBA', (300,300), (0, 255, 0, 0)) # I've tried with and without this: test_img = test_img.convert("RGB") test_img = test_img.tobytes() base64_encoded_image = base64.b64encode(test_img).decode("utf-8") return templates.TemplateResponse("display_image.html", {"request": request, "myImage": base64_encoded_image}) With this simple html: <html> <head> <title>Display Uploaded Image</title> </head> <body> <h1>My Image<h1> <img src="data:image/jpeg;base64,{{ myImage | safe }}"> </body> </html> I've been working from these answers and have tried multiple permutations of these: How to display uploaded image in HTML page using FastAPI & Jinja2? How to convert PIL Image.image object to base64 string? How can I display PIL image to html with render_template flask? This seems like it ought to be very simple but all I get is the html icon for an image that didn't render. What am I doing wrong? Thank you. I used Mark Setchell's answer, which clearly shows what I was doing wrong, but still am not getting an image in html. My FastAPI is: @api.get("/test_image", status_code=status.HTTP_200_OK) def test_image(request: Request): # Create image im = Image.new('RGB',(1000,1000),'red') im.save('red.png') print(im.tobytes()) # Create buffer buffer = io.BytesIO() # Tell PIL to save as PNG into buffer im.save(buffer, 'PNG') # get the PNG-encoded image from buffer PNG = buffer.getvalue() print() print(PNG) base64_encoded_image = base64.b64encode(PNG) return templates.TemplateResponse("display_image.html", {"request": request, "myImage": base64_encoded_image}) and my html: <html> <head> <title>Display Uploaded Image</title> </head> <body> <h1>My Image 3<h1> <img src="data:image/png;base64,{{ myImage | safe }}"> </body> </html> When I run, if I generate a 1x1 image I get the exact printouts in Mark's answer. If I run this version, with 1000x1000 image, it saves a red.png that I can open and see. But in the end, the html page has the heading and the icon for no image rendered. I'm clearly doing something wrong now in how I send to html.
There are a couple of issues here. I'll make a new section for each to keep it clearly divided up. If you want to send a base64-encoded PNG, you need to change your HTML to: <img src="data:image/png;base64,{{ myImage | safe }}"> If you create an image of a single red pixel like this: im = Image.new('RGB',(1,1),'red') print(im.tobytes()) you'll get: b'\xff\x00\x00' That is not a PNG-encoded image, how could it be - you haven't told PIL that you want a PNG, or a JPEG, or a TIFF, so it cannot know. It is just giving you the 3 raw RGB pixels as bytes #ff0000. If you save that image to disk as a PNG and dump it you will get: im.save('red.png') Then dump it: xxd red.png 00000000: 8950 4e47 0d0a 1a0a 0000 000d 4948 4452 .PNG........IHDR 00000010: 0000 0001 0000 0001 0802 0000 0090 7753 ..............wS 00000020: de00 0000 0c49 4441 5478 9c63 f8cf c000 .....IDATx.c.... 00000030: 0003 0101 00c9 fe92 ef00 0000 0049 454e .............IEN 00000040: 44ae 4260 82 D.B`. You can now see the PNG signature at the start. So we need to create that same thing, but just in memory without bothering the disk: import io import base64 from PIL import Image # Create image im = Image.new('RGB',(1,1),'red') # Create buffer buffer = io.BytesIO() # Tell PIL to save as PNG into buffer im.save(buffer, 'PNG') Now we can get the PNG-encoded image from the buffer: PNG = buffer.getvalue() And if we print it, it will look suspiciously identical to the PNG on disk: b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x02\x00\x00\x00\x90wS\xde\x00\x00\x00\x0cIDATx\x9cc\xf8\xcf\xc0\x00\x00\x03\x01\x01\x00\xc9\xfe\x92\xef\x00\x00\x00\x00IEND\xaeB`\x82' Now you can base64-encode it and send it: base64_encoded_image = base64.b64encode(PNG) Note: I only made 1x1 for demonstration purposes so I could show you the whole file. Make it bigger than 1x1 when you test, or you'll never see it 😀
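One extra detail worth checking in the question's edited code (my observation, not part of the original answer): base64.b64encode returns bytes, and interpolating bytes into the template produces a b'...' wrapper that breaks the data URI, so the image icon still shows. Decoding to text avoids that:
# assumes PNG holds the PNG-encoded bytes from the buffer, as above
base64_encoded_image = base64.b64encode(PNG).decode("utf-8")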
3
3
75,047,812
2023-1-8
https://stackoverflow.com/questions/75047812/remove-lines-link-two-scatter-points
Can anyone please help me remove the lines linking the scatter points when plotting with graph objects in Python? fig.add_trace(go.Scatter3d(x=x[65000:133083], y=y[65000:133083], z=z[65000:133083], marker=dict( size=1, # Changed node size... color=color[65000:133083], # ...and color colorscale='Viridis', line=None, showscale=True))) Please help me solve this.
Have you tried to add: mode = 'markers', marker = dict( size = 12, # color = z, # set color to an array/list of desired values colorscale = "Viridis", # choose a colorscale opacity = 0.8 ) Though I'm not sure why you have lines at all, given your code...
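Putting that together with the trace from the question (a sketch reusing the assumed x, y, z and color arrays): the key is mode='markers', since the default mode for go.Scatter3d includes connecting lines between successive points:
fig.add_trace(go.Scatter3d(
    x=x[65000:133083], y=y[65000:133083], z=z[65000:133083],
    mode='markers',  # draw points only, no connecting lines
    marker=dict(size=1, color=color[65000:133083], colorscale='Viridis', showscale=True)))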
3
2
75,040,507
2023-1-7
https://stackoverflow.com/questions/75040507/how-to-access-fastapi-backend-from-a-different-machine-ip-on-the-same-local-netw
Both the FastAPI backend and the Next.js frontend are running on localhost. On the same computer, the frontend makes API calls using fetch without any issues. However, on a different computer on the same network, e.g., on 192.168.x.x, the frontend runs, but its API calls are no longer working. I have tried using a Next.js proxy, but that still does not work. Frontend: export default function People({setPerson}:PeopleProps) { const fetcher = async (url:string) => await axios.get(url).then((res) => res.data); const { data, error, isLoading } = useSWR(`${process.env.NEXT_PUBLIC_API}/people`, fetcher); if (error) return <div>"Failed to load..."</div>; return ( <> {isLoading? "Loading..." :data.map((person: Person) => <div key={person.id}> {person.name} </div>)} </> ) } The Next.js app loads the .env.local file at startup, which contains: NEXT_PUBLIC_API=http://localhost:20002 Backend: from typing import List from fastapi import APIRouter, Depends from ..utils.db import get_session as db from sqlmodel import Session, select from ..schemas.person import Person, PersonRead router = APIRouter() @router.get("/people", response_model = List[PersonRead]) async def get_people(sess: Session = Depends(db)): res = sess.exec(select(Person)).all() return res The frontend runs with: npm run dev, and outputs ready - started server on 0.0.0.0:3000, url: http://localhost:3000 The backend runs with: uvicorn hogar_api.main:app --port=20002 --host=0.0.0.0 --reload, and outputs: INFO: Uvicorn running on http://0.0.0.0:20002 (Press CTRL+C to quit) When I open the browser on http://localhost:3000 on the same machine, the list of Person is displayed on the screen. When I open the browser on http://192.168.x.x:3000 on another machine on the same network, I get the "Failed to Load..." message. When I open the FastAPI swagger docs on either machine, the documentation is displayed correctly and all the endpoints work as expected. The CORS settings look like this: from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() origins = [ "http://localhost:3000", ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], )
Setting the host flag to 0.0.0.0 To access a FastAPI backend from a different machine/IP (than the local machine that is running the server) on the same network, you would need to make sure that the host flag is set to 0.0.0.0. The IP address 0.0.0.0 means all IPv4 addresses on the local machine. If a host has two IP addresses, e.g., 192.168.10.2 and 10.1.2.5, and the server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs. For example, through command line interface: uvicorn main:app --host 0.0.0.0 --port 8000 or, programmatically: if __name__ == '__main__': uvicorn.run(app, host='0.0.0.0', port=8000) Note that RFC 1122 prohibits 0.0.0.0 as a destination address in IPv4 and only allows it as a source address, meaning that you can't type, for instance, http://0.0.0.0:8000 in the address bar of a web browser and expect it to work. You should instead use one of the IPv4 addresses of the local machine, e.g., http://192.168.10.2:8000 (or, if you are testing the API on the same local machine running the server, you could use http://127.0.0.1:8000 or http://localhost:8000). Adjusting Firewall Settings You may also need to adjust your Firewall to allow external access to the port you specified, by creating an inbound firewall rule for Python. On Windows, this is usually created automatically, when allowing a program (Python, in this case) to communicate through Windows Firewall, and by default this allows traffic on Any port (for both TCP and UDP connections). Adjusting CORS Settings Additionally, if your frontend is listening on a separate IP address and/or port number from the backend, please make sure to have CORS enabled and properly configured, as described in this answer and this answer. For example: origins = ['http://localhost:3000','http://192.168.178.23:3000'] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) Making HTTP requests in JavaScript Finally, please take a look at this answer and this answer regarding using the proper origin/URL when issuing a JavaScript fetch request from the frontend. In short, in your JavaScript asynchronous request, you should use the same domain name you typed in the address bar of your web browser (but with the port number your backend server is listening on). If, for example, both the backend and the frontend server are listening on the same IP address and port number, e.g., 192.168.178.23:8000 (that would be the case when using Jinja2Templates, for instance), you could access the frontend by typing in the address bar of the browser the URL leading to the frontend page, e.g., http://192.168.178.23:8000/, and the fetch request should look like this: fetch('http://192.168.178.23:8000/people', {... For convenience, in the above case (i.e., when both the backend and the frontend are running on the same machine and listening on the same port), you could use relative paths, as suggested in a linked answer above. Note that if you are testing your application locally on the same machine and not from a different machine on LAN, and instead using 127.0.0.1 or localhost to access the frontend/backend, those two are different domains/origins. Thus, if you typed http://127.0.0.1:8000/ in the address bar of the browser to access the frontend page, you shouldn't make fetch requests using, for instance, fetch('http://localhost:8000/people', as you would get a CORS error (e.g., Access to fetch at [...] from origin [...] has been blocked by CORS policy...).
You should instead use fetch('http://127.0.0.1:8000/people', and vice versa. Otherwise, if the frontend's origin differs from the backend's (see this answer for more details on origin), it should then be added to the list of origins in the CORS settings of the backend (see example above).
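Applied to the setup in the question (with a hypothetical LAN address; substitute the actual IP of the machine running both servers): the frontend's env file and the backend's CORS origins would both point at that address rather than localhost:
# .env.local read by Next.js
NEXT_PUBLIC_API=http://192.168.1.50:20002
# FastAPI CORS settings
origins = ["http://localhost:3000", "http://192.168.1.50:3000"]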
7
27
75,003,869
2023-1-4
https://stackoverflow.com/questions/75003869/how-to-handle-timestamps-from-summer-and-winter-when-converting-strings-in-polar
I'm trying to convert string timestamps to polars datetime from the timestamps my camera puts in its RAW file metadata, but polars throws this error when I have timestamps from both summer time and winter time. ComputeError: Different timezones found during 'strptime' operation. How do I persuade it to convert these successfully? (ideally handling different timezones as well as the change from summer to winter time) And then how do I convert these timestamps back to the proper local clocktime for display? Note that while the timestamp strings just show the offset, there is an exif field "Time Zone City" in the metadata as well as fields with just the local (naive) timestamp import polars as plr testdata=[ {'name': 'BST 11:06', 'ts': '2022:06:27 11:06:12.16+01:00'}, {'name': 'GMT 7:06', 'ts': '2022:12:27 12:06:12.16+00:00'}, ] pdf = plr.DataFrame(testdata) pdfts = pdf.with_column(plr.col('ts').str.strptime(plr.Datetime, fmt = "%Y:%m:%d %H:%M:%S.%f%z")) print(pdf) print(pdfts) It looks like I need to use tz_convert, but I cannot see how to add it to the conversion expression, and what looks like the relevant docpage just 404s (broken link to dt_namespace)
polars 0.16 update Since PR 6496 was merged, you can parse mixed offsets to UTC, then set the time zone: import polars as pl pdf = pl.DataFrame([ {'name': 'BST 11:06', 'ts': '2022:06:27 11:06:12.16+01:00'}, {'name': 'GMT 7:06', 'ts': '2022:12:27 12:06:12.16+00:00'}, ]) pdfts = pdf.with_columns( pl.col('ts').str.to_datetime("%Y:%m:%d %H:%M:%S%.f%z") .dt.convert_time_zone("Europe/London") ) print(pdfts) shape: (2, 2) ┌───────────┬─────────────────────────────┐ │ name ┆ ts │ │ --- ┆ --- │ │ str ┆ datetime[μs, Europe/London] │ ╞═══════════╪═════════════════════════════╡ │ BST 11:06 ┆ 2022-06-27 11:06:12.160 BST │ │ GMT 7:06 ┆ 2022-12-27 12:06:12.160 GMT │ └───────────┴─────────────────────────────┘ old version: Here's a work-around you could use: remove the UTC offset and localize to a pre-defined time zone. Note: the result will only be correct if UTC offsets and time zone agree. timezone = "Europe/London" pdfts = pdf.with_column( plr.col('ts') .str.replace("[+|-][0-9]{2}:[0-9]{2}", "") .str.strptime(plr.Datetime, fmt="%Y:%m:%d %H:%M:%S%.f") .dt.tz_localize(timezone) ) print(pdf) ┌───────────┬──────────────────────────────┐ │ name ┆ ts │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════════╪══════════════════════════════╡ │ BST 11:06 ┆ 2022:06:27 11:06:12.16+01:00 │ │ GMT 7:06 ┆ 2022:12:27 12:06:12.16+00:00 │ └───────────┴──────────────────────────────┘ print(pdfts) ┌───────────┬─────────────────────────────┐ │ name ┆ ts │ │ --- ┆ --- │ │ str ┆ datetime[ns, Europe/London] │ ╞═══════════╪═════════════════════════════╡ │ BST 11:06 ┆ 2022-06-27 11:06:12.160 BST │ │ GMT 7:06 ┆ 2022-12-27 12:06:12.160 GMT │ └───────────┴─────────────────────────────┘ Side-Note: to be fair, pandas does not handle mixed UTC offsets either, unless you parse to UTC straight away (keyword utc=True in pd.to_datetime). With mixed UTC offsets, it falls back to using a series of native Python datetime objects. That makes a lot of the pandas time series functionality, like the dt accessor, unavailable.
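For display as local clock time, one option (a small sketch building on the 0.16 example above; exact formatting behavior may vary by polars version) is to format the time-zone-aware column directly:
pdfts.with_columns(
    pl.col('ts').dt.strftime('%Y-%m-%d %H:%M:%S %Z').alias('ts_local')
)
which yields strings such as '2022-06-27 11:06:12 BST'.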
5
6
75,009,761
2023-1-4
https://stackoverflow.com/questions/75009761/do-we-need-to-run-load-dotenv-in-every-module
I have a .env defined with the following content: env=loc I have three python modules that make use of this variable. ├── __init__.py ├── cli.py ├── settings.py ├── commands │ ├── __init__.py │ └── output.py settings.py: import os from dotenv import load_dotenv load_dotenv() if not os.getenv("env"): raise TypeError("'env' variable not found in .env file") output.py: import os def output(): return os.getenv("env") cli.py: import settings from commands.output import output import os CURR_ENV = os.getenv("env") print(CURR_ENV) print(output()) Output: loc None Why is the output from output.py not loc? The environment variables were loaded when load_dotenv() was run for the first time. Do I have to run load_dotenv() every time I need to access the environment variables?
This answer more or less repeats what's mentioned in the comments and adds a demo example. Once load_dotenv() is called, environment variables will be visible in the process it's called in (and in any child process) from that point on. Suppose you have a project organized as follows. ├── commands │ ├── __init__.py │ └── output.py ├── __init__.py ├── .env ├── main.py └── settings.py settings.py: from os import getenv print("From settings.py:", getenv("env")) output.py: from os import getenv def output(): print("From output.py:", getenv("env")) output() main.py: from os import getenv from dotenv import load_dotenv import settings # <-- settings doesn't see .env yet from commands.output import output # <-- output doesn't see .env yet load_dotenv() # <-- because load_dotenv is called here print("From main.py:", getenv("env")) output() # <--- the function call is made after load_dotenv(), so .env is visible So if you run main.py in the CLI, you'll get the following output. PS path\to\module> python .\main.py From settings.py: None From output.py: None From main.py: loc From output.py: loc To make .env visible everywhere, move the load_dotenv logic before any of the imports in main.py, or make the call to load_dotenv inside the first import (in this example, settings.py) instead of main.py; every line in settings.py is executed once it's imported, so every import and function call in it runs during the import in main.py anyway. In the above example of executing main.py, .env became visible when calling the output() function inside main.py because it was called after the call to load_dotenv. That means, if you want to execute output.py by itself, you'll need to import load_dotenv and make a call to it inside output.py in order for it to see the environment variables. This becomes somewhat useful if you have a test/ directory in your module and want to execute some file from there.
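A minimal sketch of the "call it inside the first import" option described above (assuming settings.py is always the first project import, as in the question):
# settings.py
import os
from dotenv import load_dotenv
load_dotenv()  # runs at import time, before any other module reads os.environ
if not os.getenv("env"):
    raise TypeError("'env' variable not found in .env file")
Any module imported after settings, and any function called later in the same process, will then see the variables through os.getenv without calling load_dotenv again.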
5
3
75,040,733
2023-1-7
https://stackoverflow.com/questions/75040733/is-there-a-way-to-use-strenum-in-earlier-python-versions
The enum module in Python 3.11 has the StrEnum class. I consider it very convenient but cannot use it in Python 3.10. What would be the easiest method to use this class anyway?
On Python 3.10, you can inherit from str and Enum to have a StrEnum: from enum import Enum class MyEnum(str, Enum): choice1 = "choice1" choice2 = "choice2" With this approach, you have string comparison: "choice1" == MyEnum.choice1 >> True Be aware, however, that Python 3.11 makes a breaking change to classes which inherit from (str, Enum): https://github.com/python/cpython/issues/100458. You will have to change all instances of (str, Enum) to StrEnum when upgrading to maintain the same behavior. Or: you can execute pip install StrEnum and have this: from strenum import StrEnum class MyEnum(StrEnum): choice1 = "choice1" choice2 = "choice2"
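If the same code has to run on both 3.10 and 3.11+, a small compatibility shim (a sketch, not an official backport) keeps call sites unchanged:
import sys

if sys.version_info >= (3, 11):
    from enum import StrEnum
else:
    from enum import Enum

    class StrEnum(str, Enum):
        """Fallback with the (str, Enum) mixin for Python < 3.11."""
        pass

class MyEnum(StrEnum):
    choice1 = "choice1"
    choice2 = "choice2"
Keep in mind the caveat from the answer above: str() and format() behave differently between the (str, Enum) mixin and the real StrEnum, so the two branches are not perfectly identical.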
10
16
75,021,750
2023-1-5
https://stackoverflow.com/questions/75021750/deltatable-schema-not-updating-when-using-alter-table-add-columns
I'm currently playing with Delta Tables on my local machine and I encountered a behavior that I don't understand. I create my DeltaTable like so: df.write \ .format('delta') \ .mode('overwrite') \ .option('overwriteSchema', 'true') \ .save(my_table_path) dt = DeltaTable.forPath(spark, my_table_path) Then, I run the following command. spark.sql(f"ALTER TABLE delta.`{my_table_path}` ADD COLUMNS (my_new_col string)") This adds a new column to the schema, as can be seen by running spark.sql(f"DESCRIBE TABLE delta.`{my_table_path}`").show() It even shows as the last DeltaTable operation in the history, by running dt.history().show(). However, this is not reflected in the DeltaTable object dt; in fact, if I run dt.toDF().printSchema(), the new column is not displayed. On the other hand, if I do something like spark.sql(f"UPDATE delta.`{my_table_path}` SET existing_col = 'foo'") and afterwards run dt.toDF().show(), the update is reflected and shown under existing_col, which now appears containing foo everywhere. The only way I found to have the dt object reflect the schema change is to run dt = DeltaTable.forPath(spark, my_table_path) again after ALTER TABLE. What am I missing? Edit: Added repo link for reproducibility. https://github.com/wtfzambo/delta-bug-working-example Edit2: Repo uses Delta 1.0, but issue exists also in Delta 2.*
Thanks for your patience @wtfzambo - I just realized when I reproed this myself I should have seen the issue immediately, so sorry for taking so long to realize this. Actually, this works as expected, but allow me to explain. When you ran the ALTER TABLE statement, the schema in fact did change and it was registered within the Delta transaction log, but up to this point it was a metadata change. Once you ran the UPDATE statement, then this was registered and the data itself reflected this change, i.e. the Parquet files within the table can see this change. When you ran dt.toDF().printSchema(), the DataFrame could only see the data and metadata together, but the ALTER TABLE statement was only a metadata change. To better showcase this, allow me to provide context via the file system. To recreate this exact scenario, please use the docker at https://go.delta.io/docker and use the DELTA_PACKAGE_VERSION as delta-core_2.12:2.1.0. That is, run the Docker container and use the PySpark steps: To start PySpark, run the command: $SPARK_HOME/bin/pyspark --packages io.delta:delta-core_2.12:2.1.0 \ --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \ --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" Run the basic commands to create a simple table # Create Spark DataFrame data = spark.range(0, 5) # Write Delta table data.write.format("delta").save("/tmp/delta-table") # Read Delta table df = spark.read.format("delta").load("/tmp/delta-table") # Show Delta table df.show() When you run the command spark.sql(f"DESCRIBE TABLE delta.`/tmp/delta-table/`").show(), you will see: +---------------+---------+-------+ | col_name|data_type|comment| +---------------+---------+-------+ | id| bigint| | | | | | | # Partitioning| | | |Not partitioned| | | +---------------+---------+-------+ As well, if you list the files of the table: NBuser@5b0edf0b8779:/tmp/delta-table$ ls -lsgA total 44 4 drwxr-xr-x 2 NBuser 4096 Mar 12 23:50 _delta_log 4 -rw-r--r-- 1 NBuser 478 Mar 12 23:29 part-00000-4abcc1fa-b2c8-441a-a392-8dab57edd819-c000.snappy.parquet 4 -rw-r--r-- 1 NBuser 12 Mar 12 23:29 .part-00000-4abcc1fa-b2c8-441a-a392-8dab57edd819-c000.snappy.parquet.crc 4 -rw-r--r-- 1 NBuser 478 Mar 12 23:29 part-00001-6327358c-8c00-4ad6-9e3d-263f0ea66e3f-c000.snappy.parquet 4 -rw-r--r-- 1 NBuser 12 Mar 12 23:29 .part-00001-6327358c-8c00-4ad6-9e3d-263f0ea66e3f-c000.snappy.parquet.crc 4 -rw-r--r-- 1 NBuser 478 Mar 12 23:29 part-00002-eea1d287-be68-4a62-874d-ab4e39c6a825-c000.snappy.parquet 4 -rw-r--r-- 1 NBuser 12 Mar 12 23:29 .part-00002-eea1d287-be68-4a62-874d-ab4e39c6a825-c000.snappy.parquet.crc 4 -rw-r--r-- 1 NBuser 478 Mar 12 23:29 part-00003-c79b4180-5968-4fee-8181-6752d9cfb333-c000.snappy.parquet 4 -rw-r--r-- 1 NBuser 12 Mar 12 23:29 .part-00003-c79b4180-5968-4fee-8181-6752d9cfb333-c000.snappy.parquet.crc 4 -rw-r--r-- 1 NBuser 478 Mar 12 23:29 part-00004-c3399acd-75ca-4ea5-85f9-03fa60897161-c000.snappy.parquet 4 -rw-r--r-- 1 NBuser 12 Mar 12 23:29 .part-00004-c3399acd-75ca-4ea5-85f9-03fa60897161-c000.snappy.parquet.crc Now, let's run the ALTER command via spark.sql(f"ALTER TABLE delta.`/tmp/delta-table/` ADD COLUMNS (blah string)") and when you run the DESCRIBE TABLE statement, you get what you expected: spark.sql(f"DESCRIBE TABLE delta.`/tmp/delta-table/`").show() +---------------+---------+-------+ | col_name|data_type|comment| +---------------+---------+-------+ | id| bigint| | | blah| string| | | | | | | # Partitioning| | | |Not partitioned| | | +---------------+---------+-------+ But, if you were to run the ls -lsgA for the temp table, note that the files look exactly the same. That is, there are no changes to the data, but only the metadata. To see this, run the command: NBuser@5b0edf0b8779:/tmp/delta-table/_delta_log$ ls -lsgA total 16 4 -rw-r--r-- 1 NBuser 2082 Mar 12 23:29 00000000000000000000.json 4 -rw-r--r-- 1 NBuser 28 Mar 12 23:29 .00000000000000000000.json.crc 4 -rw-r--r-- 1 NBuser 752 Mar 12 23:38 00000000000000000001.json 4 -rw-r--r-- 1 NBuser 16 Mar 12 23:38 .00000000000000000001.json.crc Note the 00000000000000000001.json which contains the transaction which corresponds to your ALTER TABLE command. If you were to read this .json file, it would look like this: {"metaData": { "id":"d583238c-87ab-4de0-a09d-141ef499371d", "format":{"provider":"parquet","options":{}}, "schemaString":" {\"type\":\"struct\", \"fields\":[{\"name\":\"id\",\"type\":\"long\",\"nullable\":true,\"metadata\":{}}, {\"name\":\"blah\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}]}" ,"partitionColumns":[],"configuration":{},"createdTime":1678663791967}} {"commitInfo":{"timestamp":1678664321014, "operation":"ADD COLUMNS","operationParameters":{"columns":"[{\"column\":{\"name\":\"blah\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}}}]"},"readVersion":0,"isolationLevel":"Serializable","isBlindAppend":true,"operationMetrics":{},"engineInfo":"Apache-Spark/3.3.1 Delta-Lake/2.1.0","txnId":"b54db68d-652b-4930-82d5-61a542d82100"}} Notice the schemaString -> fields which show the blah column, and notice the operation command that points to an ADD COLUMNS command which also includes the blah column. So the key point here is that while the transaction log contains the blah column being added, the root table directory has no changes to the .parquet files, meaning that the change was reflected in the metadata but not the data. Until you ran the UPDATE command, the changes were not reflected in the .parquet files (i.e. data files). And in the case of the Spark DataFrame, it can only pull the schema when it's associated with the data.
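As a practical addendum in my own words (consistent with the workaround the question already found): because the change up to that point is metadata-only, re-creating the DeltaTable handle, or re-loading the path with the DataFrame reader, is the straightforward way to get a schema that includes the new column before any data has been rewritten:
dt = DeltaTable.forPath(spark, my_table_path)                  # fresh handle sees the ALTER TABLE metadata
spark.read.format("delta").load(my_table_path).printSchema()   # also shows the new (all-null) column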
3
3
75,017,836
2023-1-5
https://stackoverflow.com/questions/75017836/convert-bytes-to-bits-with-leading-zeros
I know that I can do this: byte = 58 format(byte, '08b') >>> '00111010' With two bytes I have to do format(bytes, '016b'), but if I don't know the number of bytes, I can't set a fixed width for format, so I have to do: with open('file','rb') as a: b = a.read() c = int.from_bytes(b) d = format(c, 'b') d = (8 - len(d) % 8) * '0' + d I was wondering if there is an easier way to do this, and I want it without using any loops. Thanks!
You can map each byte to an 8-bit representation with the str.format method and then join the byte representations into a single string for output (where b is the bytes object you read from a file): print(''.join(map('{:08b}'.format, b)))
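If you prefer to stay closer to the int.from_bytes approach from the question, you can also compute the target width from the byte count so that leading zero bits are kept (a sketch of the same idea without the manual padding):
with open('file', 'rb') as f:
    b = f.read()
# width is 8 bits per byte, zero-padded on the left
bits = format(int.from_bytes(b, 'big'), f'0{8 * len(b)}b')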
4
1
75,052,206
2023-1-9
https://stackoverflow.com/questions/75052206/specifying-huggingface-model-as-project-dependency
Is it possible to install huggingface models as a project dependency? Currently it is downloaded automatically by the SentenceTransformer library, but this means in a docker container it downloads every time it starts. This is the model I am trying to use: https://huggingface.co/sentence-transformers/all-mpnet-base-v2 I have tried specifying the url as a dependency in my pyproject.toml: all-mpnet-base-v2 = {git = "https://huggingface.co/sentence-transformers/all-mpnet-base-v2.git", branch = "main"} The first error I got was that the name was incorrect and it should be called train-script, which I renamed the dependency to, but I'm not sure if this is correct. Now I have: train-script = {git = "https://huggingface.co/sentence-transformers/all-mpnet-base-v2.git", branch = "main"} However, now I get the following error: Package operations: 1 install, 0 updates, 0 removals • Installing train-script (0.0.0 bd44305) EnvCommandError Command ['/srv/.venv/bin/pip', 'install', '--no-deps', '-U', '/srv/.venv/src/train-script'] errored with the following return code 1, and output: ERROR: Directory '/srv/.venv/src/train-script' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. [notice] A new release of pip available: 22.2.2 -> 22.3.1 [notice] To update, run: pip install --upgrade pip at /usr/local/lib/python3.10/site-packages/poetry/utils/env.py:1183 in _run 1179│ output = subprocess.check_output( 1180│ cmd, stderr=subprocess.STDOUT, **kwargs 1181│ ) 1182│ except CalledProcessError as e: → 1183│ raise EnvCommandError(e, input=input_) 1184│ 1185│ return decode(output) 1186│ 1187│ def execute(self, bin, *args, **kwargs): Is this possible? If not, is there a recommended way to bake the model download into a docker container so it doesn't need to be downloaded each time?
I was not able to find a native way to do this with project dependency files, so I did this using a multi-stage docker file. First I clone the model locally, then copy it into the appropriate /root/.cache/torch/ folder. Here is an example: FROM python:3.10.3 as model-download-stage RUN apt update && apt install git-lfs -y RUN git lfs install RUN git clone https://huggingface.co/sentence-transformers/all-mpnet-base-v2 /tmp/model RUN rm -rf /tmp/model/.git FROM python:3.10.3 COPY --from=model-download-stage /tmp/model /root/.cache/torch/sentence_transformers/sentence-transformers_all-mpnet-base-v2
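A lighter-weight variant that some projects use instead of git-lfs (an assumption about your setup, not something the answer above tested): warm the cache at build time by instantiating the model once in a RUN step, so the downloaded files are baked into the image's cache directory (the cache path must match the home/user used at runtime):
FROM python:3.10.3
RUN pip install sentence-transformers
# downloads and caches the model during the image build
RUN python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('sentence-transformers/all-mpnet-base-v2')"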
3
3
75,040,990
2023-1-7
https://stackoverflow.com/questions/75040990/importerror-dll-load-failed-while-importing-path-the-specified-module-could-n
When I was trying to import matplotlib, I wrote import matplotlib.pyplot as plt in my code, and this error occurred. Traceback (most recent call last): File "C:\aiProjects\opencv\test.py", line 2, in <module> import matplotlib.pyplot as plt File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\__init__.py", line 113, in <module> from . import _api, _version, cbook, _docstring, rcsetup File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\rcsetup.py", line 27, in <module> from matplotlib.colors import Colormap, is_color_like File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\colors.py", line 56, in <module> from matplotlib import _api, _cm, cbook, scale File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\scale.py", line 22, in <module> from matplotlib.ticker import ( File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\ticker.py", line 138, in <module> from matplotlib import transforms as mtransforms File "C:\Users\blackhao\AppData\Local\Programs\Python\Python311\Lib\site-packages\matplotlib\transforms.py", line 49, in <module> from matplotlib._path import ( ImportError: DLL load failed while importing _path: The specified module could not be found. Process finished with exit code 1 I tried several ways such as reinstalling matplotlib and pasting new "Msvcp71.dll" and "Msvcr71.dll" files in my system folders. Any way to solve this problem?
... reinstall the matplotlib Python package with the --ignore-installed argument: pip3 install matplotlib --user --ignore-installed
5
4
75,016,155
2023-1-5
https://stackoverflow.com/questions/75016155/converting-onnx-model-to-tensorflow-fails
I am trying to convert detr model to tensor flow using onnx. I converted the model using torch.onnx.export with opset_version=12.(which produces a detr.onnx file) Then I tried to convert the onnx file to tensorflow model using this example. I added onnx.check_model line to make sure model is loaded correctly. import math from PIL import Image import requests import matplotlib.pyplot as plt import torch from torch import nn from torchvision.models import resnet50 import onnx from onnx_tf.backend import prepare import torchvision.transforms as T torch.set_grad_enabled(False) model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True) url = 'http://images.cocodataset.org/val2017/000000039769.jpg' im = Image.open(requests.get(url, stream=True).raw) transform = T.Compose([ T.Resize(800), T.ToTensor(), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) img = transform(im).unsqueeze(0) torch.onnx.export(model, img, 'detr.onnx', opset_version = 12) onnx_model = onnx.load('./detr.onnx') result = onnx.checker.check_model(onnx_model) tf_rep = prepare(onnx_model) tf_rep.export_graph('./model.pb') This code raises an exception when it reaches tf_rep.export_graph('./model.pb') line. onnx version = 1.13.0 , torch version = 1.13.0+cu117 , onnx_tf = 1.10.0 message of exception : KeyError Traceback (most recent call last) Cell In[19], line 26 23 result = onnx.checker.check_model(onnx_model) 25 tf_rep = prepare(onnx_model) ---> 26 tf_rep.export_graph('./model.pb') File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend_rep.py:143, in TensorflowRep.export_graph(self, path) 129 """Export backend representation to a Tensorflow proto file. 130 131 This function obtains the graph proto corresponding to the ONNX (...) 137 :returns: none. 138 """ 139 self.tf_module.is_export = True 140 tf.saved_model.save( 141 self.tf_module, 142 path, --> 143 signatures=self.tf_module.__call__.get_concrete_function( 144 **self.signatures)) 145 self.tf_module.is_export = False File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:1239, in Function.get_concrete_function(self, *args, **kwargs) 1237 def get_concrete_function(self, *args, **kwargs): 1238 # Implements GenericFunction.get_concrete_function. -> 1239 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs) 1240 concrete._garbage_collector.release() # pylint: disable=protected-access 1241 return concrete File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:1219, in Function._get_concrete_function_garbage_collected(self, *args, **kwargs) 1217 if self._stateful_fn is None: 1218 initializers = [] -> 1219 self._initialize(args, kwargs, add_initializers_to=initializers) 1220 self._initialize_uninitialized_variables(initializers) 1222 if self._created_variables: 1223 # In this case we have created variables on the first call, so we run the 1224 # defunned version which is guaranteed to never create variables. 
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:785, in Function._initialize(self, args, kwds, add_initializers_to) 782 self._lifted_initializer_graph = lifted_initializer_graph 783 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph) 784 self._concrete_stateful_fn = ( --> 785 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access 786 *args, **kwds)) 788 def invalid_creator_scope(*unused_args, **unused_kwds): 789 """Disables variable creation.""" File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2523, in Function._get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2521 args, kwargs = None, None 2522 with self._lock: -> 2523 graph_function, _ = self._maybe_define_function(args, kwargs) 2524 return graph_function File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2760, in Function._maybe_define_function(self, args, kwargs) 2758 # Only get placeholders for arguments, not captures 2759 args, kwargs = placeholder_dict["args"] -> 2760 graph_function = self._create_graph_function(args, kwargs) 2762 graph_capture_container = graph_function.graph._capture_func_lib # pylint: disable=protected-access 2763 # Maintain the list of all captures File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:2670, in Function._create_graph_function(self, args, kwargs) 2665 missing_arg_names = [ 2666 "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names) 2667 ] 2668 arg_names = base_arg_names + missing_arg_names 2669 graph_function = ConcreteFunction( -> 2670 func_graph_module.func_graph_from_py_func( 2671 self._name, 2672 self._python_function, 2673 args, 2674 kwargs, 2675 self.input_signature, 2676 autograph=self._autograph, 2677 autograph_options=self._autograph_options, 2678 arg_names=arg_names, 2679 capture_by_value=self._capture_by_value), 2680 self._function_attributes, 2681 spec=self.function_spec, 2682 # Tell the ConcreteFunction to clean up its graph once it goes out of 2683 # scope. This is not the default behavior since it gets used in some 2684 # places (like Keras) where the FuncGraph lives longer than the 2685 # ConcreteFunction. 2686 shared_func_graph=False) 2687 return graph_function File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1247, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, acd_record_initial_resource_uses) 1244 else: 1245 _, original_func = tf_decorator.unwrap(python_func) -> 1247 func_outputs = python_func(*func_args, **func_kwargs) 1249 # invariant: `func_outputs` contains only Tensors, CompositeTensors, 1250 # TensorArrays and `None`s. 1251 func_outputs = nest.map_structure( 1252 convert, func_outputs, expand_composites=True) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\def_function.py:677, in Function._defun_with_scope.<locals>.wrapped_fn(*args, **kwds) 673 with default_graph._variable_creator_scope(scope, priority=50): # pylint: disable=protected-access 674 # __wrapped__ allows AutoGraph to swap in a converted function. We give 675 # the function a weak reference to itself to avoid a reference cycle. 
676 with OptionalXlaContext(compile_with_xla): --> 677 out = weak_wrapped_fn().__wrapped__(*args, **kwds) 678 return out File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\eager\function.py:3317, in class_method_to_instance_method.<locals>.bound_method_wrapper(*args, **kwargs) 3312 return wrapped_fn(weak_instance(), *args, **kwargs) 3314 # If __wrapped__ was replaced, then it is always an unbound function. 3315 # However, the replacer is still responsible for attaching self properly. 3316 # TODO(mdan): Is it possible to do it here instead? -> 3317 return wrapped_fn(*args, **kwargs) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1233, in func_graph_from_py_func.<locals>.autograph_handler(*args, **kwargs) 1231 except Exception as e: # pylint:disable=broad-except 1232 if hasattr(e, "ag_error_metadata"): -> 1233 raise e.ag_error_metadata.to_exception(e) 1234 else: 1235 raise File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\func_graph.py:1222, in func_graph_from_py_func.<locals>.autograph_handler(*args, **kwargs) 1220 # TODO(mdan): Push this block higher in tf.function's call stack. 1221 try: -> 1222 return autograph.converted_call( 1223 original_func, 1224 args, 1225 kwargs, 1226 options=autograph.ConversionOptions( 1227 recursive=True, 1228 optional_features=autograph_options, 1229 user_requested=True, 1230 )) 1231 except Exception as e: # pylint:disable=broad-except 1232 if hasattr(e, "ag_error_metadata"): File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options) 437 try: 438 if kwargs is not None: --> 439 result = converted_f(*effective_args, **kwargs) 440 else: 441 result = converted_f(*effective_args) File ~\AppData\Local\Temp\__autograph_generated_fileq0h7j9t_.py:30, in outer_factory.<locals>.inner_factory.<locals>.tf____call__(self, **kwargs) 28 node = ag__.Undefined('node') 29 onnx_node = ag__.Undefined('onnx_node') ---> 30 ag__.for_stmt(ag__.ld(self).graph_def.node, None, loop_body, get_state, set_state, (), {'iterate_names': 'node'}) 31 outputs = ag__.converted_call(ag__.ld(dict), (), None, fscope) 33 def get_state_4(): File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:463, in for_stmt(iter_, extra_test, body, get_state, set_state, symbol_names, opts) 459 _tf_distributed_iterable_for_stmt( 460 iter_, extra_test, body, get_state, set_state, symbol_names, opts) 462 else: --> 463 _py_for_stmt(iter_, extra_test, body, None, None) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:512, in _py_for_stmt(***failed resolving arguments***) 510 else: 511 for target in iter_: --> 512 body(target) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:478, in _py_for_stmt.<locals>.protected_body(protected_iter) 477 def protected_body(protected_iter): --> 478 original_body(protected_iter) 479 after_iteration() 480 before_iteration() File ~\AppData\Local\Temp\__autograph_generated_fileq0h7j9t_.py:23, in outer_factory.<locals>.inner_factory.<locals>.tf____call__.<locals>.loop_body(itr) 21 node = itr 22 onnx_node = ag__.converted_call(ag__.ld(OnnxNode), (ag__.ld(node),), None, fscope) ---> 23 output_ops = 
ag__.converted_call(ag__.ld(self).backend._onnx_node_to_tensorflow_op, (ag__.ld(onnx_node), ag__.ld(tensor_dict), ag__.ld(self).handlers), dict(opset=ag__.ld(self).opset, strict=ag__.ld(self).strict), fscope) 24 curr_node_output_map = ag__.converted_call(ag__.ld(dict), (ag__.converted_call(ag__.ld(zip), (ag__.ld(onnx_node).outputs, ag__.ld(output_ops)), None, fscope),), None, fscope) 25 ag__.converted_call(ag__.ld(tensor_dict).update, (ag__.ld(curr_node_output_map),), None, fscope) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options) 437 try: 438 if kwargs is not None: --> 439 result = converted_f(*effective_args, **kwargs) 440 else: 441 result = converted_f(*effective_args) File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:62, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op(cls, node, tensor_dict, handlers, opset, strict) 60 pass 61 handler = ag__.Undefined('handler') ---> 62 ag__.if_stmt(ag__.ld(handlers), if_body_1, else_body_1, get_state_1, set_state_1, ('do_return', 'retval_'), 2) 64 def get_state_2(): 65 return () File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1362 else: -> 1363 _py_if_stmt(cond, body, orelse) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse) 1414 def _py_if_stmt(cond, body, orelse): 1415 """Overload of if_stmt that executes a Python if statement.""" -> 1416 return body() if cond else orelse() File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:56, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op.<locals>.if_body_1() 54 nonlocal retval_, do_return 55 pass ---> 56 ag__.if_stmt(ag__.ld(handler), if_body, else_body, get_state, set_state, ('do_return', 'retval_'), 2) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1362 else: -> 1363 _py_if_stmt(cond, body, orelse) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse) 1414 def _py_if_stmt(cond, body, orelse): 1415 """Overload of if_stmt that executes a Python if statement.""" -> 1416 return body() if cond else orelse() File ~\AppData\Local\Temp\__autograph_generated_filetsq4l59p.py:48, in outer_factory.<locals>.inner_factory.<locals>.tf___onnx_node_to_tensorflow_op.<locals>.if_body_1.<locals>.if_body() 46 try: 47 do_return = True ---> 48 retval_ = ag__.converted_call(ag__.ld(handler).handle, (ag__.ld(node),), dict(tensor_dict=ag__.ld(tensor_dict), strict=ag__.ld(strict)), fscope) 49 except: 50 do_return = False File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options) 437 try: 438 if kwargs is not None: --> 439 result = converted_f(*effective_args, **kwargs) 440 else: 441 result = converted_f(*effective_args) File 
~\AppData\Local\Temp\__autograph_generated_filec7_esoft.py:41, in outer_factory.<locals>.inner_factory.<locals>.tf__handle(cls, node, **kwargs) 39 nonlocal retval_, do_return 40 raise ag__.converted_call(ag__.ld(BackendIsNotSupposedToImplementIt), (ag__.converted_call('{} version {} is not implemented.'.format, (ag__.ld(node).op_type, ag__.ld(cls).SINCE_VERSION), None, fscope),), None, fscope) ---> 41 ag__.if_stmt(ag__.ld(ver_handle), if_body, else_body, get_state, set_state, ('do_return', 'retval_'), 2) 42 return fscope.ret(retval_, do_return) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1361 _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1362 else: -> 1363 _py_if_stmt(cond, body, orelse) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse) 1414 def _py_if_stmt(cond, body, orelse): 1415 """Overload of if_stmt that executes a Python if statement.""" -> 1416 return body() if cond else orelse() File ~\AppData\Local\Temp\__autograph_generated_filec7_esoft.py:33, in outer_factory.<locals>.inner_factory.<locals>.tf__handle.<locals>.if_body() 31 try: 32 do_return = True ---> 33 retval_ = ag__.converted_call(ag__.ld(ver_handle), (ag__.ld(node),), dict(**ag__.ld(kwargs)), fscope) 34 except: 35 do_return = False File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options) 437 try: 438 if kwargs is not None: --> 439 result = converted_f(*effective_args, **kwargs) 440 else: 441 result = converted_f(*effective_args) File ~\AppData\Local\Temp\__autograph_generated_filevddqx9qt.py:12, in outer_factory.<locals>.inner_factory.<locals>.tf__version(cls, node, **kwargs) 10 try: 11 do_return = True ---> 12 retval_ = ag__.converted_call(ag__.ld(cls)._common, (ag__.ld(node),), dict(**ag__.ld(kwargs)), fscope) 13 except: 14 do_return = False File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\impl\api.py:439, in converted_call(f, args, kwargs, caller_fn_scope, options) 437 try: 438 if kwargs is not None: --> 439 result = converted_f(*effective_args, **kwargs) 440 else: 441 result = converted_f(*effective_args) File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:122, in outer_factory.<locals>.inner_factory.<locals>.tf___common(cls, node, **kwargs) 120 paddings = ag__.Undefined('paddings') 121 constant_values = ag__.Undefined('constant_values') --> 122 ag__.if_stmt(ag__.ld(cls).SINCE_VERSION < 11, if_body_1, else_body_1, get_state_1, set_state_1, ('constant_values', 'paddings'), 2) 123 cond = ag__.converted_call(ag__.ld(tf).cond, (ag__.converted_call(ag__.ld(check_positive), (ag__.ld(paddings),), None, fscope), ag__.autograph_artifact(lambda : ag__.converted_call(ag__.ld(process_pos_pads), (ag__.ld(x), ag__.ld(paddings), ag__.ld(constant_values)), None, fscope)), ag__.autograph_artifact(lambda : ag__.converted_call(ag__.ld(process_neg_pads), (ag__.ld(x), ag__.ld(paddings), ag__.ld(constant_values)), None, fscope))), None, fscope) 124 try: File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1363, in if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts) 1361 _tf_if_stmt(cond, body, orelse, 
get_state, set_state, symbol_names, nouts) 1362 else: -> 1363 _py_if_stmt(cond, body, orelse) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\control_flow.py:1416, in _py_if_stmt(cond, body, orelse) 1414 def _py_if_stmt(cond, body, orelse): 1415 """Overload of if_stmt that executes a Python if statement.""" -> 1416 return body() if cond else orelse() File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:119, in outer_factory.<locals>.inner_factory.<locals>.tf___common.<locals>.else_body_1() 117 nonlocal paddings, constant_values 118 paddings = ag__.ld(tensor_dict)[ag__.ld(node).inputs[1]] --> 119 constant_values = ag__.if_exp(ag__.converted_call(ag__.ld(len), (ag__.ld(node).inputs,), None, fscope) == 3, lambda : ag__.ld(tensor_dict)[ag__.ld(node).inputs[2]], lambda : 0, 'ag__.converted_call(len, (node.inputs,), None, fscope) == 3') File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\conditional_expressions.py:27, in if_exp(cond, if_true, if_false, expr_repr) 25 return _tf_if_exp(cond, if_true, if_false, expr_repr) 26 else: ---> 27 return _py_if_exp(cond, if_true, if_false) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\operators\conditional_expressions.py:52, in _py_if_exp(cond, if_true, if_false) 51 def _py_if_exp(cond, if_true, if_false): ---> 52 return if_true() if cond else if_false() File ~\AppData\Local\Temp\__autograph_generated_filedezd6jrz.py:119, in outer_factory.<locals>.inner_factory.<locals>.tf___common.<locals>.else_body_1.<locals>.<lambda>() 117 nonlocal paddings, constant_values 118 paddings = ag__.ld(tensor_dict)[ag__.ld(node).inputs[1]] --> 119 constant_values = ag__.if_exp(ag__.converted_call(ag__.ld(len), (ag__.ld(node).inputs,), None, fscope) == 3, lambda : ag__.ld(tensor_dict)[ag__.ld(node).inputs[2]], lambda : 0, 'ag__.converted_call(len, (node.inputs,), None, fscope) == 3') KeyError: in user code: File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend_tf_module.py", line 99, in __call__ * output_ops = self.backend._onnx_node_to_tensorflow_op(onnx_node, File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\backend.py", line 347, in _onnx_node_to_tensorflow_op * return handler.handle(node, tensor_dict=tensor_dict, strict=strict) File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\handler.py", line 59, in handle * return ver_handle(node, **kwargs) File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\backend\pad.py", line 91, in version_11 * return cls._common(node, **kwargs) File "C:\Users\alihe\AppData\Local\Programs\Python\Python39\lib\site-packages\onnx_tf\handlers\backend\pad.py", line 73, in _common * constant_values = tensor_dict[node.inputs[2]] if len( KeyError: ''
The problem that you are facing is due to the use of dynamic padding instead of static pad shape at source of the model. This is exposed when you lower the onnx opset version during export. import warnings warnings.filterwarnings("ignore") #import onnxruntime import math from PIL import Image import requests import matplotlib.pyplot as plt import torch from torch import nn from torchvision.models import resnet50 import torchvision.transforms as T import onnx from onnx_tf.backend import prepare #from onnxsim import simplify torch.set_grad_enabled(False) model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True) url = 'http://images.cocodataset.org/val2017/000000039769.jpg' im = Image.open(requests.get(url, stream=True).raw) transform = T.Compose([ T.Resize(800), T.ToTensor(), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) img = transform(im).unsqueeze(0) model.eval() torch.onnx.export(model, img, 'detr.onnx', opset_version = 10) onnx_model = onnx.load('./detr.onnx') #onnx_model, _ = simplify(model) result = onnx.checker.check_model(onnx_model) tf_rep = prepare(onnx_model) tf_rep.export_graph('./model.pb') Which throws the following output: SymbolicValueError Traceback (most recent call last) c:\Anaconda3\envs\workenv\lib\site-packages\torch\onnx\symbolic_opset9.py in _convert_padding_node(input) 1821 try: -> 1822 padding = [ 1823 symbolic_helper._get_const(v, "i", "padding") for v in input_list c:\Anaconda3\envs\workenv\lib\site-packages\torch\onnx\symbolic_opset9.py in <listcomp>(.0) 1822 padding = [ -> 1823 symbolic_helper._get_const(v, "i", "padding") for v in input_list 1824 ] c:\Anaconda3\envs\workenv\lib\site-packages\torch\onnx\symbolic_helper.py in _get_const(value, desc, arg_name) 169 if not _is_constant(value): --> 170 raise errors.SymbolicValueError( 171 f"ONNX symbolic expected a constant value of the '{arg_name}' argument, " SymbolicValueError: ONNX symbolic expected a constant value of the 'padding' argument, got '509 defined in (%509 : Long(requires_grad=0, device=cpu) = onnx::Sub(%max_size_i, %498), scope: models.detr.DETR:: # C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py:349:0 )' [Caused by the value '509 defined in (%509 : Long(requires_grad=0, device=cpu) = onnx::Sub(%max_size_i, %498), scope: models.detr.DETR:: # C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py:349:0 )' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Sub'.] (node defined in C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py(349): <listcomp> C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py(349): _onnx_nested_tensor_from_tensor_list C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py(313): nested_tensor_from_tensor_list C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\models\detr.py(60): forward c:\Anaconda3\envs\workenv\lib\site-packages\torch\nn\modules\module.py(1182): _slow_forward ... #5: 507 defined in (%507 : Long(requires_grad=0, device=cpu) = onnx::Sub(%478, %466), scope: models.detr.DETR:: # C:\Users\Anurag/.cache\torch\hub\facebookresearch_detr_main\util\misc.py:349:0 ) (type 'Tensor') Outputs: #0: 510 defined in (%510 : int[] = prim::ListConstruct(%459, %509, %459, %508, %459, %507), scope: models.detr.DETR:: ) (type 'List[int]') My suggestion would be to pick up the model from another source. For reference have a look at: ONNX symbolic expected a constant value of the padding argument
7
1
75,042,153
2023-1-7
https://stackoverflow.com/questions/75042153/cant-load-from-autotokenizer-from-pretrained-typeerror-duplicate-file-name
I'm trying to load tokenizer and seq2seq model from pretrained models. from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization") But I got this error. File ~/.local/lib/python3.8/site-packages/google/protobuf/descriptor.py:1028, in FileDescriptor.__new__(cls, name, package, options, serialized_options, serialized_pb, dependencies, public_dependencies, syntax, pool, create_key) 1026 raise RuntimeError('Please link in cpp generated lib for %s' % (name)) 1027 elif serialized_pb: -> 1028 return _message.default_pool.AddSerializedFile(serialized_pb) 1029 else: 1030 return super(FileDescriptor, cls).__new__(cls) TypeError: Couldn't build proto file into descriptor pool: duplicate file name (sentencepiece_model.proto) I tried updating or downgrading the protobuf version. But I couldn't fix
I ran into the same issue when trying to use the microsoft/deberta-v3-small model. That is, at first it complained about not being able to find protobuf, and when I installed the latest, it asked for version 3.20.x. The issue happened after I downgraded to the lower version. Anyway, I was experimenting with it on a locally-running Jupyter notebook. Rerunning that cell didn't help. But when I chose to "Restart & Run All," the problem went away. Therefore, to solve your issue, I believe that you need to restart the Python instance where it cached the latest version of protobuf in the first place. Steps to reproduce: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("some_model") # outcome: some error asking for `protobuf` pip install protobuf Rerun the code above; some error asking for protobuf==3.20.x pip uninstall protobuf pip install protobuf==3.20 Rerun the code above; same error as in OP Restart the Python instance (notebook, app, etc.); ✓ no error
6
6
75,073,085
2023-1-10
https://stackoverflow.com/questions/75073085/passing-array-object-from-php-to-python
This is my code so far $dataraw = $_SESSION['image']; $datagambar = json_encode($dataraw); echo '<pre>'; print_r($dataraw); echo '</pre>'; print($escaped_json); $type1 = gettype($dataraw); print($type1); $type2 = gettype($datagambar); print($type2); This is $dataraw output, the type is array Array ( [0] => Array ( [FileName] => 20221227_202035.jpg [Model] => SM-A528B [Longitude] => 106.904251 [Latitude] => -6.167665 ) [1] => Array ( [FileName] => 20221227_202157.jpg [Model] => SM-A528B [Longitude] => 106.9042428 [Latitude] => -6.1676580997222 ) ) This is $datagambar output, the type is string [{"FileName":"20221227_202035.jpg","Model":"SM-A528B","Longitude":106.904251,"Latitude":-6.167665},{"FileName":"20221227_202157.jpg","Model":"SM-A528B","Longitude":106.9042428,"Latitude":-6.167658099722223}] Pass to python echo shell_exec("D:\Anaconda\python.exe D:/xampp/htdocs/Klasifikasi_KNN/admin/test.py $datagambar"); This is my python test.py import sys, json import os import pymysql import pandas as pd import numpy as np import matplotlib.pyplot as plt import mplcursors as mpl from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score,hamming_loss,classification_report json_list = [] escaped_json1 = sys.argv[1] # this is working but its only a string of array json # print(escaped_json1) # this is working but its only a string of array json json_list.append(json.loads(escaped_json1)) parsed_data = json.loads(escaped_json1) print(json_list) print(parsed_data) When i do print(escaped_json1) it display a string of array json from php($datagambar). python output: Hello world has been called [{"FileName":"20221227_202035.jpg","Model":"SM-A528B","Longitude":106.904251,"Latitude":-6.167665},{"FileName":"20221227_202157.jpg","Model":"SM-A528B","Longitude":106.9042428,"Latitude":-6.167658099722223}] I use apache as my server with phpmyadmin and anaconda. T tried using print(type(escapedjson1)) or print(type(escapedjson1)) but it doesn't display the type json.loads didn't change the type of data to python array How to loads it and make the string array into a variable array so i can call it and convert it to dataframe?.
Update: A Completely different approach There is a difficulty with the PHP script JSON-encoding a structure to produce a JSON string and then passing it as a command line argument since the string needs to be placed in double quotes because there can be embedded spaces in the encoded string. But the string itself can contain double quotes as start of string characters and within such a string. Confused? Who wouldn't be? There is no problem however with writing such a string to a file and having the Python program read it in and decode it. But we don't want to have to deal with temporary files. So the solution is to pipe the data to the Python program where it can then be read in as stdin Let's assume your array looks like: $arr = [ [ "FileName" => "\"\\ \nRon's20221227_202035.jpg", "Model" => "27_202035.jpg", "Longitude" => 106.90425, "Latitude" => 106.90425 ], [ "FileName" => "20221227_202157.jpg", "Model" => "SM-A528B", "Longitude" => 106.9042428, "Latitude" => -6.1676580997222 ] ]; Note that I have modified your example slightly so that the first FileName field contains a " character, a ' character, an escape sequence \n representing the newline character and finally some spaces. Although your example does not contain these characters or some other escape sequence, I would like to be able to handle such a condition should it arise. This solution should work with such input. PHP File <?php $arr = [ [ "FileName" => "\"\\ \nRon's20221227_202035.jpg", "Model" => "27_202035.jpg", "Longitude" => 106.90425, "Latitude" => 106.90425 ], [ "FileName" => "20221227_202157.jpg", "Model" => "SM-A528B", "Longitude" => 106.9042428, "Latitude" => -6.1676580997222 ] ]; // Encode string as JSON: $json = json_encode($arr); // Pipe the JSON string to the Python process's stdin and // read the result from its stdout: $descriptorspec = array( 0 => array("pipe", "r"), // stdin is a pipe that the child will read from 1 => array("pipe", "w") // stdout is a pipe that the child will write to ); $options = array('bypass_shell' => True); // Only has effect on Windows //$process = proc_open("python3 test.py", $descriptorspec, $pipes, null, null, $options); // For your actual environment: $process = proc_open("D:/Anaconda/python.exe D:/xampp/htdocs/Klasifikasi_KNN/admin/test.py", $descriptorspec, $pipes, null, null, $options); // Pipe the input for the Python program and close the pipe: fwrite($pipes[0], $json); fclose($pipes[0]); // Read the result from the Python program and close its pipe $result = stream_get_contents($pipes[1]); fclose($pipes[1]); # Now that we have closed the pipes, we can close the process: $return_code = proc_close($process); echo "Result from stdin:\n$result\n"; test.py import json import sys DEBUG = True # for debugging purposes if DEBUG: import os for k, v in sorted(os.environ.items()): print(k, '->', v, file=sys.stderr) else: import numpy as np import pandas as pd # Load from stdin: arr = json.load(sys.stdin) # print each dictionary of the array: for d in arr: print(d) Prints: {'FileName': '"\\ \nRon\'s20221227_202035.jpg', 'Model': '27_202035.jpg', 'Longitude': 106.90425, 'Latitude': 106.90425} {'FileName': '20221227_202157.jpg', 'Model': 'SM-A528B', 'Longitude': 106.9042428, 'Latitude': -6.1676580997222}
4
5
75,056,435
2023-1-9
https://stackoverflow.com/questions/75056435/how-can-you-run-singular-parametrized-tests-in-pytest-if-the-parameter-is-a-stri
I have a test that looks as following: @pytest.mark.parametrize('param', ['my param', 'my param 2']) def test_param(self,param): ... This works fine when calling this test with python3 -m pytest -s -k "test_param" However, if I want to target a specific test as following: python3 -m pytest -s -k "test_param[my param]" I get the error message ERROR: Wrong expression passed to '-k': my param: at column 4: expected end of input; got identifier Also, when my input string contains a quotation mark ', I get the error ERROR: Wrong expression passed to '-k': ... : at column 51: expected end of input; got left parenthesis and if my string contains both " and ', I am completely unable to call it with the -k option without the string terminating in the middle. How can I run tests with string parameters that contain these symbols? I am currently creating a dict and supplying range(len(my_dict)) as the parameter so I can access these variables via index, but I would prefer to be able to directly enter them in the commandline. EDIT: The current suggestions are all great and already solve some of my problems. However, I'm still not sure how I would call singular tests if my test function looked like this (it has more than one entry as opposed to this minimal example): @pytest.mark.parametrize('input, expected', [ ( """ integer :: & my_var !< my comment """, {'my_var': 'my comment'} ) ]) def test_fetch_variable_definitions_multiline(input,expected): ...
Answer to your question including "EDIT" section: You can use following syntax to run pytest pytest .\tests\test_package_1\test_module_1_1.py::TestClass111::test_1111["param_name"] where param_name can be both value of single parameter or pytest param id To assign some id to parameter set you can use following syntax: class TestClass111: @pytest.mark.parametrize( "param_1_name, param_2_name", [ ("Some difficult param 1", {"a": 1}), ("Some difficult param 2", {"b": 2}), ], ids=["id1", "id2"] ) def test_1111(self, request, param_1_name, param_2_name): or class TestClass111: @pytest.mark.parametrize( "param_1_name, param_2_name", [ pytest.param("Some difficult param 1", {"a": 1}, id="id1"), pytest.param("Some difficult param 2", {"b": 2}, id="id2"), ], ) def test_1111(self, request, param_1_name, param_2_name): so you can access test with param set with id=id1 like this: pytest .\tests\test_package_1\test_module_1_1.py::TestClass111::test_1111["id1"]
5
2
75,043,654
2023-1-7
https://stackoverflow.com/questions/75043654/converting-a-massive-into-a-3-dimensional-bitmap
Problem I need this massive to serve as an input (for C based arduino). This is our massive from the example above in the required format: const byte bitmap[8][8] = { {0xFF, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xFF}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0xFF, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xFF}, }; As you can see the above massive uses hex, for example 0xFF in binary is 0b11111111 Which makes sense: imagine bits coming out of your monitor through z axis forming a full line of 8 squares. If you break up the byte into bits and imagine those bits forming layers (with parallel bits) then you can see that this massive represents the 3D cube (shown above in itroduction). *Alternatively you could visualize it as whole bytes through z axis - you would end up with the 3D cube from introduction either way. I need a function that would convert the massive such that: Input: [[63, 62, 61, 60, 59, 58, 57, 56, 48, 40, 32, 24, 16, 8, 0, 1, 2, 3, 4, 5, 6, 7, 15, 23, 31, 39, 47, 55], [63, 56, 0, 7], [63, 56, 0, 7], [35, 36, 27, 56, 28, 0, 7, 63], [63, 56, 0, 7, 36, 35, 27, 28], [63, 7, 56, 0], [7, 0, 56, 63], [7, 6, 5, 4, 3, 2, 1, 0, 8, 16, 32, 24, 40, 48, 56, 57, 58, 59, 60, 61, 62, 63, 55, 39, 47, 31, 23, 15]] Output: { {0xFF, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xFF}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x18, 0x18, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0x81, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x81}, {0xFF, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xFF}, }; Attempt Below is my attempt: massive = [[63, 62, 61, 60, 59, 58, 57, 56, 48, 40, 32, 24, 16, 8, 0, 1, 2, 3, 4, 5, 6, 7, 15, 23, 31, 39, 47, 55], [63, 56, 0, 7], [63, 56, 0, 7], [35, 36, 27, 56, 28, 0, 7, 63], [63, 56, 0, 7, 36, 35, 27, 28], [63, 7, 56, 0], [7, 0, 56, 63], [7, 6, 5, 4, 3, 2, 1, 0, 8, 16, 32, 24, 40, 48, 56, 57, 58, 59, 60, 61, 62, 63, 55, 39, 47, 31, 23, 15]] rows, cols = (8, 8) arr = [['' for i in range(cols)] for j in range(rows)] arr[0][0] = '' for row in arr: print(row) def convert(): for i in range(0, 64): for n in range(0,64): for each in massive: if i == massive[massive.index(each)][n]: arr[massive.index(each)][n] = '1' else: arr[massive.index(each)][n] = '0' convert() for row in arr: print(row) Output: ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] ['', '', '', '', '', '', '', ''] Traceback (most recent call last): File "main.py", line 28, in <module> convert() File "main.py", line 23, in convert if i == massive[massive.index(each)][n]: IndexError: list index out of range I do understand my mistake here, but I am stuck and cannot think of a neat way to get the desired output. Edit: * Please consider the layers of the cube to go from bottom to top. Thus, massive[0] would be the first layer and hence the bottom-most one, whereas massive[7] would be the last layer and hence the top one (when visualised as a cube, see 3D Massive representation in Introduction).
Definitely not the most efficient, but hopefully quite readable and simple solution. Start with a simple function that converts the indices into the desired layer bitmaps: def bitmap(indices, side=8): """Transform a list of indices to an 8x8 bitmap with those indices turned on""" indices = set(indices) return [[int(side*i+j in indices) for j in range(side)] for i in range(side)] For example, for the first row in massive, you'd get: [[1, 1, 1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1]] This matches your illustration of the layers, and can be used to also create them visually with matplotlib -- plt.imshow(bitmap(massive[0]), cmap='gray_r') plt.show() Or even as a 3D plot using voxels: cube = np.array([bitmap(layer) for layer in massive]) fig, ax = plt.subplots(subplot_kw={"projection": "3d"}) # Use transpose of `cube` to get the direction right # (bottom->up rather than left->right) ax.voxels(cube.T, edgecolor='k') ax.set(xticklabels=[], yticklabels=[], zticklabels=[]) plt.show() Then a small function to add those vertical layers as needed: def hexaize(massive, side=8): """Adds the values for each column across vertical layers""" final_map = [[0] * side for _ in range(side)] # Reverse-iterate over massive since it's given bottom-up and not top-down for i, layer in enumerate(reversed(massive)): for j, row in enumerate(bitmap(layer)): for k, val in enumerate(row): final_map[i][j] += val*2**k # Finally convert the added values to hexadecimal # Use the f-string formatting to ensure upper case and 2-digits return [[f"0x{val:02X}" for val in row] for row in final_map] Then calling hexaize(massive) returns: [['0xFF', '0x81', '0x81', '0x81', '0x81', '0x81', '0x81', '0xFF'], ['0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'], ['0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'], ['0x81', '0x00', '0x00', '0x18', '0x18', '0x00', '0x00', '0x81'], ['0x81', '0x00', '0x00', '0x18', '0x18', '0x00', '0x00', '0x81'], ['0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'], ['0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'], ['0xFF', '0x81', '0x81', '0x81', '0x81', '0x81', '0x81', '0xFF']] Finally, if you want the exact output as described above (in C-like notation?), then you can chain several replace calls like so: def massive_to_arduino(massive, side=8): """Converts a massive to Arduino style input""" # Get the hexa format of massive in_hex = hexaize(massive, side=side) # Replace square brackets with curly ones in_hex = str(in_hex).replace("[", "{").replace("]", "}") # Break rows to join them with new lines and indentation in_hex = "},\n ".join(in_hex.split("},")) # Add new line, indentation, and semicolon to start and end return in_hex.replace("{{", "{\n {").replace("}}", "},\n};") And then calling print(massive_to_arduino(massive)) produces { {'0xFF', '0x81', '0x81', '0x81', '0x81', '0x81', '0x81', '0xFF'}, {'0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'}, {'0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'}, {'0x81', '0x00', '0x00', '0x18', '0x18', '0x00', '0x00', '0x81'}, {'0x81', '0x00', '0x00', '0x18', '0x18', '0x00', '0x00', '0x81'}, {'0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'}, {'0x81', '0x00', '0x00', '0x00', '0x00', '0x00', '0x00', '0x81'}, {'0xFF', '0x81', '0x81', '0x81', '0x81', '0x81', '0x81', '0xFF'}, };
3
1
75,073,571
2023-1-10
https://stackoverflow.com/questions/75073571/how-to-add-a-google-formula-containing-commas-and-quotes-to-a-csv-file
I'm trying to output a CSV file from Python and make one of the entries a Google sheet formula: This is what the formula var would look like: strLink = "https://xxxxxxx.xxxxxx.com/Interact/Pages/Content/Document.aspx?id=" + strId + "&SearchId=0&utm_source=interact&utm_medium=general_search&utm_term=*" strLinkCellFormula = "=HYPERLINK(\"" + strLink + "\", \"" + strTitle + "\")" and then for each row of the CSV I have this: strCSV = strCSV + strId + ", " + "\"" + strTitle + "\", " + strAuthor + ", " + strDate + ", " + strStatus + ", " + "\"" + strSection + "\", \"" + strLinkCellFormula +"\"\n" Which doesn't quite work, the hyperlink formula for Google sheets is like so: =HYPERLINK(url, title) and I can't seem to get that comma escaped. So in my Sheet I am getting an additional column with the title in it and obviously the formula does not work.
Instead of reinventing the wheel, you should write your CSV rows using the builtin csv.writer class. This takes care of escaping any commas and quotes in the data, so you don't need to build your own escape logic. This helps you avoid the mess of escaping in your strLinkCellFormula = ... and strCSV = strCSV + ... lines. For example: import csv urls = ["https://google.com", "https://stackoverflow.com/", "https://www.python.org/"] titles = ["Google", "Stack Overflow", "Python"] with open("file.csv", "w") as fw: writer = csv.writer(fw) writer.writerow(["Company", "Website"]) for u, t in zip(urls, titles): formula = f'=HYPERLINK("{u}", "Visit {t}")' row = [t, formula] writer.writerow(row) Note that in the line formula = ... above, I used the f-string syntax to format the URL and title into the string. I also used apostrophes to define the string, since I knew that the string was going to contain quotation marks and I didn't want to bother escaping them. This gives the following CSV: Company,Website Google,"=HYPERLINK(""https://google.com"", ""Visit Google"")" Stack Overflow,"=HYPERLINK(""https://stackoverflow.com/"", ""Visit Stack Overflow"")" Python,"=HYPERLINK(""https://www.python.org/"", ""Visit Python"")" where the escaping of commas and quotes is already taken care of. It is also read by Excel/GSheets correctly, since it conforms to the standard CSV format: For your specific case, you'd write to your CSV file like so: with open(filename, "w") as wf: writer = csv.writer(wf) writer.writerow(headers) # if necessary for ...: strLink = f"https://xxxxxxx.xxxxxx.com/Interact/Pages/Content/Document.aspx?id={strID}&SearchId=0&utm_source=interact&utm_medium=general_search&utm_term=*" strLinkCellFormula = f'=HYPERLINK("{strLink}", "{strTitle}")' row = [strId, strTitle, strAuthor, strDate, strStatus, strSection, strLinkCellFormula] writer.writerow(row)
3
3
75,043,093
2023-1-7
https://stackoverflow.com/questions/75043093/python-compiler-says-im-adding-an-extra-argument-to-int-in-an-enum
I'm trying to create a custom enumerator that can replace an int, but has additional fields. from enum import IntEnum class MD_Fields(IntEnum): ACCOUNT = (0, "Account", True) M_DESCRIPT = (4, "Description", False) def __new__(cls, value: int, description: str, identifier: bool): obj = int.__new__(cls, value) obj.description = description obj.identifier = identifier return obj if __name__ == '__main__': print(MD_Fields.M_DESCRIPT) However, this code raises the following problem: Traceback (most recent call last): File ".../JetBrains/PyCharmCE2022.3/scratches/scratch.py", line 3, in <module> class MD_Fields(IntEnum): File "/usr/lib/python3.7/enum.py", line 223, in __new__ enum_member._value_ = member_type(*args) TypeError: int() takes at most 2 arguments (3 given) I don't understand what's happening. (I didn't find a meaningful definition of int.__new__)
You were close - it was only missing the obj._value_ = value assignment, which Enum needs: from enum import IntEnum class MD_Fields(IntEnum): ACCOUNT = (0, "Account", True) M_DESCRIPT = (4, "Description", False) def __new__(cls, value: int, description: str, identifier: bool): obj = int.__new__(cls, value) obj._value_ = value obj.description = description obj.identifier = identifier return obj After adding that, it works as intended: >>> for member in MD_Fields: ... member, int(member), member.description, member.identifier ... (<MD_Fields.ACCOUNT: 0>, 0, 'Account', True) (<MD_Fields.M_DESCRIPT: 4>, 4, 'Description', False) If you were only extending Enum instead of IntEnum (which is just a shortcut for extending both int and Enum), you would use obj = object.__new__(cls) instead. The reason that _value_ needs to be set is because EnumMeta.__new__ will use its default behavior for setting _value_ when it wasn't already stored on the member. The relevant portion from the source for EnumMeta.__new__ (in 3.10), which should look familiar from the error that occurred before that attribute was assigned a value in MD_Fields.__new__: if not hasattr(enum_member, '_value_'): if member_type is object: enum_member._value_ = value else: enum_member._value_ = member_type(*args) Normally, the ._value_ attribute would be populated using the value to the right of the = when the member was defined. That value ends up being exposed through the .value attribute for each member. In 3.11, the error was improved to include the hint that _value_ needs to be set: TypeError: _value_ not set in __new__, unable to create it
3
3
75,019,859
2023-1-5
https://stackoverflow.com/questions/75019859/is-there-a-way-to-include-shell-scripts-in-a-python-package-with-pyproject
Previously with setup.py you could just add setuptools.setup( ... scripts=[ "scripts/myscript.sh" ] ) and the shell script was just copied to the path of the environment. But with the new pyproject specification, this seems to not be possible any more. According to the Python specification of entry points and the setuptools specification, only Python functions that will be wrapped later are allowed. Does anyone know a simple way of doing this like in setup.py? Or at least simpler than just doing a Python function that calls the shell script with subprocess, which is what I think I will do if there's no simpler way.
Probably using the script-files field of the [tool.setuptools] section should work: [tool.setuptools] script-files = ["scripts/myscript.sh"] It was not standardized in PEP 621, so it belongs in a setuptools-specific section. Setuptools marks it as deprecated, but personally I would assume that it is safe to use for the next couple of years at least. It seems like such scripts are standardized in the wheel file format, so it is a bit strange that they are not in pyproject.toml's [project] section. Maybe it will be added later, but that is just speculation. Reference: https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html#setuptools-specific-configuration https://packaging.python.org/en/latest/specifications/binary-distribution-format/ https://discuss.python.org/t/whats-the-status-of-scripts-vs-entry-points/18524
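For orientation, a minimal pyproject.toml using this field might look like the sketch below; the project name and version are placeholders, and only the last section is the part specific to shell scripts:

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "mypackage"
version = "0.1.0"

[tool.setuptools]
script-files = ["scripts/myscript.sh"]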
5
2
75,067,735
2023-1-10
https://stackoverflow.com/questions/75067735/generate-a-period-timestamps-in-a-dataframe-with-multiple-columns-and-fill-missi
I have a DataFrame with multiple columns and it looks like this: date col1 col2 col3 2023-01-01 Y N NaN 2023-01-02 Y N Y Knowing the start and the end timestamp of df['date'], I want to generate the in-between timestamps with the desired frequency. I can do that using the code below: new_date = pd.Series(pd.date_range(start=df.index[0], end=df.index[-1], freq= '15min')) However, I would also like to generate an equal number of rows (len(new_date)) for the rest of the columns and fill the in-between values with the value before. I know the filling can be done with .ffill(), but I don't know how to generate the missing rows. The end result should look like this: date col1 col2 col3 2020-11-01 00:00:00 Y N NaN 2020-11-01 00:15:00 Y N NaN 2020-11-01 00:30:00 Y N NaN ... 2023-01-02 00:00:00 Y N Y
You can reindex with those datetimes and forward fill the missings. This way, if the starting value was NaN, the followings will stay as NaN as well: >>> df.reindex(new_date, method="ffill") col1 col2 col3 2023-01-01 00:00:00 Y N NaN 2023-01-01 00:15:00 Y N NaN 2023-01-01 00:30:00 Y N NaN 2023-01-01 00:45:00 Y N NaN 2023-01-01 01:00:00 Y N NaN ... ... ... ... 2023-01-01 23:00:00 Y N NaN 2023-01-01 23:15:00 Y N NaN 2023-01-01 23:30:00 Y N NaN 2023-01-01 23:45:00 Y N NaN 2023-01-02 00:00:00 Y N Y [97 rows x 3 columns] where new_date is pd.date_range(start=df.index[0], end=df.index[-1], freq="15min").
3
3
75,057,274
2023-1-9
https://stackoverflow.com/questions/75057274/saving-custom-tablenet-model-vgg19-based-for-table-extraction-azure-databric
I have a model based on TableNet and VGG19, the data (Marmoot) for training and the saving path is mapped to a datalake storage (using Azure). I'm trying to save it in the following ways and get the following errors on Databricks: First approach: import pickle pickle.dump(model, open(filepath, 'wb')) This saves the model and gives the following output: WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 31). These functions will not be directly callable after loading. Now when I try to reload the mode using: loaded_model = pickle.load(open(filepath, 'rb')) I get the following error (Databricks show in addition to the following error the entire stderr and stdout but this is the gist): ValueError: Unable to restore custom object of type _tf_keras_metric. Please make sure that any custom layers are included in the `custom_objects` arg when calling `load_model()` and make sure that all layers implement `get_config` and `from_config`. Second approach: model.save(filepath) and for the I get the following error: Fatal error: The Python kernel is unresponsive. The Python process exited with exit code 139 (SIGSEGV: Segmentation fault). The last 10 KB of the process's stderr and stdout can be found below. See driver logs for full logs. --------------------------------------------------------------------------- Last messages on stderr: Mon Jan 9 08:04:31 2023 Connection to spark from PID 1285 Mon Jan 9 08:04:31 2023 Initialized gateway on port 36597 Mon Jan 9 08:04:31 2023 Connected to spark. 2023-01-09 08:05:53.221618: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA and much more, its hard to find the proper place of error form all of the stderr and stdout. It shows the entire stderr and stdout which makes it very hard to find the solution (it shows all the stderr and stdout including the training and everything) Third approach (partially): I also tried: model.save_weights(weights_path) but once again I was unable to reload them (this approach was tried the least) Also I tried saving the checkpoints by adding this: model_checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath, monitor = "val_table_mask_loss", verbose = 1, save_weights_only=True) as a callback in the fit method (callbacks=[model_checkpoint]) but in the end of the first epoch it will generate the following error(I show the end of the Traceback): h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/_objects.pyx in h5py._objects.with_phil.wrapper() h5py/h5f.pyx in h5py.h5f.create() OSError: Unable to create file (file signature not found) When I use the second approach on a platform that is not Databricks it works fine, but then when I try to load the model I get an error similar to the first approach loading. Update 1 my variable filepath that I try to save to is a dbfs reference, and my dbfs is mapped to the datalake storage Update 2 When trying as suggested in the comments, with the following answer I get the following error: ----> 3 model2 = keras.models.load_model("/tmp/model-full2.h5") ... ValueError: Unknown layer: table_mask. Please ensure this object is passed to the `custom_objects` argument. 
See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details. Update 3: So I try following the error plus this answer: model2 = keras.models.load_model("/tmp/model-full2.h5", custom_objects={'table_mask': table_mask}) but then I get the following error: TypeError: 'KerasTensor' object is not callable
Try making the following changes to your custom object(s), so they can be properly serialized and deserialized: Add the keyword arguments to your constructor: def __init__(self, **kwargs): super(TableMask, self).__init__(**kwargs) Rename table_mask to TableMask to avoid naming conflicts. So when you load your model, it will look something like this: model = keras.models.load_model("/tmp/path", custom_objects={'TableMask': TableMask, 'CustomObj2': CustomObj2, 'CustomMetric': CustomMetric}) Update from question author: We found a few errors in my code: I had 2 custom layers with the same name as a variable (beginner's mistake) I needed to add the custom objects to the load method via the custom_objects keyword, as the answer suggested I also needed to change the __init__ function as the answer suggests I had a custom scoring class that I also needed to add to the custom_objects Also I used the following answer that @AloneTogether suggested in the comments (this answer is the way I chose to save and load the model, plus the extra data we wrote in the above list) After all that, the saving, loading, and predicting worked great
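For readers hitting the same error, a minimal sketch of a serializable custom layer is shown below; the real TableMask internals are not given in the question, so the layer body here is purely illustrative:

from tensorflow import keras

class TableMask(keras.layers.Layer):
    def __init__(self, filters=1, **kwargs):
        super().__init__(**kwargs)  # forwards name/dtype etc. so Keras can rebuild the layer
        self.filters = filters
        self.conv = keras.layers.Conv2D(filters, 1, activation="sigmoid")

    def call(self, inputs):
        return self.conv(inputs)

    def get_config(self):
        # include every constructor argument so load_model can recreate the layer
        config = super().get_config()
        config.update({"filters": self.filters})
        return config

# reloading then works with:
# model = keras.models.load_model("/tmp/path", custom_objects={"TableMask": TableMask})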
4
0
75,065,937
2023-1-10
https://stackoverflow.com/questions/75065937/how-to-make-calculation-inside-annotate
This one generates an error when I run it: qs = User.objects.annotate(days=(datetime.now() - F("created_at")).days) AttributeError: 'CombinedExpression' object has no attribute 'days' How can I make that calculation as an annotation? When I run this code, it works fine: qs = User.objects.annotate(days=(datetime.now() - F("created_at")))
This can be achieved with a combination of ExpressionWrapper, which tells django what the output field type should be, and ExtractDay which, well, extracts the day. In this case, the output field is a timedelta object (i.e DurationField). ExtractDay is just a DB-level function which the django ORM provides. from django.db.models import DateTimeField, DurationField, ExpressionWrapper, F from django.db.models.functions import ExtractDay qs = User.objects.annotate( days=ExtractDay( ExpressionWrapper( datetime.now() - F("created_at"), output_field=DurationField())) )
3
2
75,064,656
2023-1-10
https://stackoverflow.com/questions/75064656/printing-pytorch-tensor-from-gpu-or-move-to-cpu-and-or-detach
I'm starting Pytorch and still trying to understand the basic concepts. If I have a network n on the GPU that produces an output tensor out, can it be printed to stdout directly? Or should it first be moved to the cpu, or be detached from the graph before printing? Tried several combinations below involving .cpu() and .detach() import torch.nn as nn import torch class Net(nn.Module): def __init__(self): super().__init__() self.layers = nn.Sequential( nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 3), ) def forward(self, x): return self.layers(x) device = torch.device("cuda:0") # assume its available x = torch.rand(10, 5).to(device) net = Net().to(device) # Pretend we are in a training loop iteration out = net(x) print(f"The output is {out.max()}") print(f"The output is {out.max().detach()}") print(f"The output is {out.max().cpu()}") print(f"The output is {out.max().cpu().detach()}") # continue training iteration and repeat more iterations in training loop I got the same output for all 4 methods. Which is the correct way?
You should not be surprised that all four calls print the same value; neither cpu() nor detach() changes the value itself. cpu() transfers the tensor to the CPU, and detach() detaches the tensor from the computation graph so that autograd does not track it for future backpropagation. Usually .detach().cpu() is what I do, since it detaches the tensor from the computation graph first and then moves it to the CPU for further processing. .cpu().detach() is also fine, but in that order autograd still takes the cpu() operation into account, whereas with .detach().cpu() the copy to the CPU is not tracked by autograd, which is what we want. That's it. It's only these little things that are different - the value is the same in all cases.
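As a small illustration of that pattern, continuing the net/x example from the question:

out = net(x)                    # lives on the GPU, tracked by autograd
val = out.max().detach().cpu()  # detach from the graph, then copy to the CPU
print(val.item())               # .item() turns the 0-dim tensor into a plain Python float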
7
10
75,064,556
2023-1-10
https://stackoverflow.com/questions/75064556/how-can-i-vectorize-and-speed-up-this-pandas-iterrows
I cannot understand how to use previous indexes within an apply() or similar. This is the code: for i, row in data.iterrows(): index = data.index.get_loc(i) if index == 0: pass else: # changes data.at[i, '1_Day_%_Change'] = ( data.at[data.index[index], 'Adj_Close'] / data.at[data.index[index-1], 'Adj_Close'] ) - 1 data.at[i, '5_Day_%_Change'] = data.at[data.index[index], 'Adj_Close'] / data.at[data.index[index-5], 'Adj_Close'] - 1 data.at[i, '1_Month_%_Change'] = data.at[data.index[index], 'Adj_Close'] / data.at[data.index[index-21], 'Adj_Close'] - 1 data.at[i, '6_Monthr_%_Change'] = data.at[data.index[index], 'Adj_Close'] / data.at[data.index[index-151], 'Adj_Close'] - 1 data.at[i, '1_Year_%_Change'] = data.at[data.index[index], 'Adj_Close'] / data.at[data.index[index-252], 'Adj_Close'] - 1 data is the dataframe, and the goal is just to make % changes for stock prices. All I am doing is dividing the current row's 'Adj Close' price by the price X rows ago. How can I speed this up?
Use the diff and shift methods, which operate on whole columns at once instead of iterating row by row. Example code: df['1_Day_%_Change'] = df['Adj_Close'].diff() / df['Adj_Close'].shift(1) df['5_Day_%_Change'] = df['Adj_Close'].diff(5) / df['Adj_Close'].shift(5)
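If you want every horizon from the question in one go, one possible generalization (keeping the same column-naming scheme) is:

horizons = {'1_Day': 1, '5_Day': 5, '1_Month': 21, '6_Month': 151, '1_Year': 252}
for name, n in horizons.items():
    df[f'{name}_%_Change'] = df['Adj_Close'].diff(n) / df['Adj_Close'].shift(n)

which matches what pct_change(n) computes when there are no missing values.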
3
2
75,063,547
2023-1-9
https://stackoverflow.com/questions/75063547/find-maximum-value-in-a-list-of-dicts
I'm new to Python, and I've been stuck at one point for several days now. There is a list of dicts like this one: dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] I need to output the value in prod based on the maximum value of the price. Here is the desired result: Green I have tried many ways based on information I found on SO: dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] L = [v for v in dd if v['price']==max([u['price']for u in dd])][0]['prod'] print(L) Output: Blue (Almost correct, but "Blue" does not have the maximum value of the price!) dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] L = max(dd, key=lambda x:x['price']) print(L) Output: {'prod': 'Blue', 'price': '9.718'} dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] L = max(e['price'] for e in dd) print(L) Output: 9.718 from operator import itemgetter dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] L = max(map(itemgetter('price'), dd)) print(L) Output: 9.718 dd = [{'prod': 'White', 'price': '80.496'}, {'prod': 'Blue', 'price': '9.718'}, {'prod': 'Green', 'price': '7161.3'}] seq = [x['price'] for x in dd] L = max(seq) print(L) Output: 9.718 In all cases, the maximum value is 9.718 and not 7161.3. How can I fix this? I'm using MS Visual Studio running Python 3.9.
You need to convert the price values to floats for the key parameter: max(dd, key=lambda x: float(x['price']))['prod'] This outputs: Green
3
4
75,025,513
2023-1-5
https://stackoverflow.com/questions/75025513/is-this-a-general-bug-of-osmnxs-installation-cannot-import-shapely-geos-impor
When I want to import osmnx, this error comes up. I created a new environment before and followed the standard installation process via conda.
You're using a years-old version of OSMnx and a brand new version of Shapely. They are incompatible. OSMnx >= 1.3 works with Shapely >= 2.0, see here. OSMnx < 1.3 works with Shapely < 2.0, see here. Recreate your environment (and optionally explicitly specify osmnx=1.3.*) and it'll work. Make sure you follow the documented installation instructions and honor the dependency versions specified in OSMnx's requirements.txt file.
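As a rough sketch of the recreation step (the environment name is arbitrary, and the version pin is optional as noted above):

conda create -n ox -c conda-forge --strict-channel-priority osmnx=1.3.*
conda activate ox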
3
4
75,062,271
2023-1-9
https://stackoverflow.com/questions/75062271/aggregating-df-columns-but-not-duplicates
Is there a neat way to aggregate columns into a new column without duplicating information? For example, if I have a df: Description Information 0 text1 text1 1 text2 text3 2 text4 text5 And I want to create a new column called 'Combined', which aggregates 'Description' and 'Information' to get: Description Information Combined 0 text1 text1 text1 1 text2 text3 text2 text3 2 text4 text5 text4 text5 So far I have been using np.where and [mask] to check for duplicates before aggregating with df['Combined'] = df[['Description', 'Information']].agg(' '.join, axis=1) Although this works, it is not practical on a larger scale, grateful if anyone knows of a simpler way!
You can first run unique: df['Combined'] = (df[['Description', 'Information']] .agg(lambda x: ' '.join(x.unique()), axis=1) ) Output: Description Information Combined 0 text1 text1 text1 1 text2 text3 text2 text3 2 text4 text5 text4 text5
3
1
75,062,113
2023-1-9
https://stackoverflow.com/questions/75062113/how-to-check-if-list-includes-an-element-using-match-case
I'm trying to check if a single element is in a list using match case. I'm not very familiar with these new keywords so 90% sure I'm using them wrong. Regardless, is there a way to do this? This is my code. I'm expecting for this to print "hi detected in list. Hi!" and "hello detected in list. Hello!", but the match statement doesn't seem to work this way. mylist= ["hello", "hi", 123, True] match mylist: case ['hi']: print("hi detected in list. Hi!") case ['hello']: print("hello detected in list. Hello!") Is there a way to check if a list includes an element using match case?
Using match/case is not the most appropriate way to determine if a list contains some particular value. However, to answer the question as asked, you can iterate over the list and match each element individually: mylist = ["hello", "hi", 123, True] for element in mylist: match element: case 'hello': print('hello detected') case 'hi': print('hi detected')
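That said, if the goal is really just a membership test, the plain in operator is the usual tool:

mylist = ["hello", "hi", 123, True]
if "hi" in mylist:
    print("hi detected in list. Hi!")
if "hello" in mylist:
    print("hello detected in list. Hello!")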
3
2
75,058,589
2023-1-9
https://stackoverflow.com/questions/75058589/annotating-function-with-typevar-and-default-value-results-in-union-type
When annotating a function parameter with a bound TypeVar, giving it a default value results in the parameter having a union type between the TypeVar and the default value type, even though the default value is of the TypeVar type. Example: class A: pass class B(A): pass Instance = TypeVar("Instance", bound=A) def get_instance(cls: type[Instance] = A) -> Instance: return cls() Running mypy yields the following error: error: Incompatible default for argument "cls" (default has type "Type[A]", argument has type "Type[Instance]"). reveal_type is correct in both cases: instance_a = get_instance(cls=A) reveal_type(instance_a) # note: Revealed type is "A" instance_b = get_instance(cls=B) reveal_type(instance_b) # note: Revealed type is "B" How do I correctly annotate get_instance so that I can keep the default argument?
It is a very old mypy issue. The general solution is using overload's, it can be applied in your case: from typing import TypeVar, overload class A: pass class B(A): pass _Instance = TypeVar("_Instance", bound=A) @overload def get_instance() -> A: ... @overload def get_instance(cls: type[_Instance]) -> _Instance: ... def get_instance(cls: type[A] = A) -> A: return cls() instance_a = get_instance(cls=A) reveal_type(instance_a) # note: Revealed type is "A" instance_b = get_instance(cls=B) reveal_type(instance_b) # note: Revealed type is "B" For type checking purposes, we define two overloads. Implementation signature is checked only within the function, external callers see only the overloads. Here we just ignore specific relationship between cls type and return type while checking body, however, you can consider other solutions from the linked issue. Try me in playground
4
5
75,052,604
2023-1-9
https://stackoverflow.com/questions/75052604/refresherror-invalid-grant-token-has-been-expired-or-revoked-google-api
About a week ago I set up an application on Google. Now when I try and run: SCOPES = ['https://www.googleapis.com/auth/gmail.readonly'] creds = None if os.path.exists('token.pickle'): with open(self.CREDENTIALS_PATH+self.conjoiner+'token.pickle', 'rb') as token: creds = pickle.load(token) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) ##error here I get the following error: Exception has occurred: RefreshError ('invalid_grant: Token has been expired or revoked.', {'error': 'invalid_grant', 'error_description': 'Token has been expired or revoked.'}) What could be the problem?
token.pickle contains the access token and refresh token for your application. 'Token has been expired or revoked.' means that the refresh token in this file is no longer working. This can be caused by several reasons: (1) the user revoked your access; (2) the user has authorized your application more than 50 times and this is the oldest token, which was therefore expired; (3) the refresh token hasn't been used in six months; (4) the project is still in the testing phase, and therefore the refresh token expires after seven days. All of the above information can be found in the OAuth2 expiration documentation. The solution for options 1-3 is to simply delete the token.pickle file and request authorization from the user again. For number four you should go to the Google developer console, under the OAuth consent screen, and set your application to production. Then your refresh token will stop expiring. Then you can delete the token.pickle and you won't have this issue again.
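For options 1-3, a rough sketch of the re-authorization step after deleting the stale token; the client secrets filename 'credentials.json' is an assumption, and SCOPES is the list already defined in your snippet:

import os, pickle
from google_auth_oauthlib.flow import InstalledAppFlow

if os.path.exists('token.pickle'):
    os.remove('token.pickle')              # drop the dead refresh token
flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
creds = flow.run_local_server(port=0)      # re-runs the consent screen in a browser
with open('token.pickle', 'wb') as token:
    pickle.dump(creds, token)              # cache the fresh credentials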
7
10
75,041,095
2023-1-7
https://stackoverflow.com/questions/75041095/how-to-apply-a-custom-function-to-xarray-dataarray-coarsen-reduce
I have a (2x2) NumPy array: ar = np.array([[2, 0],[3, 0]]) and the same one in the form of xarray.DataArray: da = xr.DataArray(ar, dims=['x', 'y'], coords=[[0, 1], [0, 1]]) I am trying to downsample the 2d array spatially using a custom function to find the mode (i.e., the most frequently occurring value): def find_mode(window): # find the mode over all axes uniq = np.unique(window, return_counts=True) return uniq[0][np.argmax(uniq[1])] The find_mode() works well for ar as find_mode(ar) gives 0. However, it doesn't work for da (i.e., da.coarsen(x=2, y=2).reduce(find_mode)), with an error: TypeError: find_mode() got an unexpected keyword argument 'axis' Thank you so much for your attention and participation.
The signature for functions passed to DatasetCoarsen.reduce must include axis and kwargs. A good example is np.sum. So your function would need to look something like: def find_mode(window, axis=None, **kwargs): # find the mode over all axes uniq = np.unique(window, return_counts=True) ret = uniq[0][np.argmax(uniq[1])] ret = np.atleast_2d(ret) return ret Depending on your application, you may want to use the axis argument (tuple of integers) in place of your [0] and [1] indexing steps. Note: I've added np.atleast_2d here to make sure the return array is 2D. This is a bit ugly so I recommend thinking a bit more about this part in your application. The key thing to understand is that the return array needs to have the same number of dimensions as the input array.
5
7
75,048,986
2023-1-8
https://stackoverflow.com/questions/75048986/way-to-temporarily-change-the-directory-in-python-to-execute-code-without-affect
I need to perform an action without changing the global working directory. My case is I have a few folders, and in each, there are a few files. I need to do some computations using those files. Initially, I tried the following: with os.chdir('/directory'): ...some code needing execution inside but got AttributeError: __enter__. After reading up online using with seems not to be an option. Therefore I'm looking to find another elegant way of doing so. I also tried just using os statements like so: cwd = os.getcwd() os.chdir('/directory') ..run code inside directory os.chdir(cwd) but this is a pain during debugging and seems like a bad practice.
You can write your own context manager to temporarily change the working directory. import contextlib @contextlib.contextmanager def new_cd(x): d = os.getcwd() # This could raise an exception, but it's probably # best to let it propagate and let the caller # deal with it, since they requested x os.chdir(x) try: yield finally: # This could also raise an exception, but you *really* # aren't equipped to figure out what went wrong if the # old working directory can't be restored. os.chdir(d) with new_cd('/directory'): ...
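As a side note, on Python 3.11+ the standard library already ships an equivalent context manager, so the with form from the question works almost as written:

import contextlib

with contextlib.chdir('/directory'):
    ...  # code that needs to run inside /directory
# the previous working directory is restored here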
5
6
75,049,548
2023-1-8
https://stackoverflow.com/questions/75049548/why-cant-you-intern-bytes-in-python
As mentioned in Python documentation, sys.intern() only accepts string objects. I understand why mutable types are not supported by sys.intern. But there's at least one more immutable type for which interning would make sense: bytes. So here's my question: is there any particular reason why Python interning doesn't support bytes?
This was suggested a decade ago on the Python-Dev mailing list. The answer is: The main difference is that sys.intern() will remove the interned strings when every external reference vanishes. It requires either weakref'ability (which both str and bytes lack) or special cooperation from the object destructor (which is why sys.intern() is restricted to str instead of working with arbitrary objects). Clearly it is possible to add support for bytes, but it seems very niche, not something standard Python is likely to add. That doesn't stop you from making your own equivalent, unless the whole reason you want it is for dictionary key lookup speed. I've never seen anyone use bytes as dictionary keys, but I'm sure some people do.
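A homemade equivalent for bytes is just a cache keyed by the value itself; unlike sys.intern it keeps every entry alive for the life of the process, which is exactly the lack-of-destructor-cooperation issue described above:

_interned_bytes = {}

def intern_bytes(b):
    # return the canonical copy of b, storing it on first sight
    return _interned_bytes.setdefault(b, b)

x = intern_bytes(b"payload")
y = intern_bytes(bytes("payload", "ascii"))
assert x is y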
3
4
75,036,773
2023-1-6
https://stackoverflow.com/questions/75036773/pydantic-error-wrappers-validationerror-fastapi
I'm making a crud in fastapiI have a user model and I created another one called showuser to only show some specific fields in the query, but when I execute the request I get an error. I just want my request to show the fields I have in showuser. my schemas from pydantic import BaseModel from typing import Optional from datetime import datetime # Create a User model # Create a class for the user class User(BaseModel): username: str password: str name: str lastname: str address: Optional[str] = None telephone: Optional[int] = None email: str creation_user: datetime = datetime.now() # Create UserId model # Create a class for the UserId class UserId(BaseModel): id: int # Create a ShowUser model # Create a class for the ShowUser class ShowUser(BaseModel): username: str name: str lastname: str email: str class Config(): orm_mode = True and this is the code from user where I implement the api @router.get('/{user_id}', response_model=ShowUser) def get_user(user_id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == user_id).first() if not user: return {"Error": "User not found"} return {"User": user} Terminal Message pydantic.error_wrappers.ValidationError: 4 validation errors for ShowUser response -> username field required (type-value_error.missing) response -> name field required (type=value_error.missing) response -> lastname field required (type=value_error.missing) response -> email field required (type=value_error.missing)
I think the return value of your get_user function is the issue. Rather than returning {"User": user}, try returning just the user object as shown below: @router.get('/{user_id}', response_model=ShowUser) def get_user(user_id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == user_id).first() if not user: return {"Error": "User not found"} return user EDIT: The same error will occur if the database does not contain a User object matching the value of user_id. Rather than returning {"Error": "User not found"}, the best way to handle this very common scenario is to raise an HTTPException with a 404 status code and error message: @router.get('/{user_id}', response_model=ShowUser) def get_user(user_id: int, db: Session = Depends(get_db)): user = db.query(models.User).filter(models.User.id == user_id).first() if not user: raise HTTPException( status_code=int(HTTPStatus.NOT_FOUND), detail=f"No user exists with user.id = {user_id}" ) return user
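One small note on the snippet above: the HTTPException/HTTPStatus pattern assumes the corresponding imports are present at the top of the router module, for example:

from http import HTTPStatus
from fastapi import HTTPException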
3
3
75,047,527
2023-1-8
https://stackoverflow.com/questions/75047527/how-to-add-a-new-column-to-dataframe-based-on-conditions-on-another-column
I have the following example dataframe: d = {'col1': [4, 2, 8, 4, 3, 7, 6, 9, 3, 5]} df = pd.DataFrame(data=d) df col1 0 4 1 2 2 8 3 4 4 3 5 7 6 6 7 9 8 3 9 5 I need to add col2 to this dataframe, and values of this new column will be set by comparing col1 values (from different rows) as described below. Each row of col2 will be set as following: df.loc[0, "col2"] will say how many of df.loc[1, "col1"], df.loc[2, "col1"] and df.loc[3, "col1"] are bigger than df.loc[0, "col1"]. df.loc[1, "col2"] will say how many of df.loc[2, "col1"], df.loc[3, "col1"] and df.loc[4, "col1"] are bigger than df.loc[1, "col1"]. df.loc[2, "col2"] will say how many of df.loc[3, "col1"], df.loc[4, "col1"] and df.loc[5, "col1"] are bigger than df.loc[2, "col1"]. And so on... If there are not 3 rows left after the index N, col2 value will be set to -1. The end result will look like the following: col1 col2 0 4 1 1 2 3 2 8 0 3 4 2 4 3 3 5 7 1 6 6 1 7 9 -1 8 3 -1 9 5 -1 I need a function that will take a dataframe as input and will return the dataframe by adding the new column as described above. In the example above, next 3 rows are considered. But this needs to be configurable and should be an input to the function that will do the work. Speed is important here so it is not desired to use for loops. How can this be done in the most efficient way in Python?
You need a reversed rolling to compare the values to the next ones: N = 3 df['col2'] = (df.loc[::-1, 'col1'] .rolling(N+1) .apply(lambda s: s.iloc[:-1].gt(s.iloc[-1]).sum()) .fillna(-1, downcast='infer') ) Alternatively, using numpy.lib.stride_tricks.sliding_window_view: import numpy as np from numpy.lib.stride_tricks import sliding_window_view as swv N = 3 df['col2'] = np.r_[(df['col1'].to_numpy()[:-N, None] < swv(df['col1'], N)[1:] # broadcasted comparison ).sum(axis=1), # count True per row -np.ones(N, dtype=int)] # add missing -1 Output: col1 col2 0 4 1 1 2 3 2 8 0 3 4 2 4 3 3 5 7 1 6 6 1 7 9 -1 8 3 -1 9 5 -1
3
3
75,045,739
2023-1-8
https://stackoverflow.com/questions/75045739/faster-startup-of-processes-python
I'm trying to run two functions in Python3 in parallel. They both take about 30ms, and unfortunately, after writing a testing script, I've found that the startup-time to get the processes running in the background takes over 100ms which is a pretty high overhead that I would like to avoid. Is anybody aware of a faster way to run functions concurrently in Python3 (having a lower overhead -- ideally in the ones or tens of milliseconds) where I can still get the results of their functions in the main process. Any guidance on this would be appreciated, and if there is any information that I can provide, please let me know. For hardware information, I'm running this on a 2019 MacBook Pro with Python 3.10.9 with a 2GHz Quad-Core Intel Core i5. I've provided the script that I've written below as well as the output that I typically get from it. import multiprocessing as mp import time import numpy as np def t(s): return (time.perf_counter() - s) * 1000 def run0(s): print(f"Time to reach run0: {t(s):.2f}ms") time.sleep(0.03) return np.ones((1,4)) def run1(s): print(f"Time to reach run1: {t(s):.2f}ms") time.sleep(0.03) return np.zeros((1,5)) def main(): s = time.perf_counter() with mp.Pool(processes=2) as p: print(f"Time to init pool: {t(s):.2f}ms") f0 = p.apply_async(run0, args=(time.perf_counter(),)) f1 = p.apply_async(run1, args=(time.perf_counter(),)) r0 = f0.get() r1 = f1.get() print(r0, r1) print(f"Time to run end-to-end: {t(s):.2f}ms") if __name__ == "__main__": main() Below is the output that I typically get from running the above script Time to init pool: 33.14ms Time to reach run0: 198.50ms Time to reach run1: 212.06ms [[1. 1. 1. 1.]] [[0. 0. 0. 0. 0.]] Time to run end-to-end: 287.68ms Note: I'm looking to decrease the quantities on the 2nd and 3rd line by a factor of 10-20x smaller. I know that that is a lot, and if it is not possible, that is perfectly fine, but I was just wondering if anybody more knowledgable would know any methods. Thanks!
Several points to consider: "Time to init pool" is wrong. The child processes haven't finished starting; only the main process has initiated their startup. Once the workers have actually started, "Time to reach run" should drop, since it no longer includes process startup. If you have a long-lived pool of workers, you only pay the startup cost once. The startup cost of the interpreter is often dominated by imports; in this case you really only have numpy, and it is used by the target function, so you can't exactly get rid of it. Another import that can be slow is the automatic import of site, but skipping it makes other imports difficult. You're on macOS, and can switch to using "fork" instead of "spawn", which should be much faster, but it fundamentally changes how multiprocessing works in a few ways (and is incompatible with certain OS libraries). Example: import multiprocessing as mp import time # import numpy as np def run(): time.sleep(0.03) return "whatever" def main(): s = time.perf_counter() with mp.Pool(processes=1) as p: p.apply_async(run).get() print(f"first job time: {(time.perf_counter() -s)*1000:.2f}ms") #first job 166ms with numpy ; 85ms without ; 45ms on linux (wsl2 ubuntu 20.04) with fork s = time.perf_counter() p.apply_async(run).get() print(f"after startup job time: {(time.perf_counter() -s)*1000:.2f}ms") #second job about 30ms every time if __name__ == "__main__": main()
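To try the "fork" suggestion on macOS concretely, here is a minimal sketch (set_start_method is standard multiprocessing API and must run once, before any pool is created; whether fork is safe depends on the libraries you load):

import multiprocessing as mp
import time

def run():
    time.sleep(0.03)
    return "whatever"

if __name__ == "__main__":
    mp.set_start_method("fork")  # macOS defaults to "spawn"; fork starts workers much faster
    s = time.perf_counter()
    with mp.Pool(processes=2) as p:
        print(p.apply_async(run).get())
    print(f"end-to-end: {(time.perf_counter() - s) * 1000:.2f}ms")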
4
4
75,044,362
2023-1-7
https://stackoverflow.com/questions/75044362/weird-scikit-learn-python-intellisense-error-message
Lately I was doing some ML stuff with Python using scikit-learn package. I wanted to use make_blobs() function so I began writing code for example: X, y = make_blobs(n_samples=m, centers=2, n_features=2, center_box=(80, 100)) and of course this is fine. However while coding next lines my Intellisense within Visual Studio Code (I have only Microsoft addons for Python installed just to be clear) started to showing weird error on that line I mentioned before. Here's full error message: Expression with type "tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any] | tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]" cannot be assigned to target tuple Type "tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any]" is incompatible with target tuple Element size mismatch; expected 2 but received 3 Please notice the last sentence. Element size mismatch where make_blobs() function returned 3 elements. What??? I've checked scikit-learn documentation for make_blobs() function and I've read that on default make_blobs() returns only 2 elements not 3. 3 elements can be returned when return_centers is set to True, where I have not set that to true as you can see in my example. Ok, maybe I'll try to expect those 3 elements, so I modified that line X, y, _ = make_blobs(n_samples=m, centers=2, n_features=2, center_box=(80, 100)) and well... this is the error message... Expression with type "tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any], ndarray[Any, dtype[float64]] | Any] | tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]" cannot be assigned to target tuple Type "tuple[Unknown | list[Unknown] | NDArray[float64], Unknown | list[Unknown] | NDArray[Any]]" is incompatible with target tuple Element size mismatch; expected 3 but received 2 Now it returns 2 elements?! What I have tried next is: reinstall scikit-learn package. Same effect purging Python with all it files. Same effetc reinstalling Microsoft python extension for vscode. Same effect Clearly it is some kind of intellisense issue, because running the code works fine, but what cause this behaviour? Python I used was 3.10.9 and 3.11.1. Running on Windows 10 22H2 19045.2364. VSCode up-to-date. For completeness scikit-learn version is 1.2.0
This is a known behaviour of pyright (which is a Python type checker used in Intellisense). It raises a return type mismatch warning if there is at least one return statement within the function that's incompatible with what you're expecting. See a similar issue in their repo for more details and an explanation from one of the maintainers. You can suppress type checking for this particular line with a comment though: X, y = make_blobs(n_samples=m, centers=2, n_features=2, center_box=(80, 100)) # pyright: ignore
5
6
75,043,981
2023-1-7
https://stackoverflow.com/questions/75043981/updating-entire-row-or-column-of-a-2d-array-in-jax
I'm new to JAX and writing code that JIT compiles is proving to be quite hard for me. I am trying to achieve the following: Given an (n,n) array mat in JAX, I would like to add a (1,n) or an (n,1) array to an arbitrary row or column, respectively, of the original array mat. If I wanted to add a row array, r, to the third row, the numpy equivalent would be, # if mat is a numpy array mat[2,:] = mat[2,:] + r The only way I know how to update an element of an array in JAX is using array.at[i].set(). I am not sure how one can use this to update a row or a column without explicitly using a for-loop.
JAX arrays are immutable, so you cannot do in-place modifications of array entries. But you can accomplish similar results with the np.ndarray.at syntax. For example, the equivalent of mat[2,:] = mat[2,:] + r would be mat = mat.at[2,:].set(mat[2,:] + r) But you can use the add method to be more efficient in this case: mat = mat.at[2:3, :].add(r) Here is an example of adding a row and column array to a 2D array: import jax.numpy as jnp mat = jnp.zeros((5, 5)) # Create 2D row & col arrays, as in question row = jnp.ones(5).reshape(1, 5) col = jnp.ones(5).reshape(5, 1) mat = mat.at[1:2, :].add(row) mat = mat.at[:, 2:3].add(col) print(mat) # [[0. 0. 1. 0. 0.] # [1. 1. 2. 1. 1.] # [0. 0. 1. 0. 0.] # [0. 0. 1. 0. 0.] # [0. 0. 1. 0. 0.]] See JAX Sharp Bits: In-Place Updates for more discussion of this.
4
6
75,040,669
2023-1-7
https://stackoverflow.com/questions/75040669/how-do-i-split-a-column-to-many-colum-by-row-quantity
I got a long single column DataFrame as following table: Column A Cell 1 Cell 2 Cell 3 Cell 4 Cell 5 Cell 6 Cell 7 Cell 8 I want to split column A in order with specify row quantity and add to others new columns If I give 2 row quantity for each column Column A Column B Column C Column D Cell 1 Cell 3 Cell 5 Cell 7 Cell 2 Cell 4 Cell 6 Cell 8 split long a single column to new adding column by given row quantity.
You can use the underlying numpy array to reshape in Fortran order (rows, then columns): from string import ascii_uppercase N = 2 out = (pd.DataFrame(df['Column A'].to_numpy().reshape(N, -1, order='F')) # the line below is optional, just to have the column names .rename(columns=dict(enumerate(ascii_uppercase))).add_prefix('Column ') ) Output: Column A Column B Column C Column D 0 Cell 1 Cell 3 Cell 5 Cell 7 1 Cell 2 Cell 4 Cell 6 Cell 8 If you want to handle N that are non multiples of len(df), you can add a reindex step to pad the DataFrame with NaNs: N = 3 out = (pd.DataFrame(df['Column A'].reindex(range(int(np.ceil(len(df)/N)*N))) .to_numpy().reshape(N, -1, order='F')) .rename(columns=dict(enumerate(ascii_uppercase))).add_prefix('Column ') ) Output: Column A Column B Column C 0 Cell 1 Cell 4 Cell 7 1 Cell 2 Cell 5 Cell 8 2 Cell 3 Cell 6 NaN
3
2
75,039,674
2023-1-7
https://stackoverflow.com/questions/75039674/python-type-hinting-for-generic-container-constructor
What is the correct typing to use for the below marked in ???, where we cast a generic iterable data container type to an iterable container of different type? def foo(itr:Iterable, cast_type:???) -> ???: (For Py 3) # type: (Iterable[Any], ???) -> ??? (For Py 2.7) return cast_type(itr) foo([1,2], cast_type=set) # Example 1 foo(set([1,2]), cast_type=list) # Example 2 ...
No parameterized type variables! The problem is that so far the Python typing system does not allow higher-kinded variables, meaning type variables that are parameterized with yet another type variable. This would be helpful here, since we could define a type variable T annotate itr with Iterable[T], then define for example It as a type variable bounded by Iterable[T] and annotate cast_type as type[It[T]], and finally annotate the return type as It[T]. Alas, this is not possible yet (but in the making it seems), so we need to work around that. No safe __init__ signature The next problem is that there is no common constructor interface in the collections ABCs allowing an argument to be passed. We might be tempted to do the following: from collections.abc import Iterable from typing import Any, TypeVar It = TypeVar("It", bound=Iterable[Any]) def foo(itr: Iterable[Any], cast_type: type[It]) -> It: return cast_type(itr) But the problem is that mypy will correctly give us the error Too many arguments for "Iterable" [call-arg] for that last line. This problem remains the same, no matter which one of the abstract base classes we pick (like Collection or Set or what have you). Extend the protocol (EDIT: See the last section for why the __init__ protocol solution may cause problems and why it may be better to use Callable instead of type.) To avoid this issue, we can introduce our own custom protocol that inherits from Iterable and also introduces a common __init__ interface: from collections.abc import Iterable from typing import Any, Protocol, TypeVar class ConstructIterable(Iterable[Any], Protocol): def __init__(self, arg: Iterable[Any]) -> None: ... It = TypeVar("It", bound=ConstructIterable) def foo(itr: Iterable[Any], cast_type: type[It]) -> It: return cast_type(itr) a = foo([1, 2], cast_type=set) b = foo({1, 2}, cast_type=list) reveal_type(a) reveal_type(b) Those last two lines are for mypy and if we run that in --strict mode over this script, we get no errors and the following info: note: Revealed type is "builtins.set[Any]" note: Revealed type is "builtins.list[Any]" So far so good, the container types are inferred correctly from our argument types. Preserve the element types selectively But we are losing the element type information this way. It is currently just Any. As I said, there is no good way to solve this right now, but we can work around that, if we want to decide on the most common use cases for that function. If we anticipate it being called most often with common container types like list, tuple and set for example, we can write overloads specifically for those and leave a catch-all non-generic Iterable case as a fallback. That last case will then still drop the element type information, but at least the other signatures will preserve it. Here is an example: from collections.abc import Iterable from typing import Any, Protocol, TypeVar, overload class ConstructIterable(Iterable[Any], Protocol): def __init__(self, _arg: Iterable[Any]) -> None: ... T = TypeVar("T") It = TypeVar("It", bound=ConstructIterable) @overload def foo(itr: Iterable[T], cast_type: type[list[Any]]) -> list[T]: ... @overload def foo(itr: Iterable[T], cast_type: type[tuple[Any, ...]]) -> tuple[T, ...]: ... @overload def foo(itr: Iterable[T], cast_type: type[set[Any]]) -> set[T]: ... @overload def foo(itr: Iterable[T], cast_type: type[It]) -> It: ... def foo( itr: Iterable[Any], cast_type: type[ConstructIterable], ) -> ConstructIterable: return cast_type(itr) Test it again with mypy: ... 
a = foo([1, 2], cast_type=set) b = foo({1., 2.}, cast_type=list) c = foo([1, 2], cast_type=tuple) d = foo({1., 2.}, cast_type=frozenset) reveal_type(a) reveal_type(b) reveal_type(c) reveal_type(d) Output: note: Revealed type is "builtins.set[builtins.int]" note: Revealed type is "builtins.list[builtins.float]" note: Revealed type is "builtins.tuple[builtins.int, ...]" note: Revealed type is "builtins.frozenset[Any]" As you can see, at least the first three cases correctly preserved the element types. I am afraid this not-so-elegant solution is as good as it gets with the current type system limitations. PS It seemed as though your question included one about Python 2, but I don't think that merits a response. Nobody should be using Python 2 today. Not to mention the typing system was essentially non-existent back then. The solution I showed above requires 3.9, but can probably be made compatible with slightly older versions of Python by using typing_extensions, as well as the deprecated typing.List, typing.Tuple, and such. EDIT Thanks to @joel for pointing out that __init__ signatures are not checked, when passing subclasses. Instead of using type, it might be safer to go with the supertype Callable, then specify the argument and return type accordingly. This also makes the custom protocol unnecessary. The adjusted workaround solution would then look like this: from collections.abc import Callable, Iterable from typing import Any, TypeVar, overload T = TypeVar("T") It = TypeVar("It", bound=Iterable[Any]) @overload def foo(itr: Iterable[T], cast_type: type[list[Any]]) -> list[T]: ... @overload def foo(itr: Iterable[T], cast_type: type[tuple[Any, ...]]) -> tuple[T, ...]: ... @overload def foo(itr: Iterable[T], cast_type: type[set[Any]]) -> set[T]: ... @overload def foo(itr: Iterable[T], cast_type: Callable[[Iterable[Any]], It]) -> It: ... def foo( itr: Iterable[Any], cast_type: Callable[[Iterable[Any]], It], ) -> It: return cast_type(itr) The mypy output for out test lines is essentially the same, but for the last call revealing "builtins.frozenset[_T_co1]"` instead, which amounts to the same thing.
3
4
75,039,860
2023-1-7
https://stackoverflow.com/questions/75039860/how-to-concat-column-y-to-column-x-and-replicate-values-z-in-pandas-dataframe
I have a pandas DataFrame with three columns: X Y Z 0 1 4 True 1 2 5 True 2 3 6 False How do I make it so that I have two columns X and Z with values: X Z 0 1 True 1 2 True 2 3 False 3 4 True 4 5 True 5 6 False
you can melt: In [41]: df.melt(id_vars="Z", value_vars=["X", "Y"], value_name="XY")[["XY", "Z"]] Out[41]: XY Z 0 1 True 1 2 True 2 3 False 3 4 True 4 5 True 5 6 False identifier variable is "Z": it will be repeated as necessary against value variables... ...which are X and Y name X and Y's together column to "XY", and select that and "Z" at the end (you can chain .rename(columns={"XY": "X"}) if you want that column to be named X again.)
4
5
75,036,858
2023-1-6
https://stackoverflow.com/questions/75036858/how-do-i-concatenate-each-element-of-different-lists-together
print(sgrades_flat) ['Barrett', 'Edan', '70', '45', '59', 'Bradshaw', 'Reagan', '96', '97', '88', 'Charlton', 'Caius', '73', '94', '80', 'Mayo', 'Tyrese', '88', '61', '36', 'Stern', 'Brenda', '90', '86', '45'] print(s_grades) ['F', 'A', 'B', 'D', 'C'] I want to combine sgrades_flat and s_grades to look like ... ['Barrett', 'Edan', '70', '45', '59', 'F', 'Bradshaw', 'Reagan', '96', '97', '88', 'A' 'Charlton', 'Caius', '73', '94', '80', 'B' 'Mayo', 'Tyrese', '88', '61', '36', 'D' 'Stern', 'Brenda', '90', '86', '45', 'C'] My current strategy is to use this code: z=[] for i, x in zip(sgrades_flat[::5], s_grades): z.append(i+x) print(z) but that output is: ['BarrettF', 'BradshawA', 'CharltonB', 'MayoD', 'SternC']
I would combine the list by iterating manually on them: sgrades_flat=['Barrett', 'Edan', '70', '45', '59', 'Bradshaw', 'Reagan', '96', '97', '88', 'Charlton', 'Caius', '73', '94', '80', 'Mayo', 'Tyrese', '88', '61', '36', 'Stern', 'Brenda', '90', '86', '45'] s_grades=['F', 'A', 'B', 'D', 'C'] it1 = iter(sgrades_flat) it2 = iter(s_grades) result = [] try: while True: for _ in range(5): result.append(next(it1)) result.append(next(it2)) except StopIteration: pass print(result) prints ['Barrett', 'Edan', '70', '45', '59', 'F', 'Bradshaw', 'Reagan', '96', '97', '88', 'A', 'Charlton', 'Caius', '73', '94', '80', 'B', 'Mayo', 'Tyrese', '88', '61', '36', 'D', 'Stern', 'Brenda', '90', '86', '45', 'C'] (this still looks like a bad idea as a flat list is sub-optimal for such a data structure) Note that a one-liner without any manual iteration also does the same: import itertools grouped_sgrades = list(itertools.chain.from_iterable( sgrades_flat[i:i+5]+[s_grades[i//5]] for i in range(0,len(sgrades_flat),5))) however, why flatten the lists? grouped_sgrades = [sgrades_flat[i:i+5]+[s_grades[i//5]] for i in range(0,len(sgrades_flat),5)] result is a nice list of lists, which is approaching some structured data: [['Barrett', 'Edan', '70', '45', '59', 'F'], ['Bradshaw', 'Reagan', '96', '97', '88', 'A'], ['Charlton', 'Caius', '73', '94', '80', 'B'], ['Mayo', 'Tyrese', '88', '61', '36', 'D'], ['Stern', 'Brenda', '90', '86', '45', 'C']]
3
3
75,033,069
2023-1-6
https://stackoverflow.com/questions/75033069/type-hint-for-a-dict-that-maps-tuples-containing-classes-to-the-corresponding-in
I'm making a semi-singleton class Foo that can have (also semi-singleton) subclasses. The constructor takes one argument, let's call it a slug, and each (sub)class is supposed to have at most one instance for each value of slug. Let's say I have a subclass of Foo called Bar. Here is an example of calls: Foo("a slug") -> returns a new instance of Foo, saved with key (Foo, "a slug"). Foo("some new slug") -> returns a new instance Foo, saved with key (Foo, "some new slug"). Foo("a slug") -> we have the same class and slug from step 1, so this returns the same instance that was returned in step 1. Bar("a slug") -> we have the same slug as before, but a different class, so this returns a new instance of Bar, saved with key (Bar, "a slug"). Bar("a slug") -> this returns the same instance of Bar that we got in step 4. I know how to implement this: class dictionary associating a tuple of type and str to instance, override __new__, etc. Simple stuff. My question is how to type annotate this dictionary? What I tried to do was something like this: FooSubtype = TypeVar("FooSubtype", bound="Foo") class Foo: _instances: Final[dict[tuple[Type[FooSubtype], str], FooSubtype]] = dict() So, the idea is "whatever type is in the first element of the key ("assigning" it to FooSubtype type variable), the value needs to be an instance of that same type". This fails with Type variable "FooSubtype" is unbound, and I kinda see why. I get the same error if I split it like this: FooSubtype = TypeVar("FooSubtype", bound="Foo") InstancesKeyType: TypeAlias = tuple[Type[FooSubtype], str] class Foo: _instances: Final[dict[InstancesKeyType, FooSubtype]] = dict() The error points to the last line in this example, meaning it's the value type, not the key one, that is the problem. mypy also suggests using Generic, but I don't see how to do it in this particular example, because the value's type should somehow relate to the key's type, not be a separate generic type. This works: class Foo: _instances: Final[dict[tuple[Type["Foo"], str], "Foo"]] = dict() but it allows _instance[(Bar1, "x")] to be of type Bar2 (Bar1 and Bar2 here being different subclasses of Foo). It's not a big problem and I'm ok with leaving it like this, but I'm wondering if there is a better (stricter) approach.
This is a really great question. First I looked through and said "no, you can't at all", because you can't express any relation between dict key and value. However, then I realised that your suggestion is almost possible to implement. First, let's define a protocol that describes your desired behavior: from typing import TypeAlias, TypeVar, Protocol _T = TypeVar("_T", bound="Foo") # Avoid repetition, it's just a generic alias _KeyT: TypeAlias = tuple[type[_T], str] class _CacheDict(Protocol): def __getitem__(self, __key: _KeyT[_T]) -> _T: ... def __delitem__(self, __key: _KeyT['Foo']) -> None: ... def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ... How does it work? It defines an arbitrary data structure with item access, such that cache_dict[(Foo1, 'foo')] resolves to type Foo1. It looks very much like a dict sub-part (or collections.abc.MutableMapping), but with slightly different typing. Dunder argument names are almost equivalent to positional-only arguments (with /). If you need other methods (e.g. get or pop), add them to this definition as well (you may want to use overload). You'll almost certainly need __contains__ which should have the same signature as __delitem__. So, now class Foo: _instances: Final[_CacheDict] = cast(_CacheDict, dict()) class Foo1(Foo): pass class Foo2(Foo): pass reveal_type(Foo._instances[(Foo, 'foo')]) # N: Revealed type is "__main__.Foo" reveal_type(Foo._instances[(Foo1, 'foo')]) # N: Revealed type is "__main__.Foo1" wow, we have properly inferred value types! We cast dict to the desired type, because our typing is different from dict definitions. It still has a problem: you can do Foo._instances[(Foo1, 'foo')] = Foo2() because _T just resolves to Foo here. However, this problem is completely unavoidable: even had we some infer keyword or Infer special form to spell def __setitem__(self, __key: _KeyT[Infer[_T]], __value: _T) -> None, it won't work properly: foo1_t: type[Foo] = Foo1 # Ok, upcasting foo2: Foo = Foo2() # Ok again Foo._instances[(foo1_t, 'foo')] = foo2 # Ough, still allowed, _T is Foo again Note that we don't use any casts above, so this code is type-safe, but certainly conflicts with our intent. So, we probably have to live with __setitem__ unstrictness, but at least have proper type from item access. Finally, the class is not generic in _T, because otherwise all values will be inferred to declared type instead of function-scoped (you can try using Protocol[_T] as a base class and watch what's happening, it's pretty good for deeper understanding of mypy approach to type inference). Here's a link to playground with full code. Also, you can subclass a MutableMapping[_KeyT['Foo'], 'Foo'] to get more methods instead of defining them manually. It will deal with __delitem__ and __contains__ out of the box, but __setitem__ and __getitem__ still need your implementation. Here's an alternative solution with MutableMapping and get (because get was tricky and funny to implement) (playground): from collections.abc import MutableMapping from abc import abstractmethod from typing import TypeAlias, TypeVar, Final, TYPE_CHECKING, cast, overload _T = TypeVar("_T", bound="Foo") _Q = TypeVar("_Q") _KeyT: TypeAlias = tuple[type[_T], str] class _CacheDict(MutableMapping[_KeyT['Foo'], 'Foo']): @abstractmethod def __getitem__(self, __key: _KeyT[_T]) -> _T: ... @abstractmethod def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ... @overload # No-default version @abstractmethod def get(self, __key: _KeyT[_T]) -> _T | None: ... 
# Ooops, a `mypy` bug, try to replace with `__default: _T | _Q` # and check Foo._instances.get((Foo1, 'foo'), Foo2()) # The type gets broader, but resolves to more specific one in a wrong way @overload # Some default @abstractmethod def get(self, __key: _KeyT[_T], __default: _Q) -> _T | _Q: ... # Need this because of https://github.com/python/mypy/issues/11488 @abstractmethod def get(self, __key: _KeyT[_T], __default: object = None) -> _T | object: ... class Foo: _instances: Final[_CacheDict] = cast(_CacheDict, dict()) class Foo1(Foo): pass class Foo2(Foo): pass reveal_type(Foo._instances) reveal_type(Foo._instances[(Foo, 'foo')]) # N: Revealed type is "__main__.Foo" reveal_type(Foo._instances[(Foo1, 'foo')]) # N: Revealed type is "__main__.Foo1" reveal_type(Foo._instances.get((Foo, 'foo'))) # N: Revealed type is "Union[__main__.Foo, None]" reveal_type(Foo._instances.get((Foo1, 'foo'))) # N: Revealed type is "Union[__main__.Foo1, None]" reveal_type(Foo._instances.get((Foo1, 'foo'), Foo1())) # N: Revealed type is "__main__.Foo1" reveal_type(Foo._instances.get((Foo1, 'foo'), Foo2())) # N: Revealed type is "Union[__main__.Foo1, __main__.Foo2]" (Foo1, 'foo') in Foo._instances # We get this for free Foo._instances[(Foo1, 'foo')] = Foo1() Foo._instances[(Foo1, 'foo')] = object() # E: Value of type variable "_T" of "__setitem__" of "_CacheDict" cannot be "object" [type-var] Note that we don't use a Protocol now (because it needs MutableMapping to be a protocol as well) and use abstract methods instead. Trick, don't use it! When I was writing this answer, I discovered a mypy bug that you can abuse in a very interesting way here. We started with something like this, right? from collections.abc import MutableMapping from abc import abstractmethod from typing import TypeAlias, TypeVar, Final, TYPE_CHECKING, cast, overload _T = TypeVar("_T", bound="Foo") _Q = TypeVar("_Q") _KeyT: TypeAlias = tuple[type[_T], str] class _CacheDict(MutableMapping[_KeyT['Foo'], 'Foo']): @abstractmethod def __getitem__(self, __key: _KeyT[_T]) -> _T: ... @abstractmethod def __setitem__(self, __key: _KeyT[_T], __value: _T) -> None: ... class Foo: _instances: Final[_CacheDict] = cast(_CacheDict, dict()) class Foo1(Foo): pass class Foo2(Foo): pass Foo._instances[(Foo1, 'foo')] = Foo1() Foo._instances[(Foo1, 'foo')] = Foo2() Now let's change __setitem__ signature to a very weird thing. Warning: this is a bug, don't rely on this behavior! If we type __default as _T | _Q, we magically get "proper" typing with strict narrowing to type of first argument. @abstractmethod def __setitem__(self, __key: _KeyT[_T], __value: _T | _Q) -> None: ... Now: Foo._instances[(Foo1, 'foo')] = Foo1() # Ok Foo._instances[(Foo1, 'foo')] = Foo2() # E: Incompatible types in assignment (expression has type "Foo2", target has type "Foo1") [assignment] It is simply wrong, because _Q union part can be resolved to anything and is not used in fact (and moreover, it can't be a typevar at all, because it's used only once in the definition). Also, this allows another invalid assignment, when right side is not a Foo subclass: Foo._instances[(Foo1, 'foo')] = object() # passes I'll report this soon and link the issue to this question.
4
4
75,032,076
2023-1-6
https://stackoverflow.com/questions/75032076/python-typing-constrain-list-to-only-allow-one-type-of-subclass
I have 3 simple classes like: class Animal(abc.ABC): ... class Cat(Animal): ... class Dog(Animal): ... Then I have a function which is annotated as such: def speak(animals: List[Animal]) -> List[str]: ... My problem is that I want to constrain the List[Animal] to only include one type of animal, so: speak([Dog(), Dog()]) # OK speak([Cat(), Cat()]) # OK speak([Cat(), Dog()]) # typing error How would I annotate the speak function to allow for this? Is it even possible to do using typing or am I forced to check this at runtime? I have tried to use the List[Animal] as above but that doesn't give me an error when calling speak like speak([Cat(), Dog()]). I have also tried messing around with generics like TypeVar('T', bound=Animal) but this still allows me to pass in a List of any combination of subclasses.
I would consider your issue to be not yet well defined. Once you start filling in a more concrete implementation of Animal, you're possibly going to arrive at a convincing solution. Here, I'll reword your criteria for speak as it currently stands: You want it to accept a list of any individual subclass of Animal, but not Animal itself. Hopefully we can see why this doesn't make sense - there's nothing in your given code that suggests Animal can be distinguished from any subclass of Animal, at least from the point of view to what speak will do to the list of animals. Let's provide some distinguishing features instead: Python 3.10 import typing as t from typing_extensions import LiteralString from collections.abc import Sequence Sound = t.TypeVar("Sound", bound=LiteralString) class Animal(t.Generic[Sound]): def speak(self) -> Sound: ... class Cat(Animal[t.Literal["meow"]]): ... class Dog(Animal[t.Literal["bark"]]): ... def speak(animals: Sequence[Animal[Sound]]) -> list[str]: return [animal.speak() for animal in animals] >>> speak([Dog(), Dog()]) # OK >>> speak([Cat(), Cat()]) # OK >>> >>> # mypy: Argument 1 to "speak" has incompatible type "List[object]"; expected "Sequence[Animal[<nothing>]]" [arg-type] >>> # pyright: Argument of type "list[Cat | Dog]" cannot be assigned to parameter "animals" of type "Sequence[Animal[Sound@speak]]" in function "speak" >>> # pyre: Incompatible parameter type [6]: In call `speak`, for 1st positional only parameter expected `Sequence[Animal[Variable[Sound (bound to typing_extensions.LiteralString)]]]` but got `List[Union[Cat, Dog]]` >>> speak([Cat(), Dog()]) Note that although mypy doesn't complain about the signature squeak(animals: list[Animal[Sound]]), this is technically not type-safe; you may decide to append Cat() to list[Dog]. This is why Sequence is used (it is non-mutable and covariant in its element types).
7
1
75,031,831
2023-1-6
https://stackoverflow.com/questions/75031831/how-to-apply-fastapi-middleware-on-non-async-def-endpoints
According to https://fastapi.tiangolo.com/tutorial/middleware/, we could apply a FastAPI Middleware on async def endpoints. Currently I have several non-async def endpoints, how to apply FastAPI Middleware on non-async def endpoint? If I still register an async Middleware, will it work for the non-async def endpoint ? For example: @app.middleware("http") async def add_process_time_header(request: Request, call_next): start_time = time.time() response = await call_next(request) process_time = time.time() - start_time response.headers["X-Process-Time"] = str(process_time) return response Will the Middleware work properly if call_next is a non-async def method ? Thank you.
A coroutine is used either way: FastAPI runs a plain def endpoint in a threadpool and awaits the result, so the call_next coroutine in the middleware behaves the same for async and non-async endpoints. So yes, this will work fine for you. You can read more about this here.
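A minimal sketch to check it yourself (the route path and endpoint name are made up for illustration):

import time
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)  # same call whether the endpoint below is async def or plain def
    response.headers["X-Process-Time"] = str(time.time() - start_time)
    return response

@app.get("/sync-example")
def sync_endpoint():  # non-async endpoint; FastAPI runs it in a threadpool
    return {"hello": "world"}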
3
3
75,033,570
2023-1-6
https://stackoverflow.com/questions/75033570/how-do-i-add-constraints-to-itertools-product
I am trying to list all products with numbers = [1,2,3,4,5,6,7,8] string length of 4 with some constraints. Position 0 must be < 8 Positions 2 and 3 must be < 6 With the current code it is printing every possible combination so I was wondering how do I go about filtering it? import itertools number = [1,2,3,4,5,6,7,8] result = itertools.product(number, repeat=4) for item in result: print(item) I've tried using if product[0] < 8 or product[2] < 6 or product[3] < 6: but I don't know where to fit in or how to format it.
I think you can simplify this and avoid wasting a lot of cycles by looking at the inputs carefully. repeat=4 means that you want to iterate over the following: [1, 2, 3, 4, 5, 6, 7, 8] [1, 2, 3, 4, 5, 6, 7, 8] [1, 2, 3, 4, 5, 6, 7, 8] [1, 2, 3, 4, 5, 6, 7, 8] However, what your question is asking is how to iterate through [1, 2, 3, 4, 5, 6, 7] [1, 2, 3, 4, 5, 6, 7, 8] [1, 2, 3, 4, 5] [1, 2, 3, 4, 5] Since the conditions on the inputs are independent (the value of one element is not what restricts the others), you can adjust the inputs instead of filtering elements after the fact. I suggest you just iterate over the sequences you want directly instead of filtering the output. You don't need to make actual lists: a range object will be much more efficient: itertools.product(range(1, 8), range(1, 9), range(1, 6), range(1, 6))
3
3
75,019,496
2023-1-5
https://stackoverflow.com/questions/75019496/enable-try-it-out-in-openapi-so-that-no-need-to-click
I'm using FastAPI and OpenAPI/Swagger UI to see and test my endpoints. Each time I use an endpoint for the first time, in order to test it, I have to first click the Try it out button, which is getting tedious. Is there a way to make it disappear and be able to test the endpoint instantly?
Yes, you can configure the OpenAPI/swagger page by passing a dictionary to the kwarg "swagger_ui_parameters" when creating your FastAPI instance (docs). The full list of all settings you can update that way can be found here. For your example, it would look like this: from fastapi import FastAPI app = FastAPI(swagger_ui_parameters={"tryItOutEnabled": True})
6
7
75,031,868
2023-1-6
https://stackoverflow.com/questions/75031868/resampling-agg-apply-behavior
This question relates to resample .agg/.apply which behaves differently than groupby .agg/.apply. Here is an example df: df = pd.DataFrame({'A':range(0,100),'B':range(0,200,2)},index=pd.date_range('1/1/2022',periods=100,freq='D')) Output: A B 2022-01-01 0 0 2022-01-02 1 2 2022-01-03 2 4 2022-01-04 3 6 2022-01-05 4 8 ... .. ... 2022-04-06 95 190 2022-04-07 96 192 2022-04-08 97 194 2022-04-09 98 196 2022-04-10 99 198 My question is, what does x represent in the apply function below. There are times where it behaves as a series and other times it behaves as a df. By calling type(x) it returns df. However, the below returns an error saying "No axis named 1 for object type Series" df.resample('M').apply(lambda x: x.sum(axis=1)) But this does not. There is no stack for a series, so this would imply x represents a df. df.resample('M').apply(lambda x: x.stack()) Also, when you run df.resample('M').apply(lambda x: print(type(x))) the outputs are series, but df.resample('M').apply(lambda x: type(x)) outputs dataframe type. So my main question is, what gets passed into apply for resample. a series or a dataframe?
That's a really good question and I think I have not the right answer but. resample a timeseries returns a DatetimeIndexResampler instance. apply is an alias of aggregate function. Now check the source code: @doc( _shared_docs["aggregate"], see_also=_agg_see_also_doc, examples=_agg_examples_doc, klass="DataFrame", axis="", ) def aggregate(self, func=None, *args, **kwargs): result = ResamplerWindowApply(self, func, args=args, kwargs=kwargs).agg() if result is None: how = func result = self._groupby_and_aggregate(how, *args, **kwargs) result = self._apply_loffset(result) return result agg = aggregate apply = aggregate What I understand: I think if something goes wrong with ResamplerWindowApply, aggregate function have a fallback mechanism to reevaluate the function with _groupby_and_aggregate. The docstring of the last one is : """ Re-evaluate the obj with a groupby aggregation. """ Let's debug with a named function: import inspect def f(x): print(inspect.stack()[2].function) print(f'begin: {type(x)}') x.stack() print(f'end: {type(x)}') return x.sum(axis=1) df.resample('M').apply(f) Output: _aggregate_series_pure_python begin: <class 'pandas.core.series.Series'> # something goes wrong _python_apply_general # the caller has changed begin: <class 'pandas.core.frame.DataFrame'> # now x is a DataFrame end: <class 'pandas.core.frame.DataFrame'> _python_apply_general begin: <class 'pandas.core.frame.DataFrame'> end: <class 'pandas.core.frame.DataFrame'> _python_apply_general begin: <class 'pandas.core.frame.DataFrame'> end: <class 'pandas.core.frame.DataFrame'> _python_apply_general begin: <class 'pandas.core.frame.DataFrame'> end: <class 'pandas.core.frame.DataFrame'> After a failure with a Series, aggregate calls the function with a DataFrame. Unfortunately, this behavior is not documented.
3
3
75,030,842
2023-1-6
https://stackoverflow.com/questions/75030842/sorting-of-simple-python-dictionary-for-printing-specific-value
I have a python dictionary. a = {'1':'saturn', '2':'venus', '3':'mars', '4':'jupiter', '5':'rahu', '6':'ketu'} planet = input('Enter planet : ') print(planet) If the user enters 'rahu', the dictionary should be reordered like the following: a = {'1':'rahu', '2':'ketu', '3':'saturn', '4':'venus', '5':'mars', '6':'jupiter' } print('4th entry is : ') It should reorder the dictionary based on the values following the entered one; when the dictionary ends, it should wrap around to its initial values. It should then print the 4th entry of the dictionary, which here is venus. How do I sort a python dictionary based on a user input value?
Your use of a dictionary is probably not ideal. Dictionaries are useful when the key has a significance and the matching value needs to be accessed quickly. A list might be better suited. Anyway, you could do: l = list(a.values()) idx = l.index(planet) a = dict(enumerate(l[idx:]+l[:idx], start=1)) NB. the above code requires the input string to be a valid dictionary value, if not you'll have to handle the ValueError as you see fit. Output: {1: 'rahu', 2: 'ketu', 3: 'saturn', 4: 'venus', 5: 'mars', 6: 'jupiter'}
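A hedged sketch of the ValueError handling mentioned above (how you react to an unknown planet is up to you; here the original order is simply kept):

a = {'1':'saturn', '2':'venus', '3':'mars', '4':'jupiter', '5':'rahu', '6':'ketu'}
planet = input('Enter planet : ')
l = list(a.values())
try:
    idx = l.index(planet)
except ValueError:
    print(f"{planet!r} is not a known planet, keeping the original order")
    idx = 0
a = dict(enumerate(l[idx:] + l[:idx], start=1))
print(a)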
3
5
75,029,388
2023-1-6
https://stackoverflow.com/questions/75029388/how-do-i-distribute-fonts-with-my-python-package-using-python-m-build
Problem statement My package relies on matplotlib to use a specific font which may not be installed on the target device. I'm trying to install ttf font files from the source distribution to the matplotlib fonts/ttf directory, after building the package. With setuptools slowly removing parts of its CLI (python setup.py sdist/bdist_wheel) and the possibility to use build (python -m build), I decided to build my package with the latter. What I tried I tried modifying the setup.py from the outdated https://stackoverflow.com/a/34304823/8797886. My question differs from this post in that I would like to use python -m build. setup.py import warnings from functools import partial from setuptools import setup from setuptools.command.install import install def _post_install(): # Try to install custom fonts try: import os import shutil import matplotlib as mpl import mypkg # Find where matplotlib stores its True Type fonts mpl_data_dir = os.path.dirname(mpl.matplotlib_fname()) mpl_ttf_dir = os.path.join(mpl_data_dir, "fonts", "ttf") # Copy the font files to matplotlib's True Type font directory # (I originally tried to move the font files instead of copy them, # but it did not seem to work, so I gave up.) cp_ttf_dir = os.path.join(os.path.dirname(mypkg.__file__), "ttf") for file_name in os.listdir(cp_ttf_dir): if file_name[-4:] == ".ttf": old_path = os.path.join(cp_ttf_dir, file_name) new_path = os.path.join(mpl_ttf_dir, file_name) shutil.move(old_path, new_path) print("moving" + old_path + " -> " + new_path) # Try to delete matplotlib's fontList cache mpl_cache_dir = mpl.get_cachedir() mpl_cache_dir_ls = os.listdir(mpl_cache_dir) if "fontList.cache" in mpl_cache_dir_ls: fontList_path = os.path.join(mpl_cache_dir, "fontList.cache") os.remove(fontList_path) print("Deleted the matplotlib fontList.cache") except: warnings.warn( "An issue occured while installing the custom fonts for mypkg." ) raise # Set up the machinery to install custom fonts. Subclass the setup tools install # class in order to run custom commands during installation. class MoveTTF(install): def run(self): """ Performs the usual install process and then copies the True Type fonts that come with shithappens into matplotlib's True Type font directory, and deletes the matplotlib fontList.cache. """ # Perform the usual install process install.run(self) self.execute(_post_install, (), "Moving Custom Font to matplotlib.") setup_movettf = partial( setup, package_data={"": ["ttf/*.ttf"]}, cmdclass={"install": MoveTTF}, ) setup_movettf() I used a partial setup, because I expect build to fill in all the other stuff from pyproject.toml. pyproject.toml [build-system] requires = ["setuptools>=61.0", "matplotlib"] build-backend = "setuptools.build_meta" [project] name = "mypkg" etc... [tool.setuptools.packages.find] where = ["src"] [tool.setuptools.package-data] mypkg = ["ttf/*.ttf"] What I get When I use python -m build, I get ModuleNotFoundError: No module named 'mypkg' When I use python setup.py sdist and python setup.py bdist_wheel, the wheel is successfully build. When installing the wheel with pip install dist/mypkg-[].whl, the install is successful and the matplotlib ttf library is populated with the custom fonts. What I expect I expect python -m build to work with importing mypkg post-install, like the setuptools cli does.
You can not do this with Python packaging tools. You need to go beyond that. Some platform specific packaging tools. Think .deb/apt on Debian/Ubuntu or full blown executable installers on Windows (or .msi if it still exists). This is for operations at "install-time"... ... but you do not need to do this at "install-time", the code of your application or library can contain functions to do this installation of the fonts at run-time. On the other hand, if you and your users only ever use the setup.py script, without using pip at all (or any other modern installer), then it will likely still work, because obviously you can put any code you want in setup.py without restriction. There might be some more details in these: Should there be a new standard for installing arbitrary data files?
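For the run-time option, a minimal sketch (package and folder names follow the question's layout and are placeholders; matplotlib.font_manager.fontManager.addfont exists in matplotlib >= 3.2 and registers a font without copying anything into matplotlib's own directories):

import os
import matplotlib.font_manager as fm
import mypkg  # placeholder for your package

def register_bundled_fonts():
    # register every .ttf shipped inside the package with matplotlib's font manager
    ttf_dir = os.path.join(os.path.dirname(mypkg.__file__), "ttf")
    for file_name in os.listdir(ttf_dir):
        if file_name.endswith(".ttf"):
            fm.fontManager.addfont(os.path.join(ttf_dir, file_name))

# call once, e.g. in mypkg/__init__.py, before any plotting happens
register_bundled_fonts()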
4
1
75,022,315
2023-1-5
https://stackoverflow.com/questions/75022315/attributeerror-dataframe-object-has-no-attribute-write-trying-to-upload-a
I have created a dataframe in databricks as a combination of multiple dataframes. I am now trying to upload that df to a table in my database and I have used this code many times before with no problem, but now it is not working. My code is df.write.saveAsTable("dashboardco.AccountList") getting the error: AttributeError: 'DataFrame' object has no attribute 'write' Thanks for any help!
Most probably your DataFrame is a pandas DataFrame object, not a Spark DataFrame object; pandas DataFrames have no write attribute. Try: spark.createDataFrame(df).write.saveAsTable("dashboardco.AccountList")
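A quick check to confirm the diagnosis before converting (the small dataframe here is only a stand-in for your combined one):

import pandas as pd
df = pd.DataFrame({"a": [1, 2]})   # stand-in for your combined dataframe
print(type(df))                    # <class 'pandas.core.frame.DataFrame'>
print(hasattr(df, "write"))        # False: a pandas DataFrame has no .write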
5
7
75,023,979
2023-1-5
https://stackoverflow.com/questions/75023979/odd-colours-in-cairo-conversion-to-pygame
When I run this: import pygame import cairo WIDTH, HEIGHT = 640, 480 pygame.display.init() screen = pygame.display.set_mode((WIDTH, HEIGHT), 0, 32) screen.fill((255, 255, 255)) surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT) ctx = cairo.Context(surface) ctx.set_source_rgb(0, 0, 1) ctx.rectangle(0, 0, 100, 100) ctx.fill() buf = surface.get_data() img = pygame.image.frombuffer(buf, (WIDTH, HEIGHT), "ARGB") screen.blit(img, (0, 0)) pygame.display.flip() clock = pygame.time.Clock() while not pygame.QUIT in [e.type for e in pygame.event.get()]: clock.tick(30) the result is a window with a blue rectangle, as I would expect. However, if I change ctx.set_source_rgb(0, 0, 1) with ctx.set_source_rgb(0, 0, 0), i.e. change the colour from (0, 0, 1) to (0, 0, 0), I get nothing at all. The expected behaviour would be that the rectangle is black. When I modify the blue value between 0 and 1, the opacity seems to change while the actual blue value seems to stay constant. The r and g values work as expected when b = 1. The same happens with set_source_rgba.
This is a problem of endianness, or byte order. Cairo's pixel formats are endian-dependent, and pygame's pixel formats are independent of endianness. See https://github.com/pygame/pygame/issues/2972 for more of a discussion of interoperability. So if the bytes are flipped, and you say blue (0, 0, 1), you fill out the ARGB like [1, 0, 0, 1], because the alpha is also 1 by default. Flip [1, 0, 0, 1] and you still have the same thing. However, if you fill in ARGB for black, it goes to [1, 0, 0, 0]. Interpreting that as BGRA yields R=0, G=0, B=1, A=0. Because alpha is 0, it's completely transparent and nothing is drawn. On a little-endian system (like yours), Cairo's ARGB is equivalent to pygame's BGRA. This was just added to pygame, so you'll have to update to pygame 2.1.3.dev8 or higher. (Right now that's the highest release of pygame, and it's a pre-release, so you'd need pip install pygame --upgrade --pre) Then you can do img = pygame.image.frombuffer(buf, (WIDTH, HEIGHT), "BGRA")
3
4
75,023,226
2023-1-5
https://stackoverflow.com/questions/75023226/why-is-pip-not-letting-me-install-torch-1-9-1cu111-in-a-new-conda-env-when-i-h
When I run the pip install in the new conda env: (base) brando9~ $ pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html Looking in links: https://download.pytorch.org/whl/torch_stable.html ERROR: Could not find a version that satisfies the requirement torch==1.9.1+cu111 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2) ERROR: No matching distribution found for torch==1.9.1+cu111 the other env with that pytorch version: (metalearning3.9) [pzy2@vision-submit ~]$ pip list Package Version Location ---------------------------------- -------------------- ------------------ absl-py 1.0.0 aiohttp 3.8.3 aiosignal 1.3.1 alabaster 0.7.12 anaconda-client 1.9.0 anaconda-project 0.10.1 antlr4-python3-runtime 4.8 anyio 2.2.0 appdirs 1.4.4 argcomplete 2.0.0 argh 0.26.2 argon2-cffi 20.1.0 arrow 0.13.1 asn1crypto 1.4.0 astroid 2.6.6 astropy 4.3.1 asttokens 2.0.7 astunparse 1.6.3 async-generator 1.10 async-timeout 4.0.2 atomicwrites 1.4.0 attrs 21.2.0 autopep8 1.5.7 Babel 2.9.1 backcall 0.2.0 backports.shutil-get-terminal-size 1.0.0 beautifulsoup4 4.10.0 binaryornot 0.4.4 bitarray 2.3.0 bkcharts 0.2 black 19.10b0 bleach 4.0.0 bokeh 2.4.1 boto 2.49.0 Bottleneck 1.3.2 brotlipy 0.7.0 cached-property 1.5.2 cachetools 5.0.0 certifi 2021.10.8 cffi 1.14.6 chardet 4.0.0 charset-normalizer 2.0.4 cherry-rl 0.1.4 click 8.0.3 cloudpickle 2.0.0 clyent 1.2.2 colorama 0.4.4 conda 4.12.0 conda-content-trust 0+unknown conda-pack 0.6.0 conda-package-handling 1.8.0 conda-token 0.3.0 configparser 5.3.0 contextlib2 0.6.0.post1 cookiecutter 1.7.2 crc32c 2.3 crcmod 1.7 cryptography 3.4.8 cycler 0.10.0 Cython 0.29.24 cytoolz 0.11.0 daal4py 2021.3.0 dask 2021.10.0 debugpy 1.4.1 decorator 5.1.0 defusedxml 0.7.1 diff-match-patch 20200713 dill 0.3.4 distributed 2021.10.0 docker-pycreds 0.4.0 docutils 0.17.1 entrypoints 0.3 et-xmlfile 1.1.0 executing 0.9.1 fairseq 0.12.2 /home/pzy2/fairseq fastcache 1.1.0 fastcluster 1.2.6 fasteners 0.17.3 filelock 3.3.1 flake8 3.9.2 Flask 1.1.2 flatbuffers 2.0.7 fonttools 4.25.0 frozenlist 1.3.0 fsspec 2021.8.1 gast 0.4.0 gcs-oauth2-boto-plugin 3.0 gevent 21.8.0 gitdb 4.0.9 GitPython 3.1.27 glob2 0.7 gmpy2 2.0.8 google-apitools 0.5.32 google-auth 2.6.3 google-auth-oauthlib 0.4.6 google-pasta 0.2.0 google-reauth 0.1.1 gql 0.2.0 graphql-core 1.1 greenlet 1.1.1 grpcio 1.44.0 gsutil 5.9 gym 0.22.0 gym-notices 0.0.6 h5py 3.3.0 HeapDict 1.0.1 higher 0.2.1 html5lib 1.1 httplib2 0.20.4 huggingface-hub 0.5.1 hydra-core 1.0.7 idna 3.2 imagecodecs 2021.8.26 imageio 2.9.0 imagesize 1.2.0 importlib-metadata 4.12.0 inflection 0.5.1 iniconfig 1.1.1 intervaltree 3.1.0 ipykernel 6.4.1 ipython 7.29.0 ipython-genutils 0.2.0 ipywidgets 7.6.5 isort 5.9.3 itsdangerous 2.0.1 jdcal 1.4.1 jedi 0.18.0 jeepney 0.7.1 Jinja2 2.11.3 jinja2-time 0.2.0 joblib 1.1.0 json5 0.9.6 jsonschema 3.2.0 jupyter 1.0.0 jupyter-client 6.1.12 jupyter-console 6.4.0 jupyter-core 4.8.1 jupyter-server 1.4.1 jupyterlab 3.2.1 jupyterlab-pygments 0.1.2 jupyterlab-server 2.8.2 jupyterlab-widgets 1.0.0 keras 
2.10.0 Keras-Preprocessing 1.1.2 keyring 23.1.0 kiwisolver 1.3.1 lark-parser 0.12.0 lazy-object-proxy 1.6.0 learn2learn 0.1.7 libarchive-c 2.9 libclang 14.0.6 littleutils 0.2.2 llvmlite 0.37.0 locket 0.2.1 loguru 0.6.0 lxml 4.6.3 Markdown 3.3.6 MarkupSafe 1.1.1 matplotlib 3.4.3 matplotlib-inline 0.1.2 mccabe 0.6.1 mistune 0.8.4 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 mock 4.0.3 monotonic 1.6 more-itertools 8.10.0 mpmath 1.2.1 msgpack 1.0.2 multidict 6.0.2 multipledispatch 0.6.0 munkres 1.1.4 mypy-extensions 0.4.3 nbclassic 0.2.6 nbclient 0.5.3 nbconvert 6.1.0 nbformat 5.1.3 nest-asyncio 1.5.1 networkx 2.6.3 nltk 3.6.5 nose 1.3.7 notebook 6.4.5 numba 0.54.1 numexpr 2.7.3 numpy 1.20.3 numpydoc 1.1.0 nvidia-ml-py3 7.352.0 nvidia-smi 0.1.3 oauth2client 4.1.3 oauthlib 3.2.0 olefile 0.46 omegaconf 2.0.6 opencv-python 4.6.0.66 openpyxl 3.0.9 opt-einsum 3.3.0 ordered-set 4.1.0 packaging 21.0 pandas 1.3.4 pandocfilters 1.4.3 parso 0.8.2 partd 1.2.0 path 16.0.0 pathlib2 2.3.6 pathspec 0.7.0 pathtools 0.1.2 patsy 0.5.2 pep8 1.7.1 pexpect 4.8.0 pickleshare 0.7.5 Pillow 8.4.0 pip 22.2.2 pkginfo 1.7.1 plotly 5.7.0 pluggy 0.13.1 ply 3.11 portalocker 2.5.1 poyo 0.5.0 progressbar2 4.0.0 prometheus-client 0.11.0 promise 2.3 prompt-toolkit 3.0.20 protobuf 3.19.6 psutil 5.8.0 ptyprocess 0.7.0 py 1.10.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycodestyle 2.7.0 pycosat 0.6.3 pycparser 2.20 pycurl 7.44.1 pydocstyle 6.1.1 pyerfa 2.0.0 pyflakes 2.3.1 Pygments 2.10.0 pylint 2.9.6 pyls-spyder 0.4.0 pyodbc 4.0.0-unsupported pyOpenSSL 21.0.0 pyparsing 3.0.4 pyrsistent 0.18.0 PySocks 1.7.1 pytest 6.2.4 python-dateutil 2.8.2 python-lsp-black 1.0.0 python-lsp-jsonrpc 1.0.0 python-lsp-server 1.2.4 python-slugify 5.0.2 python-utils 3.1.0 pytz 2021.3 pyu2f 0.1.5 PyWavelets 1.1.1 pyxdg 0.27 PyYAML 6.0 pyzmq 22.2.1 QDarkStyle 3.0.2 qpth 0.0.15 qstylizer 0.1.10 QtAwesome 1.0.2 qtconsole 5.1.1 QtPy 1.10.0 regex 2021.8.3 requests 2.26.0 requests-oauthlib 1.3.1 retry-decorator 1.1.1 rope 0.19.0 rsa 4.7.2 Rtree 0.9.7 ruamel-yaml-conda 0.15.100 sacrebleu 2.2.0 sacremoses 0.0.49 scikit-image 0.18.3 scikit-learn 0.24.2 scikit-learn-intelex 2021.20210714.170444 scipy 1.7.1 seaborn 0.11.2 SecretStorage 3.3.1 Send2Trash 1.8.0 sentry-sdk 1.5.9 setproctitle 1.2.2 setuptools 58.0.4 shortuuid 1.0.8 simplegeneric 0.8.1 singledispatch 3.7.0 sip 4.19.13 six 1.16.0 sklearn 0.0 smmap 5.0.0 sniffio 1.2.0 snowballstemmer 2.1.0 sorcery 0.2.2 sortedcollections 2.1.0 sortedcontainers 2.4.0 soupsieve 2.2.1 Sphinx 4.2.0 sphinxcontrib-applehelp 1.0.2 sphinxcontrib-devhelp 1.0.2 sphinxcontrib-htmlhelp 2.0.0 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-qthelp 1.0.3 sphinxcontrib-serializinghtml 1.1.5 sphinxcontrib-websupport 1.2.4 spyder 5.1.5 spyder-kernels 2.1.3 SQLAlchemy 1.4.22 statsmodels 0.12.2 subprocess32 3.5.4 sympy 1.9 tables 3.6.1 TBB 0.2 tblib 1.7.0 tensorboard 2.10.1 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 tensorflow-estimator 2.10.0 tensorflow-gpu 2.10.1 tensorflow-io-gcs-filesystem 0.27.0 termcolor 2.0.1 terminado 0.9.4 testpath 0.5.0 text-unidecode 1.3 textdistance 4.2.1 tfrecord 1.14.1 threadpoolctl 2.2.0 three-merge 0.1.1 tifffile 2021.7.2 timm 0.6.11 tinycss 0.4 tokenizers 0.11.6 toml 0.10.2 toolz 0.11.1 torch 1.9.1+cu111 torchaudio 0.9.1 torchmeta 1.8.0 torchtext 0.10.1 torchvision 0.10.1+cu111 tornado 6.1 tqdm 4.62.3 traitlets 5.1.0 transformers 4.18.0 typed-ast 1.4.3 typing-extensions 3.10.0.2 ujson 4.0.2 ultimate-anatome 0.1.1 ultimate-aws-cv-task2vec 0.0.1 unicodecsv 0.14.1 Unidecode 1.2.0 urllib3 1.26.7 wandb 
0.13.5 watchdog 2.1.3 wcwidth 0.2.5 webencodings 0.5.1 Werkzeug 2.0.2 wheel 0.37.0 whichcraft 0.6.1 widgetsnbextension 3.5.1 wrapt 1.12.1 wurlitzer 2.1.1 xlrd 2.0.1 XlsxWriter 3.0.1 xlwt 1.3.0 yapf 0.31.0 yarl 1.7.2 zict 2.0.0 zipp 3.6.0 zope.event 4.5.0 zope.interface 5.4.0 WARNING: You are using pip version 22.2.2; however, version 22.3.1 is available. You should consider upgrading via the '/home/pzy2/miniconda3/envs/metalearning3.9/bin/python -m pip install --upgrade pip' command. (metalearning3.9) [pzy2@vision-submit ~]$ I asked a related question because I can't install pytorch with cuda with conda, see details here: why does conda install the pytorch CPU version despite me putting explicitly to download the cuda toolkit version? I think this works: # -- Install PyTorch sometimes requires more careful versioning due to cuda, ref: official install instruction https://pytorch.org/get-started/previous-versions/ # you need python 3.9 for torch version 1.9.1 to work, due to torchmeta==1.8.0 requirement if ! python -V 2>&1 | grep -q 'Python 3\.9'; then echo "Error: Python 3.9 is required!" exit 1 fi pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
To install pytorch 1.9.1+cu111 you need python 3.9 to be available. Added that to my bash install.sh # - create conda env conda create -n metalearning_gpu python=3.9 conda activate metalearning_gpu ## conda remove --name metalearning_gpu --all # - make sure pip is up to date which python pip install --upgrade pip pip3 install --upgrade pip which pip which pip3 # -- Install PyTorch sometimes requires more careful versioning due to cuda, ref: official install instruction https://pytorch.org/get-started/previous-versions/ # you need python 3.9 for torch version 1.9.1 to work, due to torchmeta==1.8.0 requirement if ! python -V 2>&1 | grep -q 'Python 3\.9'; then echo "Error: Python 3.9 is required!" exit 1 fi pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
3
1
75,020,740
2023-1-5
https://stackoverflow.com/questions/75020740/dbt-postgres-all-models-appending-schema-public-to-output
I am testing a local setup of dbt-postgres. I have a simple model, but for some reason, any table created is being placed in a schema with the prefix public appended to it. Desired output table: public.test Current output table: public_public.test As you can see, the public schema is being duplicated here. Using another schema in the model also creates a new schema with public_ prefix. Simple model test.sql file: {{ config(materialized='table', schema='public') }} select a from table x dbt_profile.yml name: 'abc' version: '0.1' config-version: 2 profile: 'abc' model-paths: ["models"] analysis-paths: ["analyses"] test-paths: ["tests"] seed-paths: ["seeds"] macro-paths: ["macros"] snapshot-paths: ["snapshots"] target-path: "target" clean-targets: - "target" - "dbt_packages" - "dbt_modules" - "logs" models: abc: materialized: table profiles.yml abc: outputs: dev: type: postgres threads: 1 host: "localhost" port: 5432 user: "admin" pass: "admin" dbname: database schema: "public" target: dev
See the docs on custom schemas. You are defining public as the "target schema" in your profiles.yml file. You do not need to add {{ config(schema='public') }} to your model file; that config sets a "custom schema" for that model, and by default, dbt will land your model at <target_schema>_<custom_schema>. You can change this behavior if you want, by overriding the built-in macro called generate_schema_name, but in this case you can just remove the config from your model, and let the profile determine the schema.
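If you later do want a model's custom schema to be used verbatim (with no <target_schema>_ prefix), the override sketched below, adapted from the pattern in the dbt docs, goes in a file such as macros/generate_schema_name.sql (treat it as a sketch, not the exact built-in source):

{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}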
5
5
75,021,051
2023-1-5
https://stackoverflow.com/questions/75021051/convert-pandas-series-of-strings-to-a-series-of-lists
For instance, I have a dataframe as below import pandas as pd df = pd.DataFrame({"col":['AM RLC, F C', 'AM/F C', 'DM','D C']}) |col -------------------| 0 |"AM RLC, F C" | 1 |"AM/F C" | 2 |"DM" | 3 |"D C" | My expected output is as follows: |col ----|-----------------------| 0 |["AM", "RLC", "F", "C"]| 1 |["AM", "F", "C"] | 2 |["DM" ] | 3 |["D", "C"] | ",", "/" and "space" should be treated as delimiters. The answers in this question do not answer my queries
I would use str.split or str.findall: df['col'] = df['col'].str.split('[\s,/]+') # or df['col'] = df['col'].str.findall('\w+') Output: col 0 [AM, RLC, F, C] 1 [AM, F, C] 2 [DM] 3 [D, C] Regex: [\s,/]+ # at least one of space/comma/slash with optional repeats \w+ # one or more word characters
3
4
75,014,661
2023-1-5
https://stackoverflow.com/questions/75014661/why-does-my-while-loop-inside-function-keep-returning-true
I am trying to learn while loops. To practice, I created a simple while loop with an If statement inside a function to check and see if a word is a palindrome. For some reason, even if the word is not a palindrome, it keeps returning True. I expected the output of the print function on the last line to be False: from collections import deque word = "tacrocat" def check_palindrome(word): d = deque(word) while len(d) > 1: if d.pop() == d.popleft(): return True return False print(check_palindrome(word)) NOTE: When I change the if statement evaluation to "!=", change the return statement inside the if statement to False, and change the return statement in the while loop to True, it seems to accurately detect palindromes - but I have no idea why.
If the program is written like this: from collections import deque word = "tacrocat" def check_palindrome(word): d = deque(word) while len(d) > 1: if d.pop() == d.popleft(): return True return False print(check_palindrome(word)) At iteration-1: Since both the first character('t') and last character('t') in the deque are equal it will enter the if condition and return True. When a return statement is executed in a function, then the control comes out of the function. In other words, as soon as return True statement is executed, the control comes back to print statement without executing remaining iterations of while loop and since we returned True, so True will be printed as output. Let's analyze second program: from collections import deque word = "tacrocat" def check_palindrome(word): d = deque(word) while len(d) > 1: if d.pop() != d.popleft(): return False return True print(check_palindrome(word)) At iteration-1: Current deque: [t,a,c,r,o,c,a,t] We are popping both first element and last element and checking whether they are not equal in if condition. Since 't' and 't' are equal, if condition will not execute and while loop will continue. At iteration-2: Current deque: [a,c,r,o,c,a] Since 'a' and 'a' are equal, if condition will not execute and while loop will continue. Both first and last elements are popped (pop and popleft) At iteration-3: Current deque: [c,r,o,c] Since 'c' and 'c' are equal, if condition will not execute and while loop will continue. Both first and last elements are popped (pop and popleft) At iteration-4: Current deque: [r,o] Since 'r' and 'o' are not equal, if condition will execute and return False statement is executed. So, the function check_palindrome will terminate with return value as False and so we get output as False. The second program is correct because to check if a word is palindrome or not, we need to check all letters whether they are satisfying the condition or not. Just checking only the first and last character of a word and if they are equal, then it doesn't mean the remaining letters are same. We need to check them too.
3
2
75,010,631
2023-1-4
https://stackoverflow.com/questions/75010631/python-typing-cast-vs-built-in-casting
Is there any difference between the typing.cast function and the built-in cast function? x = 123 y = str(x) from typing import cast x = 123 y = cast(str, x) I expected that mypy might not like the first case and would prefer the typing.cast but this was not the case.
str(x) returns a new str object, independent of the original int. It's only an example of "casting" in a very loose sense (and one I don't think is useful, at least in the context of Python code). cast(str, x) simply returns x, but tells a type checker to pretend that the return value has type str, no matter what type x may actually have. Because Python variables have no type (type is an attribute of a value), there's no need for casting in the sense that languages like C use it (where you can change how the contents of a variable are viewed based on the type you cast the variable to).
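A quick runtime illustration of the difference (the values here are just an example):

from typing import cast

x = 123
y = cast(str, x)    # no conversion happens; this only informs the type checker
print(y is x)       # True  - cast() returned the very same int object
print(type(y))      # <class 'int'> - still an int at runtime
z = str(x)          # str() really does build a new str object
print(z, type(z))   # 123 <class 'str'>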
10
17
75,003,385
2023-1-4
https://stackoverflow.com/questions/75003385/what-is-equivalent-protocol-of-python-callable
I always thought that Callable is equivalent to having the dunder __call__ but apparently there is also __name__, because the following code is correct for mypy --strict: def print_name(f: Callable[..., Any]) -> None: print(f.__name__) def foo() -> None: pass print_name(foo) print_name(lambda x: x) What is the actual interface of Python's Callable? I dug out what functools.wraps does. AFAIU it sets ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__') - is that the same as what a Callable is expected to have?
So the mypy position up until now seems to have been that most of the time, when a variable is annotated with Callable, the user expects it to stand for a user-defined function (i.e. def something(...): ...). Even though user-defined functions are technically a subtype of Callable and even though they are the ones that define a number of those attributes you mentioned, some users are not aware of this distinction and would be surprised if mypy raised an error with code like this: from collections.abc import Callable from typing import Any def f(cal: Callable[..., Any]) -> None: print(cal.__name__) print(cal.__globals__) print(cal.__kwdefaults__) print(cal.foo) Each of those print-lines should be an error, yet only the last actually triggers one. Moreover, if we define a minimal callable class that doesn't have those attributes, it is treated as a subtype of Callable by both Python and mypy, creating a logical contradiction: class Bar: def __call__(self) -> None: print(f"hi mom") f(Bar()) # this is valid print(Bar().__name__) # this is an error Their argument so far amounts to maintaining convenience for users that have so far failed to see the distinction between callable subtypes, and by extension avoiding confused issues being opened by those users, asking why callables shouldn't have __name__ or those other attributes. (I hope I am being charitable enough with my interpretation.) I find this to be a very odd position (to put it mildly) and I expressed as much in the issue I opened for this. I'll keep this answer updated if any new insights are reached in the discussion around the issue. Bottom line is: You are right, callables must have the __call__ method and do not require anything else.
6
4
75,004,868
2023-1-4
https://stackoverflow.com/questions/75004868/interesting-results-with-the-increment-operator
I had learned that n = n + v and n += v are the same. Until this; def assign_value(n, v): n += v print(n) l1 = [1, 2, 3] l2 = [4, 5, 6] assign_value(l1, l2) print(l1) The output will be: [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 6] Now when I use the expanded version: def assign_value(n, v): n = n + v print(n) l1 = [1, 2, 3] l2 = [4, 5, 6] assign_value(l1, l2) print(l1) The output will be: [1, 2, 3, 4, 5, 6] [1, 2, 3] Using the += has a different result with the fully expanded operation. What is causing this?
It may seem counter-intuitive, but they are not always the same. In fact, a = a + b means a = a.__add__(b), creating a new object a += b means a = a.__iadd__(b), mutating the object __iadd__, if absent, defaults to the __add__, but it also can (and it does, in the case of lists) mutate the original object in-place.
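A short demonstration of the consequence (list contents are arbitrary):

a = [1, 2, 3]
b = a           # b refers to the same list object as a
a += [4]        # list.__iadd__ mutates that object in place
print(b)        # [1, 2, 3, 4] - b sees the change
print(a is b)   # True

a = [1, 2, 3]
b = a
a = a + [4]     # list.__add__ builds a new list; a is rebound, b is not
print(b)        # [1, 2, 3]
print(a is b)   # False

This is exactly why the += version in the question mutates the caller's l1, while the n = n + v version only rebinds the local name n inside the function.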
5
2
75,000,107
2023-1-4
https://stackoverflow.com/questions/75000107/sum-values-of-each-tuples-enclosed-of-two-lists-the-problem-is-that-they-add-to
I would like to sum the values of each tuple enclosed in two lists. The output I would like to get is: 125, 200.0, 100.0. The problem is that the values don't get summed; the tuples get joined together instead, like this: [(87.5, 37.5), (125.0, 75.0), (50.0, 50.0)]. I need first and second to stay the same as mine, without changing any parentheses. I've searched for many similar answers on Stack Overflow, but haven't found any answers for my case. How can I edit calc and fix it? Thank you! Code: first = [(87.5,), (125.0,), (50.0,)] second = [(37.5,), (75.0,), (50.0,)] calc = [x + y for x, y in zip(first, second)] print(calc)
The problem is that you are trying to add the tuples (if you do type(x) or type(y) you see that those are tuple values and not the specific floats that you have) if you want to add the values inside of the tuples then you have to access the elements you can do it like so: first = [(87.5,), (125.0,), (50.0,)] second = [(37.5,), (75.0,), (50.0,)] calc = [x[0] + y[0] for x, y in zip(first, second)] # accessing the first element of x and y -- as there is only 1 element in each of the tuples. # if you had more elements in each tuple you could do the following: calc = [sum(x) + sum(y) for x, y in zip(first, second)] print(calc) print(first, second)
3
3
74,953,540
2022-12-29
https://stackoverflow.com/questions/74953540/what-is-np-ndarrayany-np-dtypenp-float64-and-why-does-np-typing-ndarray
The documentation for np.typing.NDArray says that it is "a generic version of np.ndarray[Any, np.dtype[+ScalarType]]". Where is the generalization in "generic" happening? And in the documentation for numpy.ndarray.__class_getitem__ we have this example np.ndarray[Any, np.dtype[Any]] with no explanation as to what the two arguments are. And why can I do np.ndarray[float], ie just use one argument? What does that mean?
Note from the future: as of NumPy 2.0 the docs is more explicit to say A np.ndarray[Any, np.dtype[+ScalarType]] type alias generic w.r.t. its dtype.type. and as of 2.2 (dev docs currently) the type alias is changed to NDArray = np.ndarray[tuple[int, ...], np.dtype[+ScalarType]]. This now makes it clearer what the type alias represents, and what I concluded in my original answer below. "Generic" in this context means "generic type" (see also the Glossary), typing-related objects that can be subscripted to generate more specific type "instances" (apologies for the sloppy jargon, I'm not well-versed in typing talk). Think typing.List that lets you use List[int] to denote a homogeneous list of ints. As of Python 3.9 most standard-library collections have been upgraded to be compatible with typing as generic types themselves. Since tuple[foo] used to be invalid until 3.9, it was safe to allow tuple[int, int] to mean the same thing that typing.Tuple[int, int] used to mean: a tuple of two integers. So as of 3.9 NumPy also allows using the np.ndarray type as a generic, this is what np.ndarray[Any, np.dtype[Any]] does. This "signature" matches the actual signature of np.ndarray.__init__() (__new__() if we want to be correct): class numpy.ndarray(shape, dtype=float, ...) So what np.ndarray[foo, bar] does is create a type for type hinting that means "a NumPy array of shape type foo and dtype bar". People normally don't call np.ndarray() directly anyway (rather using helpers such as np.array() or np.full_like() and the like), so this is doubly fine in NumPy. Now, since most code runs with arrays of more than one possible number of dimensions, it would be a pain to have to specify an arbitrary number of lengths for the shape tuple (the first "argument" of np.ndarray as a generic type). I assume this was the motivation to define a type alias that is still a generic in the second "argument". This is np.typing.NDArray. It lets you easily type hint something as an array of a given type without having to say anything about the shape, covering a vast subset of use cases (which would otherwise use np.ndarray[typing.Any, ...]). And this is still a generic, since you can parameterise it with a dtype. To quote the docs: >>> print(npt.NDArray) numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]] >>> print(npt.NDArray[np.float64]) numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]] As usual with generics, you're allowed to specify an argument to the generic type, but you're not required to. ScalarType is derived from np.generic, a base class that covers most (maybe all) NumPy scalar types. And the library code that defines NDArray is here, and is fairly transparent to the point of calling the helper _GenericAlias for older Python (a backport of typing.GenericAlias). What you have at the end is a type alias that is still generic in one variable. To address the last part of your question: And why can I do np.ndarray[float], ie just use one argument? What does that mean? I think the anticlimactic explanation is that we again need to look at the signature of np.ndarray(): class numpy.ndarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None) There's one mandatory parameter (shape), all the others are optional. So I believe that np.ndarray[float] specifies that it corresponds to arrays whose shape is of type float (i.e. nonsense). There's an explicit check to only allow 1 or 2 parameters in the generic type: args_len = PyTuple_Check(args) ? 
PyTuple_Size(args) : 1; if ((args_len > 2) || (args_len == 0)) { return PyErr_Format(PyExc_TypeError, "Too %s arguments for %s", args_len > 2 ? "many" : "few", ((PyTypeObject *)cls)->tp_name); } generic_alias = Py_GenericAlias(cls, args); This snippet checks that one or two arguments were passed to __class_getitem__, raises otherwise, and in the valid cases defers to the C API version of typing.GenericAlias. I'm pretty sure that there's no technical reason to exclude the other parameters of the ndarray constructor from the generic type, but there was a semantic reason that the third parameter, buffer, makes no sense to be included in typing (or there was just a general push to reduce the generality of the generic type to most common use cases). All that being said, I haven't been able to construct a small example in which a type passed for the shape of the generic type leads to type checking errors in mypy. From several attempts it seems as if the shape was always checked as if it were typing.Any rather than whatever was passed as the first parameter of np.ndarray[...]. For instance, consider the following example: import numpy as np first: np.ndarray[tuple[int], np.dtype[np.int64]] = np.arange(3) # OK second: np.ndarray[tuple[int], np.dtype[np.int64]] = np.arange(3.0) # error due to dtype mismatch third: np.ndarray[tuple[int, int, int], np.dtype[np.int64]] = np.arange(3) # no error even though shape mismatch Running mypy 0.991 on Python 3.9 on this gives foo.py:5: error: Incompatible types in assignment (expression has type "ndarray[Any, dtype[floating[Any]]]", variable has type "ndarray[Tuple[int], dtype[signedinteger[_64Bit]]]") [assignment] Found 1 error in 1 file (checked 1 source file) Only the dtype mismatch is found, but not the shape mismatch. And I see the same thing if I use np.ndarray((3,), dtype=...) instead of np.arange(), so it's not just due to weird typing of the np.arange() helper (although I used it as an example because this is one function that's guaranteed to return a 1d array). Since I can't explain this behaviour I can't be certain that my understanding is correct, but I have no better model. To come back to a question you asked in a comment: Right, so then am I right in understanding that np.ndarray[int] is like np.ndarray[Any, int]?
No, at least we can exclude this (and what we see here is consistent with the first parameter only affecting the shape to whatever extent it does affect it): from typing import Any import numpy as np first: np.ndarray[Any, np.dtype[np.int_]] = np.arange(3) # OK because dtype matches second: np.ndarray[np.dtype[np.int_]] = np.arange(3) # OK because shape check doesn't actually work, and dtype is left as "any scalar" third: np.ndarray[Any, np.dtype[np.int_]] = np.arange(3.0) # error due to dtype mismatch fourth: np.ndarray[np.dtype[np.int_]] = np.arange(3.0) # no error, so this can't be the same as the third option The result from mypy: foo.py:7: error: "ndarray" expects 2 type arguments, but 1 given [type-arg] foo.py:9: error: Incompatible types in assignment (expression has type "ndarray[Any, dtype[floating[Any]]]", variable has type "ndarray[Any, dtype[signedinteger[Any]]]") [assignment] foo.py:11: error: "ndarray" expects 2 type arguments, but 1 given [type-arg] Found 3 errors in 1 file (checked 1 source file) The four cases: first: explicitly typed as "int array with any shape", no error on type correct assignment second: typed as "array with an int-typed shape and any dtype", should fail because that's nonsense but doesn't (see the earlier musing about the impotence of shape type checks) third: explicitly typed as another "int array with any shape", being assigned a double array, leading to an error fourth: typed as "array with an int-typed shape and any dtype", leading to no error (see second). Since the third case leads to an error and the fourth doesn't, they can't be aliases of one another. It is also notable that mypy complains about the two lines where np.ndarray[np.dtype[np.int_]] is present: foo.py:7: error: "ndarray" expects 2 type arguments, but 1 given [type-arg] This sounds like a single-parameter use of the generic is forbidden as far as mypy is concerned. I'm not sure why this is the case, but this would certainly simplify the situation.
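For completeness, a minimal sketch of how the alias is typically used in annotations (the function and variable names here are made up for illustration):

import numpy as np
import numpy.typing as npt

def mean_of(values: npt.NDArray[np.float64]) -> float:
    # npt.NDArray[np.float64] reads as "an ndarray of any shape whose dtype is float64"
    return float(values.mean())

arr: npt.NDArray[np.float64] = np.zeros(3)
print(mean_of(arr))  # 0.0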
6
9
74,981,940
2023-1-2
https://stackoverflow.com/questions/74981940/performing-integer-based-rolling-window-group-by-using-python-polars
I have a outer/inner loop-based function I'm trying to vectorise using Python Polars DataFrames. The function is a type of moving average and will be used to filter time-series financial data. Here's the function: def ma_j(df_src: pl.DataFrame, depth: float): jrc04 = 0.0 jrc05 = 0.0 jrc06 = 0.0 jrc08 = 0.0 series = df_src['close'] for x in range(0, len(series)): if x >= x - depth*2: for k in np.arange(start=math.ceil(depth), stop=0, step=-1): jrc04 = jrc04 + abs(series[x-k] - series[x-(k+1)]) jrc05 = jrc05 + (depth + k) * abs(series[x-k] - series[x-(k+1)]) jrc06 = jrc06 + series[x-(k+1)] else: jrc03 = abs(series - (series[1])) jrc13 = abs(series[x-depth] - series[x - (depth+1)]) jrc04 = jrc04 - jrc13 + jrc03 jrc05 = jrc05 - jrc04 + jrc03 * depth jrc06 = jrc06 - series[x - (depth+1)] + series[x-1] jrc08 = abs(depth * series[x] - jrc06) if jrc05 == 0.0: ma = 0.0 else: ma = jrc08/jrc05 return ma The tricky bit for me are multiple the inner loop look-backs (for k in...). I've looked through multiple examples that use group_by_dynamic on the timeseries data. For example, here. I've also seen an example for rolling, but this still seems to use a period. However, I'd like to strip away the timeseries and just use source Series. Does this mean I need to group on an integer range? Using this data example: import polars as pl import numpy as np i, t, v = np.arange(0, 50, 1), np.arange(0, 100, 2), np.random.randint(1,101,50) df = pl.DataFrame({"i": i, "t": t, "rand": v}) df = df.with_columns((pl.datetime(2022,10,30) + pl.duration(seconds=df["t"])).alias("datetime")).drop("t") cols = ["i", "datetime", "rand"] df = df.select(cols) DataFrame looks like this: shape: (50, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ i ┆ datetime ┆ rand β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ══════║ β”‚ 0 ┆ 2022-10-30 00:00:00 ┆ 87 β”‚ β”‚ 1 ┆ 2022-10-30 00:00:02 ┆ 66 β”‚ β”‚ 2 ┆ 2022-10-30 00:00:04 ┆ 30 β”‚ β”‚ 3 ┆ 2022-10-30 00:00:06 ┆ 87 β”‚ β”‚ 4 ┆ 2022-10-30 00:00:08 ┆ 74 β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 45 ┆ 2022-10-30 00:01:30 ┆ 91 β”‚ β”‚ 46 ┆ 2022-10-30 00:01:32 ┆ 52 β”‚ β”‚ 47 ┆ 2022-10-30 00:01:34 ┆ 68 β”‚ β”‚ 48 ┆ 2022-10-30 00:01:36 ┆ 26 β”‚ β”‚ 49 ┆ 2022-10-30 00:01:38 ┆ 99 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ ...I can do a grouping by datetime like this": df.group_by_dynamic("datetime", every="10s").agg( pl.col("rand").mean().alias('rolling mean') ) which gives this: But there's 3 issues with this: I don't want to group of datetime...I want to group on every row (maybe i?) in bins of [x] size. I need values against every row I would like to define the aggregation function, as per the various cases in the function above Any tips on how I could attack this using Polars? Thanks. 
---------- Edit 1 Following @ritchie46 's awesome advice (thanks mate!), here's the groupby: result_grp = ( df .rolling(index_column="i", period="10i") .agg( pl.len().alias("rolling_slots"), pl.col("rand").mean().alias("roll_mean") ) ) df2 = df.select( pl.all(), result_grp.get_column("rolling_slots"), result_grp.get_column("roll_mean"), ) This now gives: shape: (50, 5) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ i ┆ datetime ┆ rand ┆ rolling_slots ┆ roll_mean β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ i64 ┆ u32 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ══════β•ͺ═══════════════β•ͺ═══════════║ β”‚ 0 ┆ 2022-10-30 00:00:00 ┆ 23 ┆ 1 ┆ 23.0 β”‚ β”‚ 1 ┆ 2022-10-30 00:00:02 ┆ 72 ┆ 2 ┆ 47.5 β”‚ β”‚ 2 ┆ 2022-10-30 00:00:04 ┆ 46 ┆ 3 ┆ 47.0 β”‚ β”‚ 3 ┆ 2022-10-30 00:00:06 ┆ 37 ┆ 4 ┆ 44.5 β”‚ β”‚ 4 ┆ 2022-10-30 00:00:08 ┆ 12 ┆ 5 ┆ 38.0 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 45 ┆ 2022-10-30 00:01:30 ┆ 95 ┆ 10 ┆ 53.7 β”‚ β”‚ 46 ┆ 2022-10-30 00:01:32 ┆ 100 ┆ 10 ┆ 62.7 β”‚ β”‚ 47 ┆ 2022-10-30 00:01:34 ┆ 6 ┆ 10 ┆ 62.2 β”‚ β”‚ 48 ┆ 2022-10-30 00:01:36 ┆ 27 ┆ 10 ┆ 56.5 β”‚ β”‚ 49 ┆ 2022-10-30 00:01:38 ┆ 33 ┆ 10 ┆ 54.5 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ This is great; now instead of mean(), how do I apply a custom function on the grouped values, such as: f_jparams(depth_array, jrc04, jrc05, jrc06, jrc08): _depth = len(depth_array) if len(depth_array) > 3: for x in np.arange(start=1, stop=len(depth_array), step=1): jrc04 = jrc04 + abs(depth_array[x] - depth_array[x-1]) jrc05 = jrc05 + (_depth+x) * abs(depth_array[x] - depth_array[x-1]) jrc06 = jrc06 + depth_array[x-1] else: jrc03 = abs(depth_array[_depth-1] - depth_array[_depth-2]) jrc13 = abs(depth_array[0] - depth_array[1]) jrc04 = jrc04 - jrc13 + jrc03 jrc05 = jrc05 - jrc04 + jrc03*_depth jrc06 = jrc06 - depth_array[1] + depth_array[_depth-2] jrc08 = abs(_depth * depth_array[0] - jrc06) if jrc05 == 0.0: ma = 0.0 else: ma = jrc08/jrc05 return ma, jrc04, jrc05, jrc06, jrc08 Thanks! ---- Edit 2: Thanks to this post, I can collect up the items in the rand rolling group into a list for each row: depth = 10 result_grp = ( df .rolling( index_column="i", period=str(depth) + "i", # offset="0i", # closed="left" ) .agg( pl.len().alias("rolling_slots"), pl.col("rand").mean().alias("roll_mean"), pl.col("rand").name.suffix('_val_list'), ) ) df2 = df.select( pl.all(), result_grp.get_column("rolling_slots"), result_grp.get_column("roll_mean"), result_grp.get_column("rand_val_list"), ) Also from this post, I saw a way to make the rolling window period a variable; nice! Is there a way to use get_columns and exclude together so I don't have to list every col I want? 
The dataframe now looks like: shape: (50, 6) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ i ┆ datetime ┆ rand ┆ rolling_slots ┆ roll_mean ┆ rand_val_list β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ i64 ┆ u32 ┆ f64 ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ══════β•ͺ═══════════════β•ͺ═══════════β•ͺ═════════════════║ β”‚ 0 ┆ 2022-10-30 00:00:00 ┆ 23 ┆ 1 ┆ 23.0 ┆ [23] β”‚ β”‚ 1 ┆ 2022-10-30 00:00:02 ┆ 72 ┆ 2 ┆ 47.5 ┆ [23, 72] β”‚ β”‚ 2 ┆ 2022-10-30 00:00:04 ┆ 46 ┆ 3 ┆ 47.0 ┆ [23, 72, 46] β”‚ β”‚ 3 ┆ 2022-10-30 00:00:06 ┆ 37 ┆ 4 ┆ 44.5 ┆ [23, 72, … 37] β”‚ β”‚ 4 ┆ 2022-10-30 00:00:08 ┆ 12 ┆ 5 ┆ 38.0 ┆ [23, 72, … 12] β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 45 ┆ 2022-10-30 00:01:30 ┆ 95 ┆ 10 ┆ 53.7 ┆ [10, 11, … 95] β”‚ β”‚ 46 ┆ 2022-10-30 00:01:32 ┆ 100 ┆ 10 ┆ 62.7 ┆ [11, 84, … 100] β”‚ β”‚ 47 ┆ 2022-10-30 00:01:34 ┆ 6 ┆ 10 ┆ 62.2 ┆ [84, 53, … 6] β”‚ β”‚ 48 ┆ 2022-10-30 00:01:36 ┆ 27 ┆ 10 ┆ 56.5 ┆ [53, 46, … 27] β”‚ β”‚ 49 ┆ 2022-10-30 00:01:38 ┆ 33 ┆ 10 ┆ 54.5 ┆ [46, 56, … 33] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Should I just now resort back to looping through the rand_val_list column and send each grouped values list to my function? Or is there a better polars way? Thanks again!
Are you searching for period="10i"? Polars rolling accepts a period argument with the following query language: - 1ns (1 nanosecond) - 1us (1 microsecond) - 1ms (1 millisecond) - 1s (1 second) - 1m (1 minute) - 1h (1 hour) - 1d (1 day) - 1w (1 week) - 1mo (1 calendar month) - 1y (1 calendar year) - 1i (1 index count) Where i is simply the number of indices/rows. So on your data a rolling group_by where we count the number of slots would give: (df.rolling(index_column="i", period="10i") .agg( pl.len().alias("rolling_slots") ) ) shape: (50, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ i ┆ rolling_slots β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ u32 β”‚ β•žβ•β•β•β•β•β•ͺ═══════════════║ β”‚ 0 ┆ 1 β”‚ β”‚ 1 ┆ 2 β”‚ β”‚ 2 ┆ 3 β”‚ β”‚ 3 ┆ 4 β”‚ β”‚ 4 ┆ 5 β”‚ β”‚ … ┆ … β”‚ β”‚ 45 ┆ 10 β”‚ β”‚ 46 ┆ 10 β”‚ β”‚ 47 ┆ 10 β”‚ β”‚ 48 ┆ 10 β”‚ β”‚ 49 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
4
3
74,976,153
2023-1-1
https://stackoverflow.com/questions/74976153/what-is-the-best-practice-for-imports-when-developing-a-python-package
I am trying to build a Python package that contains sub-modules and sub-packages ("libraries"). I was looking everywhere for the right way to do it, but amazingly I find it very complicated. I also went through multiple threads on Stack Overflow, of course. The problem is as follows: In order to import a module or a package from another directory, it seems to me that there are 2 options: a. Adding the absolute path to sys.path. b. Installing the package with the setuptools.setup function in a setup.py file, in the main directory of the package - which installs the package into the site-packages directory of the specific Python version that is in use. Option a seems too clumsy for me. Option b is great, however I find it impractical because I am currently working on and editing the package's source code - and the changes are not reflected in the installed directory of the package, of course. In addition, the installed directory of the package is not tracked by Git, and needless to say I use Git in the original directory. To conclude the question: What is the best practice to import modules and sub-packages freely and nicely from within sub-directories of a Python package that is currently under construction? I feel I am missing something but couldn't find a decent solution so far. Thanks!
This is a great question, and I wish more people would think along these lines. Making a module importable and ultimately installable is absolutely necessary before it can be easily used by others. On sys.path munging Before I answer I will say I do use sys.path munging when I do initial development on a file outside of an existing package structure. I have an editor snippet that constructs code like this: import sys, os sys.path.append(os.path.expanduser('~/path/to/parent')) from module_of_interest import * # NOQA Given the path to the current file I use: import ubelt as ub fpath = ub.Path('/home/username/path/to/parent/module_of_interest.py') modpath, modname = ub.split_modpath(fpath, check=False) modpath = ub.Path(modpath).shrinkuser() # abstract home directory To construct the necessary parts the snippet will insert into the file so I can interact with it from within IPython. I find taking the little bit of extra time to remove the reference to my explicit homefolder such that the code still works as long as users have the same relative path structure wrt the home directory makes this slightly more portable. Proper Python Package Management That being said, sys.path munging is not a sustainable solution. Ultimately you want your package to be managed by a Python package manager. I know a lot of people use poetry, but I like plain old pip, so I can describe that process, but know this isn't the only way to do it. To do this we need to go over some basics. Basics You must know what Python environment you are working in. Ideally this is a virtual environment managed with pyenv (or conda or mamba or poetry ...). But it's also possible to do this in your global system Python environment, although that is not recommended. I like working in a single default Python virtual environment that is always activated in my .bashrc. It's always easy to switch to a new one or blow it away / start fresh. You need to consider two root paths: the root of your repository, which I will call your repo path, and the root of your package, the package path or module path, which should be a folder with the name of the top-level Python package. You will use this name to import it. This package path must live inside the repo path. Some repos, like xdoctest, like to put the module path in a src directory. Others, like ubelt, like to have the module path at the top level of the repository. I think the second case is conceptually easier for new package creators / maintainers, so let's go with that. Setting up the repo path So now, you are in an activated Python virtual environment, and we have designated a path where we will check out the repo. I like to clone repos in $HOME/code, so perhaps the repo path is $HOME/code/my_project. In this repo path you should have your root package path. Let's say your package is named mypymod. Any directory that contains an __init__.py file is conceptually a Python module, where the contents of __init__.py are what you get when you import that directory name. The only difference between a directory module and a normal file module is that a directory module/package can have submodules or subpackages. For example if you are in the my_project repo, i.e. when you ls you see mypymod, and you have a file structure that looks something like this...
+ my_project + mypymod + __init__.py + submod1.py + subpkg + __init__.py + submod2.py you can import the following modules: import mypymod import mypymod.submod1 import mypymod.subpkg import mypymod.subpkg.submod2 If you ensured that your current working directory was always the repo root, or you put the repo root into sys.path, then this would be all you need. Being visible in sys.path or the CWD is all that is needed for another module to see your module. The package manifest: setup.py / pyproject.toml Now the trick is: how do you ensure your other packages / scripts can always see this module? That is where the package manager comes in. For this we will need a setup.py or the newer pyproject.toml variant. I'll describe the older setup.py way of doing things. All you need to do is put the setup.py in your repo root. Note: it does not go in your package directory. There are plenty of resources for how to write a setup.py so I won't describe it in much detail, but basically all you need is to populate it with enough information so it knows about the name of the package, its location, and its version. from setuptools import setup, find_packages setup( name='mypymod', version='0.1.0', packages=find_packages(include=['mypymod', 'mypymod.*']), install_requires=[], ) So your package structure will look like this: + my_project + setup.py + mypymod + __init__.py + submod1.py + subpkg + __init__.py + submod2.py There are plenty of other things you can specify, I recommend looking at ubelt and xdoctest as examples. I'll note they contain a non-standard way of parsing requirements out of a requirements.txt or requirements/*.txt files, which I think is generally better than the standard way people handle requirements. But I digress. Given something that pip or some other package manager (e.g. pipx, poetry) recognizes as a package manifest - a file that describes the contents of your package - you can now install it. If you are still developing it you can install it in editable mode, so instead of the package being copied into your site-packages, only a symbolic link is made, so any changes in your code are reflected each time you invoke Python (or immediately if you have autoreload on with IPython). With pip it is as simple as running pip install -e <path-to-repo-root>, which is typically done by navigating into the repo and running pip install -e .. Congrats, you now have a package you can reference from anywhere. Making the most of your package The python -m invocation Now you have a package you can reference as if it was installed via pip from PyPI. There are a few tricks for using it effectively. The first is running scripts. You don't need to specify a path to a file to run it as a script in Python. It is possible to run a script as __main__ using only its module name. This is done with the -m argument to Python. For instance you can run python -m mypymod.submod1 which will invoke $HOME/code/my_project/mypymod/submod1.py as the main module (i.e. its __name__ attribute will be set to "__main__"). Furthermore if you want to do this with a directory module you can make a special file called __main__.py in that directory, and that is the script that will be executed.
For instance if we modify our package structure + my_project + setup.py + mypymod + __init__.py + __main__.py + submod1.py + subpkg + __init__.py + __main__.py + submod2.py Now python -m mypymod will execute $HOME/code/my_project/mypymod/__main__.py and python -m mypymod.subpkg will execute $HOME/code/my_project/mypymod/subpkg/__main__.py. This is a very handy way to make a module double as both an importable package and a command line executable (e.g. xdoctest does this). Easier imports One pain point you might notice is that in the above code if you run: import mypymod mypymod.submod1 You will get an error because by default a package doesn't know about its submodules until they are imported. You need to populate the __init__.py to expose any attributes you desire to be accessible at the top level. You could populate the mypymod/__init__.py with: from mypymod import submod1 And now the above code would work. This has a tradeoff though. The more things you make accessible immediately, the more time it takes to import the module, and with big packages it can get fairly cumbersome. Also you have to manually write the code to expose what you want, so that is a pain if you want everything. If you take a look at ubelt's __init__.py you will see it has a good deal of code to explicitly make every function in every submodule accessible at the top level. I've written yet another library called mkinit that actually automates this process, and it also has the option of using the lazy_loader library to mitigate the performance impact of exposing all attributes at the top level. I find the mkinit tool very helpful when writing large nested packages. Summary To summarize the above content: Make sure you are working in a Python virtualenv (I recommend pyenv) Identify your "package path" inside of your "repo path". Put an __init__.py in every directory you want to be a Python package or subpackage. Optionally, use mkinit to autogenerate the content of your __init__.py files. Put a setup.py / pyproject.toml in the root of your "repo path". Use pip install -e . to install the package in editable mode while you develop it. Use python -m to invoke module names as scripts. Hope this helps.
5
13
74,976,313
2023-1-1
https://stackoverflow.com/questions/74976313/possible-to-stringize-a-polars-expression
Is it possible to stringize a Polars expression and vice-versa? For example, convert df.filter(pl.col('a')<10) to a string of "df.filter(pl.col('a')<10)". Is roundtripping possible e.g. eval("df.filter(pl.col('a')<10)") for user input or tool automation? I know this can be done with a SQL expression but I'm interested in native. I want to show the specified filter in the title of plots.
Expressions >>> expr = pl.col("foo") > 2 >>> print(str(expr)) [(col("foo")) > (2i32)] LazyFrames >>> import io >>> df = pl.DataFrame({ ... "foo": [1, 2, 3] ... }) >>> json_state = df.lazy().filter(expr).serialize(format="json") >>> query_plan = pl.LazyFrame.deserialize(io.StringIO(json_state), format="json") >>> query_plan.collect() shape: (1, 1) β”Œβ”€β”€β”€β”€β”€β” β”‚ foo β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•‘ β”‚ 3 β”‚ β””β”€β”€β”€β”€β”€β”˜
3
2
74,946,845
2022-12-29
https://stackoverflow.com/questions/74946845/attributeerror-module-numpy-has-no-attribute-int
I tried to run my code on another computer; while it ran successfully in the original environment, this error came out of nowhere: File "c:\vision_hw\hw_3\cv2IP.py", line 91, in SECOND_ORDER_LOG original = np.zeros((5,5),dtype=np.int) File "C:\Users\brian2lee\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\numpy\__init__.py", line 284, in __getattr__ raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'int' I have tried reinstalling numpy but it did not work. Down below is my code: def SECOND_ORDER_LOG (self,img): original = np.zeros((5,5),dtype=np.int) original[2,2] = 1 kernel = np.array([[ 0, 0, -1, 0, 0], [ 0, -1, -2, -1, 0], [-1, -2, 16, -2, -1], [ 0, -1, -2, -1, 0], [ 0, 0, -1, 0, 0]]) result = original + 1 * kernel sharpened = cv2.filter2D(img, -1, result) return sharpened
numpy.int was deprecated in NumPy 1.20 and was removed in NumPy 1.24. You can change it to numpy.int_, or just int. Several other aliases for standard types were also removed from NumPy's namespace on the same schedule: Deprecated name Identical to NumPy scalar type names numpy.bool bool numpy.bool_ numpy.int int numpy.int_ (default), numpy.int64, or numpy.int32 numpy.float float numpy.float64, numpy.float_, numpy.double (equivalent) numpy.complex complex numpy.complex128, numpy.complex_, numpy.cdouble (equivalent) numpy.object object numpy.object_ numpy.str str numpy.str_ numpy.long int numpy.int_ (C long), numpy.longlong (largest integer type) numpy.unicode str numpy.unicode_ There are similar AttributeError messages for these removals (listing them helps people find this Q&A in the search results): AttributeError: module 'numpy' has no attribute 'bool'. AttributeError: module 'numpy' has no attribute 'int'. AttributeError: module 'numpy' has no attribute 'float'. AttributeError: module 'numpy' has no attribute 'complex'. AttributeError: module 'numpy' has no attribute 'object'. AttributeError: module 'numpy' has no attribute 'str'. AttributeError: module 'numpy' has no attribute 'long'. AttributeError: module 'numpy' has no attribute 'unicode'.
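Applied to the code in the question, the fix is simply to swap the removed alias for one of its replacements (a sketch of just the affected line):

import numpy as np

# before - fails on NumPy >= 1.24:
# original = np.zeros((5, 5), dtype=np.int)

# after - use the built-in int, or an explicit width if you want to be precise:
original = np.zeros((5, 5), dtype=int)
original_64 = np.zeros((5, 5), dtype=np.int64)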
25
34
74,968,585
2022-12-31
https://stackoverflow.com/questions/74968585/using-environment-variables-in-pyproject-toml-for-versioning
I am trying to migrate my package from setup.py to pyproject.toml and I am not sure how to do the dynamic versioning in the same way as before. Currently I can pass the development version using environment variables when the build is for development. The setup.py file looks similar to this: import os from setuptools import setup import my_package if __name__ == "__main__": dev_version = os.environ.get("DEV_VERSION") version = dev_version if dev_version else f"{my_package.__version__}" setup( name="my_package", version=version, ... ) Is there a way to do similar thing when using pyproject.toml file?
Another alternative that might be worth considering for your use case, if you use Git, is to use setuptools_scm. It uses your git tags to perform dynamic versioning. Your pyproject.toml would look something like this: [build-system] requires = ["setuptools", "setuptools-scm"] build-backend = "setuptools.build_meta" [tool.setuptools_scm] "version_scheme" = "post-release" "local_scheme" = "no-local-version" "write_to" = "mypackage/version.py" [tool.setuptools.package-dir] mypackage = "mypackage" [project] name = "mypackage" ... dynamic = ["version"]
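Once the project is built or installed from a tagged commit, the derived version can be inspected at runtime; mypackage below is just the hypothetical project name from the pyproject.toml above:

from importlib.metadata import version

# Prints the git-derived version, e.g. "1.2.3" for a build made exactly at the
# git tag v1.2.3; untagged commits get a dev/post-release variant of that number.
print(version("mypackage"))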
19
3
74,960,707
2022-12-30
https://stackoverflow.com/questions/74960707/poetry-stuck-in-infinite-install-update
My issue is that when I execute poetry install, poetry update or poetry lock the process keeps running indefinitely. I tried using the -vvv flag to get output of what's happening and it looks like it gets stuck forever in the first install. My connection is good and all packages that I tried installing exist. I use version 1.2.1 but I cannot upgrade to newer versions because the format of the .lock file is different and our pipeline fails.
I found a clue in an issue on the GitHub repo. If you are using Linux you must delete all .lock files in the .cache/pypoetry dir in your user home directory. find ~/.cache/pypoetry -name '*.lock' -type f -delete If the directory does not exist, it may be in another location. Then I recommend removing the generated .lock file in the project where you were doing the installation.
26
22
74,981,558
2023-1-2
https://stackoverflow.com/questions/74981558/error-updating-python3-pip-attributeerror-module-lib-has-no-attribute-openss
I'm having an error when installing/updating any pip module in python3. Purging and reinstalling pip and every package I can thing of hasn't helped. Here's the error that I get in response to running python -m pip install --upgrade pip specifically (but the error is the same for attempting to install or update any pip module): Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/lib/python3/dist-packages/pip/__main__.py", line 16, in <module> from pip._internal.cli.main import main as _main # isort:skip # noqa File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module> from pip._internal.cli.autocompletion import autocomplete File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module> from pip._internal.cli.main_parser import create_main_parser File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module> from pip._internal.cli import cmdoptions File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 24, in <module> from pip._internal.exceptions import CommandError File "/usr/lib/python3/dist-packages/pip/_internal/exceptions.py", line 10, in <module> from pip._vendor.six import iteritems File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 65, in <module> vendored("cachecontrol") File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored __import__(modulename, globals(), locals(), level=0) File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen 
importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module> File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py", line 8, in <module> from OpenSSL import crypto, SSL File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py", line 3268, in <module> _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 72, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module> from apport.report import Report File "/usr/lib/python3/dist-packages/apport/report.py", line 32, in <module> import apport.fileutils File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 12, in <module> import os, glob, subprocess, os.path, time, pwd, sys, requests_unixsocket File "/usr/lib/python3/dist-packages/requests_unixsocket/__init__.py", line 1, in <module> import requests File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module> File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py", line 8, in <module> from OpenSSL import crypto, SSL File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py", line 3268, in <module> _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' Original exception was: Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/lib/python3/dist-packages/pip/__main__.py", line 16, in <module> from pip._internal.cli.main import main as _main # isort:skip # noqa File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module> from pip._internal.cli.autocompletion import autocomplete File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module> from pip._internal.cli.main_parser import create_main_parser File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module> from pip._internal.cli import cmdoptions File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 24, in 
<module> from pip._internal.exceptions import CommandError File "/usr/lib/python3/dist-packages/pip/_internal/exceptions.py", line 10, in <module> from pip._vendor.six import iteritems File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 65, in <module> vendored("cachecontrol") File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored __import__(modulename, globals(), locals(), level=0) File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 655, in _load_unlocked File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible File "<frozen zipimport>", line 259, in load_module File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module> File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/__init__.py", line 8, in <module> from OpenSSL import crypto, SSL File "/home/patrick/.local/lib/python3.8/site-packages/OpenSSL/crypto.py", line 3268, in <module> _lib.OpenSSL_add_all_algorithms() AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' I'm running Ubuntu 20.04 in WSL. Python openssl is already installed. sudo apt install python3-openssl Reading package lists... Done Building dependency tree Reading state information... Done python3-openssl is already the newest version (19.0.0-1build1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. My assumption is that I need to re-install some stuff, but I'm not sure what. I've tried the obvious stuff like python3-openssl, libssl-dev, libffi-dev, and python3-pip itself and python3 alltogether.
As version 39.0.0 presented this bug, downgrading the cryptography package solves this, without purging or touching your OS. pip install cryptography==38.0.4 to downgrade from 39.0.0 which presented this error EDIT per @thomas, The error is a result of incompatibility between cryptography and pyopenssl, so if possible, also upgrading to openssl>22.1.0 should work: pip install -U pyopenssl cryptography
102
161
74,968,179
2022-12-31
https://stackoverflow.com/questions/74968179/session-state-is-reset-in-streamlit-multipage-app
I'm building a Streamlit multipage application and am having trouble keeping session state when switching between pages. My main page is called mainpage.py and has something like the following: import streamlit as st if "multi_select" not in st.session_state: st.session_state["multi_select"] = ["abc", "xyz"] if "select_slider" not in st.session_state: st.session_state["select_slider"] = ("1", "10") if "text_inp" not in st.session_state: st.session_state["text_inp"] = "" st.sidebar.multiselect( "multiselect", ["abc", "xyz"], key="multi_select", default=st.session_state["multi_select"], ) st.sidebar.select_slider( "number range", options=[str(n) for n in range(1, 11)], key="select_slider", value=st.session_state["select_slider"], ) st.sidebar.text_input("Text:", key="text_inp") for v in st.session_state: st.write(v, st.session_state[v]) Next, I have another page called 'anotherpage.py' in a subdirectory called 'pages' with this content: import streamlit as st for v in st.session_state: st.write(v, st.session_state[v]) If I run this app, change the values of the controls and switch to the other page, I see the values for the control being retained and printed. However, if I switch back to the main page, everything gets reset to the original values. For some reason st.session_state is cleared. Anyone have any idea how to keep the values in the session state? I'm using Python 3.11.1 and Streamlit 1.16.0
First, it's important to understand a widget's lifecycle. When you assign a key to a widget, then that key will get deleted from session state whenever that widget is not rendered. This can happen if a widget is conditionally not rendered on the same page or from switching pages. What you are seeing on the second page are the values leftover from the previous page before the widget cleanup process is completed. At the end of loading "anotherpage," Streamlit realizes it has keys in session state assigned to widgets that have disappeared and therefore deletes them. There are two ways around this. A hacky solution (not my preference) is to recommit values to session state at the top of every page. st.session_state.my_widget_key = st.session_state.my_widget_key This will interrupt the widget cleanup process and prevent the keys from being deleted. However, it needs to be on the page you go to when leaving a widget. Hence, it needs to be on all the pages. My preferred solution is to think of widget keys as separate from the values I want to keep around. I usually adopt the convention of prefixing widget keys with an underscore. import streamlit as st if "multi_select" not in st.session_state: st.session_state["multi_select"] = ["abc", "xyz"] if "select_slider" not in st.session_state: st.session_state["select_slider"] = ("1","10") if "text_inp" not in st.session_state: st.session_state["text_inp"] = "" def keep(key): # Copy from temporary widget key to permanent key st.session_state[key] = st.session_state['_'+key] def unkeep(key): # Copy from permanent key to temporary widget key st.session_state['_'+key] = st.session_state[key] unkeep("multi_select") st.sidebar.multiselect( "multiselect", ["abc", "xyz"], key="_multi_select", on_change=keep, args=['multi_select'] ) # This is a edge case and possibly a bug. See explanation. st.sidebar.select_slider( "number range", options=[str(n) for n in range(1, 11)], value = st.session_state.select_slider, key="_select_slider", on_change=keep, args=["select_slider"] ) unkeep("text_inp") st.sidebar.text_input("Text:", key="_text_inp", on_change=keep, args=["text_inp"]) for v in st.session_state: st.write(v, st.session_state[v]) You will observe I did something different with the select slider. It appears a tuple needs to be passed to the value kwarg specifically to make sure it initializes as a ranged slider. I wouldn't have needed to change the logic if it was being initialized with a single value instead of a ranged value. For other widgets, you can see that the default value is removed in favor of directly controlling their value via their key in session state. You need to be careful when you do something that changes a widget's default value. A change to the default value creates a "new widget." If you are simultaneously changing the default value and actual value via its key, you can get some nuanced behavior like initialization warnings if there is ever a conflict.
11
6
74,975,596
2023-1-1
https://stackoverflow.com/questions/74975596/matplotlibs-show-function-triggering-unwanted-output
Whenever I have any Python code executed via Python v3.10.4 with or without debugging in Visual Studio Code v1.74.2, I get output looking like the following in the Debug Console window in addition to the normal output of the code. Otherwise, all of my Python programs work correctly and as intended at this time. 1 HIToolbox 0x00007ff81116c0c2 _ZN15MenuBarInstance22RemoveAutoShowObserverEv + 30 2 HIToolbox 0x00007ff8111837e3 SetMenuBarObscured + 115 3 HIToolbox 0x00007ff81118a29e _ZN13HIApplication11FrontUILostEv + 34 4 HIToolbox 0x00007ff811183622 _ZN13HIApplication15HandleActivatedEP14OpaqueEventRefhP15OpaqueWindowPtrh + 508 5 HIToolbox 0x00007ff81117d950 _ZN13HIApplication13EventObserverEjP14OpaqueEventRefPv + 182 6 HIToolbox 0x00007ff811145bd2 _NotifyEventLoopObservers + 153 7 HIToolbox 0x00007ff81117d3e6 AcquireEventFromQueue + 494 8 HIToolbox 0x00007ff81116c5a4 ReceiveNextEventCommon + 725 9 HIToolbox 0x00007ff81116c2b3 _BlockUntilNextEventMatchingListInModeWithFilter + 70 10 AppKit 0x00007ff80a973f33 _DPSNextEvent + 909 11 AppKit 0x00007ff80a972db4 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1219 12 AppKit 0x00007ff80a9653f7 -[NSApplication run] + 586 13 _macosx.cpython-310-darwin.so 0x0000000110407e22 show + 162 14 Python 0x0000000100bb7595 cfunction_vectorcall_NOARGS + 101 15 Python 0x0000000100c9101f call_function + 175 16 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 17 Python 0x0000000100c801df _PyEval_Vector + 383 18 Python 0x0000000100c9101f call_function + 175 19 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 20 Python 0x0000000100c801df _PyEval_Vector + 383 21 Python 0x0000000100b53f61 method_vectorcall + 481 22 Python 0x0000000100c8a4f2 _PyEval_EvalFrameDefault + 35234 23 Python 0x0000000100c801df _PyEval_Vector + 383 24 Python 0x0000000100c9101f call_function + 175 25 Python 0x0000000100c8a2c4 _PyEval_EvalFrameDefault + 34676 26 Python 0x0000000100c801df _PyEval_Vector + 383 27 Python 0x0000000100cf536d pyrun_file + 333 28 Python 0x0000000100cf4b2d _PyRun_SimpleFileObject + 365 29 Python 0x0000000100cf417f _PyRun_AnyFileObject + 143 30 Python 0x0000000100d20047 pymain_run_file_obj + 199 31 Python 0x0000000100d1f815 pymain_run_file + 85 32 Python 0x0000000100d1ef9e pymain_run_python + 334 33 Python 0x0000000100d1ee07 Py_RunMain + 23 34 Python 0x0000000100d201e2 pymain_main + 50 35 Python 0x0000000100d2048a Py_BytesMain + 42 36 dyld 0x00007ff80741b310 start + 2432 Why do these lines come out in the Debug Console window though there is nothing in any of my Python programs that would directly cause them to come out as far as I know? How are they helpful and how can they be used if needed? How can I prevent them from coming out by default? I checked out the Visual Studio Code documentation on Python debugging but could not come across anything that would explain these lines. I am running Visual Studio Code on macOS Ventura v13.1. UPDATE as of January 2, 2023 I have figured out that the unwanted output in my initial post is being triggered by the matplotlib.pyplot.show function in my Python programs. I get that output even when I run a program as simple as that below: import matplotlib.pyplot as plt x = [1, 2, 3] y = [1, 2, 3] plt.plot(x, y) plt.show() When I remove plt.show() from the code above, the 36-line unwanted output does not come out but then the graph is also not displayed. 
Again, other than the unwanted output, all of my Python programs with the show function appear to work correctly, including the graph display triggered by the show function. I have Matplotlib 3.5.2 installed on my Mac. A very similar unwanted output comes out if I run the same program directly through the command line as (assume the Python program's name is test.py): python3 test.py but not when I run test.py through IDLE, Python’s Integrated Development and Learning Environment, or the code in it from within a Jupyter notebook. I can remove the show function from my Python programs to avoid the unwanted output but then the charts will not appear and I would prefer using the show function rather than a makeshift solution. UPDATE as of January 4, 2023 It was suggested to me on a Matplotlib forum that this might somehow be a macOS Ventura v13.1 issue because similar issues have started being reported recently with different programs executed under macOS Ventura v13.1. One user has reported encountering a similar output with code using Tkinter and another while using a video player called mpv. It is not implausible that the problem is also related to macOS Ventura v13.1 but I don’t know how and my questions remain. UPDATE as of January 6, 2023 Upgraded Matplotlib to v3.6.2 but the unwanted output issue has not been resolved. UPDATE as of January 8, 2023 Tried Matplotlib v3.6.2 along with Python v3.11.1. The unwanted output issue remains. UPDATE as of January 15, 2023 Reported this issue as a bug to Matplotlib Developers on GitHub: "[Bug]: Show function triggering unwanted additional output #24997" UPDATE as of January 16, 2023 I have found out that the unwanted output comes out only when the "Automatically hide and show the menu bar" option under System Settings->Desktop&Dock->Menu Bar is set to Always (which is my setting) or on Desktop Only. The unwanted output does not come out if I set that option to In Full Screen Only or Never. UPDATE as of January 18, 2023 Both Matplotlib and Python developers on GitHub think that the unwanted output, which they can reproduce, is the result of a bug in macOS Ventura 13.1 and therefore there is nothing they can do about it. For details, see the respective discussions following the bug report I mentioned submitting for Matplotlib on GitHub and also the one I later submitted for Tkinter through Python/CPython on GitHub again as "Tkinter causing unwanted output in most recent macOS". I was also told in response to the latter that a Feedback Assistant report had now been submitted to Apple about the identified bug. UPDATE as of January 25, 2023 Upgraded the macOS on my Mac to Ventura 13.2 today (and further to Ventura 13.2.1 when it came out in mid-February). No change except that the unwanted output, when the small program is run, is now considerably longer (85 lines). As before, the program works fine otherwise and the unwanted output does not come out if I change my Mac's menu bar setting, for example, to Never.
This problem appears to be caused by one or more bugs in macOS 13.1 and 13.2. It can be fully resolved only by downgrading macOS to 13.0 or earlier or upgrading it to 13.3 or later.
11
1
74,981,011
2023-1-2
https://stackoverflow.com/questions/74981011/t5-model-generates-short-output
I have fine-tuned the T5-base model (from Hugging Face) on a new task where each input and target are sentences of 256 words. The loss is converging to low values; however, when I use the generate method, the output is always too short. I tried giving minimum and maximum length values to the method, but it doesn't seem to be enough. I suspect the issue is related to the fact that the sentence length before tokenization is 256 and after tokenization, it is not constant (padding is used during training to ensure all inputs are of the same size). Here is my generate method: model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base') tokenizer = T5Tokenizer.from_pretrained('t5-base') generated_ids = model.generate( input_ids=ids, attention_mask=attn_mask, max_length=1024, min_length=256, num_beams=2, early_stopping=False, repetition_penalty=10.0 ) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids][0] preds = preds.replace("<pad>", "").replace("</s>", "").strip().replace(" ", " ") target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True) for t in reference][0] target = target.replace("<pad>", "").replace("</s>", "").strip().replace(" ", " ") The inputs are created using tokens = tokenizer([f"task: {text}"], return_tensors="pt", max_length=1024, padding='max_length') inputs_ids = tokens.input_ids.squeeze().to(dtype=torch.long) attention_mask = tokens.attention_mask.squeeze().to(dtype=torch.long) labels = self.tokenizer([target_text], return_tensors="pt", max_length=1024, padding='max_length') label_ids = labels.input_ids.squeeze().to(dtype=torch.long) label_attention = labels.attention_mask.squeeze().to(dtype=torch.long)
For anyone else running into this: I found out the issue was with the max_length argument of the generate method. It limits the maximum number of tokens, including the input tokens. In my case, it was necessary to set max_new_tokens=1024 instead of the argument provided in the question.
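To make that concrete, here is the corrected call sketched from the snippet in the question (same ids/attn_mask tensors; only the length argument changes, since max_new_tokens counts generated tokens only):
generated_ids = model.generate(
    input_ids=ids,
    attention_mask=attn_mask,
    max_new_tokens=1024,      # budget for newly generated tokens only
    min_length=256,
    num_beams=2,
    early_stopping=False,
    repetition_penalty=10.0
)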
3
1
74,939,758
2022-12-28
https://stackoverflow.com/questions/74939758/camelot-deprecationerror-pdffilereader-is-deprecated
I have been using camelot for our project, but for the past two days I have been getting the following error message when trying to run the following code snippet: import camelot tables = camelot.read_pdf('C:\\Users\\user\\Downloads\\foo.pdf', pages='1') I get this error: DeprecationError: PdfFileReader is deprecated and was removed in PyPDF2 3.0.0. Use PdfReader instead. I checked this file and it does use PdfFileReader: c:\ProgramData\Anaconda3\lib\site-packages\camelot\handlers.py I thought that I could specify the version of PyPDF2, but it gets installed automatically (because the library is used by camelot) when I install camelot. Is there any way to specify the version of PyPDF2 manually?
This is issue #339. While there will hopefully soon be a release including the fix, you can still do this: pip install 'PyPDF2<3.0' after you've installed camelot. See https://github.com/camelot-dev/camelot/issues/339#issuecomment-1367331630 for details and screenshots.
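If the project installs dependencies from a requirements file, the same constraint can be recorded there so the workaround survives a rebuild. A minimal sketch (the package name below reflects a typical camelot install and is an assumption; adjust to however you actually install it):
# requirements.txt (sketch)
camelot-py          # assumption: the camelot distribution you already use
PyPDF2<3.0          # stay on a release that still provides PdfFileReader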
33
46
74,965,764
2022-12-30
https://stackoverflow.com/questions/74965764/how-can-i-properly-hash-dictionaries-with-a-common-set-of-keys-for-deduplicatio
I have some log data like: logs = [ {'id': '1234', 'error': None, 'fruit': 'orange'}, {'id': '12345', 'error': None, 'fruit': 'apple'} ] Each dict has the same keys: 'id', 'error' and 'fruit' (in this example). I want to remove duplicates from this list, but straightforward dict and set based approaches do not work because my elements are themselves dicts, which are not hashable: >>> set(logs) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'dict' Another approach is to sort and use itertools.groupby - but dicts are also not comparable, so this also does not work: >>> from itertools import groupby >>> [k for k, _ in groupby(sorted(logs))] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '<' not supported between instances of 'dict' and 'dict' I had the idea to calculate a hash value for each log entry, and store it in a set for comparison, like so: def compute_hash(log_dict: dict): return hash(log_dict.values()) def deduplicate(logs): already_seen = set() for log in logs: log_hash = compute_hash(log) if log_hash in already_seen: continue already_seen.add(log_hash) yield log However, I found that compute_hash would give the same hash for different dictionaries, even ones with completely bogus contents: >>> logs = [{'id': '123', 'error': None, 'fruit': 'orange'}, {}] >>> # The empty dict will be removed; every dict seems to get the same hash. >>> list(deduplicate(logs)) [{'id': '123', 'error': None, 'fruit': 'orange'}] After some experimentation, I was seemingly able to fix the problem by modifying compute_hash like so: def compute_hash(log_dict: dict): return hash(frozenset(log_dict.values())) However, I cannot understand why this makes a difference. Why did the original version seem to give the same hash for every input dict? Why does converting the .values result to a frozenset first fix the problem? Aside from that: is this algorithm correct? Or is there some counterexample where the wrong values will be removed? This question discusses how hashing works in Python, in depth, as well as considering other data structures that might be more appropriate than dictionaries for the list elements. See List of unique dictionaries instead if you simply want to remove duplicates from a list of dictionaries.
What went wrong The first thing I want to point out about the original attempt is that it seems over-engineered. When the inputs are hashable, manually iterating is only necessary to preserve order, and even then, in 3.7 and up we can rely on the order-preserving property of dicts. Just because it's hashable doesn't mean the hash is useful It also isn't especially useful to call hash on log_dict.values(). While log_dict is not hashable, its .values() (in 3.x) is an instance of the dict_values type (the name is not defined in the builtins, but that is how instances identify themselves), which is hashable: >>> dv = {1:2, 3:4}.values() >>> dv dict_values([2, 4]) >>> {dv} {dict_values([2, 4])} So we could just as easily have used the .values() directly as a "hash": def compute_hash(log_dict: dict): return log_dict.values() ... but this would have given a new bug - now every hash would be different: >>> {1:2}.values() == {1:2}.values() False But why? Because dict_values type doesn't define __hash__, nor __eq__. object is the immediate superclass, so calls to those methods fall back to the object defaults: >>> dv.__class__.__bases__ (<class 'object'>,) >>> dv.__class__.__hash__ <slot wrapper '__hash__' of 'object' objects> >>> dv.__class__.__eq__ <slot wrapper '__eq__' of 'object' objects> In fact, dict_values cannot sensibly implement these methods because it is (indirectly) mutable - as a view, it is dependent on the underlying dict: >>> d = {1:2} >>> dv = d.values() >>> d[3] = 4 >>> dv dict_values([2, 4]) Since there isn't an obvious generic way to hash any object that also isn't exceedingly slow, while also caring about its actual attributes, the default simply doesn't care about attributes and is simply based on object identity. For example, on my platform, the results look like: Python 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> dv = {1:2, 3:4}.values() >>> bin(id(dv)) '0b11111110101110011010010110000001010101011110000' >>> bin(hash(dv)) '0b1111111010111001101001011000000101010101111' In other words: >>> hash(dv) == id(dv) // 16 True Thus, if compute_hash in the original code is repeatedly called with temporary objects, it won't give useful results - the results don't depend on the contents of the object, and will commonly be the same, as temporary (i.e., immediately GCd) objects in a loop will often end up in the same memory location. (Yes, this means that objects default to being hashable and equality-comparable. The dict type itself overrides __hash__ to explicitly disallow it, while - curiously - overriding __eq__ to compare contents.) frozenset has a useful hash On the other hand, frozenset is intended for long-term storage of some immutable data. Consequently, it's important and useful for it to define a __hash__, and it does: >>> f = frozenset(dv) >>> bin(id(f)) '0b11111110101110011010001011101000110001011100000' >>> bin(hash(f)) '0b101111010001101001001111100001000001100111011101101100000110001' Dictionaries, hashing and collision detection Although there have been many tweaks and optimizations over the years, Pythons dict and set types are both fundamentally based on hash tables. When a value is inserted, its hash is first computed (normally an integer value), and then that value is reduced (normally using modulo) into an index into the underlying table storage. 
Similarly, when a value is looked up, the hash is computed and reduced in order to determine where to look in the table for that value. Of course, it is possible that some other value is already stored in that spot. There are multiple possible strategies for dealing with this (and last I checked, the literature is inconsistent about naming them). But most importantly for our purposes: when looking up a value in a dict by key, or checking for the presence of a value in a set, the container will also have to do equality checks after figuring out where to look, in order to confirm that the right thing has actually been found. Consequently, any approach that simply computes a hash manually, and naively associates those hashes with the original values, will fail. It is easy for two of the input dicts to have the same computed hash value, even if their contents are actually being considered. For example, the hash of a frozenset is based on an XOR of hashes for the elements. So if two of our input dicts had all the same values assigned to keys in a different order, the hash would be the same: >>> def show_hash(d): ... return bin(hash(frozenset(d.values()))) ... >>> show_hash({'id': '1', 'error': None, 'value': 'apple'}) '0b101010010100001000111001000001000111101111110100010000010101110' >>> # Changing a value changes the hash... >>> show_hash({'id': '1', 'error': None, 'value': 'orange'}) '0b11111111001000011101011001001011100010100100010010110000100100' >>> # but rearranging them does not: >>> show_hash({'id': '1', 'error': 'orange', 'value': None}) '0b11111111001000011101011001001011100010100100010010110000100100' It's also possible for such a hash collision to occur by coincidence with totally unrelated values. It's extremely unlikely for 64-bit hashes (since this value will not be reduced and used as a hash table index, despite the name) Fixing it explicitly So, in order to have correct code, we would need to do our own checking afterwards, explicitly checking whether the value which hashed to something in our already_seen set was actually equal to previous values that had that hash. And there could theoretically be multiple of those, so we'd have to remember multiple values for each of those external hashes, perhaps by using a dict for already_seen instead. Something like: from collections import defaultdict def deduplicate(logs): already_seen = defaultdict(list) for log in logs: log_hash = compute_hash(log) if log in already_seen.get(log_hash, ()): continue already_seen[log_hash].append(log) yield log Hopefully this immediately looks unsatisfactory. With this approach, we are essentially re-implementing the core logic of sets and dictionaries - we compute hashes ourselves, retrieve corresponding values from internal storage (already_seen) and then manually check for equality (if log in ...). Looking at it from another angle The reason we're doing all of this in the first place - looking for a hash value to represent the original dict in our own storage - is because the dict isn't hashable. But we could address that problem head-on, instead, by explicitly converting the data into a hashable form (that preserves all the information), rather than trying to relate a hashable value to the data. In other words, let's use a different type to represent the data, rather than a dict. Since all our input dicts have the same keys, the natural thing to do would be to convert those into the attributes of a user-defined class. 
In 3.7 and up, a simple, natural and explicit way to do this is using a dataclass, like so: from dataclasses import dataclass from typing import Optional @dataclass(frozen=True, slots=True) class LogEntry: id: str error: Optional[str] fruit: str It's not explained very well in the documentation, but using frozen=True (the main purpose is to make the instances immutable) will cause a __hash__ to be generated as well, taking the fields into account as desired. Using slots=True causes __slots__ to be generated for the type as well, avoiding memory overhead. From here, it's trivial to convert the existing logs: logs = [LogEntry(**d) for d in logs] And we can directly deduplicate with a set: set(logs) or, preserving order using a dict (in 3.7 and up): list(dict.fromkeys(logs)) There are other options, of course. The simplest is to make a tuple from the .values - assuming each log dict has its keys in the same order (again, assuming Python 3.7 and up, where keys have an order), this preserves all the useful information - the .keys are just for convenience. Slightly more sophisticated, we could use collections.namedtuple: from collections import namedtuple LogEntry = namedtuple('LogEntry', 'id error fruit') # from here, use the LogEntry type as before This is simpler than the dataclass approach, but less explicit (and doesn't offer an elegant way to document field types).
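As a quick illustration of the "tuple of the .values" option mentioned above (valid under the stated assumption that every log dict has the same keys in the same order), order-preserving deduplication becomes a one-liner:
# Equal dicts produce equal value-tuples, so duplicates collapse onto one key;
# dict preserves first-seen insertion order in 3.7+.
deduped = list({tuple(d.values()): d for d in logs}.values())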
18
22
74,964,527
2022-12-30
https://stackoverflow.com/questions/74964527/attributeerror-module-cv2-aruco-has-no-attribute-dictionary-get
AttributeError: module 'cv2.aruco' has no attribute 'Dictionary_get' even after installing opencv-python and opencv-contrib-python. import numpy as np import cv2, PIL from cv2 import aruco import matplotlib.pyplot as plt import matplotlib as mpl import pandas as pd vid = cv2.VideoCapture(0) while (True): ret, frame = vid.read() #cv2.imshow('frame', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250) parameters = aruco.DetectorParameters() corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict, parameters=parameters) frame_markers = aruco.drawDetectedMarkers(frame.copy(), corners, ids) plt.figure() plt.imshow(frame_markers) for i in range(len(ids)): c = corners[i][0] plt.plot([c[:, 0].mean()], [c[:, 1].mean()], "o", label = "id={0}".format(ids[i])) plt.legend() plt.show() vid.release() # Destroy all the windows cv2.destroyAllWindows() This is a standard example for finding and marking ArUco markers.
The API changed in 4.7.x; I have updated a small snippet. Now you need to instantiate an ArucoDetector object. import cv2 as cv dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_4X4_250) parameters = cv.aruco.DetectorParameters() detector = cv.aruco.ArucoDetector(dictionary, parameters) frame = cv.imread(...) markerCorners, markerIds, rejectedCandidates = detector.detectMarkers(frame) See OpenCV: cv::aruco::ArucoDetector Class Reference and OpenCV: cv::aruco::Dictionary Class Reference.
15
39
74,967,916
2022-12-31
https://stackoverflow.com/questions/74967916/how-to-create-predicted-vs-actual-plot-using-abline-plot-and-statsmodels
I am trying to recreate this plot from this website in Python instead of R: Background I have a dataframe called boston (the popular educational boston housing dataset). I created a multiple linear regression model with some variables with statsmodels api below. Everything works. import statsmodels.formula.api as smf results = smf.ols('medv ~ col1 + col2 + ...', data=boston).fit() I create a dataframe of actual values from the boston dataset and predicted values from above linear regression model. new_df = pd.concat([boston['medv'], results.fittedvalues], axis=1, keys=['actual', 'predicted']) This is where I get stuck. When I try to plot the regression line on top of the scatterplot, I get this error below. from statsmodels.graphics.regressionplots import abline_plot # scatter-plot data ax = new_df.plot(x='actual', y='predicted', kind='scatter') # plot regression line abline_plot(model_results=results, ax=ax) ValueError Traceback (most recent call last) <ipython-input-156-ebb218ba87be> in <module> 5 6 # plot regression line ----> 7 abline_plot(model_results=results, ax=ax) /usr/local/lib/python3.8/dist-packages/statsmodels/graphics/regressionplots.py in abline_plot(intercept, slope, horiz, vert, model_results, ax, **kwargs) 797 798 if model_results: --> 799 intercept, slope = model_results.params 800 if x is None: 801 x = [model_results.model.exog[:, 1].min(), ValueError: too many values to unpack (expected 2) Here are the independent variables I used in the linear regression if that helps: {'crim': {0: 0.00632, 1: 0.02731, 2: 0.02729, 3: 0.03237, 4: 0.06905}, 'chas': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'nox': {0: 0.538, 1: 0.469, 2: 0.469, 3: 0.458, 4: 0.458}, 'rm': {0: 6.575, 1: 6.421, 2: 7.185, 3: 6.998, 4: 7.147}, 'dis': {0: 4.09, 1: 4.9671, 2: 4.9671, 3: 6.0622, 4: 6.0622}, 'tax': {0: 296, 1: 242, 2: 242, 3: 222, 4: 222}, 'ptratio': {0: 15.3, 1: 17.8, 2: 17.8, 3: 18.7, 4: 18.7}, 'lstat': {0: 4.98, 1: 9.14, 2: 4.03, 3: 2.94, 4: 5.33}, 'rad3': {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}, 'rad4': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'rad5': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'rad6': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'rad7': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'rad8': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'rad24': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}, 'dis_sq': {0: 16.728099999999998, 1: 24.67208241, 2: 24.67208241, 3: 36.75026884, 4: 36.75026884}, 'lstat_sq': {0: 24.800400000000003, 1: 83.53960000000001, 2: 16.240900000000003, 3: 8.6436, 4: 28.4089}, 'nox_sq': {0: 0.28944400000000003, 1: 0.21996099999999996, 2: 0.21996099999999996, 3: 0.209764, 4: 0.209764}, 'rad24_lstat': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}, 'rm_lstat': {0: 32.743500000000004, 1: 58.687940000000005, 2: 28.95555, 3: 20.57412, 4: 38.09351}, 'rm_rad24': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}}
That R plot is actually for predicted ~ actual, but your python code passes the medv ~ ... model into abline_plot. To recreate the R plot in python: either use statsmodels to manually fit a new predicted ~ actual model for abline_plot or use seaborn.regplot to do it automatically Using statsmodels If you want to plot this manually, fit a new predicted ~ actual model and pass that model into abline_plot. Then, generate the confidence band using the summary_frame of the prediction results. import statsmodels.formula.api as smf from statsmodels.graphics.regressionplots import abline_plot # fit prediction model pred = smf.ols('predicted ~ actual', data=new_df).fit() # generate confidence interval summary = pred.get_prediction(new_df).summary_frame() summary['actual'] = new_df['actual'] summary = summary.sort_values('actual') # plot predicted vs actual ax = new_df.plot.scatter(x='actual', y='predicted', color='gray', s=10, alpha=0.5) # plot regression line abline_plot(model_results=pred, ax=ax, color='orange') # plot confidence interval ax.fill_between(x=summary['actual'], y1=summary['mean_ci_lower'], y2=summary['mean_ci_upper'], alpha=0.2, color='orange') Alternative to abline_plot, you can use matplotlib's built-in axline by extracting the intercept and slope from the model's params: # plot y=mx+b regression line using matplotlib's axline b, m = pred.params ax.axline(xy1=(0, b), slope=m, color='orange') Using seaborn Note that it's much simpler to let seaborn.regplot handle this automatically: import seaborn as sns sns.regplot(data=new_df, x='actual', y='predicted', scatter_kws=dict(color='gray', s=10, alpha=0.5), line_kws=dict(color='orange')) With seaborn, it's also trivial to plot a polynomial fit via the order param: sns.regplot(data=new_df, x='actual', y='predicted', order=2)
4
4
74,970,710
2022-12-31
https://stackoverflow.com/questions/74970710/type-annotations-for-full-class-hierarchy-in-python
Suppose we have the following code: class Base: a: int class Derived(Base): b: int print(Derived.__annotations__) Running this script in all recent versions of Python will print {'b': <class 'int'>} (that is, the class members we explicitly defined in Derived). In Python 3.10, using inspect.get_annotations(Derived) will result in the same output. My question is, how do I get the type annotations for all the members of Derived? I'd like some function that outputs {'a': <class 'int'>, 'b': <class 'int'>}. This seems like a common enough use case that there should be a built-in function to do this, but I haven't found any. I'm running Python 3.10, so recently added functions are good for me.
This is a job for typing.get_type_hints, not raw annotation inspection: import typing full_hints = typing.get_type_hints(Derived) This will also resolve string annotations, and recursively replace Annotated[T, ...] with T. Use cases that are interested in non-type-hint annotations can pass include_extras=True to get_type_hints to preserve Annotated annotations.
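For the classes in the question this returns exactly the desired mapping. The include_extras flag only matters when Annotated metadata should survive; the Tagged class below is a made-up illustration:
import typing

print(typing.get_type_hints(Derived))
# {'a': <class 'int'>, 'b': <class 'int'>}

class Tagged:
    x: typing.Annotated[int, "metres"]

print(typing.get_type_hints(Tagged))                        # {'x': <class 'int'>}
print(typing.get_type_hints(Tagged, include_extras=True))   # {'x': typing.Annotated[int, 'metres']}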
4
3
74,944,012
2022-12-28
https://stackoverflow.com/questions/74944012/how-to-convert-incredibly-long-decimals-to-fractions-and-back-with-high-precisio
I'm trying to convert very large integers to decimals, then convert those decimals to Fractions, and then convert the Fraction back to a decimal. I'm using the fractions and decimal packages to try and avoid floating point imprecision, however the accuracy still tapers off rather quickly. Is there any way to fix this / other ways of doing this? import fractions import decimal def convert(exampleInt): power_of_10 = len(str(exampleInt)) decimal.getcontext().prec = 10000 exampleDecimal = decimal.Decimal(exampleInt) / (decimal.Decimal(10) ** power_of_10) exampleFraction = fractions.Fraction(str(exampleDecimal)).limit_denominator() backToDecimal = exampleFraction.numerator / decimal.Decimal(exampleFraction.denominator) print(f"backToDecimal: {backToDecimal}") convert(34163457536856478543908582348965743529867234957893246783427568734742390675934285342) Which outputs: .341634575369123189552597490138153768602695082851231192155069838....
It's because of calling the limit_denominator(). Also it's quite inefficient to convert using an intermediate string. Convert a Decimal object into a Fraction object using the constructor like the following.(It's Mark Dickinson's solution.) import fractions import decimal decimal.getcontext().prec = 100 d = decimal.Decimal(34163457536856478543908582348965743529867234957893246783427568734742390675934285342) d = d / 10**(d.adjusted() + 1) f = fractions.Fraction(d) # This is equivalent to Fraction(*d.as_integer_ratio()) d2 = decimal.Decimal(f.numerator) / f.denominator assert(d2 == d)
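To see why limit_denominator() was the problem, here is a tiny self-contained demonstration (digits borrowed from the question's example): by default limit_denominator() caps the denominator at 1,000,000, so a fraction built from a long decimal gets silently rounded to the nearest fraction with a small denominator.
from fractions import Fraction

exact = Fraction("0.341634575368564785439085823489657435")
approx = exact.limit_denominator()        # default max_denominator = 1_000_000
print(exact == approx)                    # False: precision was discarded
print(approx.denominator <= 1_000_000)    # True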
3
4
74,978,154
2023-1-2
https://stackoverflow.com/questions/74978154/why-does-adding-multiprocessing-prevent-python-from-finding-my-compiled-c-progra
I am currently looking to speed up my code using the power of multiprocessing. However I am encountering some issues when it comes to calling the compiled code from python, as it seems that the compiled file disappears from the code's view when it includes any form of multiprocessing. For instance, with the following test code: #include <omp.h> int main() { int thread_id; #pragma omp parallel { thread_id = omp_get_thread_num(); } return 0; } Here, I compile the program, then turn it into a .so file using the command gcc -fopenmp -o theories/test.so -shared -fPIC -O2 test.c I then attempt to run the said code from test.py: from ctypes import CDLL import os absolute_path = os.path.dirname(os.path.abspath(__file__)) # imports the c libraries test_lib_path = absolute_path + '/theories/test.so' test = CDLL(test_lib_path) test.main() print('complete') I get the following error: FileNotFoundError: Could not find module 'C:\[my path]\theories\test.so' (or one of its dependencies). Try using the full path with constructor syntax. However, when I comment out the multiprocessing element to get the follwing code: #include <omp.h> int main() { int thread_id; /* #pragma omp parallel { thread_id = omp_get_thread_num(); } */ return 0; } I then have a perfect execution with the python program printing out "complete" at the end. I'm wondering how this has come to happen, and how the code can seemingly be compiled fine but then throw problems only once it's called from python (also I have checked and the file is in fact created). UPDATES: I have now checked that I have libgomp-1.dll installed I have uninstalled and reinstalled MinGW, with no change happening. I have installed a different, 64 bit version of gcc and, using a different (64 bit python 3.10) version of python have reproduced the same error. This also has libgomp-1.dll.
I think this has the same root cause as [SO]: Can't import dll module in Python (@CristiFati's answer) (also check [SO]: PyWin32 and Python 3.8.0 (@CristiFati's answer)). A .dll (.so) is only loaded when its dependencies are successfully loaded (recursively). [SO]: Python Ctypes - loading dll throws OSError: [WinError 193] %1 is not a valid Win32 application (@CristiFati's answer) focuses on a different error, but covers this topic. When the omp_get_thread_num call is commented out, the linker no longer links test.so to libgomp*.dll (as it doesn't need anything from it), and the code runs fine (all required .dlls are found). To fix the issue, you should pass to os.add_dll_directory (before attempting to load the .so): libgomp*.dll's directory, MinGW's bin directory, and any other directory that may contain required .dlls (dependents). To see a .dll's dependencies, check [SO]: Discover missing module using command-line ("DLL load failed" error) (@CristiFati's answer). Notes: main as an exported function from an .so can be misleading. Although not required here (at least for the current code), check [SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer)
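A minimal sketch of how that looks in the loader script from the question; the MinGW path below is a placeholder, not something taken from the question (point it at whichever bin directory actually contains libgomp-1.dll on your machine):
from ctypes import CDLL
import os

absolute_path = os.path.dirname(os.path.abspath(__file__))

# Placeholder path: replace with your MinGW / libgomp-1.dll location.
os.add_dll_directory(r"C:\mingw64\bin")

test = CDLL(absolute_path + '/theories/test.so')
test.main()
print('complete')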
4
2
74,978,707
2023-1-2
https://stackoverflow.com/questions/74978707/optimizing-a-puzzle-solver
Over the holidays, I was gifted a game called "Kanoodle Extreme". The details of the game are somewhat important, but I think I've managed to abstract them away. The 2D variant of the game (which is what I'm focusing on) has a number of pieces that can be flipped/rotated/etc. A given puzzle will give you a certain amount of a hex-grid board to cover, and a certain number of pieces with which to cover it. See the picture below for a quick visual, I think that explains most of it. (Image attribution: screenshotted from the amazon listing) Here is the full manual for the game, including rules, board configurations, and pieces (manufactorer's site). For convenience, here's the collection of pieces (individual problems may include a subset of these pieces): Here is an example of a few board configurations (shown pieces are fixed - the open spaces must be filled with the remaining pieces): It's an interesting game, but I decided I didn't just want to solve a puzzle, I wanted to solve all the puzzles. I did this not because it would be easy, but because I thought it would be easy. As it turns out, a brute-force/recursive approach is pretty simple. It's also hilariously inefficient. What have I done so far? Well, I'll happily post code but I've got quite a bit of it. Essentially, I started with a few assumptions: It doesn't matter if I place piece 1, then 2, then 3... or 3, then 1, then 2... since every piece must be placed, the ordering doesn't matter (well, matter much: I think placing bigger pieces first might be more efficient since there are fewer options?). In the worst case, solving for all possible solutions to puzzle is no slower than solving for a single solution. This is where I'm not confident: I guess on average the single solution could probably be early-exited sooner, but I think in the worst case they're equivalent. I don't THINK there's a clean algebraic way to solve this - I don't know if that classifies it as NP-complete or what, but I think some amount of combinatorics must be explored to find solutions. This is my least-confident assumption. My approach to solving so far: For each piece given, I find all possible locations/orientations of said piece on the game board. I code each piece/location on the board as a bitmap (where each bit represents a location on the board, and 0 means unoccupied while 1 means occupied). Now for each piece I have a collection of some 20-200 ints (depending on the size of the board) that represent technically-valid placements, though not all are optimal. (I think there's some room to trim down unlikely orientations here). I store all these ints in a map, as a list associated with a key that represents the index of the piece in question. Starting at piece index 0, I loop through all possible iterations (keyed by the index of that iteration in the list of all possible iterations for that piece), and loop through all possible iterations of the next piece. I take the two ints and bitwise-& them together: if I get a "0" out, it means that there is no overlap between the pieces so they COULD be placed together. I store all the valid combos from piece 0-1 (for instance, piece 0 iteration index 5 is compatible with piece 1 iterations 1-3, 6, 35-39, and 42). How I store that is likely irrelevant, but I currently store it as a nested list: index 5 of the list would contain another list that held [1, 2, 3, 6, 35, 36, 37, 38, 39, 42]. I do this for piece 0-1, 0-2, 0-3... 1-2, 1-3... 2-3... every combination of pieces. 
I then start finding 3-sequence combos: Iterate through the list of valid piece 0->1 lists, and for each piece 1 index (so 1, 2, 3, 6, 35, 36... from the example above), find the compatibility list from piece 1->2 for that index. This will give me a sequence of lists. For each item in this sequence, I filter it by taking the intersection with the compatibility list for piece 0->2 for the selected piece 0 iteration. This gives me a collection of "3-lists". I find all 3-lists ((0, 1, 2), (1, 2, 3), (2, 3, 4)), and repeat the process of filtering to get 4-lists: ((0, 1, 2, 3), (1, 2, 3 4)). Repeat to get 5-lists. If I have only 5 pieces, the 5 list represents all solutions. If I have more than n pieces, repeat until I have an n-list. This approach DOES work, and I don't THINK I'm duplicating many (if any) calculations, but the combinatorics get so large that it takes too long or - when I have 8 or 9 pieces - takes up 30+ GB of ram, then crashes. The ultimate question: how can I solve this problem (searching for ALL solutions for a given puzzle) more efficiently? Sub-questions: Optimizations to my algorithmic approach? Optimizations to the calculations performed (I used ints and bitwise operations and then set intersections because I thought they'd be fast)? Rejections of my assumptions that might result in faster solutions? Thanks!
I think your approach with bitmaps is a good start. One of the problems is that if a narrow area is created by a combination, where a cell could never be covered by any of the remaining pieces, the brute force search will only discover this much later -- after having added several pieces successfully in another area of the board. This means that eventually there are a lot of such combinations being combined in astronomically more combinations, while it is already clear that the problematic area cannot be covered by any of them. Depth-first The first suggestion is to use a depth-first search: select a free cell, and find all possible piece positions that would occupy that cell, and only those. If the cell cannot be covered by any piece, then backtrack. If there are multiple possibilities, try the first one, and continue with a next free cell, ...etc. When backtracking to this position, try the next way a piece can cover that cell, ...etc. Using a depth first search will at least be more memory efficient, but will also have the same problem: some free cells become uncoverable, but this may be detected much later, meaning that backtracking will be inefficient, as the latest placed pieces will get alternatives first, which doesn't solve the uncoverable-cell problem. Choose free cell I would propose this improvement on a depth-first approach: When deciding on the next "move", first iterate all free cells and for each of those cells determine how many valid moves would be possible that cover that cell. This extra step represents some work, but offers huge advantages: Now you will detect early when there is a cell that has no more hope of getting covered, so you can backtrack earlier than you would normally do; You can select the cell that has the fewest possible coverings. This means you actually look for areas where you might run into problems soon, and give those priority, again with the aim to backtrack early. Implementation I have made an implementation of this in JavaScript. I'm a bit disappointed that it turned out to be so much code. Still, the fiddling with these bitmaps was made a bit easier as JavaScript supports big integers, and so the board of 56 cells could be represented with one integer, even allowing for extra cells to make a boundary (wall) around the board. This snippet starts with an empty board (defined with strings, but immediately converted to a bitmap), and defines the 12 shapes using the same string-to-bitmap logic. For each piece it finds all valid positions on the board, with its 6 rotations and mirror transformation. I believe this is what you already did in your implementation. Then it starts the depth-first search, but with this extra logic to select the "most difficult" cell to cover. All possible coverings for that cell represent the children in the recursion tree. When a solution is found, it is printed, and a small delay is introduced (you can alter the delay interactively) so the browser does not hang and you can have a longer look at one configuration. Then it continues the search for more. The printed output will use the single letters (A-L) to identify the pieces as depicted in the image you have shared. Asterisks in the output denote cells that exist in the bitmap, but are "walls". These walls might not all be really necessary for the algorithm, but I just left it like that. 
// Utility function to introduce delays between asynchronous executions const delay = ms => new Promise(resolve => setTimeout(resolve, ms)); // Utility function to translate an array of strings to a bitpattern (bigint) function hexaShape(lines, width) { let bits = 0n; let bit = 1n; for (let [i, line] of lines.entries()) { if (line.length > width * 2 + 1) throw `line width is ${line.length}, but expected at most ${width * 2 + 1}`; if (!/^([#-] )*[#-]?$/.test(line)) throw `line ${i} has invalid format`; for (let j = 0; j < width; j++) { if (line[2*j] === "#") bits |= bit; bit <<= 1n; } } return bits; } // For I/O handling const output = document.querySelector("pre"); const input = document.querySelector("input"); class Board { /* Constructor takes an array of strings representing lines of the board area, surrounded by boundaries. See example run for the expected format for the parameter */ constructor(lines) { this.width = (lines[0].length + 1) >> 1; this.height = lines.length; this.bits = hexaShape(lines, this.width); if (lines[0].includes ('-') || lines.at(-1).includes('-')) throw "board should have boundaries"; if (lines.some(line => /^-|-$/.test(line.trim()))) throw "board should have boundaries"; // Shapes are the pieces. One shape can have more than one actual position/transformation this.shapes = []; } translate(bits, translation) { /* Transform the positioned shape by applying the given translation callback to each cell. Used for mirroring and for rotating. Returns an array with the transformed position in all its possible locations on the board. */ // Rotate 60Β° clockwise around the (0, 0) coordinate. let old = bits; bits = 0n; let bit = 1n; for (let row = 0; row < this.height; row++) { for (let col = 0; col < this.width; col++) { if (old & bit) bits |= 1n << BigInt(translation(row, col)); bit <<= 1n; } } // Shift shape's cell up and left as much as possible -- which probably is an invalid position while ((bits & 1n) == 0n) bits >>= 1n; // Shift it back within the boundaries of the board and append it to the array of valid positions const positions = []; while (bits < this.bits) { if ((bits & this.bits) == 0) positions.push(bits); bits <<= 1n; } return positions; } mirror(bits) { return this.translate(bits, (row, col) => (row + 1) * (this.width - 1) - col)[0]; } rotation(bits) { return this.translate(bits, (row, col) => ((row + col) * this.width) - row); } addShape(color, lines) { let bits = hexaShape(lines, this.width); if (bits == 0n) throw "empty shape"; const positions = []; const unique = new Set; // Apply mirroring and rotation to arrive at all valid positions of this shape on the board. for (let mirror = 0; mirror < 2; mirror++) { bits = this.mirror(bits); for (let rotation = 0; rotation < 6; rotation++) { const shifts = this.rotation(bits); bits = shifts[0]; if (unique.has(bits)) continue; // Skip: it's an already processed position unique.add(bits); positions.push(...shifts); } } if (positions.length == 0) throw "Could not fit shape unto empty board"; this.shapes.push({ color, positions, placement: 0n }); } toString() { let output = ""; let bit = 1n; for (let row = 0; row < this.height; row++) { output += " ".repeat(row); for (let col = 0; col < this.width; col++) { const shape = this.shapes.find(({placement}) => placement & bit); output += shape ? shape.color[0] : (this.bits & bit) ? 
"*" : " "; output += " "; bit <<= 1n; } output += "\n"; } return output; } getMoves(occupied, cell) { /* Return an array will all possible positions of any unused shape that covers the given cell */ const moves = []; for (const shape of this.shapes) { if (shape.placement) continue; for (const position of shape.positions) { if ((cell & position) && !(position & occupied)) { // Found candidate moves.push([shape, position]); } } } return moves; } getCriticalCell(occupied) { /* This leads to optimisation: do a quick run over all free cells and count how many ways it can be covered. This will detect when there is a cell that cannot be covered. If there are no such cells, the cell with the least number of possible coverings is returned */ let minCount = Infinity, critical = -2n; for (let cell = 1n; cell < occupied; cell <<= 1n) { if (cell & occupied) continue; // Occupied // Count all moves that would cover this cell let count = this.getMoves(occupied, cell).length; if (count < minCount) { if (!count) return -1n; // Found a cell that cannot be covered minCount = count; critical = cell; } } return critical; } async recur(occupied, remaining) { /* Depth-first search for solutions */ if (remaining === 0) { // BINGO!! output.textContent = this.toString(); await delay(+input.value); return; } const cell = this.getCriticalCell(occupied); if (cell == -1n) return; // Stuck. Need to backtrack for (const [shape, position] of this.getMoves(occupied, cell)) { shape.placement = position; await this.recur(occupied | position, remaining - 1); shape.placement = 0n; } } async solutions() { await this.recur(this.bits, this.shapes.length); } } function main() { const board = new Board([ "# # # # # # # # # # # # # # #", "# # # - - - - - - - - - - - #", "# # - - - - - - - - - - - # #", "# - - - - - - - - - - - - # #", "# - - - - - - - - - - - # # #", "# - - - - - - - - - - - # # #", "# # # # # # # # # # # # # # #" ]); board.addShape("A", ["- - - #", "- - # #", "# #"]); board.addShape("B", ["- - # #", "# # #"]); board.addShape("C", ["- - - - #", "- - - #", "# # #"]); board.addShape("D", ["- - - #", "# # # #"]); board.addShape("E", ["- # #", "# # #"]); board.addShape("F", ["- - #", "# # # #"]); board.addShape("G", ["- # - #", "# # #"]); board.addShape("H", ["- - #", "- #", "# # #"]); board.addShape("I", ["- - - #", "# # #"]); board.addShape("J", ["# #", "- # #"]); board.addShape("K", ["- # #", "# #"]); board.addShape("L", ["- - #", "# # #"]); board.solutions(); } main(); <pre></pre> Delay: <input type="number" min="0" max="5000" step="50" value="50" > Observations You'll notice that the pieces at the left side quickly change from one solution to the next, while on the right side of the board there is no change soon. This is because the algorithm decided that the cells at the right of the board were the ones with the least possibilities for covering, so there the very first piece placements happened -- at the top of the search tree. If you want to run this code on boards where some pieces were already placed (like in the images you shared), then change the code like this: Initialise the board with more '#' characters to indicate where pieces were already placed. Comment out the calls of addPiece of the pieces that are no longer available. Solution size I ran a variant of the above code that only counts solutions, and uses memoization. After some 25 minutes running time, the result was: 6,029,968 solutions. 
// Utility function to introduce delays between asynchronous executions const delay = ms => new Promise(resolve => setTimeout(resolve, ms)); // Utility function to translate an array of strings to a bitpattern (bigint) function hexaShape(lines, width) { let bits = 0n; let bit = 1n; for (let [i, line] of lines.entries()) { if (line.length > width * 2 + 1) throw `line width is ${line.length}, but expected at most ${width * 2 + 1}`; if (!/^([#-] )*[#-]?$/.test(line)) throw `line ${i} has invalid format`; for (let j = 0; j < width; j++) { if (line[2*j] === "#") bits |= bit; bit <<= 1n; } } return bits; } const output = document.querySelector("pre"); // For I/O handling let counter = 0; class Board { /* Constructor takes an array of strings representing lines of the board area, surrounded by boundaries. See example run for the expected format for the parameter */ constructor(lines) { this.width = (lines[0].length + 1) >> 1; this.height = lines.length; this.bits = hexaShape(lines, this.width); if (lines[0].includes ('-') || lines.at(-1).includes('-')) throw "board should have boundaries"; if (lines.some(line => /^-|-$/.test(line.trim()))) throw "board should have boundaries"; // Shapes are the pieces. One shape can have more than one actual position/transformation this.shapes = []; this.map = new Map; } translate(bits, translation) { /* Transform the positioned shape by applying the given translation callback to each cell. Used for mirroring and for rotating. Returns an array with the transformed position in all its possible locations on the board. */ // Rotate 60Β° clockwise around the (0, 0) coordinate. let old = bits; bits = 0n; let bit = 1n; for (let row = 0; row < this.height; row++) { for (let col = 0; col < this.width; col++) { if (old & bit) bits |= 1n << BigInt(translation(row, col)); bit <<= 1n; } } // Shift shape's cell up and left as much as possible -- which probably is an invalid position while ((bits & 1n) == 0n) bits >>= 1n; // Shift it back within the boundaries of the board and append it to the array of valid positions const positions = []; while (bits < this.bits) { if ((bits & this.bits) == 0) positions.push(bits); bits <<= 1n; } return positions; } mirror(bits) { return this.translate(bits, (row, col) => (row + 1) * (this.width - 1) - col)[0]; } rotation(bits) { return this.translate(bits, (row, col) => ((row + col) * this.width) - row); } addShape(color, lines) { let bits = hexaShape(lines, this.width); if (bits == 0n) throw "empty shape"; const positions = []; const unique = new Set; // Apply mirroring and rotation to arrive at all valid positions of this shape on the board. for (let mirror = 0; mirror < 2; mirror++) { bits = this.mirror(bits); for (let rotation = 0; rotation < 6; rotation++) { const shifts = this.rotation(bits); bits = shifts[0]; if (unique.has(bits)) continue; // Skip: it's an already processed position unique.add(bits); positions.push(...shifts); } } if (positions.length == 0) throw "Could not fit shape unto empty board"; this.shapes.push({ id: 1n << BigInt(this.shapes.length), // Unique bit for shape color, positions, placement: 0n }); } toString() { let output = ""; let bit = 1n; for (let row = 0; row < this.height; row++) { output += " ".repeat(row); for (let col = 0; col < this.width; col++) { const shape = this.shapes.find(({placement}) => placement & bit); output += shape ? shape.color[0] : (this.bits & bit) ? 
"*" : " "; output += " "; bit <<= 1n; } output += "\n"; } return output; } getMoves(occupied, cell) { /* Return an array will all possible positions of any unused shape that covers the given cell */ const moves = []; for (const shape of this.shapes) { if (shape.placement) continue; for (const position of shape.positions) { if ((cell & position) && !(position & occupied)) { // Found candidate moves.push([shape, position]); } } } return moves; } getCriticalCell(occupied) { /* This leads to optimisation: do a quick run over all free cells and count how many ways it can be covered. This will detect when there is a cell that cannot be covered. If there are no such cells, the cell with the least number of possible coverings is returned */ let minCount = Infinity, critical = -2n; for (let cell = 1n; cell < occupied; cell <<= 1n) { if (cell & occupied) continue; // Occupied // Count all moves that would cover this cell let count = this.getMoves(occupied, cell).length; if (count < minCount) { if (!count) return -1n; // Found a cell that cannot be covered minCount = count; critical = cell; } } return critical; } async recur(occupied, remaining, usedShapes) { /* Depth-first search for solutions */ if (remaining === 0) { // BINGO!! output.textContent = ++counter; if (counter % 100 == 0) await delay(0); return 1; } let map = this.map.get(usedShapes); if (!map) this.map.set(usedShapes, map = new Map); const memoCount = map.get(occupied); if (memoCount !== undefined) { if (memoCount) { counter += memoCount; output.textContent = counter; if (counter % 100 == 0) await delay(0); } return memoCount; } let count = 0; const cell = this.getCriticalCell(occupied); if (cell != -1n) { for (const [shape, position] of this.getMoves(occupied, cell)) { shape.placement = position; count += await this.recur(occupied | position, remaining - 1, usedShapes | shape.id); shape.placement = 0n; } } map.set(occupied, count); return count; } async solutions() { let start = performance.now(); await this.recur(this.bits, this.shapes.length, 0n); console.log("all done", counter); console.log(performance.now() - start, "milliseconds"); } } function main() { const board = new Board([ "# # # # # # # # # # # # # # #", "# # # - - - - - - - - - - - #", "# # - - - - - - - - - - - # #", "# - - - - - - - - - - - - # #", "# - - - - - - - - - - - # # #", "# - - - - - - - - - - - # # #", "# # # # # # # # # # # # # # #" ]); board.addShape("A", ["- - - #", "- - # #", "# #"]); board.addShape("B", ["- - # #", "# # #"]); board.addShape("C", ["- - - - #", "- - - #", "# # #"]); board.addShape("D", ["- - - #", "# # # #"]); board.addShape("E", ["- # #", "# # #"]); board.addShape("F", ["- - #", "# # # #"]); board.addShape("G", ["- # - #", "# # #"]); board.addShape("H", ["- - #", "- #", "# # #"]); board.addShape("I", ["- - - #", "# # #"]); board.addShape("J", ["# #", "- # #"]); board.addShape("K", ["- # #", "# #"]); board.addShape("L", ["- - #", "# # #"]); board.solutions(); } main(); Number of solutions found: <pre></pre>
15
2
74,998,112
2023-1-3
https://stackoverflow.com/questions/74998112/how-to-list-latest-posts-in-django
I'm working on my blog. I'm trying to list my latest posts in page list_posts.html.I tried but posts are not shown, I don't know why. I don't get any errors or anything, any idea why my posts aren't listed? This is models.py from django.db import models from django.utils import timezone from ckeditor.fields import RichTextField from stdimage import StdImageField STATUS = ( (0,"Publish"), (1,"Draft"), ) class Category(models.Model): created_at = models.DateTimeField(auto_now_add=True, verbose_name="Created at") updated_at = models.DateTimeField(auto_now=True, verbose_name="Updated at") title = models.CharField(max_length=255, verbose_name="Title") class Meta: verbose_name = "Category" verbose_name_plural = "Categories" ordering = ['title'] def __str__(self): return self.title class Post(models.Model): created_at = models.DateTimeField(auto_now_add=True, verbose_name="Created at") updated_at = models.DateTimeField(auto_now=True, verbose_name="Updated at") is_published = models.BooleanField(default=False, verbose_name="Is published?") published_at = models.DateTimeField(null=True, blank=True, editable=False, verbose_name="Published at") title = models.CharField(max_length=200, verbose_name="Title") slug = models.SlugField(max_length=200, unique=True) author = models.ForeignKey('auth.User', verbose_name="Author", on_delete=models.CASCADE) category = models.ForeignKey(Category, verbose_name="Category", on_delete=models.CASCADE) body = RichTextField(blank=True, null=True) image = StdImageField(upload_to='featured_image/%Y/%m/%d/', variations={'standard':(1170,820),'banner':(1170,530),'thumbnail':(500,500)}) status = models.IntegerField(choices=STATUS, default=0) class Meta: verbose_name = "Post" verbose_name_plural = "Posts" ordering = ['-created_at'] def publish(self): self.is_published = True self.published_at = timezone.now() self.save() def __str__(self): return self.title This is views.py from django.shortcuts import render, get_object_or_404 from django.utils import timezone from .models import Category, Post def post_list(request): posts = Post.objects.filter(published_at__lte=timezone.now()).order_by('published_at') latest_posts = Post.objects.filter(published_at__lte=timezone.now()).order_by('published_at')[:5] context = {'posts': posts, 'latest_posts': latest_posts} return render(request, 'list_posts.html', context) def post_detail(request, pk, post): latest_posts = Post.objects.filter(published_at__lte=timezone.now()).order_by('published_at')[:5] post = get_object_or_404(Post, pk=pk) context = {'post': post, 'latest_posts': latest_posts} return render(request, 'post_detail.html', context) This is list_posts.html {% extends "base.html" %} {% load static %} {% block content %} <!-- Main Wrap Start --> <main class="position-relative"> <div class="post-carausel-1-items mb-50"> {% for post in latest_posts %} <div class="col"> <div class="slider-single bg-white p-10 border-radius-15"> <div class="img-hover-scale border-radius-10"> <span class="top-right-icon bg-dark"><i class="mdi mdi-flash-on"></i></span> <a href="{{ post.get_absolute_url }}"> <img class="border-radius-10" src="{{ post.image.standard.url }}" alt="post-slider"> </a> </div> <h6 class="post-title pr-5 pl-5 mb-10 mt-15 text-limit-2-row"> <a href="{{ post.get_absolute_url }}">{{ post.title }}</a> </h6> <div class="entry-meta meta-1 font-x-small color-grey float-left text-uppercase pl-5 pb-15"> <span class="post-by">By <a href="#">{{ post.author }}</a></span> <span class="post-on">{{ post.created_at}}</span> </div> </div> </div> 
{% endfor %} </div> </main> {% endblock content%} Everything works except that the posts aren't listed. Why don't my posts get listed? Thanks in advance!
The reason this doesn't work is because the published_at is apparently NULL and is thus never filled in. With the .filter(published_at__lte=timezone.now()), it checks that the published_at is less than or equal to the current timestamp. If it is NULL, it thus is excluded. That means that you will either need to fill in the published_at some way, or filter (and order) with a different field, like created_at. You can thus work with: from django.db.models.functions import Now from django.shortcuts import get_object_or_404, render from .models import Category, Post def post_list(request): posts = Post.objects.filter(created_at__lte=Now()).order_by('-created_at') latest_posts = posts[:5] context = {'posts': posts, 'latest_posts': latest_posts} return render(request, 'list_posts.html', context) def post_detail(request, pk, post): latest_posts = Post.objects.filter(created_at__lte=Now()).order_by( '-created_at' )[:5] post = get_object_or_404(Post, pk=pk) context = {'post': post, 'latest_posts': latest_posts} return render(request, 'post_detail.html', context) Note: You can work with Now [Django-doc] to work with the database timestamp instead. This can be useful if you want to specify the queryset in a class-based view, since each time the queryset is evaluated, it will then take the (updated) timestamp.
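If you would rather keep filtering on published_at, the other option mentioned above, filling the field in, can reuse the publish() helper already defined on the model. A one-off backfill sketch (assuming posts with status 0, i.e. "Publish", are the ones that should become visible; adjust the import path to your app):
# e.g. run once from a data migration or `python manage.py shell`
from blog.models import Post   # assumption: your app is called "blog"

for post in Post.objects.filter(published_at__isnull=True, status=0):
    post.publish()   # sets is_published=True and published_at=timezone.now()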
3
2
74,993,877
2023-1-3
https://stackoverflow.com/questions/74993877/different-behavior-of-applystr-and-astypestr-for-datetime64ns-pandas-colum
I'm working with datetime information in pandas and wanted to convert a bunch of datetime64[ns] columns to str. I noticed a different behavior from the two approaches that I expected to yield the same result. Here's a MCVE. import pandas as pd # Create a dataframe with dates according to ISO8601 df = pd.DataFrame({"dt_column": ["2023-01-01", "2023-01-02", "2023-01-02"]}) # Convert the strings to datetimes # (I expect the time portion to be 00:00:00) df["dt_column"] = pd.to_datetime(df["dt_column"]) df["str_from_astype"] = df["dt_column"].astype(str) df["str_from_apply"] = df["dt_column"].apply(str) print(df) print() print("Datatypes of the dataframe") print(df.dtypes) Output dt_column str_from_astype str_from_apply 0 2023-01-01 2023-01-01 2023-01-01 00:00:00 1 2023-01-02 2023-01-02 2023-01-02 00:00:00 2 2023-01-02 2023-01-02 2023-01-02 00:00:00 Datatypes of the dataframe dt_column datetime64[ns] str_from_astype object str_from_apply object dtype: object If I use .astype(str) the time information is lost and when I use .apply(str) the time information is retained (or inferred). Why is that? (Pandas v1.5.2, Python 3.9.15)
The time information is never lost, if you use 2023-01-02 12:00, you'll see that all times will be present with astype, but also visible in the original datetime column: dt_column str_from_astype str_from_apply 0 2023-01-01 00:00:00 2023-01-01 00:00:00 2023-01-01 00:00:00 1 2023-01-02 00:00:00 2023-01-02 00:00:00 2023-01-02 00:00:00 2 2023-01-02 12:00:00 2023-01-02 12:00:00 2023-01-02 12:00:00 With apply, the python str builtin is applied on each Timestamp object, which always shows a full format: str(pd.Timestamp('2023-01-01')) # '2023-01-01 00:00:00' With astype, the formatting is handled by pandas.io.formats.format.SeriesFormatter, which is a bit smarter and decides on the output format depending on the context (here other values in the Series and the presence of a non-null time). The canonical way to be explicit is anyway to use dt.strftime: # without time df["dt_column"].dt.strftime('%Y-%m-%d') # with time df["dt_column"].dt.strftime('%Y-%m-%d %H:%M:%S')
16
19
74,992,814
2023-1-3
https://stackoverflow.com/questions/74992814/pandas-confused-when-extending-dataframe-vs-series-column-index-why-the-dif
First off, let me say that I've already looked over various responses to similar questions, but so far, none of them has really made it clear to me why (or why not) the Series and DataFrame methodologies are different. Also, some of the Pandas information is not clear, for example looking up Series.reindex, https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html all the examples suddenly switch to showing examples for DataFrame not Series, but the functions don't seem to overlap exactly. So, now to it, first with a DataFrame. > df = pd.DataFrame(np.random.randn(6,4), index=range(6), columns=list('ABCD')) > df Out[544]: A B C D 0 0.136833 -0.974500 1.708944 0.435174 1 -0.357955 -0.775882 -0.208945 0.120617 2 -0.002479 0.508927 -0.826698 -0.904927 3 1.955611 -0.558453 -0.476321 1.043139 4 -0.399369 -0.361136 -0.096981 0.092468 5 -0.130769 -0.075684 0.788455 1.640398 Now, to add new columns, I can do something simple (2 ways, same result). > df[['X','Y']] = (99,-99) > df.loc[:,['X','Y']] = (99,-99) > df Out[557]: A B C D X Y 0 0.858615 -0.552171 1.225210 -1.700594 99 -99 1 1.062435 -1.917314 1.160043 -0.058348 99 -99 2 0.023910 1.262706 -1.924022 -0.625969 99 -99 3 1.794365 0.146491 -0.103081 0.731110 99 -99 4 -1.163691 1.429924 -0.194034 0.407508 99 -99 5 0.444909 -0.905060 0.983487 -4.149244 99 -99 Now, with a Series, I have hit a (mental?) block trying the same. I'm going to be using a loop to construct a list of Series that will eventually be a data frame, but I want to deal with each 'row' as a Series first, (to make development easier). > ss = pd.Series(np.random.randn(4), index=list('ABCD')) > ss Out[552]: A 0.078013 B 1.707052 C -0.177543 D -1.072017 dtype: float64 > ss['X','Y'] = (99,-99) Traceback (most recent call last): ... KeyError: "None of [Index(['X', 'Y'], dtype='object')] are in the [index]" Same for, > ss[['X','Y']] = (99,-99) > ss.loc[['X','Y']] = (99,-99) KeyError: "None of [Index(['X', 'Y'], dtype='object')] are in the [index]" The only way I can get this working is a rather clumsy (IMHO), > ss['X'],ss['Y'] = (99,-99) > ss Out[560]: A 0.078013 B 1.707052 C -0.177543 D -1.072017 X 99.000000 Y -99.000000 dtype: float64 I did think that, perhaps, reindexing the Series to add the new indices prior to assignment might solve to problem. It would, but then I hit an issue trying to change the index. > ss = pd.Series(np.random.randn(4), index=list('ABCD'), name='z') > xs = pd.Series([99,-99], index=['X','Y'], name='z') Here I can concat my 2 Series to create a new one, and I can also concat the Series indices, eg, > ss.index.append(xs.index) Index(['A', 'B', 'C', 'D', 'X', 'Y'], dtype='object') But I can't extend the current index with, > ss.index = ss.index.append(xs.index) ValueError: Length mismatch: Expected axis has 4 elements, new values have 6 elements So, what intuitive leap must I make to understand why the former Series methods don't work, but (what looks like an equivalent) DataFrame method does work? It makes passing multiple outputs back from a function into new Series elements a bit clunky. I can't 'on the fly' make up new Series index names to insert values into my exiting Series object.
I don't think you can directly modify the Series in place to add multiple values at once. If having a new object is not an issue: ss = pd.Series(np.random.randn(4), index=list('ABCD'), name='z') xs = pd.Series([99,-99], index=['X','Y'], name='z') # new object with updated index ss = ss.reindex(ss.index.union(xs.index)) ss.update(xs) Output: A -0.369182 B -0.239379 C 1.099660 D 0.655264 X 99.000000 Y -99.000000 Name: z, dtype: float64 in place alternative using a function: ss = pd.Series(np.random.randn(4), index=list('ABCD'), name='z') xs = pd.Series([99,-99], index=['X','Y'], name='z') def extend(s1, s2): s1.update(s2) # update common indices # add others for idx, val in s2[s2.index.difference(s1.index)].items(): s1[idx] = val extend(ss, xs) Updated ss: A 0.279925 B -0.098150 C 0.910179 D 0.317218 X 99.000000 Y -99.000000 Name: z, dtype: float64
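Another option worth noting (my addition; it likewise returns a new object rather than modifying ss in place) is to concatenate the two Series, which is probably the closest analogue to the DataFrame-style extension in the question:

import numpy as np
import pandas as pd

ss = pd.Series(np.random.randn(4), index=list('ABCD'), name='z')
xs = pd.Series([99, -99], index=['X', 'Y'], name='z')

# Stack the two Series; overlapping index labels would be kept twice,
# so this fits best when xs only introduces new labels.
ss = pd.concat([ss, xs])
print(ss)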
3
2
74,991,754
2023-1-3
https://stackoverflow.com/questions/74991754/how-to-yield-one-array-element-and-keep-other-elements-in-pyspark-dataframe
I have a pyspark DataFrame like: +------------------------+ | ids| +------------------------+ |[101826, 101827, 101576]| +------------------------+ and I want to explode this DataFrame like: +------------------------+ | id| ids| +------------------------+ |101826 |[101827, 101576]| |101827 |[101826, 101576]| |101576 |[101826, 101827]| +------------------------+ How can I do this using a PySpark UDF or other methods?
The easiest way out is to explode ids into a new id column and use array_except to exclude each exploded id from its own row's array. Code below. ( df.withColumn('id', explode('ids')) .withColumn('ids', array_except(col('ids'), array('id'))) .select('id', 'ids') ).show(truncate=False) +------+----------------+ |id |ids | +------+----------------+ |101826|[101827, 101576]| |101827|[101826, 101576]| |101576|[101826, 101827]| +------+----------------+
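For completeness, a self-contained sketch of the same transformation (the SparkSession setup and the sample data construction are my additions; the column name ids and the values come from the question):

from pyspark.sql import SparkSession
from pyspark.sql.functions import array, array_except, col, explode

spark = SparkSession.builder.getOrCreate()

# Rebuild the single-row DataFrame from the question
df = spark.createDataFrame([([101826, 101827, 101576],)], ['ids'])

result = (
    df.withColumn('id', explode('ids'))                           # one row per array element
      .withColumn('ids', array_except(col('ids'), array('id')))   # drop that element from its row's array
      .select('id', 'ids')
)
result.show(truncate=False)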
4
4
74,988,070
2023-1-3
https://stackoverflow.com/questions/74988070/how-can-i-overlay-one-image-over-another-so-that-dark-background-is-transparent
I have 2 images, test1.jpg and test2.jpg, which are RGB images. They have been converted from a 2D numpy array, so they are monochrome images. They have the same shape. When I use the paste function, I only see one of the images instead of both. Here are the test1 and test2 jpgs (images omitted). This is what I get after doing test1.paste(test2) and test1.save('final.jpg'): the result only shows test2. Why is it only showing test2? Here is my code: im1 = Image.open('test1.jpg') im2 = Image.open('test2.jpg') im1.paste(im2) im1.save('final.jpg')
You simply need to choose the lighter of your two images at each point with PIL Channel Operations: from PIL import Image, ImageChops im1 = Image.open('test1.jpeg') im2 = Image.open('test2.jpeg') # Choose lighter of the two images at each pixel location combined = ImageChops.lighter(im1,im2) Note that you could use paste() as you originally intended, but that it will paste all the black as well as the white pixels from image2 over image1. In order to avoid that, you would need to make a mask and only paste where image2 is non-zero. That might look like this: im1 = Image.open('test1.jpeg') im2 = Image.open('test2.jpeg') # Make greyscale mask from image2 mask = im2.convert('L') mask = mask.point(lambda i: 255 if i>0 else 0) # Paste image2 into image1 only where image2 has non-black content im1.paste(im2, mask=mask) I just think the ImageChops.lighter() method is simpler. Note that these two methods will give subtly different results. For example, if a pixel is 192 in image1 and 67 in image2, the ImageChops.lighter() method will result in 192, whereas the paste() method will see there is something in image2, and therefore give you the 67. Your choice!
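Since the question mentions the images come from NumPy arrays anyway, a third variant (just a sketch, equivalent in spirit to ImageChops.lighter()) is to take the element-wise maximum of the pixel data:

import numpy as np
from PIL import Image

im1 = Image.open('test1.jpg')
im2 = Image.open('test2.jpg')

# Element-wise maximum keeps the lighter pixel from either image;
# this assumes both images have the same size and mode, as stated in the question.
combined = Image.fromarray(np.maximum(np.array(im1), np.array(im2)))
combined.save('final.jpg')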
4
5
74,987,702
2023-1-2
https://stackoverflow.com/questions/74987702/how-to-parse-script-tag-using-beautifulsoup
I am trying to read the window.appCache from a glassdoor reviews site. url = "https://www.glassdoor.com/Reviews/Alteryx-Reviews-E351220.htm" html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}) soup = BeautifulSoup(html.content,'html.parser') text = soup.findAll("script")[0].text This isolates the dict I need however when I tried to do json.loads() I get the following error: raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) I checked the type of text and it is str. When I print text to a file, it looks something like this (just a snippet as the output is about 5000 lines): window.appCache={"appName":"reviews","appVersion":"7.14.12","initialState" {"surveyEndpoint":"https:\u002F\u002Femployee-pulse-survey-b2c.us-east-1.prod.jagundi.com", "i18nStrings":{"_":"JSON MESSAGE BUNDLE - do not remove", "eiHeader.seeAllPhotos":" See All Photos","eiHeader.viewJobs":"View Jobs", "eiHeader.bptw.description":"This employer is a winner of the [year] Best Places to Work award. Winners were determined by the people who know these companies best... I am only concerned with the "reviews":[ field that is buried about halfway through the data, but I can't seem to parse the string into json and retrieve what I need.
One solution is to parse the required data with re/json module: import json import pprint import re import requests url = "https://www.glassdoor.com/Reviews/Alteryx-Reviews-E351220.htm" html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text reviews = re.search(r'"reviews":(\[.*?}\])}', html, flags=re.S).group(1) reviews = json.loads(reviews) pprint.pprint(reviews) Prints: [{'__typename': 'EmployerReview', 'advice': "Don't rush too finish a project", 'adviceOriginal': None, 'cons': 'Typical like other companies where newbies get higher salary and ' 'you have to work your way up for promotions nothing really bad', 'consOriginal': None, 'countHelpful': 0, 'countNotHelpful': 0, 'divisionLink': None, 'divisionName': None, 'employer': {'__ref': 'Employer:351220'}, 'employerResponses': [], 'employmentStatus': None, 'isCovid19': False, 'isCurrentJob': True, 'isLanguageMismatch': False, 'isLegal': True, 'jobEndingYear': None, 'jobTitle': None, 'languageId': 'eng', 'lengthOfEmployment': 6, 'location': None, ...and so on.
4
1
74,987,641
2023-1-2
https://stackoverflow.com/questions/74987641/how-to-remove-square-after-2-seconds
I have this code and I don't know how to make the red cube disappear after 2 seconds. import pygame import sys from pygame.locals import * pygame.init() a=0 #display prozor = pygame.display.set_mode((800,800)) FPS = pygame.time.Clock() FPS.tick(60) #boje green=pygame.Color(0 ,255 , 0) red=pygame.Color(255, 0, 0) yellow=pygame.Color(255, 255, 0) blue=pygame.Color(0, 0, 255) black=pygame.Color(0, 0, 0) white=pygame.Color(255, 255, 255) #class class Cube: def update(self): self.cx, self.cy = pygame.mouse.get_pos() self.square = pygame.Rect(self.cx, self.cy, 50, 50) def draw(self): pygame.draw.rect(prozor, (255, 0, 0), self.square) cube = Cube() drawing_cube = False #objekt for j in range(16): for i in range(800): if i%50 == 0: pygame.draw.rect(prozor, green, ((0+i, a), (50, 50),),1) a=a+50 #gameloop while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit if event.type == pygame.MOUSEBUTTONDOWN: cube.update() drawing_cube = True if drawing_cube: cube.draw() pygame.display.flip() pygame.display.update() Adding it to the draw class will just freeze the game and shut it. def draw(self): pygame.draw.rect(prozor, red, self.square) time.sleep(3) pygame.draw.rect(prozor, black, self.square I tried making another class called delete which would delete the cube after 3 seconds using the time module. def delete(self): time.sleep(3) pygame.draw.rect(prozor, black, self.square) and added it here while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit if event.type == pygame.MOUSEBUTTONDOWN: cube.update() drawing_cube = True if drawing_cube: cube.draw() pygame.display.flip() **cube.delete** pygame.display.flip() pygame.display.update() but the cube is not disappearing.
Use pygame.time.get_ticks to measure the time in milliseconds. Calculate the time when the cube must disappear again and hide the cube if the current time is greater than the calculated time. You also need to clear the display (prozor.fill(0)) and redraw the scene in each frame: drawing_cube = False hide_cube_time = 0 clock = pygame.time.Clock() run = True while run: clock.tick(100) current_time = pygame.time.get_ticks() for event in pygame.event.get(): if event.type == QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: cube.update() drawing_cube = True hide_cube_time = current_time + 2000 # 2000 milliseconds == 2 sconds if current_time > hide_cube_time: drawing_cube = False prozor.fill(0) a=0 for j in range(16): for i in range(800): if i%50 == 0: pygame.draw.rect(prozor, green, (0+i, a, 50, 50), 1) a=a+50 if drawing_cube: cube.draw() pygame.display.update() pygame.quit() sys.exit() Note, the typical PyGame application loop has to: limit the frames per second to limit CPU usage with pygame.time.Clock.tick handle the events by calling either pygame.event.pump() or pygame.event.get(). update the game states and positions of objects dependent on the input events and time (respectively frames) clear the entire display or draw the background draw the entire scene (blit all the objects) update the display by calling either pygame.display.update() or pygame.display.flip()
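A different way to get the same effect, shown here only as a minimal standalone sketch (the window size and the bare red square are my simplifications, not the question's grid): pygame.time.set_timer posts a custom event after the given delay, so no manual timestamp bookkeeping is needed.

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()

HIDE_SQUARE = pygame.USEREVENT + 1   # custom event fired when the square should disappear
square = None                        # a pygame.Rect while visible, None while hidden

run = True
while run:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            run = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            square = pygame.Rect(*event.pos, 50, 50)
            pygame.time.set_timer(HIDE_SQUARE, 2000)   # fire HIDE_SQUARE after 2 seconds (repeats until disabled)
        elif event.type == HIDE_SQUARE:
            square = None
            pygame.time.set_timer(HIDE_SQUARE, 0)      # interval 0 stops the timer

    screen.fill((0, 0, 0))
    if square is not None:
        pygame.draw.rect(screen, (255, 0, 0), square)
    pygame.display.flip()

pygame.quit()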
3
2
74,945,655
2022-12-28
https://stackoverflow.com/questions/74945655/dataspell-outputs-the-following-error-local-cdn-resources-have-problems-on-chro
I am faced with the following error: Local cdn resources have problems on chrome/safari when used in jupyter-notebook. It appears when working with the pyvis library. net = Network(notebook=True) net.add_nodes( [1, 2, 3, 4, 5], # node ids label=['Node #1', 'Node #2', 'Node #3', 'Node #4', 'Node #5'], # node labels # node titles (display on mouse hover) title=['Main node', 'Just node', 'Just node', 'Just node', 'Node with self-loop'], color=['#d47415', '#22b512', '#42adf5', '#4a21b0', '#e627a9'] # node colors (HEX) ) net.add_edges([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 1)]) net.show('graph.html') I tried switching the browser in DataSpell.
From pyvis documentation: while using notebook in chrome browser, to render the graph, pass additional kwarg β€˜cdn_resources’ as β€˜remote’ or β€˜inline’ I did net = Network(notebook=True, cdn_resources='in_line') A note from me - you have to use 'in_line' instead of 'inline'. From sources: assert cdn_resources in ["local", "in_line", "remote"], "cdn_resources not in [local, in_line, remote]." Additionally, pyvis output render is an open feature request for DataSpell at the moment. Watch and upvote this ticket if you're interested: DS-3446
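Applied to the code from the question, only the constructor call changes (the rest is the question's own example, repeated here for convenience):

from pyvis.network import Network

# 'in_line' (or 'remote') tells pyvis how to embed its JS/CSS resources,
# which avoids the blank output in Chrome-based notebook frontends.
net = Network(notebook=True, cdn_resources='in_line')

net.add_nodes(
    [1, 2, 3, 4, 5],  # node ids
    label=['Node #1', 'Node #2', 'Node #3', 'Node #4', 'Node #5'],
    title=['Main node', 'Just node', 'Just node', 'Just node', 'Node with self-loop'],
    color=['#d47415', '#22b512', '#42adf5', '#4a21b0', '#e627a9'],
)
net.add_edges([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (5, 1)])
net.show('graph.html')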
3
5
74,986,002
2023-1-2
https://stackoverflow.com/questions/74986002/attributeerror-updater-object-has-no-attribute-dispatcher
When I run this code: from telegram.ext import * import keys print('Starting a bot....') def start_commmand(update, context): update.message.reply_text('Hello! Welcome To Store!') if __name__ == '__main__': updater = Updater(keys.token, True) dp = updater.dispatcher # Commands dp.add.handler(CommandHandler('start', start_commmand)) # Run bot updater.start_polling(1.0) updater.idle() I get this error: Traceback (most recent call last): File "C:\Users\pc\PycharmProjects\telegram\main.py", line 11, in <module> dp = updater.dispatcher AttributeError: 'Updater' object has no attribute 'dispatcher' I attempted to resolve this issue by updating the library but the error remained.
You probably found an example for v13, but as of a few days ago v20 of python-telegram-bot is out. Now you have to build your application differently and you have to use async functions. This should work: from telegram.ext import * import keys print('Starting a bot....') async def start_commmand(update, context): await update.message.reply_text('Hello! Welcome To Store!') if __name__ == '__main__': application = Application.builder().token(keys.token).build() # Commands application.add_handler(CommandHandler('start', start_commmand)) # Run bot application.run_polling(1.0) Also, here are some good examples for the python-telegram-bot library.
5
11
74,985,638
2023-1-2
https://stackoverflow.com/questions/74985638/how-to-plot-points-over-a-violin-plot
I have four pandas Series and I plot them using a violin plot as follows: import seaborn seaborn.violinplot([X1['total'], X2['total'], X3['total'], X4['total']]) I would like to plot the values on top of the violin plot so I added: seaborn.stripplot([X1['total'], X2['total'], X3['total'], X4['total']]) But this gives: It plots all the points over the first violin plot. What am I doing wrong?
Currently (seaborn 0.12.1), sns.violinplot seems to accept a list of lists as data, and interprets it similarly to a wide-form dataframe. sns.stripplot (as well as sns.swarmplot), however, interprets this as a single dataset. On the other hand, sns.stripplot accepts a dictionary of lists and interprets it as a wide-form dataframe. But sns.violinplot refuses to work with that dictionary. Note that seaborn is being actively reworked internally to allow a wider set of data formats, so one of the future versions will tackle this issue. So, using a list of lists for the violin plot and a dictionary for the strip plot allows combining both: import seaborn as sns import pandas as pd import numpy as np X1, X2, X3, X4 = [pd.DataFrame({'total': np.random.normal(.1, 1, np.random.randint(99, 300)).cumsum()}) for _ in range(4)] ax = sns.violinplot([X1['total'], X2['total'], X3['total'], X4['total']], inner=None) sns.stripplot({0: X1['total'], 1: X2['total'], 2: X3['total'], 3: X4['total']}, edgecolor='black', linewidth=1, palette=['white'] * 4, ax=ax)
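A further option (my addition, not part of the answer above) is to sidestep the input-format mismatch entirely by reshaping the data into long form, which both violinplot and stripplot accept through the x= and y= parameters:

import numpy as np
import pandas as pd
import seaborn as sns

X1, X2, X3, X4 = [pd.DataFrame({'total': np.random.normal(.1, 1, np.random.randint(99, 300)).cumsum()})
                  for _ in range(4)]

# Long form: one row per observation plus a column naming the source dataset
long_df = pd.concat(
    [df.assign(dataset=f'X{i}') for i, df in enumerate([X1, X2, X3, X4], start=1)],
    ignore_index=True,
)

ax = sns.violinplot(data=long_df, x='dataset', y='total', inner=None)
sns.stripplot(data=long_df, x='dataset', y='total', color='white',
              edgecolor='black', linewidth=1, ax=ax)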
3
5
74,984,318
2023-1-2
https://stackoverflow.com/questions/74984318/in-django-whats-the-difference-between-verbose-name-as-a-field-parameter-and
Consider this class: class Product(models.Model): name = models.CharField(verbose_name="Product Name", max_length=255) class Meta: verbose_name = "Product Name" I looked at the Django docs and it says: For verbose_name in a field declaration: "A human-readable name for the field." For verbose_name in a Meta declaration: "A human-readable name for the object, singular". When would I see either verbose_name manifest at runtime? In a form render? In Django admin?
The verbose_name in the Meta deals with the name of the model, not the field(s) of that model. It thus likely should be 'Product', not 'Product Name': class Product(models.Model): name = models.CharField(verbose_name='Product Name', max_length=255) class Meta: verbose_name = 'Product' This specifies the human-readable model name that the admin displays (it does not affect the database table name), whereas the verbose_name of the field(s) will show up in ModelForms and when you display or edit records.
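A small sketch of where each one shows up at runtime (the ProductForm class and the admin registration are made up for the illustration):

from django import forms
from django.contrib import admin
from .models import Product

class ProductForm(forms.ModelForm):
    class Meta:
        model = Product
        fields = ['name']

# The field-level verbose_name becomes the form field's label:
# >>> ProductForm()['name'].label
# 'Product Name'

# The Meta-level verbose_name is the display name of the model itself,
# e.g. the "Product" / "Products" headings on the admin index and change lists.
admin.site.register(Product)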
3
2
74,982,353
2023-1-2
https://stackoverflow.com/questions/74982353/problems-with-version-control-for-dictionaries-inside-a-python-class
I'm doing something wrong in the code below. I have a method (update_dictonary) that changes a value or values in a dictionary based on what is specificed in a tuple (new_points). Before I update the dictionary, I want to save that version in a list (history) in order to be able to access previous versions. However, my attempt below updates all dictionaries in history to be like the latest version. I can't figure out what I'm doing wrong here. test_dict = {'var0':{'var1':{'cond1':1, 'cond2':2, 'cond3':3} } } class version_control: def __init__ (self, dictionary): self.po = dictionary self.history = list() self.version = 0 def update_dictionary(self, var0, var1, new_points): po_ = self.po self.history.append(po_) for i in new_points: self.po[var0][var1][i[0]] = i[1] self.version += 1 def get_history(self, ver): return self.history[ver] a = version_control(test_dict) new_points = [('cond1', 2), ('cond2', 0)] a.update_dictionary('var0', 'var1', new_points) new_points = [('cond3', -99), ('cond2', 1)] a.update_dictionary('var0', 'var1', new_points) print(a.get_history(0)) print(a.get_history(1))
Try this from copy import deepcopy ... def update_dictionary(self, var0, var1, new_points): po_ = deepcopy(self.po) self.history.append(po_) for i in new_points: self.po[var0][var1][i[0]] = i[1] self.version += 1 ... The problem here is that when you assign po_ = self.po you expect po_ to be a new object with its own memory id, but actually you just create another reference (same memory id) to your dictionary. This means that if you update self.po, then po_ appears to update as well. To solve this, use deepcopy from the built-in copy module; it creates an independent copy. You can use this code to save the data into a JSON file. import json class version_control: def __init__(self, dictionary): self.po = dictionary self.version = 0 self.ZEROth_version() def update_dictionary(self, var0, var1, new_points, version=None): self.version += 1 for i in new_points: self.po[var0][var1][i[0]] = i[1] # set the ver to version if given else set to self.version ver = self.version if version is None else version with open("version.json", "r") as jsonFile: # loading the data from the file. data = json.load(jsonFile) data[str(ver)] = self.po with open("version.json", "w") as jsonFile: # save the updated dictionary in json file json.dump(data, jsonFile, indent=4) def get_history(self, ver): try: with open("version.json", "r") as jsonFile: # I don't use .get here; I catch key errors in the except block instead, so I don't need an if statement to check for None. You can add that if you want. return json.load(jsonFile)[str(ver)] # Handles the file not existing or being empty except (json.decoder.JSONDecodeError, FileNotFoundError, KeyError) as e: print("File or Version not found") def ZEROth_version(self): with open("version.json", "w") as f: data = {0: self.po} json.dump(data, f, indent=4) I have explained the main points; if you want more explanation, leave a comment and I will reply as soon as possible.
3
4
74,982,325
2023-1-2
https://stackoverflow.com/questions/74982325/poetry-clean-remove-package-from-env-after-removing-from-toml-file
I installed a package with poetry add X, and so now it shows up in the toml file and in the venv (mine's at .venv/lib/python3.10/site-packages/). Now to remove that package, I could use poetry remove X and I know that would work properly. But sometimes, it's easier to just go into the toml file and delete the package line there. So that's what I tried by removing the line for X. I then tried doing poetry install but that didn't do anything When I do ls .venv/lib/python3.10/site-packages/, I still see X is installed there. I also tried poetry lock but no change with that either. So is there some command to take the latest toml file and clean up packages from being installed that are no longer present in the toml?
Whenever you manually edit the pyproject.toml you have to run poetry lock --no-update to sync the locked dependencies in the poetry.lock file. This is necessary because Poetry will use the resolved dependencies from the poetry.lock file on install if this file is available. Once the pyproject.toml and poetry.lock file are in sync, run poetry install --sync to get the venv in sync with the poetry.lock file.
8
18
74,959,175
2022-12-30
https://stackoverflow.com/questions/74959175/getting-the-command-bin-sh-c-pip-install-no-cache-dir-r-requirements-txt
here is my requirements.txt beautifulsoup4==4.11.1 cachetools==5.2.0 certifi==2022.12.7 charset-normalizer==2.1.1 click==8.1.3 colorama==0.4.6 Flask==2.2.2 Flask-SQLAlchemy==3.0.2 google==3.0.0 google-api-core==2.10.2 google-auth==2.14.1 google-cloud-pubsub==2.13.11 googleapis-common-protos==1.57.0 greenlet==2.0.1 grpc-google-iam-v1==0.12.4 grpcio==1.51.1 grpcio-status==1.51.1 idna==3.4 importlib-metadata==5.2.0 itsdangerous==2.1.2 Jinja2==3.1.2 MarkupSafe==2.1.1 NotFound==1.0.2 proto-plus==1.22.1 protobuf==4.21.12 psycopg2==2.9.5 pyasn1==0.4.8 pyasn1-modules==0.2.8 requests==2.28.1 rsa==4.9 six==1.16.0 soupsieve==2.3.2.post1 SQLAlchemy==1.4.45 urllib3==1.26.13 Werkzeug==2.2.2 zipp==3.11.0 here is my Dockerfile FROM python:3.10-slim # Allow statements and log messages to immediately appear in the Knative logs ENV PYTHONUNBUFFERED True # Copy local code to the container image. ENV APP_HOME /app WORKDIR $APP_HOME COPY . ./ # Install production dependencies. RUN pip install --no-cache-dir -r requirements.txt CMD ["python", "-u", "main.py"] done all the versions upgrades and downgrades of the installed modules tried with python 3.8.2.final.0 && 3.10 python interpreter what to do? any leads would be appreciated..!!
I tried to install your Python dependencies in a Docker environment and I identified an error while installing the psycopg2 package. The reason is this package relies on two core dependencies: libpq-dev gcc But the Docker base image you use python:3.10-slim does not contain these core dependencies natively. You must declare their installation from your Dockerfile like so: FROM python:3.10-slim # Allow statements and log messages to immediately appear in the Knative logs ENV PYTHONUNBUFFERED True # Copy local code to the container image. ENV APP_HOME /app WORKDIR $APP_HOME COPY . ./ # Install core dependencies. RUN apt-get update && apt-get install -y libpq-dev build-essential # Install production dependencies. RUN pip install --no-cache-dir -r requirements.txt CMD ["python", "-u", "main.py"] UPDATE: Investigation steps Connect to Docker container running python:3.10-slim image: docker run -it --rm python:3.10-slim /bin/bash Write requirements.txt file with adapted content: cat << EOF > requirements.txt beautifulsoup4==4.11.1 cachetools==5.2.0 certifi==2022.12.7 charset-normalizer==2.1.1 click==8.1.3 colorama==0.4.6 Flask==2.2.2 Flask-SQLAlchemy==3.0.2 google==3.0.0 google-api-core==2.10.2 google-auth==2.14.1 google-cloud-pubsub==2.13.11 googleapis-common-protos==1.57.0 greenlet==2.0.1 grpc-google-iam-v1==0.12.4 grpcio==1.51.1 grpcio-status==1.51.1 idna==3.4 importlib-metadata==5.2.0 itsdangerous==2.1.2 Jinja2==3.1.2 MarkupSafe==2.1.1 NotFound==1.0.2 proto-plus==1.22.1 protobuf==4.21.12 psycopg2==2.9.5 pyasn1==0.4.8 pyasn1-modules==0.2.8 requests==2.28.1 rsa==4.9 six==1.16.0 soupsieve==2.3.2.post1 SQLAlchemy==1.4.45 urllib3==1.26.13 Werkzeug==2.2.2 zipp==3.11.0 EOF Run pip command: pip install --no-cache-dir -r requirements.txt Catch first error related to missing libpq-dev package Installing libpq-dev: apt update -y && apt-get install -y libpq-dev Run pip command again Catch second error related to missing gcc package
5
7
74,979,693
2023-1-2
https://stackoverflow.com/questions/74979693/removing-elements-from-sublists-in-python
I have two lists A1 and J1 containing many sublists. From each sublist of A1[0], I want to remove the element specified in J1[0]. I present the current and expected outputs. A1 = [[[1, 3, 4, 6], [0, 2, 3, 5]], [[1, 3, 4, 6], [1, 3, 4, 6]]] J1 = [[[1], [2]], [[1], [4]]] arD = [] for i in range(0,len(A1)): for j in range(0,len(J1)): C=set(A1[i][j])-set(J1[i][j]) D=list(C) arD.append(D) D=list(arD) print("D =",D) The current output is D = [[3, 4, 6], [0, 3, 5], [3, 4, 6], [1, 3, 6]] The expected output is D = [[[3, 4, 6], [0, 3, 5]],[[3, 4, 6],[1, 3, 6]]]
Code:- A1 = [[[1, 3, 4, 6], [0, 2, 3, 5]], [[1, 3, 4, 6], [1, 3, 4, 6]]] J1 = [[[1], [2]], [[1], [4]]] arD=[] for i in range(0,len(A1)): tmp=[] #Created a tmp variable list for j in range(0,len(J1)): C=set(A1[i][j])-set(J1[i][j]) tmp.append(list(C)) #Appending result in tmp variable arD.append(tmp) #Storing tmp list as a list of lists in arD. print("D =",arD) Output:- D = [[[3, 4, 6], [0, 3, 5]], [[3, 4, 6], [1, 3, 6]]]
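For reference, the same result can also be written as a nested list comprehension (my variant; it additionally preserves the original element order instead of relying on set ordering):

A1 = [[[1, 3, 4, 6], [0, 2, 3, 5]], [[1, 3, 4, 6], [1, 3, 4, 6]]]
J1 = [[[1], [2]], [[1], [4]]]

# For each paired sublist, keep only the elements of a that are not listed in j
D = [[[x for x in a if x not in j] for a, j in zip(A, J)] for A, J in zip(A1, J1)]
print("D =", D)
# D = [[[3, 4, 6], [0, 3, 5]], [[3, 4, 6], [1, 3, 6]]]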
4
2
74,979,220
2023-1-2
https://stackoverflow.com/questions/74979220/drawing-2d-and-3d-contour-in-the-same-plot
Is it possible to draw a 2D and 3D contour plot like this in Python? Sorry, I couldn't provide much detail on the plot in terms of mathematical equations and all.
Use plot_surface along with contour to project the contour. It is not limited to the Z plane; you can do this to the X and Y planes as well. There is an example in the official documentation of Matplotlib: https://matplotlib.org/stable/gallery/mplot3d/contourf3d_2.html#sphx-glr-gallery-mplot3d-contourf3d-2-py Note that an offset is needed to move the contour to the bottom of the 3D plot. You can set the offset equal to the lower bound of the z limit. I created an example: import matplotlib.pyplot as plt import numpy as np x = y = np.arange(-3.0, 3.0, 0.02) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X ** 2 - Y ** 2) Z2 = np.exp(-(X - 1) ** 2 - (Y - 1) ** 2) Z3 = np.exp(-(X + 1) ** 2 - (Y + 1) ** 2) Z = (Z1 - Z2 - Z3) * 2 fig, ax = plt.subplots(subplot_kw={"projection": "3d"}) # draw surface plot surf = ax.plot_surface(X, Y, Z, lw=0.1, cmap='coolwarm', edgecolor='k') # add color bar fig.colorbar(surf, shrink=0.5, aspect=10) # projecting the contour with an offset ax.contour(X, Y, Z, 20, zdir='z', offset=-2, cmap='coolwarm') # match the lower bound of zlim to the offset ax.set(zlim=(-2, 1)) plt.tight_layout() plt.show()
3
2
74,975,799
2023-1-1
https://stackoverflow.com/questions/74975799/beautiful-soup-scraping
I am trying to scrape lineups from https://www.rotowire.com/hockey/nhl-lineups.php I would like a resulting dataframe like the following Team Position Player Line CAR C Sebastian Aho Power Play #1 CAR LW Stefan Noesen Power Play #1 .... This is what I have currently, but am unsure how to get the team and line to matchup with the players/positions as well as put into a dataframe import requests, pandas as pd from bs4 import BeautifulSoup url = "https://www.rotowire.com/hockey/nhl-lineups.php" soup = BeautifulSoup(requests.get(url).text, "html.parser") lineups = soup.find_all('div', {'class':['lineups']})[0] names = lineups.find_all('a', title=True) for name in names: name = name.get('title') print(name) positions = lineups.find_all('div', {'class':['lineup__pos']}) for pos in positions: pos = pos.text print(pos)
Try: import pandas as pd import requests from bs4 import BeautifulSoup url = "https://www.rotowire.com/hockey/nhl-lineups.php" soup = BeautifulSoup(requests.get(url).content, "html.parser") all_data = [] for a in soup.select(".lineup__player a"): name = a["title"] pos = a.find_previous("div").text line = a.find_previous(class_="lineup__title").text lineup = a.find_previous(class_="lineup__list")["class"][-1] team = a.find_previous(class_=f"lineup__team {lineup}").img["alt"] all_data.append((team, pos, name, line)) df = pd.DataFrame(all_data, columns=["Team", "Pos", "Player", "Line"]) print(df.to_markdown(index=False)) Prints: Team Pos Player Line CAR C Sebastian Aho POWER PLAY #1 CAR LW Stefan Noesen POWER PLAY #1 CAR RW Andrei Svechnikov POWER PLAY #1 CAR LD Brent Burns POWER PLAY #1 CAR RD Martin Necas POWER PLAY #1
3
4
74,972,850
2023-1-1
https://stackoverflow.com/questions/74972850/jax-lax-select-vs-jax-numpy-where
Was taking a look at the dropout implementation in flax: def __call__(self, inputs, deterministic: Optional[bool] = None): """Applies a random dropout mask to the input. Args: inputs: the inputs that should be randomly masked. deterministic: if false the inputs are scaled by `1 / (1 - rate)` and masked, whereas if true, no mask is applied and the inputs are returned as is. Returns: The masked inputs reweighted to preserve mean. """ deterministic = merge_param( 'deterministic', self.deterministic, deterministic) if (self.rate == 0.) or deterministic: return inputs # Prevent gradient NaNs in 1.0 edge-case. if self.rate == 1.0: return jnp.zeros_like(inputs) keep_prob = 1. - self.rate rng = self.make_rng(self.rng_collection) broadcast_shape = list(inputs.shape) for dim in self.broadcast_dims: broadcast_shape[dim] = 1 mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape) mask = jnp.broadcast_to(mask, inputs.shape) return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs)) Particularly, I'm interested in last line lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs)). Wondering why lax.select is used here instead of: return jnp.where(mask, inputs / keep_prob, 0) or even more simply: return mask * inputs / keep_prob
jnp.where is basically the same as lax.select, except more flexible in its inputs: for example, it will broadcast inputs to the same shape or cast to the same dtype, whereas lax.select requires more strict matching of inputs: >>> import jax.numpy as jnp >>> from jax import lax >>> x = jnp.arange(3) # Implicit broadcasting >>> jnp.where(x < 2, x[:, None], 0) DeviceArray([[0, 0, 0], [1, 1, 0], [2, 2, 0]], dtype=int32) >>> lax.select(x < 2, x[:, None], 0) TypeError: select cases must have the same shapes, got [(), (3, 1)]. # Implicit type promotion >>> jnp.where(x < 2, jnp.zeros(3), jnp.arange(3)) DeviceArray([0., 0., 2.], dtype=float32) >>> lax.select(x < 2, jnp.zeros(3), jnp.arange(3)) TypeError: lax.select requires arguments to have the same dtypes, got int32, float32. (Tip: jnp.where is a similar function that does automatic type promotion on inputs). Library code is one place where the stricter semantics can be useful, because rather than smoothing-over potential implementation bugs and returning an unexpected output, it will complain loudly. But performance-wise (especially once JIT-compiled) the two are essentially equivalent. As for why the flax developers chose lax.select vs. multiplying by a mask, I can think of two reasons: Multiplying by a mask is subject to implicit type promotion semantics, and it takes a lot more thought to anticipate problematic outputs than a simple select, which is specifically-designed for the intended operation. Using multiplication causes the compiler to treat this operation as a multiplication, which it is not. A select is a much more narrow and precise operation than a multiplication, and by specifying operations precisely it often allows the compiler to optimize the results to a greater extent.
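One concrete illustration of why a select is not the same as multiplying by the mask (my example, not from the answer above): with multiplication the masked-out positions still take part in arithmetic, so non-finite inputs leak through, whereas select/where genuinely picks one branch per element:

import jax.numpy as jnp

x = jnp.array([1.0, jnp.inf])
mask = jnp.array([True, False])

print(mask * x)                 # [ 1. nan]  because 0 * inf is nan
print(jnp.where(mask, x, 0.0))  # [1. 0.]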
3
5