question_id — int64 (59.5M to 79.4M)
creation_date — string (length 8 to 10)
link — string (length 60 to 163)
question — string (length 53 to 28.9k)
accepted_answer — string (length 26 to 29.3k)
question_vote — int64 (1 to 410)
answer_vote — int64 (-9 to 482)
76,036,074
2023-4-17
https://stackoverflow.com/questions/76036074/cannot-debug-test-case-in-vs-code-found-duplicate-in-env-path
I am using VS Code for developing in Python. I have been able to debug single test cases from the test module, which is very practical. Since recently, it no longer works. After a short waiting time, a dialog pops up: "Invalid message: Found duplicate in "env": PATH." with the buttons "Open launch.json" and "Cancel". Opening launch.json does not help, since I don't manipulate PATH there. I also did not edit PATH recently (and the message seems to point not to a wrong value in PATH, but to PATH being set multiple times). I tried to see the duplicate PATH myself, but failed: Both echo ${env:PATH} at the console and os.environ in Python will only give a single PATH variable - as already implied by the used data structure. I tried applying this SO answer. It does neither relate directly to VS Code, nor to duplicate PATH variables, but to duplicates in PATH, but it was the closest I found on here. It didn't work.
You can install different versions of the Python plugin (to whom it may concern: if this works unsatisfactorily, it may be because you have multiple instances of VS Code running). You will note that 2023.4.1 is the latest known stable version that works, 2023.6.0 does not work, and v2023.7.11011538 (latest) works. It seems to be an issue in the latest stable version that the VS Code team has since solved.
3
0
76,040,332
2023-4-18
https://stackoverflow.com/questions/76040332/conda-install-cant-install-packages-from-requirements-txt
I have a conda environment set up with all of the packages I need (currently), and I wanted to make a new one from the requirements.txt file (formatted for conda, not pip) to make sure that someone else can use my project. This requirements file was generated from the other environment that I already have set up. Thus, I am confused when it fails to install with conda install --file "requirements.txt" and conda install -c conda-forge --file "requirements.txt" I made sure to specify the proper python version when creating the new environment. Here's what's in my requirements.txt file: requirements.txt # This file may be used to create an environment using: # $ conda create --name <env> --file <this file> # platform: win-64 arrow=1.2.3=pyhd8ed1ab_0 asgiref=3.6.0=pyhd8ed1ab_0 async-timeout=4.0.2=pyhd8ed1ab_0 backports.zoneinfo=0.2.1=py39hcbf5309_7 beautifulsoup4=4.12.2=pyha770c72_0 binaryornot=0.4.4=py_1 brotlipy=0.7.0=py39hb82d6ee_1004 ca-certificates=2022.12.7=h5b45459_0 certifi=2022.12.7=pyhd8ed1ab_0 cffi=1.15.1=py39h0878f49_0 chardet=5.1.0=py39hcbf5309_0 charset-normalizer=3.1.0=pyhd8ed1ab_0 click=8.1.3=win_pyhd8ed1ab_2 colorama=0.4.6=pyhd8ed1ab_0 cookiecutter=2.1.1=pyh6c4a22f_0 cryptography=40.0.1=pypi_0 defusedxml=0.7.1=pypi_0 django=4.2=pyhd8ed1ab_0 django-allauth=0.54.0=pypi_0 django-autoslug=1.9.8=pypi_0 django-bootstrap4=23.1=pypi_0 django-crispy-forms=1.9.2=pyhd8ed1ab_1 django-debug-toolbar=4.0.0=pypi_0 django-environ=0.4.5=pypi_0 django-extensions=3.2.1=pypi_0 idna=3.4=pyhd8ed1ab_0 importlib-metadata=2.0.0=py_1 itunespy=1.6.0=pypi_0 jinja2=2.11.3=pyhd8ed1ab_2 jinja2-time=0.2.0=pyhd8ed1ab_3 markupsafe=1.1.1=py39hb82d6ee_4 oauthlib=3.2.2=pypi_0 openssl=1.1.1t=h2bbff1b_0 pillow=9.5.0=pypi_0 pip=23.0.1=pypi_0 pycountry=22.3.5=pypi_0 pycparser=2.21=pyhd8ed1ab_0 pyjwt=2.6.0=pypi_0 pyopenssl=20.0.1=pyhd8ed1ab_0 pysocks=1.7.1=pyh0701188_6 python=3.9.16=h6244533_2 python-dateutil=2.8.2=pyhd8ed1ab_0 python-slugify=4.0.1=pypi_0 I've tried installing with no channel specified as well as with "-c conda-forge" as a param. Each yields a slightly different list of missing packages. When I ran conda install -c conda-forge --file requirements.txt after creating an env, it yielded this output: Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. 
PackagesNotFoundError: The following packages are not available from current channels: - soupsieve==2.4=pypi_0 - itunespy==1.6.0=pypi_0 - unidecode==1.1.1=pypi_0 - python-slugify==4.0.1=pypi_0 - django-extensions==3.2.1=pypi_0 - whitenoise==5.2.0=pypi_0 - pyjwt==2.6.0=pypi_0 - python3-openid==3.2.0=pypi_0 - django-debug-toolbar==4.0.0=pypi_0 - oauthlib==3.2.2=pypi_0 - pillow==9.5.0=pypi_0 - django-bootstrap4==23.1=pypi_0 - pip==23.0.1=pypi_0 - zlib-state==0.1.5=pypi_0 - defusedxml==0.7.1=pypi_0 - django-environ==0.4.5=pypi_0 - pytz==2020.1=pypi_0 - requests-oauthlib==1.3.1=pypi_0 - tzdata==2023.3=pypi_0 - django-autoslug==1.9.8=pypi_0 - pycountry==22.3.5=pypi_0 - cryptography==40.0.1=pypi_0 - django-allauth==0.54.0=pypi_0 Current channels: - https://conda.anaconda.org/conda-forge/win-64 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/win-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-64 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-64 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. I've also tried to reference this file upon creation of the environment, and that does not work either. I have a feeling django might be causing some issues with how it names certain packages, as I had to manually install a lot of these packages when building the original environment. If anyone has any ideas, I would greatly appreciate them!
It looks like you've used conda list --export to generate the list of packages. Unfortunately, when some packages are installed from PyPI using pip, this command isn't helpful. The command you need to export your environment is: conda env export > environment.yml Then to create a new environment from that spec: conda env create -n new-env-name -f environment.yml The conda environment management commands have gradually changed, and the documentation hasn't really kept up. conda list --export doesn't work correctly when packages have been installed from PyPI using pip, and it results in an export that can't be re-loaded. The package versions have pypi_0 at the end of them, and conda doesn't know what to make of it.
4
4
76,020,838
2023-4-15
https://stackoverflow.com/questions/76020838/find-all-possible-sums-of-the-combinations-of-sets-of-integers-efficiently
I have an algorithm that finds the set of all unique sums of the combinations of k tuples drawn with replacement from of a list of tuples. Each tuple contains n positive integers, the order of these integers matters, and the sum of the tuples is defined as element-wise addition. e.g. (1, 2, 3) + (4, 5, 6) = (5, 7, 9) Simple example for k=2 and n=3: input = [(1,0,0), (2,1,1), (3,3,2)] solution = [(1,0,0)+(2,1,1), (1,0,0)+(3,3,2), (2,1,1)+(3,3,2), (1,0,0)+(1,0,0), (2,1,1)+(2,1,1), (3,3,2)+(3,3,2)] solution = [(3,1,1), (4,3,2), (5,4,3), (2,0,0), (4,2,2), (6,6,4)] In practice the integers in the tuples range from 0 to 50 (in some positions it may be a lot more constraint, like [0:2]), k goes up to 4 combinations, and the length of the tuples goes up to 5. The number of tuples to draw from goes up to a thousand. The algorithm I currently have is an adaptation of an algorithm proposed in a related question, it's more efficient then enumerating all combinations with itertools (if we're drawing 4 tuples out of 1000, there are billions of combinations, but the number of sums will be orders of magnitude less), but I don't see how to apply bitsets for example to this problem. # example where length of tuples n = 3: lst = [] for x in range(0,50,2): for y in range(0, 20, 1): for z in range(0, 3, 1): lst.append((x,y,z)) # this function works for any k and n def unique_combination_sums(lst, k): n = len(lst[0]) sums = {tuple(0 for _ in range(n))} # initialize with tuple of zeros for _ in range(k): sums = {tuple(s[i]+x[i] for i in range(n)) for s in sums for x in lst} return sums unique_combination_sums(lst, 4)
You can actually encode your tuples as integers. Since you mention that the integers range over [0, 50] and there may be up to 5 such integers, that creates a range of 51^5 = 345,025,251 values, which is perfectly doable. To understand how we can do this encoding, think about how decimal numbers work: 123 means 1*100 + 2*10 + 3*1. Each digit is multiplied by the base (10) raised to some power, corresponding to its position. Each number has only one representation, because each digit is less than the base (10) itself. We could do something similar then; we could choose a sufficiently large base, say 100, and multiply each value in the tuple by the base to its corresponding power. Take the following example: (1, 4, 7) -> 1*100^2 + 4*100^1 + 7*100^0 -> 1*10000 + 4*100 + 7 -> 10407 This would work perfectly well by itself; however, whatever underlying solver you're using for the integer case may very well perform better on smaller numbers, so we really should try to "compact" the representation as much as possible. This means picking the smallest base possible. In fact, it means picking multiple bases, for a mixed-radix number system. Without going into too much detail, it means that if one position of the tuples only spans a small interval of integers, we won't "waste" space for values that won't ever exist at that specific tuple position. What this may look like, for an arbitrary example: (1, 4, 7, 11) -> 1*22*7*15 + 4*22*7 + 7*22 + 11*1 -> 2310 + 616 + 154 + 11 -> 3091 // Here we arbitrarily choose the radices [22, 7, 15] // In practice, we actually choose meaningful (and minimal) radices Furthermore, we can also subtract off the smallest value at each tuple position, to shrink the values even further. We just have to remember to add back the appropriate offset multiplied by the number of elements when we convert the value back to a tuple. All that said, here's the code to do exactly that: from functools import wraps def transform_tuples(func): @wraps(func) def inner(arr, n): if n == 0 or not arr: return set() groups = [(max(g)-min(g), min(g)) for g in zip(*arr)] def encode(tup): val = 0 for (size, low), elem in zip(groups, tup): val *= size * n + 1 val += elem - low return val def decode(val): tup = [] for size, low in groups[::-1]: val, part = divmod(val, size * n + 1) tup.append(part + low * n) return tuple(tup[::-1]) result = func([encode(tup) for tup in arr], n) return [decode(val) for val in result] return inner This is a decorator: you apply it to the function that solves the original integer-based problem, and it will transform the function into one that operates on tuples. For example, taking the Kelly1 solution from the related question you linked above, we can decorate it, and it will then work on tuples: @transform_tuples def Kelly1(a, n): sums = {0} for _ in range(n): sums = {s + x for s in sums for x in a} return sums Calling it on your example: tuples = [(1,0,0), (2,1,1), (3,3,2)] k = 2 print(Kelly1(tuples, k)) Produces: [(2, 0, 0), (5, 4, 3), (3, 1, 1), (6, 6, 4), (4, 2, 2), (4, 3, 2)] So you can take whichever implementation is the fastest, tweak it / optimize it as you like, and then decorate it to operate on tuples.
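A minimal self-contained sketch of the same mixed-radix idea, with the offset handling written out explicitly; the helper names here (spans, lows, encode, decode) are made up for this sketch, and the radix formula mirrors the size * n + 1 term used in the decorator above.

tuples = [(1, 0, 0), (2, 1, 1), (3, 3, 2)]
k = 2  # number of tuples summed per combination

# After summing k digits that each lie in [0, max - min], a position needs
# k*(max - min) + 1 distinct values; that fixes the radix for the position.
spans = [k * (max(col) - min(col)) + 1 for col in zip(*tuples)]
lows = [min(col) for col in zip(*tuples)]

def encode(tup):
    # Pack one tuple into a single integer, most significant position first.
    val = 0
    for span, low, x in zip(spans, lows, tup):
        val = val * span + (x - low)
    return val

def decode(val):
    # Unpack a sum of k encoded tuples; k offsets were subtracted, so add low*k back.
    out = []
    for span, low in reversed(list(zip(spans, lows))):
        val, digit = divmod(val, span)
        out.append(digit + low * k)
    return tuple(reversed(out))

a, b = (1, 0, 0), (3, 3, 2)
print(decode(encode(a) + encode(b)))  # (4, 3, 2), the element-wise sum of a and b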
8
5
76,031,309
2023-4-17
https://stackoverflow.com/questions/76031309/is-it-possible-to-hint-that-a-function-parameter-should-not-be-modified
For example, I might have an abstract base class with some abstract method that takes some mutable type as a parameter: from abc import * class AbstractClass(metaclass=ABCMeta): @abstractmethod def abstract_method(self, mutable_parameter: list | set): raise NotImplementedError Is there some way of hinting to the function implementer that this parameter should not be modified in any implementation of this method? I imagine maybe something like ReadOnly could exist: def abstract_method(self, mutable_parameter: ReadOnly[list]): but I can't seem to find anything suggesting such a thing does exist. From looking at the typing module docs I would have assumed that Final was what I am looking for but PyCharm tells me 'Final' could not be used in annotations for function parameters.
You might be able to find a structural superclass that provides the behaviors you want. There's a good summary of the available collections classes at collections.abc, but here is a quick, non-exhaustive summary. If you're just planning to iterate over the collection in some order, you're looking for Iterable. If all you're doing is using the infix in operator to check for membership, you can use Container. Collection is Iterable plus Container plus the len function. Sequence gives you Collection, __getitem__ (i.e. the square brackets operator), and several other nice list-y things. This is a good general-purpose candidate for list-like structures. None of these protocols have any mutating methods, so if you adhere strictly to your types (and don't downcast), then you can be assured that a function taking these types will not mutate the argument. Note that, prior to PEP 585 (released in Python 3.9), the collections.abc classes were non-generic. If you need to be compatible with Python versions prior to that, the typing module contains the same class names as collections.abc, and those have been generic since day one. And, finally, the glory of structural superclasses is that you can define them anytime with typing.Protocol. If you can't find a protocol that suits your needs, then just make one. Are you planning to call __getitem__ but want to support structures that don't have a len? Here's your protocol! from typing import Protocol, TypeVar T = TypeVar("T", covariant=True) class Gettable(Protocol[T]): def __getitem__(self, index: int, /) -> T: ... Since this class inherits from Protocol, it is checked structurally when used in type hints, meaning any object which has a __getitem__ that takes an int and returns a T counts as a Gettable[T], even without an explicit runtime subtyping relationship.
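A minimal sketch of how this looks in practice, assuming a checker such as mypy is run over the code: annotate the parameter with a type that simply has no mutating methods, and callers can keep passing plain lists.

from collections.abc import Sequence

def total(values: Sequence[int]) -> int:
    # values.append(0)  # a strict type checker rejects this: Sequence has no "append"
    return sum(values)

print(total([1, 2, 3]))  # a list satisfies Sequence, so callers are unaffected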
5
3
76,020,646
2023-4-15
https://stackoverflow.com/questions/76020646/python-planetscale-db-mysql-connection
I have been trying to make a Python program that connects to a Planetscale MySQL DB, I have used 2 libraries that are mysql-connector-python and mysqlclient. I think I have entered the correct details every time with both but it hasn't worked. I tried Planetscale's recommended way which is the following (it didn't work for me btw) : import MySQLdb # pip install mysqlclient # This is planetscale's copy/paste method to connect to the DB conn = MySQLdb.connect( host=[HOST], user=[USER], passwd=[PASSWORD], db=[DATABASE], ssl_mode = "VERIFY_IDENTITY", ssl = { "ca": "/etc/ssl/cert.pem" } ) This is the current code I'm using import mysql.connector # pip install mysql-connector-python config = { "user": hidden for security purposes, "password": hidden for security purposes, "host": hidden for security purposes, "database": hidden for security purposes, "ssl_verify_identity": True, "ssl_ca": "/etc/ssl/cert.pem", } conn = mysql.connector.connect(**config) cursor = conn.cursor() cursor.execute("""CREATE TABLE IF NOT EXISTS qr_table ( type VARCHAR(255), date VARCHAR(255), path VARCHAR(255), unique_id VARCHAR(255) )""") conn.commit() cursor.close() conn.close() This is the error I'm getting Traceback (most recent call last): File "C:\Users\Axelr\Python coding\lib\site-packages\mysql\connector\connection_cext.py", line 268, in _open_connection self._cmysql.connect(**cnx_kwargs) _mysql_connector.MySQLInterfaceError: SSL connection error: SSL_CTX_set_default_verify_paths failed The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Users\Axelr\PycharmProjects\PC01\main\Special Main\ARC QR Generator\SQL Script.py", line 26, in <module> conn = mysql.connector.connect(**config) File "C:\Users\Axelr\Python coding\lib\site-packages\mysql\connector\pooling.py", line 286, in connect return CMySQLConnection(*args, **kwargs) File "C:\Users\Axelr\Python coding\lib\site-packages\mysql\connector\connection_cext.py", line 101, in __init__ self.connect(**kwargs) File "C:\Users\Axelr\Python coding\lib\site-packages\mysql\connector\abstracts.py", line 1108, in connect self._open_connection() File "C:\Users\Axelr\Python coding\lib\site-packages\mysql\connector\connection_cext.py", line 273, in _open_connection raise get_mysql_exception( mysql.connector.errors.InterfaceError: 2026 (HY000): SSL connection error: SSL_CTX_set_default_verify_paths failed Process finished with exit code 1 Can someone please help me make a connection that works? Also, it wouldn't be a problem to use another library to connect. [Python 3.10]
I have fixed the issue. The problem was the SSL configuration and the fact that I used a dictionary to store the details and then passed it to the connection with **config. Here is the correct and working code for me: import mysql.connector as sql conn = sql.connect(host="*Hidden For Security Purposes*", database="*Hidden For Security Purposes*", user="*Hidden For Security Purposes*", password="*Hidden For Security Purposes*", ) Thanks to everyone who helped me with this problem, and I hope this Stack Overflow question will help many with the same problem.
4
2
76,018,208
2023-4-14
https://stackoverflow.com/questions/76018208/python-typing-equivalent-of-typescripts-keyof
In TypeScript, we have the ability to create a "literal" type based on the keys of an object: const tastyFoods = { pizza: '🍕', burger: '🍔', iceCream: '🍦', fries: '🍟', taco: '🌮', sushi: '🍣', spaghetti: '🍝', donut: '🍩', cookie: '🍪', chicken: '🍗', } as const; type TastyFoodsKeys = keyof typeof tastyFoods; // gives us: // type TastyFoodsKeys = "pizza" | "burger" | "iceCream" | "fries" | "taco" | "sushi" | "spaghetti" | "donut" | "cookie" | "chicken" Is there an equivalent in Python (3.10+ is fine) for creating type hints based on a dict? E.g., from typing import Literal tasty_foods = { "pizza": '🍕', "burger": '🍔', "iceCream": '🍦', "fries": '🍟', "taco": '🌮', "sushi": '🍣', "spaghetti": '🍝', "donut": '🍩', "cookie": '🍪', "chicken": '🍗', } TastyFoodsKeys = Literal[list(tasty_foods.keys())] # except that doesn't work, obviously
There is not. You need to define the Literal type statically: TastyFoodsKeys = Literal["pizza", "burger"] You can, however, use TastyFoodsKeys.__args__ to then define your dict. tasty_foods = dict(zip(TastyFoodsKeys.__args__, ['🍕', '🍔'])) The __args__ attribute is not documented, so use at your own risk. As pointed out in a comment, you can use typing.get_args (which is documented) instead of accessing __args__ directly: tasty_foods = dict(zip(typing.get_args(TastyFoodsKeys), ['🍕', '🍔'])) You might, however, not want to use a dict at all. Perhaps what you really want is an enumerated type. from enum import StrEnum class TastyFood(StrEnum): PIZZA = '🍕' BURGER = '🍔' # etc
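A small follow-on sketch: since the Literal and the dict are maintained by hand, a one-line check using the documented typing.get_args keeps them from drifting apart.

from typing import Literal, get_args

TastyFoodsKeys = Literal["pizza", "burger"]
tasty_foods = {"pizza": "🍕", "burger": "🍔"}

# Fails loudly at import time if the dict and the Literal ever disagree.
assert set(tasty_foods) == set(get_args(TastyFoodsKeys))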
8
9
76,009,612
2023-4-13
https://stackoverflow.com/questions/76009612/numpy-matmul-performs-100-times-worse-than-dot-on-array-views
It was brought to my attention that the matmul function in numpy is performing significantly worse than the dot function when multiplying array views. In this case my array view is the real part of a complex array. Here is some code which reproduces the issue: import numpy as np from timeit import timeit N = 1300 xx = np.random.randn(N, N) + 1j yy = np.random.randn(N, N) + 1J x = np.real(xx) y = np.real(yy) assert np.shares_memory(x, xx) assert np.shares_memory(y, yy) dot = timeit('np.dot(x,y)', number = 10, globals = globals()) matmul = timeit('np.matmul(x,y)', number = 10, globals = globals()) print('time for np.matmul: ', matmul) print('time for np.dot: ', dot) On my machine the output is as follows: time for np.matmul: 23.023062199994456 time for np.dot: 0.2706864000065252 This clearly has something to do with the shared memory as replacing np.real(xx) with np.real(xx).copy() makes the performance discrepancy go away. Trolling the numpy docs was not particularly helpful as the listed differences did not discuss implementation details when dealing with memory views.
These timings indicate the dot is doing a copy with real: In [22]: timeit np.dot(xx.real,xx.real) 232 ms ± 3.34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [23]: timeit np.dot(xx.real.copy(),xx.real.copy()) 232 ms ± 4.18 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Applying that to matmul produces nearly the same times: In [24]: timeit np.matmul(xx.real.copy(),xx.real.copy()) 231 ms ± 3.54 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Again, matmul with real is taking some slow route. matmul/dot both do poorer when given int arrays, though not as slow as the matmul real case. matmul/dot can also handle object dtypes, but that's even slower. So there's a lot going on under the covers that we, as python level users, don't see (and isn't documented). edit I was tempted to change the title to focus on complex-real, but decided to check another view - a slice of a float array In [42]: y=xx.real.copy()[::2,::2];y.shape,y.dtype Out[42]: ((650, 650), dtype('float64')) In [43]: timeit np.dot(y,y) 36.4 ms ± 63.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [44]: timeit np.dot(y.copy(),y.copy()) 35.6 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Again it is evident that dot is using copies of the view. matmul does not: In [45]: timeit np.matmul(y,y) 1.89 s ± 3.01 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) but with copies the times are the same as dot: In [46]: timeit np.matmul(y.copy(),y.copy()) 35.3 ms ± 102 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) So my guess is that dot routinely makes a copy if it can't send the arrays directly to the BLAS routines. matmul apparently takes a slower route instead. edit While their handling of 2d arrays is similar, dot and matmul are quite different in how they handle 3+d arrays. In fact the main reason to add @ was to provide a convenient 'batch' notion to matrix multiplication. Sticking with the large complex arrays, lets make one 3x larger: In [49]: yy=np.array([xx,xx,xx]);yy.shape Out[49]: (3, 1300, 1300) In [50]: timeit np.dot(xx,xx) 794 ms ± 12.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [51]: timeit np.dot(xx,yy) # (yy,xx) same timings 55.5 s ± 151 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [52]: timeit np.matmul(xx,yy) # (yy,yy) same 2.58 s ± 362 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) matmul has just increased the time by 3; dot by 70. I could explore things more, but not with timings in the minute range.
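Building on those timings, a hedged workaround sketch: hand matmul a contiguous copy of the view yourself, so it can take the same fast path that dot appears to take internally.

import numpy as np

N = 1300
xx = np.random.randn(N, N) + 1j
x = xx.real                    # strided view into the complex array
assert np.shares_memory(x, xx)

xc = np.ascontiguousarray(x)   # same effect as x.copy() here
out = xc @ xc                  # comparable to np.dot(x, x) rather than the slow matmul path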
5
1
76,012,644
2023-4-14
https://stackoverflow.com/questions/76012644/fastapi-uvicorn-or-hypercorn-where-is-my-root-path
Based on a few FastAPI tutorials, including this, I made a simple FastAPI app: from fastapi import FastAPI, Request app = FastAPI() # also tried FastAPI(root_path="/api/v1") @app.get("/app") def read_main(request: Request): return {"message": "Hello World", "root_path": request.scope.get("root_path")} which I want to have at a path other than root (e.g. /api/v1)... Again based on most tutorials and common sense, I tried to start it with e.g.: uvicorn main:app --root-path /api/v1 The service comes up ok (on http://127.0.0.1:8000), however, the root-path seems to be ignored: Any GET request to http://127.0.0.1:8000/ gives: message "Hello World" root_path "/api/v1" and any GET request to http://127.0.0.1:8000/api/v1 gives: detail "Not Found" I would expect the requests to produce the reverse outcomes... What is going on here?!? I also tried initializing FastAPI with FastAPI(root_path="/api/v1") as well as switching to hypercorn, to no avail... Details of the versions of apps (I might have tried a few others as well, though these should be the latest tried): python 3.9.7 hf930737_3_cpython conda-forge fastapi 0.85.1 pyhd8ed1ab_0 conda-forge uvicorn 0.20.0 py39h06a4308_0 hypercorn 0.14.3 py39hf3d152e_1 conda-forge
As noted by @MatsLindh in the comments section, root_path (or --root-path) does not change your application's prefix path, but is rather designated for behind the proxy cases, where "you might need to use a proxy server like Traefik or Nginx with a configuration that adds an extra path prefix that is not seen by your application" (see the relevant documentation). As described in the documentation: Proxy with a stripped path prefix Having a proxy with a stripped path prefix, in this case, means that you could declare a path at /app in your code, but then, you add a layer on top (the proxy) that would put your FastAPI application under a path like /api/v1. In this case, the original path /app would actually be served at /api/v1/app. Even though all your code is written assuming there's just /app. And the proxy would be "stripping" the path prefix on the fly before transmitting the request to Uvicorn, keep your application convinced that it is serving at /app, so that you don't have to update all your code to include the prefix /api/v1. Up to here, everything would work as normally. But then, when you open the integrated docs UI (the frontend), it would expect to get the OpenAPI schema at /openapi.json, instead of /api/v1/openapi.json. So, the frontend (that runs in the browser) would try to reach /openapi.json and wouldn't be able to get the OpenAPI schema (it would show "Failed to load API definition" error). Because we have a proxy with a path prefix of /api/v1 for our app, the frontend needs to fetch the OpenAPI schema at /api/v1/openapi.json. The docs UI would also need the OpenAPI schema to declare that this API server is located at /api/v1 (behind the proxy). To achieve this, you can use the command line option --root-path: uvicorn main:app --root-path /api/v1 [...] Alternatively, if you don't have a way to provide a command line option like --root-path or equivalent, you can set the root_path parameter when creating your FastAPI app: app = FastAPI(root_path="/api/v1") Option 1 Hence, in your case, since you are not using a proxy, but rather need to have a custom prefix for your API, you could instead use an APIRouter, which allows you to define a prefix for the API routes (note that the prefix must not include a final /). You can either give the prefix when instantiating the APIRouter (e.g., router = APIRouter(prefix='/api/v1')) or using .include_router(), which, as described in the documentation, would allow you to include the same router multiple times with different prefix: You can also use .include_router() multiple times with the same router using different prefixes. This could be useful, for example, to expose the same API under different prefixes, e.g. /api/v1 and /api/latest. This is an advanced usage that you might not really need, but it's there in case you do. The /app endpoint in the example below can be accessed at http://127.0.0.1:8000/api/v1/app. Working Example from fastapi import FastAPI from fastapi.routing import APIRouter router = APIRouter() @router.get('/app') def main(): return 'Hello world!' 
app = FastAPI() app.include_router(router, prefix='/api/v1') Once you have multiple versions of the API endpoints, you could use: from fastapi import FastAPI from fastapi.routing import APIRouter router_v1 = APIRouter() router_v2 = APIRouter() @router_v1.get('/app') def main(): return 'Hello world - v1' @router_v2.get('/app') def main(): return 'Hello world - v2' app = FastAPI() app.include_router(router_v1, prefix='/api/v1') app.include_router(router_v2, prefix='/api/v2') app.include_router(router_v2, prefix='/latest') # optional Option 2 Alternatively, one could also mount sub-application(s) with the desired prefix, as demonstrated in this answer and this answer (see Option 3). As described in the documentation: When you mount a sub-application as described above, FastAPI will take care of communicating the mount path for the sub-application using a mechanism from the ASGI specification called a root_path. That way, the sub-application will know to use that path prefix for the docs UI. And the sub-application could also have its own mounted sub-applications and everything would work correctly, because FastAPI handles all these root_paths automatically. Hence, in the example given below, you can access the /app endpoint from the main app at http://127.0.0.1:8000/app and the /app endpoint from the sub app at http://127.0.0.1:8000/api/v1/app. Similarly, the Swagger UI autodocs can be accessed at http://127.0.0.1:8000/docs and http://127.0.0.1:8000/api/v1/docs, respectively. Working Example from fastapi import FastAPI app = FastAPI() subapi = FastAPI() @app.get('/app') def read_main(): return {'message': 'Hello World from main app'} @subapi.get('/app') def read_sub(): return {'message': 'Hello World from sub API'} app.mount('/api/v1', subapi)
7
7
75,994,620
2023-4-12
https://stackoverflow.com/questions/75994620/windows-store-not-adding-python-to-path
I have installed Python 3.11 from Windows Store. I used to have Python 3.10 also installed from Windows store, but I changed the environment variables and could not use it from the terminal anymore. Therefore I decided to uninstall it and install the latest version, hoping that it would be added to PATH automatically, just like it happened the first time I installed the previous version. The reason why I am installing it from Windows store is because that is the only interpreter that VSCode is able to find (I have tried accessing another Python interpreter and VSCode would just not allow me). At the moment, this Python interpreter has the following route C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1008.0_x64__qbz5n2kfra8p0\python3.11.exe I have copy-pasted it to my Path and still can't run python on a terminal. What I mean by this is that if I type "python", the command is not recognized. This is the complete directory in which the .exe file is, in case it is useful EDIT I also have the following route C:\Users\Usuario\AppData\Local\Programs\Python\Python311 That contains the following I have a vague memory of the Scripts folder being in the Path before, but adding it does not solve the issue either. I have also added the above route (where the Scripts folder is) and still does not work. What can I do to be able to run python from terminal?
I finally solved it by looking at another device where I had a similar configuration. I needed to add C:\Users\Usuario\AppData\Local\Microsoft\WindowsApps and C:\Users\Usuario\AppData\Local\Microsoft\WindowsApps\python3.11.exe to the path. That worked. I still don't know why the Windows Store didn't add them automatically this time, though.
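A quick sketch for sanity-checking the fix from Python, assuming a freshly opened terminal: shutil.which performs the same PATH lookup the shell does and returns None while the needed directory is still missing.

import shutil

print(shutil.which("python"))      # full path once PATH is correct, None otherwise
print(shutil.which("python3.11"))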
3
2
76,012,206
2023-4-14
https://stackoverflow.com/questions/76012206/how-do-i-properly-handle-all-possible-exceptions-by-the-requests-module-if-a-sit
I live in China, behind the infamous Great Firewall of China, and I use VPNs. A simple observation is that while the VPN is connected, I can access www.google.com. And if the VPN isn't connected, then I cannot access Google. So I can check if I have an active VPN connection by accessing Google. My ISP really loves to disconnect my VPN, and so I have routinely check if I have an active VPN connection, and I have already found a way to programmatically do this. I am connected to the VPN right now, and if I do the following: import requests google = requests.get('https://www.google.com', timeout=3) print(google.status_code == 200) Everything is fine. But if I don't have an active VPN connection, then all hell breaks loose. I do this check precisely because my connection will be disconnected, and I need a function to return a False when it happens, but requests really loves to throw exceptions, it stops the execution of my script, and the exceptions come one after another: ... ReadTimeoutError: HTTPSConnectionPool(host='www.google.com', port=443): Read timed out. (read timeout=3) During handling of the above exception, another exception occurred: ReadTimeout Traceback (most recent call last) ... I have imported a bunch of exceptions just so requests doesn't panic and stop my script when VPN is disconnected: import requests from requests.exceptions import ConnectionError, ConnectTimeout, ReadTimeout, Timeout from socket import gaierror from requests.packages.urllib3.exceptions import MaxRetryError, NewConnectionError, ReadTimeoutError def google_accessible(): try: google = requests.get('https://www.google.com', timeout=3) if google.status_code == 200: return True except (ConnectionError, ConnectTimeout, gaierror, MaxRetryError, NewConnectionError, ReadTimeout, ReadTimeoutError, TimeoutError): pass return False I thought I caught all exceptions previously, but that isn't the case because I failed to catch the above exceptions (ReadTimeout, ReadTimeoutError, TimeoutError). I know I can use except Exception to catch them all, but that would catch exceptions that aren't intended to be caught, and I would rather let those exceptions stop the execution than risking bugs. How do I use minimal number of exceptions to catch all exceptions that are VERY likely to occur when a request failed?
I think it would be better to use RequestException from the requests.exceptions module. The hierarchy is the following: builtins.OSError(builtins.Exception) RequestException # <- Use this top level exception ChunkedEncodingError ConnectionError ConnectTimeout(ConnectionError, Timeout) ProxyError SSLError ContentDecodingError(RequestException, urllib3.exceptions.HTTPError) HTTPError InvalidHeader(RequestException, builtins.ValueError) InvalidJSONError JSONDecodeError(InvalidJSONError, json.decoder.JSONDecodeError) InvalidSchema(RequestException, builtins.ValueError) InvalidURL(RequestException, builtins.ValueError) InvalidProxyURL MissingSchema(RequestException, builtins.ValueError) RetryError StreamConsumedError(RequestException, builtins.TypeError) Timeout ReadTimeout TooManyRedirects URLRequired UnrewindableBodyError builtins.Warning(builtins.Exception) RequestsWarning FileModeWarning(RequestsWarning, builtins.DeprecationWarning) RequestsDependencyWarning So you can do: from requests.exceptions import RequestException def google_accessible(): try: google = requests.get('https://www.google.com', timeout=3) if google.status_code == 200: return True except RequestException: pass return False
3
3
76,009,318
2023-4-13
https://stackoverflow.com/questions/76009318/python-vectorization-split-string
I want to use vectorization to create a column in a pandas data frame that retrieves the second/last part of each string in a column, where the string is split on '_'. I tried this code: df = pd.DataFrame() df['Var1'] = ["test1_test2","test3_test4"] df['Var2'] = [[df['Var1'].str.split('_')][0]][0] df Var1 Var2 0 test1_test2 test3 1 test3_test4 test4 Which is obviously incorrect, as I should get test2 and test4 in rows 0 and 1 of column Var2, respectively.
Use the .str.split('_') method along with .str[-1] to retrieve the second/last part of each string in the column. Following is the updated code: import pandas as pd df = pd.DataFrame() df['Var1'] = ["test1_test2", "test3_test4"] df['Var2'] = df['Var1'].str.split('_').str[-1] print(df) Output: Var1 Var2 0 test1_test2 test2 1 test3_test4 test4 In the above code, df['Var1'].str.split('_') splits each string in the 'Var1' column by the '_' delimiter, and .str[-1] selects the last part of the split string for each row.
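A related sketch, in case the first part of the string is needed as well: splitting once with expand=True returns both pieces as columns in a single vectorized call (the column names below are arbitrary).

import pandas as pd

df = pd.DataFrame({"Var1": ["test1_test2", "test3_test4"]})
parts = df["Var1"].str.split("_", n=1, expand=True)  # DataFrame with columns 0 and 1
df["first_part"], df["Var2"] = parts[0], parts[1]
print(df)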
3
1
75,996,188
2023-4-12
https://stackoverflow.com/questions/75996188/ansible-how-to-execute-set-fact-module-from-within-ansible-action-plugin-python
I am writing a custom Action Plugin for Ansible which I use in my playbook and I am trying to set a variable that will be used in the next task, in the playbook, by a (custom) module. Effectively, the playbook equivalent of what I am trying to mimic is a set_fact task like so: - name: set_fact task set_fact: ansible_python_interpreter: /path/to/python In my custom Action Plugin, I have used self._execute_module before to execute other modules (such as slurp) within the plugin code. However, with the set_fact module it doesn't seem to be updating the ansible_python_interpreter variable, as expected. I have tried the following: self._execute_module(module_name='ansible.builtin.set_fact', module_args=dict(ansible_python_interpreter=/path/to/python), task_vars=task_vars) And I have also tried different variations of module_args: module_args=dict(key_value={ansible_python_interpreter=/path/to/python}) module_args=dict(key_value='ansible_python_interpreter:/path/to/python') However, my ansible_python_interpreter does not seem to be changing. Any help, please?
The closest I can get is to return a dict containing the Ansible facts that I want to be set for the playbook simply by following the dev guide for Action Plugins on the Ansible docs. So, as I am returning an _execute_model() call in my plugin as well, my run() function in my plugin would look something like this: def run(self, tmp=None, task_vars=None): # Plugin code here facts = dict() facts['ansible_python_interpreter'] = '/path/to/python' return dict(self._execute_module(module_name='my_custom_module', module_args=module_args, task_vars=task_vars), ansible_facts=dict(facts)) However, unfortunately this throws another warning/error of: [WARNING]: Removed restricted key from module data: ansible_python_interpreter And this seems to be due to a safety mechanism for overriding connection details, and so I have gone down a different route for my plugin. In another use case, returning dict(ansible_facts=dict(facts)) (like in the docs) would work if it wasn't a connection var I was trying to override, I believe.
5
2
76,005,798
2023-4-13
https://stackoverflow.com/questions/76005798/numpy-svd-does-not-agree-with-r-implementation
I saw a question about inverting a singular matrix on Stack Overflow using NumPy. I wanted to see if NumPy SVD could provide an acceptable answer. I've demonstrated using SVD in R for another Stack Overflow answer. I used that known solution to make sure that my NumPy code was working correctly before applying it to the new question. I was surprised to learn that the NumPy solution did not match the R answer. I didn't get an identity back when I substituted the NumPy solution back into the equation. The U matricies from R and NumPy are the same shape (3x3) and the values are the same, but signs are different. Here is the U matrix I got from NumPy: The D matricies are identical for R and NumPy. Here is D after the large diagonal element is zeroed out: The V matrix I get from NumPy has shape 3x4; R gives me a 4x3 matrix. The values are similar, but the signs are different, as they were for U. Here is the V matrix I got from NumPy: The R solution vector is: x = [2.41176,-2.28235,2.15294,-3.47059] When I substitute this back into the original equation A*x = b I get the RHS vector from my R solution: b = [-17.00000,28.00000,11.00000] NumPy gives me this solution vector: x = [2.55645,-2.27029,1.98412,-3.23182] When I substitute the NumPy solution back into the original equation A*x = b I get this result: b = [-15.93399,28.04088,12.10690] Close, but not correct. I repeated the experiment using NumPy np.linalg.pinv pseudo-inverse method. It agrees with the R solution. Here is my complete Python script: # https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python import numpy as np def pseudo_inverse_solver(A, b): A_inv = np.linalg.pinv(A) x = np.matmul(A_inv, b) error = np.matmul(A, x) - b return x, error, A_inv def svd_solver(A, b): U, D, V = np.linalg.svd(A, full_matrices=False) D_diag = np.diag(np.diag(np.reciprocal(D))) D_zero = np.array(D_diag) D_zero[D_zero >= 1.0e15] = 0.0 D_zero = np.diag(D_zero) A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U) x = np.matmul(A_inv, b) error = np.matmul(A, x) - b return x, error, A_inv if __name__ == '__main__': """ Solution from my SO answer https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525 Example showing how to use NumPy SVD https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition """ np.set_printoptions(20) A = np.array([ [0.0, 1.0, -2.0, 3.0], [5.0, -3.0, 1.0, -2.0], [5.0, -2.0, -1.0, 1.0] ]) b = np.array([-17.0, 28.0, 11.0]).T x_svd, error_svd, A_inv_svd = svd_solver(A, b) error_svd_L2 = np.linalg.norm(error_svd) x_pseudo, error_pseudo, A_inv_pseudo = pseudo_inverse_solver(A, b) error_pseudo_L2 = np.linalg.norm(error_pseudo) Any advice on what I've missed with NumPy SVD? Did I make a mistake at this line? A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), U) Update: Chrysophylaxs pointed out my error: I needed to transpose U: A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), np.transpose(U)) This change solves the problem. Thank you so much!
Thanks to Chrysophylaxs, here is the code that is now working correctly: # https://stackoverflow.com/questions/75998775/python-vs-matlab-why-my-matrix-is-singular-in-python import numpy as np def pseudo_inverse_solver(A, b): A_inv = np.linalg.pinv(A) x = np.matmul(A_inv, b) error = np.matmul(A, x) - b return x, error, A_inv def svd_solver_so(A, b, max_diag=1.0e15): """ see https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition see https://stackoverflow.com/questions/59292279/solving-linear-systems-of-equations-with-svd-decomposition :param A: Matrix in equation A*x = b :param b: RHS vector in equation A*x = b :param max_diag: max value of diagonal for setting to zero. :return: x solution, error vector """ U, D, V = np.linalg.svd(A, full_matrices=False) D_diag = np.diag(np.diag(np.reciprocal(D))) D_zero = np.array(D_diag) D_zero[D_zero >= max_diag] = 0.0 D_zero = np.diag(D_zero) A_inv = V.T @ D_zero @ U.T c = U.T @ b w = D_zero @ c x = V.T @ w error = np.matmul(A, x) - b return x, error, A_inv def svd_solver(A, b, max_diag=1.0e15): U, D, V = np.linalg.svd(A, full_matrices=False) D_diag = np.diag(np.diag(np.reciprocal(D))) D_zero = np.array(D_diag) D_zero[D_zero >= max_diag] = 0.0 D_zero = np.diag(D_zero) A_inv = np.matmul(np.matmul(np.transpose(V), D_zero), np.transpose(U)) x = np.matmul(A_inv, b) error = np.matmul(A, x) - b return x, error, A_inv if __name__ == '__main__': """ Solution from my SO answer https://stackoverflow.com/questions/19763698/solving-non-square-linear-system-with-r/19767525#19767525 Example showing how to use NumPy SVD https://stackoverflow.com/questions/24913232/using-numpy-np-linalg-svd-for-singular-value-decomposition """ np.set_printoptions(20) A = np.array([ [0.0, 1.0, -2.0, 3.0], [5.0, -3.0, 1.0, -2.0], [5.0, -2.0, -1.0, 1.0] ]) b = np.array([-17.0, 28.0, 11.0]).T x_svd, error_svd, A_inv_svd = svd_solver(A, b) error_svd_L2 = np.linalg.norm(error_svd) x_pseudo, error_pseudo, A_inv_pseudo = pseudo_inverse_solver(A, b) error_pseudo_L2 = np.linalg.norm(error_pseudo) x_so, error_so, A_inv_so = svd_solver_so(A, b) error_so_L2 = np.linalg.norm(error_so)
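A short independent cross-check of the corrected formula, separate from the solver functions above; the 1e-10 relative cutoff for small singular values is an arbitrary choice for this sketch.

import numpy as np

A = np.array([[0.0, 1.0, -2.0, 3.0],
              [5.0, -3.0, 1.0, -2.0],
              [5.0, -2.0, -1.0, 1.0]])
b = np.array([-17.0, 28.0, 11.0])

U, D, Vt = np.linalg.svd(A, full_matrices=False)
D_inv = np.array([1.0 / d if d > 1e-10 * D.max() else 0.0 for d in D])
A_inv = Vt.T @ np.diag(D_inv) @ U.T

assert np.allclose(A @ A_inv @ A, A)  # one of the defining pseudo-inverse properties
print(A_inv @ b)                       # agrees with the R and np.linalg.pinv solution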
5
1
76,003,473
2023-4-13
https://stackoverflow.com/questions/76003473/how-to-disable-debugger-warnings-about-frozen-modules-when-using-nbconvert-execu
I am trying to run a python script to run all cells in all notebooks found a directory. It runs fine and I am getting the desired results in the notebook files. However, I want to disable the warnings that are printed to the VSCode cmd terminal when running the script. My code below: import nbformat from glob import glob from nbconvert.preprocessors import ExecutePreprocessor if __name__ == "__main__": nb_list = glob("./*.ipynb") ep = ExecutePreprocessor() for nb in nb_list: with open(nb) as f: nb_r = nbformat.read(f, as_version=4) ep.preprocess(nb_r) The console output: 0.00s - Debugger warning: It seems that frozen modules are being used, which may 0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off 0.00s - to python to disable frozen modules. 0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation. Tried setting "env": {"PYDEVD_DISABLE_FILE_VALIDATION":"1"} in the launch.json file. Didn't change anything. Tried setting "pythonArgs": ["-Xfrozen_modules=off"] in the launch.json file. Didn't change anything. Tried setting warnings.filterwarnings('ignore', module='ExecutePreprocessor'). Didn't change anything. Tried setting os.environ['PYTHONWARNINGS'] = ''. Didn't change anything. Tried setting os.environ['PYDEVD_USE_CYTHON'] = '1'. Didn't change anything. What I haven't tried is setting PYDEVD_DISABLE_FILE_VALIDATION=1. I don't know where to set this, how to set it, and the implications.
Figured out how to "Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation". Adding a user or system environment variable called 'PYDEVD_DISABLE_FILE_VALIDATION' and setting the value to '1' did the job. Didn't know this is what it meant (newbie alert).
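An alternative sketch, under the assumption that the Jupyter kernel started by ExecutePreprocessor inherits the parent environment: set the variable at the top of the driver script before any notebooks are executed.

import os

# Must happen before ep.preprocess() launches the kernels.
os.environ["PYDEVD_DISABLE_FILE_VALIDATION"] = "1"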
3
3
76,004,898
2023-4-13
https://stackoverflow.com/questions/76004898/how-to-convert-polars-dataframe-column-type-from-float64-to-int64
I have a polars dataframe, like: import polars as pl df = pl.DataFrame({"foo": [1.0, 2.0, 3.0], "bar": [11, 5, 8]}) How do I convert the first column to int64 type? I was trying something like: df.select(pl.col('foo')) = df.select(pl.col('foo')).cast(pl.Int64) but it is not working. In Pandas it was super easy: df['foo'] = df['foo'].astype('int64') Thanks.
Select your column, cast it to Int64, and add it back to the original DataFrame with with_columns. df = df.with_columns(pl.col("foo").cast(pl.Int64)) Output: print(df) shape: (3, 2) ┌─────┬─────┐ │ foo ┆ bar │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪═════╡ │ 1 ┆ 11 │ │ 2 ┆ 5 │ │ 3 ┆ 8 │ └─────┴─────┘
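The same pattern extends to several columns at once by passing a list of expressions; a small sketch (the Float64 cast of bar is only there for illustration):

import polars as pl

df = pl.DataFrame({"foo": [1.0, 2.0, 3.0], "bar": [11, 5, 8]})
df = df.with_columns([pl.col("foo").cast(pl.Int64), pl.col("bar").cast(pl.Float64)])
print(df.dtypes)  # [Int64, Float64]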
4
8
76,001,787
2023-4-13
https://stackoverflow.com/questions/76001787/how-can-i-read-just-one-line-from-standard-input-and-pass-the-rest-to-a-subproc
If you readline() from sys.stdin, passing the rest of it to a subprocess does not seem to work. import subprocess import sys header = sys.stdin.buffer.readline() print(header) subprocess.run(['nl'], check=True) (I'm using sys.stdin.buffer to avoid any encoding issues; this handle returns the raw bytes.) This runs, but I don't get any output from the subprocess; bash$ printf '%s\n' foo bar baz | python demo1.py b'foo\n' If I take out the readline etc, the subprocess reads standard input and produces the output I expect. bash$ printf '%s\n' foo bar baz | > python -c 'import subprocess; subprocess.run(["nl"], check=True)' 1 foo 2 bar 3 baz Is Python buffering the rest of stdin when I start reading it, or what's going on here? Running with python -u does not remove the problem (and indeed, the documentation for it only mentions that it changes the behavior for stdout and stderr). But if I pass in a larger amount of data, I do get some of it: bash$ wc -l /etc/services 13921 /etc/services bash$ python demo1.py </etc/services | head -n 3 1 27/tcp # NSW User System FE 2 # Robert Thomas <[email protected]> 3 # 28/tcp Unassigned (... traceback from broken pipe elided ...) bash$ fgrep -n 'NSW User System FE' /etc/services 91:nsw-fe 27/udp # NSW User System FE 92:nsw-fe 27/tcp # NSW User System FE bash$ sed -n '1,/NSW User System FE/p' /etc/services | wc 91 449 4082 (So, looks like it eats 4096 bytes from the beginning.) Is there a way I can avoid this behavior, though? I would like to only read one line off from the beginning, and pass the rest to the subprocess. Calling sys.stdin.buffer.readline(-1) repeatedly in a loop does not help. This is actually a follow-up for Read line from shell pipe, pass to exec, and keep to variable but I wanted to focus on this, to me, surprising aspect of the problem in that question.
This is because sys.stdin is created using the built-in open function in the default buffered mode, which uses a buffer of size io.DEFAULT_BUFFER_SIZE, which on most systems is either 4096 or 8192 bytes. To make the parent process consume precisely one line of text from the standard input, you can therefore open it with the buffer disabled by passing 0 as the buffering argument to the open or os.fdopen function: # subp1.py import os import sys import subprocess # or with the platform-dependent device file: # unbuffered_stdin = open('/dev/stdin', 'rb', buffering=0) unbuffered_stdin = os.fdopen(sys.stdin.fileno(), 'rb', buffering=0) print(unbuffered_stdin.readline()) subprocess.run(['nl'], check=True) so that: printf "foo\nbar\n" | python subp1.py would then output: b'foo\n' 1 bar
8
11
75,999,222
2023-4-12
https://stackoverflow.com/questions/75999222/how-to-plot-spiral-that-goes-around-circular-paraboloid
I have a 3D circular paraboloid surface and I would like to plot a spiral that starts from an arbitrary point on the surface and goes down while "hugging" the surface. This is my attempt so far: import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = plt.axes(projection='3d') # Surface ------------------ # Create the mesh in polar coordinates and compute corresponding Z r0 = 5 r = np.linspace(0, r0, 50) p = np.linspace(0, 2*np.pi, 50) R, P = np.meshgrid(r, p) Z = -R**2 + r0**2 # Express the mesh in the cartesian system X, Y = R*np.cos(P), R*np.sin(P) # Plot the surface ax.plot_surface(X, Y, Z, linewidth=0, antialiased=False, alpha=0.2) # Spiral ------------------- u = np.arange(0, 29, 0.1) x = 0.17*u*np.cos(u) y = 0.17*u*np.sin(u) z = -0.15*u/np.pi*(x**2 + y**2) + r0**2 # Plot spiral ax.plot3D(x, y, z, 'r') plt.show() However, my spiral is not actually following the surface. I also tried this: x = [] y = [] z = [] for i in range(50): x.append(X[i,i]) y.append(Y[i,i]) z.append(-(X[i,i]**2 + Y[i,i]**2) + r0**2) ax.plot3D(x, y, z, 'b') which is going around the surface but I don't know how to make it do more circles around the surface. Any ideas?
The formula in the second attempt is correct. I get what you want if I use the same formula in your first attempt. The line z = -0.15*u/np.pi*(x**2 + y**2) + r0**2 needs to be replaced with -(x**2 + y**2) + r0**2. For reproducibility: %matplotlib notebook import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = plt.axes(projection='3d') # Surface ------------------ # Create the mesh in polar coordinates and compute corresponding Z r0 = 5 r = np.linspace(0, r0, 50) p = np.linspace(0, 2*np.pi, 50) R, P = np.meshgrid(r, p) Z = -R**2 + r0**2 # Express the mesh in the cartesian system X, Y = R*np.cos(P), R*np.sin(P) # Plot the surface ax.plot_surface(X, Y, Z, linewidth=0, antialiased=False, alpha=0.2) # Spiral ------------------- # Attempt 1 u = np.arange(0, 29, 0.1) x = 0.17*u*np.cos(u) y = 0.17*u*np.sin(u) z = -(x**2 + y**2) + r0**2 # z = -0.15*u/np.pi*(x**2 + y**2) + r0**2 # Plot spiral ax.plot3D(x, y, z, 'r') plt.show() Output is as shown below:
3
2
76,001,604
2023-4-13
https://stackoverflow.com/questions/76001604/extracting-the-minimum-x-value-keys-from-dictionary
Suppose we wish to extract the minimum value from a dictionary like so scores = { 0:1.3399288498085087, 1:1.2672683347433629, 3:1.6999159970296505, 4:1.8410942584597279, 5:1.336658057628646 } #find minimum value in dictionary minimum_value = min(scores.values()) #get keys with minimal value using list comprehension minimum_keys = [key for key in scores if scores[key]==minimum_value] minimum_keys This returns the key with the lowest value. However, what if I wish to extract the minimum 2 keys and put them into a list? What if I wanted the minimum 20? How would I go about doing this for an arbitrary number of minimal values desired?
In fact, the problem is simpler than you think: scores = { 0:1.3399288498085087, 1:1.2672683347433629, 3:1.6999159970296505, 4:1.8410942584597279, 5:1.336658057628646 } # minimum key print(min(scores, key=scores.get)) # n minimum keys print(sorted(scores, key=scores.get)[:3]) Output: 1 [1, 5, 0] Both min and sorted allow you to provide a key which is some function to be called with a value, to compute an associated value to be used for sorting. By providing scores.get as that function, you can sort keys by their matching value, which is what you want.
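For large dictionaries, a small additional sketch: heapq.nsmallest avoids sorting everything when only the n smallest keys (ranked by value) are needed.

import heapq

scores = {0: 1.3399, 1: 1.2673, 3: 1.6999, 4: 1.8411, 5: 1.3367}

print(heapq.nsmallest(2, scores, key=scores.get))   # [1, 5]
print(heapq.nsmallest(20, scores, key=scores.get))  # every key, ordered by value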
4
5
75,999,612
2023-4-12
https://stackoverflow.com/questions/75999612/why-is-a-pickled-object-with-slots-bigger-than-one-without-slots
I'm working on a program that keeps dying because of the OOM killer. I was hoping for some quick wins in reducing the memory usage without a major refactor. I tried adding __slots__ to the most common classes but I noticed the pickled size went up. Why is that? class Class: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class ClassSlots: __slots__ = ["a", "b", "c"] def __init__(self, a, b, c): self.a = a self.b = b self.c = c cases = [ Class(1, 2, 3), ClassSlots(1, 2, 3), [Class(1, 2, 3) for _ in range(1000)], [ClassSlots(1, 2, 3) for _ in range(1000)] ] for case in cases: dump = pickle.dumps(case, protocol=5) print(len(dump)) with Python 3.10 prints 59 67 22041 25046
So, on Python 3.11, let's define the following: class Foo: def __init__(self, a, b, c): self.a = a self.b = b self.c = c class Bar: __slots__ = ["a", "b", "c"] def __init__(self, a, b, c): self.a = a self.b = b self.c = c Now, let's see: >>> import pickle >>> import pickletools >>> len(pickle.dumps(Foo(1,2,3))), len(pickle.dumps(Bar(1,2,3))) (57, 60) So, there seems to be a three-byte difference (when we make the classes have the same length name... that accounts for 5 out of the 8 byte difference you were originally seeing) An important point to understand is that a "pickle" is basically a series of instructions on how to rebuild an object, these instructions are executed on a pickle virtual machine. We can use pickletools.dis to get a human readable disassembly of these instructions. Now, let's see what the disassembly shows us: >>> pickletools.dis(pickle.dumps(Foo(1,2,3))) 0: \x80 PROTO 4 2: \x95 FRAME 46 11: \x8c SHORT_BINUNICODE '__main__' 21: \x94 MEMOIZE (as 0) 22: \x8c SHORT_BINUNICODE 'Foo' 27: \x94 MEMOIZE (as 1) 28: \x93 STACK_GLOBAL 29: \x94 MEMOIZE (as 2) 30: ) EMPTY_TUPLE 31: \x81 NEWOBJ 32: \x94 MEMOIZE (as 3) 33: } EMPTY_DICT 34: \x94 MEMOIZE (as 4) 35: ( MARK 36: \x8c SHORT_BINUNICODE 'a' 39: \x94 MEMOIZE (as 5) 40: K BININT1 1 42: \x8c SHORT_BINUNICODE 'b' 45: \x94 MEMOIZE (as 6) 46: K BININT1 2 48: \x8c SHORT_BINUNICODE 'c' 51: \x94 MEMOIZE (as 7) 52: K BININT1 3 54: u SETITEMS (MARK at 35) 55: b BUILD 56: . STOP highest protocol among opcodes = 4 And: >>> pickletools.dis(pickle.dumps(Bar(1,2,3))) 0: \x80 PROTO 4 2: \x95 FRAME 49 11: \x8c SHORT_BINUNICODE '__main__' 21: \x94 MEMOIZE (as 0) 22: \x8c SHORT_BINUNICODE 'Bar' 27: \x94 MEMOIZE (as 1) 28: \x93 STACK_GLOBAL 29: \x94 MEMOIZE (as 2) 30: ) EMPTY_TUPLE 31: \x81 NEWOBJ 32: \x94 MEMOIZE (as 3) 33: N NONE 34: } EMPTY_DICT 35: \x94 MEMOIZE (as 4) 36: ( MARK 37: \x8c SHORT_BINUNICODE 'a' 40: \x94 MEMOIZE (as 5) 41: K BININT1 1 43: \x8c SHORT_BINUNICODE 'b' 46: \x94 MEMOIZE (as 6) 47: K BININT1 2 49: \x8c SHORT_BINUNICODE 'c' 52: \x94 MEMOIZE (as 7) 53: K BININT1 3 55: u SETITEMS (MARK at 36) 56: \x86 TUPLE2 57: \x94 MEMOIZE (as 8) 58: b BUILD 59: . STOP highest protocol among opcodes = 4 So, the first difference is that on opcode 33, the non-slotted class is missing a None, i.e.: 33: } EMPTY_DICT 34: \x94 MEMOIZE (as 4) Vs: 33: N NONE 34: } EMPTY_DICT 35: \x94 MEMOIZE (as 4) The rest of the instructions build the same dictionary, but then the slotted version also does: 56: \x86 TUPLE2 57: \x94 MEMOIZE (as 8) Which creates a tuple (None, {<the dict>}) I am almost certain this is related to the difference between the results of __getstate__: >>> Foo(1,2,3).__getstate__() {'a': 1, 'b': 2, 'c': 3} >>> Bar(1,2,3).__getstate__() (None, {'a': 1, 'b': 2, 'c': 3}) That behavior is described in the pickle docs for object.__getstate__: For a class that has an instance __dict__ and no __slots__, the default state is self.__dict__. ... For a class that has __slots__ and no instance __dict__, the default state is a tuple whose first item is None and whose second item is a dictionary mapping slot names to slot values described in the previous bullet.
5
4
75,998,924
2023-4-12
https://stackoverflow.com/questions/75998924/what-is-the-difference-between-manager-pool-and-pool-in-python-multiprocessing
Say I want to share a dictionary between processes. If I have defined a manager, what is the difference between instantiating a pool using manager.Pool() and multiprocessing.Pool()? Ex: What is the difference between the two with statements in main_1 and main_2? import multiprocessing as mp import time from random import random def func(d): pid = mp.current_process().pid time.sleep(3 * random()) d[pid] = 'I added this' def main_1(): print('Start of main 1') with mp.Manager() as manager: d = manager.dict() with manager.Pool() as pool: # using manager pool.map(func, [d] * 4) print(d) def main_2(): print('Start of main 2') manager = mp.Manager() try: d = manager.dict() with mp.Pool() as pool: # using multiprocessing pool.map(func, [d] * 4) print(d) finally: manager.shutdown() if __name__ == '__main__': main_1() main_2() More generally Are all processes started after a Manager exists in the scope automatically served by it?
Both manager.Pool and multiprocessing.Pool create a pool of worker processes that execute tasks in parallel, but they are not the same object. multiprocessing.Pool is the class provided directly by the multiprocessing module: it starts worker processes as children of the calling process, and you submit functions or methods to it to be executed in parallel. manager.Pool, on the other hand, is created through a Manager (the SyncManager from multiprocessing.managers): the pool itself lives inside the manager's server process, and what the call returns is a proxy to that pool. Because it is a proxy, the pool can be passed to and used from multiple Python processes, which is what makes it useful when the pool itself has to be shared, for example in distributed set-ups. For the example in the question both versions produce the same result; the manager is only strictly needed for the objects it serves (the shared dict), not for the pool. And no, processes are not automatically "served" by a manager just because one exists in scope; only objects created through the manager (like manager.dict()) live in, and are proxied by, its server process.
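As a minimal sketch of the difference (standard library only; the exact proxy type name may vary between Python versions), the manager hands back a proxy while multiprocessing.Pool gives you the pool object directly:

    import multiprocessing as mp

    def square(x):
        return x * x

    if __name__ == "__main__":
        with mp.Manager() as manager:
            managed_pool = manager.Pool()  # proxy to a Pool living in the manager's server process
            local_pool = mp.Pool()         # Pool object owned by this process
            print(type(managed_pool))      # a PoolProxy, not multiprocessing.pool.Pool
            print(type(local_pool))        # <class 'multiprocessing.pool.Pool'>
            print(managed_pool.map(square, range(4)))  # both the proxy and the real pool expose map()
            print(local_pool.map(square, range(4)))
            for p in (managed_pool, local_pool):
                p.close()
                p.join()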
4
4
75,999,147
2023-4-12
https://stackoverflow.com/questions/75999147/how-to-structurally-pattern-match-builtin-type-slice-in-python
How to structurally pattern match builtin type slice in python? Somewhy the following code does not work: def __getitem__(self, index): match index: case int(i): ... case slice(start, stop, step): ... case _: ... and I completely do not understand why.
The builtin slice type doesn't define __match_args__, so its attributes can't be captured positionally, which is why case slice(start, stop, step) fails with a TypeError about positional sub-patterns. Capture them as keyword patterns instead: class Example: def __getitem__(self, index): match index: case int(i): print('int', i) case slice(start=start, stop=stop, step=step): print('slice', start, stop, step) case _: print('default') e = Example() e[1] e[1:2] Prints: int 1 slice 1 2 None
3
2
75,999,041
2023-4-12
https://stackoverflow.com/questions/75999041/pandas-how-to-check-if-column-not-empty-then-apply-str-replace-in-one-line-code
code: df['Rep'] = df['Rep'].str.replace('\\n', ' ') issue: if the df['Rep'] is empty or null ,there will be an error: Failed: Can only use .str accessor with string values! is there anyway can handle the situation when the column value is empty or null? If it is empty or null ,just ignore that row
By default the empty series dtype will be float64. You can do a workaround using the astype: df['Rep'] = df['Rep'].astype('str').str.replace('\\n', ' ') Test code: df = pd.DataFrame({'Rep': []}) # works df['Rep'] = df['Rep'].astype('str').str.replace('\\n', ' ') # doesn't work df['Rep'] = df['Rep'].str.replace('\\n', ' ') I don't know which version of pandas you are using, but the default dtype for an empty series was due to change to object. Edit: it still won't work with object; just tested with the latest version of pandas (2.0.0). Source
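A variation on the same idea, in case you also want genuinely missing values to stay NaN instead of becoming the literal string 'nan' after astype('str'): apply the replacement only where a value is present.

    mask = df['Rep'].notna()
    df.loc[mask, 'Rep'] = df.loc[mask, 'Rep'].astype(str).str.replace('\\n', ' ')

This also works when the column is completely empty, because the empty selection never reaches the .str accessor error.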
5
3
75,998,784
2023-4-12
https://stackoverflow.com/questions/75998784/python-customtkinter-attributeerror-int-object-has-no-attribute-root
i just took over a project in python after a year and i wanted to rebuild it with customtkinter using the documentation. Here is the code: import customtkinter import tkinter from pytube import YouTube from PIL import Image customtkinter.set_appearance_mode("system") customtkinter.set_default_color_theme("blue") app = customtkinter.CTk() app.geometry("800x760") app.title("YouTube Unlocked") logo = customtkinter.CTkImage(dark_image=Image.open("G:\\ytdownloader\\assets\\icon.png"), size=(260,190)) logoDisplay = customtkinter.CTkLabel(app, text="", image=logo) logoDisplay.pack(padx=10, pady=10) linkLabel = customtkinter.CTkLabel(app, text="Video link", font=("Arial", 15)) linkLabel.pack(padx=10, pady=10) linkTextbox = customtkinter.CTkEntry(app, width=400) linkTextbox.pack(padx=10, pady=10) radio_var = tkinter.IntVar(0) radiobutton_1 = customtkinter.CTkRadioButton(app, text="Video", variable=radio_var, value=1) radiobutton_2 = customtkinter.CTkRadioButton(app, text="Audio", variable=radio_var, value=2) radiobutton_1.pack(padx=10, pady=10) radiobutton_2.pack(padx=10, pady=10) app.mainloop() But after writing the radio buttons code (always from the documentation) I got this error: Traceback (most recent call last): File "g:\ytdownloader\main.py", line 23, in <module> radio_var = tkinter.IntVar(0) File "C:\Users\gp_ga\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 564, in __init__ Variable.__init__(self, master, value, name) File "C:\Users\gp_ga\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 372, in __init__ self._root = master._root() AttributeError: 'int' object has no attribute '_root' Is there any way to fix this? Thanks! (by the way if it's a bad english I want to let you know that i'm using google translate)
tkinter.IntVar() already has a default value of 0. The documentation says the following about the arguments that tkinter.IntVar() can take: tkinter.IntVar(master=None, value=None, name=None) Since master comes first and you provided 0 as the first positional argument, it believes that it is the master argument. What you want to do is specify the argument value. Like so: radio_var = tkinter.IntVar(value=0)
3
4
75,998,310
2023-4-12
https://stackoverflow.com/questions/75998310/converting-a-python-recursive-function-into-excel
So, I have to run a recursive function times but it's going to take too long to actually print out all the values. Thus, my professor recommended using Excel but I don't know Excel at all. I need help converting the code into Excel. It's probably easy for someone who knows Excel. def a(n): k=3.8 if n==0: return .5 else: return k*a(n-1)*(1-a(n-1)) for i in range(100): print(a(i)) All i know is that you use the lambda() function in excel
You don't need to use Excel. You just need to use a better algorithm. The easiest way to avoid the exponential time complexity is to not re-calculate the same value twice: def a(n): k = 3.8 if n==0: return .5 else: x = a(n - 1) return k*x*(1-x) for i in range(100): print(a(i)) In Python you should avoid recursion, though, since Python doesn't optimize recursion and you will run out of stack space. This is easy to convert to an iterative algorithm: def b(n): k = 3.8 prev = curr = 0.5 for i in range(1, n + 1): curr = k * prev * (1 - prev) prev = curr return curr
3
2
75,998,574
2023-4-12
https://stackoverflow.com/questions/75998574/how-to-implement-rolling-mean-ignoring-null-values
I am trying calculate RSI indicator. For that I need rolling-mean gain and loss. I would like to calculate rolling mean ignoring null values. So mean would be calculated by sum and count on existing values. Example: window_size = 5 df = DataFrame(price_change: { 1, 2, 3, -2, 4 }) df_gain = .select( pl.when(pl.col('price_change') > 0.0) .then(pl.col('price_change')) .otherwise(None) .alias('gain') ) # UNKOWN HOW TO GET WANTED RESULT: rol_mean_gain = df_gain.select( pl.col('gain').rolling_mean(window_size=window_size, ignore_null=True) ) So that rol_mean_gain be calculated: [1, 2, 3, skip, 4] / 4 (not 5) I know Pandas has .mean(skipna=True) or .apply(pandas.np.nanmean) But as far as I am aware polars does not provide such API.
I think the skipping of nulls is implied; see the min_periods description in the docs for this method. df_gain.select(pl.col('gain').rolling_mean(window_size=window_size, min_periods=1)) Gives me a column of 1.0, 1.5, 2.0, 2.0, 2.5. Note how the last two values skip the null correctly.
5
1
75,955,739
2023-4-7
https://stackoverflow.com/questions/75955739/how-to-select-the-column-from-a-polars-dataframe-that-has-the-largest-sum
I have Polars dataframe with a bunch of columns I need to find the column with, for example, the largest sum. The below snippet sums all of the columns: df = pl.DataFrame( { "a": [0, 1, 3, 4], "b": [0, 0, 0, 0], "c": [1, 0, 1, 0], } ) max_col = df.select(pl.col(df.columns).sum()) shape: (1, 3) ┌─────┬─────┬─────┐ │ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 8 ┆ 0 ┆ 2 │ └─────┴─────┴─────┘ But I'm missing the last step of selecting the column with the largest value?
I would do this as a unpivot/filter. df \ .select(pl.all().sum()) \ .unpivot() \ .filter(pl.col('value')==pl.col('value').max()) If you want the original shape then a single chain is a bit tougher. I'd just do it like this instead. allcalc=df \ .select(pl.all().sum()) allcalc.select(allcalc.unpivot().filter(pl.col('value')==pl.col('value').max()) \ .get_column('variable').to_list()) The above works if there is a tie, for instance if you have: df=pl.DataFrame( { "a": [0, 1, 3, 4], "b": [0, 0, 0, 0], "c": [1, 0, 1, 6], } ) then you'll get 'a' and 'c' in either case.
3
0
75,954,280
2023-4-6
https://stackoverflow.com/questions/75954280/how-to-change-the-position-of-a-single-column-in-python-polars-library
I am working with the Python Polars library for data manipulation on a DataFrame, and I am trying to change the position of a single column. I would like to move a specific column to a different index while keeping the other columns in their respective positions. One way of doing that is using select, but that requires giving a complete order for all the columns which I don't want to do. import polars as pl # Create a simple DataFrame data = { 'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9], 'D': [10, 11, 12] } df = pl.DataFrame(data) I want to move column 'C' to index 1, so the desired output should be: shape: (3, 4) ┌─────┬─────┬─────┬──────┐ │ A │ C │ B │ D │ │ --- │ --- │ --- │ ---- │ │ i64 │ i64 │ i64 │ i64 │ ╞═════╪═════╪═════╪══════╡ │ 1 │ 7 │ 4 │ 10 │ ├─────┼─────┼─────┼──────┤ │ 2 │ 8 │ 5 │ 11 │ ├─────┼─────┼─────┼──────┤ │ 3 │ 9 │ 6 │ 12 │ └─────┴─────┴─────┴──────┘
Some attempts: df.drop("C").insert_column(1, df.get_column("C")) df.select(df.columns[0], "C", pl.exclude(df.columns[0], "C")) cols = df.columns cols[1], cols[2] = cols[2], cols[1] # cols[1:3] = cols[2:0:-1] df.select(cols) shape: (3, 4) ┌─────┬─────┬─────┬─────┐ │ A ┆ C ┆ B ┆ D │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═════╡ │ 1 ┆ 7 ┆ 4 ┆ 10 │ │ 2 ┆ 8 ┆ 5 ┆ 11 │ │ 3 ┆ 9 ┆ 6 ┆ 12 │ └─────┴─────┴─────┴─────┘
4
6
75,977,591
2023-4-10
https://stackoverflow.com/questions/75977591/mark-rows-of-one-dataframe-based-on-values-from-another-dataframe
I have following problem. Let's say I have two dataframes df1 = pl.DataFrame({'a': range(10)}) df2 = pl.DataFrame({'b': [[1, 3], [5,6], [8, 9]], 'tags': ['aa', 'bb', 'cc']}) print(df1) print(df2) shape: (10, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 0 │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 │ │ 6 │ │ 7 │ │ 8 │ │ 9 │ └─────┘ shape: (3, 2) ┌───────────┬──────┐ │ b ┆ tags │ │ --- ┆ --- │ │ list[i64] ┆ str │ ╞═══════════╪══════╡ │ [1, 3] ┆ aa │ │ [5, 6] ┆ bb │ │ [8, 9] ┆ cc │ └───────────┴──────┘ I need to mark/tag rows in dataframe df1 based on values of dataframe df2, so I can get following dataframe print(pl.DataFrame({'a': range(10), 'tag': ['NA', 'aa', 'aa', 'aa', 'NA', 'bb', 'bb', 'NA', 'cc', 'cc']})) shape: (10, 2) ┌─────┬─────┐ │ a ┆ tag │ │ --- ┆ --- │ │ i64 ┆ str │ ╞═════╪═════╡ │ 0 ┆ NA │ │ 1 ┆ aa │ │ 2 ┆ aa │ │ 3 ┆ aa │ │ 4 ┆ NA │ │ 5 ┆ bb │ │ 6 ┆ bb │ │ 7 ┆ NA │ │ 8 ┆ cc │ │ 9 ┆ cc │ └─────┴─────┘ So list in column b of df2 indicates start and end values for column a of df1 that needs to be tagged with what's in column tags. Thanks
You could create the ranges and "flatten" the frame: .int_ranges() .explode() (df2 .with_columns(pl.int_ranges(pl.col("b").list.first(), pl.col("b").list.last() + 1)) .explode("b") ) shape: (7, 2) ┌─────┬──────┐ │ b ┆ tags │ │ --- ┆ --- │ │ i64 ┆ str │ ╞═════╪══════╡ │ 1 ┆ aa │ │ 2 ┆ aa │ │ 3 ┆ aa │ │ 5 ┆ bb │ │ 6 ┆ bb │ │ 8 ┆ cc │ │ 9 ┆ cc │ └─────┴──────┘ Matching could then be done with a LEFT JOIN df1.join( df2.with_columns( pl.int_ranges(pl.col("b").list.first(), pl.col("b").list.last() + 1)).explode("b"), left_on = "a", right_on = "b", how = "left" ) shape: (10, 2) ┌─────┬──────┐ │ a ┆ tags │ │ --- ┆ --- │ │ i64 ┆ str │ ╞═════╪══════╡ │ 0 ┆ null │ │ 1 ┆ aa │ │ 2 ┆ aa │ │ 3 ┆ aa │ │ 4 ┆ null │ │ 5 ┆ bb │ │ 6 ┆ bb │ │ 7 ┆ null │ │ 8 ┆ cc │ │ 9 ┆ cc │ └─────┴──────┘
3
3
75,987,622
2023-4-11
https://stackoverflow.com/questions/75987622/change-case-of-all-column-names-with-ibis
I have an Ibis table named t. Its column names are all lowercase. I want to change them all to uppercase. How can I do that?
The rename method of Ibis table objects renames columns. It can be used to make all the column names uppercase like this: t = t.rename(dict(zip([x.upper() for x in t.columns], t.columns))) rename also provides a shortcut for this: t = t.rename("ALL_CAPS") The above works in Ibis version 7.0.0 or newer. In older versions of Ibis, you can use relabel: t = t.relabel(dict(zip(t.columns, [x.upper() for x in t.columns])))
3
2
75,942,865
2023-4-5
https://stackoverflow.com/questions/75942865/async-solution-for-factory-boy-style-fixtures-in-fastapi
I really like the factory boy style of generated factories that can handle things like sequences, complex relationships etc. For a FastAPI app with fully async database access using factory boy seems likely problematic. There is dated discussion here and an old PR to add async support that seems stuck. Is there a good solution for these kinds of fixtures that has full async support?
I haven't seen further progress from factory boy on this issue, but ultimately implemented a solution using pytest fixtures as factories that is working well for me. The core idea is to build fixtures that return a factory method that can be used in tests. Here is a concrete example that generates users: @pytest.fixture(scope="function") def user_factory(db: Session): """A factory for creating users""" last_user = 0 def create_user( email=None, name=None, role=None, org: Organization | None = None ) -> User: """Return a new user, optionally with role for an existing organization""" nonlocal last_user last_user += 1 email = email or f"user{last_user}@example.com" name = name or f"User {last_user}" user = User(email=email, name=name, auth_id=auth_id) db.add(user) if role: role = OrganizationRole(user_id=user_id, organization_id=org.id, role=role) db.add(role) db.commit() db.refresh(user) return user return create_user # use in test def test_something(user_factory) -> None: user = user_factory() # ... I picked an example with SQLAlchemy ORM for simplicity of demonstrating the concept but you can persist however you like, pull in other factories as dependencies, etc. There is a pretty good discussion of how to approach this in this article about using factories as fixtures. This article also has good ideas for how to address sequences, constraints and a lot of the things I would use factory boy for by just using simple python.
5
2
75,973,808
2023-4-10
https://stackoverflow.com/questions/75973808/concise-way-to-retrieve-a-row-from-a-polars-dataframe-with-an-iterator-of-column
I often need to retrieve a row from a Polars DataFrame given a collection of column values, like I might use a composite key in a database. This is possible in Polars using DataFrame.row, but the resulting expression is very verbose: row_index = {'treatment': 'red', 'batch': 'C', 'unit': 76} row = df.row(by_predicate=( (pl.col('treatment') == row_index['treatment']) & (pl.col('batch') == row_index['batch']) & (pl.col('unit') == row_index['unit']) )) The most succinct method I've found is from functools import reduce from operator import and_ expr = reduce(and_, (pl.col(k) == v for k, v in row_index.items())) row = df.row(by_predicate=expr) But that is still verbose and hard to read. Is there an easier way? Possibly a built-in Polars functionality I'm missing?
(a == b) & (c == d) will return true if all of the conditions are true. Another way to express this is with pl.all_horizontal() pl.all_horizontal(a == b, c == d) pl.any_horizontal() can be used for "logical OR" To which you can pass your comprehension directly: expr = pl.all_horizontal( pl.col(k) == v for k, v in row_index.items() ) df.row(by_predicate=expr)
3
4
75,971,804
2023-4-9
https://stackoverflow.com/questions/75971804/what-should-i-do-with-user-installed-packages-on-debian-in-light-of-pep668
In light of some distributions (Debian at least) steering away from python3 -m pip install numpy what should I do with my user packages that I had installed with python3 -m pip install --user numpy (for instance)? They are located in python3 -m site --user-site : ~/.local/lib/python3.11/site-packages/. I don't want to: delete /usr/lib/python3.x/EXTERNALLY-MANAGED install pipx sometimes, of course, cannot find an apt managed package I'm all for using venv, but do I make some ~/.local/lib/python3/packages folder and include it in a PYTHON_PATH, but also make that folder a venv virtual environment? Should I move what is in my ~/.local/lib/python3.11/site-packages/ folder to this new folder, or re-download? For someone who uses Python and 'standard' packages, I really don't need to worry about mixing things too much, but to me it seems (only for me as a low-key user) that functionally there is little difference between venv installs in one directory (one for each project) and --user installs in another. I've looked around a lot about this but couldn't find a "here's what to do" tutorial on adopting the change. Or maybe I am missing some big picture... How do I solve "error: externally-managed-environment" everytime I use pip3? pip install -r requirements.txt is failing: "This environment is externally managed" pip install -r requirements.txt is failing: "This environment is externally managed"
I decided to use pipenv to manage my packages, but to install it, I have to do a pip install. I wanted to do this just for my user, so I created a local venv for it, mkdir -p ~/.local/share/ apt install python3-pip python3 -m venv ~/.local/share/pipenv source ~/.local/share/pipenv/bin/activate and installed pipenv on it: pip install pipenv To now use pipenv with my user, I added the path to the pipenv bin venv to my PATH: export PATH=$HOME/.local/share/pipenv/bin:$PATH Now I can use pipenv to manage the project venvs. It's kinda convoluted, I know, but Python can be quite convoluted with packages.
3
1
75,984,983
2023-4-11
https://stackoverflow.com/questions/75984983/polars-change-a-value-in-a-dataframe-if-a-condition-is-met-in-another-column
I have this dataframe df = pl.from_repr(""" ┌─────┬───────┐ │ one ┆ two │ │ --- ┆ --- │ │ str ┆ str │ ╞═════╪═══════╡ │ a ┆ hola │ │ b ┆ world │ └─────┴───────┘ """) And I want to change hola for hello: shape: (2, 2) ┌─────┬───────┐ │ one ┆ two │ │ --- ┆ --- │ │ str ┆ str │ ╞═════╪═══════╡ │ a ┆ hello │ # <- │ b ┆ world │ └─────┴───────┘ How can I change the values of a row based on a condition in another column? For instance, with PostgreSQL I could do this: UPDATE my_table SET two = 'hello' WHERE one = 'a'; Or in Spark my_table.withColumn("two", when(col("one") == "a", "hello")) I've tried using with_columns(pl.when(pl.col("one") == "a").then("hello")) but that changes the column "one". EDIT: I could create a SQL instance and plot my way through via SQL but there must be way to achieve this via the Python API.
You were really close with with_columns(pl.when(pl.col("one") == "a").then("hello")) but you needed to tell it which column the result should go into. When you don't give the output expression a name, polars falls back to the name of the first column the expression references, which here is 'one', and that is why that column got overwritten. Instead you do (df .with_columns( two=pl.when(pl.col('one')=='a') .then(pl.lit('hello')) .otherwise(pl.col('two'))) ) This uses the **kwargs input of with_columns to allow the output column name to be on the left of an equals sign as though it were a parameter to a function. You can also use alias syntax like this... (df .with_columns( (pl.when(pl.col('one')=='a') .then(pl.lit('hello')) .otherwise(pl.col('two'))) .alias('two') ) ) Note that I wrapped the entire when/then/otherwise in parentheses. The order of operations around when/then/otherwise and alias is weird, so I find it's best to always completely wrap them in parentheses to avoid unexpected results. The worst-case scenario is that you have redundant parentheses, which don't hurt anything.
12
20
75,985,726
2023-4-11
https://stackoverflow.com/questions/75985726/error-metric-for-backtest-and-historical-forecasting-in-darts-are-different
When using backtest and historical_forecast in darts I expect the same error. However, when doing a test, I get different MAPE values for the same input variables. Can somebody explain how this can happen? How can I make the two methods comparable? Example: import pandas as pd from darts import TimeSeries from darts.models import NaiveDrift from darts.metrics import mape df = pd.read_csv('AirPassengers.csv') series = TimeSeries.from_dataframe(df, 'Month', '#Passengers') print("Backtest MAPE: ", NaiveDrift().backtest(series, start=12, forecast_horizon=6, metric=mape)) historical_forecast = NaiveDrift().historical_forecasts(series, start=12, forecast_horizon=6, verbose=False) print("Historical Forecast MAPE: ", mape(historical_forecast, series)) Output: Backtest MAPE: 16.821355933599133 Historical Forecast MAPE: 21.090231183002143 Links Link to the documentation:https://unit8co.github.io/darts/generated_api/darts.models.forecasting.baselines.html Link to the dataset: https://www.kaggle.com/datasets/rakannimer/air-passengers
The reason the MAPEs are different is that the data used to compute them are different. historical_forecasts() and backtest() have different default values for the parameter "last_points_only". For historical_forecasts() the parameter is set to True, while for backtest() it is False. This means that historical_forecasts() by default generates one series against which the accuracy is checked. On the other hand, backtest() generates multiple series and averages the error among them. The backtest() function has the default "stride" parameter as 1, which means after it makes one prediction, it will move forward one time step and generate another, then move forward one more step and generate another, and so on. These multiple series will have a different average MAPE than the one series from the historical forecast. If last_points_only is set to True (as it is by default in historical_forecasts()), when the function steps forward one time step, it will only include the new (last) time series point in the calculation, instead of the whole series again. You can check this by setting the "last_points_only" parameter in backtest() to True, and you will get the same result for both functions.
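A quick check, reusing the imports and series from the question (the only new thing is the extra keyword argument):

    print("Backtest MAPE (aligned): ",
          NaiveDrift().backtest(series, start=12, forecast_horizon=6,
                                metric=mape, last_points_only=True))
    # this should now match the historical_forecasts() + mape() number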
5
4
75,979,676
2023-4-10
https://stackoverflow.com/questions/75979676/why-does-this-code-work-on-python-3-6-but-not-on-python-3-7
In script.py: def f(n, memo={0:0, 1:1}): if n not in memo: memo[n] = sum(f(n - i) for i in [1, 2]) return memo[n] print(f(400)) python3.6 script.py correctly prints f(400), but with python3.7 script.py it stack overflows. The recursion limit is reached at f(501) in 3.6 and at f(334) in 3.7. What changed between Python 3.6 and 3.7 that caused the maximum recursion depth to be exceeded earlier by this code?
After some git bisecting between Python 3.6.0b1 and Python 3.7.0a1 I found bpo bug #29306 (git commits 7399a05, 620580f), which identified some bugs with the recursion depth counting. Originally, Victor Stinner reported that he was unsure that some new internal API functions for optimised calls (part of the reported call overhead optimisations) were handling the recursion counter properly, but after further discussion it was decided that the problem was more general and that all calls to C functions need to handle the recursion counter too. A simple test script is included in the linked issue that demonstrates that there are issues with recursion counting in older Python versions too; the script depends on a special extension module that is part of the development tree. However, even though Python versions 2.7, 3.5 and 3.6 were shown to be affected the changes were never back-ported. I'm guessing that because those versions didn't receive the call overhead optimisations backporting was going to be a lot of work. I also can’t find an entry for this change in the Python changelog, perhaps because Victor regarded this as part of the optimisation work. The changes mean that sum() and other built-in functions now count against the recursive call limit where they didn’t before.
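In practical terms, any change that avoids the per-level sum(<generator>) call keeps those extra C-level frames out of the count. A sketch of the same memoised function with the recurrence written as plain additions (same results, noticeably more headroom before the limit is hit):

    def f(n, memo={0: 0, 1: 1}):
        if n not in memo:
            # each level now only adds f's own frame, not an extra
            # generator frame plus a built-in sum() call
            memo[n] = f(n - 1) + f(n - 2)
        return memo[n]

    for i in range(100):
        print(f(i))

Raising the limit with sys.setrecursionlimit() or rewriting the function iteratively are, of course, the other ways around it.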
8
12
75,968,376
2023-4-9
https://stackoverflow.com/questions/75968376/lowercase-text-with-regex-pattern
I use regex pattern to block acronyms while lower casing text. The code is # -*- coding: utf-8 -*- #!/usr/bin/env python from __future__ import unicode_literals import codecs import os import re text = "This sentence contains ADS, NASA and K.A. as acronymns." pattern = r'([A-Z][a-zA-Z]*[A-Z]|(?:[A-Z]\.)+)' matches = re.findall(pattern, text) def lowercase_ignore_matches(match): word = match.group() if word in matches: return word return word.lower() text2 = re.sub(r"\w+", lowercase_ignore_matches, text) print(text) print(text2) matches = re.findall(pattern, text) print (matches) output is This sentence contains ADS, NASA and K.A. as acronymns. this sentence contains ADS, NASA and k.a. as acronymns. ['ADS', 'NASA', 'K.A.'] The issue is why is it ignoring k.a. while identifying it as acronymns. I wish to retain k.a. as K.A. Kindly help
The solution with r'[\w\.]' works in this case but will struggle if the acronym is at the end of a line with a dot after it (e.g. "[...] or ASDF."). We use the pattern to identify every acronym, then lowercase the whole string and then replace the acronyms again with their original value. I changed the pattern a bit so that it also supports acronyms like "eFUEL" # -*- coding: utf-8 -*- #!/usr/bin/env python from __future__ import unicode_literals import codecs import os import re text = "This sentence contains ADS, NASA and K.A. as acronymns. eFUEL or ASDF." pattern = r'\b([A-z]*?[A-Z](?:\.?[A-Z])+[A-z]*)' # Find all matches of the pattern in the text matches = re.findall(pattern, text) # Make everything lowercase text2 = text.lower() # Replace each match with its original uppercase version for match in matches: text2 = text2.replace(match.lower(), match) print(matches) print(text2) The result is: ['ADS', 'NASA', 'K.A', 'eFUEL', 'ASDF'] this sentence contains ADS, NASA and K.A. as acronymns. eFUEL or ASDF.
3
1
75,936,149
2023-4-5
https://stackoverflow.com/questions/75936149/convert-tensorflow-to-onnx-current-implementation-of-rfft-or-fft-only-allows-co
I am trying to convert this TensorFlow model to onnx. But I get an error message: > python -m tf2onnx.convert --saved-model .\spice --output model.onnx --opset 11 --verbose ... 2023-04-08 18:33:10,811 - ERROR - tf2onnx.tfonnx: Tensorflow op [Real: Real] is not supported 2023-04-08 18:33:10,812 - ERROR - tf2onnx.tfonnx: Tensorflow op [Imag: Imag] is not supported 2023-04-08 18:33:10,879 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'Real': 6, 'Imag': 6}) ... ValueError: make_sure failure: Current implementation of RFFT or FFT only allows ComplexAbs as consumer not {'Real', 'Imag'} Same happens with the TensorFlow Light (tflight) model: > python -m tf2onnx.convert --opset 16 --tflite .\lite-model_spice_1.tflite --output spice.onnx ... ValueError: make_sure failure: Current implementation of RFFT or FFT only allows ComplexAbs as consumer not {'Imag', 'Real'} I am on Windows 11, Python 3.10.10, TensorFlow 2.12 This is my first attempt with TensorFlow / ONNX, so I am unsure where the error comes from. Questions Is it related to TensorFlow, tf2onnx, or the model? Would it work with another setup (maybe on Linux or other TF version)? How to fix the issue?
The error occurs because tf2onnx does not support the Real and Imag operations; for the list of operations tf2onnx can convert, please refer to this link. So the failure comes from tf2onnx's operator coverage combined with the ops this particular model uses (question 1), and as far as I know it has nothing to do with your setup, so another OS or TensorFlow version (question 2) is unlikely to help. A way to work around it is to change how the real and imaginary components are computed in the model, for example: # Compute the real and imaginary components using tf.math.real and tf.math.imag x_real = tf.math.real(x) x_imag = tf.math.imag(x)
3
2
75,983,462
2023-4-11
https://stackoverflow.com/questions/75983462/is-it-possible-to-interpolate-a-quarter-of-the-video-with-optical-flow
I am now trying to interpolate video using optical flow. I was able to interpolate the video by referring to this question and using it as a reference. So my question is: Is it possible to interpolate the video in 1/4 units even finer using the original frame and the optical flow frame? Thank you in advance. I tried halving the optical flow frame and remapped it to the original image, but it did not work. (It's obvious, of course, but...).
I will try solving this problem with the help of an example. Suppose my previous image is and my next image is The next image was created by translating the previous image towards right (I hope it is visible). Now, let's calculate the optical flow between the two images and plot the flow vectors. optical_flow = cv2.calcOpticalFlowFarneback(img1, img2, None, pyr_scale = 0.5, levels = 3, winsize = 200, iterations = 3, poly_n = 5, poly_sigma = 1.1, flags = 0) I have taken the previous image and drawn flow vectors on top of it. This image shows that, previous image was formed by translating the next image towards the left. If you followed this, it is pretty easy to accomplish the task at hand .i.e interpolate frames between these two frames Essentially, optical flow vectors tell us for each pixel, where has the pixel come from the previous image. Lets talk about a particular pixel, which has moved from (100, 100) to (200, 100) .i.e moved in the right direction by 100 pixels. The question is where will this pixel be if there was uniform motion between these 2 frames? Example: if there were 4 frames in between these two frames, the pixel will be at locations initial location = (100, 100) interpolated frame 1 = (120, 100) interpolated frame 2 = (140, 100) interpolated frame 3 = (160, 100) interpolated frame 4 = (180, 100) final location. = (200, 100) Enough talking, let me show you some code. def load_image(image_path): "Load the image and convert it into a grayscale image" img = cv2.imread(image_path) return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def plot_flow_vectors(img, flow): """ Plots the flow vectors """ img = img.copy() h, w = img.shape[:2] step_size = 40 cv2.imshow("original_image", img) for y in range(0, h, step_size): for x in range(0, w, step_size): # Get the flow vector at this point dx, dy = flow[y, x] # Draw an arrow to represent the flow vector cv2.arrowedLine(img, (x, y), (int(x + dx), int(y + dy)), (0, 255, 0), 1, cv2.LINE_AA, tipLength=0.8) cv2.imshow("image_with_flow_vectors", img) cv2.waitKey(0) cv2.destroyAllWindows() img1 = load_image('first.jpeg') // load previous image img2 = load_image('second.jpeg') // load next image optical_flow = cv2.calcOpticalFlowFarneback(img1, img2, None, pyr_scale = 0.5, levels = 3, winsize = 200, iterations = 3, poly_n = 5, poly_sigma = 1.1, flags = 0) // calculate optical flow plot_flow_vectors(img1, optical_flow) // plot flow vectors // Generate frames in between num_frames = 10 h, w = optical_flow.shape[:2] for frame_num in range(1, num_frames+1): alpha = frame_num / num_frames flow = alpha * optical_flow flow[:,:,0] += np.arange(w) flow[:,:,1] += np.arange(h)[:,np.newaxis] interpolated_frame = cv2.remap(img2, flow, None, cv2.INTER_LINEAR) cv2.imshow("intermediate_frame", interpolated_frame) cv2.waitKey(0) PS: optical flow vectors also show an motion in the y direction (when there isn't). I think it is an artifact of the large motion between the two frames. This is generally not the case for frames in a video. Edit: I created a github repository to interpolate frames in between two video frames to increase the video fps or get a slow motion video out of the input video. Check it out here
3
2
75,980,420
2023-4-10
https://stackoverflow.com/questions/75980420/upset-plot-python-list-row-names
The upset plot tutorials on the documentation have this example with movies: https://upsetplot.readthedocs.io/en/stable/formats.html#When-category-membership-is-indicated-in-DataFrame-columns I wanted to know, after creating data from memberships "Genre" and plotting how do I list the names of the movies as well? In the plot, I want to print the list of movies at each intersection. So at intersection 48, I want to list the 48 movies.
In the example on the documentation page, this information is contained in the dataframe movies_by_genre, which is defined as: movies_by_genre = from_indicators(genre_indicators, data=movies). Now, we can extract the required information from this data frame. We just need to make sure that the order of the boolean tuple of length 20, (True, False, ....., True) in the pandas Series object intersection and the pandas Series object movies_by_genre.Genres. I used a dict to map the order of columns. For reproducibility, the end-to-end python script is given below: # ! pip install upsetplot # ! pip install smartprint from upsetplot import from_indicators import pandas as pd from upsetplot import UpSet from smartprint import smartprint as sprint def get_movie_list_at_intersection(u, movies_by_genre, col=0): """ Args: u: result of the call UpSet(movies_by_genre, min_subset_size=15, show_counts=True) movies_by_genre: result of from_indicators(genre_indicators, data=movies) column number: 0 implies the first intersection with 48 elements Returns: list of movie names at column number col """ keys = list(u.intersections.index.names) values = list(u.intersections.index[col]) # Fix the order of columns between movies df and the movies_by_genre_df dict_ = dict(zip(keys, values)) column_names_in_df_movies_by_genre = movies_by_genre.Genre.index.names mapped_boolean = [*map(dict_.get, column_names_in_df_movies_by_genre)] movie_list = movies_by_genre.loc[tuple(mapped_boolean)].Title.tolist() return movie_list from upsetplot import from_indicators import pandas as pd from upsetplot import UpSet movies = pd.read_csv("https://raw.githubusercontent.com/peetck/IMDB-Top1000-Movies/master/IMDB-Movie-Data.csv") genre_indicators = pd.DataFrame([{cat: True for cat in cats} for cats in movies.Genre.str.split(',').values]).fillna(False) movies_by_genre = from_indicators(genre_indicators, data=movies) u = UpSet(movies_by_genre, min_subset_size=15, show_counts=True) # For for the 4th intersection set, i.e. column number 3 we have the following, # which outputs the corresponding list of length 15 movies sprint (get_movie_list_at_intersection(u, movies_by_genre, 3)) sprint (len(get_movie_list_at_intersection(u, movies_by_genre, 3))) Output: get_movie_list_at_intersection(u, movies_by_genre, 3) : ['Nocturnal Animals', 'Miss Sloane', 'Forushande', 'Kynodontas', 'Norman: The Moderate Rise and Tragic Fall of a New York Fixer', 'Black Swan', 'The imposible', 'The Lives of Others', 'Zipper', 'Lavender', 'Man Down', 'A Bigger Splash', 'Flight', 'Contagion', 'The Skin I Live In'] len(get_movie_list_at_intersection(u, movies_by_genre, 3)) : 15 EDIT: Upon clarification from OP, the list of names should be printed on the plot. So, we can follow the same method and put the text on the plots manually. I did the following: Modified the _plot_bars() function inside upsetplot.plotting.py such that it allows us to add text from a parameterlist called lol_of_intersection_names; lol stands for list of list. Additionally, I added an alpha parameter to reduce the transparency of the bars when ax.bar is called; otherwise the text will not be visible. (alpha = 0.5) in the example below. 
for (name, y), color in zip(data_df.items(), colors): rects = ax.bar(x, y, .5, cum_y, color=color, zorder=10, label=name if use_labels else None, align='center',alpha=0.5) cum_y = y if cum_y is None else cum_y + y all_rects.extend(rects) ############# Start of Snippet # Iterate over each bar for bar_num in range(len(y.tolist())): bar = ax.patches[bar_num] # extract the bar for counter in range(y.tolist()[bar_num]): # insert text according to ax.text( bar.get_width()/2 + bar.get_x(), bar.get_y() + bar.get_height() * \ counter/y.tolist()[bar_num] , self.lol_of_intersection_names[bar_num][counter], \ color='blue', ha='center', va='center', fontsize=0.5) counter += 1 ############# End of Snippet self._label_sizes(ax, rects, 'top' if self._horizontal else 'right') Inserted the parameters into the object u of class Upset so that it can be accessed inside the function _plot_bars() as shown below: u = UpSet(movies_by_genre, min_subset_size=15, show_counts=True) lol_of_intersection_names = [] # lol: list of list for i in range(u.intersections.shape[0]): lol_of_intersection_names.append((get_movie_list_at_intersection(u, movies_by_genre, i))) u.lol_of_intersection_names = lol_of_intersection_names u.plot() plt.savefig("Upset_plot.png", dpi=600) plt.show() Finally, the output looks as shown below: However, given the long list of names, I am unsure of the practical importance of plotting like this. Only when I save the image in 600DPI, can I zoom in and see the names of movies.
3
2
75,982,081
2023-4-11
https://stackoverflow.com/questions/75982081/best-way-to-use-python-iterator-as-dataset-in-pytorch
The PyTorch DataLoader turns datasets into iterables. I already have a generator which yields data samples that I want to use for training and testing. The reason I use a generator is because the total number of samples is too large to store in memory. I would like to load the samples in batches for training. What is the best way to do this? Can I do it without a custom DataLoader? The PyTorch dataloader doesn't like taking the generator as input. Below is a minimal example of what I want to do, which produces the error "object of type 'generator' has no len()". import torch from torch import nn from torch.utils.data import DataLoader def example_generator(): for i in range(10): yield i BATCH_SIZE = 3 train_dataloader = DataLoader(example_generator(), batch_size = BATCH_SIZE, shuffle=False) print(f"Length of train_dataloader: {len(train_dataloader)} batches of {BATCH_SIZE}") I am trying to take the data from an iterator and take advantage of the functionality of the PyTorch DataLoader. The example I gave is a minimal example of what I would like to achieve, but it produces an error. Edit: I want to be able to use this function for complex generators in which the len is not known in advance.
PyTorch's DataLoader actually has official support for an iterable dataset, but it just has to be an instance of a subclass of torch.utils.data.IterableDataset: An iterable-style dataset is an instance of a subclass of IterableDataset that implements the __iter__() protocol, and represents an iterable over data samples So your code would be written as: from torch.utils.data import IterableDataset class MyIterableDataset(IterableDataset): def __init__(self, iterable): self.iterable = iterable def __iter__(self): return iter(self.iterable) ... BATCH_SIZE = 3 train_dataloader = DataLoader(MyIterableDataset(example_generator()), batch_size = BATCH_SIZE, shuffle=False)
5
2
75,939,770
2023-4-5
https://stackoverflow.com/questions/75939770/how-can-i-construct-a-dataframe-that-uses-the-pyarrow-backend-directly-i-e-wi
Pandas 2.0 introduces the option to use PyArrow as the backend rather than NumPy. As of version 2.0, using it seems to require either calling one of the pd.read_xxx() methods with type_backend='pyarrow', or else constructing a DataFrame that's NumPy-backed and then calling .convert_dtypes on it. Is there a more direct way to construct a PyArrow-backed DataFrame?
If your data are known to be all of a specific type (say, int64[pyarrow]), this is straightforward: import pandas as pd data = {'col_1': [3, 2, 1, 0], 'col_2': [1, 2, 3, 4]} df = pd.DataFrame( data, dtype='int64[pyarrow]', # ... ) If your data are known to be all of the same type but the type is not known, then I don't know of a way to use the constructor. I tried dtype=pd.ArrowDtype, which does not work, and dtype=pd.ArrowDtype(), which needs an argument that I think would have to be a specific dtype. One option for possibly-mixed and unknown data types is to make a pa.Table (using one of its methods) and then send it to pandas with the types_mapper kwarg. For example, using a dict: import pyarrow as pa data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']} pa_table = pa.Table.from_pydict(data) df = pa_table.to_pandas(types_mapper=pd.ArrowDtype) The last line is exactly what pd.read_parquet with dtype_backend='pyarrow' does under the hood, after reading parquet into a pa.Table. I thought it was worth highlighting the approach since it wouldn't have occurred to me otherwise. The method pa.Table.from_pydict() will infer the data types. If the data are of mixed type, but known, and speed is very important, see https://stackoverflow.com/a/57939649 for how to make a predefined schema to pass to the pa.Table constructor. The above method loses most of the flexibility of the DataFrame constructor (specifying an index, accepting various container types as input, etc.). You might be able to code around this and encapsulate it in a function. Another workaround, as mentioned in the question, is to just construct a NumPy-backed DataFrame and call .convert_dtypes on it: import pandas as pd data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']} df = pd.DataFrame( data, index=[4, 5, 6, 7], # ... ).convert_dtypes(dtype_backend='pyarrow')
7
9
75,979,711
2023-4-10
https://stackoverflow.com/questions/75979711/reduced-dimensions-visualization-for-true-vs-predicted-values
I have a dataframe which looks like this: label predicted F1 F2 F3 .... F40 major minor 2 1 4 major major 1 0 10 minor patch 4 3 23 major patch 2 1 11 minor minor 0 4 8 patch major 7 3 30 patch minor 8 0 1 patch patch 1 7 11 I have label which is the true label for the id(not shown as it is not relevant), and predicted label, and then set of around 40 features in my df. The idea is to transform these 40 features into 2 dimensions and visualize them true vs predicted. We have 9 cases for all the three labels major,minor and patch vs their predictions. With PCA, it is not able to capture much variance with 2 components and I am not sure how to map the PCA values with the labels and predictions in the original df as a whole. A way to achieve this is to separate all cases into 9 dataframes and achieve the result, but this isn't what I am looking for. Is there any other way I can reduce and visualize the given data? Any suggestions would be highly appreciated.
You may want to consider a small multiple plot with one scatterplot for each cell of the confusion matrix. If PCA does not work well, t-distributed stochastic neighbor embedding (TSNE) is often a good alternative in my experience. For example, with the iris dataset, which also has three prediction classes, it could look like this: import pandas as pd import seaborn as sns from sklearn.manifold import TSNE iris = sns.load_dataset('iris') # Mock up some predictions. iris['species_pred'] = (40 * ['setosa'] + 5 * ['versicolor'] + 5 * ['virginica'] + 40 * ['versicolor'] + 5 * ['setosa'] + 5 * ['virginica'] + 40 * ['virginica'] + 5 * ['versicolor'] + 5 * ['setosa']) # Show confusion matrix. pd.crosstab(iris.species, iris.species_pred) species_pred setosa versicolor virginica species setosa 40 5 5 versicolor 5 40 5 virginica 5 5 40 # Reduce features to two dimensions. X = iris.iloc[:, :4].values X_embedded = TSNE(n_components=2, init='random', learning_rate='auto' ).fit_transform(X) iris[['tsne_x', 'tsne_y']] = X_embedded # Plot small multiples, corresponding to confusion matrix. sns.set() g = sns.FacetGrid(iris, row='species', col='species_pred', margin_titles=True) g.map(sns.scatterplot, 'tsne_x', 'tsne_y');
4
2
75,939,141
2023-4-5
https://stackoverflow.com/questions/75939141/create-task-from-within-another-running-task
In Python I create two async tasks: tasks = [ asyncio.create_task(task1(queue)), asyncio.create_task(task2(queue)), ] await asyncio.gather(*tasks) Now, I have a need to create a third task "task3" within task1. So I have: async def task1(queue): # and here I need to create the "task3": asyncio.create_task(task3(queue)) # and how can I schedule this? So I wish to schedule task3 also, without hurting task1 and task2 (they shall stay running). How am I supposed to do this?
You can add a done callback to it, and just let the your task1 run forward. https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.add_done_callback A matter that could arise there, though, and that is buried in the docs: the asyncio loop avoids creating hard references (just weak) to the tasks, and when it is under heavy load, it may just "drop" tasks that are not referenced somewhere else. So, you can have a registry, a set structure at module level will do, to keep track of your new tasks, and then you can use the done callback so that each task can remove itself from there: task3_registry = set() ... async def task1(queue): # and here I need to create the "task3": t3 = asyncio.create_task(task3(queue)) task3_registry.add(t3) t3.add_done_callback(lambda task: task3_registry.remove(task)) ... ... Even with this, when shutting down your asyncio loop (if that happens), the asyncio loop could just cancel the un-awaited-for task3's : then you can simply use that registry again to await for the completion of all of them, before returning from your root co-routine: async def main(): tasks = [ asyncio.create_task(task1(queue)), asyncio.create_task(task2(queue)), ] await asyncio.gather(*tasks) await asyncio.gather(task3_registry) # return Further answering: If I have a t3_is_running = True variable in task1, can lambda in the add_done_callback change it to False? If you need the variable inside task1 co-routine, it can be seen and changed from the callback as a closure variable. That requires writting the callback with the def syntax instead of lambda (it is just completely equivalent performance wise): async def task1(queue): # and here I need to create the "task3": t3 = asyncio.create_task(task3(queue)) task3_registry.add(t3) task3_is_running = True def done_callback(task): nonlocal task3_is_running task3_is_running = False task3_registry.remove(task) t3.add_done_callback(done_callback) ... If you want to see that variable from the the code in main however, and not as just something the code inside task1 can see, the callback have to be able to see the "task1" task instance itself. If they are single tasks, a global (in Python really a "module level variable") is just good enough - otherwise similar registries to task3_registry have to be created. If you don't want to have global states running around, you can just encapsulate your tasks in a class, even if you are not otherwise using object orientation: this allows one task to change the state for others by using attributes in the instance: class TaskSet: def __init__(self): self.task3_instance = None self.task3_is_running = False self.queue = ... pass # can't really be run as async - can actually be omitted async def async_run (self): # this will be executed by Python as an instance of the class # when it is awaited. tasks = [ asyncio.create_task(self.task1(self.queue)), asyncio.create_task(self.task2(self.queue)), ] await asyncio.gather(*tasks) if self.task3_instance: # or use a set if more than one task3 might be running await self.task3_instance async def task1(self): ... self.task3_instance = asyncio.create_task(t3) self.task3_is_running = True # not actually needed, as then one could just # check self.task3_instance and maybe call self.task3_instance.done self.task3_instance.add_done_callback(self.task3_done_callback) ... async def task2(self): ... async def task3(self): ... def task3_done_callback(self, ctx=None): self.task3_instance = None self.task3_is_running = False asyncio.run(TaskSet().async_run())
3
2
75,951,190
2023-4-6
https://stackoverflow.com/questions/75951190/sentence-transformer-use-of-evaluator
I came across this script which is second link on this page and this explanation I am using all-mpnet-base-v2 (link) and I am using my custom data I am having hard time understanding use of evaluator = EmbeddingSimilarityEvaluator.from_input_examples( dev_samples, name='sts-dev') The documentation says: evaluator – An evaluator (sentence_transformers.evaluation) evaluates the model performance during training on held-out dev data. It is used to determine the best model that is saved to disc. But in this case, as we are fine tuning on our own examples, train_dataloaderhas train_samples which has our model sentences and scores. Q1. How is train_samples different than dev_samples? Q2a: If the model is going to print performance against dev_samples then how is it going to help "to determine the best model that is saved to disc"? Q2b: Are we required to run dev_samples against the model saved on the disc and then compare scores? Q3. If my goal is to take a single model and then fine tune it, is it okay to skip parameters evaluator and evaluation_steps? Q4. How to determine total steps in the model? Do I need to set evaluation_steps? Updated I followed the answer provided by Kyle and have below follow up questions In the fit method I used the evaluator and below data was written to a file Q5. which metric is used to select the best epoch? is it cosine_pearson? Q6: why steps are -1 in the above output? Q7a: how to find steps based upon size of my data, batch size etc. Currently i have kept them to 1000. But not sure if that it is too much. I am running for 10 epochs, i have 2509 examples in the training data and batch size is 64. Q7b: are my steps going to be 2509/64? if yes then 1000 seems to be too high number
Question 1 How is train_samples different from dev_samples in the context of the EmbeddingSimilarityEvaluator? One needs to have a "held-out" split of data to be used for evaluation during training to avoid over-fitting. This "held-out" set is commonly referred to as the "development set" as it is the set of data that is used during development of the model/system. A pedagogical analogy can be drawn between a traditional education curriculum and that of training deep learning models: if one were to give students all the questions for a given topic, and then use the same subset of questions for evaluation, then eventually (most) students will learn to memorise the set of answers they repeatedly see while practicing, instead of learning the procedures to solve the questions in general. So if you are using your own custom data, make sure that a subset of that data is allocated to dev_samples in addition to train_samples and test_samples. Alternatively, if your own data is scarce, you can use the original training data to supplement your own training, development and test sets. The "test set" is the one that is only used after training has completed to determine the final performance of the model (i.e. all samples in the test set (ideally) haven't been seen before). Question 2 How is the model going to determine the best model that is saved to disc? Are we required to run dev_samples against the model saved on the disc and then compare scores? The previous answer alludes to how this will work, but in brief, once the evaluator has been instantiated, it will measure the correlation against the gold labels and then return the similarity score (depending on what main_similarity was initially set). If the produced embeddings (based on the development set) offer a higher correlation with their gold labels, and therefore, a higher score overall, then this "better" model is saved to disk. Hence, there is no need for you to "run dev_samples against the model saved on the disc and then compare scores", this process happens automatically provided everything has been set up appropriately. Question 3 If my goal is to take a single model and then fine tune it, is it okay to skip parameters evaluator and evaluation_steps? Based on the above answers, you can understand why you cannot "skip the evaluator and evaluation_steps". The evaluator is an integral part of "fine-tuning" (i.e. training) the model. Question 4 How to determine the total number of steps for the model? I need to set evaluation_steps. The evaluation_steps parameter sets the number of training steps that must occur before the model is evaluated using the evaluator. If the authors have set this to 1000, then leave it as is unless you notice problems with training. Alternatively, experiment with either increasing of decreasing it and select a value that works best for training. Follow-Up Questions Question 5 Which metric is used to select the best epoch? Is it cosine_pearson? By default, the maximum of the Cosine Spearman, Manhattan Spearman, Euclidean Spearman and Dot Product Spearman is used. Question 6 Why are steps -1 in the output? The -1 lets the user know that the evaluator was called after all training steps occurred for a particular epoch. 
If the steps_per_epoch was not set when calling the model.fit(), it defaults to None which sets the number of steps_per_epoch to the size of the train_dataloader which is passed to train_objectives when model.fit() is initially called, i.e.: model.fit(train_objectives=[(train_dataloader, train_loss)], ...) In your case, train_samples is 2,509 and train_batch_size is 64, so the size of train_dataloader, and therefore steps_per_epoch, will be 39. If the steps_per_epoch, is less than the evaluation_steps, then the number of training steps won't reach or exceed evaluation_steps and so additional calls to _eval_during_training on line 737 won't occur. This isn't a problem as the evaluation is forced to call at the end of each epoch anyway based on line 747. Question 7 How do I find the number of evaluation_steps based on the size of my training data (2,509 samples) and batch size (64)? Is 1000 too high? The evaluation_steps is available to tell the model during the training process whether it should prematurely run an evaluation using the evaluator part-way through an epoch. Otherwise, the evaluation is forced to run at the end of the epoch after steps_per_epoch have completed. Based on the numbers you provided, you could, for example, set evaluation_steps to 20 to get an evaluation to run approx. half-way through an epoch (assuming an epoch is 39 training_steps). See this answer and its question for more info. on batch size vs. epochs vs. steps per epoch.
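For completeness, a minimal sketch of wiring a held-out development set into training; model, train_dataloader and train_loss are assumed to already exist as in the linked training script, and the sentence pairs and scores below are placeholders for your own data:

    from sentence_transformers import InputExample
    from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

    dev_samples = [
        InputExample(texts=["first sentence", "second sentence"], label=0.8),
        InputExample(texts=["third sentence", "fourth sentence"], label=0.1),
        # ... held-out pairs that do NOT appear in train_samples
    ]
    evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_samples, name="sts-dev")

    model.fit(
        train_objectives=[(train_dataloader, train_loss)],
        evaluator=evaluator,
        epochs=10,
        evaluation_steps=20,                   # mid-epoch evaluation; it also runs at the end of every epoch
        output_path="output/finetuned-model",  # the best-scoring checkpoint is saved here
    )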
4
4
75,985,476
2023-4-11
https://stackoverflow.com/questions/75985476/is-there-a-way-to-use-private-fields-to-validation-in-pydantic
I want to set one field, which cannot be in response model in abstract method. I set this field to private. I want to use this field to validate other public field. class BaseAsset(BaseModel, ABC): amount: int precision: int nai: str _nai_pattern: str = None def __init__(self, **kwargs): super().__init__(**kwargs) object.__setattr__(self, '_nai_pattern', self.set_nai_pattern()) @abstractmethod def set_nai_pattern(self): pass @validator('nai') def check_nai_format(cls, v, values, **kwargs): if v != values['_nai_pattern']: raise ValueError else: return v class Config: underscore_attrs_are_private = True class AssetHive(BaseAsset): def set_nai_pattern(self): return "@@000000021"
I would suggest the following approach. Make nai_pattern a regular (not private) field, but exclude it from dumping by setting exclude=True in its Field constructor. In addition, hook into schema_extra of the model Config to remove the field from the schema as well. Make the method to get the nai_pattern a class method, so that it can be called from inside a validator. Then define a always=True field validator for nai_pattern that checks if the value is None and if so, calls the method to get the value. Write your validator for nai as you did before, but make sure that the nai field itself is defined after nai_pattern because that will ensure the nai validator is called after that of nai_pattern and you will be guaranteed to have a value to check against. ("Validation is done in the order fields are defined.") Here is my suggested code: from abc import ABC, abstractmethod from typing import Any, Optional from pydantic import BaseModel, Field, validator class BaseAsset(BaseModel, ABC): amount: int precision: int nai_pattern: Optional[str] = Field(default=None, exclude=True) nai: str @classmethod @abstractmethod def get_nai_pattern(cls) -> str: raise NotImplementedError @validator("nai_pattern", always=True) def set_nai_pattern(cls, v: Optional[str]) -> str: if v is None: return cls.get_nai_pattern() return v @validator("nai") def check_nai(cls, v: str, values: dict[str, Any]) -> str: nai_pattern = values.get("nai_pattern") if v != nai_pattern: raise ValueError(f"{v} does not match {nai_pattern=}") return v class Config: @staticmethod def schema_extra(schema: dict[str, Any]) -> None: del schema["properties"]["nai_pattern"] Demo: from pydantic import ValidationError class Foo(BaseAsset): @classmethod def get_nai_pattern(cls) -> str: return "abc" print(Foo.schema_json(indent=4)) foo = Foo(amount=1, precision=2, nai="abc") print(foo.json(indent=4)) try: Foo(amount=1, precision=2, nai="xyz") except ValidationError as err: print(err.json(indent=4)) Output: { "title": "Foo", "type": "object", "properties": { "amount": { "title": "Amount", "type": "integer" }, "precision": { "title": "Precision", "type": "integer" }, "nai": { "title": "Nai", "type": "string" } }, "required": [ "amount", "precision", "nai" ] } { "amount": 1, "precision": 2, "nai": "abc" } [ { "loc": [ "nai" ], "msg": "xyz does not match nai_pattern='abc'", "type": "value_error" } ]
3
3
75,956,534
2023-4-7
https://stackoverflow.com/questions/75956534/select-camera-programatically
My program should select three cameras and take pictures with each of them. I have the following code at the moment:

def getCamera(camera):
    graph = FilterGraph()
    print("Camera List: ")
    print(graph.get_input_devices())
    # tbd get right Camera
    try:
        device = graph.get_input_devices().index("HD Pro Webcam C920")
    except ValueError as e:
        device = graph.get_input_devices().index("Integrated Webcam")
    return device

The code above worked fine, but now I have three similar cameras with the same name. The output of this:

graph = FilterGraph()
print("Camera List: ")
print(graph.get_input_devices())

is a list with three cameras that all have the same name. I thought they were in an array and I could select them like this, as with any other array:

device = graph.get_input_devices().index(0)

But I can only access them by name, like in the first code example. How can I access the cameras by index?
You can use the index of the camera in the list to select it. For example, if you want to select the first camera in the list, you can use the following code: device = graph.get_input_devices().index("HD Pro Webcam C920") To select the second camera: device = graph.get_input_devices().index("HD Pro Webcam C920", 1) And to select the third camera, you can use: device = graph.get_input_devices().index("HD Pro Webcam C920", 2) The second argument in the index() method specifies the starting index for the search. You can also modify your getCamera() function to take an argument that specifies the index of the camera you want to use: def getCamera(camera_index): graph = FilterGraph() cameras = graph.get_input_devices() if camera_index >= len(cameras): raise ValueError("Camera index out of range") return cameras[camera_index] for i in range(3): camera = getCamera(i) #Do something with it
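As a follow-up to the answer above: if the end goal is to grab a picture from each device, the numeric position in the device list can be used directly as the capture index. This is a hedged sketch; the use of OpenCV (cv2) for capture and the pygrabber import path are assumptions on my part, not part of the original post:

import cv2
from pygrabber.dshow_graph import FilterGraph

devices = FilterGraph().get_input_devices()      # e.g. three entries named "HD Pro Webcam C920"
for device_index, name in enumerate(devices):
    cap = cv2.VideoCapture(device_index, cv2.CAP_DSHOW)   # DirectShow backend on Windows
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"camera_{device_index}.png", frame)  # one snapshot per physical camera
    cap.release()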
3
6
75,990,745
2023-4-11
https://stackoverflow.com/questions/75990745/remove-timezone-from-timestamp-but-keep-the-local-time
I have a dataframe with epoch time. I convert the epoch time to a timestamp with my local timezone. I would like to remove the timezone information but keep my local time in the timestamp (subtract the timezone offset from the timestamp and then remove the timezone). This is the code I have:

epochs = np.arange(1644516000, 1644516000 + 1800*10, 1800)
df = pd.DataFrame({'time': epochs})
df['time'] = pd.to_datetime(df['time'], unit='s').dt.tz_localize("US/Pacific")

I cannot use dt.tz_localize(None), since it converts it back to UTC. My desired output is a timestamp with no timezone information but in my local timezone:

pd.date_range('2022-02-10 10:00', freq='30min', periods=10)

How do I do that?
Essentially you're trying to get whatever time it was locally after x seconds since the unix epoch in a tz-naive timestamp. Achieving this is a bit weird because: My experience with tz-naive timestamps in pandas says that they are usually "local" to the user's current timezone. Converting to timestamp from an epoch timestamp feels to me a tz-aware operation, given that you count from UTC 1970-01-01, not your local 1970-01-01. So what I would expect from e.g. pd.to_datetime(1644516000, unit="s") would be one of: A UTC-localized timestamp: Timestamp('2022-02-10 18:00:00+0000', tz='UTC') The above converted to local: Timestamp('2022-02-10 10:00:00-0800', tz='US/Pacific') Or a tz-naive timestamp representing the local time since UTC epoch: Timestamp('2022-02-10 10:00:00') (which is what you're searching for) But instead, pd.to_datetime gives you the UTC local time since the UTC epoch, but as a tz-naive timestamp: >>> pd.to_datetime(1644516000, unit="s") Timestamp('2022-02-10 18:00:00') One solution would be to manually do the three steps above, i.e. by first specifying that the received timestamps are UTC, then converting to your local time, then removing the tz-info: df['time'] = pd.to_datetime( df['time'], unit='s', utc=True ).dt.tz_convert("US/Pacific").dt.tz_localize(None) but it feels like I'm missing something easier here...
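For reference, a small runnable check of the suggested chain, using the epochs from the question (the printed values are tz-naive local Pacific times, matching the desired output):

import numpy as np
import pandas as pd

epochs = np.arange(1644516000, 1644516000 + 1800 * 10, 1800)
df = pd.DataFrame({'time': epochs})
df['time'] = (
    pd.to_datetime(df['time'], unit='s', utc=True)   # 1. interpret the epochs as UTC
      .dt.tz_convert('US/Pacific')                   # 2. convert to local wall time
      .dt.tz_localize(None)                          # 3. drop the tz info, keep the local time
)
print(df.head(3))   # 2022-02-10 10:00:00, 10:30:00, 11:00:00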
4
1
75,987,470
2023-4-11
https://stackoverflow.com/questions/75987470/remove-borders-of-a-n-dimensional-numpy-array
I am trying to replace all the values of the border of my n-dimensional array by False. So far, I have seen that numpy provides np.pad that allows me to grow an array in all dimensions with an arbitrary array. Is there an equivalent to do the opposite and "shrink" the array by cutting the borders? Here is an example in 2D that I would like to extend to arbitrary dimension: import numpy as np nd_array = np.random.randn(100,100)>0 # Just to have a random bool array, but the same would apply with floats, for example cut_array = nd_array[1:-1, 1:-1] # This is what I would like to generalize to arbitrary dimension padded_array = np.pad(cut_array, pad_width=1, mode='constant', constant_values=False) Of course, if there is an easier way to change the values of the borders in arbitrary dimension, that would also be appreciated.
I would not use the approach of first cropping then padding, because like this you move a lot of memory around. Instead I would explicitly set the border indexes to the desired value: import numpy as np border_value = False nd_array = np.random.randn(100,100) > 0 # Iterate over all dimensions of `nd_array` for dim in range(nd_array.ndim): # Make current dimension the first dimension array_moved = np.moveaxis(nd_array, dim, 0) # Set border values in the current dimension array_moved[0] = border_value array_moved[-1] = border_value # We do not even need to move the current dimension # back to its original position, as `np.moveaxis()` # provides a view into the original data, thus by # altering the values of `array_moved`, we also # alter the values of `nd_array`. So we are done. Note that np.moveaxis() is a pretty cheap operation, as it only adjusts the strides of the array (to produce array_moved, in our case), so no actual array data gets moved around.
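A hedged alternative sketch (not from the answer above): the questioner's [1:-1, 1:-1] slicing can also be generalized to any number of dimensions by building a tuple of slice objects once and writing the interior of the input into an all-False array of the same shape:

import numpy as np

nd_array = np.random.randn(4, 5, 6) > 0

inner = tuple(slice(1, -1) for _ in range(nd_array.ndim))  # N-dimensional "[1:-1, ..., 1:-1]"
out = np.zeros_like(nd_array)        # all False, same shape and dtype
out[inner] = nd_array[inner]         # copy only the interior; the border stays False

This allocates a new array, so the in-place np.moveaxis approach above is still preferable when memory matters.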
3
4
75,934,851
2023-4-5
https://stackoverflow.com/questions/75934851/how-to-make-sure-list-defined-in-class-variable-objects-are-not-shared-across-di
I have the following: class Loop: def __init__(self, group_class: Type[SegmentGroup], start: str, end: str): self.group_class = group_class self.start = start self.end = end self._groups: List[SegmentGroup] = [] def from_segments(self, segments): self._groups = [] # have to clear so this is not shared among other `Article` instances for segment in segments: group = self.group_class() if some_conditions_are_valid and after_have_populated_group: self._groups.append(group) class SegmentGroupMetaClass(type): def __new__(cls, name, bases, attrs): exclude = set(dir(type)) components: Dict[str, str] = {} loops: Dict[str, str] = {} for k, v in attrs.items(): if k in exclude: continue if isinstance(v, Segment): components[v.unique_tag] = k elif isinstance(v, Loop): loops[v.start] = k attrs.update(__components__=components, __loops__=loops) return super().__new__(cls, name, bases, attrs) class SegmentGroup(metaclass=SegmentGroupMetaClass): def from_segments(self, ...): """May call `Loop.from_segments` And then I have: class Topic(SegmentGroup): infos = Segment("CAV") ... class Article(SegmentGroup): topics = Loop(Topic, ...) ... I have a routine that essentially calls Article().from_segments(segments) where segments is injected from another service in a loop that creates a new Article instance on every new iteration. As you may have noticed, Loop._groups must be properly handled in order to not have its values shared among different Article instances. My (hacky) solution works fine if we call Loop.from_segments for all iterations, but this is not guaranteed as the topics segment loop may be missing in the injected segments. This means that for an Article without the topics segments, it'll actually use the values from the previous iteration (because Loop._groups won't be cleared since Loop.from_segments won't be called). I can think in a way to fix this by clearing Loop._groups at Article.__init__... class SegmentGroup(metaclass=SegmentGroupMetaClass): def __init__(self) -> None: for loop_label in self.__loops__.values(): setattr(getattr(self, loop_label), "_groups", []) ...but it looks even hackier and non-elegant/less efficient to me. How would you clear Loop._groups for every new Article instance?
How to make sure list defined in class variable objects are not shared across different instances? TL;DR: make it so that your class-variable instances use Python's descriptor mechanism (the __get__ method), so that the Loop has control over which instance it is operating on each time. You can make your Loop class a descriptor: this way, any time it is used in an Article instance, Python will get it through its __get__ method, and you can then create a sub-object or a closure that will use a list allocated in the host instance. In this example, whenever it is retrieved from an instance (e.g. Article().topics), this will create a temporary copy of the loop object and set an _instance attribute on it. Then I promoted the ._groups attribute of the Loop object itself to a property (which uses the same __get__ mechanism under the hood), to store a list on the host instance on first access and then always use that list. All you have to do is ensure your Loop class works well with Python's copy.copy - if all its code is what is listed here, it will work nicely. Also: using .copy and not .deepcopy ensures the instance-bound Loop that is created will share any other data with the main instance - as well as be lightweight.

from copy import copy

class Loop:
    def __init__(self, group_class: Type[SegmentGroup], start: str, end: str):
        self.group_class = group_class
        self.start = start
        self.end = end

    def __set_name__(self, owner, name):
        # uses the Python __set_name__ mechanism to automatically know the name
        # each Loop instance is associated to in a class
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        instance_specific_loop = copy(self)
        instance_specific_loop._instance = instance
        return instance_specific_loop   # hand back the per-instance copy

    @property
    def _groups(self):
        attr_name = f"_groups_{self.name}"
        if not hasattr(self._instance, attr_name):
            setattr(self._instance, attr_name, [])
        return getattr(self._instance, attr_name)

    def from_segments(self, segments):
        for segment in segments:
            group = self.group_class()
            if some_conditions_are_valid and after_have_populated_group:
                self._groups.append(group)


class SegmentGroupMetaClass(type):
    ...


class SegmentGroup(metaclass=SegmentGroupMetaClass):
    def from_segments(self, ...):
        """May call `Loop.from_segments`"""
        ...
        # it will work, as soon as it does:
        # self._loop_attribute_name_.from_segments(...)


class Article(SegmentGroup):
    topics = Loop(Topic, ...)
    ...
3
1
75,989,725
2023-4-11
https://stackoverflow.com/questions/75989725/i-cant-install-tiktoken-on-python-with-pip
I am trying to run some python code which includes some tiktoken calls. I've tried to install tiktoken for python but i can't. I am trying to install with pip command but i am getting an error about rust. I am wondering if i should install some rust "thing" outside of python or pip. Collecting tiktoken Using cached tiktoken-0.3.3.tar.gz (25 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: requests>=2.26.0 in g:\program files\python\lib\site-packages (from tiktoken) (2.28.2) Collecting regex>=2022.1.18 Using cached regex-2023.3.23-cp38-cp38-win32.whl (256 kB) Requirement already satisfied: idna<4,>=2.5 in g:\program files\python\lib\site-packages (from requests>=2.26.0->tiktoken) (2.9) Requirement already satisfied: certifi>=2017.4.17 in g:\program files\python\lib\site-packages (from requests>=2.26.0->tiktoken) (2020.4.5.1) Requirement already satisfied: charset-normalizer<4,>=2 in g:\program files\python\lib\site-packages (from requests>=2.26.0->tiktoken) (3.1.0) Requirement already satisfied: urllib3<1.27,>=1.21.1 in g:\program files\python\lib\site-packages (from requests>=2.26.0->tiktoken) (1.26.15) Building wheels for collected packages: tiktoken Building wheel for tiktoken (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for tiktoken (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [37 lines of output] running bdist_wheel running build running build_py creating build creating build\lib.win32-cpython-38 creating build\lib.win32-cpython-38\tiktoken copying tiktoken\core.py -> build\lib.win32-cpython-38\tiktoken copying tiktoken\load.py -> build\lib.win32-cpython-38\tiktoken copying tiktoken\model.py -> build\lib.win32-cpython-38\tiktoken copying tiktoken\registry.py -> build\lib.win32-cpython-38\tiktoken copying tiktoken\__init__.py -> build\lib.win32-cpython-38\tiktoken creating build\lib.win32-cpython-38\tiktoken_ext copying tiktoken_ext\openai_public.py -> build\lib.win32-cpython-38\tiktoken_ext running egg_info writing tiktoken.egg-info\PKG-INFO writing dependency_links to tiktoken.egg-info\dependency_links.txt writing requirements to tiktoken.egg-info\requires.txt writing top-level names to tiktoken.egg-info\top_level.txt reading manifest file 'tiktoken.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'Makefile' adding license file 'LICENSE' writing manifest file 'tiktoken.egg-info\SOURCES.txt' copying tiktoken\py.typed -> build\lib.win32-cpython-38\tiktoken running build_ext running build_rust error: can't find Rust compiler If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler. To update pip, run: pip install --upgrade pip and then retry package installation. If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for tiktoken Failed to build tiktoken ERROR: Could not build wheels for tiktoken, which is required to install pyproject.toml-based projects I've tried "pip install tiktoken".
Pip is trying to build the tiktoken library from source and you are missing the Rust compiler. You can either install the Rust compiler on your system, or install tiktoken from a wheel instead of building it from source. See this issue in the tiktoken GitHub repo for more help
4
2
75,975,807
2023-4-10
https://stackoverflow.com/questions/75975807/how-to-stop-a-loop-on-shutdown-in-fastapi
I have a route / which starts an endless loop (technically until the websocket is disconnected, but in this simplified example it is truly endless). How do I stop this loop on shutdown?

from fastapi import FastAPI
import asyncio

app = FastAPI()
running = True

@app.on_event("shutdown")
def shutdown_event():
    global running
    running = False

@app.get("/")
async def index():
    while running:
        await asyncio.sleep(0.1)

According to the docs, @app.on_event("shutdown") should be called during the shutdown, but I suspect it is called similarly to the lifetime event, which runs after everything is finished, which is a deadlock in this situation. To test: I run it as uvicorn module.filename:app --host 0.0.0.0, call curl http://ip:port/, then stop the server (pressing CTRL+C), and you see that it hangs forever, since running is never set to False because shutdown_event is not called. (Yes, you can force the shutdown by pressing CTRL+C.)
import signal
import asyncio
from fastapi import FastAPI

app = FastAPI()
running = True

def stop_server(*args):
    global running
    running = False

@app.on_event("startup")
def startup_event():
    signal.signal(signal.SIGINT, stop_server)

@app.get("/")
async def index():
    while running:
        await asyncio.sleep(0.1)

Source: https://github.com/tiangolo/fastapi/discussions/9373#discussioncomment-5573492 Registering a handler for the SIGINT signal allows catching the first CTRL+C. This sets running to False, which ends the loop in index() and terminates the running request, allowing the server to shut down.
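A hedged variation on the snippet above (my own assumption, not from the linked discussion): signal.signal() returns the previously installed handler, so it can be kept and chained, letting the same CTRL+C both end the loop and still reach whatever shutdown handling was registered before:

import asyncio
import signal
from fastapi import FastAPI

app = FastAPI()
running = True
original_handler = None

def stop_server(signum, frame):
    global running
    running = False                      # ends the loop in index()
    if callable(original_handler):
        original_handler(signum, frame)  # also run whatever handler was there before

@app.on_event("startup")
def startup_event():
    global original_handler
    original_handler = signal.signal(signal.SIGINT, stop_server)

@app.get("/")
async def index():
    while running:
        await asyncio.sleep(0.1)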
8
2
75,987,725
2023-4-11
https://stackoverflow.com/questions/75987725/using-webdav-to-list-files-on-nextcloud-server-results-in-method-not-supported
I'm trying to list files using webdab but I'm having issues. I can create directories and put files just fine but not list a directory or pull a file. I'm seeing the error, "Method not supported". from webdav3.client import Client options = { 'webdav_hostname': "https://___________.com/remote.php/dav/files/", 'webdav_login': "user_name", 'webdav_password': "password" } client = Client(options) print(client.list('/')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 67, in _wrapper res = fn(self, *args, **kw) File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 264, in list response = self.execute_request(action='list', path=directory_urn.quote()) File "/home/<user>/.local/lib/python3.10/site-packages/webdav3/client.py", line 228, in execute_request raise MethodNotSupported(name=action, server=self.webdav.hostname) webdav3.exceptions.MethodNotSupported: Method 'list' not supported for https://________.com/remote.php/dav/files
The client.list() method assumes the remote root directory by default. Since you supply https://___________.com/remote.php/dav/files/ as your webdav_hostname, the root directory it tries to access when you call client.list('/') is the top-level files directory. As a Nextcloud user you don't have access to that level, so listing it is impossible. However, you do have access to the files/<username> directory, so listing client.list('/<username>/') works. To avoid having to prepend the username to every list command, you can set the webdav_hostname to .../remote.php/dav/files/<username>. Then a call to client.list() should work straight away.
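Putting that together, a minimal sketch of the corrected configuration (the <username> and password values are placeholders, not taken from the original post):

from webdav3.client import Client

options = {
    'webdav_hostname': "https://___________.com/remote.php/dav/files/<username>",
    'webdav_login': "<username>",
    'webdav_password': "password",
}
client = Client(options)
print(client.list())   # now lists the contents of the user's own root directory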
4
2
75,968,081
2023-4-8
https://stackoverflow.com/questions/75968081/i-cant-install-anaconda-on-a-macbook-pro-m1-with-ventura-13-3-1
This is my first question here :) When I try to install Anaconda on my MacBook (M1) with Ventura 13.3.1, I receive the following error: "This package is incompatible with this version of macOS." I tried the arm64 installer and the x86 installer; both lead to the same error message. I used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. I also did a complete reset of my MacBook; nothing changed. Does anyone have the same issue or know how I can fix this? Thanks a lot
If you have Homebrew installed, you should be able to run "brew install anaconda".
4
5
75,980,399
2023-4-10
https://stackoverflow.com/questions/75980399/python-linear-getitem-for-a-pair-of-list-of-lists
I currently have a class which stores a list of lists. The inner lists are not of the same length. I made the class subscriptable with the following code (possibly not the best way of doing this, and perhaps overly fancy). class MyClass: def __init__(self): # self.instructions = [] # for demo purposes self.instructions.append([0, 1, 2]) self.instructions.append([3, 4, 5, 6]) self.instructions.append([7, 8]) def __getitem__(self, ind): if ind >= 0: iterator = self.instructions.__iter__() compare = int.__gt__ inc = int.__add__ else: iterator = reversed(self.instructions) compare = int.__le__ inc = int.__sub__ s = 0 for tp in iterator: L = len(tp) if compare(inc(s, L), ind): return tp[ind-s] else: s = inc(s, L) else: raise IndexError('index out of range') This works. For instance >>> x = MyClass() >>> x[5] 5 >>> x[-5] 4 Now, I need to modify the class so it now stores two list of lists. The two lists are instructions and annotations, and both have the same length. But len(instructions[i]) does not have to equal len(annotations[i]). class NewClass: def __init__(self): # self.instructions = [] self.annotations = [] # for demo purposes self.instructions.append([0, 1, 2]) self.instructions.append([5, 6, 7, 8]) self.instructions.append([12, 13]) self.annotations.append([3, 4]) self.annotations.append([9, 10, 11]) self.annotations.append([14, 15, 16]) def __getitem__(self, ind): pass I want to make this subscriptable, with the order of elements oscillating between the instructions sublists and the annotations sublists. The demo data indicates the subscripting order. I want >>> y = NewClass() >>> y[9] 9 >>> y[-4] 13 What's an efficient way of doing this? I could write a solution where I alternatively iterate through the two sublists. But I feel like I am straying far from the correct solution. I am also looking for a non-for-loop solution as I am dealing with long lists.
While your implementation is nice, I would like to share my own way for iterating using chain.from_iterable. Because basically we're chaining the items whether from the beginning or at the end. For one list: The only part that needs explanation is map(reversed, reversed(self.instructions)). We not only need to reverse the whole list, but also the individual sublists. from itertools import chain class MyClass: def __init__(self): self.instructions = [ [0, 1, 2], [3, 4, 5, 6], [7, 8], ] def __getitem__(self, ind): if ind >= 0: chunks = self.instructions range_parameter = ind + 1 else: chunks = map(reversed, reversed(self.instructions)) range_parameter = abs(ind) iterator = chain.from_iterable(chunks) try: for _ in range(range_parameter): n = next(iterator) except StopIteration: raise IndexError("index out of range") return n x = MyClass() print(x[5]) print(x[-5]) For two lists: Since you said we need to oscillate, zip is the right tool for that. When ind is positive it's straightforward. We zip them and use chain.from_iterable two times because otherwise it gives us individual sub-lists. If ind is negative, we need two reversed() before zipping. One for outer lists, and one for sublists. from itertools import chain class MyClass: def __init__(self): self.instructions = [ [0, 1, 2], [5, 6, 7, 8], [12, 13], ] self.annotations = [ [3, 4], [9, 10, 11], [14, 15, 16], ] def __getitem__(self, ind): if ind >= 0: chunks = zip(self.instructions, self.annotations) range_parameter = ind + 1 else: chunks = zip( map(reversed, reversed(self.annotations)), map(reversed, reversed(self.instructions)), ) range_parameter = abs(ind) iterator = chain.from_iterable(chain.from_iterable(chunks)) try: for _ in range(range_parameter): n = next(iterator) except StopIteration: raise IndexError("index out of range") return n x = MyClass() print(x[9]) print(x[-4])
4
2
75,983,861
2023-4-11
https://stackoverflow.com/questions/75983861/scrapy-crawl-only-first-5-pages-of-the-site
I am working on the solution to the following problem, My boss wants from me to create a CrawlSpider in Scrapy to scrape the article details like title, description and paginate only the first 5 pages. I created a CrawlSpider but it is paginating from all the pages, How can I restrict the CrawlSpider to paginate only the first latest 5 pages? The site article listing page markup that opens when we click on pagination next link: Listing page markup: <div class="list"> <div class="snippet-content"> <h2> <a href="https://example.com/article-1">Article 1</a> </h2> </div> <div class="snippet-content"> <h2> <a href="https://example.com/article-2">Article 2</a> </h2> </div> <div class="snippet-content"> <h2> <a href="https://example.com/article-3">Article 3</a> </h2> </div> <div class="snippet-content"> <h2> <a href="https://example.com/article-4">Article 4</a> </h2> </div> </div> <ul class="pagination"> <li class="next"> <a href="https://www.example.com?page=2&keywords=&from=&topic=&year=&type="> Next </a> </li> </ul> For this, I am using Rule object with restrict_xpaths argument to get all the article links, and for the follow I am executing parse_item class method that will get the article title and description from the meta tags. Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item", follow=True) Detail page markup: <meta property="og:title" content="Article Title"> <meta property="og:description" content="Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum."> After this, I have added another Rule object to handle pagination CrawlSpider will use the following link to open other listing page and do the same procedure again and again. Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a')) This is my CrawlSpider code: from scrapy.linkextractors import LinkExtractor from scrapy.spiders import CrawlSpider, Rule import w3lib.html class ExampleSpider(CrawlSpider): name = "example" allowed_domains = ["example.com"] start_urls = ["https://www.example.com/"] custom_settings = { 'FEED_URI': 'articles.json', 'FEED_FORMAT': 'json' } total = 0 rules = ( # Get the list of all articles on the one page and follow these links Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item", follow=True), # After that get pagination next link get href and follow it, repeat the cycle Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a')) ) def parse_item(self, response): self.total = self.total + 1 title = response.xpath('//meta[@property="og:title"]/@content').get() or "" description = w3lib.html.remove_tags(response.xpath('//meta[@property="og:description"]/@content').get()) or "" return { 'id': self.total, 'title': title, 'description': description } Is there a way we can restrict the crawler to crawl only the first 5 pages?
Solution 1: use process_request. from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor def limit_requests(request, response): # here we have the page number. # page_number = request.url[-1] # if int(page_number) >= 6: # return None # here we use a counter if not hasattr(limit_requests, "page_number"): limit_requests.page_number = 0 limit_requests.page_number += 1 if limit_requests.page_number >= 5: return None return request class ExampleSpider(CrawlSpider): name = 'example_spider' start_urls = ['https://scrapingclub.com/exercise/list_basic/'] page = 0 rules = ( # Get the list of all articles on the one page and follow these links Rule(LinkExtractor(restrict_xpaths='//div[@class="card-body"]/h4/a'), callback="parse_item", follow=True), # After that get pagination next link get href and follow it, repeat the cycle Rule(LinkExtractor(restrict_xpaths='//li[@class="page-item"][last()]/a'), process_request=limit_requests) ) total = 0 def parse_item(self, response): title = response.xpath('//h3//text()').get(default='') price = response.xpath('//div[@class="card-body"]/h4//text()').get(default='') self.total = self.total + 1 return { 'id': self.total, 'title': title, 'price': price } Solution 2: overwrite _requests_to_follow method (should be slower though). from scrapy.http import HtmlResponse from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor class ExampleSpider(CrawlSpider): name = 'example_spider' start_urls = ['https://scrapingclub.com/exercise/list_basic/'] rules = ( # Get the list of all articles on the one page and follow these links Rule(LinkExtractor(restrict_xpaths='//div[@class="card-body"]/h4/a'), callback="parse_item", follow=True), # After that get pagination next link get href and follow it, repeat the cycle Rule(LinkExtractor(restrict_xpaths='//li[@class="page-item"][last()]/a')) ) total = 0 page = 0 def _requests_to_follow(self, response): if not isinstance(response, HtmlResponse): return if self.page >= 5: # stopping condition return seen = set() for rule_index, rule in enumerate(self._rules): links = [ lnk for lnk in rule.link_extractor.extract_links(response) if lnk not in seen ] for link in rule.process_links(links): if rule_index == 1: # assuming there's only one "next button" self.page += 1 seen.add(link) request = self._build_request(rule_index, link) yield rule.process_request(request, response) def parse_item(self, response): title = response.xpath('//h3//text()').get(default='') price = response.xpath('//div[@class="card-body"]/h4//text()').get(default='') self.total = self.total + 1 return { 'id': self.total, 'title': title, 'price': price } The solutions are pretty much self explanatory, if you want me to add something please ask in the comments.
4
2
75,983,653
2023-4-11
https://stackoverflow.com/questions/75983653/how-to-save-a-yolov8-model-after-some-training-on-a-custom-dataset-to-continue-t
I'm training YOLOv8 in Colab on a custom dataset. How can I save the model after some epochs and continue the training later? I did the first epoch like this:

import torch
model = YOLO("yolov8x.pt")
model.train(data="/image_datasets/Website_Screenshots.v1-raw.yolov8/data.yaml", epochs=1)

While looking for the options, it seems that with YOLOv5 it would be possible to save the model or the weights dict. I tried these, but neither the save nor the load seems to work in this case:

torch.save(model, 'yolov8_model.pt')
torch.save(model.state_dict(), 'yolov8x_model_state.pt')

I am currently working on a project using YOLOv8. After training on a custom dataset, the best weights are automatically stored in the runs/detect/train/weights directory as best.pt. When I retrain the model, I use the best.pt weights instead of yolov8x.pt to train the model.
4
6
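Following up on the YOLOv8 answer above, a minimal sketch of that workflow (the weights path is the default ultralytics run directory mentioned in the answer, and the data.yaml path is copied from the question):

from ultralytics import YOLO

# load the best checkpoint produced by the earlier training run
model = YOLO("runs/detect/train/weights/best.pt")

# continue training from those weights on the same custom dataset
model.train(data="/image_datasets/Website_Screenshots.v1-raw.yolov8/data.yaml", epochs=1)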
75,954,655
2023-4-7
https://stackoverflow.com/questions/75954655/sequential-chaining-of-itertools-operators
I'm looking for a nice way to sequentially combine two itertools operators. As an example, suppose we want to select numbers from a generator sequence greater than a threshold, after having gotten past that threshold. For a threshold of 12000, these would correspond to it.takewhile(lambda x: x<12000) and it.takewhile(lambda x: x>=12000): # Set up an example generator: def lcg(a=75,c=74,m=2**16+1,x0 = 1): xn = x0 yield xn while True: xn = (a*xn+c) % m yield xn # First 20 elements: list(it.islice(lcg(), 20)) [1, # <- start sequence, start it.takewhile(lambda x: x<12000) 149, 11249, # <- last element of it.takewhile(lambda x: x<12000) 57305, # <- start it.takewhile(lambda x: x>=12000) here 38044, 35283, 24819, 26463, 18689, 25472, # <- last element of it.takewhile(lambda x: x>=12000); end of sequence 9901, 21742, 57836, 12332, 7456, 34978, 1944, 14800, 61482, 23634] Is there a way to select the sequence of greater than 12000, including the initial values less than 12000, i.e. the desired output is: [1, 149, 11249, 57305, 38044, 35283, 24819, 26463, 18689, 25472] This is trivial to do with two for-loops, but I'm looking for an itertools-type way (maybe a one-liner?) of combining the two operators without reseting the lcg generator.
One approach to writing a one-liner for the problem with the existing itertools library would be to use a flag variable with {0} as a default value to indicate which predicate to use. At first the flag evaluates to a truthy value so that the first predicate (x < 12000) is made effective, and if the first predicate fails, pop the set so the flag becomes falsey to make the second predicate (x >= 12000) effective. By popping 0 from the set, it also allows the expression to fall back to the second predicate on the same iteration when the first predicate fails: takewhile(lambda x, f={0}: f and (x < 12000 or f.pop()) or x >= 12000, lcg()) Note that it's safe to use a mutable object as the default value of an argument in this case because lambda creates a new function object along with a new instance of the mutable object every time it's evaluated. Demo: https://replit.com/@blhsing/VirtualGuiltyRatio
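For comparison, a more explicit (non-one-liner) sketch of the same idea, not part of the answer above: a small generator that yields the prefix below the threshold plus the first element at or above it, then hands the rest to itertools.takewhile:

import itertools as it

def lcg(a=75, c=74, m=2**16 + 1, x0=1):   # the generator from the question
    xn = x0
    yield xn
    while True:
        xn = (a * xn + c) % m
        yield xn

def below_then_above(iterable, threshold=12000):
    iterator = iter(iterable)
    for x in iterator:
        yield x
        if x >= threshold:                # first element past the threshold
            break
    yield from it.takewhile(lambda x: x >= threshold, iterator)

print(list(below_then_above(lcg())))
# [1, 149, 11249, 57305, 38044, 35283, 24819, 26463, 18689, 25472]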
4
3
75,981,635
2023-4-11
https://stackoverflow.com/questions/75981635/tkinter-zoom-with-text-and-other-elements-in-canvas
I'm trying to add support for zooming in and out inside a Canvas widget, containing both text element (created using create_text) and non-text elements, such as rectangles (created with create_rectangle), etc. So far, I made the following MRE using both part of this answer and this one: import tkinter as tk from tkinter.font import Font root = tk.Tk() canvas = tk.Canvas(root, width=400, height=400) canvas.pack() font = Font(family="Arial", size=10) fontsize = 10 # Add elements to canvas rectangle = canvas.create_rectangle(100, 100, 300, 200, fill='red') oval = canvas.create_oval(150, 150, 250, 250, fill='blue') text = canvas.create_text(200, 175, text="Hello", font=font) def do_zoom(event): global fontsize x = canvas.canvasx(event.x) y = canvas.canvasy(event.y) factor = 1.001 ** event.delta if (event.delta > 0): fontsize *= 1.1 font.configure(size=int(fontsize)) elif (event.delta < 0): fontsize *= 0.9 font.configure(size=int(fontsize)) canvas.scale("all", x, y, factor, factor) canvas.bind("<MouseWheel>", do_zoom) canvas.bind('<ButtonPress-1>', lambda event: canvas.scan_mark(event.x, event.y)) canvas.bind("<B1-Motion>", lambda event: canvas.scan_dragto(event.x, event.y, gain=1)) root.mainloop() This seems to work, but has one or two issues: Once the font size get to 0 or 0.0??? in floats (happen when zooming out), the font size doesn't match with the actual visual size of the font, as it seems to be fixed in a higher size (can be seen on this gif here) When zooming in and out fast enough, repeatedly, a discrepancy can be seen on the font size on previous mousewheel scrolling, and the next one (can be seen by printing the font size). In short, I'm wondering if there is a way, either to fix the above (and the reasons for them happening, aside from my own conjecture) or if there is a better way to handle this.
That's because the zoom factor for the coordinates of the canvas items and the zoom factor for the font size are different in your implementation. Keep in mind the XSCALE and YSCALE parameters of the scale command of the canvas accumulate for repeated calls. Because the original coordinates of the items are transformed with roundoff errors, I don't recommend using the scale() to implement zooming. (See the implementation of the polygon, for example.) Anyway, if you don't mind the roundoff errors, do like this. ... factor = 1.0 def do_zoom(event): global factor x = canvas.canvasx(event.x) y = canvas.canvasy(event.y) if (event.delta > 0): ratio = 1.1 elif (event.delta < 0): ratio = 0.9 factor *= ratio font.configure(size=int(fontsize*factor)) canvas.scale("all", x, y, ratio, ratio) ...
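One small addition on the first issue raised in the question (my assumption, not part of the answer): Tk treats a font size of 0 specially, which is why the visual size stops tracking the zoom once int(fontsize) reaches 0. Clamping the configured size to at least 1 point avoids that, e.g. as a drop-in variant of the answer's do_zoom that reuses canvas, font and fontsize from the question's program:

def do_zoom(event):
    global factor
    x = canvas.canvasx(event.x)
    y = canvas.canvasy(event.y)
    ratio = 1.1 if event.delta > 0 else 0.9
    factor *= ratio
    font.configure(size=max(1, int(fontsize * factor)))  # never let the point size reach 0
    canvas.scale("all", x, y, ratio, ratio)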
3
2
75,946,511
2023-4-6
https://stackoverflow.com/questions/75946511/creating-simple-video-player-using-pyqt6
I was trying to create a simple video player using PyQt6. I found this example on the internet: import sys from PyQt6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton, QSlider from PyQt6.QtMultimedia import QMediaPlayer, QMediaContent from PyQt6.QtMultimediaWidgets import QVideoWidget from PyQt6.QtCore import QUrl, Qt class VideoPlayer(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("Video Player") self.setGeometry(100, 100, 1024, 768) self.media_player = QMediaPlayer(None, QMediaPlayer.VideoSurface) self.video_widget = QVideoWidget() self.start_button = QPushButton("Start") self.start_button.clicked.connect(self.start_video) self.pause_button = QPushButton("Pause") self.pause_button.clicked.connect(self.pause_video) self.stop_button = QPushButton("Stop") self.stop_button.clicked.connect(self.stop_video) self.slider = QSlider(Qt.Orientation.Horizontal) self.slider.sliderMoved.connect(self.set_position) self.media_player.setVideoOutput(self.video_widget) self.media_player.setMedia(QMediaContent(QUrl.fromLocalFile("Video.mp4"))) self.media_player.positionChanged.connect(self.position_changed) self.media_player.durationChanged.connect(self.duration_changed) layout = QVBoxLayout() layout.addWidget(self.video_widget) layout.addWidget(self.start_button) layout.addWidget(self.pause_button) layout.addWidget(self.stop_button) layout.addWidget(self.slider) container = QWidget() container.setLayout(layout) self.setCentralWidget(container) def start_video(self): self.media_player.play() def pause_video(self): self.media_player.pause() def stop_video(self): self.media_player.stop() def set_position(self, position): self.media_player.setPosition(position) def position_changed(self, position): self.slider.setValue(position) def duration_changed(self, duration): self.slider.setRange(0, duration) if __name__ == "__main__": app = QApplication(sys.argv) video_player = VideoPlayer() video_player.show() sys.exit(app.exec()) But it doesn't work. As far as I understand, the "QMediaContent" and "QMediaPlayer.VideoSurface" classes are no longer supported. How can I change this code to make the player work? I've been trying to find what these classes were replaced with, but there's very little information on the internet about playing video in PyQt6 at the moment.
PyQt5: QMediaPlayer(parent=None, flags=QMediaPlayer.Flags()) QMediaPlayer.media # property of type QMediaContent QMediaPlayer.setMedia(media, stream=None) PyQt6: The QMediaPlayer.Flags enum was removed, the QMediaContent class was removed, and the media property was replaced with source. QMediaPlayer(parent=None) QMediaPlayer.source # property of type QUrl QMediaPlayer.setSource(source) The example can be fixed by changing those two lines accordingly. class VideoPlayer(QMainWindow): def __init__(self): super().__init__() ... self.media_player = QMediaPlayer() self.media_player.setSource(QUrl.fromLocalFile("Video.mp4")) self.video_widget = QVideoWidget() self.media_player.setVideoOutput(self.video_widget) ... See Changes to Qt Multimedia for the complete list of changes.
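Beyond the scope of the answer, one more Qt 6 change worth noting (an assumption about typical usage, not something the question asked about): the player no longer produces sound on its own; audio has to be routed through a separate QAudioOutput that is kept alive alongside the player. A minimal sketch:

import sys
from PyQt6.QtWidgets import QApplication
from PyQt6.QtMultimedia import QMediaPlayer, QAudioOutput
from PyQt6.QtMultimediaWidgets import QVideoWidget
from PyQt6.QtCore import QUrl

app = QApplication(sys.argv)
player = QMediaPlayer()
audio_output = QAudioOutput()        # must stay referenced, or audio stops
player.setAudioOutput(audio_output)
video_widget = QVideoWidget()
player.setVideoOutput(video_widget)
player.setSource(QUrl.fromLocalFile("Video.mp4"))
video_widget.show()
player.play()
sys.exit(app.exec())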
5
7
75,969,107
2023-4-9
https://stackoverflow.com/questions/75969107/multiaxis-system-of-equations-optimizations-in-python
My System is as follows: Optimize the value of O_t based on each value of L_t from 1 to 240 according to the below equations. O_t = O1+O2+O3+O4 O1= LS1+3×D O2=3×LS2+4×S O3=S+3×D O4= LS4×4+7×D L_t = LS1+LS2+LS3+LS4+LS5+LS6 L_t = (S+D)/5 Desired outputs: Values of S, D, LS1, LS2, LS3, LS4, LS5, LS6 that result in the highest possible value of O_t for each value of L_t Constraints: Variable to be maximized is O_t LS1, LS2, LS3, LS4, LS5, LS6, S, and D must all be whole numbers. LS1+LS2+LS3+LS4+LS5+LS6=L_t O_t=O1+O2+O3+O4 output prints optimal values of LS1, LS2, LS3, LS4, LS5, LS6, S, and D for each value of L_t output in the form of a table L_t<15 then LS1,2,3,4,5,6 cannot exceed 5 L_t<30 then LS1,2,3,4,5,6 cannot exceed 10 L_t<50 then LS1,2,3,4,5,6 cannot exceed 15 L_t<60 then LS1,2,3,4,5,6 cannot exceed 17 L_t<90 then LS1,2,3,4,5,6 cannot exceed 20 L_t<120 then LS1,2,3,4,5,6 cannot exceed 25 L_t<150 then LS1,2,3,4,5,6 cannot exceed 30 L_t<180 then LS1,2,3,4,5,6 cannot exceed 35 L_t<210 then LS1,2,3,4,5,6 cannot exceed 40 L_t<240 then LS1,2,3,4,5,6 cannot exceed 45 How I am currently attempting to solve the system of equations: Optimized brute force. Generate a list of all possible S, D, and LS1-6 values, calculate O_t, and check against the previous maximum. My code currently works (I am assuming since has been tested on smaller numbers and works well) but takes multiple days of running to solve for any large numbers. Here is my current code optimized as much as I can. import numpy as np import numba as nb from tqdm import tqdm @nb.njit def calculate_O(LS1, LS2, LS3, LS4, LS5, LS6, S, D): O1 = LS1 + 3 * D O2 = 3 * LS2 + 4 * S O3 = S + 3 * D O4 = LS4 * 4 + 7 * D return O1 + O2 + O3 + O4 @nb.njit def find_optimal_value(L_t, possible_S_values, possible_D_values): max_O = -np.inf max_LS1 = 0 max_LS2 = 0 max_LS3 = 0 max_LS4 = 0 max_LS5 = 0 max_LS6 = 0 max_S = 0 max_D = 0 LS1_init = 11 LS2_init = 11 LS3_init = 11 LS4_init = 11 LS5_init = 11 LS6_init = 11 S_init = 2 D_init = 0 if L_t < 15: LS_max = 5 elif L_t < 30: LS_max = 10 elif L_t < 50: LS_max = 15 elif L_t < 60: LS_max = 17 elif L_t < 90: LS_max = 20 elif L_t < 120: LS_max = 25 elif L_t < 150: LS_max = 30 elif L_t < 180: LS_max = 35 elif L_t < 210: LS_max = 40 elif L_t < 240: LS_max = 45 for LS1 in range(L_t + 1): if LS1 > LS_max: continue if LS1 + LS1_init > 50: continue for LS2 in range(L_t + 1 - LS1): if LS2 > LS_max: continue if LS2 + LS2_init > 50: continue for LS3 in range(L_t + 1 - LS1 - LS2): if LS3 > LS_max: continue if LS3 + LS3_init > 50: continue for LS5 in range(L_t + 1 - LS1 - LS2 - LS3): if LS5 > LS_max: continue if LS5 + LS5_init > 50: continue for LS6 in range(L_t + 1 - LS1 - LS2 - LS3 - LS5): if LS6 > LS_max: continue if LS6 + LS6_init > 50: continue LS4 = L_t - LS1 - LS2 - LS3 - LS5 - LS6 if LS4 > LS_max: continue if LS4 + LS4_init > 50: continue for S in possible_S_values: D = 5 * L_t - S if D < 0: break if S > 5 * L_t: continue if LS4 >= 0 and S >= 0 and LS1 + LS2 + LS3 + LS4 + LS5 + LS6 == L_t: O = calculate_O(LS1+LS1_init, LS2+LS2_init, LS3+LS3_init, LS4+LS4_init, LS5+LS5_init, LS6+LS6_init, S+S_init, D+D_init) if O > max_O: max_O = O max_LS1 = LS1 max_LS2 = LS2 max_LS3 = LS3 max_LS4 = LS4 max_LS5 = LS5 max_LS6 = LS6 max_S = S max_D = D return (max_LS1+LS1_init, max_LS2+LS2_init, max_LS3+LS3_init, max_LS4+LS4_init, max_LS5+LS5_init, max_LS6+LS6_init, max_S+S_init, max_D+D_init, max_O) L_t_values = range(1, 100) optimal_values = {} for L_t in tqdm(L_t_values): possible_S_values = np.arange(0, 5 * L_t + 1) 
possible_D_values = np.arange(0, 5 * L_t + 1) max_LS1, max_LS2, max_LS3, max_LS4, max_LS5, max_LS6, max_S, max_D, max_O = find_optimal_value(L_t, possible_S_values, possible_D_values) optimal_values[L_t] = {'LS1': max_LS1, 'LS2': max_LS2, 'LS3': max_LS3, 'LS4': max_LS4, 'LS5': max_LS5,'LS6': max_LS6, 'S': max_S, 'D': max_D, 'O_t': max_O} print('{:<5s}{:<10s}{:<10s}{:<10s}{:<10s}{:<10s}{:<10s}{:<10s}{:<10s}{:<10s}'.format('L_t', 'LS1', 'LS2', 'LS3', 'LS4', 'LS5', 'LS6', 'S', 'D', 'O_t')) for L_t in L_t_values: values = optimal_values[L_t] print('{:<5d}{:<10d}{:<10d}{:<10d}{:<10d}{:<10d}{:<10d}{:<10d}{:<10d}{:<10.2f}'.format(L_t, values['LS1'], values['LS2'], values['LS3'], values['LS4'], values['LS5'], values['LS6'], values['S'], values['D'], values['O_t'])) Note: this isn't for a class so any module for optimization is on the table as well as any other language - as long as it accurately completes the task in a reasonable timeframe.
Do not brute-force this problem. This is a classic (and somewhat easy) linear programming problem. Your written constraints fail to mention that L, S and D have a lower bound of 0, and S and D have an upper bound of 5Lt. If these constraints are not enforced then the problem is unbounded. You have not specified the upper bound of LS when L_t == 240. I assume that it continues the trend and is 50. The following executes in 0.08 s. It should not take "multiple days" nor should it take multiple minutes. import pandas as pd import pulp ''' O_t = O1+O2+O3+O4 O1= LS1+3×D O2=3×LS2+4×S O3=S+3×D O4= LS4×4+7×D L_t = LS1+LS2+LS3+LS4+LS5+LS6 L_t = (S+D)/5 Variable to be maximized is O_t LS1, LS2, LS3, LS4, LS5, LS6, S, and D must all be whole numbers. LS1+LS2+LS3+LS4+LS5+LS6=L_t O_t=O1+O2+O3+O4 output prints optimal values of LS1, LS2, LS3, LS4, LS5, LS6, S, and D for each value of L_t L_t<15 then LS1,2,3,4,5,6 cannot exceed 5 L_t<30 then LS1,2,3,4,5,6 cannot exceed 10 L_t<50 then LS1,2,3,4,5,6 cannot exceed 15 L_t<60 then LS1,2,3,4,5,6 cannot exceed 17 L_t<90 then LS1,2,3,4,5,6 cannot exceed 20 L_t<120 then LS1,2,3,4,5,6 cannot exceed 25 L_t<150 then LS1,2,3,4,5,6 cannot exceed 30 L_t<180 then LS1,2,3,4,5,6 cannot exceed 35 L_t<210 then LS1,2,3,4,5,6 cannot exceed 40 L_t<240 then LS1,2,3,4,5,6 cannot exceed 45 ''' pd.set_option('display.max_columns', None) df = pd.DataFrame(index=pd.RangeIndex(start=1, stop=241, name='L_t')) maxima = pd.Series( name='L_smax', index=pd.Index(name='L_t', data=( 15, 30, 50, 60, 90, 120, 150, 180, 210, 240, 270)), data=(5, 10, 15, 17, 20, 25, 30, 35, 40, 45, 50)) df['L_smax'] = pd.merge_asof( left=df, right=maxima, left_index=True, right_index=True, direction='forward', allow_exact_matches=False) def make_L(row: pd.Series) -> pd.Series: L_t = row.name suffix = f'({L_t:03d})' LS1, LS2, LS3, LS4, LS5, LS6 = LS = [ pulp.LpVariable(name=f'LS{i}{suffix}', cat=pulp.LpInteger, lowBound=0, upBound=row.L_smax) for i in range(1, 7)] S = pulp.LpVariable(name='S' + suffix, cat=pulp.LpInteger, lowBound=0, upBound=5*L_t) D = pulp.LpVariable(name='D' + suffix, cat=pulp.LpInteger, lowBound=0, upBound=5*L_t) O1 = LS1 + 3*D O2 = 3*LS2 + 4*S O3 = S + 3*D O4 = 4*LS4 + 7*D O_t = O1 + O2 + O3 + O4 prob.addConstraint(name='L_tsum' + suffix, constraint=L_t == pulp.lpSum(LS)) prob.addConstraint(name='L_tSD' + suffix, constraint=L_t*5 == S + D) return pd.Series( data=(*LS, S, D, O_t), index=('LS1', 'LS2', 'LS3', 'LS4', 'LS5', 'LS6', 'S', 'D', 'O_t')) prob = pulp.LpProblem(name='multiaxis', sense=pulp.LpMaximize) df = pd.concat((df, df.apply(make_L, axis=1)), axis=1) prob.objective = pulp.lpSum(df.O_t) print(df) print() print(prob) prob.solve() assert prob.status == pulp.LpStatusOptimal result = pd.concat(( df.L_smax, df.iloc[:, 1:-1].applymap(pulp.LpVariable.value), df.O_t.apply(pulp.LpAffineExpression.value), ), axis=1) with pd.option_context('display.max_rows', None): print(result) L_smax LS1 LS2 LS3 LS4 LS5 LS6 \ L_t 1 5 LS1(001) LS2(001) LS3(001) LS4(001) LS5(001) LS6(001) 2 5 LS1(002) LS2(002) LS3(002) LS4(002) LS5(002) LS6(002) 3 5 LS1(003) LS2(003) LS3(003) LS4(003) LS5(003) LS6(003) 4 5 LS1(004) LS2(004) LS3(004) LS4(004) LS5(004) LS6(004) 5 5 LS1(005) LS2(005) LS3(005) LS4(005) LS5(005) LS6(005) .. ... ... ... ... ... ... ... 
236 45 LS1(236) LS2(236) LS3(236) LS4(236) LS5(236) LS6(236) 237 45 LS1(237) LS2(237) LS3(237) LS4(237) LS5(237) LS6(237) 238 45 LS1(238) LS2(238) LS3(238) LS4(238) LS5(238) LS6(238) 239 45 LS1(239) LS2(239) LS3(239) LS4(239) LS5(239) LS6(239) 240 50 LS1(240) LS2(240) LS3(240) LS4(240) LS5(240) LS6(240) S D O_t L_t 1 S(001) D(001) {LS1(001): 1, D(001): 13, LS2(001): 3, S(001):... 2 S(002) D(002) {LS1(002): 1, D(002): 13, LS2(002): 3, S(002):... 3 S(003) D(003) {LS1(003): 1, D(003): 13, LS2(003): 3, S(003):... 4 S(004) D(004) {LS1(004): 1, D(004): 13, LS2(004): 3, S(004):... 5 S(005) D(005) {LS1(005): 1, D(005): 13, LS2(005): 3, S(005):... .. ... ... ... 236 S(236) D(236) {LS1(236): 1, D(236): 13, LS2(236): 3, S(236):... 237 S(237) D(237) {LS1(237): 1, D(237): 13, LS2(237): 3, S(237):... 238 S(238) D(238) {LS1(238): 1, D(238): 13, LS2(238): 3, S(238):... 239 S(239) D(239) {LS1(239): 1, D(239): 13, LS2(239): 3, S(239):... 240 S(240) D(240) {LS1(240): 1, D(240): 13, LS2(240): 3, S(240):... [240 rows x 10 columns] multiaxis: MAXIMIZE 13*D(001) + ... + 5*S(239) + 5*S(240) + 0 SUBJECT TO L_tsum(001): LS1(001) + LS2(001) + LS3(001) + LS4(001) + LS5(001) + LS6(001) = 1 ... 0 <= S(236) <= 1180 Integer 0 <= S(237) <= 1185 Integer 0 <= S(238) <= 1190 Integer 0 <= S(239) <= 1195 Integer 0 <= S(240) <= 1200 Integer Welcome to the CBC MILP Solver Version: 2.10.3 Build Date: Dec 15 2019 command line - .venv\lib\site-packages\pulp\solverdir\cbc\win\64\cbc.exe Temp\5c4bdbccaed24aafaf2911972835bd01-pulp.mps max timeMode elapsed branch printingOptions all solution Temp\5c4bdbccaed24aafaf2911972835bd01-pulp.sol (default strategy 1) At line 2 NAME MODEL At line 3 ROWS At line 485 COLUMNS At line 7446 RHS At line 7927 BOUNDS At line 9848 ENDATA Problem MODEL has 480 rows, 1920 columns and 1920 elements Coin0008I MODEL read with 0 errors Option for timeMode changed from cpu to elapsed Continuous objective value is 1.93204e+06 - 0.01 seconds Cgl0003I 0 fixed, 66 tightened bounds, 0 strengthened rows, 0 substitutions Cgl0004I processed model has 232 rows, 727 columns (727 integer (0 of which binary)) and 727 elements Cutoff increment increased from 1e-05 to 0.9999 Cbc0038I Initial state - 0 integers unsatisfied sum - 5.40012e-13 Cbc0038I Solution found of -1.93204e+06 Cbc0038I Cleaned solution of -1.93204e+06 Cbc0038I Before mini branch and bound, 727 integers at bound fixed and 0 continuous of which 39 were internal integer and 0 internal continuous Cbc0038I Mini branch and bound did not improve solution (0.04 seconds) Cbc0038I After 0.04 seconds - Feasibility pump exiting with objective of -1.93204e+06 - took 0.00 seconds Cbc0012I Integer solution of -1932044 found by feasibility pump after 0 iterations and 0 nodes (0.04 seconds) Cbc0001I Search completed - best objective -1932044, took 0 iterations and 0 nodes (0.04 seconds) Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost Cuts at root node changed objective from -1.93204e+06 to -1.93204e+06 Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 
seconds) FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) ZeroHalf was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds) Result - Optimal solution found Objective value: 1932044.00000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.05 Time (Wallclock seconds): 0.05 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.08 (Wallclock seconds): 0.08 L_smax LS1 LS2 LS3 LS4 LS5 LS6 S D O_t L_t 1 5 0.0 0.0 0.0 1.0 0.0 0.0 0.0 5.0 69.0 2 5 0.0 0.0 0.0 2.0 0.0 0.0 0.0 10.0 138.0 3 5 0.0 0.0 0.0 3.0 0.0 0.0 0.0 15.0 207.0 ... 239 45 45.0 45.0 45.0 45.0 14.0 45.0 0.0 1195.0 15895.0 240 50 50.0 50.0 50.0 50.0 0.0 40.0 0.0 1200.0 16000.0 Initial Conditions To model your initial conditions is somewhat easy - just modify the variable bounds and add offsets to some of the affine expressions. However. For any Lt >= 235 the problem is infeasible. Do you see why? import pandas as pd import pulp pd.set_option('display.max_columns', None) df = pd.DataFrame(index=pd.RangeIndex(start=1, stop=235, name='L_t')) maxima = pd.Series( name='L_smax', index=pd.Index(name='L_t', data=( 15, 30, 50, 60, 90, 120, 150, 180, 210, 240, 270)), data=(5, 10, 15, 17, 20, 25, 30, 35, 40, 45, 50)) df['L_smax'] = pd.merge_asof( left=df, right=maxima, left_index=True, right_index=True, direction='forward', allow_exact_matches=False) def make_L(row: pd.Series) -> pd.Series: L_t = row.name suffix = f'({L_t:03d})' L_init = 11 S_init = 2 D_init = 0 LS = [ pulp.LpVariable( name=f'LS{i}{suffix}', cat=pulp.LpInteger, lowBound=0, upBound=min(50 - L_init, row.L_smax)) for i in range(1, 7)] S = pulp.LpVariable(name='S' + suffix, cat=pulp.LpInteger, lowBound=0, upBound=5*L_t) D = pulp.LpVariable(name='D' + suffix, cat=pulp.LpInteger, lowBound=0, upBound=5*L_t) prob.addConstraint(name='L_tsum' + suffix, constraint=L_t == pulp.lpSum(LS)) prob.addConstraint(name='L_tSD' + suffix, constraint=L_t*5 == S + D) Lo = [L + L_init for L in LS] Lo1, Lo2, Lo3, Lo4, Lo5, Lo6 = Lo Do = D + D_init So = S + S_init O1 = Lo1 + 3*Do O2 = 3*Lo2 + 4*So O3 = So + 3*Do O4 = 4*Lo4 + 7*Do O_t = O1 + O2 + O3 + O4 return pd.Series( data=(*Lo, So, Do, O_t), index=('LS1', 'LS2', 'LS3', 'LS4', 'LS5', 'LS6', 'S', 'D', 'O_t')) prob = pulp.LpProblem(name='multiaxis', sense=pulp.LpMaximize) df = pd.concat((df, df.apply(make_L, axis=1)), axis=1) prob.objective = pulp.lpSum(df.O_t) print(df) print() print(prob) prob.solve() assert prob.status == pulp.LpStatusOptimal result = pd.concat(( df.L_smax, df.iloc[:, 1:].applymap(pulp.LpAffineExpression.value), ), axis=1) with pd.option_context('display.max_rows', None): print(result) L_smax LS1 LS2 LS3 LS4 LS5 LS6 S D O_t L_t 1 5 11.0 11.0 11.0 12.0 11.0 11.0 2.0 5.0 167.0 2 5 11.0 11.0 11.0 13.0 11.0 11.0 2.0 10.0 236.0 3 5 11.0 11.0 11.0 14.0 11.0 11.0 2.0 15.0 305.0 4 5 11.0 11.0 11.0 15.0 11.0 11.0 2.0 20.0 374.0 5 5 11.0 11.0 11.0 16.0 11.0 11.0 2.0 25.0 443.0 6 5 11.0 12.0 11.0 16.0 11.0 11.0 2.0 30.0 511.0 7 5 11.0 13.0 11.0 16.0 11.0 11.0 2.0 35.0 579.0 8 5 11.0 14.0 11.0 16.0 11.0 11.0 2.0 40.0 647.0 9 5 11.0 15.0 11.0 16.0 11.0 11.0 2.0 45.0 715.0 10 5 11.0 16.0 11.0 16.0 11.0 11.0 2.0 50.0 783.0 11 5 12.0 16.0 11.0 16.0 11.0 11.0 2.0 55.0 849.0 12 5 13.0 16.0 11.0 16.0 11.0 11.0 2.0 60.0 915.0 13 5 14.0 16.0 11.0 16.0 11.0 11.0 2.0 65.0 981.0 14 5 15.0 16.0 
11.0 16.0 11.0 11.0 2.0 70.0 1047.0 15 10 11.0 16.0 11.0 21.0 11.0 11.0 2.0 75.0 1128.0 16 10 11.0 17.0 11.0 21.0 11.0 11.0 2.0 80.0 1196.0 17 10 11.0 18.0 11.0 21.0 11.0 11.0 2.0 85.0 1264.0 18 10 11.0 19.0 11.0 21.0 11.0 11.0 2.0 90.0 1332.0 19 10 11.0 20.0 11.0 21.0 11.0 11.0 2.0 95.0 1400.0 20 10 11.0 21.0 11.0 21.0 11.0 11.0 2.0 100.0 1468.0 21 10 12.0 21.0 11.0 21.0 11.0 11.0 2.0 105.0 1534.0 22 10 13.0 21.0 11.0 21.0 11.0 11.0 2.0 110.0 1600.0 23 10 14.0 21.0 11.0 21.0 11.0 11.0 2.0 115.0 1666.0 24 10 15.0 21.0 11.0 21.0 11.0 11.0 2.0 120.0 1732.0 25 10 16.0 21.0 11.0 21.0 11.0 11.0 2.0 125.0 1798.0 26 10 17.0 21.0 11.0 21.0 11.0 11.0 2.0 130.0 1864.0 27 10 18.0 21.0 11.0 21.0 11.0 11.0 2.0 135.0 1930.0 28 10 19.0 21.0 11.0 21.0 11.0 11.0 2.0 140.0 1996.0 29 10 20.0 21.0 11.0 21.0 11.0 11.0 2.0 145.0 2062.0 30 15 11.0 26.0 11.0 26.0 11.0 11.0 2.0 150.0 2153.0 31 15 12.0 26.0 11.0 26.0 11.0 11.0 2.0 155.0 2219.0 32 15 13.0 26.0 11.0 26.0 11.0 11.0 2.0 160.0 2285.0 33 15 14.0 26.0 11.0 26.0 11.0 11.0 2.0 165.0 2351.0 34 15 15.0 26.0 11.0 26.0 11.0 11.0 2.0 170.0 2417.0 35 15 16.0 26.0 11.0 26.0 11.0 11.0 2.0 175.0 2483.0 36 15 17.0 26.0 11.0 26.0 11.0 11.0 2.0 180.0 2549.0 37 15 18.0 26.0 11.0 26.0 11.0 11.0 2.0 185.0 2615.0 38 15 19.0 26.0 11.0 26.0 11.0 11.0 2.0 190.0 2681.0 39 15 20.0 26.0 11.0 26.0 11.0 11.0 2.0 195.0 2747.0 40 15 21.0 26.0 11.0 26.0 11.0 11.0 2.0 200.0 2813.0 41 15 22.0 26.0 11.0 26.0 11.0 11.0 2.0 205.0 2879.0 42 15 23.0 26.0 11.0 26.0 11.0 11.0 2.0 210.0 2945.0 43 15 24.0 26.0 11.0 26.0 11.0 11.0 2.0 215.0 3011.0 44 15 25.0 26.0 11.0 26.0 11.0 11.0 2.0 220.0 3077.0 45 15 26.0 26.0 11.0 26.0 11.0 11.0 2.0 225.0 3143.0 46 15 26.0 26.0 11.0 26.0 12.0 11.0 2.0 230.0 3208.0 47 15 26.0 26.0 13.0 26.0 11.0 11.0 2.0 235.0 3273.0 48 15 26.0 26.0 14.0 26.0 11.0 11.0 2.0 240.0 3338.0 49 15 26.0 26.0 15.0 26.0 11.0 11.0 2.0 245.0 3403.0 50 17 27.0 28.0 11.0 28.0 11.0 11.0 2.0 250.0 3483.0 51 17 28.0 28.0 11.0 28.0 11.0 11.0 2.0 255.0 3549.0 52 17 28.0 28.0 12.0 28.0 11.0 11.0 2.0 260.0 3614.0 53 17 28.0 28.0 13.0 28.0 11.0 11.0 2.0 265.0 3679.0 54 17 28.0 28.0 11.0 28.0 14.0 11.0 2.0 270.0 3744.0 55 17 28.0 28.0 11.0 28.0 11.0 15.0 2.0 275.0 3809.0 56 17 28.0 28.0 11.0 28.0 16.0 11.0 2.0 280.0 3874.0 57 17 28.0 28.0 17.0 28.0 11.0 11.0 2.0 285.0 3939.0 58 17 28.0 28.0 18.0 28.0 11.0 11.0 2.0 290.0 4004.0 59 17 28.0 28.0 19.0 28.0 11.0 11.0 2.0 295.0 4069.0 60 20 31.0 31.0 11.0 31.0 11.0 11.0 2.0 300.0 4158.0 61 20 31.0 31.0 11.0 31.0 12.0 11.0 2.0 305.0 4223.0 62 20 31.0 31.0 11.0 31.0 13.0 11.0 2.0 310.0 4288.0 63 20 31.0 31.0 11.0 31.0 14.0 11.0 2.0 315.0 4353.0 64 20 31.0 31.0 11.0 31.0 15.0 11.0 2.0 320.0 4418.0 65 20 31.0 31.0 11.0 31.0 11.0 16.0 2.0 325.0 4483.0 66 20 31.0 31.0 11.0 31.0 17.0 11.0 2.0 330.0 4548.0 67 20 31.0 31.0 18.0 31.0 11.0 11.0 2.0 335.0 4613.0 68 20 31.0 31.0 19.0 31.0 11.0 11.0 2.0 340.0 4678.0 69 20 31.0 31.0 20.0 31.0 11.0 11.0 2.0 345.0 4743.0 70 20 31.0 31.0 21.0 31.0 11.0 11.0 2.0 350.0 4808.0 71 20 31.0 31.0 22.0 31.0 11.0 11.0 2.0 355.0 4873.0 72 20 31.0 31.0 23.0 31.0 11.0 11.0 2.0 360.0 4938.0 73 20 31.0 31.0 24.0 31.0 11.0 11.0 2.0 365.0 5003.0 74 20 31.0 31.0 25.0 31.0 11.0 11.0 2.0 370.0 5068.0 75 20 31.0 31.0 11.0 31.0 26.0 11.0 2.0 375.0 5133.0 76 20 31.0 31.0 11.0 31.0 11.0 27.0 2.0 380.0 5198.0 77 20 31.0 31.0 11.0 31.0 28.0 11.0 2.0 385.0 5263.0 78 20 31.0 31.0 29.0 31.0 11.0 11.0 2.0 390.0 5328.0 79 20 31.0 31.0 30.0 31.0 11.0 11.0 2.0 395.0 5393.0 80 20 31.0 31.0 11.0 31.0 11.0 31.0 2.0 400.0 5458.0 81 20 31.0 31.0 11.0 31.0 
31.0 12.0 2.0 405.0 5523.0 82 20 31.0 31.0 13.0 31.0 31.0 11.0 2.0 410.0 5588.0 83 20 31.0 31.0 31.0 31.0 11.0 14.0 2.0 415.0 5653.0 84 20 31.0 31.0 11.0 31.0 31.0 15.0 2.0 420.0 5718.0 85 20 31.0 31.0 16.0 31.0 11.0 31.0 2.0 425.0 5783.0 86 20 31.0 31.0 11.0 31.0 17.0 31.0 2.0 430.0 5848.0 87 20 31.0 31.0 31.0 31.0 11.0 18.0 2.0 435.0 5913.0 88 20 31.0 31.0 19.0 31.0 31.0 11.0 2.0 440.0 5978.0 89 20 31.0 31.0 31.0 31.0 20.0 11.0 2.0 445.0 6043.0 90 25 36.0 36.0 26.0 36.0 11.0 11.0 2.0 450.0 6148.0 91 25 36.0 36.0 27.0 36.0 11.0 11.0 2.0 455.0 6213.0 92 25 36.0 36.0 11.0 36.0 11.0 28.0 2.0 460.0 6278.0 93 25 36.0 36.0 11.0 36.0 11.0 29.0 2.0 465.0 6343.0 94 25 36.0 36.0 11.0 36.0 30.0 11.0 2.0 470.0 6408.0 95 25 36.0 36.0 11.0 36.0 31.0 11.0 2.0 475.0 6473.0 96 25 36.0 36.0 11.0 36.0 32.0 11.0 2.0 480.0 6538.0 97 25 36.0 36.0 11.0 36.0 11.0 33.0 2.0 485.0 6603.0 98 25 36.0 36.0 34.0 36.0 11.0 11.0 2.0 490.0 6668.0 99 25 36.0 36.0 35.0 36.0 11.0 11.0 2.0 495.0 6733.0 100 25 36.0 36.0 11.0 36.0 36.0 11.0 2.0 500.0 6798.0 101 25 36.0 36.0 11.0 36.0 36.0 12.0 2.0 505.0 6863.0 102 25 36.0 36.0 11.0 36.0 36.0 13.0 2.0 510.0 6928.0 103 25 36.0 36.0 11.0 36.0 36.0 14.0 2.0 515.0 6993.0 104 25 36.0 36.0 36.0 36.0 11.0 15.0 2.0 520.0 7058.0 105 25 36.0 36.0 36.0 36.0 16.0 11.0 2.0 525.0 7123.0 106 25 36.0 36.0 36.0 36.0 17.0 11.0 2.0 530.0 7188.0 107 25 36.0 36.0 36.0 36.0 18.0 11.0 2.0 535.0 7253.0 108 25 36.0 36.0 11.0 36.0 36.0 19.0 2.0 540.0 7318.0 109 25 36.0 36.0 20.0 36.0 36.0 11.0 2.0 545.0 7383.0 110 25 36.0 36.0 36.0 36.0 11.0 21.0 2.0 550.0 7448.0 111 25 36.0 36.0 36.0 36.0 22.0 11.0 2.0 555.0 7513.0 112 25 36.0 36.0 23.0 36.0 11.0 36.0 2.0 560.0 7578.0 113 25 36.0 36.0 36.0 36.0 24.0 11.0 2.0 565.0 7643.0 114 25 36.0 36.0 36.0 36.0 25.0 11.0 2.0 570.0 7708.0 115 25 36.0 36.0 26.0 36.0 11.0 36.0 2.0 575.0 7773.0 116 25 36.0 36.0 27.0 36.0 11.0 36.0 2.0 580.0 7838.0 117 25 36.0 36.0 36.0 36.0 28.0 11.0 2.0 585.0 7903.0 118 25 36.0 36.0 36.0 36.0 29.0 11.0 2.0 590.0 7968.0 119 25 36.0 36.0 30.0 36.0 36.0 11.0 2.0 595.0 8033.0 120 30 41.0 41.0 11.0 41.0 41.0 11.0 2.0 600.0 8138.0 121 30 41.0 41.0 11.0 41.0 12.0 41.0 2.0 605.0 8203.0 122 30 41.0 41.0 13.0 41.0 41.0 11.0 2.0 610.0 8268.0 123 30 41.0 41.0 41.0 41.0 14.0 11.0 2.0 615.0 8333.0 124 30 41.0 41.0 15.0 41.0 41.0 11.0 2.0 620.0 8398.0 125 30 41.0 41.0 41.0 41.0 16.0 11.0 2.0 625.0 8463.0 126 30 41.0 41.0 17.0 41.0 11.0 41.0 2.0 630.0 8528.0 127 30 41.0 41.0 41.0 41.0 11.0 18.0 2.0 635.0 8593.0 128 30 41.0 41.0 41.0 41.0 19.0 11.0 2.0 640.0 8658.0 129 30 41.0 41.0 20.0 41.0 41.0 11.0 2.0 645.0 8723.0 130 30 41.0 41.0 11.0 41.0 41.0 21.0 2.0 650.0 8788.0 131 30 41.0 41.0 11.0 41.0 41.0 22.0 2.0 655.0 8853.0 132 30 41.0 41.0 11.0 41.0 41.0 23.0 2.0 660.0 8918.0 133 30 41.0 41.0 11.0 41.0 41.0 24.0 2.0 665.0 8983.0 134 30 41.0 41.0 41.0 41.0 25.0 11.0 2.0 670.0 9048.0 135 30 41.0 41.0 41.0 41.0 26.0 11.0 2.0 675.0 9113.0 136 30 41.0 41.0 27.0 41.0 41.0 11.0 2.0 680.0 9178.0 137 30 41.0 41.0 28.0 41.0 11.0 41.0 2.0 685.0 9243.0 138 30 41.0 41.0 41.0 41.0 11.0 29.0 2.0 690.0 9308.0 139 30 41.0 41.0 41.0 41.0 11.0 30.0 2.0 695.0 9373.0 140 30 41.0 41.0 31.0 41.0 11.0 41.0 2.0 700.0 9438.0 141 30 41.0 41.0 11.0 41.0 32.0 41.0 2.0 705.0 9503.0 142 30 41.0 41.0 41.0 41.0 11.0 33.0 2.0 710.0 9568.0 143 30 41.0 41.0 34.0 41.0 41.0 11.0 2.0 715.0 9633.0 144 30 41.0 41.0 35.0 41.0 41.0 11.0 2.0 720.0 9698.0 145 30 41.0 41.0 36.0 41.0 11.0 41.0 2.0 725.0 9763.0 146 30 41.0 41.0 41.0 41.0 11.0 37.0 2.0 730.0 9828.0 147 30 41.0 41.0 41.0 41.0 38.0 
11.0 2.0 735.0 9893.0 148 30 41.0 41.0 39.0 41.0 41.0 11.0 2.0 740.0 9958.0 149 30 41.0 41.0 11.0 41.0 41.0 40.0 2.0 745.0 10023.0 150 35 46.0 46.0 46.0 46.0 21.0 11.0 2.0 750.0 10128.0 151 35 46.0 46.0 46.0 46.0 11.0 22.0 2.0 755.0 10193.0 152 35 46.0 46.0 11.0 46.0 46.0 23.0 2.0 760.0 10258.0 153 35 46.0 46.0 46.0 46.0 24.0 11.0 2.0 765.0 10323.0 154 35 46.0 46.0 46.0 46.0 11.0 25.0 2.0 770.0 10388.0 155 35 46.0 46.0 26.0 46.0 11.0 46.0 2.0 775.0 10453.0 156 35 46.0 46.0 46.0 46.0 27.0 11.0 2.0 780.0 10518.0 157 35 46.0 46.0 46.0 46.0 11.0 28.0 2.0 785.0 10583.0 158 35 46.0 46.0 46.0 46.0 11.0 29.0 2.0 790.0 10648.0 159 35 46.0 46.0 30.0 46.0 11.0 46.0 2.0 795.0 10713.0 160 35 46.0 46.0 11.0 46.0 31.0 46.0 2.0 800.0 10778.0 161 35 46.0 46.0 32.0 46.0 46.0 11.0 2.0 805.0 10843.0 162 35 46.0 46.0 33.0 46.0 46.0 11.0 2.0 810.0 10908.0 163 35 46.0 46.0 46.0 46.0 11.0 34.0 2.0 815.0 10973.0 164 35 46.0 46.0 46.0 46.0 11.0 35.0 2.0 820.0 11038.0 165 35 46.0 46.0 36.0 46.0 11.0 46.0 2.0 825.0 11103.0 166 35 46.0 46.0 46.0 46.0 11.0 37.0 2.0 830.0 11168.0 167 35 46.0 46.0 46.0 46.0 38.0 11.0 2.0 835.0 11233.0 168 35 46.0 46.0 11.0 46.0 39.0 46.0 2.0 840.0 11298.0 169 35 46.0 46.0 40.0 46.0 11.0 46.0 2.0 845.0 11363.0 170 35 46.0 46.0 41.0 46.0 46.0 11.0 2.0 850.0 11428.0 171 35 46.0 46.0 11.0 46.0 46.0 42.0 2.0 855.0 11493.0 172 35 46.0 46.0 11.0 46.0 46.0 43.0 2.0 860.0 11558.0 173 35 46.0 46.0 44.0 46.0 46.0 11.0 2.0 865.0 11623.0 174 35 46.0 46.0 46.0 46.0 11.0 45.0 2.0 870.0 11688.0 175 35 46.0 46.0 46.0 46.0 46.0 11.0 2.0 875.0 11753.0 176 35 46.0 46.0 46.0 46.0 46.0 12.0 2.0 880.0 11818.0 177 35 46.0 46.0 46.0 46.0 46.0 13.0 2.0 885.0 11883.0 178 35 46.0 46.0 46.0 46.0 46.0 14.0 2.0 890.0 11948.0 179 35 46.0 46.0 46.0 46.0 46.0 15.0 2.0 895.0 12013.0 180 40 50.0 50.0 50.0 50.0 35.0 11.0 2.0 900.0 12110.0 181 40 50.0 50.0 50.0 50.0 11.0 36.0 2.0 905.0 12175.0 182 40 50.0 50.0 50.0 50.0 11.0 37.0 2.0 910.0 12240.0 183 40 50.0 50.0 50.0 50.0 38.0 11.0 2.0 915.0 12305.0 184 40 50.0 50.0 50.0 50.0 39.0 11.0 2.0 920.0 12370.0 185 40 50.0 50.0 50.0 50.0 11.0 40.0 2.0 925.0 12435.0 186 40 50.0 50.0 50.0 50.0 11.0 41.0 2.0 930.0 12500.0 187 40 50.0 50.0 11.0 50.0 42.0 50.0 2.0 935.0 12565.0 188 40 50.0 50.0 43.0 50.0 11.0 50.0 2.0 940.0 12630.0 189 40 50.0 50.0 11.0 50.0 50.0 44.0 2.0 945.0 12695.0 190 40 50.0 50.0 50.0 50.0 45.0 11.0 2.0 950.0 12760.0 191 40 50.0 50.0 50.0 50.0 46.0 11.0 2.0 955.0 12825.0 192 40 50.0 50.0 11.0 50.0 50.0 47.0 2.0 960.0 12890.0 193 40 50.0 50.0 50.0 50.0 48.0 11.0 2.0 965.0 12955.0 194 40 50.0 50.0 11.0 50.0 50.0 49.0 2.0 970.0 13020.0 195 40 50.0 50.0 50.0 50.0 50.0 11.0 2.0 975.0 13085.0 196 40 50.0 50.0 12.0 50.0 50.0 50.0 2.0 980.0 13150.0 197 40 50.0 50.0 50.0 50.0 50.0 13.0 2.0 985.0 13215.0 198 40 50.0 50.0 14.0 50.0 50.0 50.0 2.0 990.0 13280.0 199 40 50.0 50.0 50.0 50.0 15.0 50.0 2.0 995.0 13345.0 200 40 50.0 50.0 50.0 50.0 50.0 16.0 2.0 1000.0 13410.0 201 40 50.0 50.0 50.0 50.0 17.0 50.0 2.0 1005.0 13475.0 202 40 50.0 50.0 50.0 50.0 50.0 18.0 2.0 1010.0 13540.0 203 40 50.0 50.0 50.0 50.0 19.0 50.0 2.0 1015.0 13605.0 204 40 50.0 50.0 50.0 50.0 20.0 50.0 2.0 1020.0 13670.0 205 40 50.0 50.0 50.0 50.0 21.0 50.0 2.0 1025.0 13735.0 206 40 50.0 50.0 50.0 50.0 50.0 22.0 2.0 1030.0 13800.0 207 40 50.0 50.0 50.0 50.0 50.0 23.0 2.0 1035.0 13865.0 208 40 50.0 50.0 24.0 50.0 50.0 50.0 2.0 1040.0 13930.0 209 40 50.0 50.0 25.0 50.0 50.0 50.0 2.0 1045.0 13995.0 210 45 50.0 50.0 50.0 50.0 50.0 26.0 2.0 1050.0 14060.0 211 45 50.0 50.0 50.0 50.0 50.0 27.0 2.0 1055.0 14125.0 
212 45 50.0 50.0 28.0 50.0 50.0 50.0 2.0 1060.0 14190.0 213 45 50.0 50.0 50.0 50.0 29.0 50.0 2.0 1065.0 14255.0 214 45 50.0 50.0 50.0 50.0 30.0 50.0 2.0 1070.0 14320.0 215 45 50.0 50.0 50.0 50.0 50.0 31.0 2.0 1075.0 14385.0 216 45 50.0 50.0 50.0 50.0 50.0 32.0 2.0 1080.0 14450.0 217 45 50.0 50.0 50.0 50.0 50.0 33.0 2.0 1085.0 14515.0 218 45 50.0 50.0 50.0 50.0 34.0 50.0 2.0 1090.0 14580.0 219 45 50.0 50.0 50.0 50.0 50.0 35.0 2.0 1095.0 14645.0 220 45 50.0 50.0 50.0 50.0 36.0 50.0 2.0 1100.0 14710.0 221 45 50.0 50.0 50.0 50.0 50.0 37.0 2.0 1105.0 14775.0 222 45 50.0 50.0 38.0 50.0 50.0 50.0 2.0 1110.0 14840.0 223 45 50.0 50.0 50.0 50.0 39.0 50.0 2.0 1115.0 14905.0 224 45 50.0 50.0 50.0 50.0 50.0 40.0 2.0 1120.0 14970.0 225 45 50.0 50.0 50.0 50.0 41.0 50.0 2.0 1125.0 15035.0 226 45 50.0 50.0 42.0 50.0 50.0 50.0 2.0 1130.0 15100.0 227 45 50.0 50.0 50.0 50.0 43.0 50.0 2.0 1135.0 15165.0 228 45 50.0 50.0 44.0 50.0 50.0 50.0 2.0 1140.0 15230.0 229 45 50.0 50.0 50.0 50.0 45.0 50.0 2.0 1145.0 15295.0 230 45 50.0 50.0 50.0 50.0 46.0 50.0 2.0 1150.0 15360.0 231 45 50.0 50.0 50.0 50.0 47.0 50.0 2.0 1155.0 15425.0 232 45 50.0 50.0 50.0 50.0 50.0 48.0 2.0 1160.0 15490.0 233 45 50.0 50.0 50.0 50.0 50.0 49.0 2.0 1165.0 15555.0 234 45 50.0 50.0 50.0 50.0 50.0 50.0 2.0 1170.0 15620.0
3
3
75,975,064
2023-4-10
https://stackoverflow.com/questions/75975064/xml2js-is-vulnerable-to-prototype-pollution
xml2js <=0.4.23 Severity: high xml2js is vulnerable to prototype pollution - https://github.com/advisories/GHSA-776f-qx25-q3cc No fix available node_modules/xml2js aws-sdk * Depends on vulnerable versions of xml2js node_modules/aws-sdk 2 high severity vulnerabilities I upgraded the aws-sdk npm package to the latest version, but the vulnerability still exists.
Delete your package-lock.json and add this to your package.json: "overrides": { "xml2js": "^0.5.0" } Then reinstall the dependencies: npm i
8
16
75,963,236
2023-4-8
https://stackoverflow.com/questions/75963236/why-cant-i-set-trainingarguments-device-in-huggingface
Question When I try to set the .device attribute to torch.device('cpu'), I get an error. How am I supposed to set device then? Python Code from transformers import TrainingArguments from transformers import Trainer import torch training_args = TrainingArguments( output_dir="./some_local_dir", overwrite_output_dir=True, per_device_train_batch_size=4, dataloader_num_workers=2, max_steps=500, logging_steps=1, evaluation_strategy="steps", eval_steps=5 ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, compute_metrics=compute_metrics, ) training_args.device = torch.device('cpu') Python Error AttributeError Traceback (most recent call last) <ipython-input-11-30a92c0570b8> in <cell line: 28>() 26 ) 27 ---> 28 training_args.device = torch.device('cpu') AttributeError: can't set attribute
Going by the docs, the TrainingArguments object doesn't have a settable device attribute. Interestingly, device is initialized but not mutable: import torch from transformers import TrainingArguments args = TrainingArguments('./') args.device # [out]: device(type='cpu') args.device = torch.device(type='cpu') [out]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-dcb5ef23be68> in <cell line: 8>() 6 7 ----> 8 args.device = torch.device(type='cpu') AttributeError: can't set attribute From the code, it looks like the device is set after initialization: https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1113 And the device is only set up when TrainingArguments.device is accessed: https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1678 https://github.com/huggingface/transformers/blob/main/src/transformers/training_args.py#L1524 Perhaps you are referring to the Hugging Face model and you want to try training it on the CPU. To do that, the Trainer will usually detect the device automatically if you use accelerate https://huggingface.co/docs/accelerate/index By default, the TrainingArguments is already set to CPU if _n_gpu = -1. But if you want to explicitly set the model to be used on the CPU, try: model = model.to('cpu') trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, compute_metrics=compute_metrics, ) Or: trainer = Trainer( model=model.to('cpu'), args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, compute_metrics=compute_metrics, place_model_on_device=True )
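Another option, hedged as an assumption about your transformers version: rather than assigning to args.device, pass the CPU choice at construction time via the no_cuda flag on TrainingArguments, so that the lazily computed device property resolves to the CPU. A minimal sketch with placeholder arguments:

from transformers import TrainingArguments

# no_cuda=True asks TrainingArguments to ignore any GPUs, so the
# device property resolves to the CPU without ever assigning to it
training_args = TrainingArguments(
    output_dir="./some_local_dir",  # placeholder path from the question
    per_device_train_batch_size=4,
    max_steps=500,
    no_cuda=True,
)

print(training_args.device)  # expected: device(type='cpu')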
4
0
75,965,051
2023-4-8
https://stackoverflow.com/questions/75965051/how-to-specify-constant-inputs-for-gradio-click-handler
If I have the following code: submit = gr.Button(...) submit.click( fn=my_func, inputs=[ some_slider_1, # gr.Slider some_slider_2, # gr.Slider some_slider_3, # gr.Slider ], outputs=[ some_text_field ] ) How can I substitute let's say some_slider_2 for a constant value like 2? If I simply write 2 in there, I get an error: AttributeError: 'int' object has no attribute '_id'
If you want a constant number input that is not visible in the UI, you can pass gr.Number(value=2, visible=False).
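A minimal sketch of how that could look inside the click handler from the question; my_func and the widget labels here are stand-ins, not part of the original code:

import gradio as gr

def my_func(a, b, c):  # stand-in for the real handler
    return str(a + b + c)

with gr.Blocks() as demo:
    some_slider_1 = gr.Slider(0, 10, label="slider 1")
    some_slider_3 = gr.Slider(0, 10, label="slider 3")
    some_text_field = gr.Textbox()
    submit = gr.Button("Submit")
    submit.click(
        fn=my_func,
        inputs=[
            some_slider_1,
            gr.Number(value=2, visible=False),  # constant replacing some_slider_2
            some_slider_3,
        ],
        outputs=[some_text_field],
    )

demo.launch()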
3
3
75,943,880
2023-4-5
https://stackoverflow.com/questions/75943880/what-is-the-most-efficient-way-to-identify-text-similarity-between-items-in-larg
The following piece of code does what I'm trying to achieve. There is a list of strings called 'lemmas' that contains the accepted forms of a specific class of words. The other list, called 'forms', contains a lot of spelling variations of words found in a large number of texts from different periods and different dialects of a specific language. For each one of the words in 'forms', I want to get the string in 'lemmas' that is the closest match. The script, as I said, seems to work well with some test lists I've constructed. The problem I have, though, is that when I use the real lists, which are rather large, it takes forever to produce the results. In fact, I had to stop the execution of the program because it had already been running for more than two hours and the computer was becoming so slow that I couldn't do anything else. What could I do to make this more efficient? How would I have to modify the code, using other Python tools or libraries, to make this faster? Thanks in advance. import textdistance from textdistance import hamming from textdistance import cosine from textdistance import jaro_winkler import heapq # 'lemmas' is a list containing a huge amount of words, basically dictionary entries # 'forms' is a huge list of spelling variations of words found in hundreds of texts distances = {} processed_pairs = set() # keep track of processed pairs for lemma in lemmas: if lemma is None: continue lemma_lower = lemma.lower() for form in forms: if form is None: continue form_lower = form.lower() pair = (lemma_lower, form_lower) # create a tuple with the lowercase pair if pair not in processed_pairs: # check if the pair has been processed before processed_pairs.add(pair) if textdistance.hamming.normalized_similarity(lemma_lower, form_lower) > 0.34 and textdistance.jaro_winkler(lemma_lower, form_lower) > 0.7 and textdistance.cosine(lemma_lower, form_lower) > 0.5: dist = hamming.normalized_similarity(lemma_lower, form_lower) distances.setdefault(form_lower, []).append((dist, lemma_lower)) # Find the closest pairs closest_pairs = {} for form, dist_lemmas in distances.items(): closest_pairs[form] = heapq.nsmallest(2, dist_lemmas) with open(ROOT / 'potential_lemmas.txt', 'w') as f: for form, pairs in closest_pairs.items(): for dist, lemma in pairs: f.write(f"{form} ➝ {lemma}: {dist}\n") EDIT: In the end, the solution that worked best was an integration of @Kyle F Hartzenberg's proposal with @Jamie_B's suggestion of using joblib to parallelize (see comments after the code, though): from itertools import zip_longest from bisect import insort from joblib import Parallel, delayed import line_profiler profile = line_profiler.LineProfiler() lemmas = ['gran', 'vermell', 'groc', 'atens', 'Do', 'dOne', 'PUrpose', 'can', 'be', 'use', 'for', 'cannon', 'amuse', 'useful', 'user', 'become', 'downtown', 'develop', 'fulminate', 'deduce', 'de', 'bezant'] forms = ['preriarenos', 'Marinara', 'Grand', 'Gran', 'Grans', 'Grands', 'Grandeses', 'Grandullons', 'grand', 'grandissisimus', 'gran', 'grans', 'grands', 'grandeses', 'grandullons', 'grandullon', 'grandullones', 'uermell', 'uermells', 'vermell', 'vermells', 'vermella', 'vermelles', 'varmellíssimes', 'uarmellíssimes', 'uermellíssimes', 'uarnellíssimes', 'varmellíssima', 'uermella', 'uarmella', 'uarnella', 'varnella', 'uarnellas', 'varnellas', 'varmella', 'uermelles', 'grog', 'grogues', 'doNE', 'donE', 'doIng', 'purposeful', 'canonical', 'becareful', 'being', 'berate', 'best', 'bezant', 'full', 'fulmination', 'predict', 'downgrade', 'down', 'developing',
'deduct', 'deducing'] distances = {} @delayed def calc_distances(form, lemmas_low): form_distances = [] for lemma in lemmas_low: char_matches = [c1 != c2 for c1, c2 in zip_longest(lemma, form)] dist = 1 - (sum(char_matches)/len(char_matches)) if dist > 0.25: insort(form_distances, (dist, lemma)) return (form, form_distances) @profile def profile_distance_calcs(): lemmas_low = [lemma.lower() for lemma in lemmas] forms_low = [form.lower() for form in forms] results = Parallel(n_jobs=-1, prefer="threads")(calc_distances(form, lemmas_low) for form in forms_low) for form, form_distances in results: distances[form] = form_distances with open("potential_lemmas_hamming-like.txt", "w") as f: for form, form_distances in distances.items(): for dist, lemma in reversed(form_distances[-2:]): f.write(f"{form} ➝ {lemma}: {dist}\n") if __name__ == "__main__": profile_distance_calcs() profile.print_stats() This was a HUGE improvement over everything I had tried before. Besides the test with the short lists in the example, I ran it with the actual lists containing around 190,000 strings and the processing time was 118 minutes. While I'm pretty sure this could be improved (one might look for ways to do it using some kind of vectorization - someone suggested using arrays from numpy or AI-oriented libraries), for the time being, this is quite manageable. There is still a problem that doesn't have to do with efficiency. I mention this in my comment to @jqurious below but I will explain it here in more detail. Running the script above with the test list, one gets results like the following: berate ➝ bezant: 0.5 berate ➝ become: 0.5 From a linguistic point of view, any English speaker would know that these pairs of words are not related (OK, unless you know about the history of the language and know that be- used to be a productive prefix). What I'm trying to do with this script is to determine what would be the appropriate lemma (the dictionary form or representative word) for all the variants of a particular word found in the texts of a corpus. This is a diachronic corpus containing many texts from many different authors and from many different dialects of a language written over a period of more than 5 centuries. A 'u' could often be used instead of 'v' or a 'y' instead of an 'i'. An 'h' can also often be missing from a word that is spelt with 'h', even in the same text by the same author. The variation is huge and yet even a modern speaker of the language can usually detect whether the words are related quite easily. Of course, the speaker of the language is knowledgeable about the word structure and the morphology and so can immediately see that, for instance, 'uermellíssima' is related to 'vermell' despite the fact that a lot of characters are different. Using Kyle's suggestion with the actual lists, I got results like the following: beato ➝ beat: 0.8 beatriç ➝ tectriu: 0.5714285714285714 beatriç ➝ teatral: 0.5714285714285714 beatte ➝ beats: 0.6666666666666667 beatus ➝ certus: 0.6666666666666667 beatíssim ➝ nequíssim: 0.6666666666666667 beatíssim ➝ gravíssim: 0.6666666666666667 Even if you don't know the language (medieval Catalan, in case anybody is interested), you can see how this is very wrong (using other algorithms like the Levenshtein or the cosine distance it is just hopeless). The lemmas 'beat' or 'beats' should ideally be the ones selected as being the "closest" in all these cases. Yet the algorithm does what it does.
Perhaps I haven't looked hard enough, but with all the work in NLP, I'm surprised there aren't other algorithms that could do better in this kind of scenario. I know this deviates a little bit from the main point in the original question but if anybody can give me some useful advice, I would greatly appreciate it.
The following solution is based on your original code (Hamming distance) which offers an (almost) order of magnitude speed-up (~89.41%), averaged across five runs of each, as measured by line-profiler. Using this solution as a base for parallel processing may get you closer to the total processing times you are after. To use line-profiler, pip install line-profiler and then run kernprof -l -v test.py after adding @profile and calling the function to be profiled from __main__. from itertools import zip_longest from bisect import insort lemmas = ["Do", "dOne", "PUrpose", "can", "be", "use", "for", "cannon", "amuse", "useful", "user", "become", "downtown", "develop", "fulminate", "deduce", "de", "bezant"] forms = ["doNE", "donE", "doIng", "purposeful", "canonical", "becareful", "being", "berate", "best", "bezant", "full", "fulmination", "predict", "downgrade", "down", "developing", "deduct", "deducing"] distances = {} @profile def profile_distance_calcs(): lemmas_low = [lemma.lower() for lemma in lemmas] forms_low = [form.lower() for form in forms] for form in forms_low: form_distances = [] for lemma in lemmas_low: char_matches = [c1 != c2 for c1, c2 in zip_longest(lemma, form)] dist = 1 - (sum(char_matches)/len(char_matches)) if dist > 0.25: insort(form_distances, (dist, lemma)) distances[form] = form_distances with open("potential_lemmas_hamming.txt", "w") as f: for form, form_distances in distances.items(): for dist, lemma in reversed(form_distances[-2:]): f.write(f"{form} ➝ {lemma}: {dist}\n") if __name__ == "__main__": profile_distance_calcs() From the time profile breakdown below (total time: 0.00122992 s), you can get an idea of where the slow-downs are coming from. The main culprit is (obviously) the distance computation which is why I switched the textdistance.hamming.normalized_similarity for a much more efficient (barebones) manual calculation of the same thing based on the textdistance hamming and hamming.normalized_similarity source code. I also believe using bisect.insort and maintaining a sorted list while inserting is faster than inserting all elements and then running heapq.nlargest. Line # Hits Time Per Hit % Time Line Contents ============================================================== 10 @profile 11 def profile_distance_calcs(): 12 1 7.9 7.9 0.6 lemmas_low = [lemma.lower() for lemma in lemmas] 13 1 7.0 7.0 0.6 forms_low = [form.lower() for form in forms] 14 18 1.8 0.1 0.1 for form in forms_low: 15 18 2.0 0.1 0.2 form_distances = [] 16 324 33.4 0.1 2.7 for lemma in lemmas_low: 17 324 844.5 2.6 68.7 char_matches = [c1 != c2 for c1, c2 in zip_longest(lemma, form)] 18 324 155.6 0.5 12.7 dist = 1 - (sum(char_matches)/len(char_matches)) 19 285 44.4 0.2 3.6 if dist > 0.25: 20 39 12.3 0.3 1.0 insort(form_distances, (dist, lemma)) 21 18 4.7 0.3 0.4 distances[form] = form_distances 22 23 1 52.5 52.5 4.3 with open("potential_lemmas_hamming.txt", "w") as f: 24 17 4.2 0.2 0.3 for form, form_distances in distances.items(): 25 26 11.5 0.4 0.9 for dist, lemma in reversed(form_distances[-2:]): 26 26 48.3 1.9 3.9 f.write(f"{form} ➝ {lemma}: {dist}\n") Original Code Speed Profile Here is your original code for comparison. I modified some aspects of it, the main difference is the use of heapq.nlargest as I believe you were after the 2 most similar lemmas for each form and not the 2 least similar which heapq.nsmallest provided. 
from textdistance import hamming, cosine, jaro_winkler import heapq lemmas = ["do", "done", "purpose", "can", "be", "use", "for", "cannon", "amuse", "useful", "user", "become", "downtown", "develop", "fulminate", "deduce", "de", "bezant"] forms = ["done", "done", "doing", "purposeful", "canonical", "becareful", "being", "berate", "best", "bezant", "full", "fulmination", "predict", "downgrade", "down", "developing", "deduct", "deducing"] distances = {} processed_pairs = set() # keep track of processed pairs @profile def profile_distance_calcs(): for lemma in lemmas: if lemma is None: continue lemma_lower = lemma.lower() for form in forms: if form is None: continue form_lower = form.lower() pair = (lemma_lower, form_lower) if pair not in processed_pairs: processed_pairs.add(pair) dist = hamming.normalized_similarity(lemma_lower, form_lower) if dist > 0.25: distances.setdefault(form_lower, []).append((dist, lemma_lower)) # Find the closest pairs closest_pairs = {} for form, dist_lemmas in distances.items(): closest_pairs[form] = heapq.nlargest(2, dist_lemmas) with open("potential_lemmas_orig.txt", "w") as f: for form, pairs in closest_pairs.items(): for dist, lemma in pairs: f.write(f"{form} ➝ {lemma}: {dist}\n") if __name__ == "__main__": profile_distance_calcs() Time profile breakdown for the original code (total time: 0.0114992 s): Line # Hits Time Per Hit % Time Line Contents ============================================================== 11 @profile 12 def profile_distance_calcs(): 13 18 2.4 0.1 0.0 for lemma in lemmas: 14 18 1.9 0.1 0.0 if lemma is None: 15 continue 16 18 6.4 0.4 0.1 lemma_lower = lemma.lower() 17 324 38.8 0.1 0.3 for form in forms: 18 324 32.6 0.1 0.3 if form is None: 19 continue 20 324 108.2 0.3 0.9 form_lower = form.lower() 21 324 46.9 0.1 0.4 pair = (lemma_lower, form_lower) 22 306 60.2 0.2 0.5 if pair not in processed_pairs: 23 306 92.0 0.3 0.8 processed_pairs.add(pair) 24 306 10828.9 35.4 94.2 dist = hamming.normalized_similarity(lemma_lower, form_lower) 25 270 47.5 0.2 0.4 if dist > 0.25: 26 36 24.1 0.7 0.2 distances.setdefault(form_lower, []).append((dist, lemma_lower)) 27 28 # Find the closest pairs 29 1 0.2 0.2 0.0 closest_pairs = {} 30 16 4.3 0.3 0.0 for form, dist_lemmas in distances.items(): 31 16 72.7 4.5 0.6 closest_pairs[form] = heapq.nlargest(2, dist_lemmas) 32 33 1 72.3 72.3 0.6 with open("potential_lemmas_orig.txt", "w") as f: 34 16 4.2 0.3 0.0 for form, pairs in closest_pairs.items(): 35 26 6.5 0.3 0.1 for dist, lemma in pairs: 36 26 49.0 1.9 0.4 f.write(f"{form} ➝ {lemma}: {dist}\n") Measuring Natural Language Similarity Measuring the similarity between two pieces of natural language text is a non-trivial task. Attempting to gauge spelling/morphological/semantic similarity based purely on rudimentary character-based metrics (e.g. Hamming distance, Levenshtein distance etc.) won't suffice as these metrics fail to capture complex linguistic patterns (hence why neural network methods are commonly used to pick up these patterns in large bodies of text). With that being said, one can begin to add their own "rules" to calculate more "accurate" similarity scores. For example, the code below modifies the normalised Hamming similarity computation to track how many consecutive characters match, and then scales the "similarity score" accordingly. There is obviously scope for fine-tuning and/or increasing the complexity/number of rules used, but with more complexity comes slower processing times. 
This custom function avoids the issue of results like beatte ➝ beats: 0.667 and beatus ➝ certus: 0.667, instead scoring them as beatte ➝ beats 0.79167 and beatus ➝ certus 0.33). def custom_hamming_norm_sim(strA, strB, scale=0.5): max_str_len = max(len(strA), len(strB)) max_score_per_char = 1 / max_str_len penalty = 1 score = 0 for c1, c2 in zip_longest(strA, strB): if c1 != c2: penalty = penalty * scale score += max_score_per_char * penalty else: p = penalty / scale if p < max_score_per_char: penalty = p score += max_score_per_char * penalty return score @profile def profile_distance_calcs(): lemmas_low = [lemma.lower() for lemma in lemmas] forms_low = [form.lower() for form in forms] for form in forms_low: form_distances = [] for lemma in lemmas_low: dist = custom_hamming_norm_sim(lemma, form) if dist > 0.25: insort(form_distances, (dist, lemma)) distances[form] = form_distances with open("potential_lemmas_hamming.txt", "w") as f: for form, form_distances in distances.items(): for dist, lemma in reversed(form_distances[-2:]): f.write(f"{form} ➝ {lemma}: {dist}\n") if __name__ == "__main__": profile_distance_calcs()
3
1
75,968,070
2023-4-8
https://stackoverflow.com/questions/75968070/how-do-i-fix-the-return-type-of-a-django-model-managers-method
I'm using Django 4.1.7 with django-stubs 1.16.0, and mypy 1.1.1. I have code that looks like this: class ProductQuerySet(QuerySet): ... class ProductManager(Manager): def create_red_product(self, **kwargs) -> "Product": return self.model(color=Product.Color.RED, **kwargs) _product_manager = ProductManager.from_queryset(ProductQuerySet) class Product(Model): ... objects = _product_manager() When mypy looks at this, it says: models/product.py:46: error: Incompatible return value type (got "_T", expected "Product") [return-value] It seems like the type of self.model in a model manager method is _T, which from what I understand is a generic type bound to the model, which in my case should be "Product". Why isn't this working? How can I fix it?
I can't reproduce your case one-to-one, but please try specifying exactly which model your manager is for: class ProductManager(Manager["Product"]):
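Putting that together with the from_queryset pattern from the question, a sketch of the typed version might look like the following; parametrizing the QuerySet as well and the field details are assumptions on my part, and if the subscription raises a TypeError at runtime on your Django version, django_stubs_ext.monkeypatch() is the usual workaround:

from django.db.models import Manager, Model, QuerySet


class ProductQuerySet(QuerySet["Product"]):
    ...


class ProductManager(Manager["Product"]):
    def create_red_product(self, **kwargs) -> "Product":
        # self.model is now known to be Product, so the return type checks out
        return self.model(color=Product.Color.RED, **kwargs)


_product_manager = ProductManager.from_queryset(ProductQuerySet)


class Product(Model):
    ...  # fields as in the original model
    objects = _product_manager()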
3
2
75,958,929
2023-4-7
https://stackoverflow.com/questions/75958929/image-with-hyperlink-in-borb-table
I am trying to create a pdf document that includes a table. For each row of the table, a transaction will be recorded, and I would like the 4th column to include a paperclip image (or emoji) with a hyperlink to a document stored online. I would also like the 6th column to include tag names with a specific background color associated with each tag name, ideally with the background as a rounded rectangle. So far I am not able to put different backgrounds for a single cell. I have thought about creating one row per tag, and merge the cells for all the other columns, but I am not sure it would be the best option as sometimes several tags could fit in one line. So far I have: from datetime import datetime from decimal import Decimal from pathlib import Path from borb.pdf.canvas.font.simple_font.true_type_font import TrueTypeFont from borb.pdf.canvas.color.color import HexColor from borb.pdf.canvas.layout.layout_element import Alignment from borb.pdf.canvas.layout.table.fixed_column_width_table import FixedColumnWidthTable as Table from borb.pdf.canvas.layout.table.table import TableCell from borb.pdf.canvas.layout.text.paragraph import Paragraph from borb.pdf.canvas.layout.annotation.remote_go_to_annotation import RemoteGoToAnnotation from borb.pdf.canvas.geometry.rectangle import Rectangle from borb.pdf.canvas.layout.emoji.emoji import Emojis rubik = TrueTypeFont.true_type_font_from_file(Path(__file__).parent / "fonts/Rubik-Regular.ttf") rubik_medium = TrueTypeFont.true_type_font_from_file(Path(__file__).parent / "fonts/Rubik-Medium.ttf") background_color = HexColor("#FF5733") def generate_table(transactions): table = Table(number_of_rows=1 + len(transactions), number_of_columns=6, horizontal_alignment=Alignment.JUSTIFIED, column_widths=[Decimal(60), Decimal(60), Decimal(220), Decimal(30), Decimal(50), Decimal(50),Decimal(80)]) table.add(TableCell(Paragraph("Date", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) table.add(TableCell(Paragraph("Account", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) table.add(TableCell(Paragraph("Description", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) table.add(TableCell(Paragraph("Attachment", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) table.add(TableCell(Paragraph("Value", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) table.add(TableCell(Paragraph("Tags", font=rubik_medium, font_size=Decimal(11), horizontal_alignment=Alignment.CENTERED))) for transaction in transactions: # Column 1 table.add(TableCell(Paragraph(datetime.strftime(transaction[0], '%d/%m/%y'), font=rubik, font_size=Decimal(8)))) # Column 2 table.add(TableCell(Paragraph(transaction[1], font=rubik, font_size=Decimal(8), horizontal_alignment=Alignment.CENTERED))) # Column 3 table.add(TableCell(Paragraph(transaction[2], font=rubik, font_size=Decimal(8), horizontal_alignment=Alignment.CENTERED))) # Basically I would need to somehow merge the following lines as my column 4 table.add(TableCell(RemoteGoToAnnotation(Rectangle(Decimal(32), Decimal(32), Decimal(32), Decimal(32)), uri=transaction[3]))) table.add(TableCell(Emojis.PAPERCLIP.value)) # Column 5 table.add(TableCell(Paragraph(f"{transaction[4]:.2f}", font=rubik, font_size=Decimal(8), horizontal_alignment=Alignment.CENTERED))) # Column 6 table.add(TableCell(Paragraph("\n".join(transaction[5]), font=rubik, font_size=Decimal(8), 
border_radius_top_left=Decimal(10), border_radius_top_right=Decimal(10), border_radius_bottom_left=Decimal(10), border_radius_bottom_right=Decimal(10), background_color=background_color, horizontal_alignment=Alignment.CENTERED)))
disclaimer: I am the author of borb. import random import typing from _decimal import Decimal from borb.pdf import PDF, Document, Page, FlexibleColumnWidthTable, Table, Paragraph, Lipsum, HeterogeneousParagraph, \ ChunkOfText, HexColor, Color, PageLayout, SingleColumnLayout from borb.pdf.canvas.layout.annotation.remote_go_to_annotation import RemoteGoToAnnotation from borb.pdf.canvas.layout.emoji.emoji import Emojis from borb.pdf.canvas.layout.layout_element import LayoutElement def main(): # empty Document doc: Document = Document() # add empty Page page: Page = Page() doc.add_page(page) # layout layout: PageLayout = SingleColumnLayout(page) # build Table t: Table = FlexibleColumnWidthTable(number_of_columns=6, number_of_rows=5) # header t.add(Paragraph("Date", font="Helvetica-Bold")) t.add(Paragraph("Account", font="Helvetica-Bold")) t.add(Paragraph("Description", font="Helvetica-Bold")) t.add(Paragraph("Attachment", font="Helvetica-Bold")) t.add(Paragraph("Value", font="Helvetica-Bold")) t.add(Paragraph("Tags", font="Helvetica-Bold")) # data elements_to_add_annotations_for: typing.List[LayoutElement] = [] for _ in range(0, 4): t.add(Paragraph("04/01/1989")) t.add(Paragraph("Joris Schellekens")) # random description t.add(Paragraph(Lipsum.generate_lipsum_text(2), font_size=Decimal(5))) # emoji # we need to be able to access the position later, so we store the LayoutElement e: LayoutElement = Emojis.SMILE.value elements_to_add_annotations_for += [e] t.add(e) # random value t.add(Paragraph(random.choice(["Good", "Very good", "Above expectations"]))) # random tags tags: typing.List[str] = [] tags += [random.choice(["lorem", "ipsum", "dolor"])] tags += [random.choice(["sit", "amet", "consectetur"])] tags += [random.choice(["adipiscing", "elit", "sed"])] # define the colors for the tags possible_colors_for_tags: typing.List[Color] = [HexColor("ff595e"), HexColor("ffca3a"), HexColor("8ac926"), HexColor("1982c4"), HexColor("6a4c93")] colors: typing.List[Color] = [random.choice(possible_colors_for_tags) for _ in tags] t.add(HeterogeneousParagraph(chunks_of_text=[ChunkOfText(x, background_color=colors[i], border_color=colors[i], border_radius_bottom_left=Decimal(5), border_radius_bottom_right=Decimal(5), border_radius_top_left=Decimal(5), border_radius_top_right=Decimal(5), border_bottom=True, border_top=True, border_left=True, border_right=True) for i,x in enumerate(tags)])) # add Table to Page t.set_padding_on_all_cells(Decimal(5), Decimal(5), Decimal(5), Decimal(5)) layout.add(t) # add Annotation(s) for e in elements_to_add_annotations_for: page.add_annotation(RemoteGoToAnnotation(e.get_previous_paint_box(), "https://www.borbpdf.com/")) # write with open("output.pdf", "wb") as fh: PDF.dumps(fh, doc) if __name__ == "__main__": main() Some of the finer details of this snippet: I used the Lipsum class to generate a random description By keeping track of the LayoutElement it becomes easy to add an Annotation later. Annotation objects require a bounding box, and LayoutElement keeps track of where it was painted. HeterogeneousParagraph allows you to merge other (text-carrying) LayoutElement objects inside it. That is how I implemented the tags. I also set their borders, giving each a border_radius to really enhance the "tag feel". The final PDF looks something like this:
3
1
75,943,441
2023-4-5
https://stackoverflow.com/questions/75943441/scrapy-playwright-scraper-does-not-return-page-or-playwright-page-in-respons
I am stuck in the scraper portion of my project, I continued troubleshooting errors and my latest approach is at least not crashing and burning. However, the response.meta I am getting for whatever reason is not returning a playwright page. Hardware/setup: intel-based MacBook pro running Monterey v12.6.4 python 3.11.2 pipenv environment all packages are updated to latest stable release Functionality I am after is rather simple; scrape results from google. However I need to automate this preferably with a headless browser, and be able to pass in some user-defined parameters including the url, and how many results to scrape before stopping. Here is the main portion of my scraper, i.e. imports and spider definition: from scrapy.crawler import CrawlerProcess import scrapy class GoogleSpider(scrapy.Spider): name = 'google_spider' allowed_domains = ['www.google.com'] custom_settings = { 'CONCURRENT_REQUESTS': 1, 'DOWNLOAD_DELAY': 3, 'COOKIES_ENABLED': False, 'PLAYWRIGHT_BROWSER_TYPE': 'chromium', 'MIDDLEWARES': { 'scrapy_playwright.middleware.PlaywrightMiddleware': 800, }, } def __init__(self, domain, stop, user_agent, *args, **kwargs): super().__init__(*args, **kwargs) self.domain = domain self.stop = int(stop) self.custom_settings['USER_AGENT'] = user_agent self.start_urls = [f'https://www.google.com/search?q=intitle%3A%28%22Data+Scientist%22+OR+%22Data+Engineer%22+OR+%22Machine+Learning%22+OR+%22Data+Analyst%22+OR+%22Software+Engineer%22%29+Remote+-%22Director%22+-%22Principal%22+-%22Staff%22+-%22Frontend%22+-%22Front+End%22+-%22Full+Stack%22+site%3A{self.domain}%2F%2A+after%3A2023-03-27'] self.urls_collected = [] @classmethod def from_crawler(cls, crawler, *args, **kwargs): return super().from_crawler(crawler, *args, **kwargs) def start_requests(self): yield scrapy.Request(self.start_urls[0], meta={"playwright": True, "playwright_include_page": True}) async def parse(self, response): print(f"\n\nRESPONSE STATUS: {response.status}, RESPONSE URL: {response.url}\n\n") print(f"RESPONSE META KEYS: {response.meta.keys()}\n\n") page = response.meta['page'] current_urls_length = 0 while True: locator = page.locator('.yuRUbf>a') urls = await locator.evaluate_all('nodes => nodes.map(n => n.href)') new_urls = [url for url in urls if self.domain in url and url not in self.urls_collected] self.urls_collected.extend(new_urls) if len(self.urls_collected) >= self.stop: self.urls_collected = self.urls_collected[:self.stop] break if len(urls) > current_urls_length: current_urls_length = len(urls) await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") await page.waitForTimeout(1000) else: break self.logger.info(f'Collected {len(self.urls_collected)} URLs:') for url in self.urls_collected: self.logger.info(url) And the latest execution file: from scrapy.crawler import CrawlerProcess from spiders.googlespider import GoogleSpider def main(domain, stop, user_agent): process = CrawlerProcess() process.crawl(GoogleSpider, domain=domain, stop=stop, user_agent=user_agent) process.start() if __name__ == '__main__': domain = 'jobs.lever.co' stop = 25 user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3' user_agent2 = "Opera/9.80 (Windows NT 5.1; U; MRA 5.5 (build 02842); ru) Presto/2.7.62 Version/11.00" user_agent3 = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.2; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)" main(domain=domain, stop=stop, user_agent=user_agent3) And the 
logs: 2023-04-07 09:01:17 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot) 2023-04-07 09:01:17 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.4, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.2 (v3.11.2:878ead1ac1, Feb 7 2023, 10:02:41) [Clang 13.0.0 (clang-1300.0.29.30)], pyOpenSSL 23.1.1 (OpenSSL 3.1.0 14 Mar 2023), cryptography 40.0.1, Platform macOS-12.6.4-x86_64-i386-64bit 2023-04-07 09:01:17 [scrapy.crawler] INFO: Overridden settings: {'CONCURRENT_REQUESTS': 1, 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3} 2023-04-07 09:01:17 [py.warnings] WARNING: /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/scrapy/utils/request.py:232: ScrapyDeprecationWarning: '2.6' is a deprecated value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. It is also the default value. In other words, it is normal to get this warning if you have not defined a value for the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting. This is so for backward compatibility reasons, but it will change in a future version of Scrapy. See the documentation of the 'REQUEST_FINGERPRINTER_IMPLEMENTATION' setting for information on how to handle this deprecation. return cls(crawler) 2023-04-07 09:01:17 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor 2023-04-07 09:01:17 [scrapy.extensions.telnet] INFO: Telnet Password: f1350e3a3455ff22 2023-04-07 09:01:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats'] 2023-04-07 09:01:18 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2023-04-07 09:01:18 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2023-04-07 09:01:18 [scrapy.middleware] INFO: Enabled item pipelines: [] 2023-04-07 09:01:18 [scrapy.core.engine] INFO: Spider opened 2023-04-07 09:01:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2023-04-07 09:01:18 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024 2023-04-07 09:01:18 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.google.com/search?q=intitle%3A%28%22Data+Scientist%22+OR+%22Data+Engineer%22+OR+%22Machine+Learning%22+OR+%22Data+Analyst%22+OR+%22Software+Engineer%22%29+Remote+-%22Director%22+-%22Principal%22+-%22Staff%22+-%22Frontend%22+-%22Front+End%22+-%22Full+Stack%22+site%3Ajobs.lever.co%2F%2A+after%3A2023-03-27> (referer: None) RESPONSE STATUS: 200, RESPONSE URL: 
https://www.google.com/search?q=intitle%3A%28%22Data+Scientist%22+OR+%22Data+Engineer%22+OR+%22Machine+Learning%22+OR+%22Data+Analyst%22+OR+%22Software+Engineer%22%29+Remote+-%22Director%22+-%22Principal%22+-%22Staff%22+-%22Frontend%22+-%22Front+End%22+-%22Full+Stack%22+site%3Ajobs.lever.co%2F%2A+after%3A2023-03-27 RESPONSE META KEYS: dict_keys(['playwright', 'playwright_include_page', 'download_timeout', 'download_slot', 'download_latency']) 2023-04-07 09:01:18 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.google.com/search?q=intitle%3A%28%22Data+Scientist%22+OR+%22Data+Engineer%22+OR+%22Machine+Learning%22+OR+%22Data+Analyst%22+OR+%22Software+Engineer%22%29+Remote+-%22Director%22+-%22Principal%22+-%22Staff%22+-%22Frontend%22+-%22Front+End%22+-%22Full+Stack%22+site%3Ajobs.lever.co%2F%2A+after%3A2023-03-27> (referer: None) Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/twisted/internet/defer.py", line 1697, in _inlineCallbacks result = context.run(gen.send, result) File "/Users/reesh/Projects/qj/app/gs/gs/spiders/googlespider.py", line 37, in parse page = response.meta['page'] KeyError: 'page' 2023-04-07 09:01:19 [scrapy.core.engine] INFO: Closing spider (finished) 2023-04-07 09:01:19 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 507, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 17104, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.874591, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2023, 4, 7, 16, 1, 19, 103146), 'httpcompression/response_bytes': 53816, 'httpcompression/response_count': 1, 'log_count/DEBUG': 2, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'log_count/WARNING': 1, 'memusage/max': 61571072, 'memusage/startup': 61571072, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/KeyError': 1, 'start_time': datetime.datetime(2023, 4, 7, 16, 1, 18, 228555)} 2023-04-07 09:01:19 [scrapy.core.engine] INFO: Spider closed (finished) So response.meta is completely missing the "playwright_page" or "page" entry, and that's where my spider stops working. In fact anything after that definition I am not sure works. Truth be told, I am not married to using Scrapy-playwright, it simply was the first solution I found to handle google's new-ish infinite scroll interface. I truly don't mind going back to the drawing board and starting fresh, as long as my scraper works as intended. Please weigh in, I am open to any and all suggestions!
What you are shown in the browser is not always the same as what you might receive when using a headless browser. When in doubt it's a good idea to write the entire contents of the page to an html file and then inspect it either with a code editor or with your browser so you can see exactly what the page you are actually receiving in your response objects is getting. First thing is that your custom settings need to be adjusted. You should have the http and https download handlers installed when using scrapy playwright... like so: custom_settings = { "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor", 'CONCURRENT_REQUESTS': 1, 'DOWNLOAD_DELAY': 3, 'COOKIES_ENABLED': False, 'PLAYWRIGHT_BROWSER_TYPE': 'chromium', "DOWNLOAD_HANDLERS": { "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", } } Also you the value you need to search for in response.meta is 'playwright_page', which should fix your issue with not receiving the correct page. Finally if you were to follow my first piece of advice you would see that your html selectors might not exist in the actual page you are receiving from the headless browser. In my case there is no infinite scroll implemented and instead there is a "Next" link on the bottom of each page that needs to be clicked. Also the class selectors are all different from the one shown in the browser. The example below worked for me, but it might not be the same for you, however using the process described above you might be able to get the results you are looking for. import re import scrapy from scrapy.crawler import CrawlerProcess from scrapy.selector import Selector class GoogleSpider(scrapy.Spider): name = 'google_spider' allowed_domains = ['www.google.com'] custom_settings = { "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor", "DOWNLOAD_HANDLERS": { "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler", } } def __init__(self, domain, stop, user_agent, *args, **kwargs): super().__init__(*args, **kwargs) self.domain = domain self.stop = int(stop) self.custom_settings['USER_AGENT'] = user_agent self.start_urls = [f'https://www.google.com/search?q=intitle%3A%28%22Data+Scientist%22+OR+%22Data+Engineer%22+OR+%22Machine+Learning%22+OR+%22Data+Analyst%22+OR+%22Software+Engineer%22%29+Remote+-%22Director%22+-%22Principal%22+-%22Staff%22+-%22Frontend%22+-%22Front+End%22+-%22Full+Stack%22+site%3A{self.domain}%2F%2A+after%3A2023-03-27'] self.urls_collected = [] @classmethod def from_crawler(cls, crawler, *args, **kwargs): return super().from_crawler(crawler, *args, **kwargs) def start_requests(self): yield scrapy.Request(self.start_urls[0], meta={"playwright": True, "playwright_include_page": True}) async def get_page_info(self, page): for i in range(10): val = page.viewport_size["height"] await page.mouse.wheel(0, val) await page.wait_for_timeout(1000) text = await page.content() selector = Selector(text=text) urls = [] for row in selector.xpath("//div[contains(@class, 'kCrYT')]"): text = row.xpath(".//h3//text()").get() url = row.xpath(".//a/@href").get() if url: urls.append({text: url}) print(urls) self.urls_collected += urls return urls async def parse(self, response): page = response.meta['playwright_page'] urls = await self.get_page_info(page) found = True while found: try: element = page.get_by_text("Next") print(element, "parsing next page") await 
element.click() more_urls = await self.get_page_info(page) urls += more_urls except: found = False return urls def main(domain, stop, user_agent): process = CrawlerProcess() process.crawl(GoogleSpider, domain=domain, stop=stop, user_agent=user_agent) process.start() if __name__ == '__main__': domain = 'jobs.lever.co' stop = 25 user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3' user_agent2 = "Opera/9.80 (Windows NT 5.1; U; MRA 5.5 (build 02842); ru) Presto/2.7.62 Version/11.00" user_agent3 = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.2; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)" main(domain=domain, stop=stop, user_agent=user_agent3)
4
2
75,958,246
2023-4-7
https://stackoverflow.com/questions/75958246/how-to-check-if-a-dataclass-is-frozen
Is there a way to check if a Python dataclass has been set to frozen? If not, would it be valuable to have a method like is_frozen in the dataclasses module to perform this check? e.g. from dataclasses import dataclass, is_frozen @dataclass(frozen=True) class Person: name: str age: int person = Person('Alice', 25) if not is_frozen(person): person.name = 'Bob' One way to check if a dataclass has been set to frozen is to try to modify one of its attributes and catch the FrozenInstanceError exception that will be raised if it's frozen. e.g. from dataclasses import FrozenInstanceError is_frozen = False try: person.name = 'check_if_frozen' except FrozenInstanceError: is_frozen = True However, if the dataclass is not frozen, the attribute will be modified, which may be unwanted just for the sake of performing the check.
Yes, it looks like you can retrieve the parameters' information from __dataclass_params__. It returns an instance of the _DataclassParams type, which is simply an object holding the values of the init, repr, eq, order, unsafe_hash and frozen attributes: from dataclasses import dataclass @dataclass(frozen=True) class Person: name: str age: int print(Person.__dataclass_params__) print(Person.__dataclass_params__.frozen) output: _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=True) True
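If you want the is_frozen helper from the question, a minimal sketch built on that same (technically internal) __dataclass_params__ attribute could be:

from dataclasses import dataclass, is_dataclass


def is_frozen(obj) -> bool:
    # accepts either a dataclass type or an instance of one
    if not is_dataclass(obj):
        raise TypeError(f"{obj!r} is not a dataclass")
    cls = obj if isinstance(obj, type) else type(obj)
    return cls.__dataclass_params__.frozen


@dataclass(frozen=True)
class Person:
    name: str
    age: int


print(is_frozen(Person))               # True
print(is_frozen(Person("Alice", 25)))  # True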
6
6
75,958,222
2023-4-7
https://stackoverflow.com/questions/75958222/can-i-return-400-error-instead-of-422-error
I validate data using a Pydantic schema in my FastAPI project, and if validation fails it returns 422. Can I change that to 400?
Yes, you can. For example, you can register the following exception handler: from fastapi import FastAPI, Request, status from fastapi.exceptions import RequestValidationError from fastapi.responses import JSONResponse app = FastAPI() @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): return JSONResponse( status_code=status.HTTP_400_BAD_REQUEST, content={"detail": exc.errors()}, ) # your routes and endpoints here In this case, any validation error raised by Pydantic will result in a 400 Bad Request status code being returned by your API; exc.errors() contains the validation error details.
3
5
75,951,019
2023-4-6
https://stackoverflow.com/questions/75951019/unable-to-install-mysqlclient-package-on-ec2-instance
I am trying to install mysqlclient Python package on an Amazon EC2 instance running Amazon Linux 2023 AMI. When I run pip install mysqlclient, I get the following error message: Collecting mysqlclient==2.1.0 Downloading mysqlclient-2.1.0.tar.gz (87 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.6/87.6 kB 17.2 MB/s eta 0:00:00 Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─> [16 lines of output] /bin/sh: line 1: mysql_config: command not found /bin/sh: line 1: mariadb_config: command not found /bin/sh: line 1: mysql_config: command not found Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/tmp/pip-install-ywdhtbse/mysqlclient_802cda8d3393451492ff091d28d6482f/setup.py", line 15, in <module> metadata, options = get_config() File "/tmp/pip-install-ywdhtbse/mysqlclient_802cda8d3393451492ff091d28d6482f/setup_posix.py", line 70, in get_config libs = mysql_config("libs") File "/tmp/pip-install-ywdhtbse/mysqlclient_802cda8d3393451492ff091d28d6482f/setup_posix.py", line 31, in mysql_config raise OSError("{} not found".format(_mysql_config_path)) OSError: mysql_config not found mysql_config --version mariadb_config --version mysql_config --libs [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I have tried the following : sudo yum -y install mysql which resulted in Last metadata expiration check: 0:41:31 ago on Thu Apr 6 13:36:18 2023. No match for argument: mysql Can anyone suggest a solution to this problem? Thanks in advance! EDIT Here are all the commands I did : sudo yum update -y sudo yum install python3 -y sudo yum install python3-pip -y sudo dnf install -y mariadb105-devel gcc pip install mysqlclient==2.1.0 (in a virtual environment) But with the last one I get the following error : Collecting importlib-metadata Downloading importlib_metadata-6.1.0-py3-none-any.whl (21 kB) Collecting zipp>=0.5 Downloading zipp-3.15.0-py3-none-any.whl (6.8 kB) Building wheels for collected packages: Flask-MySQLdb, mysqlclient Building wheel for Flask-MySQLdb (setup.py) ... done Created wheel for Flask-MySQLdb: filename=Flask_MySQLdb-1.0.1-py3-none-any.whl size=4675 sha256=e0bb7388b33b749ec52908abdb0e34d15c10544c2c2c96932e9ee90b117e0cfa Stored in directory: /home/ec2-user/.cache/pip/wheels/48/ba/f7/32a9b364c18e9e479d5dd05305109e7a72d85d8e29c02a10ea Building wheel for mysqlclient (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. 
│ exit code: 1 ╰─> [44 lines of output] mysql_config --version ['10.5.5'] mysql_config --libs ['-L/usr/lib64/', '-lmariadb'] mysql_config --cflags ['-I/usr/include/mysql', '-I/usr/include/mysql/mysql'] ext_options: library_dirs: ['/usr/lib64/'] libraries: ['mariadb'] extra_compile_args: ['-std=c99'] extra_link_args: [] include_dirs: ['/usr/include/mysql', '/usr/include/mysql/mysql'] extra_objects: [] define_macros: [('version_info', "(2,1,0,'final',0)"), ('__version__', '2.1.0')] running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-39 creating build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/__init__.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/_exceptions.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/connections.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/converters.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/cursors.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/release.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/times.py -> build/lib.linux-x86_64-cpython-39/MySQLdb creating build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants running build_ext building 'MySQLdb._mysql' extension creating build/temp.linux-x86_64-cpython-39 creating build/temp.linux-x86_64-cpython-39/MySQLdb gcc -Wno-unused-result -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(2,1,0,'final',0) -D__version__=2.1.0 -I/usr/include/mysql -I/usr/include/mysql/mysql -I/home/ec2-user/myapp/env/include -I/usr/include/python3.9 -c MySQLdb/_mysql.c -o build/temp.linux-x86_64-cpython-39/MySQLdb/_mysql.o -std=c99 MySQLdb/_mysql.c:46:10: fatal error: Python.h: No such file or directory 46 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for mysqlclient Running setup.py clean for mysqlclient Successfully built Flask-MySQLdb Failed to build mysqlclient Installing collected packages: certifi, zipp, Werkzeug, urllib3, six, mysqlclient, MarkupSafe, itsdangerous, idna, colorama, click, charset-normalizer, requests, Jinja2, importlib-metadata, Flask, Flask-MySQLdb, Flask-Cors Running setup.py install for mysqlclient ... error error: subprocess-exited-with-error × Running setup.py install for mysqlclient did not run successfully. │ exit code: 1 ╰─> [46 lines of output] mysql_config --version ['10.5.5'] mysql_config --libs ['-L/usr/lib64/', '-lmariadb'] mysql_config --cflags ['-I/usr/include/mysql', '-I/usr/include/mysql/mysql'] ext_options: library_dirs: ['/usr/lib64/'] libraries: ['mariadb'] extra_compile_args: ['-std=c99'] extra_link_args: [] include_dirs: ['/usr/include/mysql', '/usr/include/mysql/mysql'] extra_objects: [] define_macros: [('version_info', "(2,1,0,'final',0)"), ('__version__', '2.1.0')] running install /home/ec2-user/myapp/env/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build/lib.linux-x86_64-cpython-39 creating build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/__init__.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/_exceptions.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/connections.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/converters.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/cursors.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/release.py -> build/lib.linux-x86_64-cpython-39/MySQLdb copying MySQLdb/times.py -> build/lib.linux-x86_64-cpython-39/MySQLdb creating build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants copying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-cpython-39/MySQLdb/constants running build_ext building 'MySQLdb._mysql' extension creating build/temp.linux-x86_64-cpython-39 creating build/temp.linux-x86_64-cpython-39/MySQLdb gcc -Wno-unused-result -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -DNDEBUG -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -O2 -ftree-vectorize -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fstack-protector-strong -m64 -march=x86-64-v2 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(2,1,0,'final',0) -D__version__=2.1.0 -I/usr/include/mysql -I/usr/include/mysql/mysql -I/home/ec2-user/myapp/env/include -I/usr/include/python3.9 -c MySQLdb/_mysql.c -o build/temp.linux-x86_64-cpython-39/MySQLdb/_mysql.o -std=c99 MySQLdb/_mysql.c:46:10: fatal error: Python.h: No such file or directory 46 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> mysqlclient note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
On fresh Amazon Linux 2023 you have to do: # install pip (AL 2023 does not have one by default) sudo dnf install -y pip # install dependencies sudo dnf install -y mariadb105-devel gcc python3-devel # install mysqlclient pip install mysqlclient
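Once those packages are in place and pip install mysqlclient succeeds, a quick way to confirm the build actually works is to import it (a minimal check, not part of the original answer; version_info is the version tuple the package exposes):
import MySQLdb  # installed by the mysqlclient package
print(MySQLdb.version_info)  # e.g. (2, 1, 0, 'final', 0)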
3
15
75,953,146
2023-4-6
https://stackoverflow.com/questions/75953146/unable-to-use-show-and-unable-to-perform-further-operations-on-a-spark-datafr
I was trying to use an UDF in spark. After applying the udf to a column, df.show() was not working neither I was able to apply any further operation on that dataframe. So, I ran the code which is given in the documentation here and got the same error The code was: from pyspark.sql.types import IntegerType slen = udf(lambda s: len(s), IntegerType()) @udf def to_upper(s): if s is not None: return s.upper() @udf(returnType=IntegerType()) def add_one(x): if x is not None: return x + 1 df = spark.createDataFrame([(1, "John Doe", 21)], ("id", "name", "age")) df.select(slen("name").alias("slen(name)"), to_upper("name"), add_one("age")).show() Here is the error: Py4JJavaError Traceback (most recent call last) Cell In[13], line 14 11 return x + 1 13 df = spark.createDataFrame([(1, "John Doe", 21)], ("id", "name", "age")) ---> 14 df.select(slen("name").alias("slen(name)"), to_upper("name"), add_one("age")).show() File C:\Program Files\Python310\lib\site-packages\pyspark\sql\dataframe.py:606, in DataFrame.show(self, n, truncate, vertical) 603 raise TypeError("Parameter 'vertical' must be a bool") 605 if isinstance(truncate, bool) and truncate: --> 606 print(self._jdf.showString(n, 20, vertical)) 607 else: 608 try: File C:\Program Files\Python310\lib\site-packages\py4j\java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -> 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File C:\Program Files\Python310\lib\site-packages\pyspark\sql\utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw) 188 def deco(*a: Any, **kw: Any) -> Any: 189 try: --> 190 return f(*a, **kw) 191 except Py4JJavaError as e: 192 converted = convert_exception(e.java_exception) File C:\Program Files\Python310\lib\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --> 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( 331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n". 332 format(target_id, ".", name, value)) Py4JJavaError: An error occurred while calling o134.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 2) (LAPTOP-SI50IG8L executor driver): org.apache.spark.SparkException: Python worker failed to connect back. 
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:157) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketTimeoutException: Accept timed out at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source) at java.net.AbstractPlainSocketImpl.accept(Unknown Source) at java.net.PlainSocketImpl.accept(Unknown Source) at java.net.ServerSocket.implAccept(Unknown Source) at java.net.ServerSocket.accept(Unknown Source) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 
38 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2238) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2259) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2278) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:506) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:459) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3868) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2863) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3858) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3856) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3856) at org.apache.spark.sql.Dataset.head(Dataset.scala:2863) at org.apache.spark.sql.Dataset.take(Dataset.scala:3084) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288) at org.apache.spark.sql.Dataset.showString(Dataset.scala:327) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at 
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.lang.Thread.run(Unknown Source) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:157) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source) at java.net.AbstractPlainSocketImpl.accept(Unknown Source) at java.net.PlainSocketImpl.accept(Unknown Source) at java.net.ServerSocket.implAccept(Unknown Source) at java.net.ServerSocket.accept(Unknown Source) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 38 more I don't know what is the problem. 
Versions Python: 3.10.10 Pyspark: 3.3.2 I read one answer on stack overflow saying that UDF's are slow but I think it should work with the code given in the documentation.
It's caused by the connection between PySpark and the Python worker process. You can either set the environment variables PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON to point at your Python interpreter, or set the spark.pyspark.driver.python and spark.pyspark.python configuration properties when you use spark-submit.
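A minimal sketch of the environment-variable approach (assuming a local Spark setup; the variables must be set before the SparkSession is created, and sys.executable is just one convenient way to point both sides at the same interpreter):
import os
import sys

os.environ["PYSPARK_PYTHON"] = sys.executable        # interpreter used by the workers
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable  # interpreter used by the driver

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("udf-test").getOrCreate()
The spark-submit equivalent would pass --conf spark.pyspark.python=/path/to/python and --conf spark.pyspark.driver.python=/path/to/python instead.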
5
2
75,955,276
2023-4-7
https://stackoverflow.com/questions/75955276/pyspark-json-to-pyspark-dataframe
I want to transform this json to a pyspark dataframe I have added my current code. json = { "key1": 0.75, "values":[ { "id": 2313, "val1": 350, "val2": 6000 }, { "id": 2477, "val1": 340, "val2": 6500 } ] } my code: I can get the expected output using my code. Hope someone improve this. import json from pyspark.sql import SparkSession spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate() json_string = json.dumps({ "key1": 0.75, "values":[ { "id": 2313, "val1": 350, "val2": 6000 }, { "id": 2477, "val1": 340, "val2": 6500 } ] }) df = spark.read.json(spark.sparkContext.parallelize([json_string])) df = df.select("key1", "values.id", "values.val1", "values.val2") df.show() output +----+-------------+-------------+-------------+ |key1| id| val1| val2| +----+-------------+-------------+-------------+ |0.75| [2313, 2477]| [350, 340]| [6000, 6500]| +----+-------------+-------------+-------------+ Help appreciate to get the expecting output. Expecting output: +----+----+----+----+ |key1| id|val1|val2| +----+----+----+----+ |0.75|2313| 350|6000| |0.75|2477| 340|6500| +----+----+----+----+
You can try the Spark SQL inline table function, which explodes an array of structs into one row per element with one column per struct field. df = df.selectExpr("key1", "inline(values)")
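For reference, a minimal end-to-end sketch using the json_string and spark session from the question (the expected exploded output is shown as comments):
df = spark.read.json(spark.sparkContext.parallelize([json_string]))
df = df.selectExpr("key1", "inline(values)")
df.show()
# +----+----+----+----+
# |key1|  id|val1|val2|
# +----+----+----+----+
# |0.75|2313| 350|6000|
# |0.75|2477| 340|6500|
# +----+----+----+----+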
5
4
75,954,148
2023-4-6
https://stackoverflow.com/questions/75954148/tricky-conversion-of-field-names-to-values-while-performing-row-by-row-de-aggreg
I have a dataset where I would like to convert specific field names to values while performing a de aggregation the values into their own unique rows as well as perform a long pivot. Data Start Date End Area Final Type Middle Stat Low Stat High Stat Middle Stat1 Low Stat1 High Stat1 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 226 20 10 0 0 0 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 130 50 0 0 0 0 data = { "Start": ['8/1/2013', '8/1/2013'], "Date": ['9/1/2013', '9/1/2013'], "End": ['10/1/2013', '10/1/2013'], "Area": ['NY', 'CA'], "Final": ['3/1/2023', '3/1/2023'], "Type": ['CC', 'AA'], "Middle Stat": [226, 130], "Low Stat": [20, 50], "High Stat": [10, 0], "Middle Stat1": [0, 0], "Low Stat1": [0, 0], "High Stat1": [0, 0] } Desired Start Date End Area Final Type Stat Range Stat1 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 20 Low 0 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 50 Low 0 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 226 Middle 0 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 130 Middle 0 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC 10 High 0 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA 0 High 0 Doing I believe I have to inject some sort of wide to long method, (SO member assisted) however unsure how to incorporate this whilst having the same suffix in the targeted (columns of interest) column names. pd.wide_to_long(df, stubnames=['Low','Middle','High'], i=['Start','Date','End','Area','Final'], j='', sep=' ', suffix='(stat)' ).unstack(level=-1, fill_value=0).stack(level=0).reset_index() Any suggestion is appreciated. #Original Dataset import pandas as pd # create DataFrame data = {'Start': ['9/1/2013', '10/1/2013', '11/1/2013', '12/1/2013'], 'Date': ['10/1/2016', '11/1/2016', '12/1/2016', '1/1/2017'], 'End': ['11/1/2016', '12/1/2016', '1/1/2017', '2/1/2017'], 'Area': ['NY', 'NY', 'NY', 'NY'], 'Final': ['3/1/2023', '3/1/2023', '3/1/2023', '3/1/2023'], 'Type': ['CC', 'CC', 'CC', 'CC'], 'Low Stat': ['', '', '', ''], 'Low Stat1': ['', '', '', ''], 'Middle Stat': ['0', '0', '0', '0'], 'Middle Stat1': ['0', '0', '0', '0'], 'Re': ['','','',''], 'Set': ['0', '0', '0', '0'], 'Set2': ['0', '0', '0', '0'], 'Set3': ['0', '0', '0', '0'], 'High Stat': ['', '', '', ''], 'High Stat1': ['', '', '', '']} df = pd.DataFrame(data)
One option is with pivot_longer from pyjanitor - in this case we use the special placeholder .value to identify the parts of the column that we want to remain as headers, while the rest get collated into a new column : # pip install pyjanitor import pandas as pd import janitor (df .pivot_longer( index = slice('Start', 'Type'), names_to = ("Range", ".value"), names_sep = " ") ) Start Date End Area Final Type Range Stat Stat1 0 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC Middle 226 0 1 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA Middle 130 0 2 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC Low 20 0 3 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA Low 50 0 4 8/1/2013 9/1/2013 10/1/2013 NY 3/1/2023 CC High 10 0 5 8/1/2013 9/1/2013 10/1/2013 CA 3/1/2023 AA High 0 0
4
1
75,954,211
2023-4-6
https://stackoverflow.com/questions/75954211/valueerror-source-code-string-cannot-contain-null-bytes
I'm originally an Ubuntu user, but I have to use a Windows Virtual Machine for some reason. I was trying to pip-install a package using the CMD, however, I'm getting the following error: from pip._vendor.packaging.utils import canonicalize_name ValueError: source code string cannot contain null bytes I used pip install numpy and pip3 install numpy along with other commands I found while tying to fix the problem. I checked that pip is available and reinstalled Python to make sure the path is added. I've also made sure that I'm running everything as an administrator. Everything seems to be installed properly, but I keep getting that error. I've also checked almost all other StackOverflow questions related to this error message. How can I solve this?
The error occurred while I was using "Python 3.10.11 (64-bit)". Even though I reinstalled it, the issue continued. When I downgraded to "Python 3.9.0 (64-bit)", the issue was solved.
3
2
75,953,279
2023-4-6
https://stackoverflow.com/questions/75953279/modulenotfounderror-no-module-named-pandas-core-indexes-numeric-using-metaflo
I used Metaflow to load a Dataframe. It was successfully unpickled from the artifact store, but when I try to view its index using df.index, I get an error that says ModuleNotFoundError: No module named 'pandas.core.indexes.numeric'. Why? I've looked at other answers with similar error messages here and here, which say that this is caused by trying to unpickle a dataframe with older versions of Pandas. However, my error is slightly different, and it is not fixed by upgrading Pandas (pip install pandas -U).
This issue is caused by the new Pandas 2.0.0 release breaking backwards compatibility with Pandas 1.x, although I don't see this documented in the release notes. The solution is to downgrade pandas to the 1.x series: pip install "pandas<2.0.0"
39
46
75,945,689
2023-4-6
https://stackoverflow.com/questions/75945689/python-efficient-calculation-where-end-value-of-one-row-is-the-start-value-of
I would like to make simple calculations on a rolling basis, but have heavy performance issues when I try to solve this with a nested for-loop. I need to perform this kind of operations on very large data, but have to use standard Python (incl. Pandas). The values are floats and can be negative, zero or positive. I have a pd.DataFrame (df1) which contains (structured by some dimensions, lets call them key1 and key2) a start column, a end column and some operations-columns in between, which are supposed to be used to calculate the end column based on the start column. Basically, the simple logic is: start + plus - minus = end, where the end value of each row is the start value of the next row. This would need to be done by the two keys, i.e. for AX, AY and BX seperately. df2 shows the desired result, but I don't know how to get there in an efficient way without blowing up my memory if this task is done on much larger tables. import pandas as pd import numpy as np df1 = pd.DataFrame(np.array([["A", "X", 3,6,4,0], ["A", "X", 0,2,10,0], ["A", "X", 0,9,3,0], ["A", "Y", 8,3,1,0], ["A", "Y", 0,2,3,0], ["B", "X", 4,4,2,0], ["B", "X", 0,1,0,0]]), columns=['key1', 'key2', 'start', 'plus', 'minus', 'end']) >>> df1 key1 key2 start plus minus end 0 A X 3 6 4 0 1 A X 0 2 10 0 2 A X 0 9 3 0 3 A Y 8 3 1 0 4 A Y 0 2 3 0 5 B X 4 4 2 0 6 B X 0 1 0 0 df2 = pd.DataFrame(np.array([["A", "X", 3,6,4,5], ["A", "X", 5,2,10,-3], ["A", "X", -3,9,3,3], ["A", "Y", 8,3,1,10], ["A", "Y", 10,2,3,9], ["B", "X", 4,4,2,2], ["B", "X", 2,1,0,3]]), columns=['key1', 'key2', 'start', 'plus', 'minus', 'end']) >>> df2 key1 key2 start plus minus end 0 A X 3 6 4 5 1 A X 5 2 10 -3 2 A X -3 9 3 3 3 A Y 8 3 1 10 4 A Y 10 2 3 9 5 B X 4 4 2 2 6 B X 2 1 0 3
You can create helper Series for subtract plus and minus columns, create cumulative sums per groups by both columns and add first value of start for final end column, then for start column use DataFrameGroupBy.shift with replace first value by original values in Series.fillna: plusminus = df1['plus'].sub(df1['minus']) df1 = df1.assign(plusminus = plusminus) g = df1.groupby(['key1','key2']) df1['end'] = g['plusminus'].cumsum().add(g['start'].transform('first')) df1['start'] = g['end'].shift().fillna(df1['start']).astype(int) df1 = df1.drop('plusminus', axis=1) print (df1) key1 key2 start plus minus end 0 A X 3 6 4 5 1 A X 5 2 1 6 2 A X 6 5 7 4 3 A Y 8 3 1 10 4 A Y 10 2 3 9 5 B X 4 4 2 6 6 B X 6 1 0 7
4
2
75,945,832
2023-4-6
https://stackoverflow.com/questions/75945832/what-is-the-output-type-of-subprocess-communicate
I was going through official documentation of Popen.communicate(). p = subprocess.Popen('echo hello',stdout=subprocess.PIPE,stderr=subprocess.PIPE,shell=True,universal_newlines=True) r,e = p.communicate() I want to know at which case it will return string output and which case it will return in bytes.(with example would be great) In the above case r type is string. Popen.communicate(input=None, timeout=None) Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate and set the returncode attribute. The optional input argument should be data to be sent to the child process, or None, if no data should be sent to the child. If streams were opened in text mode, input must be a string. Otherwise, it must be bytes. communicate() returns a tuple (stdout_data, stderr_data). The data will be strings if streams were opened in text mode; otherwise, bytes.
It depends on the text mode used as per the docs If encoding or errors are specified, or text (also known as universal_newlines) is true, the file objects stdin, stdout and stderr will be opened in text mode using the encoding and errors specified in the call or the defaults for io.TextIOWrapper... If text mode is not used, stdin, stdout and stderr will be opened as binary streams. No encoding or line ending conversion is performed. For future use-cases on well typed libraries, you can use typing.reveal_type with static analyzers such as mypy. This makes it quite easy to determine values in your code. Example, test.py from subprocess import Popen from typing import reveal_type p1 = Popen("blah", text=False) reveal_type(p1.communicate()) p2 = Popen("blah", text=True) reveal_type(p2.communicate()) > mypy test.py test.py:5: note: Revealed type is "Tuple[builtins.bytes, builtins.bytes]" test.py:9: note: Revealed type is "Tuple[builtins.str, builtins.str]" Success: no issues found in 1 source file You can see here, when text=False the values are bytes, when text=True the values are strings.
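A minimal runtime check (not from the original answer) shows the same thing without a type checker:
import subprocess

# text mode: communicate() returns str
p = subprocess.Popen("echo hello", stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     shell=True, text=True)
out, err = p.communicate()
print(type(out))  # <class 'str'>

# binary mode (the default): communicate() returns bytes
p = subprocess.Popen("echo hello", stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     shell=True)
out, err = p.communicate()
print(type(out))  # <class 'bytes'>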
5
3
75,945,804
2023-4-6
https://stackoverflow.com/questions/75945804/tricky-long-pivot-by-reverse-aggregation-transformation-pandas
I have a dataset where I would like to de aggregate the values into their own unique rows as well as perform a pivot, grouping by category. Data updated Period Date Area BB stat AA stat CC stat DD stat BB test AA test CC test DD test BB re AA re CC re BB test2 AA test2 CC test2 DD test2 8/1/2016 9/1/2016 NY 5 5 5 1 1 1 0 0 0 0 0 0 0 9/1/2016 10/1/2016 NY 6 6 6 4 4 4 0 0 0 0 0 0 0 8/1/2016 9/1/2016 CA 2 2 2 4 4 4 0 0 0 0 0 0 0 9/1/2016 10/1/2016 CA 1 1 1 -2 -2 -2 0 0 0 0 0 0 0 Desired Period Date Area stat test type re test2 8/1/2016 9/1/2016 NY 5 1 BB 0 0 9/1/2016 10/1/2016 NY 6 4 BB 0 0 8/1/2016 9/1/2016 NY 5 1 AA 0 0 9/1/2016 10/1/2016 NY 6 4 AA 0 0 8/1/2016 9/1/2016 NY 5 1 CC 0 0 9/1/2016 10/1/2016 NY 6 4 CC 0 0 8/1/2016 9/1/2016 NY 0 0 DD 0 0 9/1/2016 10/1/2016 NY 0 0 DD 0 0 8/1/2016 9/1/2016 CA 2 4 BB 0 0 9/1/2016 10/1/2016 CA 1 -2 BB 0 0 8/1/2016 9/1/2016 CA 2 4 AA 0 0 9/1/2016 10/1/2016 CA 1 -2 AA 0 0 8/1/2016 9/1/2016 CA 2 4 CC 0 0 9/1/2016 10/1/2016 CA 1 -2 CC 0 0 8/1/2016 9/1/2016 CA 0 0 DD 0 0 9/1/2016 10/1/2016 CA 0 0 DD 0 0 Doing value_vars = ["BB stat", "AA stat", "CC stat", "DD stat", "BB test", "AA test", "CC test", "DD test", "BB re", "AA re", "CC re"] df = df.melt(id_vars=["Period", "Date", "Area"], value_vars=value_vars) temp_df = df.variable.str.split("_", 1, expand=True) df["type"] = temp_df[0] df["name"] = temp_df[1] df = df.drop(columns=["variable"]) first_half = df.iloc[:len(df)//2] second_half = df.iloc[len(df)//2:] df = pd.merge(first_half, second_half, on=["Period", "Date", "Area", "type"], suffixes=("_1", "_2")) df.rename(columns = {'value_3':'stat''value_2':'test', 'value_1':'re'}, inplace = True) df.drop(columns=["name_1", "name_2"], inplace=True) df = df[[ "Period", "Date", "Area", "stat", "test", "type", "re" ]] df.sort_values(["Area", "type"], ascending=False, inplace=True) df.to_markdown() The following code fails to capture all the output columns. Any suggestion is appreciated.
Try pd.wide_to_long: pd.wide_to_long(df, stubnames=['AA', 'BB','CC','DD'], i=['Period','Date','Area'], j='', sep=' ', suffix='(test|re|stat)' ).unstack(level=-1, fill_value=0).stack(level=0).reset_index() Output: Period Date Area type re stat test 0 8/1/2016 9/1/2016 CA AA 0.0 2.0 4.0 1 8/1/2016 9/1/2016 CA BB 0.0 2.0 4.0 2 8/1/2016 9/1/2016 CA CC 0.0 2.0 4.0 3 8/1/2016 9/1/2016 CA DD NaN 0.0 0.0 4 8/1/2016 9/1/2016 NY AA 0.0 5.0 1.0 5 8/1/2016 9/1/2016 NY BB 0.0 5.0 1.0 6 8/1/2016 9/1/2016 NY CC 0.0 5.0 1.0 7 8/1/2016 9/1/2016 NY DD NaN 0.0 0.0 8 9/1/2016 10/1/2016 CA AA 0.0 1.0 -2.0 9 9/1/2016 10/1/2016 CA BB 0.0 1.0 -2.0 10 9/1/2016 10/1/2016 CA CC 0.0 1.0 -2.0 11 9/1/2016 10/1/2016 CA DD NaN 0.0 0.0 12 9/1/2016 10/1/2016 NY AA 0.0 6.0 4.0 13 9/1/2016 10/1/2016 NY BB 0.0 6.0 4.0 14 9/1/2016 10/1/2016 NY CC 0.0 6.0 4.0 15 9/1/2016 10/1/2016 NY DD NaN 0.0 0.0
6
4
75,936,937
2023-4-5
https://stackoverflow.com/questions/75936937/python3-update-date-format
I have a tricky with date format with time series data. In my dataframe of over one hundred thousand rows I have a column datetime with date value but the format is %M:%S.%f. Example: datetime 0 59:57.7 1 00:09.7 2 00:21.8 What I want in output is to convert this format to %m/%d/%Y %H:%M:%S.%f with 01/01/2023 00:59:57.7 as first date and then increment hours and day. It's a time series data on few days. Result: datetime ProcessTime 59:57.7 01/01/2023 00:59:57.7 00:09.7 01/01/2023 01:00:09.7 00:21.8 01/01/2023 01:00:21.8 I did this code to change the first date to try to have a referential and change the others. import pandas as pd from datetime import datetime # Example dataframe df = pd.DataFrame({'datetime': ['59:57.7', '00:09.7', '00:21.8']}) first_time_str = df['datetime'][0] first_time_obj = datetime.strptime(first_time_str, '%M:%S.%f') formatted_first_time = first_time_obj.replace(year=2023, month=1, day=1).strftime('%m/%d/%Y %H:%M:%S.%f') df['datetime'][0] = formatted_first_time Thanks for your help. Regards
The exact logic is unclear assuming a cumulated time You can convert to_timedelta (after adding the missing hours '00:'), then get the cumsum and add the reference date: df['ProcessTime'] = (pd.to_timedelta('00:'+df['datetime']).cumsum() .add(pd.Timestamp('2023-01-01 00:59:57.7')) .dt.strftime('%m/%d/%Y %H:%M:%S.%f') ) Output: datetime ProcessTime 0 59:57.7 01/01/2023 01:59:55.400000 1 00:09.7 01/01/2023 02:00:05.100000 2 00:21.8 01/01/2023 02:00:26.900000 assuming simple timedelta df['ProcessTime'] = (pd.to_timedelta('00:'+df['datetime']) .add(pd.Timestamp('2023-01-01')) .dt.strftime('%m/%d/%Y %H:%M:%S.%f') ) Output: datetime ProcessTime 0 59:57.7 01/01/2023 00:59:57.700000 1 00:09.7 01/01/2023 01:00:07.400000 2 00:21.8 01/01/2023 01:00:29.200000 assuming you want to add 1h when the timedelta gets smaller than the previous one t = pd.to_timedelta('00:'+df['datetime']) df['ProcessTime'] = (pd.to_timedelta(t.diff().lt('0').cumsum(), unit='h') .add(t+pd.Timestamp('2023-01-01')) .dt.strftime('%m/%d/%Y %H:%M:%S.%f') ) Output: datetime ProcessTime 0 59:57.7 01/01/2023 00:59:57.700000 1 00:09.7 01/01/2023 01:00:09.700000 2 00:21.8 01/01/2023 01:00:21.800000
3
2
75,935,256
2023-4-5
https://stackoverflow.com/questions/75935256/how-to-efficiently-apply-a-function-to-every-row-in-a-dataframe
Given the following table: df = pd.DataFrame({'code':['100M','60M10N40M','5S99M','1S25I100M','1D1S1I200M']}) that looks like this: code 0 100M 1 60M10N40M 2 5S99M 3 1S25I100M 4 1D1S1I200M I'd like to convert the code column strings to numbers where M, N, D are each equivalent to (times 1), I is equivalent to (times -1) and S is equivalent to (times 0). The result should look like this: code Val 0 100M 100 This is (100*1) 1 60M10N40M 110 This is (60*1)+(10*1)+(40*1) 2 5S99M 99 This is (5*0)+(99*1) 3 1S25I100M 75 This is (1*0)+(25*-1)+(100*1) 4 1D1S1I200M 200 This is (1*1)+(1*0)+(1*-1)+(200*1) I wrote the following function to this: def String2Val(String): # Generate substrings sstrings = re.findall('.[^A-Z]*.', String) KeyDict = {'M':'*1','N':'*1','I':'*-1','S':'*0','D':'*1'} newlist = [] for key, value in KeyDict.items(): for i in sstrings: if key in i: p = i.replace(key, value) lp = eval(p) newlist.append(lp) OutputVal = sum(newlist) return OutputVal df['Val'] = df.apply(lambda row: String2Val(row['code']), axis = 1) After applying this function to the table, I realized it's not efficient and takes forever when applied to large datasets. How can I optimize this process?
Since pandas string methods are not optimized (although that seems to no longer be true for pandas 2.0), if you're after performance, it's better to use Python string methods in a loop (which are compiled in C). It seems a straightforward loop over each string might give the best performance. def evaluater(s): total, curr = 0, '' for e in s: # if a number concatenate to the previous number if e.isdigit(): curr += e # if a string, look up its value in KeyDict # and multiply the currently collected number by it # and add to the total else: total += int(curr) * KeyDict[e] curr = '' return total KeyDict = {'M': 1, 'N': 1, 'I': -1, 'S': 0, 'D': 1} df['val'] = df['code'].map(evaluater) Performance: KeyDict1 = {'M':'*1+','N':'*1+','I':'*-1+','S':'*0+','D':'*1+'} df = pd.DataFrame({'code':['100M','60M10N40M','5S99M','1S25I100M','1D1S1I200M']*1000}) %timeit df.assign(val=df['code'].map(evaluater)) # 12.2 ms ± 579 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %timeit df.assign(val=df['code'].apply(String2Val)) # @Marcelo Paco # 61.8 ms ± 2.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit df.assign(val=df['code'].replace(KeyDict1, regex=True).str.rstrip('+').apply(pd.eval)) # @Ynjxsjmh # 4.86 s ± 155 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) N.B. You already implement something similar but the outer loop (for key, value in KeyDict.items()) is unnecessary; since KeyDict is a dictionary, use it as a lookup table; don't loop. Also, .apply(axis=1) is a really bad way to loop when only a single column is relevant. Select that column and call apply().
3
2
75,935,363
2023-4-5
https://stackoverflow.com/questions/75935363/adding-a-column-based-on-condition-in-polars
Let's say I have a Polars dataframe like so: df = pl.DataFrame({ 'a': [0.3, 0.7, 0.5, 0.1, 0.9] }) And now I need to add a new column where 1 or 0 is assigned depending on whether a value in column 'a' is greater or less than some threshold. In Pandas I can do this: import numpy as np THRESHOLD = 0.5 df['new'] = np.where(df.a > THRESHOLD, 0, 1) I can also do something very similar in Polars: df = df.with_columns( pl.lit(np.where(df.select('a').to_numpy() > THRESHOLD, 0, 1).ravel()) .alias('new') ) This works fine but I'm sure that using NumPy here is not the best practice. I've also tried something more like: df = df.with_columns( pl.lit(df.filter(pl.col('a') > THRESHOLD).select([0, 1])) .alias('new') ) But with this syntax I keep running into the following error: DuplicateError Traceback (most recent call last) Cell In[47], line 5 1 THRESHOLD = 0.5 2 DELAY_TOLERANCE = 10 4 df = df.with_columns( ----> 5 pl.lit(df.filter(pl.col('a') > THRESHOLD).select([0, 1])) 6 .alias('new') 7 ) 8 df.head() DuplicateError: column with name 'literal' has more than one occurrences So my question is two-fold: what am I doing wrong here and what is the best practice in Polars for such conditional assignments? I did looks through docs and previous questions but couldn't find anything resembling my use-case.
The select([0, 1]) doesn't really make a lot of sense Polars-wise; you're just selecting two literals, which both get the default column name "literal" (hence the DuplicateError). Conditionals in Polars are best done with when/then/otherwise: df.with_columns(pl.when(pl.col("a") > 0.5).then(0).otherwise(1).alias("b"))
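Put together with the data and threshold from the question, a minimal sketch looks like this (the column name 'new' matches the question; the resulting values are shown as a comment):
import polars as pl

THRESHOLD = 0.5
df = pl.DataFrame({'a': [0.3, 0.7, 0.5, 0.1, 0.9]})

df = df.with_columns(
    pl.when(pl.col('a') > THRESHOLD).then(0).otherwise(1).alias('new')
)
# 'new' comes out as [1, 0, 1, 1, 0], matching np.where(df.a > THRESHOLD, 0, 1)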
4
7
75,935,293
2023-4-5
https://stackoverflow.com/questions/75935293/pynecone-cannot-get-detail-information-from-item-in-the-cards-example-grid-fo
This is our expected output. And this is the current output. And this is the source code for the current output. import pynecone as pc def show_items(item): return pc.box( pc.text(item), bg="lightgreen" ) class ExampleState(pc.State): my_objects = [["item1", "desc1"], ["item2", "desc2"]] print(my_objects) def home(): homeContainer = pc.container( pc.hstack( pc.container( # watch list pc.vstack( pc.container(h="20px"), pc.hstack( pc.heading("Example List", size="md"), ), pc.grid( pc.foreach(ExampleState.my_objects, show_items), template_columns="repeat(5, 1fr)", h="20vh", width="100%", gap=4, ), justifyContent="start", align_items="start", ), height="100vh", maxWidth="auto", ), bg="#e8e5dc", ), ) return homeContainer app = pc.App(state=ExampleState) app.add_page(home, route="/") app.compile() In this example, we cannot make our expected output. In the above example, if we change the code from pc.text(item) to pc.text(item[0]) , then we will get the following error message File "xxxx_some_code.py", line 47, in <module> app.add_page(home, route="/") File "/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/app.py", line 261, in add_page raise e File "/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/app.py", line 252, in add_page component = component if isinstance(component, Component) else component() ^^^^^^^^^^^ File "xxxx_some_code.py", line 26, in home pc.foreach(ExampleState.my_objects, show_items), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/components/layout/foreach.py", line 48, in create children=[IterTag.render_component(render_fn, arg=arg)], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/components/tags/iter_tag.py", line 71, in render_component component = render_fn(arg) ^^^^^^^^^^^^^^ File "xxxx_some_code.py", line 5, in show_items pc.text(item[0]), ~~~~^^^ File "/xxxx/opt/anaconda3/envs/pynecone-121-py311/lib/python3.11/site-packages/pynecone/var.py", line 181, in __getitem__ raise TypeError( TypeError: Could not index into var of type Any. (If you are trying to index into a state var, add a type annotation to the var.) We also read the document related to pc.grid and pc.foreach. And still have no idea how to fix this issue from the two documents. So, what can we do if we want to get detailed information from the item and show it on the layout?
The key point is that the pc.foreach cannot use something like list[list] or list[dict]. The following code can answer our question. It run well and fit the expected output. After testing, It runs well on pynecone==0.1.20 and pynecone==0.1.21 import pynecone as pc class MyObject(pc.Model, table=True): title:str desc:str def __init__(self, title, desc): self.title = title self.desc = desc def __repr__(self) -> str: return "("+self.title+","+self.desc+")" def show_object(object:MyObject): return pc.box( pc.vstack( pc.hstack( pc.text( "title:", font_size="1em", ), pc.text( object.title, font_size="1em", ), ), pc.hstack( pc.text( "desc:", font_size="1em", ), pc.text( object.desc, font_size="1em", ), ), ), bg="lightgreen" ) class ExampleState(pc.State): # my_objects = [{"item":"item1", "desc":"desc1"}, {"item":"item2", "desc":"desc2"}] """ my_objects:list[MyObject] = [ MyObject("title1", "desc1"), MyObject("title2", "desc2"), ] """ # generate objects by loop my_objects:list[MyObject] = [ MyObject("title"+str(i), "desc"+str(i)) for i in range(37)] def home(): homeContainer = pc.container( pc.hstack( pc.container( # watch list pc.vstack( pc.container(h="20px"), pc.hstack( pc.heading("Example List", size="md"), ), pc.grid( #pc.foreach(ExampleState.my_objects, show_items), pc.foreach(ExampleState.my_objects, show_object), template_columns="repeat(5, 1fr)", h="20vh", width="100%", gap=4, ), justifyContent="start", align_items="start", ), height="100vh", maxWidth="auto", ), bg="#e8e5dc", ), ) return homeContainer app = pc.App(state=ExampleState) app.add_page(home, route="/") app.compile() The following is our expected output. The solution is we need to create a class. class MyObject(pc.Model, table=True): title:str desc:str def __init__(self, title, desc): self.title = title self.desc = desc def __repr__(self) -> str: return "("+self.title+","+self.desc+")" And we show the layout by show_object def show_object(object:MyObject): In the show_object function, we get detailed information about the item by getting the member from the object.
3
2
75,898,276
2023-3-31
https://stackoverflow.com/questions/75898276/openai-api-error-429-you-exceeded-your-current-quota-please-check-your-plan-a
I'm making a Python script to use OpenAI via its API. However, I'm getting this error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details My script is the following: #!/usr/bin/env python3.8 # -*- coding: utf-8 -*- import openai openai.api_key = "<My PAI Key>" completion = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."} ] ) print(completion.choices[0].message.content) I'm declaring the shebang python3.8, because I'm using pyenv. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.
TL;DR: You need to upgrade to a paid plan. Set up a paid account, add a credit or debit card, and generate a new API key if your old one was generated before the upgrade. It might take 10 minutes or so after you upgrade to a paid plan before the paid account becomes active and the error disappears. Problem As stated in the official OpenAI documentation: TYPE OVERVIEW RateLimitError Cause: You have hit your assigned rate limit. Solution: Pace your requests. Read more in our rate limit guide. Also, read more about Error Code 429 - You exceeded your current quota, please check your plan and billing details: This (i.e., 429) error message indicates that you have hit your maximum monthly spend (hard limit) for the API. This means that you have consumed all the credits or units allocated to your plan and have reached the limit of your billing cycle. This could happen for several reasons, such as: You are using a high-volume or complex service that consumes a lot of credits or units per request. You are using a large or diverse data set that requires a lot of requests to process. Your limit is set too low for your organization’s usage. Did you sign up some time ago? You're getting error 429 because either you used all your free tokens or 3 months have passed since you signed up. As stated in the official OpenAI article: To explore and experiment with the API, all new users get $5 worth of free tokens. These tokens expire after 3 months. After the quota has passed you can choose to enter billing information to upgrade to a paid plan and continue your use of the API on pay-as-you-go basis. If no billing information is entered you will still have login access, but will be unable to make any further API requests. Please see the pricing page for the latest information on pay-as-you-go pricing. Note: If you signed up earlier (e.g., in December 2022), you got $18 worth of free tokens. Check your API usage in the usage dashboard. For example, my free trial expires tomorrow and this is what I see right now in the usage dashboard: This is how my dashboard looks after expiration: If I run a simple script after my free trial has expired, I get the following error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details. Did you create your second OpenAI account? You're getting error 429 because you created a second OpenAI account with the same phone number. It seems like free credit is given based on phone numbers. As explained on the official OpenAI forum by @SapphireFelineBytes: I created an Open AI account in November and my $18 credits expired on March 1st. So, like many of you here, I tried creating a new account with a different email address, but same number. They gave me $0 credits. I tried now with a different phone number and email. This time I got $5 credits. It's confirmed that free credit is given based on phone numbers, as explained on the official OpenAI forum by @logankilpatrick: Also note, you only get free credits for the first account associated with your phone number. Subsequent accounts are not granted free credits. Solution Try to do the following: Set up paid account. Add a credit or debit card. Generate a new API key if your old API key was generated before you upgraded to the paid plan. When you upgrade to a paid plan, don't expect the error to disappear immediately, as @dcferreira mentioned in the comment above. It might take a few minutes to more than an hour after the upgrade before the error disappears. 
In the comment below, @JoeMornin confirmed that it took 10 minutes for his paid account to become active. In the meantime, he was getting the following error: You've reached your usage limit. See your usage dashboard and billing settings for more details. If you have further questions, please contact us through our help center at help.openai.com.
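While waiting for the paid account to become active, you can catch the error instead of letting the script crash (a hedged sketch for the pre-1.0 openai package used in the question, where the exception lives in openai.error; retrying only helps once the plan is actually active):
import time
import openai

openai.api_key = "<My API Key>"

for attempt in range(3):
    try:
        completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(completion.choices[0].message.content)
        break
    except openai.error.RateLimitError as err:
        # quota/billing problem: back off and retry, or stop and fix billing
        print(f"Rate limit / quota error: {err}")
        time.sleep(10)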
169
209
75,918,895
2023-4-3
https://stackoverflow.com/questions/75918895/is-there-a-way-to-implement-pandas-wide-to-long-in-polars
I use Pandas wide to long to stack survey data and it works beautifully with regex and stub names, is this possible to do in Polars ? e.g. in Pandas - import pandas as pd df = pd.DataFrame({ 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] }) changed_df = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age', sep='_', suffix=r'\w+') stubnames can take a list as well. Edit- Added code after taking inspiration from Jqurious - import pandas as pd import numpy as np import polars as pl import re # Create age group data age_groups = np.random.choice(['0-18', '19-35', '36-50', '51-65', '65+'], size=10) # Create gender data genders = np.random.choice(['Male', 'Female', 'Other'], size=10) # Create familiarity and affinity data fam_aff = np.random.rand(10, 4) # Create column names cols = ['Age_group', 'Gender', 'Familiarity_loop1', 'Familiarity_loop2', 'Affinity_loop1', 'Affinity_loop2'] # Combine data into dataframe data = np.column_stack([age_groups, genders, fam_aff]) df = pd.DataFrame(data=data, columns=cols) df["unique_records"] = np.arange(len(df)) regex_pattern = '^.*_loop\d' # get polars DF pl_df = pl.from_pandas(df) # get all columns list col_list = pl_df.columns loop_list = [] # list of columns which contains _loop sans_loop_list = [] # list of columns which do not contain _loop for col in col_list: if re.search(regex_pattern, col): loop_list.append(col) else: sans_loop_list.append(col) pl_long_df = (pl_df .unpivot( index = pl_df.select(sans_loop_list).columns, variable_name = "master_stack") .with_columns(pl.col("master_stack").str.replace(r"_loop\d","")) ) pl_long_df.pivot(on="master_stack", index=sans_loop_list, values="value", aggregate_function=pl.element()) I want to see Affinity and Familiarity as their own columns, but I am not able to achieve it. Edit 2 - Added Polars output and Pandas output Polars - Pandas output -
If we start with .unpivot() df.unpivot(index = ["famid", "birth"], variable_name = "age").head(1) shape: (1, 4) ┌───────┬───────┬────────┬───────┐ │ famid ┆ birth ┆ age ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str ┆ f64 │ ╞═══════╪═══════╪════════╪═══════╡ │ 1 ┆ 1 ┆ ht_one ┆ 2.8 │ └───────┴───────┴────────┴───────┘ The sep="_" and suffix=r"\w+" params used in wide_to_long are just extracting one from ht_one. One way to do this in Polars could be .str.extract() df.unpivot( index = ["famid", "birth"], variable_name = "age" ).with_columns( pl.col("age").str.extract(r"_(\w+)$") ) shape: (18, 4) ┌───────┬───────┬─────┬───────┐ │ famid ┆ birth ┆ age ┆ value │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str ┆ f64 │ ╞═══════╪═══════╪═════╪═══════╡ │ 1 ┆ 1 ┆ one ┆ 2.8 │ │ 1 ┆ 2 ┆ one ┆ 2.9 │ │ 1 ┆ 3 ┆ one ┆ 2.2 │ │ 2 ┆ 1 ┆ one ┆ 2.0 │ │ 2 ┆ 2 ┆ one ┆ 1.8 │ │ … ┆ … ┆ … ┆ … │ │ 2 ┆ 2 ┆ two ┆ 2.8 │ │ 2 ┆ 3 ┆ two ┆ 2.4 │ │ 3 ┆ 1 ┆ two ┆ 3.3 │ │ 3 ┆ 2 ┆ two ┆ 3.4 │ │ 3 ┆ 3 ┆ two ┆ 2.9 │ └───────┴───────┴─────┴───────┘ EDIT: As per the updated example: The pattern I have been using for this is to .unpivot() and then .pivot() back. Find the columns names not ending in the suffix to use as id_vars / index: suffix = r"_loop\d+$" id_vars = df.select(pl.exclude("^.+" + suffix)).columns ['Age_group', 'Gender', 'unique_records'] (df.unpivot(index=id_vars) .with_columns(pl.col("variable").str.replace(suffix, "")) ) shape: (40, 5) ┌───────────┬────────┬────────────────┬─────────────┬─────────────────────┐ │ Age_group ┆ Gender ┆ unique_records ┆ variable ┆ value │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ str ┆ str │ ╞═══════════╪════════╪════════════════╪═════════════╪═════════════════════╡ │ 19-35 ┆ Female ┆ 0 ┆ Familiarity ┆ 0.9458448571805742 │ │ 65+ ┆ Other ┆ 1 ┆ Familiarity ┆ 0.29898349718902584 │ │ 36-50 ┆ Other ┆ 2 ┆ Familiarity ┆ 0.6698438749905085 │ │ 0-18 ┆ Female ┆ 3 ┆ Familiarity ┆ 0.9589949988835984 │ │ 36-50 ┆ Female ┆ 4 ┆ Familiarity ┆ 0.8738576462244922 │ │ … ┆ … ┆ … ┆ … ┆ … │ │ 0-18 ┆ Female ┆ 5 ┆ Affinity ┆ 0.13593940132707893 │ │ 36-50 ┆ Female ┆ 6 ┆ Affinity ┆ 0.37172205023277705 │ │ 19-35 ┆ Other ┆ 7 ┆ Affinity ┆ 0.5024658713377818 │ │ 51-65 ┆ Other ┆ 8 ┆ Affinity ┆ 0.00582736048275978 │ │ 36-50 ┆ Female ┆ 9 ┆ Affinity ┆ 0.34380158652767634 │ └───────────┴────────┴────────────────┴─────────────┴─────────────────────┘ We end up with 40 rows and 2 variables (Familiarity, Affinity). In order to pivot into 20 rows, you can add a "row number" per variable and use it as part of the index. 
(df.unpivot(index=id_vars) .with_columns(pl.col("variable").str.replace(suffix, "")) .with_columns(index = pl.int_range(pl.len()).over("variable")) .pivot(on="variable", index=id_vars + ["index"]) ) shape: (20, 6) ┌───────────┬────────┬────────────────┬───────┬─────────────────────┬─────────────────────┐ │ Age_group ┆ Gender ┆ unique_records ┆ index ┆ Familiarity ┆ Affinity │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ i64 ┆ str ┆ str │ ╞═══════════╪════════╪════════════════╪═══════╪═════════════════════╪═════════════════════╡ │ 19-35 ┆ Female ┆ 0 ┆ 0 ┆ 0.9458448571805742 ┆ 0.8318885018762573 │ │ 65+ ┆ Other ┆ 1 ┆ 1 ┆ 0.29898349718902584 ┆ 0.5932787653850062 │ │ 36-50 ┆ Other ┆ 2 ┆ 2 ┆ 0.6698438749905085 ┆ 0.3322678195709319 │ │ 0-18 ┆ Female ┆ 3 ┆ 3 ┆ 0.9589949988835984 ┆ 0.2252757821730993 │ │ 36-50 ┆ Female ┆ 4 ┆ 4 ┆ 0.8738576462244922 ┆ 0.42281089740408706 │ │ … ┆ … ┆ … ┆ … ┆ … ┆ … │ │ 0-18 ┆ Female ┆ 5 ┆ 15 ┆ 0.17803848283413837 ┆ 0.13593940132707893 │ │ 36-50 ┆ Female ┆ 6 ┆ 16 ┆ 0.5390844456218246 ┆ 0.37172205023277705 │ │ 19-35 ┆ Other ┆ 7 ┆ 17 ┆ 0.7692067698388259 ┆ 0.5024658713377818 │ │ 51-65 ┆ Other ┆ 8 ┆ 18 ┆ 0.6569518159892904 ┆ 0.00582736048275978 │ │ 36-50 ┆ Female ┆ 9 ┆ 19 ┆ 0.6946040879238368 ┆ 0.34380158652767634 │ └───────────┴────────┴────────────────┴───────┴─────────────────────┴─────────────────────┘
4
4
75,895,460
2023-3-31
https://stackoverflow.com/questions/75895460/the-error-was-re-error-global-flags-not-at-the-start-of-the-expression-at-posi
Can someone please assist how to fix this issue, why i am getting this error while running the Ansible playbook 2023-03-31 11:47:39,902 p=57332 u=NI40153964 n=ansible | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: re.error: global flags not at the start of the expression at position 1 2023-03-31 11:47:39,903 p=57332 Python & Ansible versions: Python 3.11.2 Ansible [core 2.14.3] The YML file which I am using as - name: rename log file to user specific delegate_to: localhost command: mv ~/.ansible/wmDeployment.log ~/.ansible/wmDeployment_{{_user}}.log register: logfile_rename_output run_once: true ignore_errors: yes - name: set the fact for email set_fact: package_list: "{{hostvars['localhost']['package_list']}}" log_output: "{{log_data| regex_search('((?s)(?<=This task is to install package in target servers).*?(?=This task is to enable kafka consumers and send mail of playbook log))')}}" emailkey: code cacheable: true
The error message is saying you need to put the (?s) at the very beginning of the regex. Alternatively, try something like log_output: "{{log_data| regex_search('(?s:(?<=This task is to install package in target servers)(.*?)(?=This task is to enable kafka consumers and send mail of playbook log))')}}" In some more detail, (?s) says "I want the s flag for everything here" or, in other words, a global flag. Python requires this to go at the very beginning of the regex, presumably to avoid weird and hard-to-debug problems which could happen if you had (?s) embedded somewhere in the middle of a long and complex regex. In fact, you could even argue that it's more likely a programming mistake than a considered, conscious decision. By contrast, (?s:...) says "I want to apply the s flag to the following subexpression," up to just before the closing parenthesis; this is obviously local to just the parenthesized subexpression, and could not be expressed by any other means. In this concrete example, ((?s)foo) is invalid, but would be equivalent to either (?s)(foo) or (?s:(foo)) (or, in practice, ((?s:foo)); or, if you don't actually care about the capturing group, simply (?s:foo)).
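The underlying Python behaviour can be reproduced outside Ansible with a small sketch (the last pattern raises the exact error from the playbook log on Python 3.11, where the old deprecation warning became a hard error):
import re

text = "one\ntwo"

print(re.search(r"(?s)one(.*)two", text))   # OK: global flag at the very start
print(re.search(r"one(?s:(.*))two", text))  # OK: flag scoped to a single group

try:
    re.search(r"((?s).*)", text)            # global flag embedded inside a group
except re.error as err:
    print(err)  # global flags not at the start of the expression at position 1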
6
8
75,900,239
2023-3-31
https://stackoverflow.com/questions/75900239/attributeerror-occurs-with-tikzplotlib-when-legend-is-plotted
I am trying to save a figure using tikzplotlib. However, I am encountering an AttributeError: 'Legend' object has no attribute '_ncol'. I am currently using tikzplotlib version 0.10.1 and matplotlib version 3.7.0. Without using "plt.legend()" everything works. Below is an example that is not working: import numpy as np import matplotlib.pyplot as plt import tikzplotlib # Data x = np.linspace(0, 10, 100) y1 = np.sin(x) y2 = np.cos(x) y3 = np.tan(x) # Plotting plt.figure() plt.plot(x, y1, label='sin(x)') plt.plot(x, y2, label='cos(x)') plt.plot(x, y3, label='tan(x)') plt.legend() # Save as TikZ file tikzplotlib.save("plot.tikz")
Hey, I have/had the same problem: matplotlib 3.6 renamed the Legend attribute _ncol to _ncols, which breaks tikzplotlib. There is already a fix (#558) for tikzplotlib on GitHub, but it looks like nothing will happen for now. However, there is a workaround for the issue on GitHub (Issue), and it works quite well. I hope that this answer will soon become obsolete. For the sake of completeness, I'll add the code here again.
def tikzplotlib_fix_ncols(obj):
    """
    workaround for matplotlib 3.6 renamed legend's _ncol to _ncols, which breaks tikzplotlib
    """
    if hasattr(obj, "_ncols"):
        obj._ncol = obj._ncols
    for child in obj.get_children():
        tikzplotlib_fix_ncols(child)
Disclaimer: This is not my code, but this problem can be very annoying and that's why I'm sharing the code here. The author is st--
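A minimal usage sketch (applied to the example in the question; grabbing the current figure via plt.gcf() is an assumption about how the figure is held): call the helper on the figure before saving:
fig = plt.gcf()              # grab the current figure
tikzplotlib_fix_ncols(fig)   # patch _ncols -> _ncol recursively on the figure and its children
tikzplotlib.save("plot.tikz")  # now saves without the AttributeError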
8
15
75,925,323
2023-4-4
https://stackoverflow.com/questions/75925323/how-can-i-fix-importlib-on-python3-10-so-that-it-can-call-entry-points-properl
I am using Python 3.10 on Ubuntu 22.04 working on a project that uses Farama Foundation's gymnasium library. When gymnasium is imported, it uses importlib to get entry points, but when I ran import gymnasium into IDLE I got the following error: Traceback (most recent call last): File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode exec(code, self.locals) File "<pyshell#2>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/gymnasium/__init__.py", line 12, in <module> from gymnasium.envs.registration import ( File "/usr/local/lib/python3.10/dist-packages/gymnasium/envs/__init__.py", line 382, in <module> load_plugin_envs() File "/usr/local/lib/python3.10/dist-packages/gymnasium/envs/registration.py", line 565, in load_plugin_envs for plugin in metadata.entry_points(group=entry_point): File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 1009, in entry_points return SelectableGroups.load(eps).select(**params) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 459, in load ordered = sorted(eps, key=by_group) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 1006, in <genexpr> eps = itertools.chain.from_iterable( File "/usr/lib/python3.10/importlib/metadata/_itertools.py", line 16, in unique_everseen k = key(element) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 941, in _normalized_name return self._name_from_stem(stem) or super()._normalized_name File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 622, in _normalized_name return Prepared.normalize(self.name) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 871, in normalize return re.sub(r"[-_.]+", "-", name).lower().replace('-', '_') File "/usr/lib/python3.10/re.py", line 209, in sub return _compile(pattern, flags).sub(repl, string, count) TypeError: expected string or bytes-like object I asked one of the creators of gymnasium about this issue on GitHub, and they asked me to print out a list of entry points, so I ran the following: from importlib.metadata import * eps = entry_points() and got the following error: Traceback (most recent call last): File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode exec(code, self.locals) File "<pyshell#1>", line 1, in <module> File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 1009, in entry_points return SelectableGroups.load(eps).select(**params) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 459, in load ordered = sorted(eps, key=by_group) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 1006, in <genexpr> eps = itertools.chain.from_iterable( File "/usr/lib/python3.10/importlib/metadata/_itertools.py", line 16, in unique_everseen k = key(element) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 941, in _normalized_name return self._name_from_stem(stem) or super()._normalized_name File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 622, in _normalized_name return Prepared.normalize(self.name) File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 871, in normalize return re.sub(r"[-_.]+", "-", name).lower().replace('-', '_') File "/usr/lib/python3.10/re.py", line 209, in sub return _compile(pattern, flags).sub(repl, string, count) TypeError: expected string or bytes-like object What this seems to suggest is that one of the names that entry_points() is searching for is not in the correct format. I am unsure of how to fix this. 
I am willing to uninstall Python 3.10 completely and reinstall it and all additional packages from scratch, but I would prefer an approach that is less aggressive.
Try this: >>> import importlib_metadata as md >>> dists = md.distributions() >>> broken = [dist for dist in dists if dist.name is None] >>> for dist in broken: ... print(dist._path) It will list the paths of distributions that are the problem. Reinstalling or deleting them will stop the error. This guy had the same problem
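A hedged sketch of the deletion step the answer mentions (the paths come from the loop above; inspect them before actually removing anything):
import shutil

for dist in broken:
    path = dist._path             # typically a leftover *.dist-info / *.egg-info directory
    print("would remove:", path)  # review the list first
    # shutil.rmtree(path)         # uncomment once you are sure it is safe to delete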
3
2
75,899,158
2023-3-31
https://stackoverflow.com/questions/75899158/shap-summary-plots-for-xgboost-with-categorical-data-inputs
XGBoost supports inputting features as categories directly, which is very useful when there are a lot of categorical variables. This doesn't seem to be compatible with Shap: import pandas as pd import xgboost import shap # Test data test_data = pd.DataFrame({'target':[23,42,58,29,28], 'feature_1' : [38, 83, 38, 28, 57], 'feature_2' : ['A', 'B', 'A', 'C','A']}) test_data['feature_2'] = test_data['feature_2'].astype('category') # Fit xgboost model = xgboost.XGBRegressor(enable_categorical=True, tree_method='hist') model.fit(test_data.drop('target', axis=1), test_data['target'] ) # Explain with Shap explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(test_data) Throws an error: ValueError: DataFrame.dtypes for data must be int, float, bool or category. Is it possible to use Shap in this situation?
Unfortunately, generating shap values with xgboost using categorical variables is an open issue. See, f.e., https://github.com/slundberg/shap/issues/2662 Given your specific example, I made it run using Dmatrix as input of shap (Dmatrix is the basic data type input of xgboost models, see the Learning API. The sklearn api, that you are using, doesn't need the Dmatrix, at least for training): import pandas as pd import xgboost as xgb import shap # Test data test_data = pd.DataFrame({'target':[23,42,58,29,28], 'feature_1' : [38, 83, 38, 28, 57], 'feature_2' : ['A', 'B', 'A', 'C','A']}) test_data['feature_2'] = test_data['feature_2'].astype('category') print(test_data.info()) # Fit xgboost model = xgb.XGBRegressor(enable_categorical=True, tree_method='hist') model.fit(test_data.drop('target', axis=1), test_data['target'] ) # Explain with Shap test_data_dm = xgb.DMatrix(data=test_data.drop('target', axis=1), label=test_data['target'], enable_categorical=True) explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(test_data_dm) print(shap_values) But the ability to generate shap values when there are categorical variables is very unstable: f.e., if you add other parameters in the xgboost you get the error "Check failed: !HasCategoricalSplit()", which is the error referenced in my first link import pandas as pd import xgboost as xgb import shap # Test data test_data = pd.DataFrame({'target':[23,42,58,29,28], 'feature_1' : [38, 83, 38, 28, 57], 'feature_2' : ['A', 'B', 'A', 'C','A']}) test_data['feature_2'] = test_data['feature_2'].astype('category') print(test_data.info()) # Fit xgboost model = xgb.XGBRegressor(colsample_bylevel= 0.7, enable_categorical=True, tree_method='hist') model.fit(test_data.drop('target', axis=1), test_data['target'] ) # Explain with Shap test_data_dm = xgb.DMatrix(data=test_data.drop('target', axis=1), label=test_data['target'], enable_categorical=True) explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(test_data_dm) shap_values I've searched for a solution for months but, to conclude, as for my understanding, it is not really possible yet to generate shap values with xgboost and categorical variables (I hope someone can contradict me, with a reproducible example). I suggest you try with the Catboost ########################## EDIT ############################ An example with Catboost import pandas as pd import catboost as cb import shap # Test data test_data = pd.DataFrame({'target':[23,42,58,29,28], 'feature_1' : [38, 83, 38, 28, 57], 'feature_2' : ['A', 'B', 'A', 'C','A']}) test_data['feature_2'] = test_data['feature_2'].astype('category') print(test_data.info()) model = cb.CatBoostRegressor(iterations=100) model.fit(test_data.drop('target', axis=1), test_data['target'], cat_features=['feature_2'], verbose=False) # Explain with Shap explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(test_data.drop('target', axis=1)) shap_values print('shap values: \n',shap_values)
5
2
75,929,721
2023-4-4
https://stackoverflow.com/questions/75929721/how-to-show-full-column-width-of-polars-dataframe-in-python
I'm trying to display the full width of column in polars dataframe. Given the following polars dataframe: import polars as pl df = pl.DataFrame({ 'column_1': ['TF-IDF embeddings are done on the initial corpus, with no additional N-Gram representations or further preprocessing', 'In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated.'], 'column_2': ['Document clusterings may misrepresent the visualization of document clusterings due to dimensionality reduction (visualization is pleasing for its own sake - rather than for prediction/inference)', 'Polars has two APIs, eager and lazy. In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated.'] }) I tried the following: pl.Config.set_fmt_str_lengths = 200 pl.Config.set_tbl_width_chars = 200 The result: shape: (2, 2) ┌───────────────────────────────────┬───────────────────────────────────┐ │ column_1 ┆ column_2 │ │ --- ┆ --- │ │ str ┆ str │ ╞═══════════════════════════════════╪═══════════════════════════════════╡ │ TF-IDF embeddings are done on th… ┆ Document clusterings may misrepr… │ │ In the eager API, the expression… ┆ Polars has two APIs, eager and l… │ └───────────────────────────────────┴───────────────────────────────────┘ How can I display the full width of columns in a polars DataFrame in Python? Thanks in advance!
I think you can use glimpse: > df.glimpse() Rows: 2 Columns: 2 $ column_1 <str> TF-IDF embeddings are done on the initial corpus, with no additional N-Gram representations or further preprocessing, In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated. $ column_2 <str> Document clusterings may misrepresent the visualization of document clusterings due to dimensionality reduction (visualization is pleasing for its own sake - rather than for prediction/inference), Polars has two APIs, eager and lazy. In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated.
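If the regular table view is preferred over glimpse, note that the Config setters used in the question are methods and have to be called rather than assigned to - a small sketch, assuming a reasonably recent polars version:
import polars as pl

pl.Config.set_fmt_str_lengths(200)   # allow up to 200 characters per string cell
pl.Config.set_tbl_width_chars(200)   # widen the rendered table
print(df)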
8
10
75,930,508
2023-4-4
https://stackoverflow.com/questions/75930508/most-efficent-way-to-bulk-update-documents-using-mongoengine
So, I have a Collection of documents (e.g. Person) structured in this way: class Person(Document): name = StringField(max_length=200, required=True) nationality = StringField(max_length=200, required=True) earning = ListField(IntField()) when I save the document I only input the name and nationality fields because this is the information. Then, every now and then, I want to update the the earning of each person of a particular nationality. Let's imagine that there is some formula that allows me to compute the earning field (e.g. I query some magical api called EarningAPI that returns the earning of a person given its name). To update them I would do something like: japanese_people = Person.objects(Q(nationality='Japanese'))).all() for japanese_person in japanese_people: japanese_person.earning.append(EarningAPI(japanese_person.name)) Person.objects.insert(japanese_people, load_bulk=False) The EarningAPI has also the possibility to work in batches, so that i can give a list of names and it returns a list of earning(s) (one for each name). This method is far faster and less expensive. Is the one by one way correct? What is the best way to take advantage of the batches? Thanks
Using method from Mongoengine bulk update without objects.update():
from pymongo import UpdateOne
from mongoengine import Document, StringField, IntField, ListField, Q

class Person(Document):
    name = StringField(max_length=200, required=True)
    nationality = StringField(max_length=200, required=True)
    earning = ListField(IntField())

japanese_people = Person.objects(Q(nationality='Japanese')).all()
japanese_ids = [person.id for person in japanese_people]
earnings = EarningAPI(japanese_ids)  # I'm assuming it takes a list of id's as input and returns a list of earnings.

bulk_operations = [
    UpdateOne(
        {'_id': j_id},
        {'$set': {'earning': earn}},
        upsert=True
    )
    for j_id, earn in zip(japanese_ids, earnings)
]

result = Person._get_collection().bulk_write(bulk_operations, ordered=False)
I can't be certain if this is faster than the one by one method because I don't have access to your magic API to benchmark, but this should be the way to do it by batch.
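As a small follow-up (plain pymongo, nothing mongoengine-specific): the BulkWriteResult returned by bulk_write lets you verify how many documents were actually touched:
print(result.matched_count, result.modified_count, result.upserted_count)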
3
3
75,933,975
2023-4-4
https://stackoverflow.com/questions/75933975/how-to-split-a-dataframe-column-into-more-columns-conditional-to-another-column
I am stuck because I can not split a dataframe column into more columns, conditional to another column value. I have a pandas dataframe which I generated straight from a '.csv' file with more than 100K rows. Excerpt1: I want to split column dca by ',' (comma) into more columns. The number of splits will be constrained by the values in n_mppts. Edited on 2023-04-12: I could succesfully perform the split column operation in the dataframe generated from this .csv file with the following code (thanks to @Abdulmajeed's solution): def split_dca(row): values = row['dca'].split(',') if row['dca'] else [] values += [float('NaN')] * (row['n_mppts'] - len(values)) values = values[:row['n_mppts']] return pd.Series(values) df_dca_dcv.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 418643 entries, 0 to 418642 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 pipe_id 418643 non-null int64 1 date 418643 non-null object 2 inverter_id 418643 non-null object 3 n_mppts 418643 non-null int64 4 dca 418538 non-null object 5 dcv 418538 non-null object dtypes: int64(2), object(4) memory usage: 19.2+ MB df_dca_dcv['dca'] = df_dca_dcv['dca'].str.replace('{', '') df_dca_dcv['dca'] = df_dca_dcv['dca'].str.replace('}', '') df_dca_dcv['dca'] = df_dca_dcv['dca'].astype(str) Excerpt2: mppts_dca = df_dca_dcv.apply(split_dca, axis=1) mppts_dca['dca_mppt_0'] = pd.to_numeric(mppts_dca[0], errors='coerce') mppts_dca['dca_mppt_1'] = pd.to_numeric(mppts_dca[1], errors='coerce') mppts_dca['dca_mppt_2'] = pd.to_numeric(mppts_dca[2], errors='coerce') mppts_dca['dca_mppt_3'] = pd.to_numeric(mppts_dca[3], errors='coerce') mppts_dca['dca_mppt_4'] = pd.to_numeric(mppts_dca[4], errors='coerce') mppts_dca['dca_mppt_5'] = pd.to_numeric(mppts_dca[5], errors='coerce') mppts_dca['dca_mppt_6'] = pd.to_numeric(mppts_dca[6], errors='coerce') mppts_dca['dca_mppt_7'] = pd.to_numeric(mppts_dca[7], errors='coerce') mppts_dca['dca_mppt_8'] = pd.to_numeric(mppts_dca[8], errors='coerce') df_dca_dcv = pd.concat([df_dca_dcv, mppts_dca], axis=1) Excerpt3: However, I am facing a problem when I generate the dataframe from a pandas sql query specifying inverter_id=a2 and therefore the current solution wont succeed (the issue also persists with other inverter_id values): df_dca_dcv = pd.read_sql_query("select pipe_id,created_at as date,inverter_id,n_mppts,dca,dcv from inverters where inverter_id = 'a2' order by pipe_id, inverter_id, date;", con=con) # connected to a postgreSQL db df_dca_dcv.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 16507 entries, 0 to 16506 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 pipe_id 16507 non-null object 1 date 16507 non-null datetime64[ns] 2 inverter_id 16507 non-null object 3 n_mppts 16507 non-null int64 4 dca 16428 non-null object 5 dcv 16428 non-null object dtypes: datetime64[ns](1), int64(1), object(4) memory usage: 773.9+ KB Column dca Dtype is still object, but now it has values between "[ ]" instead of "{ }" (unlike in Excerpt 1), and when I perform this: df_dca_dcv['dca'] = df_dca_dcv['dca'].str.replace('[', '') df_dca_dcv['dca'] = df_dca_dcv['dca'].str.replace(']', '') df_dca_dcv['dca'] = df_dca_dcv['dca'].astype(str) I get the following error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 2 1 df_dca_dcv['dca'] = df_dca_dcv['dca'].str.replace("[", "") ----> 2 df_dca_dcv['dca'] = 
df_dca_dcv['dca'].str.replace("]", "") File ~\Anaconda3\lib\site-packages\pandas\core\generic.py:5575, in NDFrame.__getattr__(self, name) 5568 if ( 5569 name not in self._internal_names_set 5570 and name not in self._metadata 5571 and name not in self._accessors 5572 and self._info_axis._can_hold_identifiers_and_holds_name(name) 5573 ): 5574 return self[name] -> 5575 return object.__getattribute__(self, name) File ~\Anaconda3\lib\site-packages\pandas\core\accessor.py:182, in CachedAccessor.__get__(self, obj, cls) 179 if obj is None: 180 # we're accessing the attribute of the class, i.e., Dataset.geo 181 return self._accessor --> 182 accessor_obj = self._accessor(obj) 183 # Replace the property with the accessor object. Inspired by: 184 # https://www.pydanny.com/cached-property.html 185 # We need to use object.__setattr__ because we overwrite __setattr__ on 186 # NDFrame 187 object.__setattr__(obj, self._name, accessor_obj) File ~\Anaconda3\lib\site-packages\pandas\core\strings\accessor.py:177, in StringMethods.__init__(self, data) 174 def __init__(self, data): 175 from pandas.core.arrays.string_ import StringDtype --> 177 self._inferred_dtype = self._validate(data) 178 self._is_categorical = is_categorical_dtype(data.dtype) 179 self._is_string = isinstance(data.dtype, StringDtype) File ~\Anaconda3\lib\site-packages\pandas\core\strings\accessor.py:231, in StringMethods._validate(data) 228 inferred_dtype = lib.infer_dtype(values, skipna=True) 230 if inferred_dtype not in allowed_types: --> 231 raise AttributeError("Can only use .str accessor with string values!") 232 return inferred_dtype AttributeError: Can only use .str accessor with string values! I antecipated the ".astype(str)" operation, and then performed the ".str.replace(...)" operations. However, when I look at the dataframe now Excerpt4: column dca values are not in the same format as they are in Excerpt2 (e.g. "Decimal('2.2'),Decimal('2.2')..."). 
When I go ahead and execute mppts_dca = df_dca_dcv.apply(split_dca, axis=1) df_dca_dcv = pd.concat([df_dca_dcv, mppts_dca], axis=1) df_dca_dcv['date'] = df_dca_dcv['date'].astype('datetime64[ns]') df_dca_dcv['dca_mppt_0'] = pd.to_numeric(df_dca_dcv[0], errors='coerce') df_dca_dcv['dca_mppt_1'] = pd.to_numeric(df_dca_dcv[1], errors='coerce') the dca values are not passed to the newly splited columns, which (I suppose) is because "pd.to_numeric(" can't read "Decimal(...)": Excerpt5: I've tried all the following methods to convert dca column to string: METHOD1: df_dca_dcv['dca'] = df_dca_dcv['dca'].map(str) #produced same output format as before METHOD2: df_dca_dcv['dca'] = df_dca_dcv['dca'].apply(str) #produced same output format as before METHOD3: df_dca_dcv['dca'] = df_dca_dcv['dca'].astype(str) #generated the following error: ValueError Traceback (most recent call last) Cell In[6], line 1 ----> 1 df_dca_dcv['dca'] = df_dca_dcv['dca'].values.astype(str) ValueError: setting an array element with a sequence METHOD4: df_dca_dcv['dca'] = df_dca_dcv['dca'].values.astype(str) #generated same error as METHOD3 METHOD5: df_dca_dcv['dca'] = df_dca_dcv['dca'].applymap(str) #generated the following error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[7], line 1 ----> 1 df_dca_dcv['dca'] = df_dca_dcv['dca'].applymap(str) File ~\Anaconda3\lib\site-packages\pandas\core\generic.py:5575, in NDFrame.__getattr__(self, name) 5568 if ( 5569 name not in self._internal_names_set 5570 and name not in self._metadata 5571 and name not in self._accessors 5572 and self._info_axis._can_hold_identifiers_and_holds_name(name) 5573 ): 5574 return self[name] -> 5575 return object.__getattribute__(self, name) AttributeError: 'Series' object has no attribute 'applymap' METHOD6: def convert_float_string(row): float_list = row['dca'] if len(float_list) > 0: string_list = ["%.2f" % i for i in float_list] else: string_list = float('NaN') return string_list df_dca_dcv['dca'] = df_dca_dcv.apply(lambda row: convert_float_string(row), axis=1) #generated the following error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[8], line 1 ----> 1 df_dca_dcv['dca'] = df_dca_dcv.apply(lambda row: convert_float_string(row), axis=1) File ~\Anaconda3\lib\site-packages\pandas\core\frame.py:8839, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs) 8828 from pandas.core.apply import frame_apply 8830 op = frame_apply( 8831 self, 8832 func=func, (...) 
8837 kwargs=kwargs, 8838 ) -> 8839 return op.apply().__finalize__(self, method="apply") File ~\Anaconda3\lib\site-packages\pandas\core\apply.py:727, in FrameApply.apply(self) 724 elif self.raw: 725 return self.apply_raw() --> 727 return self.apply_standard() File ~\Anaconda3\lib\site-packages\pandas\core\apply.py:851, in FrameApply.apply_standard(self) 850 def apply_standard(self): --> 851 results, res_index = self.apply_series_generator() 853 # wrap results 854 return self.wrap_results(results, res_index) File ~\Anaconda3\lib\site-packages\pandas\core\apply.py:867, in FrameApply.apply_series_generator(self) 864 with option_context("mode.chained_assignment", None): 865 for i, v in enumerate(series_gen): 866 # ignore SettingWithCopy here in case the user mutates --> 867 results[i] = self.f(v) 868 if isinstance(results[i], ABCSeries): 869 # If we have a view on v, we need to make a copy because 870 # series_generator will swap out the underlying data 871 results[i] = results[i].copy(deep=False) Cell In[8], line 1, in <lambda>(row) ----> 1 df_dca_dcv['dca'] = df_dca_dcv.apply(lambda row: convert_float_string(row), axis=1) Cell In[6], line 3, in convert_float_string(row) 1 def convert_float_string(row): 2 float_list = row['dca'] ----> 3 if len(float_list) > 0: 4 string_list = ["%.2f" % i for i in float_list] 5 else: TypeError: object of type 'NoneType' has no len() ...and if I simply skip converting dca to string and use df_dca_dcv['dca'] = df_dca_dcv['dca'].replace("[", "") df_dca_dcv['dca'] = df_dca_dcv['dca'].replace("]", "") the replacement does not take place. I would appreciate any suggestions on how to fix that issue.
Update They can have 0 to 11 elements and the split operation should filter only the 'n' first elements from left to right, where 'n' = row['n_mppts'] Since dca has variable length, you can use this code: # Part 0: fix special cases mask = df['dca'].isna() df.loc[mask, 'dca'] = df.loc[mask, 'dca'].apply(lambda x: []) lens = df['dca'].str.len().values # get the length of each array n_mppts = df['n_mppts'].mask(df['n_mppts'].gt(lens), lens) # Part 1: pad each array to be stacked nrows, ncols = len(df), int(lens.max()) dca = np.zeros((nrows, ncols)) # create a 0s target array mask = lens[:, None] > np.arange(ncols) dca[mask] = np.concatenate(df['dca']).astype(float) # copy data # Part 2: keep values according n_mppts mask = n_mppts.values[:, None] <= np.arange(ncols) dca[mask] = np.nan dca_df = pd.DataFrame(dca).add_prefix('dca_mppt_') dca_df Output: dca_mppt_0 dca_mppt_1 dca_mppt_2 dca_mppt_3 dca_mppt_4 dca_mppt_5 dca_mppt_6 dca_mppt_7 dca_mppt_8 dca_mppt_9 dca_mppt_10 0 2.3 2.3 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 2.6 2.6 NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 2.9 2.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 6.0 5.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 3.9 3.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN What you receive from read_sql_query is a list of Decimal instances. It seems dca and dcv are a length of 11 items. You can use numpy to get your expected output in vectorized way: dca = np.vstack(df['dca']).astype(float) mask = df['n_mppts'].values[:, None] <= np.arange(12) dca[mask] = np.nan dca_df = pd.DataFrame(dca).add_prefix('dca_mppt_') Output: >>> dca_df dca_mppt_0 dca_mppt_1 dca_mppt_2 dca_mppt_3 dca_mppt_4 dca_mppt_5 dca_mppt_6 dca_mppt_7 dca_mppt_8 dca_mppt_9 dca_mppt_10 0 2.3 2.3 NaN NaN NaN NaN NaN NaN NaN NaN NaN 1 2.6 2.6 NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 2.9 2.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 6.0 5.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 3.9 3.9 NaN NaN NaN NaN NaN NaN NaN NaN NaN
3
1
75,925,524
2023-4-4
https://stackoverflow.com/questions/75925524/accessing-the-chrome-pages-inside-python-without-selenium
There's the requests and urllib page that can access http(s) protocol pages in Python, e.g. import requests requests.get('stackoverflow.com') but when it comes to chrome:// pages, e.g. chrome://settings/help, the url libraries won't work and this: import requests requests.get('chrome://settings/help') throws the error: InvalidSchema: No connection adapters were found for 'chrome://settings/help' I guess there's no way for requests or urllib to determine which chrome to use and where's the binary executable file for the browser. So the adapter can't be easily coded. The goal here is to pythonically obtain the string Version 111.0.5563.146 (Official Build) (x86_64) from the chrome://settings/help page of the default chrome browser on the machine, e.g. it looks like this: Technically, it is possible to get to the page through selenium e.g. from selenium import webdriver driver = webdriver.Chrome("./lib/chromedriver_111.0.5563.64") driver.get('chrome://settings/help') But even if we can get the selenium driver to get to the chrome://settings/help, the .page_source is information for Version ... is missing. Also, other than the chrome version, the access to chrome:// pages would be used to retrieve other information, e.g. https://www.ghacks.net/2012/09/04/list-of-chrome-urls-and-their-purpose/ While's there's a way to call a Windows command line function to retrieve the browser version details, e.g. How to get chrome version using command prompt in windows , the solution won't generalize and work on Mac OS / Linux.
You can use Pyppeteer as Selenium's alternative for accessing the chrome:// pages
pip install pyppeteer nest_asyncio
(asyncio is part of the standard library and does not need to be installed; nest_asyncio is the extra package the example imports.)
Example for Windows OS:
from pyppeteer import launch
import time
import asyncio
import nest_asyncio
nest_asyncio.apply()

async def main():
    browser = await launch({
        "headless": False,
        "executablePath": "C:/Program Files/Google/Chrome/Application/chrome.exe"
    })
    page = await browser.newPage()
    await page.goto('chrome://settings/help')
    time.sleep(3) # wait 3s
    chromeVersion = await page.evaluate('''
        document.querySelector('settings-ui')
            .shadowRoot.querySelector('settings-main')
            .shadowRoot.querySelector('settings-about-page')
            .shadowRoot.querySelector('div.secondary').innerText
    ''')
    await browser.close()
    print(chromeVersion)

asyncio.get_event_loop().run_until_complete(main())
The output will correspond to your Chrome Application:
Version 112.0.5615.87 (Official Build) (64-bit)
For Linux/Mac, you can change executablePath based on the Chrome application's path.
3
1
75,891,072
2023-3-30
https://stackoverflow.com/questions/75891072/valueerror-unable-to-infer-channel-dimension-format
When training model with transformers, the following error occurs and I do not know how to resolve it (my input is torch.Size([1, 3, 224, 224])) : --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_23/2337200543.py in 11 ) 12 # begin training ---> 13 results = trainer.train() /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1635 resume_from_checkpoint=resume_from_checkpoint, 1636 trial=trial, -> 1637 ignore_keys_for_eval=ignore_keys_for_eval, 1638 ) 1639 /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1870 1871 step = -1 -> 1872 for step, inputs in enumerate(epoch_iterator): 1873 total_batched_samples += 1 1874 if rng_to_sync: /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in next(self) 626 # TODO(https://github.com/pytorch/pytorch/issues/76750) 627 self._reset() # type: ignore[call-arg] --> 628 data = self._next_data() 629 self._num_yielded += 1 630 if self._dataset_kind == _DatasetKind.Iterable and \ /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 669 def _next_data(self): 670 index = self._next_index() # may raise StopIteration --> 671 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 672 if self._pin_memory: 673 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 56 data = self.dataset.getitems(possibly_batched_index) 57 else: ---> 58 data = [self.dataset[idx] for idx in possibly_batched_index] 59 else: 60 data = self.dataset[possibly_batched_index] /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in (.0) 56 data = self.dataset.getitems(possibly_batched_index) 57 else: ---> 58 data = [self.dataset[idx] for idx in possibly_batched_index] 59 else: 60 data = self.dataset[possibly_batched_index] /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in getitem(self, key) 1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1764 return self._getitem( -> 1765 key, 1766 ) 1767 /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1749 formatted_output = format_table( -> 1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1751 ) 1752 return formatted_output /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in call(self, pa_table, query_type) 279 def call(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) 
/opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table) 385 386 def format_row(self, pa_table: pa.Table) -> dict: --> 387 formatted_batch = self.format_batch(pa_table) 388 try: 389 return _unnest(formatted_batch) /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in format_batch(self, pa_table) 416 if self.decoded: 417 batch = self.python_features_decoder.decode_batch(batch) --> 418 return self.transform(batch) 419 420 /tmp/ipykernel_23/3636630232.py in preprocess(batch) 3 inputs = feature_extractor( 4 batch['image'], ----> 5 return_tensors='pt' 6 ) 7 # include the labels /opt/conda/lib/python3.7/site-packages/transformers/image_processing_utils.py in call(self, images, **kwargs) 456 def call(self, images, **kwargs) -> BatchFeature: 457 """Preprocess an image or a batch of images.""" --> 458 return self.preprocess(images, **kwargs) 459 460 def preprocess(self, images, **kwargs) -> BatchFeature: /opt/conda/lib/python3.7/site-packages/transformers/models/vit/image_processing_vit.py in preprocess(self, images, do_resize, size, resample, do_rescale, rescale_factor, do_normalize, image_mean, image_std, return_tensors, data_format, **kwargs) 260 261 if do_resize: --> 262 images = [self.resize(image=image, size=size_dict, resample=resample) for image in images] 263 264 if do_rescale: /opt/conda/lib/python3.7/site-packages/transformers/models/vit/image_processing_vit.py in (.0) 260 261 if do_resize: --> 262 images = [self.resize(image=image, size=size_dict, resample=resample) for image in images] 263 264 if do_rescale: /opt/conda/lib/python3.7/site-packages/transformers/models/vit/image_processing_vit.py in resize(self, image, size, resample, data_format, **kwargs) 125 raise ValueError(f"The size dictionary must contain the keys height and width. Got {size.keys()}") 126 return resize( --> 127 image, size=(size["height"], size["width"]), resample=resample, data_format=data_format, **kwargs 128 ) 129 /opt/conda/lib/python3.7/site-packages/transformers/image_transforms.py in resize(image, size, resample, reducing_gap, data_format, return_numpy) 288 # For all transformations, we want to keep the same data format as the input image unless otherwise specified. 289 # The resized image from PIL will always have channels last, so find the input format first. --> 290 data_format = infer_channel_dimension_format(image) if data_format is None else data_format 291 292 # To maintain backwards compatibility with the resizing done in previous image feature extractors, we use /opt/conda/lib/python3.7/site-packages/transformers/image_utils.py in infer_channel_dimension_format(image) 163 elif image.shape[last_dim] in (1, 3): 164 return ChannelDimension.LAST --> 165 raise ValueError("Unable to infer channel dimension format") 166 167 ValueError: Unable to infer channel dimension format
I was using a .png dataset for this training; once I converted the images to .jpg, everything went well!
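A likely explanation (an assumption based on the traceback, which only accepts 1 or 3 channels): PNG files often carry an alpha channel (RGBA) or are palette/grayscale images, so the ViT image processor cannot infer the channel dimension. A small Pillow sketch that forces 3-channel RGB (the file name is hypothetical):
from PIL import Image

img = Image.open("example.png")   # hypothetical file
print(img.mode)                   # often "RGBA", "P" or "L" rather than "RGB"
img.convert("RGB").save("example.jpg")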
5
4
75,925,357
2023-4-4
https://stackoverflow.com/questions/75925357/plotly-hexbin-cutoff-within-specified-json-boundary
I'm plotting a separate hexbin figure and json boundary file. The hexbin grid overlaps the boundary file though. I'm interested in displaying the African continent only. I'm aiming to cut-off or subset the hexbin grid within the African continent. So no grid square should be visualised outside the boundary file. Is there a way to achieve this using Plotly? import numpy as np import pandas as pd import plotly.express as px import plotly.graph_objs as go import plotly.figure_factory as ff import geopandas as gpd import json data = pd.DataFrame({ 'LAT': [1,5,6,7,5,6,7,5,6,7,5,6,7,12,-40,50], 'LON': [10,10,11,12,10,11,12,10,11,12,10,11,12,-20,40,50], }) gdf_poly = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) gdf_poly = gdf_poly.drop('name', axis = 1) Afr_gdf_area = gdf_poly[gdf_poly['continent'] == 'Africa'].reset_index(drop = True) fig = ff.create_hexbin_mapbox(data_frame=data, lat="LAT", lon="LON", nx_hexagon=25, opacity=0.4, labels={"color": "Point Count"}, mapbox_style='carto-positron', zoom = 1 ) fig.update_layout(mapbox={ "layers": [ {"source": json.loads(Afr_gdf_area.geometry.to_json()), "below": "traces", "type": "fill", "color": "orange", "opacity" : 0.1, "line": {"width": 1} }, ], }) fig.show() Intended output is to cut-off or clip squares outside the African continent, which is in orange.
If you look inside fig.data[0], it's a Choroplethmapbox with several fields including customdata and geojson. The geojson contains all of the information that plotly needs to draw the hexbins, including the coordinates and unique id for each hexagon. The customdata is an array of shape [n_hexbins x 3] where each element of the array includes the id and the numeric values that plotly uses to determine the color of each hexbin. 'customdata': array([[0.0, '-0.3490658516205964,-0.7648749219440846', 0], [0.0, '-0.3490658516205964,-0.6802309514438665', 0], [0.0, '-0.3490658516205964,-0.5955869809436484', 0], ..., [0.0, '0.8482300176421051,0.8010385323099501', 0], [0.0, '0.8482300176421051,0.8856825028101681', 0], [0.0, '0.8482300176421051,0.9703264733103861', 0]], dtype=object), 'geojson': {'features': [{'geometry': {'coordinates': [[[-20.00000007, -41.31174966478728], [-18.6000000672, -40.70179509236059], [-18.6000000672, -39.464994178287064], [-20.00000007, -38.838189880150665], [-21.4000000728, -39.464994178287064], [-21.4000000728, -40.70179509236059], [-20.00000007, -41.31174966478728]]], 'type': 'Polygon'}, 'id': '-0.3490658516205964,-0.7648749219440846', 'type': 'Feature'}, {'geometry': {'coordinates': [[[-20.00000007, -37.56790013078226], [-18.6000000672, -36.924474103794715], [-18.6000000672, -35.62123099996148], [-20.00000007, -34.96149172026768], [-21.4000000728, -35.62123099996148], [-21.4000000728, -36.924474103794715], [-20.00000007, -37.56790013078226]]], 'type': 'Polygon'}, 'id': '-0.3490658516205964,-0.6802309514438665', 'type': 'Feature'}, {'geometry': {'coordinates ... To select the hexbins within the specified boundary, we can start by extracting the information from customdata and geojson within the fig.data[0] generated by plotly, and create a geopandas dataframe. Then we can create a new geopandas dataframe called hexbins_in_afr which is an inner join between our new gdf of hexbins and Afr_gdf_area (so that we are dropping all hexbins outside of Afr_gdf_area). 
After we extract the geojson information from hexbins_in_afr as well as the customdata, we can explicitly set the following fields within fig.data[0]: fig.data[0]['geojson']['features'] = new_geojson fig.data[0]['customdata'] = hexbins_in_afr['customdata'] Here is the code with the necessary modifications: import numpy as np import pandas as pd import plotly.express as px import plotly.graph_objs as go import plotly.figure_factory as ff import geopandas as gpd from geopandas.tools import sjoin from shapely.geometry import Polygon import json data = pd.DataFrame({ 'LAT': [1,5,6,7,5,6,7,5,6,7,5,6,7,12,-40,50], 'LON': [10,10,11,12,10,11,12,10,11,12,10,11,12,-20,40,50], }) gdf_poly = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) gdf_poly = gdf_poly.drop('name', axis = 1) Afr_gdf_area = gdf_poly[gdf_poly['continent'] == 'Africa'].reset_index(drop = True) fig = ff.create_hexbin_mapbox(data_frame=data, lat="LAT", lon="LON", nx_hexagon=25, opacity=0.4, labels={"color": "Point Count"}, mapbox_style='carto-positron', zoom = 1 ) gdf = gpd.GeoDataFrame({ 'customdata': fig.data[0]['customdata'].tolist(), 'id':[item['id'] for item in fig.data[0]['geojson']['features']], 'geometry':[Polygon(item['geometry']['coordinates'][0]) for item in fig.data[0]['geojson']['features']] }) gdf.set_crs(epsg=4326, inplace=True) hexbins_in_afr = sjoin(gdf, Afr_gdf_area, how='inner') def get_coordinates(polygon): return [[list(i) for i in polygon.exterior.coords]] hexbins_in_afr['coordinates'] = hexbins_in_afr['geometry'].apply(lambda x: get_coordinates(x)) ## create a new geojson that matches the structure of fig.data[0]['geojson']['features'] new_geojson = [{ 'type': 'Feature', 'id': id, 'geometry': { 'type': 'Polygon', 'coordinates': coordinate } } for id, coordinate in zip(hexbins_in_afr['id'],hexbins_in_afr['coordinates'])] fig.data[0]['geojson']['features'] = new_geojson fig.data[0]['customdata'] = hexbins_in_afr['customdata'] fig.update_layout(mapbox={ "layers": [ {"source": json.loads(Afr_gdf_area.geometry.to_json()), "below": "traces", "type": "fill", "color": "orange", "opacity" : 0.1, "line": {"width": 1} }, ], }) fig.show()
5
5
75,921,380
2023-4-3
https://stackoverflow.com/questions/75921380/python-segmentation-fault-in-interactive-mode
The python is installed with conda: (base) [kangl@login05]~% which python ~/miniconda3/bin/python When directly run python in iteractive mode, a segmentation fault will apear: (base) [kangl@login05]~% python Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. zsh: segmentation fault python However, the python script runs successfully: (base) [kangl@login05]~% python -c "print('hello world')" hello world And ipython runs successfully: (base) [kangl@login05]~% ipython Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.12.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: Then I try to debug it with gdb: (base) [kangl@login04]~% gdb python GNU gdb (GDB) 11.2 Copyright (C) 2022 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-conda-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: <https://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from python... (gdb) run Starting program: /users/kangl/miniconda3/bin/python [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. Program received signal SIGSEGV, Segmentation fault. 0x00002aaaab96b691 in __strlen_sse2_pminub () from /lib64/libc.so.6 And (gdb) run Starting program: /users/kangl/miniconda3/bin/python [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. Program received signal SIGSEGV, Segmentation fault. 
0x00002aaaab96b691 in __strlen_sse2_pminub () from /lib64/libc.so.6 (gdb) backtrace #0 0x00002aaaab96b691 in __strlen_sse2_pminub () from /lib64/libc.so.6 #1 0x00002aaaaac53f49 in _rl_init_locale () from /users/kangl/miniconda3/lib/python3.10/lib-dynload/../../libreadline.so.8 #2 0x00002aaaaac54044 in _rl_init_eightbit () from /users/kangl/miniconda3/lib/python3.10/lib-dynload/../../libreadline.so.8 #3 0x00002aaaaac32797 in rl_initialize () from /users/kangl/miniconda3/lib/python3.10/lib-dynload/../../libreadline.so.8 #4 0x00002aaaaaad4a7d in setup_readline (mod_state=0x2aaab21b02a0) at /usr/local/src/conda/python-3.10.10/Modules/readline.c:1289 #5 PyInit_readline () at /usr/local/src/conda/python-3.10.10/Modules/readline.c:1502 #6 0x000055555576939a in _PyImport_LoadDynamicModuleWithSpec (fp=0x0, spec=<ModuleSpec(name='readline', loader=<ExtensionFileLoader(name='readline', path='/users/kangl/miniconda3/lib/python3.10/lib-dynload/readline.cpython-310-x86_64-linux-gnu.so') at remote 0x2aaab21b09a0>, origin='/users/kangl/miniconda3/lib/python3.10/lib-dynload/readline.cpython-310-x86_64-linux-gnu.so', loader_state=None, submodule_search_locations=None, _set_fileattr=True, _cached=None) at remote 0x2aaab21b02e0>) at /usr/local/src/conda/python-3.10.10/Python/importdl.c:167 #7 _imp_create_dynamic_impl (module=<optimized out>, file=<optimized out>, spec=<ModuleSpec(name='readline', loader=<ExtensionFileLoader(name='readline', path='/users/kangl/miniconda3/lib/python3.10/lib-dynload/readline.cpython-310-x86_64-linux-gnu.so') at remote 0x2aaab21b09a0>, origin='/users/kangl/miniconda3/lib/python3.10/lib-dynload/readline.cpython-310-x86_64-linux-gnu.so', loader_state=None, submodule_search_locations=None, _set_fileattr=True, _cached=None) at remote 0x2aaab21b02e0>) at /usr/local/src/conda/python-3.10.10/Python/import.c:2050 #8 _imp_create_dynamic (module=<optimized out>, args=args@entry=0x2aaab212e998, nargs=<optimized out>) at /usr/local/src/conda/python-3.10.10/Python/clinic/import.c.h:330 #9 0x0000555555694d14 in cfunction_vectorcall_FASTCALL ( func=<built-in method create_dynamic of module object at remote 0x2aaaaab8e610>, args=0x2aaab212e998, nargsf=<optimized out>, kwnames=<optimized out>) at /usr/local/src/conda/python-3.10.10/Objects/methodobject.c:430 #10 0x000055555568a39b in do_call_core (kwdict={}, --Type <RET> for more, q to quit, c to continue without paging-- It looks the dependency on readline library is broken as Max said. But I have no idea how to fix it. I have tried to reinstall the readline with mamba install -c conda-forge readline --force-reinstall but it doesn't work.
After encountering similar errors, I have found that the following lines of code solve the segmentation fault: export LANGUAGE=UTF-8 export LC_ALL=en_US.UTF-8 export LANG=UTF-8 export LC_CTYPE=en_US.UTF-8 export LANG=en_US.UTF-8 export LC_COLLATE=$LANG export LC_CTYPE=$LANG export LC_MESSAGES=$LANG export LC_MONETARY=$LANG export LC_NUMERIC=$LANG export LC_TIME=$LANG export LC_ALL=$LANG
4
6
75,915,809
2023-4-3
https://stackoverflow.com/questions/75915809/accuracy-value-more-than-1-with-nn-bcewithlogitsloss-loss-function-pytorch-in
I am trying to use nn.BCEWithLogitsLoss() for model which initially used nn.CrossEntropyLoss(). However, after doing some changes to the training function to accommodate the nn.BCEWithLogitsLoss() loss function the model accuracy values are shown as more than 1. Please find the code below. # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = '/kaggle/input/catsndogsorg/hymenoptera_data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ############# def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print(f'Epoch {epoch}/{num_epochs - 1}') print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device).unsqueeze(1) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) #print(outputs, labels) loss = criterion(outputs, labels.float()) print(loss) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}') # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s') print(f'Best val Acc: {best_acc:4f}') # load best model weights model.load_state_dict(best_model_wts) return model EDIT:Model pipeline model_ft = models.resnet18(weights='ResNet18_Weights.DEFAULT') num_ftrs = model_ft.fc.in_features # Here the size of each output sample is set to 2. # Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)). 
model_ft.fc = nn.Linear(num_ftrs, 1) model_ft = model_ft.to(device) criterion = nn.BCEWithLogitsLoss() # Observe that all parameters are being optimized optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,num_epochs=25) The outputs of training loop: outputs shape: torch.Size([4, 1]) labels shape: torch.Size([4, 1]) logits: tensor(0.3511,grad_fn<BinaryCrossEntropyWithLogitsBackward0>) train Loss: 1.0000 Acc: 2.0164 val Loss: 1.0000 Acc: 1.8105 Would anyone be able to help me in this matter please. Thanks & Best Regards AMJS
I didn't understand why you are using torch.max as you have one output. Anyway, you should use squeeze before comparing, so this line: running_corrects += torch.sum(preds == labels.data) should become running_corrects += torch.sum(preds == labels.squeeze()) to see why: labels = torch.tensor([[0], [0], [0], [1]]) preds = torch.tensor([0, 0, 0, 0]) print(torch.sum(preds == labels)) # output 12 print(torch.sum(preds == labels.squeeze())) # output 3
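Since the model now emits a single logit per sample for nn.BCEWithLogitsLoss, a common alternative (a sketch reusing the variable names from the question, not the only way) is to threshold the sigmoid instead of calling torch.max:
probs = torch.sigmoid(outputs)            # logits -> probabilities, shape [batch, 1]
preds = (probs > 0.5).long().squeeze(1)   # shape [batch], values 0 or 1
running_corrects += torch.sum(preds == labels.squeeze())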
4
3
75,920,755
2023-4-3
https://stackoverflow.com/questions/75920755/why-does-the-parameter-disabledwidth-of-a-tkinter-canvas-rectangle-not-work
I want the outline of a rectangle in a canvas to get a bigger width, when the rectangle is in state "disabled". Therefore I use the parameter "disabledwidth=4". But when the rectangle is in state "disabled", the outline has still a width of 1 instead of 4. This is my code, which shows the problem: When I move the mouse over the rectangle, the state of the rectangle changes to "active", everything works as expected, especially the outlinewidth changes to 4. But when I change the state to "disabled" by clicking on the button the outline stays at width 1. What am I doing wrong? import tkinter as tk def disabled(): canvas.itemconfig(rect, state="disabled") def normal(): canvas.itemconfig(rect, state="normal") root = tk.Tk() canvas = tk.Canvas(root, height=250, width=250) button1 = tk.Button(root, text="change rectangle to state disabled", command=disabled) button2 = tk.Button(root, text="change rectangle to state normal" , command=normal ) rect = canvas.create_rectangle(40, 40, 180, 180, fill = "red", activefill = "green2", activeoutline = "green3", activewidth = 4, disabledfill = "grey", disabledoutline= "grey2", disabledwidth = 4 ) canvas.grid() button1.grid() button2.grid() root.mainloop()
This is due to a very old but simple bug in Tcl/Tk which causes -disabledwidth to not be parsed properly for rectangle and oval canvas items, causing it to be silently ignored (at least in some cases). I reported it to Tcl/Tk along with the fix: https://core.tcl-lang.org/tk/info/f4d9d74df628 So hopefully this will be fixed in Tcl/Tk 8.6.14 (which hopefully will then be used in future Python distributions).
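Until a release containing the fix ships, one possible workaround (a sketch adapting the question's callbacks; since the ignored -disabledwidth makes Tk fall back to the normal outline width, it simply switches that width by hand):
def disabled():
    canvas.itemconfig(rect, state="disabled", width=4)

def normal():
    canvas.itemconfig(rect, state="normal", width=1)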
6
0
75,907,716
2023-4-1
https://stackoverflow.com/questions/75907716/add-column-with-current-date-and-time-to-polars-dataframe
How can I add a column to a Polars DataFrame with current date and time as value on every row? With Pandas, I would do something like this: df["date"] = pd.Timestamp.today()
EDIT: Better answer from @jqurious: df.with_columns(date = datetime.now()) My original solution: Use pl.lit() and Python's datetime to create a literal of the current date: from datetime import datetime import polars as pl df = pl.DataFrame( ... ) df.with_columns(pl.lit(datetime.now()).dt.datetime().alias("date"))
5
7
75,931,752
2023-4-4
https://stackoverflow.com/questions/75931752/how-to-add-an-empty-facet-to-a-relplot-or-facetgrid
I have a relplot with columns split on one variable. I'd like to add one additional column with no subplot or subpanel. To give a clear example, suppose I have the following plot: import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns x = np.random.random(10000) t = np.random.randint(low=0, high=3, size=10000) y = np.multiply(x, t) df = pd.DataFrame({'x': x, 't': t, 'y': y}) g = sns.relplot(df, x='x', y='y', col='t') This generates a plot something like I want a 4th column for t=3 that displays no data nor axes. I just want a blank white subplot of equal size as the first three subplots. How can I do this?
Add a value not observed in the data to col_order, e.g. g = sns.relplot(df, x='x', y='y', col='t', col_order=[0, 1, 2, ""]) g.axes.flat[-1].set_title("")
4
7
75,893,753
2023-3-30
https://stackoverflow.com/questions/75893753/how-to-write-decorator-without-syntactic-sugar-in-python
This question is rather specific, and I believe there are many similar questions but not exactly like this. I am trying to understand syntactic sugar. My understanding of it is that by definition the code always can be written in a more verbose form, but the sugar exists to make it easier for humans to handle. So there is always a way to write syntactic sugar "without sugar" so to speak? With that in mind, how precisely do you write a decorator without syntactic sugar? I understand it's basically like: # With syntactic sugar @decorator def foo(): pass # Without syntactic sugar def foo(): pass foo = decorator(foo) Except from PEP 318 Current Syntax The current syntax for function decorators as implemented in Python 2.4a2 is: @dec2 @dec1 def func(arg1, arg2, ...): pass This is equivalent to: def func(arg1, arg2, ...): pass func = dec2(dec1(func)) without the intermediate assignment to the variable func. (emphasis mine) In the example I gave above, which is how the syntactic sugar is commonly explained, there is an intermediate assignment. But how does the syntactic sugar work without the intermediate assignment? A lambda function? But I also thought they could only be one line? Or is the name of the decorated function changed? It seems like that could possibly conflict with another method if the user created one coincidentally with that same name. But I don't know which is why I'm asking. To give a specific example, I'm thinking of how a property is defined. Since when defining a property's setter method, it cannot work if the setter method is defined as that would destroy the property. class Person: def __init__(self, name): self.name = name @property def name(self): return self._name # name = property(name) # This would work @name.setter def name(self, value): self._name = value.upper() # name = name.setter(name) # This would not work as name is no longer a property but the immediately preceding method
def func(arg1, arg2, ...):
    pass
func = dec2(dec1(func))

"In the example [...] there is an intermediate assignment. But how does the syntactic sugar work without the intermediate assignment?"

Actually, the "non syntactic sugar" version, as you call it, is not exactly the same as using the decorator syntax with an @decorator. As you noted, when using the @ notation, the initial function name is never assigned to a variable: the only assignment that takes place is for the resolved decorator. So in:

@deco
def func():
    ...

what is assigned to func in the globals() scope is the value returned by the call to deco. While in:

def func():
    ...
func = deco(func)

func is first assigned the raw function, and only when the func = deco(func) line is executed is the former shadowed by the decorated result. The same applies to cascading decorators: only the final output, of the topmost decorator, is ever assigned to a variable name.

As well, the name used with the @ syntax is taken from the source code - the name used in the def statement: if one of the decorators happens to modify the function's __name__ attribute, that has no effect on the name the decorated function is assigned to.

These differences are just implementation details and derive from the way things work - I am not sure whether they are in the language spec, but for those who have a certain grasp of the language, (1) they feel so natural that no one would dare implement it differently, and (2) they make next to no difference - except for code that traces program execution (a debugger, or code using the auditing capabilities of the language: https://docs.python.org/3/library/audit_events.html).

Even so, note that the fact that the decorator syntax does not make the intermediate assignment is written down in PEP 318 - and lacking other references, what is in the PEP is the law:

"This is equivalent to: [example without the @ syntax], though without the intermediate creation of a variable named func."

For the sake of completeness, it is worth noting that from Python 3.9 (PEP 614), the syntax restriction that limited which expressions could be used as a decorator after the @ was lifted, superseding the PEP text: any valid expression which evaluates to a callable can now be used.

What about property?

class Person:
    ...
    @property
    def name(self):
        ...

    @name.setter
    def name(self, value):
        ...

What takes place in this example is that name.setter is a callable, called with the setter method (def name(self, value):) defined below it, as usual - but it returns a property object. (Not the same object as the property referenced by name - a new property - but for the purpose of understanding what takes place, it could even return the same object.) So that code is equivalent to:

class Person:
    ...
    def name(self):
        ...

    name = property(name)  # Creates a property with the getter only

    def name_setter(self, value):  # Indeed: this can't be called simply `name`: it would override the property created above
        ...

    name = name.setter(name_setter)  # Creates a new property, copying what was already set in the previous "name" and adding a setter
    del name_setter  # optionally: removes the explicit setter from the class namespace, as it will always be called via the property

In fact, while property was created before the decorator syntax (IIRC, in Python 2.2 - decorator syntax came out in Python 2.4), it was not initially used as a decorator. The way one used to define properties in Python 2.2 times was:

class Person:
    ...
    def name_getter(self):
        ...

    def name_setter(self, value):
        ...

    name = property(name_getter, name_setter)
    del name_getter, name_setter  # optional

It was only in Python 2.6 that the clever .setter, .getter and .deleter methods were added to properties, so that a property could be defined entirely with the decorator syntax. Note that for properties with only a getter, @property already worked from Python 2.4 on, as it would just take the single function supplied by the decorator syntax to be the getter - but it was not extendable to add a setter and deleter afterwards.
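For completeness, a small runnable sketch of this desugared property pattern; the Person example and the expected-output comments are my own illustration of what the answer describes, not code from the original post:

class Person:
    def __init__(self, name):
        self.name = name

    def name(self):
        return self._name

    name = property(name)              # getter-only property; shadows the raw function

    def name_setter(self, value):      # can't be called `name`, or it would clobber the property
        self._name = value.upper()

    name = name.setter(name_setter)    # new property: same getter, plus the setter
    del name_setter                    # optional: tidy the class namespace


p = Person("alice")
print(p.name)   # expected: ALICE
p.name = "bob"
print(p.name)   # expected: BOB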
4
3