question_id (int64, 59.5M to 79.4M) | creation_date (date, 2020-01-01 to 2025-02-10) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482)
---|---|---|---|---|---|---
73,406,105 | 2022-8-18 | https://stackoverflow.com/questions/73406105/my-code-caused-the-kernel-to-restart-why-is-the-kernel-restarting | I wrote this for python code and got an unexpected output. The output was a number of zeros then it said "restarting kernel". Why is the kernel restarting? def countdown(n): for n in range(0,5): print(n) countdown(n-1) countdown(2) On the other hand, I tried with if and there was no problem: def countdown(n): if n == 0: print("blast of") else: print(n) countdown(n-1) countdown(5) So why is it not working with for? | > def countdown(n): > for n in range(0,5): > print(n) > countdown(n-1) > countdown(2) In your code above, each function call will call itself recursively 5 times. So the first call is 5, second call there will be 25 calls, third call 125 calls, and the recursive calls went exponentially high, causing the Kernel to restart. If you use recursive function, there must be an exit condition. There are a few ways to achieve your goal: (1) recursive with if-condition exit (this is your successful code) def countdown(n): if n == 0: print("blast off!") else: print(n) countdown(n-1) countdown(5) (2) recursive with while-condition exit def countdown(n): while n != 0: print(n) countdown(n-1) return print("blast off!") countdown(5) (3) for loop (no need recursive) def countdown(n): for i in range(n, 0, -1): print(i) print("blast off!") countdown(5) Output: 5 4 3 2 1 blast off! | 4 | 3 |
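Note on the entry above: to make the "exponentially high" call count concrete, here is a small illustrative sketch (not the original code). It mirrors the buggy countdown's call pattern but adds a depth cap and returns a call count instead of printing, so it always terminates.

```python
def count_calls(n, depth=0, cap=6):
    """Same call pattern as the buggy countdown, but capped and silent:
    returns how many calls were made in total."""
    calls = 1
    if depth >= cap:                                  # safety stop so the demo always terminates
        return calls
    for n in range(0, 5):                             # five iterations per call ...
        calls += count_calls(n - 1, depth + 1, cap)   # ... each of which recurses again
    return calls

for cap in range(1, 7):
    print(f"depth cap {cap}: {count_calls(2, cap=cap)} calls")
    # 6, 31, 156, 781, 3906, 19531 calls -- roughly x5 per extra level
```

Every extra level multiplies the work by about five, which is why the version with no exit condition floods the output and brings the kernel down.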
73,381,902 | 2022-8-17 | https://stackoverflow.com/questions/73381902/running-unittest-with-modules-that-must-import-other-modules | Our Python 3.10 unit tests are breaking when the modules being tested need to import other modules. When we use the packaging techniques recommended by other posts and articles, either the unit tests fail to import modules, or the direct calls to run the app fail to import modules. The other posts and articles we have read do not show how to validate that both the application itself and the unit tests can each import modules when called separately. So we created a bare bones example below and are asking how to structure the packaging correctly. What specific changes must be made to the syntax below in order for the two Python commands given below to successfully run on the bare bones example app given below? Problem description A Python 3.10 app must import modules when called either directly as an app or indirectly through unit tests. Packages must be used to organize the code. Calls to unit tests are breaking because modules cannot be found. The two test commands that must run without errors to validate solution of this problem are: C:\path\to\dir>python repoName\app\first.py C:\path\to\dir>python -m unittest repoName.unitTests.test_example We have reviewed many articles and posts on this topic, but the other sources failed to address our use case, so we have created a more explicit example below to test the two types of commands that must succeed in order to meet the needs of this more explicit use case. App structure The very simple structure of the app that is failing to import packages during unit tests is: repoName app __init__.py first.py second.py third.py unitTests __init__.py test_example.py __init__.py Simple code to reproduce problem The code for a stripped down example to reproduce the problem is as follows: The contents of repoName\app\__init__.py are: print('inside app __init__.py') __all__ = ['first', 'second', 'third'] The contents of first.py are: import second as second from third import third import sys inputArgs=sys.argv def runCommands(): trd = third() if second.something == 'platform': if second.another == 'on': trd.doThree() if second.something != 'unittest' : sys.exit(0) second.processInputArgs(inputArgs) runCommands() The contents of second.py are: something = '' another = '' inputVars = {} def processInputArgs(inputArgs): global something global another global inputVars if ('unittest' in inputArgs[0]): something = 'unittest' elif ('unittest' not in inputArgs[0]): something = 'platform' another = 'on' jonesy = 'go' inputVars = { 'jonesy': jonesy } The contents of third.py are: print('inside third.py') import second as second class third: def __init__(self): pass #@public def doThree(self): print("jonesy is: ", second.inputVars.get('jonesy')) The contents of repoName\unitTests\__init__.py are: print('inside unit-tests __init__.py') __all__ = ['test_example'] The contents of test_example.py are: import unittest class test_third(unittest.TestCase): def test_doThree(self): from repoName.app.third import third num3 = third() num3.doThree() self.assertTrue(True) if __name__ == '__main__': unittest.main() The contents of repoName\__init__.py are: print('inside repoName __init__.py') __all__ = ['app', 'unitTests'] Error resulting from running commands The command line response to the two commands are given below. You can see that the call to the app succeeds, while the call to the unit test fails. 
C:\path\to\dir>python repoName\app\first.py inside third.py jonesy is: go C:\path\to\dir>python -m unittest repoName.unitTests.test_example inside repoName __init__.py inside unit-tests __init__.py inside app __init__.py inside third.py E ====================================================================== ERROR: test_doThree (repoName.unitTests.test_example.test_third) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\path\to\dir\repoName\unitTests\test_example.py", line 15, in test_doThree from repoName.app.third import third File "C:\path\to\dir\repoName\app\third.py", line 3, in <module> import second as second ModuleNotFoundError: No module named 'second' ---------------------------------------------------------------------- Ran 1 test in 0.002s FAILED (errors=1) What specific changes must be made to the code above in order for all the modules to be imported correctly when either of the given commands are run? | Creating an "alias" for modules Update the contents of repoName\app\__init__.py to: print('inside app __init__.py') __all__ = ['first', 'second', 'third'] import sys import repoName.app.second as second sys.modules['second'] = second import repoName.app.third as third sys.modules['third'] = third import repoName.app.first as first sys.modules['first'] = first How to ensure first.py gets run even when imported So when the test fixture imports repoName.app.third, Python will recursively import the parent packages so that: import repoName.app.third is equivalent to import repoName # inside repoName __init__.py import app #inside app __init__.py import third #inside third.py So running from repoName.app.third import third inside test_doThree, executes repoName\app\__init__.py. In __init__.py, import repoName.app.first as first is called. Importing first will execute the following lines at the bottom of first.py second.processInputArgs(inputArgs) runCommands() In second.processInputArgs, jonesy = 'go' is executed setting the variable to be printed out when the rest of the test is ran. | 4 | 4 |
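A self-contained illustration of the sys.modules aliasing used in the answer above, independent of the repoName layout (the module and attribute names below are made up for the demo):

```python
import sys
import types

# Build a throwaway module object to stand in for repoName.app.second.
second = types.ModuleType("second")
second.something = "unittest"

# Registering it under the bare name "second" means `import second` now
# succeeds from anywhere in the process, regardless of package layout.
sys.modules["second"] = second

import second as also_second    # resolved straight from sys.modules, no file lookup
print(also_second.something)    # -> unittest
```

That is the whole trick: the import system consults sys.modules before searching the filesystem, so third.py's bare `import second` finds the aliased package module.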
73,421,579 | 2022-8-19 | https://stackoverflow.com/questions/73421579/list-comprehension-instead-of-extend-in-loop | Can I write this code in one line? I tried use chain in list comprehension. def divisors(n): result = [] for div in range(1, int(sqrt(n)) + 1): if n % div == 0: result.extend([div, n / div]) return list(set(result)) | Are you looking for something like this ? from math import sqrt import itertools def divisors(n): return list(set(itertools.chain.from_iterable([[div, n / div] for div in range(1, int(sqrt(n)) + 1) if n % div == 0]))) | 3 | 2 |
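One more way to write the function above, for comparison: a nested set comprehension flattens the (div, n / div) pairs without itertools. Note it uses integer division, so the large divisors come back as ints rather than the floats the accepted answer returns.

```python
from math import sqrt

def divisors(n):
    # Each qualifying div contributes both div and its cofactor n // div.
    return sorted({d for div in range(1, int(sqrt(n)) + 1) if n % div == 0
                     for d in (div, n // div)})

print(divisors(36))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]
print(divisors(28))   # [1, 2, 4, 7, 14, 28]
```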
73,419,138 | 2022-8-19 | https://stackoverflow.com/questions/73419138/how-to-change-annotations-in-one-column-of-seaborn-heatmap | I plotted a heatmap using seaborn as shown below. I want to change the highlighted column into "Thousands K and Million M" values (shown in the table below). I tried it doing it but the column is changing into string and giving me an error when I am trying to plot those values on the heatmap. Is there a way I can change the values of first column to the desired values on the heatmap? Values Desired Values 662183343.70 662.83M 155554910.90 155.55M Code used for creating the heatmap sns.heatmap(heatmap_df, cmap='rocket_r', annot=True, fmt='.4f', linewidths=2, cbar_kws={'label': 'Percentiles', 'orientation': 'vertical', }, vmin=0.95, vmax=1, xticklabels=xticks_label, ) Changing the column into K, M format # convert zeros to K, M etc. from numerize import numerize as nz df['PropDmgAdj'] = df['PropDmgAdj'].apply(nz.numerize) | Using the lesser known texts attribute of matplotlib.Axes object: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt heatmap_df = pd.DataFrame( [ {"colA": 662183343.70, "colB": 0.9976, "colC": 0.9962}, {"colA": 155567736.90, "colB": 1.0000, "colC": 1.0000}, {"colA": 77777.70, "colB": 0.9976, "colC": 0.9962}, {"colA": 14456.20, "colB": 0.1243, "colC": 0.5356}, ] ) heatmap = sns.heatmap( heatmap_df, cmap="rocket_r", annot=True, fmt=".4f", linewidths=2, cbar_kws={"label": "Percentiles", "orientation": "vertical",}, vmin=0.95, vmax=1, ) # https://stackoverflow.com/questions/37602885/adding-units-to-heatmap-annotation-in-seaborn for t in heatmap.texts: current_text = t.get_text() # https://stackoverflow.com/questions/67629794/pandas-format-large-numbers text_transform = ( lambda x: f"{x//1000000000}B" if x / 1000000000 >= 1 else f"{x//1000000}M" if x / 1000000 >= 1 else f"{int(x//1000)}K" if x / 10000 >= 1 else f"{x}" ) t.set_text(text_transform(float(current_text))) plt.show() | 3 | 2 |
73,420,018 | 2022-8-19 | https://stackoverflow.com/questions/73420018/how-do-i-construct-a-self-referential-recursive-sqlmodel | I want to define a model that has a self-referential (or recursive) foreign key using SQLModel. (This relationship pattern is also sometimes referred to as an adjacency list.) The pure SQLAlchemy implementation is described here in their documentation. Let's say I want to implement the basic tree structure as described in the SQLAlchemy example linked above, where I have a Node model and each instance has an id primary key, a data field (say of type str), and an optional reference (read foreign key) to another node that we call its parent node (field name parent_id). Ideally, every Node object should have a parent attribute, which will be None, if the node has no parent node; otherwise it will contain (a pointer to) the parent Node object. And even better, every Node object should have a children attribute, which will be a list of Node objects that reference it as their parent. The question is twofold: What is an elegant way to implement this with SQLModel? How would I create such node instances and insert them into the database? | The sqlmodel.Relationship function allows explicitly passing additional keyword-arguments to the sqlalchemy.orm.relationship constructor that is being called under the hood via the sa_relationship_kwargs parameter. This parameter expects a mapping of strings representing the SQLAlchemy parameter names to the values we want to pass through as arguments. Since SQLAlchemy relationships provide the remote_side parameter for just such an occasion, we can leverage that directly to construct the self-referential pattern with minimal code. The documentation mentions this in passing, but crucially the remote_side value may be passed as a Python-evaluable string when using Declarative. This is exactly what we need. The only missing piece then is the proper use of the back_populates parameter and we can build the model like so: from typing import Optional from sqlmodel import Field, Relationship, Session, SQLModel, create_engine class Node(SQLModel, table=True): __tablename__ = 'node' # just to be explicit id: Optional[int] = Field(default=None, primary_key=True) data: str parent_id: Optional[int] = Field( foreign_key='node.id', # notice the lowercase "n" to refer to the database table name default=None, nullable=True ) parent: Optional['Node'] = Relationship( back_populates='children', sa_relationship_kwargs=dict( remote_side='Node.id' # notice the uppercase "N" to refer to this table class ) ) children: list['Node'] = Relationship(back_populates='parent') # more code below... Side note: We define id as optional as is customary with SQLModel to avoid being nagged by our IDE when we want to create an instance, for which the id will only be known, after we have added it to the database. The parent_id and parent attributes are obviously defined as optional because not every node needs to have a parent in our model. 
To test that everything works the way we expect it to: def test() -> None: # Initialize database & session: sqlite_file_name = 'database.db' sqlite_uri = f'sqlite:///{sqlite_file_name}' engine = create_engine(sqlite_uri, echo=True) SQLModel.metadata.drop_all(engine) SQLModel.metadata.create_all(engine) session = Session(engine) # Initialize nodes: root_node = Node(data='I am root') # Set the children's `parent` attributes; # the parent nodes' `children` lists are then set automatically: node_a = Node(parent=root_node, data='a') node_b = Node(parent=root_node, data='b') node_aa = Node(parent=node_a, data='aa') node_ab = Node(parent=node_a, data='ab') # Add to the parent node's `children` list; # the child node's `parent` attribute is then set automatically: node_ba = Node(data='ba') node_b.children.append(node_ba) # Commit to DB: session.add(root_node) session.commit() # Do some checks: assert root_node.children == [node_a, node_b] assert node_aa.parent.parent.children[1].parent is root_node assert node_ba.parent.data == 'b' assert all(n.data.startswith('a') for n in node_ab.parent.children) assert (node_ba.parent.parent.id == node_ba.parent.parent_id == root_node.id) \ and isinstance(root_node.id, int) if __name__ == '__main__': test() All the assertions are satisfied and the test runs without a hitch. Also, by using the echo=True switch for the database engine, we can verify in our log output that the table is created as we expected: CREATE TABLE node ( id INTEGER, data VARCHAR NOT NULL, parent_id INTEGER, PRIMARY KEY (id), FOREIGN KEY(parent_id) REFERENCES node (id) ) | 8 | 17 |
73,407,527 | 2022-8-18 | https://stackoverflow.com/questions/73407527/installing-ssl-package-with-pip-requires-ssl-package-to-be-already-installed | CentOS 7 (strict requirement) Python 3.11 (strict requirement) I had to upgrage a software and it requires now Python 3.11. I followed instructions from Internet (https://linuxstans.com/how-to-install-python-centos/), and now Python 3.11 is installed, but cannot download anything, so all the programs that have something to do with Internet, including PIP, do not work because SSL package is not installed. The normal way to install a Python-package is to use PIP, which doesn't work because the SSL package I'm going to install is not installed. I tried all the advices in internet, but they are all outdated and not working any more, because they are either not for the 3.11 version of Python or not for CentOS 7. The error I'm getting when running the application software: ModuleNotFoundError: No module named '_ssl' When I try to install ssl with pip: # pip install --trusted-host pypi.org ssl WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/ssl/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/ssl/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/ssl/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/ssl/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/ssl/ Could not fetch URL https://pypi.org/simple/ssl/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/ssl/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping ERROR: Could not find a version that satisfies the requirement ssl (from versions: none) ERROR: No matching distribution found for ssl WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping I downloaded GZip files from https://pypi.org/simple/ssl/, unpacked them locally and tried to install them from local source, but PIP insists on HTTPS connection ... stupid tool. What to do? | How to get PIP and other HTTPS-based Python programs to work after upgrading to Python 3.11: First of all: you don't necessarily need any magical tools like pyenv. May be pyenv would do these steps, but I'd like to understand what is happening. 
(Ok, I admit that make is also a "magic" tool) Briefly describing: during compilation of Python from source code there is an option to inject OpenSSL support directly into it. In CentOS 7 Python 2.7.5 is installed by default and couldn't be updated to the later ones using built-in package manager. Python 3.6.8 is the latest version available in the CentOS 7 repos. 3.6 also couldn't be updated to the later ones using the package manager. So the only possible solution is to compile Python from source code. Update your yum packages, reboot, install all the packages neccesssary to run OpenSSL and Python. Download the latest OpenSSL source code, unpack and compile. Download the latest Python source code, unpack, configure to use the compiled OpenSSL and compile with altinstall parameter. Do not remove previous Python versions! You will have more problems than benefits. I had to revert virtual machine to the latest snapshot several times, because I destroyed something completely. Update and install yum packages > yum update > yum install openssl-devel bzip2-devel libffi-devel An article suggests also to install some "Development Tools" > yum groupinstall "Development Tools" but this step failed for me and I was able to finish the installation without it. Download the latest OpenSSL source code, unpack and compile I've choosen /usr/src directory to do the manipulations with source code. Download > cd /usr/src > wget https://ftp.openssl.org/source/openssl-1.1.1q.tar.gz --no-check-certificate Unpack > tar -xzvf openssl-1.1.1q.tar.gz > cd openssl-1.1.1q Compile > ./config --prefix=/usr --openssldir=/etc/ssl --libdir=lib no-shared zlib-dynamic > make Run tests for the compiled OpenSSL > make test Install > make install Check that OpenSSL is installed > openssl version OpenSSL 1.1.1q 5 Jul 2022 > which openssl /usr/bin/openssl Download and compile Python Download > cd /usr/src > wget https://www.python.org/ftp/python/3.11.0/Python-3.11.0a4.tgz Unpack > tar -xzf Python-3.11.0a4.tgz > cd Python-3.11.0a4 Configure > ./configure --enable-optimizations --with-openssl=/usr It is important that the --with-openssl option has the same value as the --prefix option when you configured OpenSSL above!!! Compile and install (It's time for a cup of coffee - it takes time) > make altinstall Checking that Python 3.11 is installed: > python3.11 -V Python 3.11.0a4 If you have set symbolic links, then Python 3.11 should be callable by "python3" and/or "python" aliases > python3 -V Python 3.11.0a4 > python -V Python 3.11.0a4 Also check that PIP is working and that symlink-aliases for it are there. Now it's time to check that your Python-based programs are working. Some of them should be installed again by PIP, because they were installed in subdirectories of previous Python versions. After doing these manipulations I also got SSL certificates error: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:998)> After running > pip3 install certifi the problem is gone. | 7 | 23 |
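After the build steps in this entry, a quick sanity check from the new interpreter confirms that the _ssl module was compiled in and that certificate verification works end to end (a minimal sketch; the file name and URL are just convenient examples):

```python
# Run with the freshly built interpreter, e.g.:  python3.11 check_ssl.py
import ssl
import urllib.request

print(ssl.OPENSSL_VERSION)      # should report the OpenSSL version you compiled against

# If this prints 200, HTTPS works and pip will be able to reach PyPI.
with urllib.request.urlopen("https://pypi.org", timeout=10) as resp:
    print(resp.status)
```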
73,411,073 | 2022-8-19 | https://stackoverflow.com/questions/73411073/how-to-find-index-of-the-first-unique-elements-in-pandas-dataframe | Consider df1 = pd.DataFrame("Name":["Adam","Joseph","James","James","Kevin","Kevin","Kevin","Peter","Peter"]) I want to get the index of the unique values in the dataframe. When I do df1["Name"].unique() I get the output as ['Adam','Joseph','James','Kevin','Peter'] But I want to get the location of the first occurrence of each value [0,1,2,4,7] | I would suggest to use numpy.unique with the return_index as True. np.unique(df1, return_index=True) Out[13]: (array(['Adam', 'James', 'Joseph', 'Kevin', 'Peter'], dtype=object), array([0, 2, 1, 4, 7], dtype=int64)) | 4 | 2 |
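One caveat to the answer above worth spelling out: return_index gives first-occurrence positions ordered by the sorted unique values, not by order of appearance. A short follow-up sketch showing both, plus a pandas-only alternative:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"Name": ["Adam", "Joseph", "James", "James", "Kevin",
                             "Kevin", "Kevin", "Peter", "Peter"]})

values, first_idx = np.unique(df1["Name"], return_index=True)
print(first_idx)                                      # [0 2 1 4 7] -- follows the sorted values

# If the indices are wanted in order of first appearance, i.e. [0, 1, 2, 4, 7]:
print(np.sort(first_idx))                             # [0 1 2 4 7]
print(df1["Name"].drop_duplicates().index.tolist())   # [0, 1, 2, 4, 7]
```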
73,405,368 | 2022-8-18 | https://stackoverflow.com/questions/73405368/rust-function-as-slow-as-its-python-counterpart | I am trying to speed up Python programs using Rust, a language in which I am a total beginner. I wrote a function that counts the occurrences of each possible string of length n within a larger string. For instance, if the main string is "AAAAT" and n=3, the outcome would be a hashmap {"AAA":2,"AAT":1}. I use pyo3 to call the Rust function from Python. The code of the Rust function is: fn count_nmers(seq: &str, n: usize) -> PyResult<HashMap<&str,u64>> { let mut current_pos: usize = 0; let mut counts: HashMap<&str,u64> = HashMap::new(); while current_pos+n <= seq.len() { //print!("{}\n", &seq[current_pos..current_pos+n]); match counts.get(&seq[current_pos..current_pos+n]) { Some(repeats) => counts.insert(&seq[current_pos..current_pos+n],repeats+1), None => counts.insert(&seq[current_pos..current_pos+n],1) }; current_pos +=1; } //print!("{:?}",counts) Ok(counts) } When I use small values for n (n<10), Rust is about an order of magnitude faster than Python, but as the length of n increases, the gap tends to zero with both functions having the same speed by n=200. (see graph) Times to count for different n-mer lengths (Python black, rust red) I must be doing something wrong with the strings, but I can't find the mistake. The python code is: def nmer_freq_table(sequence,nmer_length=6): nmer_dict=dict() for nmer in seq_win(sequence,window_size=nmer_length): if str(nmer) in nmer_dict.keys(): nmer_dict[str(nmer)]=nmer_dict[str(nmer)]+1 else: nmer_dict[str(nmer)]=1 return nmer_dict def seq_win(seq,window_size=2): length=len(seq) i=0 while i+window_size <= length: yield seq[i:i+window_size] i+=1 | You are computing hash function multiple times, this may matter for large n values. Try using entry function instead of manual inserts: while current_pos+n <= seq.len() { let en = counts.entry(&seq[current_pos..current_pos+n]).or_default(); *en += 1; current_pos +=1; } Complete code here Next, make sure you are running --release compiled code like cargo run --release. And one more thing to take in mind is discussed here, Rust may use non-optimal hash function for your case which you can change. And finally, on large data, most of time is spent in HashMap/dict internals which are not a python, but compiled code. So don't expect it to scale well. | 5 | 2 |
73,400,563 | 2022-8-18 | https://stackoverflow.com/questions/73400563/pip-install-e-local-git-branch-for-development-environment | I'm trying to set up the development environment for modifying a Python library. Currently, I have a fork of the library, I cloned it from remote and installed it with pip install -e git+file:///work/projects/dev/git_project@branch#egg=git_project However, it seems that instead of creating a symbolic link with pip install -e to the directory where I cloned my package, pip would copy the package to src/git_project in my virtual environment, making it difficult to modify it from there and push changes to my fork at the same time. Am I missing out on something or pip install -e doesn't actually make a symlink when installing from VCS? I know that I can also do pip install -e git+git:// to install from my remote, but it makes it difficult to see real-time changes I make without pushing my code to this fork all the time. Is there a way I can clone a fork to my local development environment, pip install a specific branch from this cloned repo, and create a symlink link to the actual git_project folder so that I can modify the package there, push changes to my remote, and at the same time import the library anywhere in my environment to see real-time changes I make on my branch without committing anything yet? Thanks for any help! | pip install -e git+URL means "clone the repository from the URL locally and install". If you already have the repository cloned locally and want to simply install from it: just install without Git: cd /work/projects/dev/git_project git checkout branch pip install -e . | 3 | 6 |
73,396,611 | 2022-8-18 | https://stackoverflow.com/questions/73396611/how-can-you-include-path-parameters-in-nested-router-w-fastapi | How can I include one APIRouter inside another AND specify that some path param is a required prefix? For context: Let's say I have the concept of an organization and a user. A user only belongs to one organization. My web app could be structured as follows: βββ web_app βββ endpoints βββ __init__.py # produces main_router by including other routers inside each other βββ organization.py # contains endpoints relevant to the organization βββ user.py # contains endpoints relevant to the user βββ main.py # includes main_router in app Let's assume I want to achieve basic CRUD functionality for organizations and users. My endpoints could look something like this: For orgs: GET /api/latest/org/{org_id} POST /api/latest/org/{org_id} PUT /api/latest/org/{org_id} DELETE /api/latest/org/{org_id} For users: GET /api/latest/org/{org_id}/users/{user_id} POST /api/latest/org/{org_id}/users/{user_id} PUT /api/latest/org/{org_id}/users/{user_id} DELETE /api/latest/org/{org_id}/users/{user_id} Since users are nested under orgs, within user.py, I could write all of my endpoints like this: user_router = APIRouter() @user_router.get("/org/{org_id}/users/{user_id}") async def get_user(org_id, user_id): ... But that gets gross really quick. The user_router is completely disjointed from the org_router even though one should be nested inside the other. If I make a change to the org router, I now need to change every single user router endpoint. God forbid I have something nested under users.... So as per my question, I was hoping something like this would work: # user.py user_router = APIRouter(prefix="/org/{org_id}/users") @user_router.get("/{user_id}") async def get_user(org_id, user_id): # __init__.py org_router.include_router(user_router) main_router = APIRouter(prefix="/api/latest/") main_router.include_router(org_router) but that gives me the following error: AssertionError: Path params must be of one of the supported types. I don't get the error if I remove {org_id} from the prefix, so I know APIRouter(prefix="/org/{org_id}/users") is the problem. This is the only documentation we get from FastAPI on the matter: https://fastapi.tiangolo.com/tutorial/bigger-applications/#include-an-apirouter-in-another Is what I'm looking for even possible? This seems like an extremely common situation so I'm curious what other folks do. | I am not sure why you are getting this error, but the thing that you are requesting (using path parameters in APIRoute prefix) is absolutely possible. When you include the APIRoute to the app instance, it will evaluate all routes and appends them from your dedicated APIRoute to the main APIRoute. The latter is always present in the app = FastAPI() app object. The path parameters itself are resolved upon a request. Below is a fully working example, which will run as is: from fastapi import APIRouter, FastAPI app = FastAPI() @app.get("/") async def root(): return {"hello": "world"} router = APIRouter(prefix="/org/{org_id}") @router.get("/users/{user_id}") def get_user(org_id: int, user_id: int): return {"org": org_id, "user:": user_id} app.include_router(router) if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) We can check the outcome with curl: % curl localhost:8000/org/1/users/2 {"org":1,"user:":2}% | 4 | 7 |
73,397,515 | 2022-8-18 | https://stackoverflow.com/questions/73397515/in-python-how-can-i-check-if-an-array-contains-all-elements-of-another-array-li | There appear to be several ways to determine if a set is a subset of another set, but I haven't been able to find something concise that determines whether all elements of a list (or array), including duplicate values, appear in another list (or array). For example, for the hypothetical function contains_all(A, B) which checks whether all elements of B are contained in A, these are some expected outputs: contains_all([11, 4, 11, 6], [6, 11]) returns True (order doesn't matter) contains_all([11, 4, 11, 6], [11, 9]) returns False (no 9) contains_all([11, 4, 11, 6], [11, 11]) returns True contains_all([11, 4, 11, 6], [11, 11, 11]) returns False (only two 11's) contains_all([11, 4, 11, 6], [6, 6]) returns False (only one 6) contains_all([11, 4, 11, 6], [11, 4, 6, 11]) returns True contains_all([11, 4, 11, 6], [11, 4, 11, 6, 5]) returns False (no 5) The fourth and fifth examples above, specifically, are what I'm having trouble implementing. set(B).issubset(A) or list comprehensions cover the other cases, but not these, since sets don't have duplicate elements. Is there a concise way to do this? If not, what would be the best way to approach writing a function that does this? It seems something like this may be possible with collections.Counter objects or multisets, but I'm not sure how to go about it. | You are correct, collections.Counter is a good way to go. You just need to go over the B counter and check if the the value is smaller or equal. Do it in all() to check all the key-value pairs def contains_all(a, b): counter_a = Counter(a) counter_b = Counter(b) return all(v <= counter_a[k] for k, v in counter_b.items()) Edit user2357112 answer is much nicer, but apply to Pyton3.10 or newer. For older versions you can use this answer. | 3 | 2 |
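For context, the "much nicer" 3.10+ answer referred to in the edit above is presumably the Counter rich-comparison form; Counter gained the <, <=, >, >= operators (multiset inclusion) in Python 3.10:

```python
from collections import Counter

def contains_all(a, b):
    # Counter(b) <= Counter(a) means every element of b occurs in a
    # at least as many times as it occurs in b.
    return Counter(b) <= Counter(a)

print(contains_all([11, 4, 11, 6], [11, 11]))      # True
print(contains_all([11, 4, 11, 6], [11, 11, 11]))  # False
print(contains_all([11, 4, 11, 6], [6, 6]))        # False
```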
73,396,488 | 2022-8-18 | https://stackoverflow.com/questions/73396488/is-there-a-way-to-use-python-argparse-with-nargs-choices-and-default | My use case is multiple optional positional arguments, taken from a constrained set of choices, with a default value that is a list containing two of those choices. I can't change the interface, due to backwards compatibility issues. I also have to maintain compatibility with Python 3.4. Here is my code. You can see that I want my default to be a list of two values from the set of choices. parser = argparse.ArgumentParser() parser.add_argument('tests', nargs='*', choices=['a', 'b', 'c', 'd'], default=['a', 'd']) args = parser.parse_args() print(args.tests) All of this is correct: $ ./test.py a ['a'] $ ./test.py a d ['a', 'd'] $ ./test.py a e usage: test.py [-h] [{a,b,c,d} ...] test.py: error: argument tests: invalid choice: 'e' (choose from 'a', 'b', 'c', 'd') This is incorrect: $ ./test.py usage: test.py [-h] [{a,b,c,d} ...] test.py: error: argument tests: invalid choice: ['a', 'd'] (choose from 'a', 'b', 'c', 'd') I've found a LOT of similar questions but none that address this particular use case. The most promising suggestion I've found (in a different context) is to write a custom action and use that instead of choices: https://stackoverflow.com/a/8624107/7660197 That's not ideal. I'm hoping someone can point me to an option I've missed. Here's the workaround I plan to use if not: parser.add_argument('tests', nargs='*', choices=['a', 'b', 'c', 'd', 'default'], default='default') I'm allowed to add arguments as long as I maintain backwards compatibility. Thanks! Update: I ended up going with a custom action. I was resistant because this doesn't feel like a use case that should require custom anything. However, it seems like more or less the intended use case of subclassing argparse.Action, and it makes the intent very explicit and gives the cleanest user-facing result I've found. class TestsArgAction(argparse.Action): def __call__(self, parser, namespace, values, option_string=None): all_tests = ['a', 'b', 'c', 'd'] default_tests = ['a', 'd'] if not values: setattr(namespace, self.dest, default_tests) return # If no argument is specified, the default gets passed as a # string 'default' instead of as a list ['default']. Probably # a bug in argparse. The below gives us a list. if not isinstance(values, list): values = [values] tests = set(values) # If 'all', is found, replace it with the tests it represents. # For reasons of compatibility, 'all' does not actually include # one of the tests (let's call it 'e'). So we can't just do # tests = all_tests. try: tests.remove('all') tests.update(set(all_tests)) except KeyError: pass # Same for 'default' try: tests.remove('default') tests.update(set(default_tests)) except KeyError: pass setattr(namespace, self.dest, sorted(list(tests))) | The behavior noted as incorrect is caused by the fact that the raw default value ['a', 'd'] is not inside the specified choices (see: relevant code as found in Python 3.4.10; this check method is effectively unchanged as of Python 3.10.3). 
I will reproduce the code from the Python argparse.py source code: def _check_value(self, action, value): # converted value must be one of the choices (if specified) if action.choices is not None and value not in action.choices: args = {'value': value, 'choices': ', '.join(map(repr, action.choices))} msg = _('invalid choice: %(value)r (choose from %(choices)s)') raise ArgumentError(action, msg % args) When a default value is specified as a list, that entire value is passed to that _check_value method and thus it will fail (as any given list will not match any strings inside another list). You can actually verify that by setting a breakpoint with pdb in that method and trace through the values by stepping through each line, or alternatively test and verify the stated limitations with the following code: import argparse DEFAULT = ['a', 'd'] parser = argparse.ArgumentParser() parser.add_argument('tests', nargs='*', choices=['a', 'b', 'c', 'd', DEFAULT], default=DEFAULT) args = parser.parse_args() print(args.tests) Then run python test.py $ python test.py ['a', 'd'] This clearly passed because that very same DEFAULT value is present in the list of choices. However, calling -h or passing any unsupported value will result in: $ python test.py z usage: test.py [-h] [{a,b,c,d,['a', 'd']} ...] test.py: error: argument tests: invalid choice: 'z' (choose from 'a', 'b', 'c', 'd', ['a', 'd']) $ python test.py -h usage: test.py [-h] [{a,b,c,d,['a', 'd']} ...] positional arguments: {a,b,c,d,['a', 'd']} ... Which may or may not be ideal depending on use case as the output looks weird if not confusing. If this output is going to be user-facing it's probably not ideal, but if this is to maintain some internal system call emulation that won't leak out to users, the messages are probably not visible so this may be an acceptable workaround. Hence, I do not recommend this approach if the clarity of the choice message being generated is vital (which is >99% of typical use cases). However, given that custom action is considered not ideal, I will assume overriding the ArgumentParser class may be a possible choice, and given that _check_value has not changed between 3.4 and 3.10, this might represent the most minimum additional code to nip out the incompatible check (with the specified use case as per the question): class ArgumentParser(argparse.ArgumentParser): def _check_value(self, action, value): if value is action.default: return return super()._check_value(action, value) This would ensure that the default value be considered a valid choice (return None if the value is the action's default, otherwise return the default check) before using the default implementation that is unsuitable for the requirement as outlined in the question; do please note that this prevents deeper inspection of what that action.default provides being a valid one (if that's necessary, custom Action class is most certainly the way to go). Might as well show the example usage with the custom class (i.e. copy/pasted the original code, remove the argparse. to use the new custom class): parser = ArgumentParser() parser.add_argument('tests', nargs='*', choices=['a', 'b', 'c', 'd'], default=['a', 'd']) args = parser.parse_args() print(args.tests) Usage: $ python test.py ['a', 'd'] $ python test.py a z usage: test.py [-h] [{a,b,c,d} ...] test.py: error: argument tests: invalid choice: 'z' (choose from 'a', 'b', 'c', 'd') $ python test.py -h usage: test.py [-h] [{a,b,c,d} ...] 
positional arguments: {a,b,c,d} optional arguments: -h, --help show this help message and exit | 4 | 2 |
73,396,203 | 2022-8-18 | https://stackoverflow.com/questions/73396203/how-to-use-trained-pytorch-model-for-prediction | I have a pretrained pytorch model which is saved in .pth format. How can i use it for prediction on new dataset in a separate python file. I need a detailed guide. | To use a pretrained model you should load the state on a new instance of the architecture as explained in the docs/tutorials: Here models is imported beforehand: model = models.vgg16() model.load_state_dict(torch.load('model_weights.pth')) # This line uses .load() to read a .pth file and load the network weights on to the architecture. model.eval() # enabling the eval mode to test with new samples. If you are using a custom architecture you only need to change the first line. model = MyCustomModel() After enabling the eval mode, you can proceed as follows: Load your data into a Dataset instance and then in a DataLoader. Make your predictions with the data. Calculate metrics on the results. More about Dataset and DataLoader here. | 3 | 6 |
73,395,864 | 2022-8-17 | https://stackoverflow.com/questions/73395864/how-do-i-wait-when-all-threadpoolexecutor-threads-are-busy | My understanding of how a ThreadPoolExecutor works is that when I call #submit, tasks are assigned to threads until all available threads are busy, at which point the executor puts the tasks in a queue awaiting a thread becoming available. The behavior I want is to block when there is not a thread available, to wait until one becomes available and then only submit my task. The background is that my tasks are coming from a queue, and I only want to pull messages off my queue when there are threads available to work on these messages. In an ideal world, I'd be able to provide an option to #submit to tell it to block if a thread is not available, rather than putting them in a queue. However, that option does not exist. So what I'm looking at is something like: with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as executor: while True: wait_for_available_thread(executor) message = pull_from_queue() executor.submit(do_work_for_message, message) And I'm not sure of the cleanest implementation of wait_for_available_thread. Honestly, I'm surprised this isn't actually in concurrent.futures, as I would have thought the pattern of pulling from a queue and submitting to a thread pool executor would be relatively common. | One approach might be to keep track of your currently running threads via a set of Futures: active_threads = set() def pop_future(future): active_threads.pop(future) with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as executor: while True: while len(active_threads) >= CONCURRENCY: time.sleep(0.1) # or whatever message = pull_from_queue() future = executor.submit(do_work_for_message, message) active_threads.add(future) future.add_done_callback(pop_future) A more sophisticated approach might be to have the done_callback be the thing that triggers a queue pull, rather than polling and blocking, but then you need to fall back to polling the queue if the workers manage to get ahead of it. | 4 | 2 |
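A different way to get the blocking behaviour asked for in the entry above — not taken from the answer, just a common alternative pattern: gate submissions with a threading.BoundedSemaphore so the producer loop blocks instead of polling. The message source and worker below are stand-ins.

```python
import concurrent.futures
import threading
import time

CONCURRENCY = 4
slots = threading.BoundedSemaphore(CONCURRENCY)

def do_work_for_message(message):
    time.sleep(0.5)                  # stand-in for real work
    return message

def release_slot(_future):
    slots.release()                  # free a slot when the task finishes

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as executor:
    for message in range(10):        # stand-in for pulling messages off a queue
        slots.acquire()              # blocks here until a worker thread is free
        future = executor.submit(do_work_for_message, message)
        future.add_done_callback(release_slot)
```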
73,394,593 | 2022-8-17 | https://stackoverflow.com/questions/73394593/aws-lambda-function-returns-errormessage-errno-30-read-only-file-system | I get the following error { "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'", "errorType": "OSError", "stackTrace": [ " File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n", " File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n", " File \"<frozen importlib._bootstrap>\", line 702, in _load\n", " File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n", " File \"<frozen importlib._bootstrap_external>\", line 843, in exec_module\n", " File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n", " File \"/var/task/app.py\", line 3, in <module>\n nltk.download('stopwords')\n", " File \"/var/task/nltk/downloader.py\", line 777, in download\n for msg in self.incr_download(info_or_id, download_dir, force):\n", " File \"/var/task/nltk/downloader.py\", line 642, in incr_download\n yield from self._download_package(info, download_dir, force)\n", " File \"/var/task/nltk/downloader.py\", line 699, in _download_package\n os.makedirs(download_dir)\n", " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n", " File \"/var/lang/lib/python3.8/os.py\", line 223, in makedirs\n mkdir(name, mode)\n" ] } when testing my lambda function. I don't understand what this error is telling me to do about the docker image I am using, if that even is the correct route to explore. What should I do | AWS Lambda is not a generic docker runner. The docker containers you deploy to Lambda have to comply with the AWS Lambda runtime environment. The docker image you are using is trying to write to the path /home/sbx_user1051 apparently. On AWS Lambda the file system is always read-only except for the /tmp path. You will have to modify the code running in the docker image to prevent it from writing anywhere else but /tmp/. | 4 | 11 |
73,392,878 | 2022-8-17 | https://stackoverflow.com/questions/73392878/access-data-in-python3-associative-arrays | I'd like how to create and print associative arrays in python3... like in bash I do: declare -A array array["alfa",1]="text1" array["beta",1]="text2" array["alfa",2]="text3" array["beta",2]="text4" In bash I can do echo "${array["beta",1]}" to access data to print "text2". How can I define a similar array in python3 and how to access to data in a similar way? I tried some approaches, but none worked. Stuff like this: array = () array[1].append({ 'alfa': "text1", 'beta': "text2", }) But I can't access to data with print(array['beta', 1]). It is not printing "text2" :( | It looks like you want a dictionary with compound keys: adict = { ("alfa", 1): "text1", ("beta", 1): "text2", ("alfa", 2): "text3", ("beta", 2): "text4" } print(adict[("beta", 1)]) | 3 | 3 |
73,392,385 | 2022-8-17 | https://stackoverflow.com/questions/73392385/can-f-strings-auto-pad-to-the-next-even-number-of-digits-on-output | Based on this answer (among others) it seems like f-strings is [one of] the preferred ways to convert to hexadecimal representation. While one can specify an explicit target length, up to which to pad with leading zeroes, given a goal of an output with an even number of digits, and inputs with an arbitrary # of bits, I can imagine: pre-processing to determine the number of bits of the input, to feed an input-specific value in to the fstring, or post-processing a-la out = "0"+f"{val:x}" if len(f"{val:x}") % 2 else f"{val:02x}" (or even using .zfill()) The latter seems like it might be more efficient than the former - is there a built-in way to do this with fstrings, or a better alternative? Examples of input + expected output: [0x]1 -> [0x]01 [0x]22 -> [0x]22 [0x]333 -> [0x]0333 [0x]4444 -> [0x]4444 and so on. | Here's a postprocessing alternative that uses assignment expressions (Python 3.8+): print((len(hx:=f"{val:x}") % 2) * '0' + hx) If you still want a one-liner without assignment expressions you have to evaluate your f-string twice: print((len(f"{val:x}") % 2) * '0' + f"{val:x}") As a two-liner hx = f"{val:x}" print((len(hx) % 2) * '0' + hx) And one more version: print(f"{'0'[:len(hex(val))%2]}{val:x}") | 5 | 1 |
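One more variation, not from the answer above: compute the target width arithmetically from the bit length, so the value never has to be formatted twice.

```python
def even_hex(val: int) -> str:
    # Pad to whole bytes: ceil(bit_length / 8) bytes, two hex digits per byte.
    width = max(1, (val.bit_length() + 7) // 8) * 2
    return f"{val:0{width}x}"

for val in (0x1, 0x22, 0x333, 0x4444, 0):
    print(even_hex(val))    # 01, 22, 0333, 4444, 00
```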
73,388,357 | 2022-8-17 | https://stackoverflow.com/questions/73388357/poetry-lock-empty-hashes | I am doing poetry lock Then I open the poetry.lock file and see that the metadata.files block does not contain hashes: [metadata.files] aiohttp = [] aiosignal = [] apscheduler = [] ... Before, it wasn't like that. What could be the reasons for empty hashes? | You are probably running into this issue https://github.com/python-poetry/poetry/issues/5970 Just upgrade to poetry 1.1.14 or the prereleases for the 1.2 series. | 5 | 3 |
73,385,797 | 2022-8-17 | https://stackoverflow.com/questions/73385797/can-i-naively-check-if-a-b-c-d | I was doing leetcode when I had to do some arithmetic with rational numbers (both numerator and denominator integers). I needed to count slopes in a list. In python collections.Counter( [ x/y if y != 0 else "inf" for (x,y) in points ] ) did the job, and I passed all the tests with it. ((edit: they've pointed out in the comments that in that exercise numbers were much smaller, not general 32 bit integers)) I wonder if this is correct, that is, python correctly recognizes if a/b == c/d as rationals, for a,b,c,d 32 bit integers. I am also interested the case for c++, and any additional facts that may be useful (footguns, best practices, theory behind it if not too long etc). Also this question seems frequent and useful, but I don't really find anything about it (give me the duplicates!), maybe I am missing some important keywords? | It's not safe, and I've seen at least one LeetCode problem where you'd fail with that (maybe Max Points on a Line). Example: a = 94911150 b = 94911151 c = 94911151 d = 94911152 print(a/b == c/d) print(a/b) print(c/d) Both a/b and c/d are the same float value even though the slopes actually differ (Try it online!): True 0.9999999894638303 0.9999999894638303 You could use fractions.Fraction(x, y) or the tuple (x//g, y//g) after g = math.gcd(d, y) ( if I remember correctly, this is more lightweight/efficient than the Fraction class). | 3 | 5 |
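Tying the two suggestions at the end of the answer above back to the failing example: both the Fraction comparison and the gcd-normalised tuple distinguish the slopes where the floats collide.

```python
from fractions import Fraction
from math import gcd

a, b = 94911150, 94911151
c, d = 94911151, 94911152

print(a / b == c / d)                      # True  -- the two floats round to the same value
print(Fraction(a, b) == Fraction(c, d))    # False -- exact rational comparison

def norm(x, y):
    g = gcd(x, y)
    return (x // g, y // g)                # normalised pair, usable as a dict/Counter key

print(norm(a, b) == norm(c, d))            # False
```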
73,383,458 | 2022-8-17 | https://stackoverflow.com/questions/73383458/login-authentication-failed-with-gmail-smtp-updated | When trying to login into a Gmail account using SMTP, this error message occurs: SMTPAuthenticationError(code, resp) smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. Code causing the error: import smtplib server = smtplib.SMTP("smtp.gmail.com", 587) server.starttls() server.login("[email protected]", "your_password") message = "TEST" server.sendmail("[email protected]", "[email protected]", message) server.quit() | Google has disabled the ability to enable less secure apps as of May 2022. Because of this, the previous solution of enabling less secure apps is no longer valid. Steps: Go into your sending email address and make your way to the settings. Find two-step authentication and enable it. Under two-step authentication there should be a tab labeled App passwords. Click on it then select mail as the app and your device of choice Use the password generated from the app password as the password for your Gmail account. Credits to: Link to source | 4 | 12 |
73,382,163 | 2022-8-17 | https://stackoverflow.com/questions/73382163/filtering-out-rows-based-on-other-rows-using-pandas | I have a dataframe that looks like this: dict = {'companyId': {0: 198236, 1: 198236, 2: 900814, 3: 153421, 4: 153421, 5: 337815}, 'region': {0: 'Europe', 1: 'Europe', 2: 'Asia-Pacific', 3: 'North America', 4: 'North America', 5:'Africa'}, 'value': {0: 560, 1: 771, 2: 964, 3: 217, 4: 433, 5: 680}, 'type': {0: 'actual', 1: 'forecast', 2: 'actual', 3: 'forecast', 4: 'actual', 5: 'forecast'}} df = pd.DataFrame(dict) companyId region value type 0 198236 Europe 560 actual 1 198236 Europe 771 forecast 2 900814 Asia-Pacific 964 actual 3 153421 North America 217 forecast 4 153421 North America 433 actual 5 337815 Africa 680 forecast I can't seem to figure out a way to filter out certain rows based on the following condition: If there are two entries under the same companyId, as is the case for 198236 and 153421, I want to keep only the entry where type is actual. If there is only one entry under a companyId, as is the case for 337815 and 900814, I want to keep that row, irrespective of the value in column type. Does anyone have an idea how to go about this? | You can use a groupby and transform to create boolean indexing: #Your condition i.e. retain the rows which are not duplicated and those # which are duplicated but only type==actual. Lets express that as a lambda. to_filter = lambda x: (len(x) == 1) | ((len(x) > 1) & (x == 'actual')) #then create a boolean indexing mask as below m = df.groupby('companyId')['type'].transform(to_filter) #then filter your df with that m: df[m]: companyId region value type 0 198236 Europe 560 actual 2 900814 Asia-Pacific 964 actual 4 153421 North America 433 actual 5 337815 Africa 680 forecast | 3 | 2 |
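For comparison with the groupby approach above, the same filter can be written as a single boolean mask with duplicated; the two conditions are logically equivalent (keep a row if its companyId occurs only once, or if the row's type is 'actual'):

```python
import pandas as pd

df = pd.DataFrame({
    "companyId": [198236, 198236, 900814, 153421, 153421, 337815],
    "region": ["Europe", "Europe", "Asia-Pacific", "North America",
               "North America", "Africa"],
    "value": [560, 771, 964, 217, 433, 680],
    "type": ["actual", "forecast", "actual", "forecast", "actual", "forecast"],
})

# Keep rows whose companyId is not duplicated at all, or which are 'actual'.
m = ~df.duplicated("companyId", keep=False) | df["type"].eq("actual")
print(df[m])
```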
73,381,897 | 2022-8-17 | https://stackoverflow.com/questions/73381897/how-can-%c4%b1-use-geoadd-in-python-redis | I want the store geospatial information in Redis. I am executing the following code from redis import Redis redis_con = Redis(host="localhost", port=6379) redis_con.geoadd("Sicily", 13.361389, 38.115556, "Palermo") But Δ± got error like that raise DataError("GEOADD allows either 'nx' or 'xx', not both") redis.exceptions.DataError: GEOADD allows either 'nx' or 'xx', not both | This will work for you: from redis import Redis redis_con = Redis(host="localhost", port=6379) coords = (13.361389, 38.115556, "Palermo") redis_con.geoadd("Sicily", coords) The signature for geoadd is: geoadd(name, values, nx=False, xx=False, ch=False) name: Union[bytes, str, memoryview] values: Sequence[Union[bytes, memoryview, str, int, float]] nx (bool, default: False) xx (bool, default: False) ch (bool, default: False) You need to specify your coordinates as a sequence like a list or tuple, because right now you're specifying arguments so that the method thinks you've specified nx, xx, and ch. | 3 | 7 |
73,365,780 | 2022-8-15 | https://stackoverflow.com/questions/73365780/why-is-not-recommended-to-install-poetry-with-homebrew | Poetry official documentation strictly recommends sticking with the official installer. However, homebrew has poetry formulae. brew install poetry Usually, I like to keep everything I can in homebrew to manage installations easily. What is the drawback and risks of installing poetry using homebrew instead of the recommended installation script? | The drawback is that poetry will be unable to upgrade itself (I've no idea what'd actually happen), and you'll not be able to install specific poetry versions. Homebrew installed poetry will probably also use the Homebrew-installed Python environment, etc, instead of having its own isolated venv to execute from. If you use homebrew to install poetry, don't try to manage that installation any way outside of homebrew. Otherwise, it's probably fine. Other misc drawbacks include: Can't control specific Python version used to run Poetry (want to get the latest core Python speedups!) Uncertain support and management of poetry plugins (part of "managing the installation") Can't run multiple poetry versions side-by-side edit My personal recommendation is to install pipx using homebrew, and then install poetry via pipx (as Poetry themselves now also recommend): brew install pipx pipx ensurepath pipx install poetry # latest version pipx install poetry==1.2.2 [email protected] pipx install poetry==1.3.2 [email protected] pipx install poetry==1.6.1 [email protected] so you'll get something like: β― pipx list venvs are in /Users/redacted/.local/pipx/venvs apps are exposed on your $PATH at /Users/redacted/Code/dotfiles/bin package hatch 1.7.0, installed using Python 3.11.5 - hatch package poetry 1.2.2, installed using Python 3.11.5 - poetry package poetry 1.3.2 ([email protected]), installed using Python 3.11.5 - [email protected] package poetry 1.6.1 ([email protected]), installed using Python 3.11.5 - [email protected] β― poetry --version Poetry (version 1.2.2) β― [email protected] --version Poetry (version 1.3.2) e.g. via: brew install pipx pipx ensurepath pipx install poetry # latest version pipx install poetry==1.2.2 [email protected] pipx install poetry==1.3.2 [email protected] pipx install poetry==1.6.1 [email protected] Poetry is still under very active development, and its API is not very stable. This gives me full control of what version to use/when, e.g. across different projects. | 32 | 29 |
73,340,211 | 2022-8-12 | https://stackoverflow.com/questions/73340211/getting-no-develop-install-with-pip-install-e-unless-i-delete-pyproject-to | I have the following situation that pip install -e . does not build the develop version unless I delete the pyproject.toml which does not contain packaging information, but black configuration. Does somebody know what to do in order to get the develop build. my pyproject.toml looks like this: # Source https://github.com/psf/black#configuration-format [tool.black] line-length = 100 target-version = ['py38'] exclude = ''' ''' setup.py from setuptools import find_namespace_packages from setuptools import setup setup( name="webservice", packages=find_packages(), version="0.1.0", description="description", author="Author", license="License", ) running pip install -e . with these two files... (webservice_tool)pk@LAP1:~/webservice_tool$ pip install -e . Obtaining file:///home/pk/webservice_tool Installing build dependencies ... done Checking if build backend supports build_editable ... done Getting requirements to build editable ... done Preparing editable metadata (pyproject.toml) ... done Building wheels for collected packages: webservice Building editable for webservice (pyproject.toml) ... done Created wheel for webservice: filename=webservice-0.1.0-0.editable-py3-none-any.whl size=4070 sha256=dcb7c034ba437503d1059fe9370ccafbda144cd19f3e5d92340a63a7da396422 Stored in directory: /tmp/pip-ephem-wheel-cache-6iqiqbob/wheels/e6/b5/ba/40d8c3d66df94ee2ae46e181034e0c3c47132784db53284d0b Successfully built webservice Installing collected packages: webservice Successfully installed webservice-0.1.0 I delete pyproject.toml and only then Running setup.py develop shows up. (webservice_tool) pk@LAP1:~/webservice_tool$ pip install -e . Obtaining file:///home/pk/webservice_tool Preparing metadata (setup.py) ... 
done Installing collected packages: webservice Attempting uninstall: webservice Found existing installation: webservice 0.1.0 Uninstalling webservice-0.1.0: Successfully uninstalled webservice-0.1.0 Running setup.py develop for webservice Successfully installed webservice-0.1.0 versions of some selected packages from my conda env, running within wsl2 packaging 21.3 pyhd3eb1b0_0 pip 22.1.2 py38h06a4308_0 python 3.8.13 h12debd9_0 setuptools 61.2.0 py38h06a4308_0 folder structure |-- data_utils | |-- clean_cache.py | `-- advanced_utils.py |-- deployment | |-- base_deployment | | |-- auth-proxy.yaml | | |-- kustomization.yaml | | |-- webapi.yaml | | `-- webui.yaml | `-- mysql_from_helm | |-- mysql-from-helm.yaml | `-- mysql-kustomization.yaml |-- docker-compose.yml |-- Dockerfile |-- environment.yml |-- live_api | |-- definitions.json | |-- __init__.py | `-- live_api.py |-- params.py |-- pyproject.toml |-- setup.py |-- shared_helpers | |-- data_cleaning.py | |-- handle_time.py | |-- __init__.py | |-- plot_timesequence.py | |-- read_samples.py | `-- save_samples.py |-- setup.py |-- util.py |-- webtool | |-- clean_data | | |-- clean_data.py | | `-- __init__.py | |-- evaluation | | |-- draw_figures.py | | |-- __init__.py | | `-- webtool_metrics.py | |-- __init__.py | |-- preprocess | | |-- __init__.py | | `-- preprocess.py | |-- ui | | |-- __init__.py | | `-- create_ui.py | `-- util | |-- data_input.py | |-- data_redefinitions.py | `-- __init__.py |-- webtool.egg-info | |-- dependency_links.txt | |-- entry_points.txt | |-- PKG-INFO | |-- SOURCES.txt | `-- top_level.txt `-- webtool_tests |-- clean_data | `-- test_clean_data.py |-- evaluation | `-- test_draw_figures.py |-- preprocess | `-- test_preprocess.py `-- util |-- test_data_input.py `-- test_data_redefinitions.py | These are both development installs. The difference in the pip output here is because the presence (or absence) of a pyproject.toml file causes pip to use the build backend hooks (or not). From PEP 517: If the pyproject.toml file is absent ... the source tree is not using this specification, and tools should revert to the legacy behaviour of running setup.py You can also control that with a pip command line option: $ pip install --help | grep pep --use-pep517 Use PEP 517 for building source distributions (use --no-use-pep517 to force legacy behaviour). With a PEP 517 style build, pip is setting up a dedicated environment and freshly installing setuptools for the purposes of build and/or packaging behind the scenes - see "Installing build dependencies ... done" in the log. Without it, python setup.py develop is invoked directly where it is just assumed that an adequate setuptools version is already installed within the Python runtime which was used to execute the setup.py file. The point here is that using a PEP 517 style build system allows the project to specify the setuptools version it requires (or, indeed, to use some other build system entirely). The end result will be the same - a .pth path configuration file placed in site-packages will expose the source directory as a development installation. Since util.py is not contained in any package, you'll also need to list this module alongside find_packages() in the setup call for it to be picked up in the development installation (as opposed to just importing from the current working directory): # in setup.py from setuptools import setup setup( name="webservice", version="0.1.0", packages=find_packages(), py_modules=["util"], # <--- here ... ) | 6 | 3 |
73,338,942 | 2022-8-12 | https://stackoverflow.com/questions/73338942/how-to-install-a-new-font-in-altair-and-specifying-it-in-alt-titleparams | I am wondering if it's possible to install fonts to use in altair to use in alt.TitleParams. For this example, without font specified, I get a default font and size. import altair as alt import pandas as pd source = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'], 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52] }) alt.Chart(source, title = alt.TitleParams(text = 'Example Chart')).mark_bar().encode( x='a', y='b' ) Changing the font size I get the bigger letters: alt.Chart(source, title = alt.TitleParams(text = 'Example Chart', fontSize=24)).mark_bar().encode( x='a', y='b' ) But, when I add a font style, the size doesn't work anymore: alt.Chart(source, title = alt.TitleParams(text = 'Example Chart' , fontSize=24 , fontStyle = 'Arial')).mark_bar().encode( x='a', y='b' ) And the text looks always the same regardless of what font is specified: alt.Chart(source, title = alt.TitleParams(text = 'Example Chart' , fontSize=24 , fontStyle = 'Calibri')).mark_bar().encode( x='a', y='b' ) Same thing: I would like to know how I can display the correct font, not only with standard fonts, but with non-standard ones and how to install them. | fontStyle refers to the style of the font, such as "bold", "italic", etc. If you want to specify a font by name, use the font parameter: alt.Chart( source, title=alt.Title( # Since altair 5 you can use just Title instead of TitleParams text='Example Chart', fontSize=24, fontStyle='italic', font='Times' ) ).mark_bar().encode( x='a', y='b' ) | 4 | 5 |
73,316,102 | 2022-8-11 | https://stackoverflow.com/questions/73316102/fastai-fastcore-patch-decorator-vs-simple-monkey-patching | I'm trying to understand the value-added of using fastai's fastcore.basics.patch_to decorator. Here's the fastcore way: from fastcore.basics import patch_to class _T3(int): pass @patch_to(_T3) def func1(self, a): return self + a And here's the simple monkey-patching approach: class simple_T3(int): pass def func1(self, a): return self + a simple_T3.func1 = func1 Inspecting the two classes does not reveal any differences. I understand that simple monkey-patching might cause problems in more complex cases, so it would be great to know what such cases are? In other words, what's the value-added of fastcore.basics.patch_to? | TL;DR More informative debugging messages, better IDE support. Answer patch and patch_to are decorators in the fastcore basics module that are helpful to make the monkey_patched method to look more like as if it was a method originally placed inside the Class, the classical way (pun intended). If you create a function outside a class and then monkey-patch it, the outsider method typically has different attributes, such as its name, module, and documentation, compared to the original function. This can be confusing and unhelpful when debugging or working with the "outsider" function. Source: Official documentation: https://github.com/fastai/fastcore/blob/master/nbs/01_basics.ipynb Usage suggestion Consider using patch instead of patch_to, because this way you can add type annotations. from fastcore.basics import patch class _T3(int): pass @patch def func1(self: _T3, a): return self + a What if I don't want to use the library? Credits: Kai Lichtenberg fastcore itself is extremely low weight: The only external library used is numpy (and dataclasses if your python is < 3.7). But if you really want to not use it, here's an implementation with only two built-in dependencies: import functools from copy import copy from types import FunctionType def copy_func(f): "Copy a non-builtin function (NB `copy.copy` does not work for this)" if not isinstance(f,FunctionType): return copy(f) fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__) fn.__dict__.update(f.__dict__) return fn def patch_to(cls, as_prop=False): "Decorator: add `f` to `cls`" if not isinstance(cls, (tuple,list)): cls=(cls,) def _inner(f): for c_ in cls: nf = copy_func(f) # `functools.update_wrapper` when passing patched function to `Pipeline`, so we do it manually for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o)) nf.__qualname__ = f"{c_.__name__}.{f.__name__}" setattr(c_, f.__name__, property(nf) if as_prop else nf) return f return _inner def patch(f): "Decorator: add `f` to the first parameter's class (based on f's type annotations)" cls = next(iter(f.__annotations__.values())) return patch_to(cls)(f) class MyClass(): def __init__(self): pass @patch def new_fun(self:MyClass): print("I'm a patched function!") MyInstance = MyClass() MyInstance.new_fun() "I'm a patched function!" | 7 | 5 |
73,351,280 | 2022-8-14 | https://stackoverflow.com/questions/73351280/what-is-the-difference-between-threadpoolexecutor-and-threadpool | Is there a difference between ThreadPoolExecutor from concurrent.futures and ThreadPool from multiprocessing.dummy? This article suggests that ThreadPool queries the "threads" (task) to the different "threads" of the CPU. Does concurrent.futures do the same or will the "threads" (tasks) query to a single "thread" of a CPU? | The multiprocessing.dummy.ThreadPool is a copy of the multiprocessing.Pool API which uses threads rather than processes, which leads to some weirdness since threads and processes are very different, including returning an AsyncResult type which only it understands. The concurrent.futures.ThreadPoolExecutor is a subclass of concurrent.futures.Executor, which is a newer, simpler API developed with both processes and threads in mind, and so returns a common concurrent.futures.Future. Broadly speaking, both do the same thing, but concurrent.futures.ThreadPoolExecutor does it better. References: From the multiprocessing.dummy documentation: multiprocessing.dummy replicates the API of multiprocessing but is no more than a wrapper around the threading module. In particular, the Pool function provided by multiprocessing.dummy returns an instance of ThreadPool, which is a subclass of Pool that supports all the same method calls but uses a pool of worker threads rather than worker processes. From the multiprocessing.dummy.ThreadPool documentation A ThreadPool shares the same interface as Pool, which is designed around a pool of processes and predates the introduction of the concurrent.futures module. As such, it inherits some operations that don't make sense for a pool backed by threads, and it has its own type for representing the status of asynchronous jobs, AsyncResult, that is not understood by any other libraries. Users should generally prefer to use concurrent.futures.ThreadPoolExecutor, which has a simpler interface that was designed around threads from the start, and which returns concurrent.futures.Future instances that are compatible with many other libraries, including asyncio. | 9 | 11 |
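To make the API difference described in that answer concrete, here is a minimal sketch (not from the answer itself; the square function and pool sizes are made up for illustration) that submits the same work to both pools — the results are identical, but one hands back an AsyncResult and the other a concurrent.futures.Future:

from concurrent.futures import ThreadPoolExecutor
from multiprocessing.dummy import ThreadPool

def square(x):
    return x * x

# Pool-style API: apply_async returns a multiprocessing AsyncResult
with ThreadPool(4) as pool:
    async_result = pool.apply_async(square, (3,))
    print(async_result.get())      # 9

# Executor API: submit returns a concurrent.futures.Future
with ThreadPoolExecutor(max_workers=4) as executor:
    future = executor.submit(square, 3)
    print(future.result())         # 9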
73,315,599 | 2022-8-11 | https://stackoverflow.com/questions/73315599/skipping-analyzing-feedparser-util-module-is-installed-but-missing-library-s | How do I fix this error? It seems feedparser does not support mypy typings? I could not find a typeshed implementation for feedparser UPDATE 1 I see an option called ignore_missing_imports that I can add to pyproject.toml. Isn't it a bad idea to do this? | I see an option called ignore_missing_imports that I can add to pyproject.toml. Isn't it a bad idea to do this? Yes, it usually is a bad idea to enable this on all modules. Consider using a more constrained approach: You can ignore missing imports only for this specific package by adding a [tools.mypy.override] section in pyproject.toml. This way you don't need enable the flag on everything. [[tool.mypy.overrides]] module = "feedparser.*" ignore_missing_imports = true The work for supporting typing in feedparser has been merged to the develop branch and you should be able to remove this workaround when it is released. | 11 | 14 |
73,361,664 | 2022-8-15 | https://stackoverflow.com/questions/73361664/asyncio-get-event-loop-deprecationwarning-there-is-no-current-event-loop | I'm building an SMTP server with aiosmtpd and used the examples as a base to build from. Below is the code snippet for the entry point to the program. if __name__ == '__main__': loop = asyncio.get_event_loop() loop.create_task(amain(loop=loop)) try: loop.run_forever() except KeyboardInterrupt: pass When I run the program, I get the following warning: server.py:61: DeprecationWarning: There is no current event loop loop = asyncio.get_event_loop() What's the correct way to implement this? | Your code will run on Python3.10 but as of 3.11 it will be an error to call asyncio.get_event_loop when there is no running loop in the current thread. Since you need loop as an argument to amain, apparently, you must explicitly create and set it. It is better to launch your main task with asyncio.run than loop.run_forever, unless you have a specific reason for doing it that way. [But see below] Try this: if __name__ == '__main__': loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) try: asyncio.run(amain(loop=loop)) except KeyboardInterrupt: pass Added April 15, 2023: There is a difference between calling asyncio.run(), which I have done here, and calling loop.run_forever() (as in the original question) or loop.run_until_complete(). When I wrote this answer I did not realize that asyncio.run() always creates a new event loop. Therefore in my code above, the variable loop that is passed to amain will not become the "running loop." So my code avoids the DeprecationWarning/RuntimeException, but it doesn't pass a useful loop into amain. To correct that, replace the line asyncio.run(amain(loop=loop)) with loop.run_until_complete(amain(loop=loop)) It would be best to modify amain to obtain the running event loop inside the function instead of passing it in. Then you could launch the program with asyncio.run. But if amain cannot be changed that won't be possible. Note that run_until_complete, unlike asyncio.run, does not clean up async generators. This is documented in the standard docs. | 53 | 66 |
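A minimal sketch of the last suggestion in the answer above — letting amain pick up the running loop itself so the whole program can be launched with asyncio.run — assuming amain can be edited; its body here is a placeholder, and the real aiosmtpd setup from the question would go where the comment is:

import asyncio

async def amain():
    loop = asyncio.get_running_loop()   # always safe inside a running coroutine
    # ... create and start the aiosmtpd controller/handlers here, using `loop` if needed ...
    await asyncio.Event().wait()        # park forever until cancelled (e.g. by Ctrl+C)

if __name__ == '__main__':
    try:
        asyncio.run(amain())
    except KeyboardInterrupt:
        pass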
73,329,011 | 2022-8-12 | https://stackoverflow.com/questions/73329011/use-pip-install-psutil-on-docker-image-python3-9-13-alpine3-16-error-linux-e | I tried to install python module psutil in docker python:3.9.13-alpine3.16 But it reported the following mistake: Building wheels for collected packages: psutil Building wheel for psutil (pyproject.toml) ... error error: subprocess-exited-with-error Γ Building wheel for psutil (pyproject.toml) did not run successfully. β exit code: 1 β°β> [51 lines of output] /tmp/tmpb62wij4i.c:1:10: fatal error: linux/ethtool.h: No such file or directory 1 | #include <linux/ethtool.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-39 creating build/lib.linux-x86_64-cpython-39/psutil copying psutil/__init__.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_common.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_compat.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_psaix.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_psbsd.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_pslinux.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_psosx.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_psposix.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_pssunos.py -> build/lib.linux-x86_64-cpython-39/psutil copying psutil/_pswindows.py -> build/lib.linux-x86_64-cpython-39/psutil creating build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/__init__.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/__main__.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/foo.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/runner.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_aix.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_bsd.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_connections.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_contracts.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_linux.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_memleaks.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_misc.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_osx.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_posix.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_process.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_sunos.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_system.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_testutils.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_unicode.py -> build/lib.linux-x86_64-cpython-39/psutil/tests copying psutil/tests/test_windows.py -> build/lib.linux-x86_64-cpython-39/psutil/tests running build_ext building 'psutil._psutil_linux' extension creating build/temp.linux-x86_64-cpython-39 creating build/temp.linux-x86_64-cpython-39/psutil gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=591 -DPSUTIL_LINUX=1 
-DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/include/python3.9 -c psutil/_psutil_common.c -o build/temp.linux-x86_64-cpython-39/psutil/_psutil_common.o gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPSUTIL_POSIX=1 -DPSUTIL_SIZEOF_PID_T=4 -DPSUTIL_VERSION=591 -DPSUTIL_LINUX=1 -DPSUTIL_ETHTOOL_MISSING_TYPES=1 -I/usr/local/include/python3.9 -c psutil/_psutil_linux.c -o build/temp.linux-x86_64-cpython-39/psutil/_psutil_linux.o psutil/_psutil_linux.c:19:10: fatal error: linux/version.h: No such file or directory 19 | #include <linux/version.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for psutil Failed to build psutil ERROR: Could not build wheels for psutil, which is required to install pyproject.toml-based projects Recurrence process: docker pull python:3.9.13-alpine3.16 docker run --name alpine-python3 -it [image-id] /bin/sh (In container)# apk add build-base (In container)# pip install psutil Key Error: /tmp/tmpb62wij4i.c:1:10: fatal error: linux/ethtool.h: No such file or directory 1 | #include <linux/ethtool.h> | ^~~~~~~~~~~~~~~~~ psutil/_psutil_linux.c:19:10: fatal error: linux/version.h: No such file or directory 19 | #include <linux/version.h> | ^~~~~~~~~~~~~~~~~ compilation terminated. error: command '/usr/bin/gcc' failed with exit code 1 So, what do I need to pre-install in advance to fix? Thanks! | You need to add the linux-headers package. apk add build-base linux-headers python -m pip install psutil Update 0 Great, you need the linux-headers package. But, why do you need the linux-headers package? This package provides data structures and the signatures for functions in the kernel source. This information is required in order to compile modules which call these functions. We don't need the actual source code, just the specification, the interface, if you will. This is the information psutil needs in order to use the host OS's network utility, for instance. Why separate this from the implementation? Today, for personal computers, it's largely unnecessary. But, this was very useful back when storage was much smaller and much more expensive. No need to keep the entire kernel source around when all you're doing is building a module which calls some function in ethtool. Generally, any time you're building a module which interacts with the linux OS, you're going to need linux-headers installed. | 14 | 26 |
73,334,557 | 2022-8-12 | https://stackoverflow.com/questions/73334557/how-do-you-get-tkinter-to-work-with-asyncio | How do you get tkinter to work with asyncio? My studies suggest this general question does resolve into the specific problem of getting tkinter to await a coroutine function. Context If tkinter's event loop is blocked the loop will freeze until the blocking function returns. If the event loop also happens to be running a GUI that will freeze as well. The traditional solution to this problem is to move any blocking code into a thread. The new asyncio module is able to schedule threaded calls using the coroutine function asyncio.to_thread(coro). I gather this avoids the difficulties of writing correct threaded code. Baseline: blocked.py As a starting point I wrote a baseline program (See code below). It creates a tkinter event loop which attempts to destroy itself and end the program after 2000ms. That attempt is thwarted by a blocking function which runs for 4s. The program output is: 08:51:57: Program started. 08:51:58: blocking_func started. 08:52:02: blocking_func completed. 08:52:02: Tk event loop terminated. 08:52:02: Program ended. Process finished with exit code 0 1st try: async_blocked.py The blocking code has been refactored as a coroutine function so there are two event loops - tkinter's and asyncio's. The function blocking_io_handler is scheduled onto tkinter's event loop which runs it successfully. The coroutine function blocking_func is scheduled onto asyncio's loop where it starts successfully. The problem is it doesn't start until after tkinter's event loop has terminated. Asyncio's loop was available throughout the execution of the coroutine function main so it was available when tk_root.mainloop() was executed. In spite of this asyncio was helpless because control was not yielded by an await statement during the execution of tk_root.mainloop. It had to wait for the await asyncio.sleep(3) statement which ran later and, by then, tkinter had stopped running. At that time the await expression returns control to the async loop for three seconds β enough to start the four second blocking_func but not enough for it to finish. 08:38:22: Program started. 08:38:22: blocking_io_handler started. 08:38:22: blocking_io_handler completed. 08:38:24: Tk event loop terminated. 08:38:24: blocking_func started. 08:38:27: Program ended. Process finished with exit code 0 2nd try: asyncth_blocked.py This code replaces the function asyncio.create_task with the coroutine function asyncio.to_thread. This fails with a runtime warning: 07:26:46: Program started. 07:26:47: blocking_io_handler started. 07:26:47: blocking_io_handler completed. RuntimeWarning: coroutine 'to_thread' was never awaited asyncio.to_thread(blocking_func) RuntimeWarning: Enable tracemalloc to get the object allocation traceback 07:26:49: Tk event loop terminated. 07:26:49: Program ended. > Process finished with exit code 0 3rd try: asyncth_blocked_2.py asyncio.to_thread must be awaited because it is a coroutine function and not a regular function: await asyncio.to_thread(blocking_func). Since the await keyword is a syntax error inside a regular function, def blocking_io_handler has to be changed into a coroutine function: async def blocking_io_handler. These changes are shown in asyncth_blocked_2.py which produces this output: 07:52:29: Program started. 
RuntimeWarning: coroutine 'blocking_io_handler' was never awaited func(*args) RuntimeWarning: Enable tracemalloc to get the object allocation traceback 07:52:31: Tk event loop terminated. 07:52:31: Program ended. Process finished with exit code 0 Conclusion For tkinter to work with asyncio the scheduled function call tk_root.after(0, blocking_io_handler) has to be somehow turned into a scheduled coroutine function call. This is the only way the asycio loop will have a chance to run scheduled async tasks. Is it possible? Code """blocked.py""" import time import tkinter as tk def timestamped_msg(msg: str): print(f"{time.strftime('%X')}: {msg}") def blocking_func(): timestamped_msg('blocking_func started.') time.sleep(4) timestamped_msg('blocking_func completed.') def main(): timestamped_msg('Program started.') tk_root = tk.Tk() tk_root.after(0, blocking_func) tk_root.after(2000, tk_root.destroy) tk_root.mainloop() timestamped_msg('Tk event loop terminated.') timestamped_msg('Program ended.') if __name__ == '__main__': main() """async_blocked.py""" import asyncio import time import tkinter as tk def timestamped_msg(msg: str): print(f"{time.strftime('%X')}: {msg}") async def blocking_func(): timestamped_msg('blocking_func started.') await asyncio.sleep(4) timestamped_msg('blocking_func completed.') def blocking_io_handler(): timestamped_msg('blocking_io_handler started.') asyncio.create_task(blocking_func()) timestamped_msg('blocking_io_handler completed.') async def main(): timestamped_msg('Program started.') tk_root = tk.Tk() tk_root.after(0, blocking_io_handler) tk_root.after(2000, tk_root.destroy) tk_root.mainloop() timestamped_msg('Tk event loop terminated.') await asyncio.sleep(3) timestamped_msg('Program ended.') if __name__ == '__main__': asyncio.run(main()) """asyncth_blocked.py""" import asyncio import time import tkinter as tk def timestamped_msg(msg: str): print(f"{time.strftime('%X')}: {msg}") async def blocking_func(): timestamped_msg('blocking_func started.') await asyncio.sleep(4) timestamped_msg('blocking_func completed.') def blocking_io_handler(): timestamped_msg('blocking_io_handler started.') asyncio.to_thread(blocking_func) timestamped_msg('blocking_io_handler completed.') async def main(): timestamped_msg('Program started.') tk_root = tk.Tk() tk_root.after(0, blocking_io_handler) tk_root.after(2000, tk_root.destroy) tk_root.mainloop() timestamped_msg('Tk event loop terminated.') timestamped_msg('Program ended.') if __name__ == '__main__': asyncio.run(main()) """asyncth_blocked_2.py""" import asyncio import time import tkinter as tk def timestamped_msg(msg: str): print(f"{time.strftime('%X')}: {msg}") async def blocking_func(): timestamped_msg('blocking_func started.') await asyncio.sleep(4) timestamped_msg('blocking_func completed.') async def blocking_io_handler(): timestamped_msg('blocking_io_handler started.') await asyncio.to_thread(blocking_func) timestamped_msg('blocking_io_handler completed.') async def main(): timestamped_msg('Program started.') tk_root = tk.Tk() tk_root.after(0, blocking_io_handler) tk_root.after(2000, tk_root.destroy) tk_root.mainloop() timestamped_msg('Tk event loop terminated.') timestamped_msg('Program ended.') if __name__ == '__main__': asyncio.run(main()) | Tkinter's Problem with Blocking IO Calls The statement asyncio.sleep(60) will block tkinter for a minute if both are running in the same thread. Blocking coroutine functions cannot run in the same thread as tkinter. 
Similarly, the statement time.sleep(60) will block both tkinter and asyncio for a minute if all three are running in the same thread. Blocking non-coroutine functions cannot run in the same thread as either tkinter or asyncio. Sleep commands have been used here to simplify this example of the blocking problem. The principles shown are applicable to internet or database accesses. Solution A solution is to create three distinct environments and take care when moving data between them. Environment 1 - Main Thread This is Python's MainThread. It's where Python starts and Tkinter lives. No blocking code can be allowed in this environment. Environment 2 - Asyncio's Thread This is where asyncio and all its coroutine functions live. Blocking functions are only allowed if they are coroutine functions. Environment 3 - Multiple single use threads This is where non-coroutine blocking functions run. Since these are capable of blocking each other each needs its own thread. Data Data returned from blocking IO to tkinter should be returned in threadsafe queues using a producer/consumer pattern. Arguments and return values should not be passed between environments using regular functions. Use the threadsafe calling protocols provided by Python as illustrated below. Wrong code func(*args, **kwargs) return_value = func(*args, **kwargs) print(*args, **kwargs) Correct code threading.Thread(func, *args, **kwargs).start() The return_value is not directly available. Use a queue. future = asyncio.run_coroutine_threadsafe(func(*args, **kwargs), loop) return_value = future.result(). print: Use a threadsafe queue to move printable objects to a single print thread. (See the SafePrinter context maanger in the code below). The Polling Problem With tkinter, asyncio, and threading all running together there are three event loops controlling different stuff. Bad things can happen when they mix. For example threading's Queue.get() will block environment 1 where tkinter's loop is trying to control events. In this particular case, Queue.get_nowait() has to be used with polling via tkinter's after command. See the code below for other examples of unusual polling of queues. GUI Console output 0.001s In Print Thread of 2 without a loop: The SafePrinter is open for output. 0.001s In MainThread of 2 without a loop --- main starting 0.001s In Asyncio Thread of 3 without a loop --- aio_main starting 0.001s In MainThread of 3 without a loop --- tk_main starting 0.305s In Asyncio Thread of 3 with a loop --- manage_aio_loop starting 0.350s In MainThread of 3 without a loop --- tk_callbacks starting 0.350s In MainThread of 3 without a loop --- tk_callback_consumer starting 0.350s In Asyncio Thread of 3 with a loop --- aio_blocker starting. block=3.1s. 0.350s In MainThread of 3 without a loop --- aio_exception_handler starting. block=3.1s 0.351s In MainThread of 3 without a loop --- aio_exception_handler starting. block=1.1s 0.351s In Asyncio Thread of 4 with a loop --- aio_blocker starting. block=1.1s. 0.351s In IO Block Thread (3.2s) of 4 without a loop --- io_exception_handler starting. block=3.2s. 0.351s In IO Block Thread (3.2s) of 4 without a loop --- io_blocker starting. block=3.2s. 0.351s In IO Block Thread (1.2s) of 5 without a loop --- io_exception_handler starting. block=1.2s. 0.351s In IO Block Thread (1.2s) of 5 without a loop --- io_blocker starting. block=1.2s. 0.351s In MainThread of 5 without a loop --- tk_callbacks ending - All blocking callbacks have been scheduled. 
1.451s In Asyncio Thread of 5 with a loop --- aio_blocker ending. block=1.1s. 1.459s In MainThread of 5 without a loop --- aio_exception_handler ending. block=1.1s 1.555s In IO Block Thread (1.2s) of 5 without a loop --- io_blocker ending. block=1.2s. 1.555s In IO Block Thread (1.2s) of 5 without a loop --- io_exception_handler ending. block=1.2s. 3.450s In Asyncio Thread of 4 with a loop --- aio_blocker ending. block=3.1s. 3.474s In MainThread of 4 without a loop --- aio_exception_handler ending. block=3.1s 3.553s In IO Block Thread (3.2s) of 4 without a loop --- io_blocker ending. block=3.2s. 3.553s In IO Block Thread (3.2s) of 4 without a loop --- io_exception_handler ending. block=3.2s. 4.140s In MainThread of 3 without a loop --- tk_callback_consumer ending 4.140s In MainThread of 3 without a loop --- tk_main ending 4.141s In Asyncio Thread of 3 with a loop --- manage_aio_loop ending 4.141s In Asyncio Thread of 3 without a loop --- aio_main ending 4.141s In MainThread of 2 without a loop --- main ending 4.141s In Print Thread of 2 without a loop: The SafePrinter has closed. Process finished with exit code 0 Code """ tkinter_demo.py Created with Python 3.10 """ import asyncio import concurrent.futures import functools import itertools import queue import sys import threading import time import tkinter as tk import tkinter.ttk as ttk from collections.abc import Iterator from contextlib import AbstractContextManager from dataclasses import dataclass from types import TracebackType from typing import Optional, Type # Global reference to loop allows access from different environments. aio_loop: Optional[asyncio.AbstractEventLoop] = None def io_blocker(task_id: int, tk_q: queue.Queue, block: float = 0) -> None: """ Block the thread and put a 'Hello World' work package into Tkinter's work queue. This is a producer for Tkinter's work queue. It will run in a special thread created solely for running this function. The statement `time.sleep(block)` can be replaced with any non-awaitable blocking code. Args: task_id: Sequentially issued tkinter task number. tk_q: tkinter's work queue. block: block time Returns: Nothing. The work package is returned via the threadsafe tk_q. """ safeprint(f'io_blocker starting. {block=}s.') time.sleep(block) # Exceptions for testing handlers. Uncomment these to see what happens when exceptions are raised. # raise IOError('Just testing an expected error.') # raise ValueError('Just testing an unexpected error.') work_package = f"Task #{task_id} {block}s: 'Hello Threading World'." tk_q.put(work_package) safeprint(f'io_blocker ending. {block=}s.') def io_exception_handler(task_id: int, tk_q: queue.Queue, block: float = 0) -> None: """ Exception handler for non-awaitable blocking callback. It will run in a special thread created solely for running io_blocker. Args: task_id: Sequentially issued tkinter task number. tk_q: tkinter's work queue. block: block time """ safeprint(f'io_exception_handler starting. {block=}s.') try: io_blocker(task_id, tk_q, block) except IOError as exc: safeprint(f'io_exception_handler: {exc!r} was handled correctly. ') finally: safeprint(f'io_exception_handler ending. {block=}s.') async def aio_blocker(task_id: int, tk_q: queue.Queue, block: float = 0) -> None: """ Asynchronously block the thread and put a 'Hello World' work package into Tkinter's work queue. This is a producer for Tkinter's work queue. It will run in the same thread as the asyncio loop. 
The statement `await asyncio.sleep(block)` can be replaced with any awaitable blocking code. Args: task_id: Sequentially issued tkinter task number. tk_q: tkinter's work queue. block: block time Returns: Nothing. The work package is returned via the threadsafe tk_q. """ safeprint(f'aio_blocker starting. {block=}s.') await asyncio.sleep(block) # Exceptions for testing handlers. Uncomment these to see what happens when exceptions are raised. # raise IOError('Just testing an expected error.') # raise ValueError('Just testing an unexpected error.') work_package = f"Task #{task_id} {block}s: 'Hello Asynchronous World'." # Put the work package into the tkinter's work queue. while True: try: # Asyncio can't wait for the thread blocking `put` methodβ¦ tk_q.put_nowait(work_package) except queue.Full: # Give control back to asyncio's loop. await asyncio.sleep(0) else: # The work package has been placed in the queue so we're done. break safeprint(f'aio_blocker ending. {block=}s.') def aio_exception_handler(mainframe: ttk.Frame, future: concurrent.futures.Future, block: float, first_call: bool = True) -> None: """ Exception handler for future coroutine callbacks. This non-coroutine function uses tkinter's event loop to wait for the future to finish. It runs in the Main Thread. Args: mainframe: The after method of this object is used to poll this function. future: The future running the future coroutine callback. block: The block time parameter used to identify which future coroutine callback is being reported. first_call: If True will cause an opening line to be printed on stdout. """ if first_call: safeprint(f'aio_exception_handler starting. {block=}s') poll_interval = 100 # milliseconds try: # Python will not raise exceptions during future execution until `future.result` is called. A zero timeout is # required to avoid blocking the thread. future.result(0) # If the future hasn't completed, reschedule this function on tkinter's event loop. except concurrent.futures.TimeoutError: mainframe.after(poll_interval, functools.partial(aio_exception_handler, mainframe, future, block, first_call=False)) # Handle an expected error. except IOError as exc: safeprint(f'aio_exception_handler: {exc!r} was handled correctly. ') else: safeprint(f'aio_exception_handler ending. {block=}s') def tk_callback_consumer(tk_q: queue.Queue, mainframe: ttk.Frame, row_itr: Iterator): """ Display queued 'Hello world' messages in the Tkinter window. This is the consumer for Tkinter's work queue. It runs in the Main Thread. After starting, it runs continuously until the GUI is closed by the user. """ # Poll continuously while queue has work needing processing. poll_interval = 0 try: # Tkinter can't wait for the thread blocking `get` methodβ¦ work_package = tk_q.get_nowait() except queue.Empty: # β¦so be prepared for an empty queue and slow the polling rate. poll_interval = 40 else: # Process a work package. label = ttk.Label(mainframe, text=work_package) label.grid(column=0, row=(next(row_itr)), sticky='w', padx=10) finally: # Have tkinter call this function again after the poll interval. mainframe.after(poll_interval, functools.partial(tk_callback_consumer, tk_q, mainframe, row_itr)) def tk_callbacks(mainframe: ttk.Frame, row_itr: Iterator): """ Set up 'Hello world' callbacks. This runs in the Main Thread. Args: mainframe: The mainframe of the GUI used for displaying results from the work queue. row_itr: A generator of line numbers for displaying items from the work queue. 
""" safeprint('tk_callbacks starting') task_id_itr = itertools.count(1) # Create the job queue and start its consumer. tk_q = queue.Queue() safeprint('tk_callback_consumer starting') tk_callback_consumer(tk_q, mainframe, row_itr) # Schedule the asyncio blocker. for block in [3.1, 1.1]: # This is a concurrent.futures.Future not an asyncio.Future because it isn't threadsafe. Also, # it doesn't have a wait with timeout which we shall need. task_id = next(task_id_itr) future = asyncio.run_coroutine_threadsafe(aio_blocker(task_id, tk_q, block), aio_loop) # Can't use Future.add_done_callback here. It doesn't return until the future is done and that would block # tkinter's event loop. aio_exception_handler(mainframe, future, block) # Run the thread blocker. for block in [3.2, 1.2]: task_id = next(task_id_itr) threading.Thread(target=io_exception_handler, args=(task_id, tk_q, block), name=f'IO Block Thread ({block}s)').start() safeprint('tk_callbacks ending - All blocking callbacks have been scheduled.\n') def tk_main(): """ Run tkinter. This runs in the Main Thread. """ safeprint('tk_main starting\n') row_itr = itertools.count() # Create the Tk root and mainframe. root = tk.Tk() mainframe = ttk.Frame(root, padding="15 15 15 15") mainframe.grid(column=0, row=0) # Add a close button button = ttk.Button(mainframe, text='Shutdown', command=root.destroy) button.grid(column=0, row=next(row_itr), sticky='w') # Add an information widget. label = ttk.Label(mainframe, text=f'\nWelcome to hello_world*4.py.\n') label.grid(column=0, row=next(row_itr), sticky='w') # Schedule the 'Hello World' callbacks mainframe.after(0, functools.partial(tk_callbacks, mainframe, row_itr)) # The asyncio loop must start before the tkinter event loop. while not aio_loop: time.sleep(0) root.mainloop() safeprint(' ', timestamp=False) safeprint('tk_callback_consumer ending') safeprint('tk_main ending') async def manage_aio_loop(aio_initiate_shutdown: threading.Event): """ Run the asyncio loop. This provides an always available asyncio service for tkinter to make any number of simultaneous blocking IO calls. 'Any number' includes zero. This runs in Asyncio's thread and in asyncio's loop. """ safeprint('manage_aio_loop starting') # Communicate the asyncio loop status to tkinter via a global variable. global aio_loop aio_loop = asyncio.get_running_loop() # If there are no awaitables left in the queue asyncio will close. # The usual wait command β Event.wait() β would block the current thread and the asyncio loop. while not aio_initiate_shutdown.is_set(): await asyncio.sleep(0) safeprint('manage_aio_loop ending') def aio_main(aio_initiate_shutdown: threading.Event): """ Start the asyncio loop. This non-coroutine function runs in Asyncio's thread. """ safeprint('aio_main starting') asyncio.run(manage_aio_loop(aio_initiate_shutdown)) safeprint('aio_main ending') def main(): """Set up working environments for asyncio and tkinter. This runs in the Main Thread. """ safeprint('main starting') # Start the permanent asyncio loop in a new thread. # aio_shutdown is signalled between threads. `asyncio.Event()` is not threadsafe. aio_initiate_shutdown = threading.Event() aio_thread = threading.Thread(target=aio_main, args=(aio_initiate_shutdown,), name="Asyncio's Thread") aio_thread.start() tk_main() # Close the asyncio permanent loop and join the thread in which it runs. 
aio_initiate_shutdown.set() aio_thread.join() safeprint('main ending') @dataclass class SafePrinter(AbstractContextManager): _time_0 = time.perf_counter() _print_q = queue.Queue() _print_thread: threading.Thread | None = None def __enter__(self): """ Run the safeprint consumer method in a print thread. Returns: Thw safeprint producer method. (a.k.a. the runtime context) """ self._print_thread = threading.Thread(target=self._safeprint_consumer, name='Print Thread') self._print_thread.start() return self._safeprint def __exit__(self, __exc_type: Type[BaseException] | None, __exc_value: BaseException | None, __traceback: TracebackType | None) -> bool | None: """ Close the print and join the print thread. Args: None or the exception raised during the execution of the safeprint producer method. __exc_type: __exc_value: __traceback: Returns: False to indicate that any exception raised in self._safeprint has not been handled. """ self._print_q.put(None) self._print_thread.join() return False def _safeprint(self, msg: str, *, timestamp: bool = True, reset: bool = False): """Put a string into the print queue. 'None' is a special msg. It is not printed but will close the queue and this context manager. The exclusive thread and a threadsafe print queue ensure race free printing. This is the producer in the print queue's producer/consumer pattern. It runs in the same thread as the calling function Args: msg: The message to be printed. timestamp: Print a timestamp (Default = True). reset: Reset the time to zero (Default = False). """ if reset: self._time_0 = time.perf_counter() if timestamp: self._print_q.put(f'{self._timestamp()} --- {msg}') else: self._print_q.put(msg) def _safeprint_consumer(self): """Get strings from the print queue and print them on stdout. The print statement is not threadsafe, so it must run in its own thread. This is the consumer in the print queue's producer/consumer pattern. """ print(f'{self._timestamp()}: The SafePrinter is open for output.') while True: msg = self._print_q.get() # Exit function when any producer function places 'None'. if msg is not None: print(msg) else: break print(f'{self._timestamp()}: The SafePrinter has closed.') def _timestamp(self) -> str: """Create a timestamp with useful status information. This is a support function for the print queue producers. It runs in the same thread as the calling function so the returned data does not cross between threads. Returns: timestamp """ secs = time.perf_counter() - self._time_0 try: asyncio.get_running_loop() except RuntimeError as exc: if exc.args[0] == 'no running event loop': loop_text = 'without a loop' else: raise else: loop_text = 'with a loop' return f'{secs:.3f}s In {threading.current_thread().name} of {threading.active_count()} {loop_text}' if __name__ == '__main__': with SafePrinter() as safeprint: sys.exit(main()) | 4 | 2 |
73,362,588 | 2022-8-15 | https://stackoverflow.com/questions/73362588/type-hinting-a-decorator-that-adds-a-parameter-with-default-value | Motivation: I realised I had a lot of class methods that were also being used as TKinter callbacks, which pass a tk.Event object as the first (non-self) argument. If my application wants to call them normally as well, this event argument should be None by default... class Writer: # very unwieldy second parameter... def write(self, event: Optional[tk.Event] = None, number: int = 0) -> str: return str(number) It's more boilerplate, it forces me to provide default arguments for everything, and pylint is screaming about unused arguments. So I wrote a decorator to add the extra parameter... but how do I type hint it correctly? (I'm using Python 3.8.10 and mypy 0.971.) def tk_callback(method): @functools.wraps(method) def wrapper(self, event=None, *args, **kwargs): return method(self, *args, **kwargs) return wrapper The callback should not hide the types of the original parameters. It should reflect that the added parameter event gets passed a default value of None. I did some reading about generics, Protocols, and a little searching (e.g.) How to decorate a function in python in a way that adds a new argument and creates the correct signature? Python decorator adding argument to function and its signature The linked questions are similar but not duplicates: I'd like type hinting, and specifically make the extra argument on the wrapper function take a default value. Then I made an attempt: # foo.py from __future__ import annotations import functools import tkinter as tk from typing import Callable, Optional, Protocol, TypeVar from typing_extensions import Concatenate, ParamSpec P = ParamSpec("P") CT_contra = TypeVar("CT_contra", contravariant=True) RT_co = TypeVar("RT_co", covariant=True) C = TypeVar("C") R = TypeVar("R") class Prot(Protocol[CT_contra, P, RT_co]): def __call__(self, _: CT_contra, # this would be "self" on the method itself event: Optional[tk.Event] = ..., /, *args: P.args, **kwargs: P.kwargs ) -> RT_co: ... def tk_callback(method: Callable[Concatenate[C, P], R] ) -> Prot[C, P, R]: @functools.wraps(method) def wrapper( self: C, event: Optional[tk.Event] = None, *args: P.args, **kwargs: P.kwargs ) -> R: return method(self, *args, **kwargs) return wrapper Which doesn't seem to work. mypy complains the decorator doesn't return what the type hint declares. error: Incompatible return value type (got "Callable[[C, Optional[Event[Any]], **P], R]", expected "Prot[C, P, R]") It also notes that the returned wrapper functions should have a very similar type: "Prot[C, P, R].__call__" has type "Callable[[C, DefaultArg(Optional[Event[Any]]), **P], R]" (Digression: not relevant to my use case, but if I don't supply the default argument in the protocol, it still complains while noting that "Prot[C, P, R].__call__" has type "Callable[[C, Optional[Event[Any]], **P], R]") even though this is exactly what is returned!) So what should be the right way to type hint this decorator, and/or how can I get the type checking to work correctly? More troubleshooting information: the revealed type of a method is also strange. 
# also in foo.py class Writer: def __init__(self) -> None: return @tk_callback def write(self, number: int) -> str: return str(number) writer = Writer() reveal_type(writer.write) # mypy: Revealed type is "foo.Prot[foo.Writer, [number: builtins.int], builtins.str] | This question boils down to: How do I type hint a decorator which adds an argument to a class method? Inspired by the docs for typing.Concatenate, I've come up with a solution which works when the first parameter of the function being decorated is self. I've simplified your example a bit, to use an optional string instead of your more involved tk.Event which isn't really relevant to the question. from collections.abc import Callable from typing import Concatenate, Optional, ParamSpec, TypeVar S = TypeVar("S") P = ParamSpec("P") R = TypeVar("R") def with_event( method: Callable[Concatenate[S, P], R] ) -> Callable[Concatenate[S, Optional[str], P], R]: def wrapper( _self: S, event: Optional[str] = None, *args: P.args, **kwargs: P.kwargs ) -> R: print(f"event was {event}") return method(_self, *args, **kwargs) return wrapper class Writer: @with_event def write(self, number: int = 0) -> str: return f"number was {number}" writer = Writer() print(writer.write("my event", number=5)) This prints: event was my event number was 5 The with_event signature works like this: def with_event( # Take one parameter which is a function with takes a # single positional argument (S), then some other stuff (P) # and return another function which takes the same position # argument S, followed by an Optional[str], followed by # the "other stuff", P. The return types of these two functions # is the same (R). method: Callable[Concatenate[S, P], R] ) -> Callable[Concatenate[S, Optional[str], P], R]: | 4 | 1 |
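Mapping the simplified example back to the question's tkinter use case is mostly a matter of swapping Optional[str] for Optional[tk.Event]. A sketch under the same assumptions as the answer above (Python 3.10+ so Concatenate and ParamSpec are importable from typing; on 3.8 they come from typing_extensions instead), reusing the question's Writer/write names:

import tkinter as tk
from collections.abc import Callable
from typing import Concatenate, Optional, ParamSpec, TypeVar

S = TypeVar("S")
P = ParamSpec("P")
R = TypeVar("R")

def tk_callback(
    method: Callable[Concatenate[S, P], R]
) -> Callable[Concatenate[S, Optional[tk.Event], P], R]:
    def wrapper(
        _self: S, event: Optional[tk.Event] = None, *args: P.args, **kwargs: P.kwargs
    ) -> R:
        return method(_self, *args, **kwargs)   # the event is accepted but not forwarded
    return wrapper

class Writer:
    @tk_callback
    def write(self, number: int = 0) -> str:
        return str(number)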
73,318,724 | 2022-8-11 | https://stackoverflow.com/questions/73318724/mitigating-tcp-connection-resets-in-aws-fargate | I am using Amazon ECS on AWS Fargate, My instances can access the internet, but the connection drops after 350 seconds. On average, out of 100 times, my service is getting ConnectionResetError: [Errno 104] Connection reset by peer error approximately 5 times. I found a couple of suggestions to fix that issue on my server-side code, see here and here Cause If a connection that's using a NAT gateway is idle for 350 seconds or more, the connection times out. When a connection times out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet). Solution To prevent the connection from being dropped, you can initiate more traffic over the connection. Alternatively, you can enable TCP keepalive on the instance with a value less than 350 seconds. Existing Code: url = "url to call http" params = { "year": year, "month": month } response = self.session.get(url, params=params) To fix that I am currently using a band-aid retry logic solution using tenacity, @retry( retry=( retry_if_not_exception_type( HTTPError ) # specific: requests.exceptions.ConnectionError ), reraise=True, wait=wait_fixed(2), stop=stop_after_attempt(5), ) def call_to_api(): url = "url to call HTTP" params = { "year": year, "month": month } response = self.session.get(url, params=params) So my basic question is how can I use python requests correctly to do any of the below solutions, Close the connection before 350 seconds of inactivity Enable Keep-Alive for TCP connections | Posting solution for the future user who will face this issue while working on AWS Farget + NAT, We need to set the TCP keepalive settings to the values dictated by our server-side configuration, this PR helps me a lot to fix my issue: https://github.com/customerio/customerio-python/pull/70/files import socket from urllib3.connection import HTTPConnection HTTPConnection.default_socket_options = ( HTTPConnection.default_socket_options + [ (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), (socket.SOL_TCP, socket.TCP_KEEPIDLE, 300), (socket.SOL_TCP, socket.TCP_KEEPINTVL, 60) ] ) | 5 | 1 |
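If you would rather not patch urllib3's defaults globally, one common alternative is to scope the same socket options to a single requests Session through a custom HTTPAdapter. A sketch (the adapter class name is made up; the option values mirror the ones above, with the keepalive idle time kept under the 350-second NAT timeout):

import socket
import requests
from requests.adapters import HTTPAdapter
from urllib3.connection import HTTPConnection

KEEPALIVE_OPTIONS = HTTPConnection.default_socket_options + [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
    (socket.SOL_TCP, socket.TCP_KEEPIDLE, 300),   # start probing before the 350 s NAT idle timeout
    (socket.SOL_TCP, socket.TCP_KEEPINTVL, 60),
]

class KeepAliveAdapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        # pass the socket options through to every connection this adapter creates
        kwargs["socket_options"] = KEEPALIVE_OPTIONS
        super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", KeepAliveAdapter())
session.mount("http://", KeepAliveAdapter())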
73,363,627 | 2022-8-15 | https://stackoverflow.com/questions/73363627/pass-a-macro-as-a-parameter-jinja-dbt | {{ today_date_milliseconds() }} - is my macro in the project. How to redirect this macro as a parameter, so it will be by default and I could in yml write another macro? {% test valid_date(model, column_name, exclude_condition = '1=1') %} SELECT {{ column_name }} FROM {{ model }} WHERE (CAST( {{ column_name }} AS BIGINT) < {{ today_date_milliseconds() }} AND {{ exclude_condition }} {% endtest %} In yml it will look like - name: date_3 description: column for third date tests: - valid_date: lower_bound: 'name of another macro' | I love this question -- I just learned something looking into it (and I've wanted to do this in another project, so I'm glad I did!). First, a caveat that this is undocumented and probably not encouraged, and could probably break at any time. The good news is that unbound macros act like python functions, so you can assign them to variables in jinja and execute them later. As an example, I have two macros that just log a word to stdout: -- log_one.sql {% macro log_one() %} {{ log("one", info=True) }} {% endmacro %} -- log_two.sql {% macro log_two() %} {{ log("two", info=True) }} {% endmacro %} In a model, I can assign one of these two macros to a variable, and then execute the variable, like this: -- this model will print `one` to the console when it is built {% if execute %} {% set bound_macro = log_one %} {{ bound_macro() }} {% endif %} select 1 Note that there are no quotes around log_one, so I'm passing an actual reference to the macro into the variable called bound_macro, which is why this works. But this isn't enough for your use case, since your config is going to enter the jinja context as a string, not a reference to the macro. In Python, you can use eval() to evaluate a string as code, but jinja doesn't allow that. (This is the undocumented part that could break in the future but works on dbt v1.2) Fortunately, in every dbt macro, you have access to a global object called context. context quacks like a dictionary, and you can access all of the macros, built-ins, etc., with the context's get method, which works just like a Python dict's get. So you can use a macro's name as a string to get a reference to the macro itself by using context.get("macro_name"). And just like with a dict, you can provide a default value as a second argument to get, if the first argument isn't present in the context. -- this will print `two` when the model is built {% if execute %} {% set macro_name = "log_two" %} {% set bound_macro = context.get(macro_name, log_one) %} {{ bound_macro() }} {% endif %} select 1 -- this will print `one` when the model is built, since macro_name is not defined {% if execute %} {% set bound_macro = context.get(macro_name, log_one) %} {{ bound_macro() }} {% endif %} select 1 Edit for clarity: context will only be populated after parsing, so any time you access context, it should be wrapped in an {% if execute %} block, as in the examples above. For your specific example, I would add an argument called lower_bound to your test, and give it a default value (of a string!), and then use context to retrieve the right macro. 
To be a little safer, you could also provide a default arg to get, although that might make it harder to debug typos in the config: {% test valid_date(model, column_name, exclude_condition = '1=1', lower_bound="today_date_milliseconds") %} {% set lower_bound_macro = context.get(lower_bound, today_date_milliseconds) %} SELECT {{ column_name }} FROM {{ model }} WHERE (CAST( {{ column_name }} AS BIGINT) < {{ lower_bound_macro() }} AND {{ exclude_condition }}) {% endtest %} | 5 | 6 |
73,322,507 | 2022-8-11 | https://stackoverflow.com/questions/73322507/how-to-pass-a-fixture-that-returns-a-variable-length-iterable-of-values-to-pytes | I have a pytest fixture that produces an iterable and I would like to parameterize a test using the items in this iterable, but I cannot figure out the correct syntax. Does anyone know how to parametrize a test using the values of a fixture? Here is some dummy code that shows my current approach: import pytest @pytest.fixture() def values(): return [1, 1, 2] @pytest.mark.parametrize('value', values) def test_equal(value): assert value == 1 | The short answer is pytest doesn't support passing fixtures to parametrize. The out-of-the-box solution provided by pytest is to either use indirect parametrization or define your own parametrization scheme using pytest_generate_tests as described in How can I pass fixtures to pytest.mark.parameterize? These are the workarounds I've used before to solve this problem. Option 1: Separate function for generating values from typing import Iterator, List import pytest def generate_values() -> Iterator[str]: # ... some computationally-intensive operation ... all_possible_values = [1, 2, 3] for value in all_possible_values: yield value @pytest.fixture() def values() -> List[str]: return list(generate_values()) def test_all_values(values): assert len(values) > 5 @pytest.mark.parametrize("value", generate_values()) def test_one_value_at_a_time(value: int): assert value == 999 $ pytest -vv tests ... ========================================================== short test summary info =========================================================== FAILED tests/test_main.py::test_all_values - assert 3 > 5 FAILED tests/test_main.py::test_one_value_at_a_time[1] - assert 1 == 999 FAILED tests/test_main.py::test_one_value_at_a_time[2] - assert 2 == 999 FAILED tests/test_main.py::test_one_value_at_a_time[3] - assert 3 == 999 The main change is moving the generation of the list of values to a regular, non-fixture function generate_values. If it's a static list, then you can even forego making it a function and just define it as a regular module-level variable. ALL_POSSIBLE_VALUES = [1, 2, 3] Not everything always needs to be a fixture. It's advantageous for injecting the test data into functions, yes, but it doesn't mean you can't use regular Python functions and variables. The only problem with this solution is if generating the list of values depends on other fixtures, i.e. reusable fixtures. In that case, you would have to re-define those as well. I've kept the values fixture here for tests where you do need to get all the possible values as a list, like in test_all_values. If this list of values is going to be used for multiple other tests, instead of decorating with parametrize for each one, you can do that in the pytest_generate_tests hook. def pytest_generate_tests(metafunc: pytest.Metafunc): if "value" in metafunc.fixturenames: metafunc.parametrize("value", generate_values()) def test_one_value_at_a_time(value: int): assert value == 999 This option avoids a lot of duplication and you can then even change generate_values to whatever or however you need it to be independently of the tests and the testing framework. Option 2: Use indirect and let fixture return 1 value at a time If it's possible to know the length of the list of values beforehand (as in before running the tests), you can then use parametrize's indirect= and then let the fixture only return one value at a time. # Set/Defined by some env/test configuration? 
MAX_SUPPORTED_VALUES = 5 @pytest.fixture def value(request: pytest.FixtureRequest) -> int: all_possible_values = [1, 2, 3, 4, 5] selected_index = request.param return all_possible_values[selected_index] @pytest.mark.parametrize("value", range(MAX_SUPPORTED_VALUES), indirect=True) def test_one_value_at_a_time(value: int): assert value == 999 The main change is letting the fixture accept a parameter (an index), and then returning the value at that index. (Also renaming the fixture from values to value to match the return value). Then, in the test, you use indirect=True and then pass a range of indices, which is passed to the fixture as request.param. Again, this only works if you know at least the length of the list of values. Also, again, instead of applying the parametrize decorator for each test that uses this fixture, you can use pytest_generate_tests: def pytest_generate_tests(metafunc: pytest.Metafunc): if "value" in metafunc.fixturenames: metafunc.parametrize("value", range(MAX_SUPPORTED_VALUES), indirect=True) def test_one_value_at_a_time(value: int): assert value == 999 | 4 | 4 |
73,320,669 | 2022-8-11 | https://stackoverflow.com/questions/73320669/celery-kombu-exceptions-contentdisallowed-in-docker | I am using celery with a fastAPI. Getting Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)') while running in docker. When running the same in local machine without docker there is not issue. The configuration for the same is as below. celery_app = Celery('cda-celery-tasks', broker=CFG.BROKER_URL, backend=CFG.BACKEND_URL, include=['src.tasks.tasks'] ) celery_app.conf.task_serializer = 'pickle' celery_app.conf.result_serializer = 'pickle' celery_app.conf.accept_content = ['pickle'] celery_app.conf.enable_utc = True While Running in docker I am getting the error continuously FROM python:3.8 WORKDIR /app COPY . . RUN pip3 install poetry ENV PATH="/root/.poetry/bin:$PATH" RUN poetry install the celery is started using the following command from kubernetes. poetry run celery -A src.infrastructure.celery_application worker --loglevel=INFO --concurrency 2 While running I am getting the error continuously Can't decode message body: ContentDisallowed('Refusing to deserialize untrusted content of type json (application/json)') body: '{"method": "enable_events", "arguments": {}, "destination": null, "pattern": null, "matcher": null}' (99b) Traceback (most recent call last): File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/messaging.py", line 620, in _receive_callback decoded = None if on_m else message.decode() File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py", line 194, in decode self._decoded_cache = self._decode() File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/message.py", line 198, in _decode return loads(self.body, self.content_type, File "/root/.cache/pypoetry/virtualenvs/cda-9TtSrW0h-py3.8/lib/python3.8/site-packages/kombu/serialization.py", line 242, in loads raise self._for_untrusted_content(content_type, 'untrusted') kombu.exceptions.ContentDisallowed: Refusing to deserialize untrusted content of type json (application/json) Could someone please tell me the possible cause and solution to manage the same? If I've missed anything, over- or under-emphasized a specific point, please let me know in the comments. Thank you so much in advance for your time. | Configuring the celery_app with the accept_content type seems to fix the issue: celery_app.conf.accept_content = ['application/json', 'application/x-python-serialize', 'pickle'] | 6 | 5 |
73,318,552 | 2022-8-11 | https://stackoverflow.com/questions/73318552/django-orm-query-optimisation-with-multiple-joins | In my app, I can describe an Entity using different Protocols, with each Protocol being a collection of various Traits, and each Trait allows two or more Classes. So, a Description is a collection of Expressions. E.g., I want to describe an entity "John" with the Protocol "X" that comprises the following two Traits and Classes: Protocol ABC Trait 1: Height Available Classes: a. Short b. Medium c. Tall Trait 2: Weight Available Classes: a. Light b. Medium c. Heavy John's Description: Expression 1: c. Tall, Expression 2: b. Medium My model specification (barebone essentials for simplicity): class Protocol(models.Model): """ A Protocol is a collection of Traits """ name = models.CharField() class Trait(models.Model): """ Stores the Traits. Each Trait can have multiple Classes """ name = models.CharField() protocol = models.ForeignKey( Protocol, help_text="The reference protocol of the trait", ) class Class(models.Model): """ Stores the different Classes related to a Trait. """ name = models.CharField() trait = models.ForeignKey(Trait) class Description(models.Model): """ Stores the Descriptions. A description is a collection of Expressions. """ name = models.CharField() protocol = models.ForeignKey( Protocol, help_text="reference to the protocol used to make the description;\ this will define which Traits will be available", ) entity = models.ForeignKey( Entity, help_text="the Entity to which the description refers to", ) class Expression(models.Model): """ Stores the expressions of entities related to a specific Description. It refers to one particular Class (which is then associated with a specific Trait) """ class = models.ForeignKey(Class) description = models.ForeignKey(Description) Following the previous example, let's say I want to find all the Entities that are medium or tall (Trait 1) and heavy (Trait 2). The query I'm now using is the following: # This is the filter returned by the HTML form, which list # all the available Classes for each Trait of the selected Protocol filters = [ {'trait': 1, 'class': [2, 3]}, {'trait': 2, 'class': [6,]}, ] queryset = Description.objects.all() for filter in filters: queryset = queryset.filter(expression_set__class__in=filter["class"]) The problem is that the query is slow (I have ATM ~1000 Descriptions, described with a Protocol of 40 Traits, each Trait having 2 to 5 Classes). It takes about two seconds to return the results even when filtering by only 5-6 Expressions. I tried using prefetch_related("expression_set") or prefetch_related("expression_set__class") but with no significant improvement. The question is: can you suggest a way to improve the performance, or this is simply the reality of searching through so many tables? Thank you very much for your time. EDIT: The following is the query generated by the Manager when, e.g., eight filters (see previous code snippet) are applied. 
SELECT "describe_description"."id", "describe_description"."name", "describe_description"."protocol_id", FROM "describe_description" INNER JOIN "describe_expression" ON ("describe_description"."id" = "describe_expression"."description_id") INNER JOIN "describe_expression" T4 ON ("describe_description"."id" = T4."description_id") INNER JOIN "describe_expression" T6 ON ("describe_description"."id" = T6."description_id") INNER JOIN "describe_expression" T8 ON ("describe_description"."id" = T8."description_id") INNER JOIN "describe_expression" T10 ON ("describe_description"."id" = T10."description_id") INNER JOIN "describe_expression" T12 ON ("describe_description"."id" = T12."description_id") INNER JOIN "describe_expression" T14 ON ("describe_description"."id" = T14."description_id") INNER JOIN "describe_expression" T16 ON ("describe_description"."id" = T16."description_id") INNER JOIN "describe_expression" T18 ON ("describe_description"."id" = T18."description_id") WHERE ("describe_expression"."class_id" IN (732) AND T4."class_id" IN (740) AND T6."class_id" IN (760) AND T8."class_id" IN (783) AND T10."class_id" IN (794) AND T12."class_id" IN (851) AND T14."class_id" IN (857) AND T16."class_id" IN (860) AND T18."class_id" IN (874)) | First you should avoid multiple joins by aggregating desired filters upfront: filters = [ {'trait': 1, 'class': [2, 3]}, {'trait': 2, 'class': [6,]}, ] queryset = Description.objects.all() class_filter = [] for filter_entry in filters: class_filter.append(filter_entry["class"]) queryset = queryset.filter(expression_set__class__in=class_filter) Second problem is scanning for text values. Use db_index=True on your Class.name field. EDIT: There is a difference in chaining the filters on the same table and using Q objects. It does not act like AND on the same object. It seems counterintuitive as in the sql you can see AND but that is AND on multiple joins, where each join effectively duplicates the descriptions (that is why it get's slow). Best explained in the Django docs or this article. Quick excerpt of the docs: To select all blogs containing at least one entry from 2008 having βLennonβ in its headline (the same entry satisfying both conditions), we would write: Blog.objects.filter(entry__headline__contains='Lennon', entry__pub_date__year=2008) Otherwise, to perform a more permissive query selecting any blogs with merely some entry with βLennonβ in its headline and some entry from 2008, we would write: Blog.objects.filter(entry__headline__contains='Lennon').filter(entry__pub_date__year=2008) EDIT 2: Schematic example of above from this answer: Blog.objects.filter(entry__headline_contains='Lennon', entry__pub_date__year=2008) filters only Blog 1 Blog.objects.filter(entry__headline_contains='Lennon').filter( entry__pub_date__year=2008) filters blog 1 and 2 | 4 | 2 |
73,343,529 | 2022-8-13 | https://stackoverflow.com/questions/73343529/django-google-kubernetes-client-not-running-exe-inside-the-job | I have a docker image that I want to run inside my django code. Inside that image there is an executable that I have written using c++ that writes it's output to google cloud storage. Normally when I run the django code like this: container = client.V1Container(name=container_name, command=["//usr//bin//sleep"], args=["3600"], image=container_image, env=env_list, security_context=security) And manually go inside the container to run this: gcloud container clusters get-credentials my-cluster --region us-central1 --project proj_name && kubectl exec pod-id -c jobcontainer -- xvfb-run -a "path/to/exe" It works as intended and gives off the output to cloud storage. (I need to use a virtual monitor so I'm using xvfb first). However I must call this through django like this: container = client.V1Container(name=container_name, command=["xvfb-run"], args=["-a","\"path/to/exe\""], image=container_image, env=env_list, security_context=security) But when I do this, the job gets created but never finishes and does not give off an output to the storage. When I go inside my container to run ps aux I get this output: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2888 1836 ? Ss 07:34 0:00 /bin/sh /usr/bin/xvfb-run -a "path/to/exe" root 16 0.0 1.6 196196 66256 ? S 07:34 0:00 Xvfb :99 -screen 0 1280x1024x24 -nolisten tcp -auth /tmp/xvfb-run.r5gaBO/Xauthority root 35 0.0 0.0 7016 1552 ? Rs 10:31 0:00 ps aux It looks like it's stuck inside my code but my code does not have a loop that it can stuck inside, perhaps there is an error occurring (I don't think so since the exact same command is working when typed manually). If there is an error how can I see the console output? Why is my code get stuck and how can I get my desired output? Could there be an error caused by permissions (The code does a lot of stuff that requires permissions like writing to storage and reading files inside the pod, but like mentioned works normally when i run it via the command line)? | Apparently for anyone having a similar issue, we fixed it by adding the command we want to run at the end of the Dockerfile instead of passing it as a parameter inside django's container call like this: cmd["entrypoint.sh"] entrypoint.sh: xvfb-run -a "path/to/exe" Instead of calling it inside django like we did before and simply removing the command argument from the container call so it looked like this: container = client.V1Container(name=container_name, image=container_image, env=env_list, stdin=True, security_context=security) | 6 | 4 |
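A minimal sketch of that fix, assuming entrypoint.sh sits next to the Dockerfile and contains the single line xvfb-run -a "path/to/exe"; the Kubernetes container spec built from Django then needs no command/args at all:

```dockerfile
FROM python:3.8
WORKDIR /app
COPY . .
RUN pip3 install poetry && poetry install
RUN chmod +x /app/entrypoint.sh
# run the virtual display + executable when the pod's container starts
CMD ["/app/entrypoint.sh"]
```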
73,302,347 | 2022-8-10 | https://stackoverflow.com/questions/73302347/how-to-handle-files-imported-using-import-in-pyinstaller | Here's an example of the file-structure of my app, which I'm trying to turn into a standalone distributable one-dir application using auto-py-to-exe: - plugins/file1.py - plugins/file2.py - plugins/... - plugins/display/file1.py - plugins/display/... - main.py - UI.py The files which are under the plugins directory are dynamically imported using __import__. Here's the part of the code that does that: for plugin_filename in plugin_files[ plugin_path ]: plugin_file = plugin_filename.replace('.pye', '').replace('.py', '') plugin = __import__(plugin_file) Of course, this doesn't get recognized by the Pyinstaller analysis and the plugin files don't get bundled into the exe. From the documentation I've gathered that I need to use the --hidden-import option, so I tried this but with no success. This results in ERROR: Hidden import not found: Is there any way I can add these files to the exe? | I've managed to solve this by adding the absolute path of the plugins folder to paths and adding file1, file2, etc. to the --hidden-import option. I also had to get rid of the if statement that checks if the file is a Python one and replaced that block with a loop that looks something like this: modules = ['file1', 'file2'] for module in modules: plugin = __import__(module) | 4 | 0
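For reference, the equivalent PyInstaller command-line flags are --paths (extra module search path) and one --hidden-import per plugin module; a hedged sketch using the question's file names:

```
pyinstaller --onedir main.py \
    --paths plugins \
    --hidden-import file1 \
    --hidden-import file2
```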
73,363,017 | 2022-8-15 | https://stackoverflow.com/questions/73363017/cannot-connect-to-mongodb-when-tunneling-ssh-connection | I am developing a GUI using Flask. I am running Ubuntu 20.04. The data that I need is from MongoDB, which is running on a Docker container on a server. My Mongo URI is "mongodb://<USERNAME>:<PASSWORD>@localhost:9999/<DB>?authSource=<AUTHDB>". First I tunnel the SSH connection to the server: ssh -L 9999:localhost:27017 -N <USER>@<HOST> When I connect to a Mongo shell after tunneling the connection, it works as expected. The issue is when I try to run the Flask app. PyMongo cannot seem to connect to the database (connection refused). pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:9999: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 62fa50c9015294b6e1df5811, topology_type: Unknown, servers: [<ServerDescription ('127.0.0.1', 9999) server_type: Unknown, rtt: None, error=AutoReconnect('127.0.0.1:9999: [Errno 111] Connection refused')>]> Another error I've been seeing is PyMongo authentication failed. This only happens if I have a Docker container with Mongo running on my local machine. (The previous error only happens when the local container is not running). pymongo.errors.OperationFailure: Authentication failed., full error: {'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed', '$clusterTime': {'clusterTime': Timestamp(1660576452, 1), 'signature': {'hash': b'\x16\x8c\x11\xb2Uf\xe3\x80\x85\xb0 >\xcb\xbe\xa5S\x8f\xec(g', 'keyId': 7128054189753630722}}, 'operationTime': Timestamp(1660576452, 1)} One last thing: I have not experienced any errors when trying to connect to the local MongoDB running in a local Docker container. This leads me to believe that I am missing a step when connecting to the remote MongoDB. And it's not a problem with the URI since I can log into a Mongo shell with it when the SSH connection is up. Here is some code from my Flask app: init.py from flask import Flask from flask_bootstrap import Bootstrap from .frontend import frontend from .database import mongo def create_app(configfile=None): app = Flask(__name__) app.config["MONGO_URI"] = "mongodb://<USERNAME>:<PASSWORD>@localhost:9999/<DB>?authSource=<AUTHDB>" Bootstrap(app) mongo.init_app(app) app.register_blueprint(frontend) app.secret_key = os.environ['SECRET_KEY'] app.config['BOOTSTRAP_SERVER_LOCAL'] = True return app database.py from flask_pymongo import PyMongo mongo = PyMongo() frontend.py from flask import Blueprint from .database import mongo frontend = Blueprint('frontend', __name__) # Rest of file contains routes with calls to database Does anyone know why I am getting these errors? Thanks. | I discovered that pymongo isn't setting the port correctly, possibly due to a bug in the package. Even though I specify the port as 9999, it thinks it should connect to 27017. If I set the URI to "mongodb://<USERNAME>:<PASSWORD>@garbage:9999/<DB>?authSource=<AUTHDB>", then it will actually try to connect to port 9999. I ended up changing my ssh tunnel to port 27017, and everything worked. ssh -L 27017:localhost:27017 -N <USER>@<HOST> | 4 | 2 |
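A minimal sketch for verifying the tunnel and credentials before wiring them into Flask (all names are placeholders); the short server-selection timeout makes a bad tunnel or bad auth fail fast instead of hanging for the default 30 seconds:

```python
from pymongo import MongoClient

# assumes: ssh -L 27017:localhost:27017 -N <USER>@<HOST> is already running
client = MongoClient(
    "mongodb://USERNAME:PASSWORD@localhost:27017/DB?authSource=AUTHDB",
    serverSelectionTimeoutMS=5000,
)
print(client.admin.command("ping"))   # raises if the tunnel or auth is wrong
```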
73,378,545 | 2022-8-16 | https://stackoverflow.com/questions/73378545/pip-install-gives-error-on-some-packages | Some packages give errors when I try to install them using pip install. This is the error when I try to install chatterbot, but some other packages give this error as well: pip install chatterbot Collecting chatterbot Using cached ChatterBot-1.0.5-py2.py3-none-any.whl (67 kB) Collecting pint>=0.8.1 Downloading Pint-0.19.2.tar.gz (292 kB) ββββββββββββββββββββββββββββββββββββββββ 292.0/292.0 kB 1.6 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting pyyaml<5.2,>=5.1 Using cached PyYAML-5.1.2.tar.gz (265 kB) Preparing metadata (setup.py) ... done Collecting spacy<2.2,>=2.1 Using cached spacy-2.1.9.tar.gz (30.7 MB) Installing build dependencies ... error error: subprocess-exited-with-error Γ pip subprocess to install build dependencies did not run successfully. β exit code: 1 β°β> [35 lines of output] Collecting setuptools Using cached setuptools-65.0.1-py3-none-any.whl (1.2 MB) Collecting wheel<0.33.0,>0.32.0 Using cached wheel-0.32.3-py2.py3-none-any.whl (21 kB) Collecting Cython Using cached Cython-0.29.32-py2.py3-none-any.whl (986 kB) Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.6-cp310-cp310-win_amd64.whl (36 kB) Collecting preshed<2.1.0,>=2.0.1 Using cached preshed-2.0.1.tar.gz (113 kB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'error' error: subprocess-exited-with-error python setup.py egg_info did not run successfully. exit code: 1 [6 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\oguls\AppData\Local\Temp\pip-install-qce7tdof\preshed_546a51fe26c74852ab50db073ad57f1f\setup.py", line 9, in <module> from distutils import ccompiler, msvccompiler ImportError: cannot import name 'msvccompiler' from 'distutils' (C:\Users\oguls\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\__init__.py) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ pip subprocess to install build dependencies did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I don't specifically know which packages cause this error, a lot of them install without any problems. I have tried updating pip, changing environment variables and other possible solutions I've found on the internet, but nothing seems to work. Edit: The package I am trying to install supports my Python version. | The real error in your case is: ImportError: cannot import name 'msvccompiler' from 'distutils' It occured because setuptools has broken distutils in version 65.0.0 (and has already fixed it in version 65.0.2). 
According to your log, the error occurred in your global setuptools installation (see the path in the error message), so you need to update it with the following command: pip install -U setuptools Those packages, however, may still not get installed or may not work properly, as the module causing this error doesn't support the compiler versions needed for currently supported versions of Python. | 12 | 9
73,371,336 | 2022-8-16 | https://stackoverflow.com/questions/73371336/setup-pys-install-requires-how-do-i-specify-python-version-range-for-a-specif | I'm working on a python project and the package supports python 3.6 - 3.10. There were these 2 lines in the install_requires list in setup.py: "numpy>=1.18.5, <=1.19.5; python_version=='3.6'", "numpy>=1.19.5; python_version>='3.7'", And I tried to change them to "numpy>=1.18.5, <=1.19.5; python_version=='3.6'", "numpy>=1.23.1; python_version>='3.10'", "numpy>=1.19.5; python_version>='3.7', <'3.10'", And when I ran python setup.py install, I got this error: $ python setup.py install # yeah, I know `pip install .` is a better command. error in mypackage setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Parse error at "", <'3.10"": Expected string_end I tried some different variants to specify a python_version range of 3.7 to 3.9, but none of them worked. So how do I specify python version range for a specific dependency in setup.py? | Referring to Complete Grammar, you can use and to achieve your purpose. "numpy>=1.18.5, <=1.19.5; python_version=='3.6'", "numpy>=1.23.1; python_version>='3.10'", "numpy>=1.19.5; python_version>='3.7' and python_version<'3.10'", | 6 | 5 |
73,375,390 | 2022-8-16 | https://stackoverflow.com/questions/73375390/how-to-override-env-file-during-tests | I'm reading env variables from .prod.env file in my config.py: from pydantic import BaseSettings class Settings(BaseSettings): A: int class Config: env_file = ".prod.env" env_file_encoding = "utf-8" settings = Settings() in my main.py I'm creating the app like so: from fastapi import FastAPI from app.config import settings app = FastAPI() print(settings.A) I am able to override settings variables like this in my conftest.py: import pytest from fastapi.testclient import TestClient from app.main import app from app.config import settings settings.A = 42 @pytest.fixture(scope="module") def test_clinet(): with TestClient(app) as client: yield client This works fine, whenever I use settings.A I get 42. But is it possible to override the whole env_file from .prod.env to another env file .test.env? Also I probably want to call settings.A = 42 in conftest.py before I import app, right? | You can override the env file you use by creating a Settings instance with the _env_file keyword argument. From documentation: Passing a file path via the _env_file keyword argument on instantiation (method 2) will override the value (if any) set on the Config class. If the above snippets were used in conjunction, prod.env would be loaded while .env would be ignored. For example, this should work for your test - import pytest from fastapi.testclient import TestClient import app.config as conf from app.config import Settings # replace the settings object that you created in the module conf.settings = Settings(_env_file='.test.env') from app.main import app # just to show you that you changed the module-level # settings from app.config import settings @pytest.fixture(scope="module") def test_client(): with TestClient(app) as client: yield client def test_settings(): print(conf.settings) print(settings) And you could create a .test.env, set A=10000000, and run with pytest -rP conftest.py # stuff ----- Captured stdout call ----- A=10000000 A=10000000 This looks a little messy (though this is probably only used for test purposes), so I'd recommend not creating a settings object that is importable by everything in your code, and instead making it something you create in, say, your __main__ that actually creates and runs the app, but that's a design choice for you to make. | 6 | 3 |
73,376,661 | 2022-8-16 | https://stackoverflow.com/questions/73376661/saving-jupyter-notebook-session | I am currently trying to save my whole Jupyter Notebook environment (working through Anaconda 3). By environment, I mean all the objects created (dataframes, lists, tuples, models, ...). Unfortunately I don't have Linux, even though there seemed to be Linux command solutions. I tried finding a solution with pickle as recommended in the following topic, but it seems that you have to specify which objects you want to dump and load: saving-and-loading-multiple-objects-in-pickle What I would like is for everything to save and load as can be done with R, where you just save and load an .RData file. | You can use Dill to store your session: pip install dill Save a Notebook session: import dill dill.dump_session('notebook_env.db') Restore a Notebook session: import dill dill.load_session('notebook_env.db') | 3 | 6
73,375,944 | 2022-8-16 | https://stackoverflow.com/questions/73375944/is-there-a-python-function-to-compute-minimal-l2-norm-between-2-matrices-up-to-c | I have two sets of column vectors X = (X_1 ... X_n), Y = (Y_1 ... Y_n) of the same shape. I would like to compute something like this: i.e. the minimal L2 norm between X and Y up to column permutation. Is it possible to do it in less than O(n!)? Is it already implemented in Numpy for instance? Thank you in advance. | Apply scipy.optimize.linear_sum_assignment to the cost matrix A where A_ij = ||X_i - Y_j||. | 3 | 5
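A minimal sketch of that approach (vectors stored as rows here); linear_sum_assignment runs in polynomial time rather than O(n!). If the target is the overall norm ||X - Y P|| (i.e. minimising the sum of squared column distances), use the squared pairwise distances as the cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))
Y = X[rng.permutation(10)]              # a permuted copy of X

cost = cdist(X, Y) ** 2                 # cost[i, j] = ||X_i - Y_j||^2
row_ind, col_ind = linear_sum_assignment(cost)
print(col_ind)                                   # recovered permutation
print(np.sqrt(cost[row_ind, col_ind].sum()))     # minimal norm (0.0 here)
```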
73,373,275 | 2022-8-16 | https://stackoverflow.com/questions/73373275/modifying-horizontal-bar-size-in-subplot | I am trying to add a horizontal bar at the bottom of each pie chart in a figure. I am using subplots to achieve this, but I don't know how to customise the subplots with the horizontal bars. import matplotlib.pyplot as plt fig, axes = plt.subplots(28, 11) countery=0 for y in range(1,15): counterx=0 for x in range(1,12): axes[countery,counterx].pie([70,20]) axes[countery+1,counterx].barh('a',40,height=1) axes[countery+1,counterx].barh('a',60,left=40,height=1) axes[countery+1,counterx].axis('off') counterx=counterx+1 countery=countery+2 plt.show() I would like to change the size of the horizontal bar so it doesn't take all the horizontal space. and make it look smaller overall. I would like it to look something like this for each: I've tried changing wspace and hspace in plt.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9, wspace=0.2, hspace=5) but no luck. Any help with this? | You can set the box_aspect of the bar charts. Also, using 14 rows with subgridspecs of 2 rows each is slightly faster than the 28 rows gridspec: import matplotlib.pyplot as plt fig = plt.figure() gs = fig.add_gridspec(14, 11) for y in range(gs.nrows): for x in range(gs.ncols): ax_pie, ax_bar = gs[y, x].subgridspec(2, 1, height_ratios=[5, 1]).subplots() ax_pie.pie([70,20]) ax_bar.barh('a', 40, height=1) ax_bar.barh('a', 60, left=40, height=1) ax_bar.set_box_aspect(1/5) ax_bar.axis('off') | 3 | 3 |
73,371,934 | 2022-8-16 | https://stackoverflow.com/questions/73371934/how-to-center-and-coloured-the-button | I have an app which convert the image into pencil sketch in that app i need three changes in the buttons Need to align the both buttons into center Need to give some colour to the buttons The both button should be in same size Sample Code: import streamlit as st #web app and camera import numpy as np # for image processing from PIL import Image #Image processing import cv2 #computer vision def dodgeV2(x, y): return cv2.divide(x, 255 - y, scale=256) def pencilsketch(inp_img): img_gray = cv2.cvtColor(inp_img, cv2.COLOR_BGR2GRAY) img_invert = cv2.bitwise_not(img_gray) img_smoothing = cv2.GaussianBlur(img_invert, (21, 21),sigmaX=0, sigmaY=0) final_img = dodgeV2(img_gray, img_smoothing) return(final_img) def download_image(x): with open(x, "rb") as file: btn = st.download_button( label="Download image", data=file, file_name=x, mime="image/jpg" ) def email_box(x): if st.checkbox("Email"): form = st.form(key='my-form') name = form.text_input('Enter your name') submit = form.form_submit_button('Send Email') if submit: st.write(f'x {name}') file_image = st.camera_input(label = "Take a pic of you to be sketched out") if file_image: input_img = Image.open(file_image) final_sketch = pencilsketch(np.array(input_img)) st.write("**Output Pencil Sketch**") st.image(final_sketch, use_column_width=True) download_image("final_image.jpeg") email_box("hello") else: st.write("You haven't uploaded any image file") | I have modified the above code. Hope it helps customized_button = st.markdown(""" <style > .stDownloadButton, div.stButton {text-align:center} .stDownloadButton button, div.stButton > button:first-child { background-color: #ADD8E6; color:#000000; padding-left: 20px; padding-right: 20px; } .stDownloadButton button:hover, div.stButton > button:hover { background-color: #ADD8E6; color:#000000; } } </style>""", unsafe_allow_html=True) | 4 | 7 |
73,361,556 | 2022-8-15 | https://stackoverflow.com/questions/73361556/error-discord-errors-notfound-404-not-found-error-code-10062-unknown-inter | I'm making a discord bot that sends a message with two buttons. Both buttons sends a message with a picture/gif when pressed. One of them works but the other one gives an error: raise NotFound(response, data) discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction Here is the full code: import os import discord from discord.ext import commands from discord.ui import Button from discord.ui import View from dotenv import load_dotenv import random intents = discord.Intents.default() intents.message_content = True load_dotenv() TOKEN = os.getenv('Sommer_Challenge_2022_TOKEN') bot = commands.Bot(command_prefix=';', intents=intents, help_command=None) channel = bot.get_channel(channel id here) #facts about sea and beach #fakta om hav og strand fact1 = ('Verdens lΓ¦ngste strand hedder "Praia Do Cassino". Den ligger i brasilien og er 241 km lang.ποΈ') fact2 = ('Havet dΓ¦kker omkring 71% af jordens overflade.π') fact3 = ('Ca. 73% af folk der besΓΈger stranden, gΓ₯r i vandet.π') fact4 = ('Der udledes omkring 8-10 tons plastik i havet hvert Γ₯r. Det svarer til ca. 375.000 halvliters plastikflasker.') fact5 = ('Over 400 milioner amerikanere gΓ₯r pΓ₯ stranden hvert Γ₯r.') fact6 = ('Det RΓΈde Hav er det salteste hav i verden. Vandet indenholder ca. 60 gram salt pr. liter.π§') fact7 = ('Ca. 94% af dyrelivet findes havet.π') fact8 = ('Man siger at regnskoven er "jordens lunger", men i virkeligheden producere havet mere end 70% af alt ilt.') fact9 = ('Det er solen som gΓΈr vandet blΓ₯t. Det samme gΓ¦lder himlen.βοΈ') fact10 = ('Hvert Γ₯r drΓ¦ber hajer mellem fem til ti mennesker. Til gengΓ¦ld drΓ¦ber mennesker omkring 100 millioner hajer om Γ₯ret.π¦') fact11 = ('Ved vesterhavet kan man se bunkere fra 2. verdenskrig.') fact12 = ('Verdens stΓΈrste sandslot har en diameter pΓ₯ 32 meter og har en hΓΈjde pΓ₯ 21 meter.') #Scratch games about sea and beach #Scratch spil om hav og strand game_button1 = Button(label="Scratch", url='https://scratch.mit.edu/projects/119134771/') game_button2 = Button(label="Scratch", url='https://scratch.mit.edu/projects/113456625/') game_button3 = Button(label="Scratch", url='https://scratch.mit.edu/projects/20743182/') game_button4 = Button(label="Scratch", url='https://scratch.mit.edu/projects/16250800/') game_button5 = Button(label="Scratch", url='https://scratch.mit.edu/projects/559210446/') game_button6 = Button(label="Scratch", url='https://scratch.mit.edu/projects/73644874/') game_button7 = Button(label="Scratch", url='https://scratch.mit.edu/projects/546214248/') game_button8 = Button(label="Scratch", url='https://scratch.mit.edu/projects/571081880/') #tells when bot goes online #fortΓ¦ller nΓ₯r en bot gΓ₯r online @bot.event async def on_ready(): channel = bot.get_channel(channel name here) await channel.send('Jeg er online!') print(f'{bot.user.name} has connected to Discord!') print(f'conected to: {channel}') @bot.event async def on_member_join(member): await member.send(f'Hej {member.mention}, velkommen pΓ₯ stranden. Nyd solen!βοΈ') #does stuff when a specific message is recived #gΓΈr ting nΓ₯r en bestemt besked er modtaget @bot.event async def on_message(msg): if msg.author != bot.user: if msg.content == (';info'): await msg.channel.send(f'{bot.user.mention} er lavet af "username" i fobindelse med sommer challenge 2022. 
\n Hvis du har nogle spΓΈgsmΓ₯l eller har brug for hjΓ¦lp, er du velkommen til at sende en dm til "username". \n Kun et af scratch spillene der er givet link til er lavet af "username" \n Piratskibet.dk brugernavn: other username') elif msg.content == (';fact'): await msg.channel.send(random.choice([fact1, fact2, fact3, fact5, fact6])) elif msg.content == (';game'): view = View() button = random.choice([game_button1, game_button2, game_button3, game_button4, game_button5, game_button6, game_button7, game_button8]) view.add_item(button) if button != game_button8: await msg.channel.send(view=view) else: await msg.channel.send('Det her spil er lavet af "username"', view=view) elif 'hello' in msg.content: msg.channel.send(f'hello {msg.author.mention}!') elif 'hej' in msg.content: await msg.channel.send(f'hej {msg.author.mention}!') elif msg.content == (';choice'): embed = discord.Embed(title="", description="", color=0xc27c0e) file = discord.File(r"C:\Users\username\vs-code-files\Sommer_challenge_2022\sandslot_fΓΈr_DESTROY.png", filename="sandslot_fΓΈr_DESTROY.png") embed.set_image(url="attachment://sandslot_fΓΈr_DESTROY.png") embed.set_footer(text="ΓdelΓ¦g sandslottet?") button = Button(label="ΓdelΓ¦g!", style=discord.ButtonStyle.danger) button2 = Button(label="Lad det vΓ¦re", style=discord.ButtonStyle.success) view = View() view.add_item(button) view.add_item(button2) async def button2_callback(interaction): await msg.delete() embed = discord.Embed(title="", description="", color=0xc27c0e) file = discord.File(r"C:\Users\username\vs-code-files\Sommer_challenge_2022\sandslot_don't_destroy.png", filename="sandslot_don't_destroy.png") embed.set_image(url="attachment://sandslot_don't_destroy.png") embed.set_footer(text="Ok") await interaction.response.send_message(file=file, embed=embed, view=None, delete_after=6.25) async def button_callback(interaction): await msg.delete() embed = discord.Embed(title="", description="", color=0xc27c0e) file = discord.File(r"C:\Users\username\Pictures\sandslot_destoy.gif", filename="sandcastle_destoy.gif") embed.set_image(url="attachment://sandcastle_destoy.gif") embed.set_footer(text="Sandslottet blev ΓΈdelagt") await interaction.response.send_message(file=file, embed=embed, view=None, delete_after=6.25) button.callback = button_callback button2.callback = button2_callback await msg.channel.send(file=file, embed=embed, view=view, delete_after=5) bot.run(TOKEN) Why does this happen? | With discord API you need to send an initial response within 3 seconds and afterward, you have 15 minutes to send the follow-up message. You should look into deferring. You're uploading an image that might take some time and you might need to defer the message. Instead of doing : Interaction.response.send_message() Try : Interaction.response.defer() asyncio.sleep() Interaction.followup.send() | 5 | 12 |
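A minimal sketch of the defer/follow-up pattern applied to one of the button callbacks above (discord.py 2.x; the file name is a placeholder):

```python
import discord

async def button_callback(interaction: discord.Interaction):
    # acknowledge within 3 seconds so the interaction token stays valid
    await interaction.response.defer()
    # slow work (building the embed, loading the gif, ...) can happen here
    file = discord.File("sandcastle_destroy.gif")
    # up to 15 minutes later, deliver the real answer as a follow-up
    await interaction.followup.send(file=file)
```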
73,367,220 | 2022-8-15 | https://stackoverflow.com/questions/73367220/pythons-requests-triggers-cloudflares-security-while-accessing-etherscan-io | I am trying to parse/scrape https://etherscan.io/tokens website using requests in Python but I get the following error: etherscan.io Checking if the site connection is secure etherscan.io needs to review the security of your connection before proceeding. Ray ID: 73b56fc71bc276ed Performance & security by Cloudflare Now, I found a solution here: https://stackoverflow.com/a/62687390/4190159 but when I try to use this solution, I am still not being able to read the actual content of the website and getting a different error stated below. My code as follows: import requests from collections import OrderedDict from requests import Session import socket answers = socket.getaddrinfo('etherscan.io', 443) (family, type, proto, canonname, (address, port)) = answers[0] s = Session() headers = OrderedDict({ 'Accept-Encoding': 'gzip, deflate, br', 'Host': "grimaldis.myguestaccount.com", 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0' }) s.headers = headers response = s.get(f"https://{address}/tokens", headers=headers, verify=False).text print(response) Error for the above code as follows: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 976, in validate_conn conn.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 370, in connect ssl_context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/ssl.py", line 390, in ssl_wrap_socket return context.wrap_socket(sock) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in init self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 725, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/retry.py", line 439, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.67.8.107', port=443): Max retries exceeded with url: /tokens (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] 
sslv3 alert handshake failure (_ssl.c:833)'),)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "label_scrapper.py", line 16, in response = s.get(f"https://{address}/tokens", headers=headers, verify=False).text File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 543, in get return self.request('GET', url, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 514, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='172.67.8.107', port=443): Max retries exceeded with url: /tokens (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833)'),)) Somdips-MacBook-Pro:Downloads somdipdey$ python3 label_scrapper.py Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 976, in validate_conn conn.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 370, in connect ssl_context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/ssl.py", line 390, in ssl_wrap_socket return context.wrap_socket(sock) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in init self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 725, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/retry.py", line 439, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.67.8.107', port=443): Max retries exceeded with url: /tokens (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833)'),)) During 
handling of the above exception, another exception occurred: Traceback (most recent call last): File "label_scrapper.py", line 16, in response = s.get(f"https://{address}/tokens", headers=headers, verify=False).text File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 543, in get return self.request('GET', url, **kwargs) Somdips-MacBook-Pro:Downloads somdipdey$ python3 label_scrapper.py Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 976, in validate_conn conn.connect() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 370, in connect ssl_context=context, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/ssl.py", line 390, in ssl_wrap_socket return context.wrap_socket(sock) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 407, in wrap_socket _context=self, _session=session) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 814, in init self.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 1068, in do_handshake self._sslobj.do_handshake() File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 725, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/retry.py", line 439, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.67.8.107', port=443): Max retries exceeded with url: /tokens (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833)'),)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "label_scrapper.py", line 16, in response = s.get(f"https://{address}/tokens", headers=headers, verify=False).text File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 543, in get return self.request('GET', url, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File 
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 514, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='172.67.8.107', port=443): Max retries exceeded with url: /tokens (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:833)'),)) How to resolve this? | The website is under cloudflare protection. So you can use cloudscraper instead of requests to get rid of the protection. Now it's working fine. Example: from bs4 import BeautifulSoup import cloudscraper scraper = cloudscraper.create_scraper(delay=10, browser={'custom': 'ScraperBot/1.0',}) url = 'https://etherscan.io/tokens' req = scraper.get(url) soup = BeautifulSoup(req.content,'lxml') for tr in soup.select('table#tblResult tbody tr'): d= list(tr.stripped_strings) print(d) Output: ['1', 'Tether USD (USDT)', 'Tether gives you the joint benefits of open blockchain technology and traditional currency by converting your cash into a stable digital currency equivalent.', '$1.005', '0.000042\xa0Btc', '0.000534\xa0Eth', '0.40%', '$56,223,000,912.00', '$67,594,315,221.00', '$40,022,235,444.23', '4,384,118', '0.003%'] ['2', 'USD Coin (USDC)', 'USDC is a fully collateralized US Dollar stablecoin developed by CENTRE, the open source project with Circle being the first of several forthcoming issuers.', '$1.006', '0.000042\xa0Btc', '0.000534\xa0Eth', '0.44%', '$7,028,538,289.00', '$53,707,876,130.00', '$46,882,045,425.04', '1,470,173', '0.132%'] ['3', 'BNB (BNB)', 'Binance aims to build a world-class crypto exchange, powering the future\nof crypto finance.', '$316.6558', '0.013290\xa0Btc', '0.168135\xa0Eth', '-0.87%', '$1,131,380,099.00', '$51,088,380,246.00', '$5,250,000,297.97', '322,395', '0.001%'] ['4', 'Binance USD (BUSD)', 'Binance USD (BUSD) is a dollar-backed stablecoin issued and custodied by Paxos Trust Company, and regulated by the New York State Department of Financial Services. BUSD is available directly for sale 1:1 with USD on Paxos.com and will be listed for trading on Binance.', '$0.9976', '0.000042\xa0Btc', '0.000530\xa0Eth', '-0.30%', '$6,219,951,485.00', '$17,920,238,921.00', '$17,532,299,450.15', '125,632', '0.197%'] ['5', 'HEX (HEX)', "HEX.com averages 25% APY interest recently. HEX virtually lends value from stakers to non-stakers as staking reduces supply. The launch ends Nov. 19th, 2020 when HEX stakers get credited ~200B HEX. HEX's total supply is now ~350B. Audited 3 times, 2 security, and 1 economics.", '$0.0626', '0.000003\xa0Btc', '0.000033\xa0Eth', '-5.10%', '$22,598,794.00', '$10,856,229,132.00', '$36,158,058,204.94', '308,035', '-0.040%'] ['6', 'SHIBA INU (SHIB)', 'SHIBA INU is a 100% decentralized community experiment with it claims that 1/2 the tokens have been sent to Vitalik and the other half were locked to a Uniswap pool and the keys burned.', '$0.00', '0.000000\xa0Btc', '0.000000\xa0Eth', '-9.20%', '$2,070,477,368.00', '$9,155,756,506.00', '$15,489,873,503.95', '1,206,115', '0.088%'] ['7', 'stETH (stETH)', 'stETH is a token that represents staked ether in Lido, combining the value of initial deposit + staking rewards. 
stETH tokens are pegged 1:1 to the ETH staked with Lido and can be used as one would use ether, allowing users to earn Eth2 staking rewards whilst benefiting from Defi yields.', '$1,844.28', '0.077404\xa0Btc', '0.979260\xa0Eth', '-2.26%', '$3,408,944.00', '$7,909,446,933.00', '$3,418,574,006.52', '94,316', '0.215%'] ['8', 'Matic Token (MATIC)', 'Matic Network brings massive scale to Ethereum using an adapted version of Plasma with PoS based side chains. Polygon is a well-structured, easy-to-use platform for Ethereum scaling and infrastructure development.', '$0.9397', '0.000039\xa0Btc', '0.000499\xa0Eth', '-6.39%', '$529,032,596.00', '$7,551,024,649.00', '$9,397,310,544.00', '466,641', '0.083%'] ['9', 'Dai Stablecoin (DAI)', 'Multi-Collateral Dai, brings a lot of new and exciting features, such as support for new CDP collateral types and Dai Savings Rate.', '$1.005', '0.000042\xa0Btc', '0.000534\xa0Eth', '0.36%', '$635,956,564.00', '$6,800,555,162.00', '$9,848,650,590.65', '479,078', '-0.008%'] ['10', 'Wrapped BTC (WBTC)', 'Wrapped Bitcoin (WBTC) is an ERC20 token backed 1:1 with Bitcoin.\nCompletely transparent. 100% verifiable. Community led.', '$23,983.00', '1.006567\xa0Btc', '12.734291\xa0Eth', '-1.26%', '$263,338,154.00', '$5,928,934,190.00', '$6,279,085,162.00', '51,459', '0.058%'] ... so on cloudscraper | 6 | 4 |
73,364,802 | 2022-8-15 | https://stackoverflow.com/questions/73364802/qt-creator-design-mode-disabled-for-qt-quick-pyside6-project | In newly installed Qt Creator 8.0.1, the Design Mode is disabled. I can only code in QML in Edit Mode. I can easily reproduce the problem by just creating a new Python Qt Quick Project as shown below. The Design Button on the left menu is always disabled. I tried to add all modules related to the Design Mode with Qt maintenance tool, but it's still disabled. I really get stuck on this configuration problem. | You can enable the QmlDesigner by going to Help > About Plugins... and using the filter to find QmlDesigner. Activate the CheckBox and restart QtCreator. https://www.qt.io/blog/qt-creator-6-released The integrated Qt Quick Designer is now disabled by default. Qt Creator will open .ui.qml files in Qt Design Studio. This is a step towards a more integrated workflow between Qt Design Studio and Qt Creator (video). Qt Quick Designer is still there, you can manually enable it again by checking the QmlDesigner plugin in Help > About Plugins. | 3 | 6 |
73,366,049 | 2022-8-15 | https://stackoverflow.com/questions/73366049/extract-substring-from-dot-untill-colon-with-python-regex | I have a string that resembles the following string: 'My substring1. My substring2: My substring3: My substring4' Ideally, my aim is to extract 'My substring2' from this string with Python regex. However, I would also be pleased with a result that resembles '. My substring2:' So far, I am able to extract '. My substring2: My substring3:' with "\.\s.*:" Alternatively, I have been able to extract - by using Wiktor StribiΕΌew's solution that deals with a somewhat similar problem posted in How can i extract words from a string before colon and excluding \n from them in python using regex - 'My substring1. My substring2' specifically with r'^[^:-][^:]*' However, I have been unable, after many hours of searching and trying (I am quite new to regex), to combine the two results into a single effective regex expression that will extract 'My substring2' out of my aforementioned string. I would be eternally greatfull if someone could help me find to correct regex expression to extract 'My substring2'. Thanks! | You can use non-greedy regex (with ?): import re s = "My substring1. My substring2: My substring3: My substring4" print(re.search(r"\.\s*(.*?):", s).group(1)) Prints: My substring2 | 3 | 3 |
73,356,688 | 2022-8-15 | https://stackoverflow.com/questions/73356688/average-and-sums-task | Basic task of making code that can find the sum & average of 10 numbers from the user. My current situation so far: Sum = 0 print("Please Enter 10 Numbers\n") for i in range (1,11): num = int(input("Number %d =" %i)) sum = Sum + num avg = Sum / 10 However, I want to make it so that if the user inputs an answer such as "Number 2 = 1sd" it doesn't stop the code immediately. Instead I'm looking for it to respond as "Invalid input. Try again" and then it restarts from the failed input. e.g. "Number 2 = 1sd" "Invalid input. Try again." "Number 2 = ..." How would I achieve that? | You can output an error message and prompt for re-entry of the ith number by prompting in a loop which outputs an error message and reprompts on user input error. This can be done by handling the ValueError exception that is raised when the input to the int() function is not a valid integer string, such as "1sd". That can be done by catching that exception, outputting Invalid input. Try again. and continuing when that occurs. One possible resulting code that may satisfy your requirements is: Sum = 0 print("Please Enter 10 Numbers\n") for i in range (1,11): num = None while num is None: try: num = int(input("Number %d =" %i)) except ValueError: print('Invalid input. Try again.') Sum = Sum + num avg = Sum / 10 | 3 | 3
73,335,410 | 2022-8-12 | https://stackoverflow.com/questions/73335410/how-to-read-sys-stdin-containing-binary-data-in-python-ignore-errors | How do I read sys.stdin, but ignoring decoding errors? I know that sys.stdin.buffer exists, and I can read the binary data and then decode it with .decode('utf8', errors='ignore'), but I want to read sys.stdin line by line. Maybe I can somehow reopen the sys.stdin file but with errors='ignore' option? | Found three solutions from here as Mark Setchell mentioned. import sys import io def first(): with open(sys.stdin.fileno(), 'r', errors='ignore') as f: return f.read() def second(): sys.stdin = io.TextIOWrapper(sys.stdin.buffer, errors='ignore') return sys.stdin.read() def third(): sys.stdin.reconfigure(errors='ignore') return sys.stdin.read() print(first()) #print(second()) #print(third()) Usage: $ echo 'a\x80b' | python solution.py ab | 4 | 1 |
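A minimal sketch of the third variant combined with line-by-line iteration, which is what the question asked for:

```python
import sys

sys.stdin.reconfigure(errors="ignore")   # Python 3.7+: drop undecodable bytes
for line in sys.stdin:                   # still iterates line by line
    print(line.rstrip("\n"))
```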
73,347,010 | 2022-8-13 | https://stackoverflow.com/questions/73347010/why-do-i-get-an-error-when-trying-to-read-a-file-in-geopandas-included-datasets | I've just installed Anaconda in my new laptop, and created an environment with geopandas installed in it. I've tried to upload the world map that comes with geopandas through the following code: import geopandas as gpd world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) But I obtain the following error message: File ~/anaconda3/envs/mapas_test/lib/python3.8/site-packages/shapely/geometry/base.py:854, in BaseMultipartGeometry.__array_interface__(self) 851 @property 852 def __array_interface__(self): 853 """Provide the Numpy array protocol.""" --> 854 raise NotImplementedError("Multi-part geometries do not themselves " 855 "provide the array interface") NotImplementedError: Multi-part geometries do not themselves provide the array interface Since this error has never appeared in my old laptop, I guess it is related to some problem during installation, but I could be wrong. Here are the technical details about the installation. OS: Ubuntu 22.04.1 Python version: 3.9.12 Conda version 4.13.0 geopandas version 0.9.0 Shapely version 1.7.1 And not sure if it is relevant, but the only other package installed in the environment is jupyter version 1.0.0 | This is caused by incompatibility of shapely 1.7 and numpy 1.23. Either update shapely to 1.8 or downgrade numpy, otherwise it won't work. | 6 | 14 |
73,353,612 | 2022-8-14 | https://stackoverflow.com/questions/73353612/why-is-cython-slower-than-pythonnumpy-here | I want to implement some fast convex analysis operations - proximal operators and the like. I'm new to Cython and thought that this would be the right tool for the job. I have implementations both in pure Python and in Cython (mwe_py.py and mwe_c.pyx below). However, when I compare them, the Python + Numpy version is significantly faster than the Cython version. Why is this? I have tried using memoryviews, which are supposed to allow for faster indexing/operations; however, the performance difference is very pronounced! Any advice on how to fix mwe_c.pyx below to approach "optimal" Cython code would be greatly appreciated. import pyximport; pyximport.install(language_level=3) import mwe_c import mwe_py import numpy as np from time import time n = 100000 nreps = 10000 x = np.random.randn(n) z = np.random.randn(n) tau = 1.0 t0 = time() for _ in range(nreps): out = mwe_c.prox_translation(mwe_c.prox_two_norm, x, z, tau) t1 = time() print(t1 - t0) t0 = time() for _ in range(nreps): out = mwe_py.prox_translation(mwe_py.prox_two_norm, x, z, tau) t1 = time() print(t1 - t0) which gives the outputs, respectively: 10.76103401184082 # (seconds) 5.988733291625977 # (seconds) Below is mwe_py.py: import numpy.linalg as la def proj_two_norm(x): """projection onto l2 unit ball""" X = la.norm(x) if X <= 1: return x return x / X def prox_two_norm(x, tau): """proximal mapping of l2 norm with parameter tau""" return x - proj_two_norm(x / tau) def prox_translation(prox_func, x, z, tau=None): """Compute prox(f(. - z))(x) where prox_func(x, tau) is prox(tau * f)(x).""" if tau is None: tau = 1.0 return z + prox_func(x - z, tau) And here, at last, is mwe_c.pyx: import numpy as np cimport numpy as np cdef double [::1] aasubtract(double [::1] x, double [::1] z): cdef unsigned int i, m = len(x), n = len(z); assert m == n, f"vectors must have the same length" cdef double [::1] out = np.copy(x); for i in range(n): out[i] -= z[i] return out cdef double [::1] vsdivide(double [::1] x, double tau): """Divide an array by a scalar element-wise.""" cdef: unsigned int i, n = len(x); double [::1] out = np.copy(x); for i in range(n): out[i] /= tau return out cdef double two_norm(double [::1] x): cdef: double out = 0.0; unsigned int i, n=len(x); for i in range(n): out = out + x[i]**2 out = out **.5 return out cdef double [::1] proj_two_norm(double [::1] x): """project x onto the unit two ball.""" cdef double x_norm = two_norm(x); cdef unsigned int i, n = len(x); cdef double [::1] p = np.copy(x); if x_norm <= 1: return p for i in range(n): p[i] = p[i] / x_norm return p cpdef double [::1] prox_two_norm(double [::1] x, double tau): """double [::1] prox_two_norm(double [::1] x, double tau)""" cdef unsigned int i, n = len(x); cdef double [::1] out = x.copy(), Px = x.copy(); Px = proj_two_norm(vsdivide(Px, tau)); for i in range(n): out[i] = out[i] - Px[i] return out cpdef prox_translation( prox_func, double [::1] x, double [::1] z, double tau=1.0 ): cdef: unsigned int i, n = len(x); double [::1] out = prox_func(aasubtract(x, z), tau); for i in range(n): out[i] += z[i]; return out | The main issue is that you compare an optimized Numpy code with a less-optimized Cython code. Indeed, Numpy makes use of SIMD instructions (like SSE and AVX/AVX2 on x86-64 processors) that are able to compute many items in a row. 
Cython uses the -O2 optimization level by default, which does not enable any auto-vectorization strategy, resulting in slower scalar code (unless you use a very recent version of GCC). You can use -O3 to tell most compilers (e.g. old GCC and Clang) to enable auto-vectorization. Note that this is not sufficient to produce very fast code. Indeed, compilers only use legacy SIMD instructions on x86-64 processors for the sake of compatibility. -mavx and -mavx2 enable the AVX/AVX-2 instruction set so as to produce faster code, assuming it is supported on your machine (otherwise it will simply crash). -mfma might also help. -march=native can also be used to select the best instruction set available on the target platform. Note that Numpy does this check (partially) at runtime (thanks to GCC-specific C features). The second main issue is that out = out + x[i]**2 results in a big loop-carried dependency chain that compilers cannot optimize without breaking the IEEE-754 standard. Indeed, there is a very long chain of additions to perform, and the processor cannot execute this faster than executing each addition instruction serially with the current code. The thing is, adding two floating-point numbers has a quite big latency (typically 3 to 4 cycles on fairly modern x86-64 processors). This means the processor cannot pipeline the instructions. In fact, modern processors can often execute two additions in parallel (per core), but the current loop prevents that. In the end, this loop is completely latency-bound. You can fix this problem by unrolling the loop manually. Using -ffast-math can help compilers do such optimizations, but at the expense of breaking the IEEE-754 standard. If you use this option, then you must not use special values like NaN numbers nor some operations. For more information, please read What does gcc's ffast-math actually do? . Moreover, note that array copies are expensive and I am not sure all copies are needed. You can create a new empty array and fill it rather than operating on a copy of an array. This will be faster, especially for big arrays. Finally, divisions are slow. Please consider a multiplication by the inverse. Compilers cannot do this optimization because of the IEEE-754 standard, but you can easily do it. That being said, you need to make sure this is fine in your case, as it can slightly change the result. Using -ffast-math should also fix this problem automatically. Note that many Numpy developers know how compilers and processors work, so they already do manual optimizations to generate fast code (like I did several times). You can hardly beat Numpy when dealing with large arrays unless you merge loops or use multiple threads. Indeed, RAM is quite slow nowadays compared to computing units, and Numpy creates many temporary arrays. Cython can be used to avoid creating most of these temporary arrays. | 5 | 6
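A minimal setup.py sketch passing the compiler flags discussed above to the Cython extension (remember that -ffast-math relaxes IEEE-754 semantics, and -march=native ties the binary to the build machine):

```python
from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

ext = Extension(
    "mwe_c",
    sources=["mwe_c.pyx"],
    include_dirs=[np.get_include()],                        # for cimport numpy
    extra_compile_args=["-O3", "-march=native", "-ffast-math"],
)

setup(ext_modules=cythonize(ext, language_level=3))
```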
73,353,608 | 2022-8-14 | https://stackoverflow.com/questions/73353608/why-does-argparse-not-accept-as-argument | My script takes -d, --delimiter as argument: parser.add_argument('-d', '--delimiter') but when I pass it -- as delimiter, it is empty script.py --delimiter='--' I know -- is special in argument/parameter parsing, but I am using it in the form --option='--' and quoted. Why does it not work? I am using Python 3.7.3 Here is test code: #!/bin/python3 import argparse parser = argparse.ArgumentParser() parser.add_argument('--delimiter') parser.add_argument('pattern') args = parser.parse_args() print(args.delimiter) When I run it as script --delimiter=-- AAA it prints empty args.delimiter. | This looks like a bug. You should report it. This code in argparse.py is the start of _get_values, one of the primary helper functions for parsing values: if action.nargs not in [PARSER, REMAINDER]: try: arg_strings.remove('--') except ValueError: pass The code receives the -- argument as the single element of a list ['--']. It tries to remove '--' from the list, because when using -- as an end-of-options marker, the '--' string will end up in arg_strings for one of the _get_values calls. However, when '--' is the actual argument value, the code still removes it anyway, so arg_strings ends up being an empty list instead of a single-element list. The code then goes through an else-if chain for handling different kinds of argument (branch bodies omitted to save space here): # optional argument produces a default when not present if not arg_strings and action.nargs == OPTIONAL: ... # when nargs='*' on a positional, if there were no command-line # args, use the default if it is anything other than None elif (not arg_strings and action.nargs == ZERO_OR_MORE and not action.option_strings): ... # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: ... # REMAINDER arguments convert all values, checking none elif action.nargs == REMAINDER: ... # PARSER arguments convert all values, but check only the first elif action.nargs == PARSER: ... # SUPPRESS argument does not put anything in the namespace elif action.nargs == SUPPRESS: ... # all other types of nargs produce a list else: ... This code should go through the 3rd branch, # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: but because the argument is missing from arg_strings, len(arg_strings) is 0. It instead hits the final case, which is supposed to handle a completely different kind of argument. That branch ends up returning an empty list instead of the '--' string that should have been returned, which is why args.delimiter ends up being an empty list instead of a '--' string. This bug manifests with positional arguments too. For example, import argparse parser = argparse.ArgumentParser() parser.add_argument('a') parser.add_argument('b') args = parser.parse_args(["--", "--", "--"]) print(args) prints Namespace(a='--', b=[]) because when _get_values handles the b argument, it receives ['--'] as arg_strings and removes the '--'. When handling the a argument, it receives ['--', '--'], representing one end-of-options marker and one actual -- argument value, and it successfully removes the end-of-options marker, but when handling b, it removes the actual argument value. | 22 | 19 |
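The accepted answer explains the bug but offers no interim workaround. One possible workaround, not from the answer and only covering the --delimiter=-- form used in the question, is to smuggle the literal value past argparse with a placeholder and restore it afterwards:

```python
# Hedged workaround sketch: the placeholder string is an arbitrary choice,
# picked so it cannot collide with real user input.
import argparse
import sys

PLACEHOLDER = "\x00literal-double-dash\x00"

def parse_args(argv=None):
    argv = list(sys.argv[1:] if argv is None else argv)
    # Rewrite "--delimiter=--" style arguments before argparse sees them,
    # since argparse would otherwise strip the "--" value (the bug above).
    argv = [arg[:-2] + PLACEHOLDER if arg.endswith("=--") else arg for arg in argv]
    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--delimiter')
    parser.add_argument('pattern')
    args = parser.parse_args(argv)
    if args.delimiter == PLACEHOLDER:
        args.delimiter = '--'
    return args

print(parse_args(['--delimiter=--', 'AAA']))  # Namespace(delimiter='--', pattern='AAA')
```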
73,347,065 | 2022-8-13 | https://stackoverflow.com/questions/73347065/pyspark-data-frames-when-to-use-select-vs-withcolumn | I'm new to PySpark and I see there are two ways to select columns in PySpark, either with ".select()" or ".withColumn()". From what I've heard ".withColumn()" is worse for performance but other than that I'm confused as to why there are two ways to do the same thing. So when am I supposed to use ".select()" instead of ".withColumn()"? I've googled this question but I haven't found a clear explanation. | Using: df.withColumn('new', func('old')) where func is your spark processing code, is equivalent to: df.select('*', func('old').alias('new')) # '*' selects all existing columns As you see, withColumn() is very convenient to use (probably why it is available), however, as you noted, there are performance implications. See this post for details: Spark DAG differs with 'withColumn' vs 'select' | 3 | 6 |
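To make the equivalence (and the plan-size concern) concrete, here is a small, hypothetical example; the column names are made up:

```python
# Sketch: the two styles produce the same columns; chained withColumn calls
# just add one projection to the plan per call, which the linked post
# discusses as the performance concern.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2), (3, 4)], ["old1", "old2"])

df_a = (df.withColumn("new1", F.col("old1") + 1)
          .withColumn("new2", F.col("old2") * 2))

df_b = df.select("*",
                 (F.col("old1") + 1).alias("new1"),
                 (F.col("old2") * 2).alias("new2"))

print(df_a.schema == df_b.schema)  # True: both yield the same resulting columns
```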
73,346,395 | 2022-8-13 | https://stackoverflow.com/questions/73346395/unexpected-keyword-argument-when-running-lightgbm-on-gpu | When running the code below: import lightgbm as lgb params = {'num_leaves': 38, 'min_data_in_leaf': 50, 'objective': 'regression', 'max_depth': -1, 'learning_rate': 0.1, 'device': 'gpu' } trn_data = lgb.Dataset(x_train, y_train) val_data = lgb.Dataset(x_test, y_test) model = lgb.train(params, trn_data, 20000, valid_sets=[trn_data, val_data], verbose_eval=300, early_stopping_rounds=1000) I get the following errors: train() got an unexpected keyword argument 'verbose_eval' train() got an unexpected keyword argument 'early_stopping_rounds' It is important to note that I run this on GPU. When running it on CPU I do not get this error. Has anyone got an idea how I can incorporate verbose output and early stopping rounds when running LightGBM on GPU? | For LightGBM on GPU you can check the official documentation; it lists no configuration options named verbose_eval or early_stopping_rounds: Official Documentation. You can also check this link: Running LightGBM on GPU with python | 3 | 2 |
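The error itself suggests the GPU environment has a LightGBM build in which these keyword arguments were already removed from lgb.train (they were deprecated in the 3.x series and dropped later); that is an assumption about the asker's setup. In such versions the same behaviour is expressed through callbacks, which work identically on CPU and GPU builds:

```python
# Hedged sketch reusing the params/datasets from the question.
import lightgbm as lgb

model = lgb.train(
    params,
    trn_data,
    num_boost_round=20000,
    valid_sets=[trn_data, val_data],
    callbacks=[
        lgb.log_evaluation(period=300),            # replaces verbose_eval=300
        lgb.early_stopping(stopping_rounds=1000),  # replaces early_stopping_rounds=1000
    ],
)
```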
73,344,242 | 2022-8-13 | https://stackoverflow.com/questions/73344242/converting-float32-to-float64-takes-more-than-expected-in-numpy | I had a performance issue in a numpy project and then I realized that about three fourths of the execution time was wasted on a single line of code: error = abs(detected_matrix[i, step] - original_matrix[j, new]) and when I changed the line to error = abs(original_matrix[j, new] - detected_matrix[i, step]) the problem disappeared. I realized that the type of original_matrix was float64 and the type of detected_matrix was float32. By changing the type of either of these two variables the problem was solved. I was wondering whether this is a well-known issue. Here is some sample code that reproduces the problem: from timeit import timeit import numpy as np f64 = np.array([1.0], dtype='float64')[0] f32 = np.array([1.0], dtype='float32')[0] timeit_result = timeit(stmt="abs(f32 - f64)", number=1000000, globals=globals()) print(timeit_result) timeit_result = timeit(stmt="abs(f64 - f32)", number=1000000, globals=globals()) print(timeit_result) Output on my computer: 2.8707289 0.15719420000000017 which is quite strange. | TL;DR: Please use Numpy >= 1.23.0. This problem has been fixed in Numpy 1.23.0 (more specifically the version 1.23.0-rc1). This pull request rewrites the scalar math logic so as to make it faster in many cases, including in your specific use-case. With version 1.22.4, the former expression is 10 times slower than the latter. This is also true for earlier versions like 1.21.5. In 1.23.0, the former is only 10%-15% slower, but both take a very small time: 140 ns/operation versus 122 ns/operation. The small difference is due to a slightly different path taken in the type-checking part of the code. For more information about this low-level behavior, please read this post. Note that iterating over Numpy items is not meant to be very fast, nor is operating on Numpy scalars. If your code is limited by that, please consider converting Numpy scalars into Python ones, as stated in the 1.23.0 release notes: Many operations on NumPy scalars are now significantly faster, although rare operations (e.g. with 0-D arrays rather than scalars) may be slower in some cases. However, even with these improvements users who want the best performance for their scalars, may want to convert a known NumPy scalar into a Python one using scalar.item(). An even faster solution is to use Numba/Cython in this case, or just to try to vectorize the encompassing loop if possible. | 6 | 3 |
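Extending the question's benchmark with the release-note advice quoted in the answer: converting the NumPy scalars to plain Python floats with .item() keeps NumPy scalar math out of the measured statement entirely. No timings are claimed here, but on the older versions this avoids the slow mixed-scalar path.

```python
from timeit import timeit
import numpy as np

f64 = np.array([1.0], dtype='float64')[0]
f32 = np.array([1.0], dtype='float32')[0]
f32_py, f64_py = f32.item(), f64.item()  # plain Python floats

print(timeit(stmt="abs(f32 - f64)", number=1000000, globals=globals()))
print(timeit(stmt="abs(f32_py - f64_py)", number=1000000, globals=globals()))
```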
73,332,655 | 2022-8-12 | https://stackoverflow.com/questions/73332655/how-to-use-value-in-the-python-implementation-of-protobuf | I have a proto file defined as: syntax = "proto3"; import "google/protobuf/struct.proto"; package generic.name; message Message { uint32 increment = 1; google.protobuf.Value payload = 2; } I have figured out how to make this work if I swap the payload type from Value for Struct: struct = Struct() struct.update({"a": 1}) msg = Message(payload=struct, increment=1) However I cannot work out how to use the Value type in python. The python documentation for the protobuf Value field seems lacking compared to the other languages. Ultimately, all I want is to be able to have the payload data structure able to take a few different types (strings, ints, none, dict). What is the best way of achieving this? | Here's an example: from foo_pb2 import Bar from google.protobuf.struct_pb2 import Struct bar = Bar() bar.payload.string_value="Freddie" print(bar) print(bar.SerializeToString()) bar.payload.bool_value=True print(bar) print(bar.SerializeToString()) # No need to initialize the Struct bar.payload.struct_value.update({"foo":"bar"}) bar.payload.struct_value["baz"]=5 print(bar) print(bar.SerializeToString()) | 4 | 3 |
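For the other types the question mentions (ints, None, dict), the remaining kind fields of Value can be used the same way. The sketch below assumes the question's .proto was compiled to a module named message_pb2 containing a Message class (these names are assumptions); note that payload is a oneof, so each assignment replaces the previous value:

```python
from google.protobuf import struct_pb2
from message_pb2 import Message  # hypothetical generated module name

msg = Message(increment=1)
msg.payload.number_value = 42                              # ints/floats (stored as double)
msg.payload.null_value = struct_pb2.NullValue.NULL_VALUE   # the protobuf counterpart of None
msg.payload.struct_value.update({"a": 1, "b": "two"})      # dict -> Struct

print(msg.payload.WhichOneof("kind"))  # "struct_value": the last kind that was set
print(msg.SerializeToString())
```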
73,333,044 | 2022-8-12 | https://stackoverflow.com/questions/73333044/change-elements-of-a-numpy-array-based-on-a-return-value-of-a-function-to-which | I have an array of RGBA values that looks something like this: # Not all elements are [0, 0, 0, 0] array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], ..., [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]) I also have a function which returns one of 5 values that a certain RGBA value is closest to (green, red, orange, brown, white). def closest_colour(requested_colour): min_colours = {} for key, name in webcolors.CSS3_HEX_TO_NAMES.items(): if name in ['green', 'red', 'orange', 'brown', 'white']: r_c, g_c, b_c = webcolors.hex_to_rgb(key) rd = (r_c - requested_colour[0]) ** 2 gd = (g_c - requested_colour[1]) ** 2 bd = (b_c - requested_colour[2]) ** 2 min_colours[(rd + gd + bd)] = name return min_colours[min(min_colours.keys())] I'd like to apply this function to each element of my numpy array and change those elements. I tried doing it this way: img_array[closest_colour(img_array) == 'green'] = (0, 255, 0, 1) img_array[closest_colour(img_array) == 'red'] = (255, 0, 0, 1) img_array[closest_colour(img_array) == 'brown'] = (92, 64, 51, 1) img_array[closest_colour(img_array) == 'orange'] = (255, 165, 0, 1) img_array[closest_colour(img_array) == 'white'] = (255, 255, 255, 0) but I get an error: TypeError: unhashable type: 'numpy.ndarray' I am aware of why this error occurs but I also don't know a different way to do this efficiently. Is there a way to do this efficiently as I'm working with a fairly large array (image)? | I would rewrite your function to be a bit more vectorized. First, you really don't need to loop through the entire dictionary of CSS colors for every pixel: the lookup table can be trivially precomputed. Second, you can map the five colors you want to RGBA values without using the names as an intermediary. This will make your life much easier since you'll be working with numbers instead of strings most of the time. names = dict.fromkeys(['green', 'red', 'orange', 'brown', 'white']) for key, name in webcolors.CSS3_HEX_TO_NAMES.items(): if name in names: names[name] = key lookup = np.array([webcolors.hex_to_rgb(key) + (1,) for key in names.values()]) Since the number of colors is small, you can compute an Nx5 array of distances to the colors: distance = ((rgba[..., None, :] - lookup)**2).sum(axis=-1) If you don't want to include the transparency in the distance, remove it from the comparison: distance = ((rgba[..., None, :3] - lookup[..., :3])**2).sum(axis=-1) This gives you an Nx5 array of distances (where N can be more than one dimension, because of the intentional use of ... instead of :). The minima are at closest = distance.argmin(-1) Now you can apply this index directly to the lookup table: result = lookup[closest] Here is a sample run: >>> np.random.seed(42) >>> rgba = np.random.randint(255, size=(10, 4)) >>> rgba array([[102, 179, 92, 14], [106, 71, 188, 20], [102, 121, 210, 214], [ 74, 202, 87, 116], [ 99, 103, 151, 130], [149, 52, 1, 87], [235, 157, 37, 129], [191, 187, 20, 160], [203, 57, 21, 252], [235, 88, 48, 218]]) >>> lookup = np.array([ ... [0, 255, 0, 1], ... [255, 0, 0, 1], ... [92, 64, 51, 1], ... [255, 165, 0, 1], ... 
[255, 255, 255, 0]], dtype=np.uint8) >>> distance = ((rgba[..., None, :3] - lookup[..., :3])**2).sum(axis=-1) >>> distance array([[ 24644, 63914, 15006, 32069, 55754], [ 80436, 62586, 19014, 66381, 60546], [ 72460, 82150, 28630, 69445, 43390], [ 15854, 81134, 20664, 41699, 63794], [ 55706, 57746, 11570, 50981, 58256], [ 63411, 13941, 5893, 24006, 116961], [ 66198, 26418, 29294, 1833, 57528], [ 41505, 39465, 25891, 4980, 63945], [ 80854, 6394, 13270, 14809, 96664], [ 85418, 10448, 21034, 8633, 71138]]) >>> closest = distance.argmin(-1) >>> closest array([2, 2, 2, 0, 2, 2, 3, 3, 1, 3]) >>> lookup[closest] array([[ 92, 64, 51, 1], [ 92, 64, 51, 1], [ 92, 64, 51, 1], [ 0, 255, 0, 1], [ 92, 64, 51, 1], [ 92, 64, 51, 1], [255, 165, 0, 1], [255, 165, 0, 1], [255, 0, 0, 1], [255, 165, 0, 1]], dtype=uint8) | 3 | 3 |
73,323,222 | 2022-8-11 | https://stackoverflow.com/questions/73323222/error-in-azure-storage-explorer-with-azurite-the-first-argument-must-be-of-typ | I'm running an Azure function locally, from VSCode, that outputs a string to a blob. I'm using Azurite to emulate the output blob container. My function looks like this: import azure.functions as func def main(mytimer: func.TimerRequest, outputblob:func.Out[str]): outputblob.set("hello") My function.json: { "scriptFile": "__init__.py", "bindings": [ { "name": "mytimer", "type": "timerTrigger", "direction": "in", "schedule": "0 * * * * *" }, { "name": "outputblob", "type": "blob", "dataType": "string", "direction": "out", "path": "testblob/hello" } ] } In local.settings.json, I've set "AzureWebJobsStorage": "UseDevelopmentStorage=true". The problem is, when I run the function and check in Azure Storage Explorer, the container is created (testblob) (along with 2 other containers: azure-webjobs-hosts and azure-webjobs-secrets) but it is empty and Azure Storage Explorer displays an error message when I refresh : The first argument must be of type string or an instance of Buffer, ArrayBuffer, or Array or an Array-like Object.Received undefined The function runs and doesn't return any error message. When I use a queue instead of a blob as output, it works and I can see the string in the emulated queue storage. When I use the blob storage in my Azure subscription instead of the emulated blob, it works as well, a new blob is created with the string. I've tried the following: clean and restart Azurite several times replace "UseDevelopmentStorage=true" by the connection string of the emulated storage reinstall Azure Storage Explorer I keep getting the same error message. I'm using Azure Storage Explorer Version 1.25.0 on Windows 11. Thanks for any help! | It looks like this is a known issue with the latest release (v1.25.0) of Azure Storage Explorer version see: https://github.com/microsoft/AzureStorageExplorer/issues/6008 Simplest solution is to uninstall and re-install an earlier version: https://github.com/microsoft/AzureStorageExplorer/releases/tag/v1.24.3 | 4 | 6 |
73,336,136 | 2022-8-12 | https://stackoverflow.com/questions/73336136/does-having-a-wrapper-object-return-value-e-g-integer-cause-auto-boxing-in-ja | I couldn't find a definitive answer for this seemingly simple question. If I write a method like this: public Integer getAnInt() { int[] i = {4}; return i[0]; } is the return value autoboxed into an Integer, or does it depend on what happens to the value after it's returned (e.g. whether the variable it is assigned to is declared as an Integer or int)? | Yes, boxed It will be (auto)boxed in the bytecode (.class file) because it's part of the public API, so other code might depend on the return value being an Integer. The boxing and unboxing might be removed at runtime by the JITter under the right circumstances, but I don't know if it does that sort of thing. | 9 | 8 |
73,320,708 | 2022-8-11 | https://stackoverflow.com/questions/73320708/set-run-description-programmatically-in-mlflow | Similar to this question, I'd like to edit/set the description of a run via code, instead of editing it via UI. To clarify, I don't want to set the description of my entire experiment, only of a single run. | There are two ways to set the description. 1. description parameter You can set a description using a markdown string for your run in mlflow.start_run() using the description parameter. Here is an example. if __name__ == "__main__": # load dataset and other stuff run_description = """ ### Header This is a test **Bold**, *italic*, ~~strikethrough~~ text. [And this is an example hyperlink](http://example.com/). """ with mlflow.start_run(description=run_description) as run: # train model and other stuff 2. mlflow.note.content tag You can set/edit the run description by setting the tag with the key mlflow.note.content, which is what the UI (currently) does under the hood. if __name__ == "__main__": # load dataset and other stuff run_description = """ ### Header This is a test **Bold**, *italic*, ~~strikethrough~~ text. [And this is an example hyperlink](http://example.com/). """ tags = { 'mlflow.note.content': run_description } with mlflow.start_run(tags=tags) as run: # train model and other stuff Result If you set both the description parameter and the mlflow.note.content tag in mlflow.start_run(), you'll get this error. Description is already set via the tag mlflow.note.content in tags. Remove the key mlflow.note.content from the tags or omit the description. | 5 | 14 |
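A possible third option, not covered in the answer: if the run has already finished, the same mlflow.note.content tag can be set afterwards through the tracking client (run_id below is a placeholder for an existing run's ID):

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
client.set_tag(run_id, "mlflow.note.content", "Updated **description** for this run.")
```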
73,332,533 | 2022-8-12 | https://stackoverflow.com/questions/73332533/django-4-error-no-time-zone-found-with-key | After rebuild of my Django 4 Docker container the web service stopped working with this error: zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key Asia/Hanoi' My setup is: Python 3.10 Django 4.0.5 Error: web_1 Traceback (most recent call last): web_1 File "/usr/local/lib/python3.10/zoneinfo/_common.py", line 12, in load_tzdata web_1 return importlib.resources.open_binary(package_name, resource_name) web_1 File "/usr/local/lib/python3.10/importlib/resources.py", line 46, in open_binary web_1 return reader.open_resource(resource) web_1 File "/usr/local/lib/python3.10/importlib/abc.py", line 433, in open_resource web_1 return self.files().joinpath(resource).open('rb') web_1 File "/usr/local/lib/python3.10/pathlib.py", line 1119, in open web_1 return self._accessor.open(self, mode, buffering, encoding, errors, web_1 FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.10/site-packages/tzdata/zoneinfo/Asia/Hanoi' web_1 web_1 During handling of the above exception, another exception occurred: web_1 web_1 Traceback (most recent call last): web_1 File "/home/app/web/manage.py", line 22, in <module> web_1 main() web_1 File "/home/app/web/manage.py", line 18, in main web_1 execute_from_command_line(sys.argv) web_1 File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line web_1 utility.execute() web_1 File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 420, in execute web_1 django.setup() web_1 File "/usr/local/lib/python3.10/site-packages/django/__init__.py", line 24, in setup web_1 apps.populate(settings.INSTALLED_APPS) web_1 File "/usr/local/lib/python3.10/site-packages/django/apps/registry.py", line 116, in populate web_1 app_config.import_models() web_1 File "/usr/local/lib/python3.10/site-packages/django/apps/config.py", line 304, in import_models web_1 self.models_module = import_module(models_module_name) web_1 File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module web_1 return _bootstrap._gcd_import(name[level:], package, level) web_1 File "<frozen importlib._bootstrap>", line 1050, in _gcd_import web_1 File "<frozen importlib._bootstrap>", line 1027, in _find_and_load web_1 File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked web_1 File "<frozen importlib._bootstrap>", line 688, in _load_unlocked web_1 File "<frozen importlib._bootstrap_external>", line 883, in exec_module web_1 File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed web_1 File "/usr/local/lib/python3.10/site-packages/django_celery_beat/models.py", line 8, in <module> web_1 import timezone_field web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/__init__.py", line 1, in <module> web_1 from timezone_field.fields import TimeZoneField web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 11, in <module> web_1 class TimeZoneField(models.Field): web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 41, in TimeZoneField web_1 default_zoneinfo_tzs = [ZoneInfo(tz) for tz in pytz.common_timezones] web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 41, in <listcomp> web_1 default_zoneinfo_tzs = [ZoneInfo(tz) for tz in pytz.common_timezones] web_1 File "/usr/local/lib/python3.10/zoneinfo/_common.py", 
line 24, in load_tzdata web_1 raise ZoneInfoNotFoundError(f"No time zone found with key {key}") web_1 zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key Asia/Hanoi' web_1 [2022-08-12 09:18:36 +0000] [1] [INFO] Starting gunicorn 20.0.4 web_1 [2022-08-12 09:18:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1) web_1 [2022-08-12 09:18:36 +0000] [1] [INFO] Using worker: sync web_1 [2022-08-12 09:18:36 +0000] [11] [INFO] Booting worker with pid: 11 web_1 [2022-08-12 12:18:37 +0300] [11] [ERROR] Exception in worker process web_1 Traceback (most recent call last): web_1 File "/usr/local/lib/python3.10/zoneinfo/_common.py", line 12, in load_tzdata web_1 return importlib.resources.open_binary(package_name, resource_name) web_1 File "/usr/local/lib/python3.10/importlib/resources.py", line 46, in open_binary web_1 return reader.open_resource(resource) web_1 File "/usr/local/lib/python3.10/importlib/abc.py", line 433, in open_resource web_1 return self.files().joinpath(resource).open('rb') web_1 File "/usr/local/lib/python3.10/pathlib.py", line 1119, in open web_1 return self._accessor.open(self, mode, buffering, encoding, errors, web_1 FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.10/site-packages/tzdata/zoneinfo/Asia/Hanoi' web_1 web_1 During handling of the above exception, another exception occurred: web_1 web_1 Traceback (most recent call last): web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker web_1 worker.init_process() web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py", line 119, in init_process web_1 self.load_wsgi() web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi web_1 self.wsgi = self.app.wsgi() web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi web_1 self.callable = self.load() web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 49, in load web_1 return self.load_wsgiapp() web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp web_1 return util.import_app(self.app_uri) web_1 File "/usr/local/lib/python3.10/site-packages/gunicorn/util.py", line 358, in import_app web_1 mod = importlib.import_module(module) web_1 File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module web_1 return _bootstrap._gcd_import(name[level:], package, level) web_1 File "<frozen importlib._bootstrap>", line 1050, in _gcd_import web_1 File "<frozen importlib._bootstrap>", line 1027, in _find_and_load web_1 File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked web_1 File "<frozen importlib._bootstrap>", line 688, in _load_unlocked web_1 File "<frozen importlib._bootstrap_external>", line 883, in exec_module web_1 File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed web_1 File "/home/app/web/config/wsgi.py", line 16, in <module> web_1 application = get_wsgi_application() web_1 File "/usr/local/lib/python3.10/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application web_1 django.setup(set_prefix=False) web_1 File "/usr/local/lib/python3.10/site-packages/django/__init__.py", line 24, in setup web_1 apps.populate(settings.INSTALLED_APPS) web_1 File "/usr/local/lib/python3.10/site-packages/django/apps/registry.py", line 116, in populate web_1 app_config.import_models() web_1 File 
"/usr/local/lib/python3.10/site-packages/django/apps/config.py", line 304, in import_models web_1 self.models_module = import_module(models_module_name) web_1 File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module web_1 return _bootstrap._gcd_import(name[level:], package, level) web_1 File "<frozen importlib._bootstrap>", line 1050, in _gcd_import web_1 File "<frozen importlib._bootstrap>", line 1027, in _find_and_load web_1 File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked web_1 File "<frozen importlib._bootstrap>", line 688, in _load_unlocked web_1 File "<frozen importlib._bootstrap_external>", line 883, in exec_module web_1 File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed web_1 File "/usr/local/lib/python3.10/site-packages/django_celery_beat/models.py", line 8, in <module> web_1 import timezone_field web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/__init__.py", line 1, in <module> web_1 from timezone_field.fields import TimeZoneField web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 11, in <module> web_1 class TimeZoneField(models.Field): web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 41, in TimeZoneField web_1 default_zoneinfo_tzs = [ZoneInfo(tz) for tz in pytz.common_timezones] web_1 File "/usr/local/lib/python3.10/site-packages/timezone_field/fields.py", line 41, in <listcomp> web_1 default_zoneinfo_tzs = [ZoneInfo(tz) for tz in pytz.common_timezones] web_1 File "/usr/local/lib/python3.10/zoneinfo/_common.py", line 24, in load_tzdata web_1 raise ZoneInfoNotFoundError(f"No time zone found with key {key}") web_1 zoneinfo._common.ZoneInfoNotFoundError: 'No time zone found with key Asia/Hanoi' web_1 [2022-08-12 12:18:37 +0300] [11] [INFO] Worker exiting (pid: 11) web_1 [2022-08-12 09:18:37 +0000] [1] [INFO] Shutting down: Master web_1 [2022-08-12 09:18:37 +0000] [1] [INFO] Reason: Worker failed to boot. In the Django settings file: TIME_ZONE = 'UTC' USE_TZ = True PS: As suggested in another post I added tzdata to my requirements file but nothing changed. | Downgrading the pytz version from 2022.2 to 2022.1 seems to have solved this issue for me. | 17 | 16 |
73,316,644 | 2022-8-11 | https://stackoverflow.com/questions/73316644/usage-of-direct-references-in-pyproject-toml-with-hatchling-backend | If I understand the documentation for hatchling correctly, in a pyproject.toml with hatchling as a backend, I should be able to to add a local package within the package folder by using the local direct reference schema <NAME> @ {root:uri}/pkg_inside_project. Here is a minimal non-working example where in the stackoverflow_demo package, I added the dependency to a package called my_local_package via my_local_package @ {root:uri}/my_local_package. When I clone the repo, go in the folder and try to install stackoverflow_demo by running pip install -e ., I receive a long error (see below). How can I correctly use the local package direct reference? Side notes: I am using the -e flag for pip, as I want to be able to install it in editable mode. Both modes, editable and normal, should be supported. At the moment, it doesn't work in either mode and the error is (nearly) the same. I am using pip version 22.2.2 I can only use pip to install, no other package manager unfortunately. I cannot install the local package from some remote git repository or so. The output of pip install with the error (personal information removed): Looking in indexes: [artifactstore-1], [artifactstore-2] Obtaining file:///C:/Path/to/demo/stackoverflow_demo Installing build dependencies ... done Checking if build backend supports build_editable ... done Getting requirements to build editable ... done Preparing editable metadata (pyproject.toml) ... error error: subprocess-exited-with-error Γ Preparing editable metadata (pyproject.toml) did not run successfully. β exit code: 1 β°β> [27 lines of output] Traceback (most recent call last): File "C:\Path\to\demo\stackoverflow_demo\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 177, in prepare_metadata_for_build_editable hook = backend.prepare_metadata_for_build_editable AttributeError: module 'hatchling.build' has no attribute 'prepare_metadata_for_build_editable' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Path\to\demo\stackoverflow_demo\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 363, in <module> main() File "C:\Path\to\demo\stackoverflow_demo\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 345, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "C:\Path\to\demo\stackoverflow_demo\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 186, in prepare_metadata_for_build_editable whl_basename = build_hook(metadata_directory, config_settings) File "C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\build.py", line 61, in build_editable return os.path.basename(next(builder.build(wheel_directory, ['editable']))) File "C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\builders\plugin\interface.py", line 80, in build self.metadata.validate_fields() File "C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\metadata\core.py", line 168, in validate_fields self.core.validate_fields() File "C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\metadata\core.py", line 1129, in validate_fields getattr(self, attribute) File 
"C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\metadata\core.py", line 1017, in dependencies self._dependencies = list(self.dependencies_complex) File "C:\Other\Path\to\Local\Temp\1\pip-build-env-4qkba2h_\overlay\Lib\site-packages\hatchling\metadata\core.py", line 1001, in dependencies_complex raise ValueError( ValueError: Dependency #1 of field `project.dependencies` cannot be a direct reference [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ Encountered error while generating package metadata. β°β> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. | From the issue tracker I received the answer, that I just have to add the following two lines to the pyproject.toml. See the documentation. [tool.hatch.metadata] allow-direct-references = true | 12 | 18 |
73,315,383 | 2022-8-11 | https://stackoverflow.com/questions/73315383/in-spacy-add-a-span-docab-as-entity-in-a-spacy-doc-python | I am using regex over a whole document to catch the spans in which such regex occurs: import spacy import re nlp = spacy.load("en_core_web_sm") doc = nlp("The United States of America (USA) are commonly known as the United States (U.S. or US) or America.") expression = r"[Uu](nited|\.?) ?[Ss](tates|\.?)" for match in re.finditer(expression, doc.text): start, end = match.span() span = doc.char_span(start, end) # This is a Span object or None # if match doesn't map to valid token sequence if span is not None: print("Found match:", span.text) There is a way to get the span (list of tokens) corresponding to the regex match on the doc even if the boundaries of the regex match do not correspond to token boundaries. See: How can I expand the match to a valid token sequence? In https://spacy.io/usage/rule-based-matching So far so good. Now that I have a collectuon of spans how do I convert them into entities? I am aware of the entity ruler: The EntityRuler is a pipeline component (see also the link above) but that entityruler takes patterns as inputs to search in the doc and not spans. If I want to use regex over the whole document to get the collection os spans I want to convert into ents what is the next step here? Entityruler? How? Or something else? Put simpler: nlp = spacy.load("en_core_web_sm") doc = nlp("The aplicable law is article 102 section b sentence 6 that deals with robery") I would like to generate an spacy ent (entity) out of doc[5,10] with label "law" in order to be able to: A) loop over all the law entities in the texts B) use the visualizer to display the different entities contained in the doc | The most flexible way to add spans as entities to a doc is to use Doc.set_ents: from spacy.tokens import Span span = doc.char_span(start, end, label="ENT") doc.set_ents(entities=[span], default="unmodified") Use the default option to specify how to set all the other tokens in the doc. By default the other tokens are set to O, but you can use default="unmodified" to leave them untouched, e.g. if you're adding entities incrementally. https://spacy.io/api/doc#set_ents | 4 | 5 |
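Putting the question's regex approach together with Doc.set_ents from the answer, a sketch for the "law" use case could look like the following (the regex and the LAW label are illustrative choices, not something prescribed by spaCy):

```python
import re
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The aplicable law is article 102 section b sentence 6 that deals with robery")

spans = []
for match in re.finditer(r"article \d+( section \w+)?( sentence \d+)?", doc.text):
    span = doc.char_span(*match.span(), label="LAW")
    if span is not None:            # skip matches that don't align with token boundaries
        spans.append(span)

doc.set_ents(entities=spans, default="unmodified")

print([(ent.text, ent.label_) for ent in doc.ents])  # A) loop over the law entities
displacy.render(doc, style="ent")                    # B) visualise them (use displacy.serve outside notebooks)
```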
73,322,634 | 2022-8-11 | https://stackoverflow.com/questions/73322634/how-to-call-an-api-endpoint-from-a-different-api-endpoint-in-the-same-fastapi-ap | (I did find the following question on SO, but it didn't help me: Is it possible to have an api call another api, having them both in same application?) I am making an app using Fastapi with the following folder structure main.py is the entry point to the app from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from app.api.v1 import lines, upload from app.core.config import settings app = FastAPI( title=settings.PROJECT_NAME, version=0.1, openapi_url=f'{settings.API_V1_STR}/openapi.json', root_path=settings.ROOT_PATH ) app.add_middleware( CORSMiddleware, allow_origins=settings.BACKEND_CORS_ORIGINS, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) app.include_router(upload.router, prefix=settings.API_V1_STR) app.include_router(lines.router, prefix=settings.API_V1_STR) In the lines.py, I have 2 GET endpoints: /one-random-line --> returns a random line from a .txt file /one-random-line-backwards --> should return the output of the /one-random-line Since the output of the second GET endpoint should be the reversed string of the output of the first GET endpoint, I tried doing the following steps mentioned here The codes: import random from fastapi import APIRouter, Request from starlette.responses import RedirectResponse router = APIRouter( prefix="/get-info", tags=["Get Information"], responses={ 200: {'description': 'Success'}, 400: {'description': 'Bad Request'}, 403: {'description': 'Forbidden'}, 500: {'description': 'Internal Server Error'} } ) @router.get('/one-random-line') def get_one_random_line(request: Request): lines = open('netflix_list.txt').read().splitlines() if request.headers.get('accept') in ['application/json', 'application/xml']: random_line = random.choice(lines) else: random_line = 'This is an example' return {'line': random_line} @router.get('/one-random-line-backwards') def get_one_random_line_backwards(): url = router.url_path_for('get_one_random_line') response = RedirectResponse(url=url) return {'message': response[::-1]} When I do this, I get the following error: TypeError: 'RedirectResponse' object is not subscriptable When I change the return of the second GET endpoint to return {'message': response}, I get the following output What is the mistake I am doing? Example: If the output of /one-random-line endpoint is 'Maverick', then the output of /one-random-line-backwards should be 'kcirevam' | Refactor your code to have the common part as a function you call - you'd usually have this in a module external to your controller. 
# this function could live as LineService.get_random_line for example # its responsibility is to fetch a random line from a file def get_random_line(path="netflix_list.txt"): lines = open(path).read().splitlines() return random.choice(lines) # this function encodes the rule that "if the accepted response is json or xml # we do the random value, otherwise we return a default value" def get_random_or_default_line_for_accept_value(accept, path="netflix_list.txt", default_value="This is an example"): if accept not in ("application/json", "application/xml"): return default_value return get_random_line(path=path) @router.get('/one-random-line') def get_one_random_line(request: Request): return { "line": get_random_or_default_line_for_accept_value( accept=request.headers.get('accept'), ), } @router.get('/one-random-line-backwards') def get_one_random_line_backwards(request: Request): return { "line": get_random_or_default_line_for_accept_value( accept=request.headers.get('accept'), )[::-1], } | 4 | 1 |
73,329,642 | 2022-8-12 | https://stackoverflow.com/questions/73329642/unittests-for-mypy-reveal-type | I have some points in legacy code (python library: music21) that uses a lot of overloading and Generic variables to show/typecheck that all subelements in a t.Sequence belong to a particular type. There are a lot of @overload decorators to show how different attributes can return different values. At this point the functions work as they should, but a number of PRs in the past have broken the introspection that other devs require. The code is extensively tested, but the inferred types by checkers such as mypy and PyCharm are not. Is there a way to run testing on inferred types? Something like: SomeClassType = typing.TypeVar('SomeClassType', bound='SomeClass') class SomeClass: pass class DerivedClass(SomeClass): pass class MyIter(typing.Sequence[typing.Type[SomeClassType]]): def __init__(self, classType: typing.Type[SomeClassType]): self.classType = classType # ------- type_checks.py... derived_iterator = MyIter(DerivedClass) # this is the line that I don't know exists... typing_utilities.assert_reveal_type_eq(derived_iterator, MyIter[DerivedClass]) # or as a string 'MyIter[DerivedClass]' mypy's reveal_type seems like it would be helpful here, but I can't seem to find any integration into a testing system, etc. Thanks! | The function you are looking for actually exists. But it is called differently: First, define a type test: from typing_extensions import assert_type def function_to_test() -> int: pass # this is a positive test: we want the return type to be int assert_type(function_to_test(), int) # this is a negative test: we don't want the return type to be str assert_type(function_to_test(), str) # type: ignore Then run mypy on the file: mypy --strict --warn-unused-ignores. The positive tests that fail are simply reported as a mypy error and the negative tests that fail are reported as 'Unused "type: Ignore" comment'. The package typing_extensions is installed alongside mypy. Source: https://typing.readthedocs.io/en/latest/source/quality.html | 4 | 5 |
73,326,570 | 2022-8-11 | https://stackoverflow.com/questions/73326570/why-is-the-float-int-multiplication-faster-than-int-float-in-cpython | Basically, the expression 0.4 * a is consistently, and surprisingly, significantly faster than a * 0.4, a being an integer. And I have no idea why. I speculated that it is a case of a LOAD_CONST LOAD_FAST bytecode pair being "more specialized" than the LOAD_FAST LOAD_CONST pair, and I would be entirely satisfied with this explanation, except that this quirk seems to apply only to multiplications where the types of the multiplied operands differ. (By the way, I can no longer find the link to this "bytecode instruction pair popularity ranking" I once found on github, does anyone have a link?) Anyway, here are the micro benchmarks: $ python3.10 -m pyperf timeit -s"a = 9" "a * 0.4" Mean +- std dev: 34.2 ns +- 0.2 ns $ python3.10 -m pyperf timeit -s"a = 9" "0.4 * a" Mean +- std dev: 30.8 ns +- 0.1 ns $ python3.10 -m pyperf timeit -s"a = 0.4" "a * 9" Mean +- std dev: 30.3 ns +- 0.3 ns $ python3.10 -m pyperf timeit -s"a = 0.4" "9 * a" Mean +- std dev: 33.6 ns +- 0.3 ns As you can see - in the runs where the float comes first (2nd and 3rd) - it is faster. So my question is where does this behavior come from? I'm 90% sure that it is an implementation detail of CPython, but I'm not familiar enough with low-level instructions to state that for sure. | It's CPython's implementation of the BINARY_MULTIPLY opcode. It has no idea what the types are at compile-time, so everything has to be figured out at run-time. Regardless of what a and b may be, BINARY_MULTIPLY ends up invoking a.__mul__(b). When a is of int type, int.__mul__(a, b) has no idea what to do unless b is also of int type. It returns Py_RETURN_NOTIMPLEMENTED (an internal C constant). This is in longobject.c's CHECK_BINOP macro. The interpreter sees that, and effectively says "OK, a.__mul__ has no idea what to do, so let's give b.__rmul__ a shot at it". None of that is free - it all takes time. float.__mul__(b, a) (same as float.__rmul__) does know what to do with an int (converts it to float first), so that succeeds. But when a is of float type to begin with, we go to float.__mul__ first, and that's the end of it. No time burned figuring out that the int type doesn't know what to do. The actual code is quite a bit more involved than the above pretends, but that's the gist of it. | 8 | 9 |
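The asymmetric dispatch described in the answer can be observed directly from the special methods (CPython):

```python
print((9).__mul__(0.4))   # NotImplemented -> the interpreter must fall back to float.__rmul__
print((0.4).__mul__(9))   # a float result, computed directly by float.__mul__
```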
73,325,131 | 2022-8-11 | https://stackoverflow.com/questions/73325131/how-to-set-all-elements-of-pytorch-tensor-to-zero-after-a-certain-index-in-the-g | I've two PyTorch tensors mask = torch.ones(1024, 64, dtype=torch.float32) indices = torch.randint(0, 64, (1024, )) For every ith row in mask, I want to set all the elements after the index specified by ith element of indices to zero. For example, if the first element of indices is 50, then I want to set mask[0, 50:]=0. Is it possible to achieve this without using for loop? Solution with for loop: for i in range(mask.shape[0]): mask[i, indices[i]:] = 0 | You can first generate a tensor of size (1024x64) where each row has numbers arranged from 0 to 63. Then apply a logical operation using the indices reshaped as (1024x1) mask = torch.ones(1024, 64, dtype=torch.float32) indices = torch.randint(0, 64, (1024, 1)) # Note the dimensions mask[torch.arange(0, 64, dtype=torch.float32).repeat(1024,1) >= indices] = 0 | 9 | 6 |
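An equivalent broadcasting variant of the accepted answer (avoiding the materialised repeat), with a check against the loop version from the question:

```python
import torch

mask_loop = torch.ones(1024, 64, dtype=torch.float32)
mask_vec = torch.ones(1024, 64, dtype=torch.float32)
indices = torch.randint(0, 64, (1024,))

# loop version from the question
for i in range(mask_loop.shape[0]):
    mask_loop[i, indices[i]:] = 0

# vectorised version: (1, 64) column positions compared against (1024, 1) thresholds
col = torch.arange(64).unsqueeze(0)
mask_vec[col >= indices.unsqueeze(1)] = 0

print(torch.equal(mask_loop, mask_vec))  # True
```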
73,302,356 | 2022-8-10 | https://stackoverflow.com/questions/73302356/how-to-make-pip-fail-early-when-one-of-the-requested-requirements-does-not-exist | Minimal example: pip install tensorflow==2.9.1 non-existing==1.2.3 Defaulting to user installation because normal site-packages is not writeable Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Collecting tensorflow==2.9.1 Downloading tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.7 MB) ββββββββββββββββββββββββββββββββββββββββ 511.7/511.7 MB 7.6 MB/s eta 0:00:00 ERROR: Could not find a version that satisfies the requirement non-existing==1.2.3 (from versions: none) ERROR: No matching distribution found for non-existing==1.2.3 So pip downloads the (rather huge) TensorFlow first, only to then tell me that non-existing does not exist. Is there a way to make it fail earlier, i.e., print the error and quit before downloading? | I'm afraid there's no straightforward way of handling it. I ended up writing a simple bash script where I check the availability of packages using pip's index command: check_packages_availability () { while IFS= read -r line || [ -n "$line" ]; do package_name="${line%%=*}" package_version="${line#*==}" if ! pip index versions $package_name | grep "$package_version"; then echo "package $line not found" exit -1 fi done < requirements.txt } if ! check_packages_availability; then pip install -r requirements.txt fi This is a hacky solution but may work. For every package in requirements.txt this script tries to retrieve information about it and match the specified version. If everything's alright it starts installing them. Or you can use poetry, it handles resolving dependencies for you, for example: pyproject.toml [tool.poetry] name = "test_missing_packages" version = "0.1.0" description = "" authors = ["funnydman"] [tool.poetry.dependencies] python = "^3.10" tensorflow = "2.9.1" non-existing = "1.2.3" [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" At the resolving stage it throws exception without installing/downloading packages: Updating dependencies Resolving dependencies... (0.2s) SolverProblemError Because test-missing-packages depends on non-existing (1.2.3) which doesn't match any versions, version solving failed. | 6 | 3 |
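A Python alternative to the bash loop in the answer, using PyPI's JSON API. This sketch assumes plain name==version pins served from pypi.org with already-normalised project names; the NVIDIA index shown in the question's output is not consulted.

```python
import sys
import urllib.error
import urllib.request

def missing_requirements(path="requirements.txt"):
    missing = []
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        url = f"https://pypi.org/pypi/{name}/{version}/json"
        try:
            urllib.request.urlopen(url)       # a 404 raises HTTPError -> release not available
        except urllib.error.HTTPError:
            missing.append(line)
    return missing

if __name__ == "__main__":
    bad = missing_requirements()
    if bad:
        sys.exit("not found on PyPI: " + ", ".join(bad))
    # otherwise hand over to pip, e.g. run: pip install -r requirements.txt
```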
73,306,792 | 2022-8-10 | https://stackoverflow.com/questions/73306792/dundered-global-variable-cannot-be-accessed-inside-a-class-method | I am experiencing an obscure (to me) effect of dundered scoping, and trying to work out the rule for it: #!/usr/bin/env python3 stuff = "the things" __MORE_STUFF = "even more" class Thing: def __init__(self): global __MORE_STUFF # doesn't help print(stuff) # is OK print(__MORE_STUFF) # fail! Thing() results in $ python3 dunder.py the things Traceback (most recent call last): File "dunder.py", line 12, in <module> Thing() File "dunder.py", line 10, in __init__ print(__MORE_STUFF) # fail! NameError: name '_Thing__MORE_STUFF' is not defined What is supposed to be a module-global variable is being treated as a class-level property - which, being undefined, is flagged as being undefined. I've been trying to look through the documentation for this, but I cannot seem to work out what the rule is. Can anyone point me to the appropriate documentation? | The documentation refers to such names as class-private names: __* Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between βprivateβ attributes of base and derived classes. See section Identifiers (Names). Since subclassing isn't something that applies to modules (at least, not to the extent that the language provides any tools or recommendations regarding how you might go about it), there's no need to use class-private names where _-prefixed names will do. #!/usr/bin/env python3 stuff = "the things" _MORE_STUFF = "even more" class Thing: def __init__(self): print(stuff) # is OK print(_MORE_STUFF) Thing() | 6 | 4 |
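If the dundered module-global name has to stay as-is, one workaround consistent with the answer (mangling only rewrites identifiers written inside the class body, not string keys) is an explicit globals() lookup:

```python
stuff = "the things"
__MORE_STUFF = "even more"

class Thing:
    def __init__(self):
        print(stuff)
        print(globals()["__MORE_STUFF"])  # bypasses the _Thing__MORE_STUFF rewriting

Thing()
```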
73,302,071 | 2022-8-10 | https://stackoverflow.com/questions/73302071/nonetype-error-when-trying-to-use-pdb-via-formmatedtb | When executing the following code: from IPython.core import ultratb sys.excepthook = ultratb.FormattedTB(mode='Verbose', color_scheme='Linux', call_pdb=1) In order to catch exceptions, I receive the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/dinari/miniconda3/envs/testenv/lib/python3.8/site-packages/IPython/core/ultratb.py", line 994, in __init__ VerboseTB.__init__(self, color_scheme=color_scheme, call_pdb=call_pdb, File "/Users/dinari/miniconda3/envs/testenv/lib/python3.8/site-packages/IPython/core/ultratb.py", line 638, in __init__ TBTools.__init__( File "/Users/dinari/miniconda3/envs/testenv/lib/python3.8/site-packages/IPython/core/ultratb.py", line 242, in __init__ self.pdb = debugger_cls() TypeError: 'NoneType' object is not callable Using python 3.8.2 and IPython 8.4.0 pdb otherwise is working fine. Any idea for a fix for this? | Downgrading IPython to 7.34.0 solved this. | 6 | 1 |
73,302,473 | 2022-8-10 | https://stackoverflow.com/questions/73302473/pyqt5-delete-a-qlistwidgetitem-when-button-in-widget-is-pressed | I have a listWidget that contains multiple QListWidgetItem and for simplicity's sake, let's assume that each QListWidgetItem consists of a QWidget with a QPushButton called delete. This is assembled with this code: class widgetItem(QtWidgets.QWidget): def __init__(self, parent): super(widgetItem, self).__init__() uic.loadUi('UIfiles/trainingWidget.ui', self) # Load the .ui file self.listWidgetItem = QtWidgets.QListWidgetItem() self.listWidgetItem.setSizeHint(self.sizeHint()) self.delete.clicked.connect(self.deleteSelf) parent.listWidget.addItem(self.listWidgetItem) parent.listWidget.setItemWidget(self.listWidgetItem, self) and this is called in the main app with this: def additem(self): self.referenceDict[self.itemID] = widgetItem(self) Now, my goal is to delete this particular widget from both the referenceDict as well as the listWidget when the button is pressed. Each widgetItem also has their own itemID as a string, and can access the listWidget as well as the referenceDict. How do I write this deleteSelf? I've tried using self.deleteLater but it seems like it only deletes the QWidget but not the QListWidgetItem. Calling self.listWidgetItem.deleteLater raises an attribute error as it cannot be deleted this way. I also tried parent.referenceDict.pop(self.itemID) but for some reason it raises a key error despite both the keys matching when I printed the dict out. | You can remove listitems using the takeItem method and delete widgets using the deleteLater method. I wasn't to fond of your chosen process for creating the widgets and adding them to the list, so I went ahead and created a minimal example but using QPushButton instead of QWidgets for the itemWidgets. Example: from PyQt5.QtWidgets import * from PyQt5.QtCore import * from PyQt5.QtGui import * class Window(QWidget): def __init__(self, parent=None): super().__init__(parent=parent) self.layout = QVBoxLayout() self.setLayout(self.layout) self.listWidget = QListWidget(self) self.layout.addWidget(self.listWidget) self.addListItems() def addListItems(self): # creates the item widgets for _ in range(5): item = QListWidgetItem(type=0) widget = ListItem("button", self, item) self.listWidget.addItem(item) self.listWidget.setItemWidget(item, widget) def removeWidgetItem(self, item): # removes the item widgets index = self.listWidget.indexFromItem(item).row() item = self.listWidget.takeItem(index) class ListItem(QPushButton): def __init__(self, text, parent, item): super().__init__(text, parent) self.item = item # the ListWidgetItem self._parent = parent # the Window self.clicked.connect(self.deleteSelf) def deleteSelf(self): # slot for button click self._parent.removeWidgetItem(self.item) self.deleteLater() app = QApplication([]) window = Window() window.show() app.exec() | 5 | 2 |
73,302,291 | 2022-8-10 | https://stackoverflow.com/questions/73302291/is-it-legal-to-use-more-parameters-than-expected-when-calling-a-function | Context: I have written a Red Black tree implementation in C language. To allow it to use variable types, it only handles const void * elements, and initialisation of a tree must be given a comparison function with a signature int (*comp)(const void *, const void *). So far, so good, but I now try to use that C code to build an extension module for Python. It looks simple as first sight, because Python languages always pass references to objects which are received as pointers by C routines. Problem: Python objects come with rich comparison operators. That means that from a C extension module, comparing 2 arbitrary objects is trivial: just a matter of using int PyObject_RichCompareBool(PyObject *o1, PyObject *o2, int opid). But the comparison may return -1 to indicate that the objects are not comparable. In Python or C++ it would be simple enough to throw an exception to signal an abnormal condition. Unfortunately C has no notion of exception, and I could not find a way using setjmp-longjmp because: the environment buffer has do be known to both the englobing function and the internal one I should free any allocated memory at longjmp time, when the internal function does not know what has been allocated First idea: A simple solution is to give a third parameter to the comparison function for it to signal an abnormal condition. But when the library is used in a plain C environment, that third parameter just does not make sense. I then remembered that in the 80', I had learned that in C language, parameters were passed in the stack in reversed order and unstacked by the caller to allow functions with a variable number of parameters. That means that provided the first 2 parameters are correct passing a third parameter to a function expecting 2 should be harmless. Demo code: #include <stdio.h> // declares a type for the comparison functions typedef int (*func)(); // A simple function for comparing integers - only 2 params int f1(int a, int b) { return a - b; } /* Inserts a value into an increasing array * By convention 0 denotes the end of the array * No size control implemented for brievety * The comp function recieves a pointer to an int * to be able to signal abnormal conditions * */ int insert(int* arr, int val, func comp) { int err = 0; while ((0 != *arr) && (comp(*arr, val, &err) < 0)) { // 1 if (err) return 0; ++arr; } do { int tmp = *arr; *arr = val; val = tmp; } while (0 != *arr++); return 1; } int main() { func f = &f1; // a simple test with 3 parameters int cr = f(3, 1, 5); // 2 printf("%d\n", cr); // demo usage of the insert function int arr[10] = {0}; int data[] = { 1,5,3,2,4 }; for (int i = 0; i < sizeof(data) / sizeof(*data); i++) { insert(arr, data[i], f1); } for (int i = 0; i < sizeof(data) / sizeof(*data); i++) { printf("%d ", arr[i]); } return 0; } At (1) and (2) the 2 parameter function is called with 3 parameters. Of course, this code compiles without even a warning in Clang or MSVC, and runs fine giving the expected result. Question: While this code works fine on common implementations, I wonder whether actually passing a third parameter to a function expecting only two is really legit or does it invokes Undefined Behaviour? Current research Is it safe to invoke a C function with more parameters than it expects? 
: the accepted answer suggests that it should be safe when the C calling convention is used (which is my use case) while other answers show that the MSVC stdcall calling convention would not allow it 6.7.6.3 Function declarators (including prototypes) and 6.5.2.2 Function calls in draft n1570 for C11, but as English is not my first language, I could not understand where it was or not allowed Remark: The originality of this question is that it uses function pointers conversions. | I think it invokes Undefined Behavior. From 6.5.2.2p6: If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions. If the number of arguments does not equal the number of parameters, the behavior is undefined. The proper solution is redesign the Red Black tree implementation to allow passing a context as a third parameter. int (*comp)(const void *, const void *, void *); It is highly recommended to add a context argument to any function pointer type to allow emulate closures. As a workaround, you could use a global variable. static int err; int f1(int a, int b) { err = 0; return a - b; } int insert(int* arr, int val, int comp(int,int)) { err = 0; while ((0 != *arr) && (comp(*arr, val) < 0)) { // 1 if (err) return 0; ++arr; } ... } It is not the best solution because it is not re-entrant. Only a single instance of insert()/f1() can run at a time. | 4 | 7 |
73,234,675 | 2022-8-4 | https://stackoverflow.com/questions/73234675/how-to-download-a-file-after-posting-data-using-fastapi | I am creating a web application that receives some text, converts the text into speech, and returns an mp3 file, which is saved to a temporary directory. I want to be able to download the file from the html page (i.e., the frontend), but I don't know how to do that properly. I know with Flask you can do this: from app import app from flask import Flask, send_file, render_template @app.route('/') def upload_form(): return render_template('download.html') @app.route('/download') def download_file(): path = "html2pdf.pdf" return send_file(path, as_attachment=True) if __name__ == "__main__": app.run() HTML Example: <!doctype html> <title>Python Flask File Download Example</title> <h2>Download a file</h2> <p><a href="{{ url_for('.download_file') }}">Download</a></p> So how do I replicate this with FastAPI? FastAPI Code: from fastapi import FastAPI, File, Request, Response, UploadFile from fastapi.middleware.cors import CORSMiddleware from fastapi.responses import FileResponse, HTMLResponse, StreamingResponse from fastapi.templating import Jinja2Templates from gtts import gTTS templates = Jinja2Templates(directory="templates") def text_to_speech(language:str, text: str) -> str: tts = gTTS(text=text, lang=language, slow=False) tts.save("./temp/welcome.mp3") #os.system("mpg321 /temp/welcome.mp3") return "Text to speech conversion successful" @app.get("/") def home(request: Request): return templates.TemplateResponse("index.html", {"request": request}) @app.get("/text2speech") async def home(request: Request): if request.method == "POST": form = await request.form() if form["message"] and form["language"]: language = form["language"] text = form["message"] translate = text_to_speech(language, text) path = './temp/welcome.mp3' value = FileResponse("./temp/welcome.mp3", media_type="audio/mp3") return value # return templates.TemplateResponse( # "index.html", # {"request": request, "message": text, "language": language, "download": value}, # ) Sample HTML File: <!doctype html> <title>Download MP3 File</title> <h2>Download a file</h2> <p><a href="{{ url_for('text2speech') }}">Download</a></p> | Use the Form keyword to define Form-data in your endpoint, and more specifically, use Form(...) to make a parameter required, instead of using await request.form() and manually checking if the user submitted the required parameters. After processing the received data and generating the audio file, you can use FileResponse to return the file to the user. Note: use the headers argument in FileResponse to set the Content-Disposition header using the attachment parameterβas described in this answerβto have the file downloaded to your device. Failing to set the headers, or using the inline parameter isntead, would lead to 405 Method Not Allowed error, as, in that case, the browser would attempt accessing the file using a GET request, in order to display the file contents inline (however, only POST requests are allowed to your /text2speech endpoint). Have a look at Option 1 in the examples below. 
If you wanted the /text2speech endpoint supporting both GET and POST requests (as shown in your question), you could either (1) use @app.api_route("/text2speech", methods=["GET", "POST"]) and use request.method to check whether it is a GET or POST request (see this answer for a working example), or (2) define two different endpoints with the following decorators on each one, i.e., @app.post('/text2speech') and @app.get('/text2speech') (Side note: These decorators could also be used on the same endpoint/function, and then use request.method as described in (1) option). However, you don't necessarily need to do that in this case. Additionally, you have added a Download hyperlink to your template for the user to download the file. However, you haven't provided any information as to how you expect this to work. This wouldn't work in a scenario where you don't have static files, but dynamically generated audio files (as in your case), as well as multiple users accessing the API at the same time; unless, for example, you generated random UUIDs for the filenames and saved the files to a StaticFiles directoryβor added that unique identifier as a query/path parameter (you could also use cookies instead, see here and here) to the URL in order to identify the file to be downloadedβand sent the URL back to the user. In that case, you would need a Javascript interface/library, such as Fetch API, to make an asynchronous HTTP requestβas described in this answerβin order to get the URL to the file and display it in the Download hyperlink. Have a look at Option 2 below. Note: The example in Option 2, for demo purposes, uses a simple dict to map the filepaths to UUIDs. In a real-world scenario, however, where multiple users access the API and several workers might be used, you may consider using a database storage, or Key-Value stores (Caches), as described here and here. You would also need to have a mechanism for deleting the files from the database and disk, once they have been downloaded, as well as make sure that users do not have unauthorized access to other users' audio files. It is also worth mentioning that in the same example, the UUID is expected as a query parameter to the /download endpoint, but in a real-world scenario, you should never pass sensitive information to the query string, as this would pose security/privacy risks (more details on that are given in Solution 1 of this answer). Instead, you should pass sensitive data to the request body (similar to Solution 2 of this answer), and always use the HTTPS protocol. Option 1 app.py from fastapi import FastAPI, Request, Form from fastapi.templating import Jinja2Templates from fastapi.responses import FileResponse import os app = FastAPI() templates = Jinja2Templates(directory="templates") @app.get('/') async def main(request: Request): return templates.TemplateResponse("index.html", {"request": request}) @app.post('/text2speech') def convert(request: Request, message: str = Form(...), language: str = Form(...)): # do some processing here filepath = './temp/welcome.mp3' filename = os.path.basename(filepath) headers = {'Content-Disposition': f'attachment; filename="{filename}"'} return FileResponse(filepath, headers=headers, media_type="audio/mp3") An alternative to the above would be to read the file data inside your endpointβor, in case the data were fully loaded into memory beforehand, such as here, here and hereβand return a custom Response directly, as shown below: from fastapi import Response @app.post('/text2speech') ... 
with open(filepath, "rb") as f: contents = f.read() # file contents could be already fully loaded into RAM headers = {'Content-Disposition': f'attachment; filename="{filename}"'} return Response(contents, headers=headers, media_type='audio/mp3') In case you had to return a file that is too large to fit into memoryβe.g., if you have 8GB of RAM, you canβt load a 50GB fileβyou could use StreamingResponse, which would load the file into memory in chunks and process the data one chunk at a time. If you find yield from f, shown in the example below, being rather slow, please have a look at this answer for faster alternatives. It should also be noted that using FileResponse would also load the file contents into memory in chunks (instead of the entire contents at once); however, the chunk size in that case would be 64KB, as specified in the implementation class of FileResponse. Thus, if that chunk size does not suit your requirements, you could instead use a StreamingResponse, as demonstrated below, or as shown in this this answer, by specifying the chunk size as desired. from fastapi.responses import StreamingResponse @app.post('/text2speech') ... def iterfile(): with open(filepath, "rb") as f: yield from f headers = {'Content-Disposition': f'attachment; filename="{filename}"'} return StreamingResponse(iterfile(), headers=headers, media_type="audio/mp3") templates/index.html <!DOCTYPE html> <html> <head> <title>Convert Text to Speech</title> </head> <body> <form method="post" action="http://127.0.0.1:8000/text2speech"> message : <input type="text" name="message" value="This is a sample message"><br> language : <input type="text" name="language" value="en"><br> <input type="submit" value="submit"> </form> </body> </html> Using JavaScript to download the file In case you used a JavaScript interface, such as Fetch API, in the frontend to issue the file-download requestβinstead of using an HTML <form>, as demonstrated aboveβplease have a look at this answer, as well as this answer and this answer on how to download the file in the frontend through JavaScript. 
Option 2 app.py from fastapi import FastAPI, Request, Form from fastapi.templating import Jinja2Templates from fastapi.responses import FileResponse import uuid import os app = FastAPI() templates = Jinja2Templates(directory="templates") files = {} @app.get('/') async def main(request: Request): return templates.TemplateResponse("index.html", {"request": request}) @app.get('/download') def download_file(request: Request, fileId: str): filepath = files.get(fileId) if filepath: filename = os.path.basename(filepath) headers = {'Content-Disposition': f'attachment; filename="{filename}"'} return FileResponse(filepath, headers=headers, media_type='audio/mp3') @app.post('/text2speech') def convert(request: Request, message: str = Form(...), language: str = Form(...)): # do some processing here filepath = './temp/welcome.mp3' file_id = str(uuid.uuid4()) files[file_id] = filepath file_url = f'/download?fileId={file_id}' return {"fileURL": file_url} templates/index.html <!DOCTYPE html> <html> <head> <title>Convert Text to Speech</title> </head> <body> <form method="post" id="myForm"> message : <input type="text" name="message" value="This is a sample message"><br> language : <input type="text" name="language" value="en"><br> <input type="button" value="Submit" onclick="submitForm()"> </form> <a id="downloadLink" href=""></a> <script type="text/javascript"> function submitForm() { var formElement = document.getElementById('myForm'); var data = new FormData(formElement); fetch('/text2speech', { method: 'POST', body: data, }) .then(response => response.json()) .then(data => { document.getElementById("downloadLink").href = data.fileURL; document.getElementById("downloadLink").innerHTML = "Download"; }) .catch(error => { console.error(error); }); } </script> </body> </html> Removing a File after it's been downloaded For Option 1 above, to remove a file after it has been downloaded by the user, you can simply define a BackgroundTask to be run after returning the response. For example: from fastapi import BackgroundTasks import os @app.post('/text2speech') def convert(request: Request, background_tasks: BackgroundTasks, ...): filepath = 'welcome.mp3' # ... background_tasks.add_task(os.remove, path=filepath) return FileResponse(filepath, headers=headers, media_type="audio/mp3") For Option 2, however, you would have to make sure to delete the key (i.e., file_id) pointing to the given filepath from the cache as well. Hence, you should create a background task function, as shown below: from fastapi import BackgroundTasks import os files = {} def remove_file(filepath, fileId): os.remove(filepath) del files[fileId] @app.get('/download') def download_file(request: Request, fileId: str, background_tasks: BackgroundTasks): filepath = files.get(fileId) if filepath: # ... background_tasks.add_task(remove_file, filepath=filepath, fileId=fileId) return FileResponse(filepath, headers=headers, media_type='audio/mp3') More details and examples on background tasks can be found here, as well as here. | 4 | 6 |
73,241,883 | 2022-8-4 | https://stackoverflow.com/questions/73241883/multiple-pipenv-virtual-environments-in-single-project | The scenario: I have a locally cloned GitHub repo with multiple branches. Each branch needs potentially different dependencies. The question: I would like to switch between these branches as I work and therefore would like multiple pipenv virtual environments (one per branch). How can I achieve this, given that pipenv by default associates a single virtual environment with the project root folder? | Check out each branch into a separate directory (e.g., using git worktree). Because each branch would have a separate directory, pipenv would work without any additional changes. Assuming that you're currently in your work tree (let's say on the main branch), and you have additional branches named branch1 and branch2, that might look like: $ git worktree add ../branch1 branch1 $ git worktree add ../branch2 branch2 Now you have main checked out in your current directory, branch1 checked out in ../branch1, and branch2 checked out in ../branch2. You can cd between these directories and work on them like normal, and pipenv will do what you want since each branch is now in a separate directory. | 4 | 3 |
73,289,360 | 2022-8-9 | https://stackoverflow.com/questions/73289360/how-can-i-run-several-test-files-with-pytest | I have a pytest project and a want to run tests from TWO python files. The project structure looks like this: at the root of the project there is a "tests" folder, it contains several folders "test_api1", "test_api2", "test_api3", each of them contains conftest.py and a test file. tests: test_api1: conftest.py, test_api_1 test_api2: conftest.py, test_api_2 test_api3: conftest.py, test_api_3 Usually I run tests like this python -m pytest -vs -k tests (if I want to run all tests from tests directory) or like this python -m pytest -vs -k test_api1.py (if I want to run a certain test). But now I want to run tests from TWO certain files test_api1.py and test_api1.py. How can I do that? | Just add files that you want to run as positional arguments, e.g.: python -m pytest tests/test_api1.py tests/test_api2.py NOTE: you need to run pytest without -k flag and hence use paths to the files instead of just filenames. | 4 | 7 |
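The accepted answer above uses the pytest command line, which is all that is needed; as a small hedged aside (not part of that answer), pytest also exposes a programmatic entry point, pytest.main, that accepts the same arguments and can be convenient from a script or task runner. The test paths below are simply the ones from the answer.

```python
# Sketch: run the same two test files programmatically with pytest.main,
# assuming pytest is installed and the paths from the question exist.
import sys

import pytest

# pytest.main takes the usual CLI arguments and returns an exit code.
exit_code = pytest.main(["-v", "tests/test_api1.py", "tests/test_api2.py"])
sys.exit(exit_code)
```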
73,257,387 | 2022-8-6 | https://stackoverflow.com/questions/73257387/how-to-un-pyinstaller-converted-python-app-with-shiny-for-python | I downloaded and installed Python 3.10.6 on windows 10 pro, installed Shiny for Python, created the sample app and run it. This worked fine. I installed pyinstaller and converted the app to an exe. I tried to run the app it threw (please see below). Does anyone know if this can work and if so how? This is the file2.spec that worked: # -*- mode: python ; coding: utf-8 -*- block_cipher = None import os # /c/Users/raz/AppData/Local/Programs/Python/Python310/Lib/site-packages/ shiny = os.path.abspath("../AppData/Local/Programs/Python/Python310/Lib/site-packages/shiny") a = Analysis( ['file2.py'], pathex=[], binaries=[], datas=[('app.py', '/'), (shiny,'/shiny')], hiddenimports=[], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='file2', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) This below did not work: raz@rays8350 MINGW64 ~/shiny $ cat app.py from shiny import App, render, ui app_ui = ui.page_fluid( ui.h2("Hello Shiny!"), ui.input_slider("n", "N", 0, 100, 20), ui.output_text_verbatim("txt"), ) def server(input, output, session): @output @render.text def txt(): return f"n*2 is {input.n() * 2}" app = App(app_ui, server) raz@rays8350 MINGW64 ~/shiny $ raz@rays8350 MINGW64 ~/shiny $ ../AppData/Local/Programs/Python/Python310/Scripts/shiny.exe run --reload dist/app/app.exe INFO: Will watch for changes in these directories: ['C:\\Users\\raz\\shiny\\dist\\app'] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [23368] using StatReload Process SpawnProcess-1: Traceback (most recent call last): File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap self.run() File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run self._target(*self._args, **self._kwargs) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\_subprocess.py", line 76, in subp rocess_started target(sockets=sockets) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\server.py", line 60, in run return asyncio.run(self.serve(sockets=sockets)) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run return loop.run_until_complete(main) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complet e return future.result() File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\server.py", line 67, in serve config.load() File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\config.py", line 479, in load self.loaded_app = import_from_string(self.app) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\importer.py", line 24, in import_ from_string raise exc from None File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\site-packages\uvicorn\importer.py", 
line 21, in import_ from_string module = importlib.import_module(module_str) File "C:\Users\raz\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked ModuleNotFoundError: No module named 'app' | New Answer Since writing my original answer I have discovered an even simpler means of using pyinstaller to create a shiny application executable. This is a simplified step by step: Steps 1-4 are the same as the original answer (see below) and simply involve opening a new directory and creating a virtual environment for the project. Create a file for your shiny app and use the run_app to execute the app without needing to use the command line: file1.py from shiny import App, render, ui from shiny._main import run_app app_ui = ui.page_fluid( ui.h2("Hello Shiny!"), ui.input_slider("n", "N", 0, 100, 20), ui.output_text_verbatim("txt"), ) def server(input, output, session): @output @render.text def txt(): return f"n*2 is {input.n() * 2}" app = App(app_ui, server) run_app(app) run pyinstaller -F --collect-all shiny --name MyAppName file1.py And thats it. The executable will be in the dist folder. Old Answer Okay... So here are the steps I took to make it work. open new directory (my_new_dir) python -m venv venv venv\scripts\activate.bat or your OS's equivalent. pip install pyinstaller shiny open new file and paste this code in (file1.py) file1 from shiny import App, render, ui app_ui = ui.page_fluid( ui.h2("Hello Shiny!"), ui.input_slider("n", "N", 0, 100, 20), ui.output_text_verbatim("txt"), ) def server(input, output, session): @output @render.text def txt(): return f"n*2 is {input.n() * 2}" app = App(app_ui, server) open a second new file next to the first (file2.py) and copy paste file2.py import os import sys from shiny._main import main path = os.path.dirname(os.path.abspath(__file__)) apath = os.path.join(path, "file1.py") # these next two lines are only if you are using Windows OS drive, apath = os.path.splitdrive(apath) apath = apath.replace("\\","/") # sys.argv = ['shiny', 'run', apath] main() pyinstaller -F file2.py this will create a file2.spec file open it and make the changes in the code below: file2.spec # -*- mode: python ; coding: utf-8 -*- block_cipher = None import os # use OS equivalent for path below shiny = os.path.abspath("./venv/Lib/site-packages/shiny") a = Analysis( ... ... datas = [('file1.py', '/'), (shiny,'/shiny')] # fill in the datas value ... ... Last step: pyinstaller file2.spec At the end of this your top level directory should look like this: build dist venv file1.py file2.py file2.spec That is what worked for me. And the exe is in the dist folder. If you want to change the name of the executable or the icon or any of that stuff that can all be done in the spec file and instructions can be found in the pyinstaller docs | 4 | 3 |
73,260,250 | 2022-8-6 | https://stackoverflow.com/questions/73260250/how-do-i-type-hint-opencv-images-in-python | I get that in Python OpenCV images are numpy arrays, that correspond to cv::Mat in c++. This question is about what type-hint to put into python functions to properly restrict for OpenCV images (maybe even for a specific kind of OpenCV image). What I do now is: import numpy as np import cv2 Mat = np.ndarray def my_fun(image: Mat): cv2.imshow('display', image) cv2.waitKey() Is there any better way to add typing information for OpenCV images in python? | UPD: As mentioned in another answer, now, OpenCV has cv2.typing.MatLike. Then, the code would be: import cv2 def my_fun(img: cv2.typing.MatLike) -> None: pass You can specify it as numpy.typing.NDArray with an entry type. For example, import numpy as np Mat = np.typing.NDArray[np.uint8] def my_fun(img: Mat): pass | 13 | 8 |
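The answer above mentions both cv2.typing.MatLike (available in recent OpenCV releases) and a numpy.typing.NDArray alias. Below is a small version-tolerant sketch combining the two; it assumes opencv-python and a reasonably recent numpy are installed, and the invert function is a hypothetical example added here for illustration.

```python
# Sketch: prefer cv2.typing.MatLike when the installed OpenCV provides it,
# otherwise fall back to a numpy-based alias (assumes numpy >= 1.21).
import cv2
import numpy as np
import numpy.typing as npt

try:
    Mat = cv2.typing.MatLike      # present in recent OpenCV builds
except AttributeError:
    Mat = npt.NDArray[np.uint8]   # fallback alias for older versions

def invert(image: Mat) -> Mat:
    """Return the negative of an 8-bit image (hypothetical helper)."""
    return 255 - image
```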
73,287,475 | 2022-8-9 | https://stackoverflow.com/questions/73287475/how-to-specify-pip-extra-index-url-in-environment-yml | Conda can create an environment.yml that specifies both conda packages & pip packages. The problem is, I want to specify a pip package (torch==1.12.1+cu116), that is only available in the following index: https://download.pytorch.org/whl/cu116. How can I specify this in the environment.yml? Or at the very least, when running conda env create -f environment.yml, I would like to specify the extra index for pip. | This configuration should work, see the advanced-pip-example for other options. name: foo channels: - defaults dependencies: - python - pip - pip: - --extra-index-url https://download.pytorch.org/whl/cu116 - torch==1.12.1+cu116 See also Combining conda environment.yml with pip requirements.txt Can conda be configured to use a private pypi repo? | 17 | 27 |
73,282,411 | 2022-8-8 | https://stackoverflow.com/questions/73282411/how-to-add-background-tasks-when-request-fails-and-httpexception-is-raised-in-fa | I was trying to generate logs when an exception occurs in my FastAPI endpoint using a Background task as: from fastapi import BackgroundTasks, FastAPI app = FastAPI() def write_notification(message=""): with open("log.txt", mode="w") as email_file: content = f"{message}" email_file.write(content) @app.post("/send-notification/{email}") async def send_notification(email: str, background_tasks: BackgroundTasks): if "hello" in email: background_tasks.add_task(write_notification, message="helloworld") raise HTTPException(status_code=500, detail="example error") background_tasks.add_task(write_notification, message="hello world.") return {"message": "Notification sent in the background"} However, the logs are not generated because according to the documentation here and here, a background task runs "only" after the return statement is executed. Is there any workaround to this? | The way to do this is to override the HTTPException error handler, and since there is no BackgroundTasks object in the exception_handler, you can add a background task to a response in the way that is described in Starlette documentation (FastAPI is actually Starlette underneath). On a side note, FastAPI will run a background task defined with async def directly in the event loop, whereas a background task defined with normal def will be run in a separate thread from an external threadpool that is then awaited, as it would otherwise block the event loop. It is the same concept as API endpoints. Please have a look at this answer and this answer for more details that would help you decide on how to define your background task function. 
Example from fastapi import BackgroundTasks, FastAPI, HTTPException, Request from fastapi.responses import PlainTextResponse from starlette.exceptions import HTTPException as StarletteHTTPException from starlette.background import BackgroundTask app = FastAPI() def write_notification(message): with open('log.txt', 'a') as f: f.write(f'{message}'+'\n') @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): task = BackgroundTask(write_notification, message=exc.detail) return PlainTextResponse(str(exc.detail), status_code=exc.status_code, background=task) @app.get("/{msg}") def send_notification(msg: str, background_tasks: BackgroundTasks): if "hello" in msg: raise HTTPException(status_code=500, detail="Something went wrong") background_tasks.add_task(write_notification, message="Success") return {"message": "Request has been successfully submitted."} If you need to add multiple background tasks to a response, then use: from fastapi import BackgroundTasks @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): tasks = BackgroundTasks() tasks.add_task(write_notification, message=exc.detail) tasks.add_task(some_other_function, message="some other message") return PlainTextResponse(str(exc.detail), status_code=exc.status_code, background=tasks) A variation of the above approach is the following (suggested here): from starlette.background import BackgroundTask @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): response = PlainTextResponse(str(exc.detail), status_code=exc.status_code) response.background = BackgroundTask(write_notification, message=exc.detail) # to add multiple background tasks use: # response.background = tasks # create `tasks` as shown in the code above return response Here are some references that you might also helpful: (1) this answer demonstrates how to add custom exception handlers and (2) this answer shows a custom logging system for the incoming requests and outgoing responses. | 7 | 13 |
73,226,501 | 2022-8-3 | https://stackoverflow.com/questions/73226501/how-to-move-or-remove-the-legend-from-a-seaborn-jointgrid-or-jointplot | How to remove the legend in the seaborn.JoingGrid plot? The reference code is like below: import matplotlib.pyplot as plt import seaborn as sns penguins = sns.load_dataset("penguins") g = sns.JointGrid(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species") g.plot_joint(sns.scatterplot) sns.boxplot(data=penguins, x=g.hue, y=g.y, ax=g.ax_marg_y) sns.boxplot(data=penguins, y=g.hue, x=g.x, ax=g.ax_marg_x) plt.show() I have tried to use the following methods that are known to work on the other seaborn plots, but failed on the jointplot: plt.legend([],[], frameon=False) g._legend.remove() | To remove the legend, the correct part of the sns.JointGrid must be accessed. In this case g.ax_joint is where the legend is located. As stated in a comment by mwaskom, Matplotlib axes have .legend (a method that creates a legend) and .legend_ (the resulting object). Don't access variables that start with an underscore (_legend), because it indicates they are private. Tested in python 3.10, matplotlib 3.5.1, seaborn 0.11.2 g.plot_joint(sns.scatterplot, legend=False) may be used instead. sns.JointGrid import seaborn as sns penguins = sns.load_dataset("penguins") g = sns.JointGrid(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species") g.plot_joint(sns.scatterplot) # legend=False can be added instead sns.boxplot(data=penguins, x=g.hue, y=g.y, ax=g.ax_marg_y) sns.boxplot(data=penguins, y=g.hue, x=g.x, ax=g.ax_marg_x) # remove the legend from ax_joint g.ax_joint.legend_.remove() Moving the JointGrid legend can be done with sns.move_legend, as shown in this answer. This also requires using g.ax_joint. penguins = sns.load_dataset("penguins") g = sns.JointGrid(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species") g.plot_joint(sns.scatterplot) sns.boxplot(data=penguins, x=g.hue, y=g.y, ax=g.ax_marg_y) sns.boxplot(data=penguins, y=g.hue, x=g.x, ax=g.ax_marg_x) # move the legend in ax_joint sns.move_legend(g.ax_joint, "lower right", title='Species', frameon=False) With sns.jointplot g.ax_joint.legend_.remove() can be used, but removing the legend is more easily accomplished by passing legend=False to the plot: sns.jointplot(..., legend=False). g.ax_joint. is still required to move the legend. penguins = sns.load_dataset("penguins") g = sns.jointplot(data=penguins, x="bill_length_mm", y="bill_depth_mm", hue="species") # legend=False can be added instead # remove the legend g.ax_joint.legend_.remove() # or # move the legend # sns.move_legend(g.ax_joint, "lower right", title='Species', frameon=False) | 5 | 3 |
73,257,839 | 2022-8-6 | https://stackoverflow.com/questions/73257839/setup-py-install-is-deprecated-warning-shows-up-every-time-i-open-a-terminal-i | Every time I boot up terminal on VSCode, I get the following prompt. This does not happen on Terminal.app. /usr/local/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. How do I resolve this? | Install the setuptools 58.2.0 version using the following command: pip install setuptools==58.2.0 | 40 | 40 |
73,297,326 | 2022-8-9 | https://stackoverflow.com/questions/73297326/how-to-validate-keys-with-whitespaces-in-pydantic | I have a json key with whitespace in it: My_dict = {"my key": 1} I want to create a Model to model it: from pydantic import BaseModel class MyModel(BaseModel): mykey: int # my key isn't a legit variable name # my_key is, but like mykey - it doesn't catch the correct key from the json MyModel(**my_dict) This doesn't work. I tried playing with the BaseModel.Config, but didn't get anywhere. Didn't see anything on the docs as well. Is this possible? I can use a workaround: Go over the json, replace all key's whitespaces into underscores, and then use pydantic but I would love to not use this... | Yes, it's possible by using Field's aliases: from pydantic import BaseModel, Field class MyModel(BaseModel): mykey: int = Field(alias='my key') class Config: allow_population_by_field_name = True print(MyModel(**{"my key": 1})) print(MyModel(**{"mykey": 1})) [Edit @Omer Iftikhar] For Pydantic V2: You need to use populate_by_name instead of allow_population_by_field_name otherwise you will get following warning UserWarning: Valid config keys have changed in V2: * 'allow_population_by_field_name' has been renamed to 'populate_by_name' | 5 | 11 |
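The Pydantic V2 note in the answer above is prose only; here is a minimal sketch of the V2 form, assuming pydantic >= 2 (ConfigDict and populate_by_name), mirroring the V1 example from the answer.

```python
# Sketch: Pydantic V2 equivalent of the aliased-field example above.
from pydantic import BaseModel, ConfigDict, Field

class MyModel(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    mykey: int = Field(alias="my key")

print(MyModel(**{"my key": 1}))  # populate via the alias containing a space
print(MyModel(mykey=1))          # populate via the field name itself
```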
73,245,011 | 2022-8-5 | https://stackoverflow.com/questions/73245011/are-abstract-base-classes-redundant-since-protocol-classes-were-introduced | I'm learning how to use Protocol classes that were introduced in Python 3.8 (PEP 544). So typing.Protocol classes are subclasses of ABCMeta and they are treated just like abstract classes are, with the added benefit of allowing the use of structural subtyping. I was trying to think of what I would use abstract base classes for now, and I'm drawing a blank. What are the downsides of Protocol classes compared to ABCs (if any)? Maybe they come with a performance hit? Are there any specific cases where an ABC is still the best choice? | I prefer ABCs because they're explicit. With a Protocol someone reading the code may not know your class is intended to implement an interface in another module or deep in a dependency. Similarly, you can accidentally conform to a Protocol's signature without conforming to its contract. For example, if a function accepts a class Image(Protocol): def draw(self) -> None: ... it's obviously not going to make sense with a class Cowboy: def draw(self) -> None: ... but the type checker would happily accept it. | 17 | 10 |
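For contrast with the Protocol example in the answer above, here is a sketch of the same interface written as an explicit ABC; Image and Cowboy are carried over from the answer, while PngImage is a hypothetical class added for illustration.

```python
# Sketch: the interface as an explicit ABC. A class only counts as an Image
# if it deliberately inherits from it, so an accidental signature match
# (like Cowboy.draw) is not accepted by a type checker.
from abc import ABC, abstractmethod

class Image(ABC):
    @abstractmethod
    def draw(self) -> None: ...

class PngImage(Image):            # explicit, intentional conformance
    def draw(self) -> None:
        print("rendering pixels")

class Cowboy:                     # same signature, but not an Image
    def draw(self) -> None:
        print("reaching for a holster")

def render(image: Image) -> None:
    image.draw()

render(PngImage())     # fine
# render(Cowboy())     # flagged by a type checker: Cowboy is not an Image
```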
73,241,420 | 2022-8-4 | https://stackoverflow.com/questions/73241420/how-to-align-yticklabels-when-combining-a-barplot-with-heatmap | I have similar problems as this question; I am trying to combine three plots in Seaborn, but the labels on my y-axis are not aligned with the bars. My code (now a working copy-paste example): import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from matplotlib.colors import LogNorm ### Generate example data np.random.seed(123) year = [2018, 2019, 2020, 2021] task = [x + 2 for x in range(18)] student = [x for x in range(200)] amount = [x + 10 for x in range(90)] violation = [letter for letter in "thisisjustsampletextforlabels"] # one letter labels df_example = pd.DataFrame({ # some ways to create random data 'year':np.random.choice(year,500), 'task':np.random.choice(task,500), 'violation':np.random.choice(violation, 500), 'amount':np.random.choice(amount, 500), 'student':np.random.choice(student, 500) }) ### My code temp = df_example.groupby(["violation"])["amount"].sum().sort_values(ascending = False).reset_index() total_violations = temp["amount"].sum() sns.set(font_scale = 1.2) f, axs = plt.subplots(1,3, figsize=(5,5), sharey="row", gridspec_kw=dict(width_ratios=[3,1.5,5])) # Plot frequency df1 = df_example.groupby(["year","violation"])["amount"].sum().sort_values(ascending = False).reset_index() frequency = sns.barplot(data = df1, y = "violation", x = "amount", log = True, ax=axs[0]) # Plot percent df2 = df_example.groupby(["violation"])["amount"].sum().sort_values(ascending = False).reset_index() total_violations = df2["amount"].sum() percent = sns.barplot(x='amount', y='violation', estimator=lambda x: sum(x) / total_violations * 100, data=df2, ax=axs[1]) # Pivot table and plot heatmap df_heatmap = df_example.groupby(["violation", "task"])["amount"].sum().sort_values(ascending = False).reset_index() df_heatmap_pivot = df_heatmap.pivot("violation", "task", "amount") df_heatmap_pivot = df_heatmap_pivot.reindex(index=df_heatmap["violation"].unique()) heatmap = sns.heatmap(df_heatmap_pivot, fmt = "d", cmap="Greys", norm=LogNorm(), ax=axs[2]) plt.subplots_adjust(top=1) axs[2].set_facecolor('xkcd:white') axs[2].set(ylabel="",xlabel="Task") axs[0].set_xlabel('Total amount of violations per year') axs[1].set_xlabel('Percent (%)') axs[1].set_ylabel('') axs[0].set_ylabel('Violation') The result can be seen here: The y-labels are aligned according to my last plot, the heatmap. However, the bars in the bar plots are clipping at the top, and are not aligned to the labels. I just have to nudge the bars in the barplot -- but how? I've been looking through the documentation, but I feel quite clueless as of now. | See here that none of the y-axis ticklabels are aligned because multiple dataframes are used for plotting. It will be better to create a single dataframe, violations, with the aggregated data to be plotted. Start with the sum of amounts by violation, and then add a new percent column. This will insure the two bar plots have the same y-axis. Instead of using .groupby and then .pivot, to create df_heatmap_pivot, use .pivot_table, and then reindex using violations.violation. 
Tested in python 3.10, pandas 1.4.3, matplotlib 3.5.1, seaborn 0.11.2 DataFrames and Imports import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LogNorm # Generate example data year = [2018, 2019, 2020, 2021] task = [x + 2 for x in range(18)] student = [x for x in range(200)] amount = [x + 10 for x in range(90)] violation = list("thisisjustsampletextforlabels") # one letter labels np.random.seed(123) df_example = pd.DataFrame({name: np.random.choice(group, 500) for name, group in zip(['year', 'task', 'violation', 'amount', 'student'], [year, task, violation, amount, student])}) # organize all of the data # violations frequency violations = df_example.groupby(["violation"])["amount"].sum().sort_values(ascending=False).reset_index() total_violations = violations["amount"].sum() # add percent violations['percent'] = violations.amount.div(total_violations).mul(100).round(2) # Use .pivot_table to create the pivot table df_heatmap_pivot = df_example.pivot_table(index='violation', columns='task', values='amount', aggfunc='sum') # Set the index to match the plot order of the 'violation' column df_heatmap_pivot = df_heatmap_pivot.reindex(index=violations.violation) Plotting Using sharey='row' is causing the alignment problem. Use sharey=False, and remove the yticklabels from axs[1] and axs[2], with axs[1 or 2].set_yticks([]). This is the case because ylim for the heatmap is not the same as for barplot. As such, the heatmap is shifted. .get_ylim() for axs[0] and axs[1] is (15.5, -0.5), which for axs[2] is (16.0, 0.0). See How to add value labels on a bar chart for additional details and examples using .bar_label. # set seaborn plot format sns.set(font_scale=1.2) # create the figure and set sharey=False f, axs = plt.subplots(1, 3, figsize=(12, 12), sharey=False, gridspec_kw=dict(width_ratios=[3,1.5,5])) # Plot frequency sns.barplot(data=violations, x="amount", y="violation", log=True, ax=axs[0]) # Plot percent sns.barplot(data=violations, x='percent', y='violation', ax=axs[1]) # add the bar labels axs[1].bar_label(axs[1].containers[0], fmt='%.2f%%', label_type='edge', padding=3) # add extra space for the annotation axs[1].margins(x=1.3) # plot the heatmap heatmap = sns.heatmap(df_heatmap_pivot, fmt = "d", cmap="Greys", norm=LogNorm(), ax=axs[2]) # additional formatting axs[2].set_facecolor('xkcd:white') axs[2].set(ylabel="", xlabel="Task") axs[0].set_xlabel('Total amount of violations per year') axs[1].set_xlabel('Percent (%)') axs[1].set_ylabel('') axs[0].set_ylabel('Violation') # remove yticks / labels axs[1].set_yticks([]) _ = axs[2].set_yticks([]) Comment out the last two lines to verify the yticklabels are aligned for each axs. 
DataFrame Views df_example.head() year task violation amount student 0 2020 2 i 84 59 1 2019 2 u 12 182 2 2020 5 s 20 9 3 2020 11 u 56 163 4 2018 17 t 59 125 violations violation amount percent 0 s 4869 17.86 1 l 3103 11.38 2 t 3044 11.17 3 e 2634 9.66 4 a 2177 7.99 5 i 2099 7.70 6 h 1275 4.68 7 f 1232 4.52 8 b 1191 4.37 9 m 1155 4.24 10 o 1075 3.94 11 p 763 2.80 12 r 762 2.80 13 j 707 2.59 14 u 595 2.18 15 x 578 2.12 df_heatmap_pivot task 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 violation s 62.0 36.0 263.0 273.0 191.0 250.0 556.0 239.0 230.0 188.0 185.0 516.0 249.0 331.0 212.0 219.0 458.0 411.0 l 83.0 245.0 264.0 451.0 155.0 314.0 98.0 125.0 310.0 117.0 21.0 99.0 98.0 50.0 40.0 268.0 192.0 173.0 t 212.0 255.0 45.0 141.0 74.0 135.0 52.0 202.0 107.0 128.0 158.0 NaN 261.0 137.0 339.0 207.0 362.0 229.0 e 215.0 315.0 NaN 116.0 213.0 165.0 130.0 194.0 56.0 355.0 75.0 NaN 118.0 189.0 160.0 177.0 79.0 77.0 a 135.0 NaN 165.0 156.0 204.0 115.0 77.0 65.0 80.0 143.0 83.0 146.0 21.0 29.0 285.0 72.0 116.0 285.0 i 209.0 NaN 20.0 187.0 83.0 136.0 24.0 132.0 257.0 56.0 201.0 52.0 136.0 226.0 104.0 145.0 91.0 40.0 h 27.0 NaN 255.0 NaN 99.0 NaN 71.0 53.0 100.0 89.0 NaN 106.0 NaN 170.0 86.0 79.0 140.0 NaN f 75.0 23.0 99.0 NaN 26.0 103.0 NaN 185.0 99.0 145.0 NaN 63.0 64.0 29.0 114.0 141.0 38.0 28.0 b 44.0 70.0 56.0 12.0 55.0 14.0 158.0 130.0 NaN 11.0 21.0 NaN 52.0 137.0 162.0 NaN 231.0 38.0 m 86.0 NaN NaN 147.0 74.0 131.0 49.0 180.0 94.0 16.0 NaN 88.0 NaN NaN NaN 51.0 161.0 78.0 o 109.0 NaN 51.0 NaN NaN NaN 20.0 139.0 149.0 NaN 101.0 60.0 NaN 143.0 39.0 73.0 10.0 181.0 p 16.0 NaN 197.0 50.0 87.0 NaN 88.0 NaN 11.0 162.0 NaN 14.0 NaN 78.0 45.0 NaN NaN 15.0 r NaN 85.0 73.0 40.0 NaN NaN 68.0 77.0 NaN 26.0 122.0 105.0 NaN 98.0 NaN NaN NaN 68.0 j NaN 70.0 NaN NaN 73.0 76.0 NaN 150.0 NaN NaN NaN 81.0 NaN 97.0 97.0 63.0 NaN NaN u 174.0 45.0 NaN NaN 32.0 NaN NaN 86.0 30.0 56.0 13.0 NaN 24.0 NaN NaN 69.0 54.0 12.0 x 69.0 29.0 NaN 106.0 NaN 43.0 NaN NaN NaN 97.0 56.0 29.0 149.0 NaN NaN NaN NaN NaN | 7 | 7 |
73,297,673 | 2022-8-9 | https://stackoverflow.com/questions/73297673/what-is-the-difference-between-queryset-last-and-latest-in-django | I want to get the data that was inserted last, so I used this Django code: user = CustomUser.objects.filter(email=email).last() It gives me the last user's details. But then, experimentally, I used: user = CustomUser.objects.filter(email=email).latest() and it didn't give me a user object. Now, what is the difference between earliest(), latest(), first() and last()? | There are several differences between .first() [Django-doc]/.last() [Django-doc] and .earliest(…) [Django-doc]/.latest(…) [Django-doc]. The main ones are: .first() and .last() do not take field names (or orderable expressions) to order by (they have no parameters), whereas .earliest(…) and .latest(…) do; .first() and .last() will work with the ordering of the queryset if there is one, while .earliest(…) and .latest(…) will omit any .order_by(…) clause that has already been used; if the queryset is not ordered, .first() and .last() will order by the primary key and return the first/last item of that queryset, whereas .earliest(…) and .latest(…) will look for the get_latest_by model option [Django-doc] if no fields are specified; and .first() and .last() will return None in case the queryset is empty, whereas .earliest(…) and .latest(…) will raise a DoesNotExist exception [Django-doc]. | 8 | 18 |
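The answer above is prose only, so here is a hedged sketch of the behavioural differences; the CustomUser model and its date_joined field are hypothetical stand-ins for the question's model, assuming a normal Django project where the model lives in an installed app.

```python
# Sketch only (hypothetical model inside an app's models.py).
from django.db import models

class CustomUser(models.Model):
    email = models.EmailField()
    date_joined = models.DateTimeField(auto_now_add=True)

    class Meta:
        # Used by .latest()/.earliest() when no field name is passed.
        get_latest_by = "date_joined"

# .last() follows the queryset ordering (or the primary key when unordered)
# and returns None if nothing matches:
user = CustomUser.objects.filter(email="a@example.com").last()

# .latest() orders by the field you pass (or Meta.get_latest_by) and raises
# CustomUser.DoesNotExist if nothing matches:
try:
    user = CustomUser.objects.filter(email="a@example.com").latest("date_joined")
except CustomUser.DoesNotExist:
    user = None
```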
73,274,526 | 2022-8-8 | https://stackoverflow.com/questions/73274526/how-to-integrate-torch-into-a-docker-image-while-keeping-image-size-reasonable | So I've a Flask web app that will be exposing some deep learning models. I built the image and everything works fine. the problem is the size of this image is 5.58GB! which is a bit ridiculous. I have some deep learning models that are copied during the build, I thought they might be the culprit but their size combined does not exceed 300MB so that's definately not it. upon checking the history and the size of each layer I discovered this: RUN /bin/sh -c pip install -r requirements.txt is taking up 771MB. RUN /bin/sh -c pip install torch==1.10.2 is taking up 2.8GB! RUN /bin/sh -c apt-get install ffmpeg libsm6 libxext6 is taking up 400MB. so how do I incorporate these libraries while keeping image size reasonable? is it ok to have images of these size when deploying ml models in python? below is the root directory: Dockerfile: FROM python:3.7.13 WORKDIR /app COPY ["rdm.pt", "autosort_model.pt", "rotated_model.pt", "yolov5x6.pt", "/app/"] RUN pip install torch==1.10.2 COPY requirements.txt /app/requirements.txt RUN pip install -r requirements.txt RUN apt-get update RUN apt-get install ffmpeg libsm6 libxext6 -y COPY . /app CMD python ./app.py .dockerignore: Dockerfile README.md __pycache__ | By default torch will package CUDA packages and stuff. Add --extra-index-url https://download.pytorch.org/whl/cpu and --no-cache-dir to pip install command if you do not require CUDA. RUN pip install --no-cache-dir -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu Also it's good practice to remove the apt list cache: RUN apt-get update \ && apt-get install -y \ ffmpeg \ libsm6 \ libxext6 \ && rm -rf /var/lib/apt/lists/* | 5 | 10 |
73,268,995 | 2022-8-7 | https://stackoverflow.com/questions/73268995/typeerror-when-calling-super-in-dataclassslots-true-subclass | I am trying to call a a superclass method from a dataclass with slots=True in Python 3.10.5. from dataclasses import dataclass @dataclass(slots=True) class Base: def hi(self): print("Hi") @dataclass(slots=True) class Sub(Base): def hi(self): super().hi() Sub().hi() I get the following error. Traceback (most recent call last): File "...", line 16, in <module> Sub().hi() File "...", line 13, in hi super().hi() TypeError: super(type, obj): obj must be an instance or subtype of type It works fine if I remove slots=True from Sub, or make it a non-dataclass with __slots__ manually. The error remains if I instead do these to Base. Sub.__mro__ is (<class '__main__.Sub'>, <class '__main__.Base'>, <class 'object'>) and isinstance(Sub(), Base) is True. | As seen here, the dataclass decorator creates a new class object, and so the __closure__ attached to hi() is different to the one attached to the decorated class, and therefore the super() call cannot work without arguments due to relying on the __closure__. Therefore, you need to change super().hi() to super(Sub, self).hi(). | 13 | 16 |
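The fix in the answer above is stated in prose; a minimal sketch of the corrected code follows, assuming Python 3.10+ (needed for slots=True).

```python
# Sketch of the fix: pass the class and instance to super() explicitly.
# The zero-argument form fails because slots=True makes the decorator build
# a brand-new class object, so the captured __class__ cell is stale.
from dataclasses import dataclass

@dataclass(slots=True)
class Base:
    def hi(self):
        print("Hi")

@dataclass(slots=True)
class Sub(Base):
    def hi(self):
        super(Sub, self).hi()   # explicit two-argument form works

Sub().hi()   # prints "Hi"
```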
73,267,607 | 2022-8-7 | https://stackoverflow.com/questions/73267607/how-to-train-model-with-multiple-gpus-in-pytorch | My server has two GPUs, How can I use two GPUs for training at the same time to maximize their computing power? Is my code below correct? Does it allow my model to be properly trained? class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.bert = pretrained_model # for param in self.bert.parameters(): # param.requires_grad = True self.linear = nn.Linear(2048, 4) #def forward(self, input_ids, token_type_ids, attention_mask): def forward(self, input_ids, attention_mask): batch = input_ids.size(0) #output = self.bert(input_ids, token_type_ids, attention_mask).pooler_output output = self.bert(input_ids, attention_mask).last_hidden_state print('last_hidden_state',output.shape) # torch.Size([1, 768]) #output = output.view(batch, -1) # output = output[:,-1,:]#(batch_size, hidden_size*2)(batch_size,1024) output = self.linear(output) return output device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if torch.cuda.device_count() > 1: print("Use", torch.cuda.device_count(), 'gpus') model = MyModel() model = nn.DataParallel(model) model = model.to(device) | There are two different ways to train on multiple GPUs: Data Parallelism = splitting a large batch that can't fit into a single GPU memory into multiple GPUs, so every GPU will process a small batch that can fit into its GPU Model Parallelism = splitting the layers within the model into different devices is a bit tricky to manage and deal with. Please refer to this post for more information To do Data Parallelism in pure PyTorch, please refer to this example that I created a while back to the latest changes of PyTorch (as of today, 1.12). To utilize other libraries to do multi-GPU training without engineering many things, I would suggest using PyTorch Lightning as it has a straightforward API and good documentation to learn how to do multi-GPU training using Data Parallelism. Update: 2022/10/25 Here is a video explaining in much details about different types of distributed training: https://youtu.be/BPYOsDCZbno?t=1011 | 8 | 11 |
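The answer above describes data parallelism in prose and points to external examples; below is a minimal hedged sketch of the simplest option, nn.DataParallel, with a placeholder model and a dummy batch (PyTorch generally recommends DistributedDataParallel over DataParallel for serious multi-GPU training).

```python
# Minimal data-parallel sketch: the placeholder nn.Linear stands in for the
# question's MyModel, and the random tensors stand in for a real batch.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 4)                  # stand-in for MyModel
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)         # splits each batch across the GPUs
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

inputs = torch.randn(32, 128).to(device)   # dummy batch
labels = torch.randint(0, 4, (32,)).to(device)

optimizer.zero_grad()
loss = criterion(model(inputs), labels)    # forward is scattered and gathered
loss.backward()
optimizer.step()
```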
73,275,243 | 2022-8-8 | https://stackoverflow.com/questions/73275243/vs-code-debugger-immediately-exits | I use VS Code for a python project but recently whenever I launch the debugger it immediately exits. The debug UI will pop up for half a second then disappear. I can't hit a breakpoint no matter where it's placed in the current file. The project has the expected normal behavior when run in non-debug mode. I vaguely remember a command being inserted into the terminal window when I used to click debug but now I see nothing. I opened a totally different project but debugger still exits immediately. Any advice? Anywhere I can find logs for the debugger run? My launch.json file: { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "justMyCode": true } ] } I have tried: running app as admin, reinstalling vs code, reinstalling python extension, restarting app, restarting computer, disabling all non-essential extensions, deleting launch.json, launching a file with only print statement. | Please install Python 3.7 or later. If you must use Python 3.6 or earlier, rollback the Python extension to version 2022.08.0. | 8 | 7 |
73,241,595 | 2022-8-4 | https://stackoverflow.com/questions/73241595/why-is-vscode-on-ubuntu-searching-for-the-python-dist-packages-directory-in-a-us | I'm running vscode release (1.69.2 dated 7/18/22) with the python extension on Ubuntu 20.04. When I try to run some code with the debugger I get an exception because it can't find my_venv_dir/lib/python3.8/dist-packages. I've read a bit about Ubuntu's use of dist-packages and site-packages but I haven't found any information that suggests that there should be a dist-packages directory in my venv. There isn't one. When I run the code without the debugger I don't see the exception. When I run it in the debugger, the exception occurs before I hit a very early breakpoint. The exception seems to happen in startup code. So I don't know if there's a venv issue or a vscode/python-extension issue. I created the virtual env using python -m venv Here's the call stack: islink (/usr/lib/python3.8/posixpath.py:167) _joinrealpath (/usr/lib/python3.8/posixpath.py:425) realpath (/usr/lib/python3.8/posixpath.py:391) _run_code (/usr/lib/python3.8/runpy.py:87) _run_module_as_main (/usr/lib/python3.8/runpy.py:194) Maybe this is more useful: Exception has occurred: FileNotFoundError [Errno 2] No such file or directory: '/home/my/venv/dir/lib/python3.8/dist-packages' File "/usr/lib/python3.8/posixpath.py", line 167, in islink st = os.lstat(path) File "/usr/lib/python3.8/posixpath.py", line 425, in _joinrealpath if not islink(newpath): File "/usr/lib/python3.8/posixpath.py", line 391, in realpath path, ok = _joinrealpath(filename[:0], filename, {}) File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, launch.json: { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "justMyCode": false } ] } This is the sys.path: (myvenv) host:~/dev/myprj$ python Python 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> print(sys.path) ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/my/venv/dir/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] >>> | It looks like something was corrupted in the vscode configuration. I tried uninstalling and reinstalling vscode but noticed that when I started up again, vscode remembered where I left off. When I deleted the ~/.config/Code directory the problem was fixed. I suspect that the corruption happened when I updated to a newer release but I can't be sure. | 6 | 2 |
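The fix in the answer above was deleting ~/.config/Code; as a cautious variant (not part of the accepted answer), here is a small sketch that renames the directory instead, so the old settings can be restored if resetting does not help. Close VS Code before running it.

```python
# Sketch: back up the VS Code config directory (Linux path) rather than
# deleting it outright; VS Code recreates a fresh one on the next start.
from pathlib import Path

config_dir = Path.home() / ".config" / "Code"
backup_dir = config_dir.with_name("Code.bak")

if config_dir.exists() and not backup_dir.exists():
    config_dir.rename(backup_dir)
    print(f"Moved {config_dir} -> {backup_dir}")
else:
    print("Nothing to do (config missing or backup already present)")
```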
73,238,082 | 2022-8-4 | https://stackoverflow.com/questions/73238082/pip-install-returning-a-valueerror | I recently tried to install a library using Pip, and I received this error message. I am unable to install any packages, as the same error message keeps popping up. I notice this problem in both my main enviroment, and my venv Virtual Environment. Any help will be much appreciated. WARNING: Ignoring invalid distribution -illow (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -aleido (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -illow (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -aleido (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) ERROR: Exception: Traceback (most recent call last): File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_internal\cli\base_command.py", line 167, in exc_logging_wrapper status = run_func(*args) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_internal\cli\req_command.py", line 205, in wrapper return func(self, options, args) ... resp = self.send(prep, **send_kwargs) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\requests\sessions.py", line 645, in send r = adapter.send(request, **kwargs) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\cachecontrol\adapter.py", line 57, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\requests\adapters.py", line 440, in send resp = conn.urlopen( File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\urllib3\connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\urllib3\connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\urllib3\connectionpool.py", line 1040, in _validate_conn conn.connect() File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages\pip\_vendor\urllib3\connection.py", line 401, in connect context.verify_mode = resolve_cert_reqs(self.cert_reqs) File "c:\users\brdwoo\appdata\local\programs\python\python39\lib\ssl.py", line 720, in verify_mode super(SSLContext, SSLContext).verify_mode.__set__(self, value) ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is enabled. WARNING: Ignoring invalid distribution -illow (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -aleido (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -illow (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: Ignoring invalid distribution -aleido (c:\users\brdwoo\appdata\local\programs\python\python39\lib\site-packages) WARNING: There was an error checking the latest version of pip. | I was able to fix the problem by hard-coding in a 1, instead of having CERT_NONE being passed to verify_mode. 
The error message gave me the location of the code: File "C:\Users\name\AppData\Local\Programs\Python\Python310\lib\ssl.py", line 738, in verify_mode super(SSLContext, SSLContext).verify_mode.__set__(self, 0) ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is enabled. I swapped the variable value -> 1 @verify_mode.setter def verify_mode(self, value): super(SSLContext, SSLContext).verify_mode.__set__(self, 1) | 7 | 2 |
73,234,123 | 2022-8-4 | https://stackoverflow.com/questions/73234123/how-copy-file-to-clipboard-using-python-or-cl-to-paste-it-using-strgv-later-on | I am trying to copy (using python or a CL command which I then can call using python) a file to the clipboard to later paste it using STRG+V. As far as I understand it, files are not "moved" into the clipboard, but rather the clipboard holds the path and an argument/flag that tells the OS "this is a file". I am happy with a linux-specific answer, but a universal answer would be the cherry on top. pyperclip Is not a solution, because it doesn't allow to copy files, just strings. xclip Is not a solution, because it only copies text xclip-copyfile Is not a solution, because it only copies to the X clipboard, not the clipboard. While xclip offers the option -selection clipboard (but only copies text), xclip-copyfile has no such option. Using find find ${PWD} -name "*.pdf"| xclip -i -selection clipboard -t text/uri-list is a command described here: https://askubuntu.com/questions/210413/what-is-the-command-line-equivalent-of-copying-a-file-to-clipboard#answer-210428 But I can't replicate copying files with it and therefore assume that it is not working for all files. | Configurations The clipboard is part of the Window Management and not of the Linux operating system itself. Different configurations with different distributions behave differently and therefore require different variants. Meanwhile, Wayland is increasingly on the way to successively replace X, which means there are three configurations to consider: Wayland only Wayland together with XWayland (compatibility with non-adapted X software) X Sending clipboard content When saving to the clipboard, the system first only informs the receiver that data is available for the clipboard. Only on request, the actual data is sent. The program that sends the content to the clipboard must therefore not be terminated before the data has been transferred. Depending on the environment/configuration, it is also possible that the content of the clipboard is deleted as soon as the program is terminated. How then does the xclip program already mentioned in the question work? It seems to terminate immediately after being called. But on closer inspection it doesn't, because it performs a fork, so that it is still present in the background (easily verifiable by looking at the source code or the command ps). Format Furthermore, different environments require the content in different ways. For example GNOME requires the list of files to be copied with the special target x-special/gnome-copied-files and a special formatting of the content, e.g. copy\nfile:///etc/group for the GNOME file manager Nautilus to perform the copy operation correctly. Under KDE, on the other hand, there is then only one URI list with the target text/uri-list. Determining the environment The following example program works for Linuxmint 20.2 Cinnamon, Ubuntu 22.04 with Gnome and Kubuntu 22.04 with KDE. Other distributions / configurations may require some customization. Here it is advisable to simply copy a file in the appropriate file manager and then look at the clipboard contents with a program and then make appropriate adaptions to the script. Based on the environment variables XDG_CURRENT_DESKTOP and WAYLAND_DISPLAY the following program tries to determine the environments. If it is Wayland, wl-copy is used, otherwise xclip is used. The target and the content formatting is adapted accordingly. 
With subprocess.Popen the tool is started and the content is sent to stdin of the tool. As soon as this is done, the program exits. Both wl-copy and xclip then create a fork, ensuring that the data is present in the clipboard. import os import subprocess import sys from pathlib import Path gnome_desktops = ['X-Cinnamon', 'XFCE'] def is_gnome(desktop): if desktop.endswith("GNOME") or desktop in gnome_desktops: return True return False def target(): current_desktop = os.environ['XDG_CURRENT_DESKTOP'] if is_gnome(current_desktop): return 'x-special/gnome-copied-files' elif current_desktop == 'KDE': return 'text/uri-list' else: sys.exit(f'unsupported desktop {current_desktop}') def base_copy_cmd(): if 'WAYLAND_DISPLAY' in os.environ: return 'wl-copy' return 'xclip -i -selection clipboard' def copy_clipboard_cmd(): return f"{base_copy_cmd()} -t '{target()}'" def content(files_to_copy): uris = '\n'.join([Path(f).as_uri() for f in files_to_copy]) current_desktop = os.environ['XDG_CURRENT_DESKTOP'] if is_gnome(current_desktop): return f"copy\n{uris}".encode("utf-8") return uris.encode("utf-8") def copy_to_clipboard(files_to_copy): copy_process = subprocess.Popen(copy_clipboard_cmd(), shell=True, stdin=subprocess.PIPE) copy_process.stdin.write(content(files_to_copy)) copy_process.stdin.close() copy_process.wait() if __name__ == '__main__': files = ['/etc/hosts', '/etc/group'] copy_to_clipboard(files) As mentioned above for other environments simply copy a file in the native file manager and then inspect the current clipboard contents and make appropriate adjustments to the script. Depending on the environment, xclip or wl-copy (install the package wl-clipboard with your package manager) must be there. Detailed information about wl-copy can be found here: https://github.com/bugaevc/wl-clipboard. Inspect Clipboard Finally, to be able to dump the current contents of the clipboard, here is a small script that does just that. So it is possible to see what other programs like the native file manager put into the clipboard. Usually many programs put several different representations targets of the same data into the clipboard. import gi gi.require_version("Gtk", "3.0") from gi.repository import Gtk, Gdk def on_activate(app): win = Gtk.ApplicationWindow(application=app) win.set_title("GTK Clipboard Util") win.set_default_size(256, 192) btn = Gtk.Button(label="Dump Clipboard") btn.connect('clicked', dump) box = Gtk.VBox() win.add(box) box.add(btn) win.show_all() def dump(button): cb_targets = [] counter = 0 def print_content(clipboard, data): print(data.get_data()) print() print_next_target_and_content(clipboard) def print_next_target_and_content(clipboard): nonlocal counter if counter < len(cb_targets): target = cb_targets[counter] print(target) clipboard.request_contents(target, print_content) counter += 1 def get_targets(clipboard, targets, n_targets): nonlocal counter nonlocal cb_targets counter = 0 cb_targets = targets print_next_target_and_content(clipboard) gtk_clipboard = Gtk.Clipboard.get(Gdk.SELECTION_CLIPBOARD) gtk_clipboard.request_targets(get_targets) if __name__ == '__main__': app = Gtk.Application(application_id='com.software7.clipboard.formats') app.connect('activate', on_activate) app.run(None) | 6 | 6 |
73,269,000 | 2022-8-7 | https://stackoverflow.com/questions/73269000/efficient-logic-to-pad-tensor | I'm trying to pad a tensor of some shape such that the total memory used by the tensor is always a multiple of 512 E.g. Tensor shape 16x1x1x4 of type SI32 (Multiply by 4 to get total size) The total elements are 16x4x1x1 = 64 Total Memory required 64x**4** = 256 (Not multiple of 512) Padded shape would be 32x1x1x4 = 512 The below logic works for the basic shape but breaks with a shape e.g. 16x51x1x4 SI32 or something random say 80x240x1x1 U8 The padding logic goes like below from functools import reduce DATA_TYPE_MULTIPLYER = 2 # This would change at runtime with different type e.g. 8 with U8 16 with F16 32 with SI32 ALIGNMENT = 512 #Always Constant CHAR_BIT = 8 # Always Const for given fixed Arch def approachOne(tensor): totalElements = reduce((lambda x, y: x * y), tensor) totalMemory = totalElements * DATA_TYPE_MULTIPLYER divisor = tensor[1] * tensor[2] * tensor[3] tempDimToPad = totalElements/divisor orgDimToPad = totalElements/divisor while (True): if ((tempDimToPad * divisor * DATA_TYPE_MULTIPLYER) % ALIGNMENT == 0): return int(tempDimToPad - orgDimToPad) tempDimToPad = tempDimToPad + 1; def getPadding(tensor): totalElements = reduce((lambda x, y: x * y), tensor) totalMemory = totalElements * DATA_TYPE_MULTIPLYER newSize = totalMemory + (ALIGNMENT - (totalMemory % ALIGNMENT)) newTotalElements = (newSize * CHAR_BIT) / (CHAR_BIT * DATA_TYPE_MULTIPLYER) # Any DIM can be padded, using first for now paddingValue = tensor[0] padding = int(((newTotalElements * paddingValue) / totalElements) - paddingValue) return padding tensor = [11, 7, 3, 5] print(getPadding(tensor)) print(approachOne(tensor)) tensorflow package may help here but I'm originally coding in C++ so just posting in python with a minimal working example Any help is appreciated, thanks Approach 1 the brute force approach is to keep on incrementing across any chosen dimension by 1 and check if the totalMemory is multiple of 512. The brute force approach works but doesn't give the minimal padding and bloats the tensor Updating the conditions Initially the approach was to pad across the first dim. Since always padding the first dimension my not be the best solution, just getting rid of this constraint | If you want the total memory to be a multiple of 512 then the number of elements in the tensor must be a multiple of 512 // DATA_TYPE_MULTIPLIER, e.g. 128 in your case. Whatever that number is, it will have a prime factorization of the form 2**n. The number of elements in the tensor is given by s[0]*s[1]*...*s[d-1] where s is a sequence containing the shape of the tensor and d is an integer, the number of dimensions. The product s[0]*s[1]*...*s[d-1] also has some prime factorization and it is a multiple of 2**n if and only if it contains these prime factors. I.e. the task is to pad the individual dimensions s[i] such that the resulting prime factorization of the product s[0]*s[1]*...*s[d-1] contains 2**n. If the goal is to reach a minimum possible size of the padded tensor, then one can simply iterate through all multiples of the given target number of elements to find the first one that can be satisfied by padding (increasing) the individual dimensions of the tensor (1). A dimension must be increased as long as it contains at least one prime factor that is not contained in the target multiple size. 
After all dimensions have been increased such that their prime factors are contained in the target multiple size, one can check the resulting size of the candidate shape: if it matches the target multiple size we are done; if its prime factors are a strict subset of the target multiple prime factors, we can add the missing prime factors to any of the dimensions (e.g. the first); otherwise, we can use the excess prime factors to store the candidate shape for a future (larger) multiplier. The first such future multiplier then marks an upper boundary for the iteration over all possible multipliers, i.e. the algorithm will terminate. However, if the candidate shape (after adjusting all the dimensions) has an excess of prime factors w.r.t. the target multiple size as well as misses some other prime factors, the only way is to iterate over all possible padded shapes with size bound by the target multiple size. The following is an example implementation: from collections import Counter import itertools as it import math from typing import Iterator, Sequence def pad(shape: Sequence[int], target: int) -> tuple[int,...]: """Pad the given `shape` such that the total number of elements is a multiple of the given `target`. """ size = math.prod(shape) if size % target == 0: return tuple(shape) target_prime_factors = get_prime_factors(target) solutions: dict[int, tuple[int,...]] = {} # maps `target` multipliers to corresponding padded shapes for multiplier in it.count(math.ceil(size / target)): if multiplier in solutions: return solutions[multiplier] prime_factors = [*get_prime_factors(multiplier), *target_prime_factors] def good(x): return all(f in prime_factors for f in get_prime_factors(x)) candidate = list(shape) for i, x in enumerate(candidate): while not good(x): x += 1 candidate[i] = x if math.prod(candidate) == multiplier*target: return tuple(candidate) candidate_prime_factor_counts = Counter(f for x in candidate for f in get_prime_factors(x)) target_prime_factor_counts = Counter(prime_factors) missing = target_prime_factor_counts - candidate_prime_factor_counts excess = candidate_prime_factor_counts - target_prime_factor_counts if not excess: return ( candidate[0] * math.prod(k**v for k, v in missing.items()), *candidate[1:], ) elif not missing: solutions[multiplier * math.prod(k**v for k, v in excess.items())] = tuple(candidate) else: for padded_shape in generate_all_padded_shapes(shape, bound=multiplier*target): padded_size = math.prod(padded_shape) if padded_size == multiplier*target: return padded_shape elif padded_size % target == 0: solutions[padded_size // target] = padded_shape def generate_all_padded_shapes(shape: Sequence[int], *, bound: int) -> Iterator[tuple[int,...]]: head, *tail = shape if bound % head == 0: max_value = bound // math.prod(tail) else: max_value = math.floor(bound / math.prod(tail)) for x in range(head, max_value+1): if tail: yield from ((x, *other) for other in generate_all_padded_shapes(tail, bound=math.floor(bound/x))) else: yield (x,) def get_prime_factors(n: int) -> list[int]: """From: https://stackoverflow.com/a/16996439/3767239 Replace with your favorite prime factorization method. 
""" primfac = [] d = 2 while d*d <= n: while (n % d) == 0: primfac.append(d) # supposing you want multiple factors repeated n //= d d += 1 if n > 1: primfac.append(n) return primfac Here are a few examples: pad((16, 1, 1), 128) = (128, 1, 1) pad((16, 51, 1, 4), 128) = (16, 52, 1, 4) pad((80, 240, 1, 1), 128) = (80, 240, 1, 1) pad((3, 5, 7, 11), 128) = (3, 5, 8, 16) pad((3, 3, 3, 1), 128) = (8, 4, 4, 1) pad((7, 7, 7, 7), 128) = (7, 8, 8, 8) pad((9, 9, 9, 9), 128) = (10, 10, 10, 16) Footnotes: (1) In fact, we need to find the roots of the polynomial (s[0]+x[0])*(s[1]+x[1])*...*(s[d-1]+x[d-1]) - multiple*target for x[i] >= 0 over the domain of integers. However, I am not aware of any algorithm to solve this problem. | 7 | 5 |
73,275,868 | 2022-8-8 | https://stackoverflow.com/questions/73275868/what-is-the-best-way-to-perform-value-estimation-on-a-dataset-with-discrete-con | What is the best approach to this regression problem, in terms of performance as well as accuracy? Would feature importance be helpful in this scenario? And how do I process this large range of data? Please note that I am not an expert on any of this, so I may have bad information or theories about why things/methods don't work. The Data: Each item has an id and various attributes. Most items share the same attributes, however there are a few special items with items specific attributes. An example would look something like this: item = { "item_id": "AMETHYST_SWORD", "tier_upgrades": 1, # (0-1) "damage_upgrades": 15, # (0-15) ... "stat_upgrades": 5 # (0-5) } The relationship between any attribute and the value of the item is linear; if the level of an attribute is increased, so is the value, and vise versa. However, an upgrade at level 1 is not necessarily 1/2 of the value of an upgrade at level 2; the value added for each level increase is different. The value of each upgrade is not constant between items, nor is the price of the item without upgrades. All attributes are capped at a certain integer, however it is not constant for all attributes. As an item gets higher levels of upgrades, they are also more likely to have other high level upgrades, which is why the price starts to have a steeper slope at upgrade level 10+. Collected Data: I've collected a bunch of data on the prices of these items with various different combinations of these upgrades. Note that, there is never going to be every single combination of each upgrade, which is why I must implement some sort of prediction into this problem. As far as the economy & pricing goes, high tier, low drop chance items that cannot be outright bought from a shop are going to be priced based on pure demand/supply. However, middle tier items that have a certain cost to unlock/buy will usually settle for a bit over the cost to acquire. Some upgrades are binary (ranges from 0 to 1). As shown below, almost all points where tier_upgrades == 0 overlap with the bottom half of tier_upgrades == 1, which I think may cause problems for any type of regression. Attempts made so far: I've tried linear regression, K-Nearest Neighbor search, and attemted to make a custom algorithm (more on that below). Regression: It works, but with a high amount of error. Due to the nature of the data I'm working with, many of the features are either a 1 or 0 and/or overlap a lot. From my understanding, this creates a lot of noise in the model and degrades the accuracy of it. I'm also unsure of how well it would scale to multiple items, since each is valued independent of each other. Aside from that, in theory, regression should work because different attributes affect the value of an item linearly. from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error from sklearn import linear_model x = df.drop("id", axis=1).drop("adj_price", axis=1) y = df.drop("id", axis=1)["adj_price"] x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=69) regr = linear_model.LinearRegression() regr.fit(x, y) y_pred = regr.predict(x_test) rmse = np.sqrt(mean_squared_error(y_test, y_pred)) mae = np.mean(np.absolute(y_pred - y_test)) print(f"RMSE: {rmse} MAE: {mae}") K-Nearest Neighbors: This has also worked, but not all the time. 
Sometimes I run into issues where I don't have enough data for one item, which then forces it to choose a very different item, throwing off the value completely. In addition, there are some performance concerns here, as it is quite slow to generate an outcome. This example is written in JS, using the nearest-neighbor package. Note: The price is not included in the item object, however I add it when I collect data, as it is the price that gets paid for the item. The price is only used to find the value after the fact, it is not accounted for in the KNN search, which is why it is not in fields. const nn = require("nearest-neighbor"); var items = [ { item_id: "AMETHYST_SWORD", tier_upgrades: 1, damage_upgrades: 15, stat_upgrades: 5, price: 1800000 }, { item_id: "AMETHYST_SWORD", tier_upgrades: 0, damage_upgrades: 0, stat_upgrades: 0, price: 1000000 }, { item_id: "AMETHYST_SWORD", tier_upgrades: 0, damage_upgrades: 8, stat_upgrades: 2, price: 1400000 }, ]; var query = { item_id: "AMETHYST_SWORD", tier_upgrades: 1, damage_upgrades: 10, stat_upgrades: 3 }; var fields = [ { name: "item_id", measure: nn.comparisonMethods.word }, { name: "tier_upgrades", measure: nn.comparisonMethods.number }, { name: "damage_upgrades", measure: nn.comparisonMethods.number }, { name: "stat_upgrades", measure: nn.comparisonMethods.number }, ]; nn.findMostSimilar(query, items, fields, function(nearestNeighbor, probability) { console.log(query); console.log(nearestNeighbor); console.log(probability); }); Averaged distributions: Below is a box chart showing the distribution of prices for each level of damage_upgrades. This algorithm will find the average price where the attribute == item[attribute] for each attribute, and then find the mean. This is a relatively fast way to calculate the value, much faster than using a KNN. However, there is often too big of a spread in a given distribution, which increases the error. Another problem with this is if there is not an equal(ish) distribution of items in each set, it also increases the error. However, the main problem is that items with max upgrades except for a few will be placed in the same set, further disrupting the average, because there is a spread in the value of items. An example: low_value = { item_id: "AMETHYST_SWORD", tier_upgrades: 0, damage_upgrades: 1, stat_upgrades: 0, price: 1_100_000 } # May be placed in the same set as a high value item: high_value = { item_id: "AMETHYST_SWORD", tier_upgrades: 0, damage_upgrades: 15, stat_upgrades: 5, price: 1_700_000 } # This spread in each set is responsible for any inaccuracies in the prediction, because the algorithm does not take into account any other attributes/upgrades. Here is the Python code for this algorithm. df is a regular dataframe with the item_id, price, and the attributes. total = 0 features = { 'tier_upgrades': 1, 'damage_upgrades': 15, 'stat_upgrades': 5, } for f in features: a = df[df[f] == features[f]] avg_price = np.mean(a["adj_price"]) total += avg_price print("Estimated value:", total / len(features)) If anyone has any ideas, please, let me know! | For modeling right-skewed targets such as prices I'd try other distributions than Gaussian, like gamma or log-normal. The algo can be made less restrictive. GBDTs offer best trade-off in terms of accuracy for such tabular data, and should be able to capture some non-linearities. They even accept categorical variables as numerical vectors (label encoder). XGBoost has more APIs, but LightGBM is more accurate and faster. 
You may use a submodel to predict the binary feature ("probability of tier upgrades") - predictions from a classifier can improve the main model compared to using the raw binary feature (a smooth predictor with no missing values vs. a discrete one with missing values). You can improve model accuracy on small datasets by using cross-validation with a relatively large number of folds (20 or more), which keeps more data available for training. Try to stay within Python for all ML tasks; it is by far the most appropriate language for this (and yes, you can later easily host Python models in production). | 5 | 1 |
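To make the answer's suggestion concrete, here is a hedged sketch of a LightGBM regressor with a gamma objective evaluated by 20-fold cross-validation. The DataFrame is hypothetical stand-in data shaped like the item records in the question (tier_upgrades, damage_upgrades, stat_upgrades plus a price target), and the snippet assumes the lightgbm and scikit-learn packages are installed; a log-normal style alternative would simply train on np.log(price) with the default objective.

import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical stand-in for the collected auction data described in the question.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "tier_upgrades": rng.integers(0, 2, n),
    "damage_upgrades": rng.integers(0, 16, n),
    "stat_upgrades": rng.integers(0, 6, n),
})
df["price"] = (1_000_000
               + 300_000 * df["tier_upgrades"]
               + 40_000 * df["damage_upgrades"]
               + 60_000 * df["stat_upgrades"]
               + rng.normal(0, 50_000, n))        # noise / demand fluctuations

X, y = df.drop(columns="price"), df["price"]

# Gamma objective suits positive, right-skewed targets such as prices.
model = LGBMRegressor(objective="gamma", n_estimators=300, learning_rate=0.05)

# Many folds keep most of the data available for training on a small dataset.
cv = KFold(n_splits=20, shuffle=True, random_state=0)
mae = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
print(f"20-fold CV MAE: {mae.mean():,.0f} +/- {mae.std():,.0f}")

For multiple item types, the item_id column can be handled either by training one model per item or by passing it as a label-encoded categorical feature, as the answer notes.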
73,219,378 | 2022-8-3 | https://stackoverflow.com/questions/73219378/python-pptx-how-to-replace-keyword-across-multiple-runs | I have two PPTs (File1.pptx and File2.pptx) in which I have the below 2 lines XX NOV 2021, Time: xx:xx β xx:xx hrs (90mins) FY21/22 / FY22/23 I wish to replace like below a) NOV 2021 as NOV 2022. b) FY21/22 / FY22/23 as FY21/22 or FY22/23. But the problem is my replacement works in File1.pptx but it doesn't work in File2.pptx. When I printed the run text, I was able to see that they are represented differently in two slides. def replace_text(replacements:dict,shapes:list): for shape in shapes: for match, replacement in replacements.items(): if shape.has_text_frame: if (shape.text.find(match)) != -1: text_frame = shape.text_frame for paragraph in text_frame.paragraphs: for run in paragraph.runs: cur_text = run.text print(cur_text) print("---") new_text = cur_text.replace(str(match), str(replacement)) run.text = new_text In File1.pptx, the cur_text looks like below (for 1st keyword). So, my replace works (as it contains the keyword that I am looking for) But in File2.pptx, the cur_text looks like below (for 1st keyword). So, replace doesn't work (because the cur_text doesn't match with my search term) The same issue happens for 2nd keyword as well which is FY21/22 / FY22/23. The problem is the split keyword could be in previous or next run from current run (with no pattern). So, we should be able to compare a search term with previous run term (along with current term as well). Then a match can be found (like Nov 2021) and be replaced. This issue happens for only 10% of the search terms (and not for all of my search terms) but scary to live with this issue because if the % increases, we may have to do a lot of manual work. How do we avoid this and code correctly? How do we get/extract/find/identify the word that we are looking for across multiple runs (when they are indeed present) like CTRL+F and replace it with desired keyword? Any help please? UPDATE - Incorrect replacements based on matching Before replacement After replacement My replacement keywords can be found below replacements = { 'How are you?': "I'm fine!", 'FY21/22':'FY22/23', 'FY_2021':'FY21/22', 'FY20/21':'FY21/22', 'GB2021':'GB2022', 'GB2020':'GB2022', 'SEP-2022':'SEP-2023', 'SEP-2021':'SEP-2022', 'OCT-2021':'OCT-2022', 'OCT-2020':'OCT-2021', 'OCT 2021':'OCT 2022', 'NOV 2021':'NOV 2022', 'FY2122':'FY22/23', 'FY2021':'FY21/22', 'FY1920':'FY20/21', 'FY_2122':'FY22/23', 'FY21/22 / FY22/23':'FY21/22 or FY22/23', 'F21Y22':'FY22/23', 'your FY20 POS FCST':'your FY22/23 POS FCST', 'your FY21/22 POS FCST':'your FY22/23 POS FCST', 'Q2/FY22/23':'Q2-FY22/23', 'JAN-22':'JAN-23', 'solution for FY21/22':'solution for FY22/23', 'achievement in FY20/21':'achievement in FY21/22', 'FY19/20':'FY20/21'} | As one can find in python-pptx's documentation at https://python-pptx.readthedocs.io/en/latest/api/text.html a text frame is made up of paragraphs and a paragraph is made up of runs and specifies a font configuration that is used as the default for it's runs. runs specify part of the paragraph's text with a certain font configuration - possibly different from the default font configuration in the paragraph All three have a field called text: The text frame's text contains all the text from all it's paragraphs concatenated together with the appropriate line-feeds in between the paragraphs. 
The paragraphs's text contains all the texts from all of it's runs concatenated to a long string with a vertical tab character (\v) put wherever there was a so-called soft-break in any of the run's text (a soft break is like a line-feed but without terminating the paragraph). The run's text contains text that is to be rendered with a certain font configuration (font family, font size, italic/bold/underlined, color etc. pp). It is the lowest level of the font configuration for any text. Now if you specify a line of text in a text-frame in a PowerPoint presentation, this text-frame will very likely only have one paragraph and that paragraph will have just one run. Let's say that line says: Hi there! How are you? What is your name? and is all normal (neither italic nor bold) and in size 10. Now if you go ahead in PowerPoint and make the questions How are you? What is your name? stand out by making them italic, you will end up with 2 runs in our paragraph: Hello there! with the default font configuration from the paragraph How are you? What is you name? with the font configuration specifying the additional italic attribute. Now imagine, we want the How are you? stand out even more by making it bold and italic. We end up with 3 runs: Hello there! with the default font configuration from the paragraph. How are you? with the font configuration specifying the BOLD and ITALIC attribute What is your name? with the font configuration specifying the ITALIC attribute. One step further, making the are in How are you? bigger. We get 5 runs: Hello there! with the default font configuration from the paragraph. How with the font configuration specifying the BOLD and ITALIC attribute are with the font configuration specifying the BOLD and ITALIC attribute and font size 16 you? with the font configuration specifying the BOLD and ITALIC attribute What is your name? with the font configuration specifying the ITALIC attribute. So if you try to replace the How are you? with I'm fine! with the code from your question, you won't succeed, because the text How are you? is actually distributed across 3 runs. You can go one level higher and look at the paragraph's text, that still says Hello there! How are you? What is your name? since it is the concatenation of all its run's texts. But if you go ahead and do the replacement of the paragraph's text, it will erase all runs and create one new run with the text Hello there! I'm fine! What is your name? all the while deleting all the formatting that we put on the What is your name?. Therefore, changing text in a paragraph without affecting formatting of the other text in the paragraph is pretty involved. And even if the text you are looking for has all the same formatting, that is no guarantee for it to be within one run. Because if you - in our example above - make the are smaller again, the 5 runs will very likely remain, the runs 2 to 4 just having the same font configuration now. 
Here is the code to produce a test presentation with a text box containing the exact paragraph runs as given in my example above: from pptx import Presentation from pptx.chart.data import CategoryChartData from pptx.enum.chart import XL_CHART_TYPE,XL_LABEL_POSITION from pptx.util import Inches, Pt from pptx.dml.color import RGBColor from pptx.enum.dml import MSO_THEME_COLOR # create presentation with 1 slide ------ prs = Presentation() slide = prs.slides.add_slide(prs.slide_layouts[5]) textbox_shape = slide.shapes.add_textbox(Pt(200),Pt(200),Pt(30),Pt(240)) text_frame = textbox_shape.text_frame p = text_frame.paragraphs[0] font = p.font font.name = 'Arial' font.size = Pt(10) font.bold = False font.italic = False font.color.rgb = RGBColor(0,0,0) run = p.add_run() run.text = 'Hello there! ' run = p.add_run() run.text = 'How ' font = run.font font.italic = True font.bold = True run = p.add_run() run.text = 'are' font = run.font font.italic = True font.bold = True font.size = Pt(16) run = p.add_run() run.text = ' you?' font = run.font font.italic = True font.bold = True run = p.add_run() run.text = ' What is your name?' run.font.italic = True prs.save('text-01.pptx') And this is what it looks like, if you open it in PowerPoint: Now if you install the python code from my GitHub repository at https://github.com/fschaeck/python-pptx-text-replacer by running the command python -m pip install python-pptx-text-replacer and after successful installation run the command python-pptx-text-replacer -m "How are you?" -r "I'm fine!" -i text-01.pptx -o text-02.pptx the resulting presentation text-02.pptx will look like this: As you can see, it mapped the replacement string exactly onto the existing font-configurations, thus if your match and it's replacement have the same length, the replacement string will retain the exact format of the match. But - as an important side-note - if the text-frame has auto-size or fit-frame switched on, even all that work won't save you from screwing up the formatting, if the text after the replacement needs more or less space! If you got issues with this code, please use the possibly improved version from GitHub first. If your problem remains, use the GitHub issue tracker to report it. The discussion of this question and answer is already getting out of hand. ;-) | 4 | 4 |
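For completeness, here is a hedged pure python-pptx sketch of the simplest possible cross-run replacement: it joins each paragraph's run texts, performs the replacement on the combined string, writes the result back into the first run and empties the remaining runs. This deliberately sacrifices per-run formatting inside any paragraph that actually matches (the whole paragraph takes on the first run's font), which is exactly the trade-off the python-pptx-text-replacer tool above is designed to avoid; the function and file names are illustrative.

from pptx import Presentation

def replace_across_runs(pptx_in, pptx_out, replacements):
    # replacements: dict mapping search strings to their substitutes.
    prs = Presentation(pptx_in)
    for slide in prs.slides:
        for shape in slide.shapes:
            if not shape.has_text_frame:
                continue
            for paragraph in shape.text_frame.paragraphs:
                runs = paragraph.runs
                if not runs:
                    continue
                combined = "".join(run.text for run in runs)
                replaced = combined
                for match, repl in replacements.items():
                    replaced = replaced.replace(match, repl)
                if replaced == combined:
                    continue  # no match in this paragraph, keep all runs untouched
                # Collapse the paragraph into its first run; formatting of the
                # later runs in this paragraph is lost.
                runs[0].text = replaced
                for run in runs[1:]:
                    run.text = ""
    prs.save(pptx_out)

if __name__ == '__main__':
    replace_across_runs("File2.pptx", "File2_updated.pptx",
                        {"FY21/22 / FY22/23": "FY21/22 or FY22/23",
                         "NOV 2021": "NOV 2022"})

One detail worth noting whichever tool is used: overlapping search strings should be applied longest first (for example 'FY21/22 / FY22/23' before 'FY21/22'), otherwise the shorter pattern rewrites the text before the longer one can ever match, which is one plausible cause of the incorrect replacements shown in the question's update.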
73,269,424 | 2022-8-7 | https://stackoverflow.com/questions/73269424/interpreting-the-effect-of-lk-norm-with-different-orders-on-training-machine-lea | Both the RMSE and the MAE are ways to measure the distance between two vectors: the vector of predictions and the vector of target values. Various distance measures, or norms, are possible. Generally speaking, calculating the size or length of a vector is often required either directly or as part of a broader vector or vector-matrix operation. Even though the RMSE is generally the preferred performance measure for regression tasks, in some contexts you may prefer to use another function. For instance, if there are many outliers instances in the dataset, in this case, we may consider using mean absolute error (MAE). More formally, the higher the norm index, the more it focuses on large values and neglect small ones. This is why RMSE is more sensitive to outliers than MAE.) Source: hands on machine learning with scikit learn and tensorflow. Therefore, ideally, in any dataset, if we have a great number of outliers, the loss function, or the norm of the vector "representing the absolute difference between predictions and true labels; similar to y_diff in the code below" should grow if we increase the norm... In other words, RMSE should be greater than MAE. --> correct me if mistaken <-- Given this definition, I have generated a random dataset and added many outliers to it as seen in the code below. I calculated the lk_norm for the residuals, or y_diff for many k values, ranging from 1 to 5. However, I found that the lk_norm decreases as the value of k increases; however, I was expecting that RMSE, aka norm = 2, to be greater than MAE, aka norm = 1. How is LK norm decreasing as we increase K, aka the order, which is contrary to the definition above? Code: import numpy as np import plotly.offline as pyo import plotly.graph_objs as go from plotly import tools num_points = 1000 num_outliers = 50 x = np.linspace(0, 10, num_points) # places where to add outliers: outlier_locs = np.random.choice(len(x), size=num_outliers, replace=False) outlier_vals = np.random.normal(loc=1, scale=5, size=num_outliers) y_true = 2 * x y_pred = 2 * x + np.random.normal(size=num_points) y_pred[outlier_locs] += outlier_vals y_diff = y_true - y_pred losses_given_lk = [] norms = np.linspace(1, 5, 50) for k in norms: losses_given_lk.append(np.linalg.norm(y_diff, k)) trace_1 = go.Scatter(x=norms, y=losses_given_lk, mode="markers+lines", name="lk_norm") trace_2 = go.Scatter(x=x, y=y_true, mode="lines", name="y_true") trace_3 = go.Scatter(x=x, y=y_pred, mode="markers", name="y_true + noise") fig = tools.make_subplots(rows=1, cols=3, subplot_titles=("lk_norms", "y_true", "y_true + noise")) fig.append_trace(trace_1, 1, 1) fig.append_trace(trace_2, 1, 2) fig.append_trace(trace_3, 1, 3) pyo.plot(fig, filename="lk_norms.html") Output: Finally, in which cases one uses L3 or L4 norm, etc...? 
| Another python implementation for the np.linalg is: def my_norm(array, k): return np.sum(np.abs(array) ** k)**(1/k) To test our function, run the following: array = np.random.randn(10) print(np.linalg.norm(array, 1), np.linalg.norm(array, 2), np.linalg.norm(array, 3), np.linalg.norm(array, 10)) # And print(my_norm(array, 1), my_norm(array, 2), my_norm(array, 3), my_norm(array, 10)) output: (9.561258110585216, 3.4545982749318846, 2.5946495606046547, 2.027258231324604) (9.561258110585216, 3.454598274931884, 2.5946495606046547, 2.027258231324604) Therefore, we can see that the numbers are decreasing, similar to our output in the figure posted in the question above. However, the correct implementation of RMSE in python is: np.mean(np.abs(array) ** k)**(1/k) where k is equal to 2. As a result, I have replaced the sum by the mean. Therefore, if I add the following function: def my_norm_v2(array, k): return np.mean(np.abs(array) ** k)**(1/k) And run the following: print(my_norm_v2(array, 1), my_norm_v2(array, 2), my_norm_v2(array, 3), my_norm_v2(array, 10)) Output: (0.9561258110585216, 1.092439894967332, 1.2043296427640868, 1.610308452218342) Hence, the numbers are increasing. In the code below I rerun the same code posted in the question above with a modified implementation and I got the following: import numpy as np import plotly.offline as pyo import plotly.graph_objs as go from plotly import tools num_points = 1000 num_outliers = 50 x = np.linspace(0, 10, num_points) # places where to add outliers: outlier_locs = np.random.choice(len(x), size=num_outliers, replace=False) outlier_vals = np.random.normal(loc=1, scale=5, size=num_outliers) y_true = 2 * x y_pred = 2 * x + np.random.normal(size=num_points) y_pred[outlier_locs] += outlier_vals y_diff = y_true - y_pred losses_given_lk = [] losses = [] norms = np.linspace(1, 5, 50) for k in norms: losses_given_lk.append(np.linalg.norm(y_diff, k)) losses.append(my_norm(y_diff, k)) trace_1 = go.Scatter(x=norms, y=losses_given_lk, mode="markers+lines", name="lk_norm") trace_2 = go.Scatter(x=norms, y=losses, mode="markers+lines", name="my_lk_norm") trace_3 = go.Scatter(x=x, y=y_true, mode="lines", name="y_true") trace_4 = go.Scatter(x=x, y=y_pred, mode="markers", name="y_true + noise") fig = tools.make_subplots(rows=1, cols=4, subplot_titles=("lk_norms", "my_lk_norms", "y_true", "y_true + noise")) fig.append_trace(trace_1, 1, 1) fig.append_trace(trace_2, 1, 2) fig.append_trace(trace_3, 1, 3) fig.append_trace(trace_4, 1, 4) pyo.plot(fig, filename="lk_norms.html") Output: And that explains why the loss increase as we increase k. | 7 | 2 |
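To tie this back to the question's original claim about outliers, the short self-contained sketch below (assuming only NumPy) compares the mean-based L1 (MAE) and L2 (RMSE) of the same residuals with and without injected outliers.

import numpy as np

def mean_lk(residuals, k):
    # Mean-based Lk: reduces to MAE for k=1 and RMSE for k=2.
    return np.mean(np.abs(residuals) ** k) ** (1 / k)

rng = np.random.default_rng(42)
clean = rng.normal(0, 1, 1000)               # well-behaved residuals
with_outliers = clean.copy()
idx = rng.choice(clean.size, 50, replace=False)
with_outliers[idx] += rng.normal(1, 5, 50)   # inject 50 large errors

for name, res in [("clean", clean), ("with outliers", with_outliers)]:
    mae, rmse = mean_lk(res, 1), mean_lk(res, 2)
    print(f"{name:>14}:  MAE={mae:.3f}  RMSE={rmse:.3f}  RMSE/MAE={rmse/mae:.2f}")

With the mean inside the root, RMSE is always at least as large as MAE (by the power-mean inequality), and the RMSE/MAE ratio grows noticeably once outliers are injected, which is exactly the sensitivity the quoted book passage describes; higher orders such as L3 or L4 push the emphasis on large errors even further and are rarely used unless that emphasis is explicitly wanted.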