question_id: int64, 59.5M to 79.4M
creation_date: stringdate, 2020-01-01 00:00:00 to 2025-02-10 00:00:00
link: stringlengths, 60 to 163
question: stringlengths, 53 to 28.9k
accepted_answer: stringlengths, 26 to 29.3k
question_vote: int64, 1 to 410
answer_vote: int64, -9 to 482
70,519,432
2021-12-29
https://stackoverflow.com/questions/70519432/python-virtual-env-succesfully-activated-via-wsl-but-not-working
on my windows system I've succesfully installed a virtual environment (python version is 3.9) using windows command prompt python -m venv C:\my_path\my_venv Always using windows command prompt, I'm able to activate the created venv via C:\my_path\my_venv\Scripts\activate.bat I am sure the venv is correctly activated since: on the windows terminal, I see the command line is preceded by (my_venv) if I activate python from the terminal (python) and run the following commands: import sys ; sys.path I can see, in the list of paths, the desired path [..., 'C:\\my_path\\my_venv\\lib\\site-packages\\win32\\lib', ...] if I do stuff in the activated venv (like installing packages) everything works and is done inside the venv To sum up, everything is fine so far. I also have WSL2 (Ubuntu) and I'd like to activate the same venv using the Ubuntu terminal. If, from the Ubuntu terminal, I activate the venv source /mnt/c/my_path/my_venv/Scripts/activate it seems to work since the command line is preceeded by (my_venv), but when I run python (python3 command) and then run import sys ; sys.path I see that the system is targeting the base Ubuntu python installation (version 3.8) and not the venv installation: ['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] The venv is not really activated. Any suggestions to solve the issue? If it can help, I add a couple of information. If I try to create a venv directly using the Ubuntu terminal python3 -m venv /mnt/c/my_path/my_venv_unix and activate it via the Ubuntu terminal (source /mnt/c/my_path/my_venv_unix/bin/activate) everything works fine, but that's not what I want: I'd like to use WSL to activate a virtual environment created using windows command prompt, since on my machine I've a lot of venvs created with windows and I don't want to replicate them. Following the script C:\my_path\my_venv\Scripts\activate (/mnt/c/my_path/my_venv/Scripts/activate using wsl folders naming) (I had to change the EOL from windows to Ubuntu, otherwise the command source /mnt/c/my_path/my_venv/Scripts/activate would not have worked) # This file must be used with "source bin/activate" *from bash* # you cannot run it directly deactivate () { # reset old environment variables if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then PATH="${_OLD_VIRTUAL_PATH:-}" export PATH unset _OLD_VIRTUAL_PATH fi if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" export PYTHONHOME unset _OLD_VIRTUAL_PYTHONHOME fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r 2> /dev/null fi if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then PS1="${_OLD_VIRTUAL_PS1:-}" export PS1 unset _OLD_VIRTUAL_PS1 fi unset VIRTUAL_ENV if [ ! "${1:-}" = "nondestructive" ] ; then # Self destruct! 
unset -f deactivate fi } # unset irrelevant variables deactivate nondestructive VIRTUAL_ENV="C:\my_path\my_venv" export VIRTUAL_ENV _OLD_VIRTUAL_PATH="$PATH" PATH="$VIRTUAL_ENV/Scripts:$PATH" export PATH # unset PYTHONHOME if set # this will fail if PYTHONHOME is set to the empty string (which is bad anyway) # could use `if (set -u; : $PYTHONHOME) ;` in bash if [ -n "${PYTHONHOME:-}" ] ; then _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" unset PYTHONHOME fi if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then _OLD_VIRTUAL_PS1="${PS1:-}" PS1="(.venv_ml_dl_gen_purpose) ${PS1:-}" export PS1 fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r 2> /dev/null fi Finally, here also the script /mnt/c/my_path/my_venv_unix/bin/activate # This file must be used with "source bin/activate" *from bash* # you cannot run it directly deactivate () { # reset old environment variables if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then PATH="${_OLD_VIRTUAL_PATH:-}" export PATH unset _OLD_VIRTUAL_PATH fi if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" export PYTHONHOME unset _OLD_VIRTUAL_PYTHONHOME fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r fi if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then PS1="${_OLD_VIRTUAL_PS1:-}" export PS1 unset _OLD_VIRTUAL_PS1 fi unset VIRTUAL_ENV if [ ! "${1:-}" = "nondestructive" ] ; then # Self destruct! unset -f deactivate fi } # unset irrelevant variables deactivate nondestructive VIRTUAL_ENV="/mnt/c/my_path/my_venv_unix" export VIRTUAL_ENV _OLD_VIRTUAL_PATH="$PATH" PATH="$VIRTUAL_ENV/bin:$PATH" export PATH # unset PYTHONHOME if set # this will fail if PYTHONHOME is set to the empty string (which is bad anyway) # could use `if (set -u; : $PYTHONHOME) ;` in bash if [ -n "${PYTHONHOME:-}" ] ; then _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" unset PYTHONHOME fi if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then _OLD_VIRTUAL_PS1="${PS1:-}" if [ "x(venv_unix) " != x ] ; then PS1="(venv_unix) ${PS1:-}" else if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then # special case for Aspen magic directories # see https://aspen.io/ PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1" else PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1" fi fi export PS1 fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r fi Thanks to anyone who wants to answer!
Short answer: It's highly recommended to use the Linux version of Python and tools when in WSL. You'll find a number of posts here on Stack Overflow related to this, but your question is different enough (regarding venv) that it deserves its own answer. More detail: Also worth reading this question. In that case, the question was about a dual-boot system and whether or not the same venv could be shared between Windows and Linux. I know it seems like things might be better on WSL, where you can run Windows .exe executables under Linux, but it really isn't for this particular case. You've solved the first problem, the difference in line endings, but the next problem you are facing is the difference in directory format. After sourcing activate, do an echo $PATH and you'll see that the Windows-style C:\path\to\the\venv path has been prepended to your PATH. For WSL, that would need to be /mnt/c/path/to/the/venv. That's not going to work. Once you fix that (again, by editing activate), you are still trying to run python3. The venv executable is actually python.exe, and under WSL you do have to specify the extension. So if you: Change the line endings from CRLF to LF Change the path style in activate from Windows to WSL2 format Use the python.exe executable Then you can at least launch the Windows Python version, and your import sys; sys.path will show the Windows paths. That said, you are almost certainly going to run into additional problems that don't make it worth doing this. For instance, if a script assumes python or python3, or even pip, then those calls are going to fail because it needs to call, e.g., pip.exe. Line endings and native code will also be a problem. For these reasons (and likely more), it's highly recommended to use the Linux version of Python when in WSL.
5
2
70,521,500
2021-12-29
https://stackoverflow.com/questions/70521500/tablescraping-from-a-website-with-id-using-beautifulsoup
I'm having a problem with scraping the table of this website; I should be getting the heading but instead am getting AttributeError: 'NoneType' object has no attribute 'tbody'. I'm a bit new to web scraping, so if you could help me out that would be great. import requests from bs4 import BeautifulSoup URL = "https://www.collincad.org/propertysearch?situs_street=Willowgate&situs_street_suffix" \ "=&isd%5B%5D=any&city%5B%5D=any&prop_type%5B%5D=R&prop_type%5B%5D=P&prop_type%5B%5D=MH&active%5B%5D=1&year=2021&sort=G&page_number=1" s = requests.Session() page = s.get(URL) soup = BeautifulSoup(page.content, "lxml") table = soup.find("table", id="propertysearchresults") table_data = table.tbody.find_all("tr") headings = [] for td in table_data[0].find_all("td"): headings.append(td.b.text.replace('\n', ' ').strip()) print(headings)
What happens? Note: Always look at your soup first - therein lies the truth. The content can always be slightly to extremely different from the view in the dev tools. Access Revoked Your IP address has been blocked. We detected irregular, bot-like usage of our Property Search originating from your IP address. This block was instated to reduce stress on our webserver, to ensure that we're providing optimal site performance to the taxpayers of Collin County. We have not blocked your ability to download our data exports, which you can still use to acquire bulk property data. How to fix? Add a user-agent to your requets so that it looks like your requesting with a "browser". headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'} page = s.get(URL,headers=headers) Or as alternativ just download the results. Example (scraping table) import requests from bs4 import BeautifulSoup import pandas as pd headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'} URL = "https://www.collincad.org/propertysearch?situs_street=Willowgate&situs_street_suffix" \ "=&isd%5B%5D=any&city%5B%5D=any&prop_type%5B%5D=R&prop_type%5B%5D=P&prop_type%5B%5D=MH&active%5B%5D=1&year=2021&sort=G&page_number=1" s = requests.Session() page = s.get(URL,headers=headers) soup = BeautifulSoup(page.content, "lxml") data = [] for row in soup.select('#propertysearchresults tr'): data.append([c.get_text(' ',strip=True) for c in row.select('td')]) pd.DataFrame(data[1:], columns=data[0]) Output Property ID ↓ Geographic ID ↓ Owner Name Property Address Legal Description 2021 Market Value 1 2709013 R-10644-00H-0010-1 PARTHASARATHY SURESH & ANITHA HARIKRISHNAN 12209 Willowgate Dr Frisco, TX\xa0 75035 Ridgeview At Panther Creek Phase 2, Blk H, Lot 1 $513,019 2 2709018 R-10644-00H-0020-1 JOSHI PRASHANT & SHWETA PANT 12235 Willowgate Dr Frisco, TX\xa0 75035 Ridgeview At Panther Creek Phase 2, Blk H, Lot 2 $546,254 3 2709019 R-10644-00H-0030-1 THALLAPUREDDY RAVENDRA & UMA MAHESWARI VEMULA 12261 Willowgate Dr Frisco, TX\xa0 75035 Ridgeview At Panther Creek Phase 2, Blk H, Lot 3 $550,768 4 2709020 R-10644-00H-0040-1 KULKARNI BHEEMSEN T & GOURI R 12287 Willowgate Dr Frisco, TX\xa0 75035 Ridgeview At Panther Creek Phase 2, Blk H, Lot 4 $509,593 5 2709021 R-10644-00H-0050-1 BALAM GANESH & SHANTHIREKHA LOKULA 12313 Willowgate Dr Frisco, TX\xa0 75035 Ridgeview At Panther Creek Phase 2, Blk H, Lot 5 $553,949 ...
5
2
70,518,288
2021-12-29
https://stackoverflow.com/questions/70518288/pytorch-training-loop-within-a-sklearn-pipeline
What I am playing around with right now is to work with PyTorch within a pipeline, where all of the preprocessing will be handled. I am able to make it work. However, the results I am getting are a bit off. The loss function seems to be not decreasing and gets stuck (presumably in local optima?) as the training loop progresses. I follow the standard PyTorch training loop and wrap it inside the fit method as this is what sklearn wants: import torch from sklearn.base import BaseEstimator, TransformerMixin import torch.nn.functional as F from IPython.core.debugger import set_trace # + import pandas as pd import seaborn as sns import numpy as np from tqdm import tqdm import random # - df = sns.load_dataset("tips") df.head() # + class LinearRegressionModel(torch.nn.Module, BaseEstimator, TransformerMixin): def __init__(self, loss_func = torch.nn.MSELoss()): super(LinearRegressionModel, self).__init__() self.linear = torch.nn.Linear(3, 1) # One in and one out self.loss_func = loss_func self.optimizer = torch.optim.SGD(self.parameters(), lr = 0.01) def forward(self, x): y_pred = F.relu(self.linear(x)) return y_pred def fit(self, X, y): # set_trace() X = torch.from_numpy(X.astype(np.float32)) y = torch.from_numpy(y.values.astype(np.float32)) for epoch in tqdm(range(0, 12)): pred_y = self.forward(X) # Compute and print loss loss = self.loss_func(pred_y, X) # Zero gradients, perform a backward pass, # and update the weights. self.optimizer.zero_grad() loss.backward() self.optimizer.step() print('epoch {}, loss {}'.format(epoch, loss.item())) # + from sklearn.pipeline import Pipeline from sklego.preprocessing import PatsyTransformer # - my_model = LinearRegressionModel() pipe = Pipeline([ ("patsy", PatsyTransformer("tip + size")), ("model", my_model) ]) pipe.fit(df, df['total_bill']) It is not only due to the model being to simple. If I use sklearn linear regression estimated via stochastic gradient descent (SGDRegressor) the results seem nice. Therefore, I am concluding that problem is within my PyTorch class # + from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error pipe2 = Pipeline([ ("patsy", PatsyTransformer("tip + C(size) + C(time)")), ("model", LinearRegression()) ]) pipe2.fit(df, df['total_bill']) # - mean_squared_error(df['total_bill'], pipe2.predict(df))
The problem in this implementation is in the fit method: the loss is computed between the prediction and the design matrix X # Compute and print loss loss = self.loss_func(pred_y, X) when it should be computed between the prediction and the real target y: loss = self.loss_func(pred_y, y)
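For context, a minimal sketch of the corrected fit method (same class and optimizer as in the question; only the loss line and a target reshape change, and returning self keeps it sklearn-friendly):

    def fit(self, X, y):
        X = torch.from_numpy(X.astype(np.float32))
        # reshape to (n_samples, 1) so the target matches the model output shape and
        # MSELoss does not broadcast with a warning
        y = torch.from_numpy(y.values.astype(np.float32)).reshape(-1, 1)
        for epoch in range(12):
            pred_y = self.forward(X)
            loss = self.loss_func(pred_y, y)  # compare prediction with the target y, not with X
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            print('epoch {}, loss {}'.format(epoch, loss.item()))
        return self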
5
3
70,515,194
2021-12-29
https://stackoverflow.com/questions/70515194/syntaxerror-future-feature-annotations-is-not-defined
I am trying to run code with sh run.sh and it showed me the error File "/anaconda3/envs/_galaxy_/lib/python3.6/site-packages/filelock/__init__.py", line 8 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined I saw some solutions indicating that I need to update my Python version, but I am on Python version 3.9.7: (py39) KedeMacBook-Pro:~ ke$ python --version Python 3.9.7 However, the error shows a Python version of 3.6, so I am not sure what went wrong. Why is it not using the Python environment that I have? Please help, thank you.
Based on the error, it looks like your code is using Python 3.6 and not Python 3.9; from __future__ import annotations is only available starting from Python 3.7. Check run.sh to make sure it references the right Python interpreter. I'd also recommend creating a virtual env with the Python version you require and running your script inside it.
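As a quick diagnostic (not part of the original run.sh), printing the interpreter from inside the script that fails shows exactly which Python is being used:

    import sys

    print("interpreter:", sys.executable)    # e.g. /anaconda3/envs/_galaxy_/bin/python
    print("version:", sys.version_info[:3])  # e.g. (3, 6, 13) versus the expected (3, 9, 7)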
11
21
70,514,336
2021-12-29
https://stackoverflow.com/questions/70514336/solidity-typeerror-object-of-type-set-is-not-json-serializable
I ran the code in VSCode and got a TypeError: Object of type set is not JSON serializable. I just start to learn to code, really don't get it, and googled it, also didn't know what does JSON serializable means. from solcx import compile_standard import json # get the contract content with open("./SimpleStorage.sol", "r") as file: simple_storage_file = file.read() # compile the contract compiled_sol = compile_standard( { "language": "Solidity", "sources": {"SimpleStorage.sol": {"content": simple_storage_file}}, "settings": { "outputSelection": { "*": { "*": {"abi", "metadata", "evm.bytecode", "evm.bytecode.sourceMap"} } } }, }, solc_version="0.6.0", ) # creat json file dump the comiled code in it to make it more readable. with open("compiled_code.json", "w") as file: json.dump(compiled_sol, file) print(compiled_sol) The full error information is below: (env) (base) liwei@liweideMacBook-Pro practice % python3 deploy.py Traceback (most recent call last): File "deploy.py", line 10, in <module> compiled_sol = compile_standard( File "/Users/liwei/Desktop/demos/practice/env/lib/python3.8/site-packages/solcx/main.py", line 375, in compile_standard stdin=json.dumps(input_data), File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type set is not JSON serializable
Instead of this: {"abi", "metadata", "evm.bytecode", "evm.bytecode.sourceMap"} you should use this: ["abi", "metadata", "evm.bytecode", "evm.bytecode.sourceMap"] Sets in Python aren't JSON serializable.
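A minimal standalone illustration of the difference (not taken from the question's code): json.dumps accepts a list but raises for a set.

    import json

    print(json.dumps(["abi", "metadata"]))   # a list serializes fine: ["abi", "metadata"]
    try:
        json.dumps({"abi", "metadata"})      # a set literal
    except TypeError as err:
        print(err)                           # Object of type set is not JSON serializable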
4
6
70,447,335
2021-12-22
https://stackoverflow.com/questions/70447335/what-is-the-use-case-for-djangos-on-commit
Reading this documentation https://docs.djangoproject.com/en/4.0/topics/db/transactions/#django.db.transaction.on_commit This is the use case for on_commit with transaction.atomic(): # Outer atomic, start a new transaction transaction.on_commit(foo) # Do things... with transaction.atomic(): # Inner atomic block, create a savepoint transaction.on_commit(bar) # Do more things... # foo() and then bar() will be called when leaving the outermost block But why not just write the code like normal without on_commit hooks? Like this: with transaction.atomic(): # Outer atomic, start a new transaction # Do things... with transaction.atomic(): # Inner atomic block, create a savepoint # Do more things... foo() bar() # foo() and then bar() will be called when leaving the outermost block It's easier to read since it doesn't require more knowledge of the Django APIs and the statements are put in the order of when they are executed. It's easier to test since you don't have to use any special test classes for Django. So what is the use-case for the on_commit hook?
Django documentation: Django provides the on_commit() function to register callback functions that should be executed after a transaction is successfully committed That is the main purpose. A transaction is a unit of work that you want to treat atomically: it either happens completely or not at all. The same applies to your code. If something goes wrong during the DB operations, you might not want to do some of the follow-up work. Consider this business logic flow: A user sends their registration data to our endpoint; we validate it, etc. We save the new user to our DB. We send them a "hello" email with a link for confirming their account. If something goes wrong during step 2, we shouldn't go to step 3. One might think: if something fails I'll get an exception anyway, so the later code won't run - why do we still need on_commit? Sometimes you take actions in your code based on the assumption that the transaction will succeed, before potentially dangerous DB operations. For example, you might first want to check whether you can send an email to your user at all, because you know your third-party email provider often returns 500s; in that case you would raise a 500 for the user and ask them to register later (a very bad idea, btw, but it's just a synthetic example). When your function (e.g. with the @atomic decorator) contains a lot of DB operations, you surely don't want to keep track of all the intermediate variable states in order to use them after all the DB-related code. Like this: Validate the user's order. Check in the DB whether it can be completed. If it can be done, send a request to a third-party CRM with the order's details. If it can't, create a support ticket in another third-party system. Save the user's order to the DB and update the user's model. Send a messenger notification to the employee responsible for the order. Save to the DB the information that the employee notification was sent successfully. You can imagine what a mess we would have in this situation without on_commit, wrapped in one really big try-except.
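As a hedged sketch of the registration example above (the user-creation details and send_confirmation_email are placeholders, not code from the question):

    from django.db import transaction
    from django.contrib.auth import get_user_model

    def send_confirmation_email(user):
        ...  # placeholder for the real call to the third-party email service

    def register_user(email, password):
        with transaction.atomic():
            user = get_user_model().objects.create_user(
                username=email, email=email, password=password
            )
            # Registered now, executed only after the outermost atomic block commits;
            # if the transaction rolls back, the callback is simply discarded.
            transaction.on_commit(lambda: send_confirmation_email(user))
        return user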
13
9
70,506,366
2021-12-28
https://stackoverflow.com/questions/70506366/failed-to-start-the-kernel-jupyter-in-vs-code
I am trying to use a Jupyter notebook for some Pandas in VS Code. I set up a virtual environment venv where I installed Pandas and jupyter. I always did it like this and it worked fine. But suddenly it does not work anymore.
Could you try to reinstall the pyzmq module? pip uninstall pyzmq pip install pyzmq==19.0.2 The version number may be different depending on the jupyter-client version.
31
21
70,507,099
2021-12-28
https://stackoverflow.com/questions/70507099/how-to-check-if-xgboost-uses-the-gpu
I'm writing a pytest file to check if my machine learning libraries use the GPU. For Tensorflow I can check this with tf.config.list_physical_devices(). For XGBoost I've so far checked it by looking at GPU utilization (nvidia-smi) while running my software. But how can I check this in a simple test? Something similar to the test I have for Tensorflow would do. import pytest import tensorflow as tf import xgboost # Marking all tests to be GPU dependent pytestmark = pytest.mark.gpu def test_tf_finds_gpu(): """Check if Tensorflow finds the GPU.""" assert tf.config.list_physical_devices("GPU") def test_xgb_finds_gpu(): """Check if XGBoost finds the GPU.""" ... # What can I write here?
Note that tree_method="gpu_hist" is deprecated and will stop / has stopped working since xgboost==2.0.0. Histogram type and device are currently split into two parameters: tree_method (an unfortunate overwriting of the existing parameter, but with a different set of permitted levels) and a new one called device: import numpy as np import xgboost as xgb xgb_model = xgb.XGBRegressor( # tree_method="gpu_hist" # deprecated tree_method="hist", device="cuda" ) X = np.random.rand(50, 2) y = np.random.randint(2, size=50) xgb_model.fit(X, y) xgb_model Output when GPU access works correctly (no warnings): XGBRegressor(base_score=None, booster=None, callbacks=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, device='cuda', early_stopping_rounds=None, enable_categorical=False, eval_metric=None, feature_types=None, gamma=None, grow_policy=None, importance_type=None, interaction_constraints=None, learning_rate=None, max_bin=None, max_cat_threshold=None, max_cat_to_onehot=None, max_delta_step=None, max_depth=None, max_leaves=None, min_child_weight=None, missing=nan, monotone_constraints=None, multi_strategy=None, n_estimators=None, n_jobs=None, num_parallel_tree=None, random_state=None, ...) vs. No GPU access - a warning that the device argument has not been used: [11:43:35] WARNING: ../src/learner.cc:767: Parameters: { "device" } are not used.
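One way to turn the warning-based check above into the pytest test the question asks for is to capture output while fitting a tiny model on device="cuda" and assert that XGBoost did not complain about the device parameter being ignored. This is a heuristic built on the behaviour shown above, not an official GPU-detection API:

    import numpy as np
    import pytest
    import xgboost as xgb

    pytestmark = pytest.mark.gpu

    def test_xgb_finds_gpu(capfd):
        """Heuristic: with working GPU support, device='cuda' emits no 'not used' warning."""
        X = np.random.rand(50, 2)
        y = np.random.randint(2, size=50)
        xgb.XGBRegressor(tree_method="hist", device="cuda", n_estimators=2).fit(X, y)
        captured = capfd.readouterr()
        assert "are not used" not in captured.err + captured.out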
6
9
70,477,787
2021-12-25
https://stackoverflow.com/questions/70477787/how-to-get-current-path-in-fastapi-with-domain
I have a simple route, shown below, written in FastAPI: from fastapi import FastAPI app = FastAPI() @app.get("/foo/bar/{rand_int}/foo-bar/") async def main(rand_int: int): return {"path": f"https://some-domain.com/foo/bar/{rand_int}/foo-bar/?somethig=foo"} How can I get the current URL programmatically, with the domain (some-domain.com), path (/foo/bar/{rand_int}/foo-bar/) and query parameters (?somethig=foo)?
We can use the Request.url API (see the Starlette docs) to get the various URL properties. To get the absolute URL as a string, use str(request.url) (or the private request.url._url attribute), as below: from fastapi import FastAPI, Request app = FastAPI() @app.get("/foo/bar/{rand_int}/foo-bar/") async def main(rand_int: int, request: Request): return {"raw_url": str(request.url)}
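If only individual pieces are needed, the same Request object exposes them separately (the attribute names below come from Starlette's URL and Request classes):

    from fastapi import FastAPI, Request

    app = FastAPI()

    @app.get("/foo/bar/{rand_int}/foo-bar/")
    async def main(rand_int: int, request: Request):
        return {
            "domain": request.url.hostname,       # e.g. "some-domain.com"
            "path": request.url.path,             # e.g. "/foo/bar/123/foo-bar/"
            "query": dict(request.query_params),  # e.g. {"somethig": "foo"}
        }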
26
41
70,489,367
2021-12-26
https://stackoverflow.com/questions/70489367/how-to-generate-a-random-convex-piecewise-linear-function
I want to generate a toy example to illustrate a convex piecewise-linear function in python, but I couldn't figure out the best way to do this. What I want to do is to specify the number of lines and generate the function randomly. A convex piecewise-linear function can be defined as the pointwise maximum of a set of affine functions. For instance, suppose I want four linear pieces. Since there are four lines, I need to generate four increasing random integers to determine the intervals on the x-axis. import random import numpy as np random.seed(1) x_points = np.array(random.sample(range(1, 20), 4)) x_points.sort() x_points = np.append(0, x_points) x_points [0 3 4 5 9] I can now use the first two points and create a random linear function, but I don't know how I should continue from there to preserve the convexity. Note that a function is called convex if the line segment between any two points on the graph of the function does not lie below the graph between the two points.
The slope increases monotonically, by a random value from the range [0, 1) at each breakpoint, starting from 0. The first y value is also zero; see the comments. import numpy as np np.random.seed(0) x_points = np.random.randint(low=1, high=20, size=4) x_points.sort() x_points = np.append(0, x_points) # the first x point is 0 slopes = np.add.accumulate(np.random.random(size=3)) slopes = np.append(0,slopes) # the first slope is 0 y_incr = np.ediff1d(x_points)*slopes y_points = np.add.accumulate(y_incr) y_points = np.append(0,y_points) # the first y value is 0 A possible output looks like this: print(x_points) print(y_points) # [ 0 1 4 13 16] # [ 0. 0. 2.57383685 17.92061306 24.90689622] To plot the figure: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(x_points,y_points, '-o', label="convex piecewise-linear function") ax.legend() fig.patch.set_facecolor('white') plt.show()
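A quick sanity check of the construction, using the x_points and y_points arrays built above (and assuming the sorted x_points are strictly increasing, i.e. no duplicate breakpoints): a piecewise-linear function is convex exactly when its segment slopes are non-decreasing.

    import numpy as np

    seg_slopes = np.diff(y_points) / np.diff(x_points)  # slope of each linear piece
    assert np.all(np.diff(seg_slopes) >= 0), "slopes must be non-decreasing for convexity"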
6
2
70,468,354
2021-12-23
https://stackoverflow.com/questions/70468354/fastapi-sqlalchemy-connection-was-closed-in-the-middle-of-operation
I have an async FastApi application with async sqlalchemy, source code (will not provide schemas.py because it is not necessary): database.py from sqlalchemy import ( Column, String, ) from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker from sqlalchemy.orm.decl_api import DeclarativeMeta from app.config import settings engine = create_async_engine(settings.DATABASE_URL) Base: DeclarativeMeta = declarative_base() async_session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False) class Titles(Base): __tablename__ = "titles" id = Column(String(100), primary_key=True) title = Column(String(100), unique=True) async def get_session() -> AsyncSession: async with async_session() as session: yield session routers.py import .database from fastapi_utils.cbv import cbv from fastapi_utils.inferring_router import InferringRouter router = InferringRouter() async def get_titles(session: AsyncSession): results = await session.execute(select(database.Titles))) return results.scalars().all() @cbv(router) class TitlesView: session: AsyncSession = Depends(database.get_session) @router.get("/titles", status_code=HTTP_200_OK) async def get(self) -> List[TitlesSchema]: results = await get_titles(self.session) return [TitlesSchema.from_orm(result) for result in results] main.py from fastapi import FastAPI from app.routers import router def create_app() -> FastAPI: app = FastAPI() app .include_router(routers, prefix="/", tags=["Titles"]) return printer_app app = create_app() It runs with docker: CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000", "--limit-max-requests", "10000"] And it has Postgres database with default settings in docker too. It all runs at docker-swarm. Works fine at first, accepts all requests. But if you leave it for 15-30 minutes (I did not count), and then make a request, it will not work: <class 'asyncpg.exceptions.ConnectionDoesNotExistError'>: connection was closed in the middle of operation And right after that I send the next request and it doesn't throw an error. What could it be? How do I get rid of the ConnectionDoesNotExistError?
I solved that by using the pool_pre_ping setting, like this: engine = create_async_engine(DB_URL, pool_pre_ping=True) See https://docs.sqlalchemy.org/en/14/core/pooling.html
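For reference, a slightly fuller engine setup; pool_recycle is an optional extra safeguard that proactively replaces connections older than the given number of seconds, and the value shown is only an example:

    from sqlalchemy.ext.asyncio import create_async_engine

    engine = create_async_engine(
        DB_URL,
        pool_pre_ping=True,   # issue a lightweight ping on checkout and replace dead connections
        pool_recycle=1800,    # optionally recycle connections after 30 minutes
    )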
5
7
70,493,438
2021-12-27
https://stackoverflow.com/questions/70493438/why-does-this-python-code-with-threading-have-race-conditions
This code creates a race condition: import threading ITERS = 100000 x = [0] def worker(): for _ in range(ITERS): x[0] += 1 # this line creates a race condition # because it takes a value, increments and then writes # some inrcements can be done together, and lost def main(): x[0] = 0 # you may use `global x` instead of this list trick too t1 = threading.Thread(target=worker) t2 = threading.Thread(target=worker) t1.start() t2.start() t1.join() t2.join() for i in range(5): main() print(f'iteration {i}. expected x = {ITERS*2}, got {x[0]}') Output: $ python3 test.py iteration 0. expected x = 200000, got 200000 iteration 1. expected x = 200000, got 148115 iteration 2. expected x = 200000, got 155071 iteration 3. expected x = 200000, got 200000 iteration 4. expected x = 200000, got 200000 Python3 version: Python 3.9.7 (default, Sep 10 2021, 14:59:43) [GCC 11.2.0] on linux I thought GIL would prevent it and not allow two threads run together until they do something io-related or call a C library. At least this is what you may conclude from the docs. Turns out I was wrong. Then, what does GIL actually do, and when do threads run in parallel?
Reading the docs more carefully, I think the answer is there: The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines. However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O. This means that each line of source code compiles to multiple bytecode instructions. Individual bytecode instructions are atomic, i.e. they are executed one at a time, but whole source lines are not. Here's the bytecode that x[0] += 1 expands to (run dis.dis('x[0] += 1') to see): 0 LOAD_NAME 0 (x) 2 LOAD_CONST 0 (0) 4 DUP_TOP_TWO 6 BINARY_SUBSCR 8 LOAD_CONST 1 (1) 10 INPLACE_ADD 12 ROT_THREE 14 STORE_SUBSCR 16 LOAD_CONST 2 (None) 18 RETURN_VALUE When two threads interleave these instructions, a race condition occurs. So the GIL does not save you from it; it only prevents race conditions that could corrupt the internals of built-in structures like list or dict.
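Because of this, shared mutable state still needs explicit synchronization in Python; a minimal fix for the example in the question is to guard the increment with a lock:

    import threading

    ITERS = 100000
    x = [0]
    lock = threading.Lock()

    def worker():
        for _ in range(ITERS):
            with lock:  # makes the read-increment-write sequence atomic with respect to other threads
                x[0] += 1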
8
4
70,458,458
2021-12-23
https://stackoverflow.com/questions/70458458/how-do-i-simply-run-a-python-script-from-github-repo-with-actions
I assume it's possible to schedule a python script to run every day for example, from my github repository. After searching, I've come up with the following main.yml file that resides in the master branch of the repo: the .py file I want to run resides in another branch; I suppose it doesn't have to if it causes an issue, but the script isn't running either way. I'm new to a lot of this, and have a funny feeling I'm missing fundamental pieces to get this working. name: py on: schedule: - cron: "30 11 * * *" #runs at 11:30 UTC everyday jobs: build: runs-on: ubuntu-latest steps: - name: checkout repo content uses: actions/checkout@v2 # checkout the repository content to github runner. - name: setup python uses: actions/setup-python@v2 with: python-version: 3.8 #install the python needed - name: Install dependencies run: | python -m pip install --upgrade pip pip install flake8 pytest if [ -f requirements.txt ]; then pip install -r requirements.txt; fi ref: # branch my_other_branch - name: execute py script # run file run: | python my_file.py
Everything seems to be working now; the solution was to move my main.yml file into .github/workflows. I also moved my_file.py from the alternate branch into the master branch. One helpful comment recommended specifying the ref branch in the checkout step if my_file.py is not located in the default branch.
8
8
70,491,428
2021-12-27
https://stackoverflow.com/questions/70491428/how-do-i-reset-the-underscore-in-an-interactive-session
I have overriden the underscore variable _ in the Python interactive interpreter. How can I make the underscore work again without restarting the interpreter?
del _ A global _ shadows the builtin _, so deleting the global reveals the builtin again. It's also worth noting that it doesn't actually stop working, it's just not accessible. You can import builtins to access it: >>> _ = 'foobar' >>> 22 22 >>> _ 'foobar' >>> import builtins >>> 23 23 >>> builtins._ 23
4
6
70,511,031
2021-12-28
https://stackoverflow.com/questions/70511031/rename-names-of-multiindex-pandas-dataframe
I'm having trouble with a dataframe created from a groupby: df = base.groupby(['year', 'categ']).agg({'id_prod':'count', 'price':'sum'}).unstack(level=1) It returns a result with MultiIndex columns, and I would like to rename id_prod and price to no_sales and revenue, but I don't know how to do that because of the MultiIndex. With print(df.columns) the result is: MultiIndex([('id_prod', 0), ('id_prod', 1), ('id_prod', 2), ( 'price', 0), ( 'price', 1), ( 'price', 2)], names=[None, 'categ']) So it is this first column level that I would like to change. Thanks for your help!
df = df.rename(columns={'id_prod': 'no_sales', 'price': 'revenue'}, level=0) The level=0 indicates where in the multi-index the keys to be renamed can be found.
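A self-contained toy example of the same pattern (the data values here are made up to mimic the question's frame):

    import pandas as pd

    base = pd.DataFrame({
        'year':    [2021, 2021, 2022, 2022],
        'categ':   [0, 1, 0, 1],
        'id_prod': [11, 12, 13, 14],
        'price':   [10.0, 20.0, 30.0, 40.0],
    })
    df = base.groupby(['year', 'categ']).agg({'id_prod': 'count', 'price': 'sum'}).unstack(level=1)
    df = df.rename(columns={'id_prod': 'no_sales', 'price': 'revenue'}, level=0)
    print(df.columns.get_level_values(0).unique())  # Index(['no_sales', 'revenue'], dtype='object')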
5
4
70,467,781
2021-12-23
https://stackoverflow.com/questions/70467781/feature-importance-with-svr
I would like to plot feature importance with SVR, but I don't know if that is possible with support vector regression. Here is my code: from sklearn.svm import SVR C=1e3 svr_lin = SVR(kernel="linear", C=C) y_lin = svr_lin.fit(X,Y).predict(X) scores = cross_val_score(svr_lin, X, Y, cv = 5) print(scores) print(scores.mean()) print(scores.std())
SVR does not expose native feature importance scores, so you might need to try permutation feature importance, which is a technique for calculating relative importance scores that is independent of the model used. First, a model is fit on the dataset, such as a model that does not support native feature importance scores. Then the model is used to make predictions on a dataset while the values of a single feature (column) are scrambled. This is repeated for each feature in the dataset, and then the whole process is repeated 3, 5, 10 or more times. The result is a mean importance score for each input feature (and a distribution of scores given the repeats). This approach can be used for regression or classification and requires that a performance metric be chosen as the basis of the importance score, such as mean squared error for regression and accuracy for classification. Permutation feature importance can be used via the permutation_importance() function, which takes a fit model, a dataset (train or test dataset is fine), and a scoring function. from sklearn.svm import SVR from sklearn.inspection import permutation_importance from matplotlib import pyplot model = SVR() # fit the model model.fit(X, y) # perform permutation importance results = permutation_importance(model, X, y, scoring='neg_mean_squared_error') # get importance importance = results.importances_mean # summarize feature importance for i,v in enumerate(importance): print('Feature: %0d, Score: %.5f' % (i,v)) # plot feature importance pyplot.bar([x for x in range(len(importance))], importance) pyplot.show()
5
2
70,448,419
2021-12-22
https://stackoverflow.com/questions/70448419/how-to-retry-async-requests-upon-clientoserror-errno-104-connection-reset-by
I have a function in Google Cloud that accepts a number of parameters. I generate ~2k asynchronous requests with different combinations of parameter values using aiohttp: # url = 'https://...' # headers = {'X-Header': 'value'} timeout = aiohttp.ClientTimeout(total=72000000) async def submit_bt(session, url, payload): async with session.post(url, json=payload) as resp: result = await resp.text() async def main(): async with aiohttp.ClientSession(headers=headers, timeout=timeout) as session: tasks = [] gen = payload_generator() # a class that generates dictionaries for payload in gen.param_grid(): tasks.append(asyncio.ensure_future(submit_bt(session, url, payload))) bt_results = await asyncio.gather(*tasks) for result in bt_results: pass asyncio.run(main()) A function takes between 3 to 6 minutes to run, function timeout is set to 9 minutes and maximum number of instances to 3000, but I never see more than 150-200 instances being initiated even when the total number of submitted requests is between 1.5k and 2.5k. On some occasions all requests are processed in 20 to 30 minutes, but sometimes I get an error on the client side: ClientOSError: [Errno 104] Connection reset by peer that does not correspond to any errors on the server side. I think I should be able to catch it as an aiohttp.client_exceptions.ClientOSError exception, but I am not sure how to handle it in the asynchronous settings, so that the failed request is resubmitted and the premature termination is avoided. Any hints are greatly appreciated.
The solution suggested by @vaizki in the comment seems to be working well for me. After a closer look at the traceback it turned out that the exception was raised in the submit_bt co-routine, so I added the try-except clause: async def submit_bt(session, url, payload): try: async with session.post(url, json=payload) as resp: result = await resp.text() except aiohttp.client_exceptions.ClientOSError as e: await asyncio.sleep(3 + random.randint(0, 9)) async with session.post(url, json=payload) as resp: result = await resp.text() except Exception as e: result = str(e) return result It does not look very elegant with the repeated lines, but this is still a work in progress for me and the code structure is not formalized at this stage. Anyway it is clear to see what I wanted to achieve: post the payload to the function URL, catch exceptions, but repeat the post only in case of ClientOSError and only once. I did not want to go with a while True kind of loop to avoid infinite execution in case of some serious issues. I tried this code as is a couple of times and I know it worked through a couple of connection resets until the end of the task list, because I got all the results generated by the function, so even in this form it is robust enough for my circumstances.
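One way to avoid the duplicated block while keeping the same bounded-retry behaviour is a small loop; this is a sketch rather than the original code, and MAX_ATTEMPTS is an assumed constant (2 reproduces the answer's single retry):

    import asyncio
    import random
    import aiohttp

    MAX_ATTEMPTS = 2  # one initial try plus one retry, as in the answer above

    async def submit_bt(session, url, payload):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                async with session.post(url, json=payload) as resp:
                    return await resp.text()
            except aiohttp.ClientOSError as e:
                if attempt == MAX_ATTEMPTS:
                    return str(e)                                  # give up after the last attempt
                await asyncio.sleep(3 + random.randint(0, 9))      # back off before retrying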
4
5
70,466,886
2021-12-23
https://stackoverflow.com/questions/70466886/typeerror-init-got-an-unexpected-keyword-argument-providing-args
I am creating a Django website. I was recently adding permissions/search functionality using the allauth package. When I attempt to run the website through docker I receive the error message: File "/usr/local/lib/python3.9/site-packages/allauth/account/signals.py", line 5, in user_logged_in = Signal(providing_args=["request", "user"]) TypeError: init() got an unexpected keyword argument 'providing_args' What is causing this error? I know usually a type error is caused by incorrect models.py files but I can't seem to access this file since it is a part of an external package. Urls.py urlpatterns = [ path('admin/', admin.site.urls), path('accounts/', include('allauth.urls')), path('accounts/', include('accounts.urls')), path('', include('climate.urls')), ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) if settings.DEBUG: import debug_toolbar urlpatterns = [ path('__debug__/', include(debug_toolbar.urls)), ] + urlpatterns Models.py class Country(models.Model): id = models.UUIDField( primary_key= True, db_index = True, default=uuid.uuid4, editable= False ) name = models.CharField(max_length=50) population = models.IntegerField(default=1) emissions = models.FloatField(default=1) reason = models.CharField(default="", max_length=100) flags = models.ImageField(upload_to='images/', default="") page = models.URLField(max_length=300, default="") def save(self, *args, **kwargs): super(Country, self).save(*args, **kwargs) class Meta: verbose_name_plural = 'countries' indexes = [ models.Index(fields=['id'], name='id_index') ] permissions = { ("special_status", "Can read all countries") } def __str__(self): return self.name def flag(self): return u'<img src="%s" />' % (self.flags.url) def get_absolute_url(self): return reverse('country_detail', args =[str(self.id)]) flag.short_description = 'Flag' My settings.py that handles allauth. 
AUTH_USER_MODEL = 'accounts.CustomUser' LOGIN_REDIRECT_URL = 'climate:home' ACCOUNT_LOGOUT_REDIRECT = 'climate:home' ACCOUNT_SESSION_REMEMBER = True ACCOUNT_SIGNUP_PASSWORD_ENTER_TWICE = False ACCOUNT_USERNAME_REQUIRED = False ACCOUNT_AUTHENTICATION_METHOD = 'email' ACCOUNT_EMAIL_REQUIRED = True ACCOUNT_UNIQUE_EMAIL = True Full Traceback: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 954, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 892, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/runserver.py", line 115, in inner_run autoreload.raise_last_exception() File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception raise _exception[1] File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 381, in execute autoreload.check_errors(django.setup)() File "/usr/local/lib/python3.9/site-packages/django/utils/autoreload.py", line 64, in wrapper fn(*args, **kwargs) File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 114, in populate app_config.import_models() File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 300, in import_models self.models_module = import_module(models_module_name) File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 790, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/usr/local/lib/python3.9/site-packages/allauth/account/models.py", line 12, in <module> from . import app_settings, signals File "/usr/local/lib/python3.9/site-packages/allauth/account/signals.py", line 5, in <module> user_logged_in = Signal(providing_args=["request", "user"]) TypeError: __init__() got an unexpected keyword argument 'providing_args'
Based on the comments, you're running Django 4.0 with an old version of AllAuth. So you just need to update AllAuth and should be fine. However, other people who have upgraded AllAuth and are running Django 4.0 but still seeing this error may have custom AllAuth or other signals registered that include the providing_args argument. In this case, you need to search your project for any signals (such as those often overridden in AllAuth: user_logged_in or email_changed) and remove the providing_args=['request', 'user', 'signup'] or other variation from the Signal parenthesis. See below for more information, and an example diff showing how you could move each providing_args argument to a commented line. Django deprecated the ability to use the providing_args argument to django.dispatch.Signal in Django 3.1. See bullet in the Misc section of the 3.1 release notes. It was removed because this argument doesn't do anything other than act as documentation. If that seems strange, it is because it is strange. Arguments should indicate data getting passed. This is probably why it was deprecated. Django 4.0 went ahead and removed this altogether, making any code that calls Signal() with the providing_args argument now trigger the TypeError you encountered: TypeError: Signal.__init__() got an unexpected keyword argument 'providing_args' AllAuth removed the use of this argument in September of 2020. (See original report on this issue here and the referenced diff) The person asking the question in this thread was running AllAuth 0.42.0 which does not include this change and is thus incompatible with Django 4.0. As of today, the last version of Django AllAuth 0.42.0 is compatible with is Django 3.2.9.
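For anyone updating their own signal definitions, the change is simply dropping the argument; since providing_args was documentation only, the argument names can live on in a comment. The signal name and argument list below are illustrative, not taken from the question:

    from django.dispatch import Signal

    # Before (Django < 4.0 only):
    # email_changed = Signal(providing_args=["request", "user", "from_email_address", "to_email_address"])

    # After (works on Django 4.0+): keep the argument names as a comment for readers
    email_changed = Signal()  # provides: request, user, from_email_address, to_email_address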
16
20
70,508,775
2021-12-28
https://stackoverflow.com/questions/70508775/error-could-not-build-wheels-for-pycairo-which-is-required-to-install-pyprojec
Error while installing manimce, I have been trying to install manimce library on windows subsystem for linux and after running pip install manimce Collecting manimce Downloading manimce-0.1.1.post2-py3-none-any.whl (249 kB) |████████████████████████████████| 249 kB 257 kB/s Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting pangocairocffi<0.5.0,>=0.4.0 Downloading pangocairocffi-0.4.0.tar.gz (17 kB) Preparing metadata (setup.py) ... done Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting cairocffi<2.0.0,>=1.1.0 Downloading cairocffi-1.3.0.tar.gz (88 kB) |████████████████████████████████| 88 kB 160 kB/s Preparing metadata (setup.py) ... done Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Collecting pangocffi<0.9.0,>=0.8.0 Downloading pangocffi-0.8.0.tar.gz (33 kB) Preparing metadata (setup.py) ... done Collecting pycairo<2.0,>=1.19 Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting progressbar Downloading progressbar-2.5.tar.gz (10 kB) Preparing metadata (setup.py) ... done Collecting rich<7.0,>=6.0 Using cached rich-6.2.0-py3-none-any.whl (150 kB) Collecting cffi>=1.1.0 Using cached cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB) Collecting commonmark<0.10.0,>=0.9.0 Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB) Collecting typing-extensions<4.0.0,>=3.7.4 Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB) Collecting colorama<0.5.0,>=0.4.0 Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB) Collecting pycparser Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB) Building wheels for collected packages: cairocffi, pangocairocffi, pangocffi, pycairo, progressbar Building wheel for cairocffi (setup.py) ... done Created wheel for cairocffi: filename=cairocffi-1.3.0-py3-none-any.whl size=89650 sha256=afc73218cc9fa1d844d7165f598e2be0428598166b4c3ed9de5bbdc94a0a6977 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/f3/97/83/8022b9237866102e18d1b7ac0a269769e6fccba0f63dceb9b7 Building wheel for pangocairocffi (setup.py) ... done Created wheel for pangocairocffi: filename=pangocairocffi-0.4.0-py3-none-any.whl size=19283 sha256=54399796259c6e24f9ab56c5747ab273dcf97fb6fed3e7b54935f9ac49351d50 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/60/58/92/507a12a5044f7fcda6f4dfd8e0a607cc1fe957bc0dea885906 Building wheel for pangocffi (setup.py) ... done Created wheel for pangocffi: filename=pangocffi-0.8.0-py3-none-any.whl size=37899 sha256=bea348af93696816b046dd901aa60d29a464460c5faac67628eb7e1ea7d1807d Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/c4/df/6d/e9d0f79b1545f6e902cc22773b1429de7a5efc240b891ee009 Building wheel for pycairo (pyproject.toml) ... 
error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpuguwzu3u cwd: /tmp/pip-install-l4hqdegr/pycairo_f4d80b8f3e4840a3802342825adcdff5 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Building wheel for progressbar (setup.py) ... done Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12074 sha256=7290ef8de5dd955bf756b90130f400dd19c2cc9ea050a5a1dce2803440f581e2 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/2c/67/ed/d84123843c937d7e7f5ba88a270d11036473144143355e2747 Successfully built cairocffi pangocairocffi pangocffi progressbar Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip install manim_ce ERROR: Could not find a version that satisfies the requirement manim_ce (from versions: none) ERROR: No matching distribution found for manim_ce (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ manim example_scenes/basic.py -pql Command 'manim' not found, did you mean: command 'maim' from deb maim (5.5.3-1build1) Try: sudo apt install <deb name> (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ sudo apt-get install manim [sudo] password for yusifer_zendric: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package manim (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip3 install manimlib Collecting manimlib Downloading manimlib-0.2.0.tar.gz (4.8 MB) |████████████████████████████████| 4.8 MB 498 kB/s Preparing metadata (setup.py) ... done Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting argparse Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting opencv-python Downloading opencv_python-4.5.4.60-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB) |████████████████████████████████| 60.3 MB 520 kB/s Collecting progressbar Using cached progressbar-2.5-py3-none-any.whl Collecting pycairo Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Building wheels for collected packages: manimlib, pycairo Building wheel for manimlib (setup.py) ... done Created wheel for manimlib: filename=manimlib-0.2.0-py3-none-any.whl size=212737 sha256=27efe2c226d80cfe5663928e980d3e5f5a164d8e9d0aacea5014d37ffdedb76a Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/87/36/c1/2db5ed5de9908034108f3c39538cd3367445d9cec01e7c8c23 Building wheel for pycairo (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5o2970su cwd: /tmp/pip-install-sxxp3lw2/pycairo_d372a62d0c6b4c4484391402d21485e1 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Successfully built manimlib Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects all the libraries are installed accept the pycairo library. It's just showing this to install pyproject.toml error. Infact I have already done pip install pyproject.toml and it is installed then also it's showing the same error.
apt-get install sox ffmpeg libcairo2 libcairo2-dev apt-get install texlive-full pip3 install manimlib # or pip install manimlib Then: pip3 install manimce # or pip install manimce And everything works.
52
23
70,474,854
2021-12-24
https://stackoverflow.com/questions/70474854/returning-result-set-from-redshift-stored-procedure
I have a procedure that returns a recordset using the cursor method: CREATE OR REPLACE PROCEDURE myschema.permissions_sp(rs_out INOUT refcursor) LANGUAGE plpgsql SECURITY DEFINER AS $$ BEGIN OPEN rs_out FOR select schema_name,schema_owner,grantee_type,grantee,p_usage,p_create,object_name,perms,p_select,p_update,p_insert,p_delete,p_truncate,p_references,p_trigger,p_rule from xxxx.myview; END; $$ / GRANT EXECUTE on PROCEDURE myschema.permissions_sp(INOUT refcursor) TO xxxx_user / And I can call it perfectly fine from workbench using my admin login. BEGIN; CALL meta.permissions_sp('apples'); COMMIT; Result is a dataset of 16columns x >7k rows (To be honest I don't even need to do the transaction parts, it'll run just fine with the CALL only) However, when I call it via psycopg2, this is what happens : cur = conn.cursor() cur.execute("CALL meta.permissions_sp('apples');") conn.commit() rows = cur.fetchall() print('rows:%s' % (rows)) Output: > rows:[('apples',)] I've played around with using the commit and not using it. Just really struggling to understand what is going on. At this stage not sure if it's how I'm calling from Python, or on the Redshift side of things. Any guidance appreciated !
The procedure receives a name as its argument and returns a server-side cursor with that name. On the client side, after calling the procedure you must declare a named cursor with the same name and use it to access the query results. You must do this before committing the connection, otherwise the server-side cursor will be destroyed. with psycopg2.connect(dbname='test') as conn: cur = conn.cursor() cur.execute("""CALL myschema.permissions_sp('mycursor')""") # Use the name that was passed to the procedure. named_cursor = conn.cursor('mycursor') for row in named_cursor.fetchall(): print(row) This is analogous to how you might get the results in the psql console, as described in the Redshift docs: BEGIN; CALL myschema.permissions_sp('mycursor'); FETCH ALL FROM mycursor; COMMIT;
6
4
70,489,306
2021-12-26
https://stackoverflow.com/questions/70489306/kill-a-python-subprocess-that-does-not-return
TLDR: I want to kill a subprocess like top while it is still running. I am using FastAPI to run a command from user input. For example, if I enter top my program runs the command, but since it does not return, at the moment I have to use a time delay and then kill/terminate it. However, I want to be able to kill it while it is still running, and at the moment my kill command won't run until the time runs out. Here is the current code for running a process: @app.put("/command/{command}") async def run_command(command: str): subprocess.run([command], shell=True, timeout=10) return {"run command"} and to kill it: @app.get("/stop") async def stop(): proc.kill() return{"Stop"} I am new to FastAPI, so I would be grateful for any help.
It's because subprocess.run is blocking itself - you need to run shell command in background e.g. if you have asnycio loop already on, you could use subprocesses import asyncio process = None @app.get("/command/{command}") async def run_command(command: str): global process process = await asyncio.create_subprocess_exec( command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) return {"run command"} @app.get("/stop") async def stop(): process.kill() return {"Stop"} Or with Popen from subprocess import Popen process = None @app.get("/command/{command}") async def run_command(command: str): global process process = Popen([command]) # something long running return {"run command"} Adding the timeout option can be tricky as you do not want to wait until it completes (where you could indeed use wait_for function) but rather kill process after specific time. As far I know the best option would be to schedule other process which is responsible for killing main one. The code with asyncio could look like that: @app.get("/command/{command}") async def run_command(command: str): global process process = await asyncio.create_subprocess_exec( command, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) loop = asyncio.get_event_loop() # We schedule here stop coroutine execution that runs after TIMEOUT seconds loop.call_later(TIMEOUT, asyncio.create_task, stop(process.pid)) return {"run command"} @app.get("/stop") async def stop(pid=None): global process # We need to make sure we won't kill different process if process and (pid is None or process.pid == pid): process.kill() process = None return {"Stop"}
6
2
70,492,432
2021-12-27
https://stackoverflow.com/questions/70492432/gunicorn-async-and-threaded-workers-for-django
Async For input/output(IO) bound we need to use async code and django is not async by default, but we can achieve this running gunicorn with the gevent worker and monkey patching: gunicorn --worker-class=gevent --worker-connections=1000 --workers=3 main:app Gunicorn changelog from 2014 https://docs.gunicorn.org/en/stable/2014-news.html?highlight=monkey#gevent-worker: fix: monkey patching is now done in the worker Do i still need to monkey patch my app or it's done by default from a worker ? How did gevent achieve async functionality for my django code ? Threads If we have a CPU bound we need to use a gthread worker with threads: gunicorn --workers=5 --threads=2 --worker-class=gthread main:app If we use this configuration for i/o bound, does it work? When one thread is waiting because of i/o, will the other thread be able to work? I see the point in (3) (if I'm right) because of the wait time in i/o, but if this is a CPU bound, how in our case will the second thread help us or will it only help if the core is not fully loaded by one thread and there is room for another to run? Are (3) and (4) useless because of GIL ? For example, 4 people sending request to server with 1 worker and 4 threads. GIL make sure that only 1 thread works. First man start processing with thread 1 and other 3 are waiting ? What are the threads for, then?
Do i still need to monkey patch my app or it's done by default from a worker ? No need to patch anything in your code. No need to modify codes at all. How did gevent achieve async functionality for my django code ? gunicorn patches everything. If we use this configuration for i/o bound, does it work? When one thread is waiting because of i/o, will the other thread be able to work? This configuration works for i/o bound. Threads can switch between themselves at anytime (switching controlled by ultimately the operating system), no matter whether the current thread is doing I/O or CPU-bound computation. Multiple threads can work simultaneously on multi-thread CPUs. In contrast, greenlets are more of coroutines rather than threads. If a coroutine gets blocked by I/O, it actively allows another coroutine to take control of the CPU and do non-I/O stuff. I see the point in (3) (if I'm right) because of the wait time in i/o, but if this is a CPU bound, how in our case will the second thread help us or will it only help if the core is not fully loaded by one thread and there is room for another to run? For a purely CPU-bounded task on a single-thread CPU, extra threads makes little sense. Are (3) and (4) useless because of GIL? GIL forbids your Python codes running concurrently, but gunicorn mostly uses its libraries not written in Python. You cannot run your Django codes (in Python) with multiple threads, but the I/O tasks (handled by gunicorn, not in Python) may go concurrently. If you do need CPU utilization, use multiple processes (workers=2 * CPU_THREADS + 1) instead of multiple gthreads, or consider non-CPython interpreters like pypy, which is not constrained by GIL, but may not be compatible with your codes.
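To make the greenlet point above concrete, here is a small standalone sketch (independent of Django/gunicorn; the URL is a placeholder) of what the gevent worker's monkey patching buys you: once sockets are patched, blocking network calls yield to other greenlets, so several I/O-bound requests overlap.

from gevent import monkey
monkey.patch_all()          # the gevent worker performs this step for you

import time
import gevent
import urllib.request       # its sockets are now cooperative

URLS = ["http://example.com"] * 5   # placeholder URLs

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.status

start = time.time()
jobs = [gevent.spawn(fetch, u) for u in URLS]
gevent.joinall(jobs)
# The five downloads overlap while each greenlet waits on the network, so the
# total time is close to a single request's latency rather than five of them.
print([job.value for job in jobs], round(time.time() - start, 2))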
6
7
70,452,647
2021-12-22
https://stackoverflow.com/questions/70452647/how-to-use-python-to-read-excel-files-that-contain-extended-fonts-openpyxl-err
As a learning project for Python, I am attempting to read all Excel files in a directory and extract the names of all the sheets. I have been trying several available Python modules to do this (pandas in this example), but am running into an issue with most of them depending on openpyxl. This is my current code: import os import pandas directory_root = 'D:\\testFiles' # Dict to hold all files, stats all_files = {} for _current_path, _dirs_in_path, _files_in_path in os.walk(directory_root): # Add all files to this `all_files` for _file in _files_in_path: # Extract filesystem stats from the file _stats = os.stat(os.path.join(_current_path, _file)) # Add the full file path and its stats to the `all_files` dict. all_files[os.path.join(_current_path, _file)] = _stats # Loop through all found files to extract the sheet names for _file in all_files: # Open the workbook xls = pandas.ExcelFile(_file) # Loop through all sheets in the workbook for _sheet in xls.sheet_names(): print(_sheet) This raises an error from openpyxl when calling pandas.ExcelFile(): ValueError: Max value is 14. From what I can find online, this is because the file contains a font family above 14. How do I read from an Excel (xlsx) file while disregarding any existing formatting? The only potential solution I could find suggests modifying the original file and removing the formatting, but this is not an option as I do not want to modify the files in any way. Is there another way to do this that doesn't have this formatting limitation?
The issue is that your file does not conform to the Open Office specification. Only certain font families are allowed. Once openpyxl encounters a font out of specification, it throws this error because OpenPyxl only allows spec-conforming excel files. Some Excel readers may not have an issue with this and are more flexible with non-OpenOffice-spec-conforming files, but openpyxl only implements the Apache Open Office spec. The xml being parsed will contain information about the font like this: <font> <b/> <sz val="11"/> <color rgb="FF000000"/> <name val="Century Gothic"/> <family val="34"/> </font> If the family value is over 14, openpyxl throws this ValueError. There is an underlying descriptor in Open Office that controls this. When other readers like, say, Microsoft Office 365 Excel encounters this, it will change the font family when loading the file to a compliant font (the default, Calibri). As a workaround, if you don't want to change the value (as Microsoft Excel does), you can monkeypatch the descriptor to allow a larger max font family. # IMPORTANT, you must do this before importing openpyxl from unittest import mock # Set max font family value to 100 p = mock.patch('openpyxl.styles.fonts.Font.family.max', new=100) p.start() import openpyxl openpyxl.open('my-bugged-worksheet.xlsx') # this works now! This can be reproduced using this excel workbook. Before the patch this will fail to load. After the patch, it loads without error.
4
5
70,506,629
2021-12-28
https://stackoverflow.com/questions/70506629/efficient-code-for-custom-color-formatting-in-tkinter-python
So , I was trying to create a Periodic Table and its almost done from the exterior efficiently . However , I couldn't understand if there's any way I could fill in colors in individual buttons in the same fashion . Can anyone please help me regarding this ? Below here is my code : from tkinter import * period_1 = ['H','','','','','','','','','','','','','','','','','He'] period_2 = ['Li','Be','','','','','','','','','','','B','C','N','O','F','Ne'] period_3 = ['Na','Mg','','','','','','','','','','','Al','Si','P','S','Cl','Ar'] period_4 = """K Ca Sc Ti V Cr Mn Fe Co Ni Cu Zn Ga Ge As Se Br Kr""".split(" ") period_5 = """Rb Sr Y Zr Nb Mo Tc Ru Rh Pd Ag Cd In Sn Sb Te I Xe""".split(" ") period_6 = """Cs Ba * Hf Ta W Re Os Ir Pt Au Hg Tl Pb Bi Po At Rn""".split(" ") period_6a = """La Ce Pr Nd Pm Sm Eu Gd Tb Dy Ho Er Tm Yb Lu""".split(" ") period_7 = """Fr Ra ** Rf D Sg Bh Hs Mt Ds Rg Cn Nh Fl Mc Lv Ts Og""".split(" ") period_7a = """Ac Th Pa U Np Pu Am Cm Bk Cf Es Fm Md No Lr""".split(" ") root = Tk() root.attributes('-fullscreen', True) root.config(bg='black') Button(root, text='EXIT', bg='red', fg='white', command=root.destroy).place(x=0, y=0) canvas_a = Canvas(root, bg='black', width=350, height=50) canvas_a.place(relx=0.15, rely=0.3) canvas1 = Canvas(canvas_a, bg='black', width=350, height=50) canvas1.pack() canvas2 = Canvas(canvas_a, bg='black', width=350, height=50) canvas2.pack() canvas3 = Canvas(canvas_a, bg='black', width=350, height=50) canvas3.pack() canvas4 = Canvas(canvas_a, bg='black', width=350, height=50) canvas4.pack() canvas5 = Canvas(canvas_a, bg='black', width=350, height=50) canvas5.pack() canvas6 = Canvas(canvas_a, bg='black', width=350, height=50) canvas6.pack() canvas7 = Canvas(canvas_a, bg='black', width=350, height=50) canvas7.pack() canvas_b = Canvas(root, bg='black', width=350, height=50) canvas_b.place(relx=0.265, rely=0.8) canvas8 = Canvas(canvas_b, bg='black', width=350, height=50) canvas8.pack() canvas9 = Canvas(canvas_b, bg='black', width=350, height=50) canvas9.pack() class Table: def __init__(self): for i in range(0,18): Button(canvas1, text=period_1[i], width=6, height=2).pack(side=LEFT) Button(canvas2, text=period_2[i], width=6, height=2).pack(side=LEFT) Button(canvas3, text=period_3[i], width=6, height=2).pack(side=LEFT) Button(canvas4, text=period_4[i], width=6, height=2).pack(side=LEFT) Button(canvas5, text=period_5[i], width=6, height=2).pack(side=LEFT) Button(canvas6, text=period_6[i], width=6, height=2).pack(side=LEFT) Button(canvas7, text=period_7[i], width=6, height=2).pack(side=LEFT) for i in range(0,15): Button(canvas8, text=period_6a[i], width=6, height=2).pack(side=LEFT) Button(canvas9, text=period_7a[i], width=6, height=2).pack(side=LEFT) table1 = Table() root.mainloop() Further Notes : If anyone could tell me how I can color individual buttons (like green to halogens and blue to metals and red to hydrogen) and also to create toplevels for each button command , I would be grateful to him/her .
I rewrote your code with some better ways to create table. My idea was to pick out the buttons that fell onto a range of type and then loop through those buttons and change its color to those type. from tkinter import * period_1 = ['H' ,'','','','','','','','','','','','','','','','','He'] period_2 = ['Li','Be','','','','','','','','','','','B','C','N','O','F','Ne'] period_3 = ['Na','Mg','','','','','','','','','','','Al','Si','P','S','Cl','Ar'] period_4 = """K Ca Sc Ti V Cr Mn Fe Co Ni Cu Zn Ga Ge As Se Br Kr""".split(" ") period_5 = """Rb Sr Y Zr Nb Mo Tc Ru Rh Pd Ag Cd In Sn Sb Te I Xe""".split(" ") period_6 = """Cs Ba * Hf Ta W Re Os Ir Pt Au Hg Tl Pb Bi Po At Rn""".split(" ") period_7 = """Fr Ra ** Rf D Sg Bh Hs Mt Ds Rg Cn Nh Fl Mc Lv Ts Og""".split(" ") period_6a = """La Ce Pr Nd Pm Sm Eu Gd Tb Dy Ho Er Tm Yb Lu""".split(" ") period_7a = """Ac Th Pa U Np Pu Am Cm Bk Cf Es Fm Md No Lr""".split(" ") # Making a list of main elements and secondary elements main = period_1 + period_2 + period_3 + period_4 + period_5 + period_6 + period_7 sec = period_6a + period_7a # Colors for each group non_m_col = '#feab90' alk_m_col = '#ffe0b2' alk_ea_col = '#fecc81' trans_m_col = '#d2c4e8' halogen_col = '#a4d7a7' metals_col = '#feab90' noble_g_col = '#fefffe' act_col = '#b2e5fd' rare_m_col = '#e7ee9a' root = Tk() # Frame for the entire table period_tab = Frame(root) period_tab.pack() # Frame for the main elements only main_elem = Frame(period_tab) main_elem.pack() # Frame for the secondary elements only sec_elem = Frame(period_tab) sec_elem.pack(pady=10) # Creating a 7x18 table of buttons and appending it to a 2D python list for main elements buttons = [] for i in range(7): temp = [] for j in range(18): but = Button(main_elem,text=main[18*i+j],width=10,bg='#f0f0f0') but.grid(row=i,column=j) temp.append(but) buttons.append(temp) # Creating a 2x15 table of buttons for secondary elements for i in range(2): for j in range(15): text = sec[15*i+j] if i == 0: # If row 1 then different color Button(sec_elem,text=text,width=10,bg=rare_m_col).grid(row=i,column=j) else: Button(sec_elem,text=text,width=10,bg=act_col).grid(row=i,column=j) # Manually pick out main elements from the table non_metals = buttons[0][0],buttons[1][12:16],buttons[2][13:16],buttons[3][14:16],buttons[4][15] alk_metals = [row[0] for row in buttons[1:]] alk_ea_metals = [row[1] for row in buttons[1:]] halogens = [row[16] for row in buttons[1:]] noble_gases = [row[-1] for row in buttons[:]] transition_met = [buttons[x][3:12] for x in range(3,7)] metals = buttons[6][12:16],buttons[5][12:16],buttons[4][12:15],buttons[3][12:14],buttons[2][12] rare_metals = [row[2] for row in buttons[3:6]] actinoid = buttons[-1][2] plain_but = buttons[0][1:-1],buttons[1][2:12],buttons[2][2:12] # Change colors for those main element buttons actinoid['bg'] = act_col for i in alk_metals: i['bg'] = alk_m_col for i in alk_ea_metals: i['bg'] = alk_ea_col for i in halogens: i['bg'] = halogen_col for i in noble_gases: i['bg'] = noble_g_col for i in rare_metals: i['bg'] = rare_m_col for i in transition_met: for j in i: j['bg'] = trans_m_col for i in non_metals: if isinstance(i,list): for j in i: j.config(bg=non_m_col) else: i.config(bg=non_m_col) for i in metals: if isinstance(i,list): for j in i: j.config(bg=metals_col) else: i.config(bg=metals_col) for i in plain_but: for j in i: j['relief'] = 'flat' Button(root,text='EXIT',command=root.destroy).pack(pady=10) root.mainloop() I've commented the code to make it more understandable. 
The slicing part might seem a bit complicated because a Python list does not support 2D slicing. One way is to create a NumPy array and store the button coordinates in it, then retrieve the respective button object based on its coordinate; the code might be longer, but it would make the slicing easier and more understandable since NumPy supports 2D slicing. Final output of the GUI: Edit: Here is a more advanced and not so complicated periodic table
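As a sketch of the NumPy idea mentioned above (a slightly simpler variant: the Button objects themselves are stored in an object array, so whole groups can be picked out with 2D slices; it assumes the buttons list and the colour variables from the code above):

import numpy as np

btn_grid = np.empty((7, 18), dtype=object)
for i, row in enumerate(buttons):          # `buttons` is the 7x18 list built above
    for j, but in enumerate(row):
        btn_grid[i, j] = but

# Example: colour the transition-metal block (rows 4-7, columns 4-12) in one go.
for but in btn_grid[3:7, 3:12].ravel():
    but['bg'] = trans_m_col

# Example: the halogen column.
for but in btn_grid[1:, 16]:
    but['bg'] = halogen_col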
5
5
70,512,660
2021-12-28
https://stackoverflow.com/questions/70512660/how-to-show-text-on-a-heatmap-with-plotly
I am trying to show the z items as text on a Plotly heatmap. I am using the latest version (5.5.0) and following the exact example shown on the Plotly Heatmaps webpage (https://plotly.com/python/heatmaps/), see the section "Text on Heatmap Points" near the bottom. My code is their example code, which is: figHeatmap = go.Figure( data=go.Heatmap( z=[[1,20,30], [20,1,60], [30,60,1]], text=[['1','2','3'],['4','5','6'],['7','8','9']], texttemplate="%{text}", textfont={"size":20} ) ) When I render my app in a webpage, what I see is this: But on their site it shows with the labels: I don't get any errors when I run my app, and the labels do indeed show when I hover over the heatmap, but the text is not displaying on the heatmap as it should. Since I'm using their exact example, I don't understand what could be going wrong. Any thoughts? [EDIT]: It seems like this might be an issue with Dash. When I run the example interactively, I do indeed get the text labels. But my application is part of a Dash app, and that is where I am not seeing the rendered labels.
This is a new feature in 5.5.0 https://github.com/plotly/plotly.py/releases. After installing and restarting my jupyter kernel, this did not work. Required restart of complete jupyter environment the documented example using go https://plotly.com/python/heatmaps/#text-on-heatmap-points plus note in release notes auto_text parameter, I explored this. below shows use of px and also go by adding numbers as text as well import plotly.express as px import plotly.graph_objects as go import inflect p = inflect.engine() df = px.data.medals_wide(indexed=True) fig = px.imshow(df, text_auto=True) fig2 = go.Figure(fig.data, fig.layout) fig2 = fig2.update_traces(text=df.applymap(p.number_to_words).values, texttemplate="%{text}", hovertemplate=None) fig.show() fig2.show() dash see https://dash.plotly.com/external-resources Controlling the Plotly.js Version Used by dcc.Graph dash 2.0.0 is packaged with a version of plotly.js that does not yet support texttemplate code below shows how to override import numpy as np import dash import plotly.graph_objects as go import plotly.express as px from dash.dependencies import Input, Output, State from jupyter_dash import JupyterDash import inflect # Build App app = JupyterDash(__name__, external_scripts=["https://cdn.plot.ly/plotly-2.8.3.min.js"]) app.scripts.config.serve_locally = False app.layout = dash.html.Div( [ dash.dcc.Interval(id="run", max_intervals=1), dash.dcc.Graph(id="fig"), ] ) def hm(): p = inflect.engine() df = px.data.medals_wide(indexed=True) fig = px.imshow(df, text_auto=True) fig2 = go.Figure(fig.data, fig.layout) fig2 = fig2.update_traces( text=df.applymap(p.number_to_words).values, texttemplate="%{text}", hovertemplate=None, ) return fig2.update_layout(title=f"dash: {dash.__version__}") @app.callback(Output("fig", "figure"), Input("run", "n_intervals")) def createFig(n): return hm() # Run app and display result inline in the notebook app.run_server(mode="inline")
8
9
70,512,520
2021-12-28
https://stackoverflow.com/questions/70512520/python-unittest-mock-pyspark-chain
I'd like to write some unit tests for simple methods which have pyspark code. def do_stuff(self, df1: DataFrame, df2_path: str, df1_key: str, df2_key: str) -> DataFrame: df2 = self.spark.read.format('parquet').load(df2_path) return df1.join(df2, [f.col(df1_key) == f.col(df2_key)], 'left') How can I mock the spark read part? I've tried this: @patch("class_to_test.SparkSession") def test_do_stuff(self, mock_spark: MagicMock) -> None: spark = MagicMock() spark.read.return_value.format.return_value.load.return_value = \ self.spark.createDataFrame([(1, 2)], ["key2", "c2"]) mock_spark.return_value = spark input_df = self.spark.createDataFrame([(1, 1)], ["key1", "c1"]) actual_df = ClassToTest().do_stuff(input_df, "df2", "key1", "key2") expected_df = self.spark.createDataFrame([(1, 1, 1, 2)], ["key1", "c1", "key2", "c2"]) assert_pyspark_df_equal(actual_df, expected_df) But it fails with this error: py4j.Py4JException: Method join([class java.util.ArrayList, class org.apache.spark.sql.Column, class java.lang.String]) does not exist Looks like the mocking didn't work as I expected, what should I do with it so the spark.read.load returns the test dataframe that I specified? Edit: full code here
You can do it using PropertyMock. Here is an example: # test.py import unittest from unittest.mock import patch, PropertyMock, Mock from pyspark.sql import SparkSession, DataFrame, functions as f from pyspark_test import assert_pyspark_df_equal class ClassToTest: def __init__(self) -> None: self._spark = SparkSession.builder.getOrCreate() @property def spark(self): return self._spark def do_stuff(self, df1: DataFrame, df2_path: str, df1_key: str, df2_key: str) -> DataFrame: df2 = self.spark.read.format('parquet').load(df2_path) return df1.join(df2, [f.col(df1_key) == f.col(df2_key)], 'left') class TestClassToTest(unittest.TestCase): def setUp(self) -> None: self.spark = SparkSession.builder.getOrCreate() def test_do_stuff(self) -> None: # let's say ClassToTest().spark.read.format().load() will return a DataFrame with patch( # change __main__ to your module... '__main__.ClassToTest.spark', new_callable=PropertyMock, return_value=Mock( # read property read=Mock( # format() method format=Mock( return_value=Mock( # load() method result: load=Mock(return_value=self.spark.createDataFrame([(1, 2)], ['key2', 'c2'])))))) ): input_df = self.spark.createDataFrame([(1, 1)], ['key1', 'c1']) df = ClassToTest().do_stuff(input_df, 'df2_path', 'key1', 'key2') assert_pyspark_df_equal( df, self.spark.createDataFrame([(1, 1, 1, 2)], ['key1', 'c1', 'key2', 'c2']) ) if __name__ == '__main__': unittest.main() Let's check: python test.py # result: ---------------------------------------------------------------------- Ran 1 test in 7.460s OK
7
7
70,508,568
2021-12-28
https://stackoverflow.com/questions/70508568/django-csrf-trusted-origins-not-working-as-expected
Im having trouble in understanding why a post from a third party site is being rejected even though the site is added to CSRF_TRUSTED_ORIGINS list in settings.py. Im receiving a 403 error after the post stating the the csrf check has failed. I thought that adding the site to CSRF_TRUSTED_ORIGINS should make the site exempt from csrf checks. Is there something else I should have done in order to receive post requests from external origins? Im running django 3.2 CSRF_TRUSTED_ORIGINS = ['site.lv'] request headers: Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9 Accept-Encoding: gzip, deflate, br Accept-Language: en-US,en;q=0.9,lv;q=0.8,ru;q=0.7 Cache-Control: no-cache Connection: keep-alive Content-Length: 899 Content-Type: application/x-www-form-urlencoded Host: cvcentrs-staging.herokuapp.com Origin: https://www.site.lv Pragma: no-cache Referer: https://www.site.lv/ sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="96", "Microsoft Edge";v="96" sec-ch-ua-mobile: ?0 sec-ch-ua-platform: "Windows" Sec-Fetch-Dest: document Sec-Fetch-Mode: navigate Sec-Fetch-Site: cross-site Upgrade-Insecure-Requests: 1 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36 Edg/96.0.1054.62
This assumption is wrong: I thought that adding the site to CSRF_TRUSTED_ORIGINS should make the site exempt from csrf checks. Adding the URL to CSRF_TRUSTED_ORIGINS is only one thing you need to do to allow a POST request from a form on an external domain. You also need to: Make an AJAX-call from the external page to get a csrf_token, and send the token with your POST request. Set up CORS (see this article for example: https://www.stackhawk.com/blog/django-cors-guide/). Starting from Django 4.0, you have to include the scheme in CSRF_TRUSTED_ORIGINS: CSRF_TRUSTED_ORIGINS = ['https://site.lv', 'https://www.site.lv']
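As a hedged sketch of how those settings could look side by side (it assumes the django-cors-headers package for the CORS part and reuses the question's domain; it is not a complete configuration):

# settings.py
CSRF_TRUSTED_ORIGINS = [
    "https://site.lv",        # Django 4.0+ style: scheme included
    "https://www.site.lv",
]

INSTALLED_APPS = [
    # ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",   # keep this as high as possible
    # ... the rest of the middleware stack ...
]

CORS_ALLOWED_ORIGINS = [
    "https://site.lv",
    "https://www.site.lv",
]
CORS_ALLOW_CREDENTIALS = True   # required if the browser must send the csrftoken cookie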
15
34
70,466,992
2021-12-23
https://stackoverflow.com/questions/70466992/partial-tucker-decomposition
I want to apply a partial tucker decomposition algorithm to minimize MNIST image tensor dataset of (60000,28,28), in order to conserve its features when applying another machine algorithm afterwards like SVM. I have this code that minimizes the second and third dimension of the tensor i = 16 j = 10 core, factors = partial_tucker(train_data_mnist, modes=[1,2],tol=10e-5, rank=[i,j]) train_datapartial_tucker = tl.tenalg.multi_mode_dot(train_data_mnist, factors, modes=modes, transpose=True) test_data_partial_tucker = tl.tenalg.multi_mode_dot(test_data_mnist, factors, modes=modes, transpose=True) How to find the best rank [i,j] when I'm using partial_tucker in tensorly that will give the best dimension reduction for the image while conserving as much data?
Just like principal component analysis the partial tucker decomposition will give better results as we increase the rank, in the sense that the optimal mean square residual of the reconstruction is smaller. In general, features (the core tensor) that enables accurate reconstructions of the original data, can be used to make similar predictions (given any model we can prepend a transformation that reconstruct the original data from the core features). import mxnet as mx import numpy as np import tensorly as tl import matplotlib.pyplot as plt import tensorly.decomposition # Load data mnist = mx.test_utils.get_mnist() train_data = mnist['train_data'][:,0] err = np.zeros([28,28]) # here I will save the errors for each rank batch = train_data[::100] # process only 1% of the data to go faster for i in range(1,28): for j in range(1,28): if err[i,j] == 0: # Decompose the data core, factors = tl.decomposition.partial_tucker( batch, modes=[1,2], tol=10e-5, rank=[i,j]) # Reconstruct data from features c = tl.tenalg.multi_mode_dot(core, factors, modes=[1,2]); # Calculate the RMS error and save err[i,j] = np.sqrt(np.mean((c - batch)**2)); # Plot the statistics plt.figure(figsize=(9,6)) CS = plt.contour(np.log2(err), levels=np.arange(-6, 0)); plt.clabel(CS, CS.levels, inline=True, fmt='$2^{%d}$', fontsize=16) plt.xlabel('rank 2') plt.ylabel('rank 1') plt.grid() plt.title('Reconstruction RMS error'); Usually you have better result with a balanced rank, i.e. i and j not very different from each other. As we increase the error we can get better compression, we can rank the (i,j) by the error, and plot only where the error is minimum for a given feature dimension i * j, like this X = np.zeros([28, 28]) X[...] = np.nan; p = 28 * 28; for e,i,j in sorted([(err[i,j], i, j) for i in range(1, 28) for j in range(1, 28)]): if p < i * j: # we can achieve this error with some better compression pass else: p = i * j; X[i,j] = e; plt.imshow(X) Anywhere in the white region you are wasting resources, the choice
5
5
70,511,762
2021-12-28
https://stackoverflow.com/questions/70511762/modify-duplicated-rows-in-dataframe-python
I am working with a dataframe in Pandas and I need a solution to automatically modify one of the columns that has duplicate values. It is a column type 'object' and I would need to modify the name of the duplicate values. The dataframe is the following: City Year Restaurants 0 New York 2001 20 1 Paris 2000 40 2 New York 1999 41 3 Los Angeles 2004 35 4 Madrid 2001 22 5 New York 1998 33 6 Barcelona 2001 15 As you can see, New York is repeated 3 times. I would like to create a new dataframe in which this value would be automatically modified and the result would be the following: City Year Restaurants 0 New York 2001 2001 20 1 Paris 2000 40 2 New York 1999 1999 41 3 Los Angeles 2004 35 4 Madrid 2001 22 5 New York 1998 1998 33 6 Barcelona 2001 15 I would also be happy with "New York 1", "New York 2" and "New York 3". Any option would be good.
Use np.where to modify column City where it is duplicated: df['City'] = np.where(df['City'].duplicated(keep=False), df['City'] + ' ' + df['Year'].astype(str), df['City'])
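A quick end-to-end check of the one-liner above against the question's data (just a sketch; the frame is rebuilt inline):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'City': ['New York', 'Paris', 'New York', 'Los Angeles',
             'Madrid', 'New York', 'Barcelona'],
    'Year': [2001, 2000, 1999, 2004, 2001, 1998, 2001],
    'Restaurants': [20, 40, 41, 35, 22, 33, 15],
})

df['City'] = np.where(df['City'].duplicated(keep=False),
                      df['City'] + ' ' + df['Year'].astype(str),
                      df['City'])
print(df)
# The three New York rows become "New York 2001", "New York 1999" and
# "New York 1998"; every other city is left untouched.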
4
3
70,502,457
2021-12-28
https://stackoverflow.com/questions/70502457/do-i-need-to-do-any-text-cleaning-for-spacy-ner
I am new to NER and Spacy. Trying to figure out what, if any, text cleaning needs to be done. Seems like some examples I've found trim the leading and trailing whitespace and then muck with the start/stop indexes. I saw one example where the guy did a bunch of cleaning and his accuracy was really bad because all the indexes were messed up. Just to clarify, the dataset was annotated with DataTurks, so you get json like this: "Content": <original text> "label": [ "Skills" ], "points": [ { "start": 1295, "end": 1621, "text": "\n• Programming language... So by "mucking with the indexes", I mean, if you strip off the leading \n, you need to update the start index, so it's still aligned properly. So that's really the question, if I start removing characters from the beginning, end or middle, I need to apply the rule to the content attribute and adjust start/end indexes to match, no? I'm guessing an obvious "yes" :), so I was wondering how much cleaning needs to be done. So you would remove the \ns, bullets, leading / trailing whitespace, but leave standard punctuation like commas, periods, etc? What about stuff like lowercasing, stop words, lemmatizing, etc? One concern I'm seeing with a few samples I've looked at, is the start/stop indexes do get thrown off by the cleaning they do because you kind of need to update EVERY annotation as you remove characters to keep them in sync. I.e. A 0 -> 100 B 101 -> 150 if I remove a char at position 50, then I need to adjust B to 100 -> 149.
First, spaCy does no transformation of the input - it takes it literally as-is and preserves the format. So you don't lose any information when you provide text to spaCy. That said, input to spaCy with the pretrained pipelines will work best if it is in natural sentences with no weird punctuation, like a newspaper article, because that's what spaCy's training data looks like. To that end, you should remove meaningless white space (like newlines, leading and trailing spaces) or formatting characters (maybe a line of ----?), but that's about all the cleanup you have to do. The spaCy training data won't have bullets, so they might get some weird results, but I would leave them in to start. (Also, bullets are obviously printable characters - maybe you mean non-ASCII?) I have no idea what you mean by "muck with the indexes", but for some older NLP methods it was common to do more extensive preprocessing, like removing stop words and lowercasing everything. Doing that will make things worse with spaCy because it uses the information you are removing for clues, just like a human reader would. Note that you can train your own models, in which case they'll learn about the kind of text you show them. In that case you can get rid of preprocessing entirely, though for actually meaningless things like newlines / leading and following spaces you might as well remove them anyway. To address your new info briefly... Yes, character indexes for NER labels must be updated if you do preprocessing. If they aren't updated they aren't usable. It looks like you're trying to extract "skills" from a resume. That has many bullet point lists. The spaCy training data is newspaper articles, which don't contain any lists like that, so it's hard to say what the right thing to do is. I don't think the bullets matter much, but you can try removing or not removing them. What about stuff like lowercasing, stop words, lemmatizing, etc? I already addressed this, but do not do this. This was historically common practice for NLP models, but for modern neural models, including spaCy, it is actively unhelpful.
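On the index-updating point: rather than stripping characters from the whole document and shifting every annotation, one option is to shrink each annotated span so it no longer covers the surrounding whitespace, which leaves all other offsets untouched. A minimal sketch (the helper name and sample label are made up, in the DataTurks (start, end) style):

def trim_span(text, start, end):
    """Shrink a (start, end) annotation so it excludes leading/trailing whitespace."""
    span = text[start:end]
    new_start = start + (len(span) - len(span.lstrip()))
    new_end = end - (len(span) - len(span.rstrip()))
    return new_start, new_end

content = "\n• Programming languages: Python, SQL  "
start, end = 0, len(content)                 # a label covering the whole chunk
new_start, new_end = trim_span(content, start, end)
print(repr(content[new_start:new_end]))      # '• Programming languages: Python, SQL'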
7
5
70,501,334
2021-12-27
https://stackoverflow.com/questions/70501334/cannot-install-openvino-with-pip
I'm trying to install Openvino to convert a Keras model into a representation for the inference engine. I'm running the command: python3 openvino/tools/mo/mo_tf.py —model_13.h5/ --input_shape=\[180,180\] This returns the error: from openvino.tools.mo.subprocess_main import subprocess_main ModuleNotFoundError: No module named 'openvino' I've tried pip install openvino but consistently get: ERROR: Could not find a version that satisfies the requirement openvino (from versions: none) ERROR: No matching distribution found for openvino To try and make sure the versions of python for running the script and installing Openvino are the same, I've tried: python3 -m pip install openvino The content of the mo_tf.py script is simply: #!/usr/bin/env python3 # Copyright (C) 2018-2021 Intel Corporation # SPDX-License-Identifier: Apache-2.0 if __name__ == "__main__": from openvino.tools.mo.subprocess_main import subprocess_main subprocess_main(framework='tf') Has anyone seen this issue and found a workaround?
The latest version of openvino is 2021.4.2. The list of packages to download by pip includes packages for Python 3.6-3.9 for Linux, MacOS on Intel, and Windows; only packages for 64-bit platforms are provided. No packages for Python 3.10 and no source code. The solution is either to compile from sources, or install with Docker or install from Anaconda. Or downgrade to Python 3.9.
11
13
70,501,065
2021-12-27
https://stackoverflow.com/questions/70501065/type-hint-pandas-dataframegroupby
How should I type hint in Python a pandas DataFrameGroupBy object? Should I just use pd.DataFrame as for normal pandas dataframes? I didn't find any other solution atm
DataFrameGroupBy is a proper type in and of itself. So if you're writing a function which must specifically take a DataFrameGroupBy instance: from pandas.core.groupby import DataFrameGroupBy def my_function(dfgb: DataFrameGroupBy) -> None: """Do something with dfgb.""" If you're looking for a more general polymorphic type, there are several possibilities: pandas.core.groupby.GroupBy since DataFrameGroupBy inherits from GroupBy[DataFrame]. If you want to accept Series instances too, you could either union DataFrameGroupBy and SeriesGroupBy or you could use GroupBy[FrameOrSeries] (if you intend to always match the input type in your return value) or GroupBy[FrameOrSeriesUnion] if your output type doesn't reflect the input type. All of these types are in pandas.core.groupby.generic. You could combine the above generics (and others) in many different ways to your liking.
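For the "accept Series groupbys too" option described above, a short sketch of the union spelling (import path as named in the answer):

from typing import Union

import pandas as pd
from pandas.core.groupby.generic import DataFrameGroupBy, SeriesGroupBy

GroupByLike = Union[DataFrameGroupBy, SeriesGroupBy]

def summarise(grouped: GroupByLike) -> None:
    """Do something with either kind of groupby object."""
    print(grouped.ngroups)

df = pd.DataFrame({"a": [1, 1, 2], "b": [3, 4, 5]})
summarise(df.groupby("a"))         # DataFrameGroupBy
summarise(df.groupby("a")["b"])    # SeriesGroupBy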
7
11
70,498,791
2021-12-27
https://stackoverflow.com/questions/70498791/how-to-sort-a-mixed-typed-list
I have a list that looks as follows (the 'None' in the list is a string, not a None): profit = [1 , 20 , 3 , 5 , 90 , 'None', 900, 67 , 'None'] name = ['a', 'b', 'c', 'e', 'd', 'f' , 'g', 'k', 'pp'] The profit list is a list of "profit" values, so I had to sort it in a reversed order so that the highest values will be at the beginning. Also, I have more lists with the same length that represent other things that are connected to the profit list (for example the name list that shows where the profit came from). Now, I wrote the following code to sort the profit list in a reversed order, and I save the indices so I could sort the other lists (like name) according to the obtained indices: sorted_ind = sorted(range(len(profit)), key=lambda k: profit[k], reverse=True) for i in sorted_ind: print('{0:^50}|{1:^7}|'.format(profit[i], name[I])) The above code works great when my profit list contains only numbers. However, there are cases where I have no profit and I would like this value to be 'None' (I don't want to set it to 0). I'm trying to perform the same sort but in a way that all of the None indices will be at the end of the list - to sort all the ints and insert the Nones at the end. Any good way to do it?
You can use a "priority" key to the sort function which checks the types as well: >>> sorted(profit, key=lambda x: (isinstance(x, int), x), reverse=True) [900, 90, 67, 20, 5, 3, 1, 'None', 'None'] If you're doing that just to sort the names list, then it is not necessary to sort the indices, you should use zip: profits = [1, 20, 3, 5, 90, 'None', 900, 67, 'None'] names = ['a', 'b', 'c', 'e', 'd', 'f', 'g', 'k', 'pp'] sorted_ind = sorted(zip(profits, names), key=lambda tup: (isinstance(tup[0], int), tup[0]), reverse=True) for profit, name in sorted_ind: print('{0:^50}|{1:^7}|'.format(profit, name)) Gives: 900 | g | 90 | d | 67 | k | 20 | b | 5 | e | 3 | c | 1 | a | None | f | None | pp |
4
6
70,497,633
2021-12-27
https://stackoverflow.com/questions/70497633/how-to-correctly-read-csv-file-generated-by-groupby-results
I have calculated the mean value of DataFrame by two groups and saved the results to CSV file. Then, I tried to read it again by read_csv(), but the .loc() function doesn't work for the loaded DataFrame. Here's the code example: import pandas as pd import numpy as np np.random.seed(100) df = pd.DataFrame(np.random.randn(100, 3), columns=['a', 'b', 'value']) a_bins = np.arange(-3, 4, 1) b_bins = np.arange(-2, 4, 2) # calculate the mean value df['a_bins'] = pd.cut(df['a'], bins=a_bins) df['b_bins'] = pd.cut(df['b'], bins=b_bins) df_value_bin = df.groupby(['a_bins','b_bins']).agg({'value':'mean'}) # save to csv file df_value_bin.to_csv('test.csv') # read the exported file df_test = pd.read_csv('test.csv') When I type: df_value_bin.loc[(1.5, -1)] I got this output value 0.254337 Name: ((1, 2], (-2, 0]), dtype: float64 But, if I use the same method to locate the value from the loaded CSV file: df_test.loc[(1.5, -1)] I got this Keyerror: --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /tmp/ipykernel_33836/4042082162.py in <module> ----> 1 df_test.loc[(1.5, -1)] ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexing.py in __getitem__(self, key) 923 with suppress(KeyError, IndexError): 924 return self.obj._get_value(*key, takeable=self._takeable) --> 925 return self._getitem_tuple(key) 926 else: 927 # we by definition only have the 0th axis ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _getitem_tuple(self, tup) 1098 def _getitem_tuple(self, tup: tuple): 1099 with suppress(IndexingError): -> 1100 return self._getitem_lowerdim(tup) 1101 1102 # no multi-index, so validate all of the indexers ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup) 836 # We don't need to check for tuples here because those are 837 # caught by the _is_nested_tuple_indexer check above. --> 838 section = self._getitem_axis(key, axis=i) 839 840 # We should never have a scalar section here, because ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis) 1162 # fall thru to straight lookup 1163 self._validate_key(key, axis) -> 1164 return self._get_label(key, axis=axis) 1165 1166 def _get_slice_axis(self, slice_obj: slice, axis: int): ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexing.py in _get_label(self, label, axis) 1111 def _get_label(self, label, axis: int): 1112 # GH#5667 this will fail if the label is not present in the axis. -> 1113 return self.obj.xs(label, axis=axis) 1114 1115 def _handle_lowerdim_multi_index_axis0(self, tup: tuple): ~/miniconda3/lib/python3.9/site-packages/pandas/core/generic.py in xs(self, key, axis, level, drop_level) 3774 raise TypeError(f"Expected label or tuple of labels, got {key}") from e 3775 else: -> 3776 loc = index.get_loc(key) 3777 3778 if isinstance(loc, np.ndarray): ~/miniconda3/lib/python3.9/site-packages/pandas/core/indexes/range.py in get_loc(self, key, method, tolerance) 386 except ValueError as err: 387 raise KeyError(key) from err --> 388 raise KeyError(key) 389 return super().get_loc(key, method=method, tolerance=tolerance) 390 KeyError: 1.5
You should read the index as a MultiIndex, but you need to convert the strings to interval. You can use to_interval (all credits to korakot): def to_interval(istr): c_left = istr[0]=='[' c_right = istr[-1]==']' closed = {(True, False): 'left', (False, True): 'right', (True, True): 'both', (False, False): 'neither' }[c_left, c_right] left, right = map(int, istr[1:-1].split(',')) return pd.Interval(left, right, closed) df_test = pd.read_csv('test.csv', index_col=[0,1], converters={0: to_interval,1: to_interval}) Test: df_test.loc[(1.5, -1)] #value 0.254337 #Name: ((1, 2], (-2, 0]), dtype: float64
4
3
70,492,568
2021-12-27
https://stackoverflow.com/questions/70492568/how-to-use-queue-with-threading-properly
I am new to queue & threads kindly help with the below code , here I am trying to execute the function hd , I need to run the function multiple times but only after a single run has been completed import queue import threading import time fifo_queue = queue.Queue() def hd(): print("hi") time.sleep(1) print("done") for i in range(3): cc = threading.Thread(target=hd) fifo_queue.put(cc) cc.start() Current Output hi hi hi donedonedone Expected Output hi done hi done hi done​
You can use a Semaphore for your purposes A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release(). A default value of Semaphore is 1, class threading.Semaphore(value=1) so only one thread would be active at once: import queue import threading import time fifo_queue = queue.Queue() semaphore = threading.Semaphore() def hd(): with semaphore: print("hi") time.sleep(1) print("done") for i in range(3): cc = threading.Thread(target=hd) fifo_queue.put(cc) cc.start() hi done hi done hi done As @user2357112supportsMonica mentioned in comments RLock would be more safe option class threading.RLock This class implements reentrant lock objects. A reentrant lock must be released by the thread that acquired it. Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it. import queue import threading import time fifo_queue = queue.Queue() lock = threading.RLock() def hd(): with lock: print("hi") time.sleep(1) print("done") for i in range(3): cc = threading.Thread(target=hd) fifo_queue.put(cc) cc.start()
6
1
70,491,270
2021-12-27
https://stackoverflow.com/questions/70491270/how-to-make-a-single-image-using-several-images
I have these images and there is a shadow in all of them. My target is to make a single image of a car without a shadow by using these three images: Finally, how can I get this kind of image as shown below: Any kind of help or suggestions are appreciated. EDITED According to the comments, I used np.maximum and easily achieved my target: import cv2 import numpy as np img_1 = cv2.imread('1.png', cv2.COLOR_BGR2RGB) img_2 = cv2.imread('2.png', cv2.COLOR_BGR2RGB) img = np.maximum(img_1, img_2) cv2.imshow('img1', img_1) cv2.imshow('img2', img_2) cv2.imshow('img', img) cv2.waitKey(0)
Here's a possible solution. The overall idea is to compute the location of the shadows, produce a binary mask identifying the location of the shadows and use this information to copy pixels from all the cropped sub-images. Let's see the code. The first problem is to locate the three images. I used the black box to segment and crop each car, like this: # Imports: import cv2 import numpy as np # image path path = "D://opencvImages//" fileName = "qRLI7.png" # Reading an image in default mode: inputImage = cv2.imread(path + fileName) # Get the HSV image: hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV) # Get the grayscale image: grayImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY) showImage("grayImage", grayImage) # Threshold via Otsu: _, binaryImage = cv2.threshold(grayImage, 5, 255, cv2.THRESH_BINARY_INV) cv2.imshow("binaryImage", binaryImage) cv2.waitKey(0) The previous bit uses the grayscale version of the image and applies a fixed binarization using a threshold of 5. I also pre-compute the HSV version of the original image. The result of the thresholding is this: I'm trying to get the black rectangles and use them to crop each car. Let's get the contours and filter them by area, as the black rectangles on the binary image have the biggest area: for i, c in enumerate(currentContour): # Get the contour's bounding rectangle: boundRect = cv2.boundingRect(c) # Get the dimensions of the bounding rect: rectX = boundRect[0] rectY = boundRect[1] rectWidth = boundRect[2] rectHeight = boundRect[3] # Get the area: blobArea = rectWidth * rectHeight minArea = 20000 if blobArea > minArea: # Deep local copies: hsvImage = hsvImage.copy() localImage = inputImage.copy() # Get the S channel from the HSV image: (H, S, V) = cv2.split(hsvImage) # Crop image: croppedImage = V[rectY:rectY + rectHeight, rectX:rectX + rectWidth] localImage = localImage[rectY:rectY + rectHeight, rectX:rectX + rectWidth] _, binaryMask = cv2.threshold(croppedImage, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV) After filtering each contour to get the biggest one, I need to locate the position of the shadow. The shadow is mostly visible in the HSV color space, particularly, in the V channel. I cropped two versions of the image: The original BGR image, now cropped, and the V cropped channel of the HSV image. This is the binary mask that results from applying an automatic thresholding on the S channel : To locate the shadow I only need the starting x coordinate and its width, because the shadow is uniform across every cropped image. Its height is equal to each cropped image's height. I reduced the V image to a row, using the SUM mode. This will sum each pixel across all columns. The biggest values will correspond to the position of the shadow: # Image reduction: reducedImg = cv2.reduce(binaryMask, 0, cv2.REDUCE_SUM, dtype=cv2.CV_32S) # Normalize image: max = np.max(reducedImg) reducedImg = reducedImg / max # Clip the values to [0,255] reducedImg = np.clip((255 * reducedImg), 0, 255) # Convert the mat type from float to uint8: reducedImg = reducedImg.astype("uint8") _, shadowMask = cv2.threshold(reducedImg, 250, 255, cv2.THRESH_BINARY) The reduced image is just a row: The white pixels denote the largest values. The location of the shadow is drawn like a horizontal line with the largest area, that is, the most contiguous white pixels. 
I process this row by getting contours and filtering, again, to the largest area: # Get the biggest rectangle: subContour, _ = cv2.findContours(shadowMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) for j, s in enumerate(subContour): # Get the contour's bounding rectangle: boundRect = cv2.boundingRect(s) # Get the dimensions of the bounding rect: rectX = boundRect[0] rectY = boundRect[1] rectWidth = boundRect[2] rectHeight = boundRect[3] # Get the area: blobArea = rectWidth * rectHeight minArea = 30 if blobArea > minArea: # Get image dimensions: (imageHeight, imageWidth) = localImage.shape[:2] # Set an empty array, this will be the binary mask shadowMask = np.zeros((imageHeight, imageWidth, 3), np.uint8) color = (255, 255, 255) cv2.rectangle(shadowMask, (int(rectX), int(0)), (int(rectX + rectWidth), int(0 + imageHeight)), color, -1) # Invert mask: shadowMask = 255 - shadowMask # Store mask and cropped image: shadowRois.append((shadowMask.copy(), localImage.copy())) Alright, with that information I create a mask, where the only thing drawn in white is the location of the mask. I store this mask and the original BGR crop in the shadowRois list. What follows is a possible method to use this information and create a full image. The idea is that I use the information of each mask to copy all the non-masked pixels. I accumulate this information on a buffer, initially an empty image, like this: # Prepare image buffer: buffer = np.zeros((100, 100, 3), np.uint8) # Loop through cropped images and produce the final image: for r in range(len(shadowRois)): # Get data from the list: (mask, img) = shadowRois[r] # Get image dimensions: (imageHeight, imageWidth) = img.shape[:2] # Resize the buffer: newSize = (imageWidth, imageHeight) buffer = cv2.resize(buffer, newSize, interpolation=cv2.INTER_AREA) # Get the image mask: temp = cv2.bitwise_and(img, mask) # Set info in buffer, substitute the black pixels # for the new data: buffer = np.where(temp == (0, 0, 0), buffer, temp) cv2.imshow("Composite Image", buffer) cv2.waitKey(0) The result is this:
5
1
70,490,381
2021-12-26
https://stackoverflow.com/questions/70490381/how-can-you-multiply-all-the-values-within-a-2d-df-with-all-the-values-within-a
I'm new to numpy and I'm currently working on a modeling project for which I have to perform some calculations based on two different data sources. However until now I haven't managed to figure out how I could multiply all the individual values to each other: I have two data frames One 2D-dataframe: df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) One 1D-dataframe: df2 = np.array([1, 2, 3, 4, 5]) I would like to multiply all the individual values within the first dataframe (df1) separately with all the values that are stored within the second dataframe in order to create a data cube / new 3D-dataframe that has the shape 5x3x3: df3 = np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[2, 4, 6], [8, 10, 12], [14, 16, 18]], ..... ]) I tried different methods but every time I failed to obtain something that looks like df3. x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) y = np.array([1, 2, 3, 4, 5]) z = y for i in range(len(z)): z.iloc[i] = x for i in range(0, 5): for j in range(0, 3): for k in range(0, 3): z.iloc[i, j, k] = y.iloc[i] * x.iloc[j, k] print(z) Could someone help me out with some example code? Thank you!
Try this: df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) df2 = np.array([1, 2, 3, 4, 5]) df3 = df1 * df2[:, None, None] Output: >>> df3 array([[[ 1, 2, 3], [ 4, 5, 6], [ 7, 8, 9]], [[ 2, 4, 6], [ 8, 10, 12], [14, 16, 18]], [[ 3, 6, 9], [12, 15, 18], [21, 24, 27]], [[ 4, 8, 12], [16, 20, 24], [28, 32, 36]], [[ 5, 10, 15], [20, 25, 30], [35, 40, 45]]])
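A brief note on why that works: df2[:, None, None] adds two new axes, giving shape (5, 1, 1), which broadcasts against df1's (3, 3) to produce the (5, 3, 3) cube. A small check as a sketch:

import numpy as np

df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # shape (3, 3)
df2 = np.array([1, 2, 3, 4, 5])                     # shape (5,)

df3 = df1 * df2[:, None, None]                      # (5, 1, 1) broadcast with (3, 3)
assert df3.shape == (5, 3, 3)
assert np.array_equal(df3, df1 * df2.reshape(-1, 1, 1))   # equivalent spelling
assert np.array_equal(df3[1], 2 * df1)              # second slice is df1 scaled by 2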
4
5
70,461,753
2021-12-23
https://stackoverflow.com/questions/70461753/shap-the-color-bar-is-not-displayed-in-the-summary-plot
When displaying summary_plot, the color bar does not show. shap.summary_plot(shap_values, X_train) I have tried changing plot_size. When the plot is higher the color bar appears, but it is very small - doesn't look like it should. shap.summary_plot(shap_values, X_train, plot_size=0.7) Here is an example of a proper looking color bar. Does anyone know if this can be fixed somehow? How to reproduce: import pandas as pd import shap import sklearn from sklearn.ensemble import RandomForestRegressor # a classic housing price dataset X,y = shap.datasets.boston() # a simple linear model model = RandomForestRegressor(max_depth=6, random_state=0, n_estimators=10) model.fit(X, y) shap_values = shap.TreeExplainer(model).shap_values(X) shap.summary_plot(shap_values, X) In this case, the color bar is displayed, but it is very small. I have chosen such an example to make it easy to retrieve the data.
I had the same problem as you did, and I found that the solution was to downgrade matplotlib to 3.4.3. It appears SHAP isn't optimized for matplotlib 3.5.1 yet.
12
6
70,479,867
2021-12-25
https://stackoverflow.com/questions/70479867/install-uwsgi-on-m1-monterey-fails-with-python-3-10-0
I installed python via pyenv, and then created virtual environment with command python -m venv .venv which python Returns: /Users/my_name/Development/my_project/.venv/bin/python Then pip install uWSGI==2.0.20 fails with following error: *** uWSGI linking *** clang -o /Users/my_name/Development/my_project/.venv/bin/uwsgi -L/opt/homebrew/opt/libffi/lib core/utils.o core/protocol.o core/socket.o core/logging.o core/master.o core/master_utils.o core/emperor.o core/notify.o core/mule.o core/subscription.o core/stats.o core/sendfile.o core/async.o core/master_checks.o core/fifo.o core/offload.o core/io.o core/static.o core/websockets.o core/spooler.o core/snmp.o core/exceptions.o core/config.o core/setup_utils.o core/clock.o core/init.o core/buffer.o core/reader.o core/writer.o core/alarm.o core/cron.o core/hooks.o core/plugins.o core/lock.o core/cache.o core/daemons.o core/errors.o core/hash.o core/master_events.o core/chunked.o core/queue.o core/event.o core/signal.o core/strings.o core/progress.o core/timebomb.o core/ini.o core/fsmon.o core/mount.o core/metrics.o core/plugins_builder.o core/sharedarea.o core/rpc.o core/gateway.o core/loop.o core/cookie.o core/querystring.o core/rb_timers.o core/transformations.o core/uwsgi.o proto/base.o proto/uwsgi.o proto/http.o proto/fastcgi.o proto/scgi.o proto/puwsgi.o core/zlib.o core/yaml.o core/xmlconf.o core/dot_h.o core/config_py.o plugins/python/python_plugin.o plugins/python/pyutils.o plugins/python/pyloader.o plugins/python/wsgi_handlers.o plugins/python/wsgi_headers.o plugins/python/wsgi_subhandler.o plugins/python/web3_subhandler.o plugins/python/pump_subhandler.o plugins/python/gil.o plugins/python/uwsgi_pymodule.o plugins/python/profiler.o plugins/python/symimporter.o plugins/python/tracebacker.o plugins/python/raw.o plugins/gevent/gevent.o plugins/gevent/hooks.o plugins/ping/ping_plugin.o plugins/cache/cache.o plugins/nagios/nagios.o plugins/rrdtool/rrdtool.o plugins/carbon/carbon.o plugins/rpc/rpc_plugin.o plugins/corerouter/cr_common.o plugins/corerouter/cr_map.o plugins/corerouter/corerouter.o plugins/fastrouter/fastrouter.o plugins/http/http.o plugins/http/keepalive.o plugins/http/https.o plugins/http/spdy3.o plugins/signal/signal_plugin.o plugins/syslog/syslog_plugin.o plugins/rsyslog/rsyslog_plugin.o plugins/logsocket/logsocket_plugin.o plugins/router_uwsgi/router_uwsgi.o plugins/router_redirect/router_redirect.o plugins/router_basicauth/router_basicauth.o plugins/zergpool/zergpool.o plugins/redislog/redislog_plugin.o plugins/mongodblog/mongodblog_plugin.o plugins/router_rewrite/router_rewrite.o plugins/router_http/router_http.o plugins/logfile/logfile.o plugins/router_cache/router_cache.o plugins/rawrouter/rawrouter.o plugins/router_static/router_static.o plugins/sslrouter/sslrouter.o plugins/spooler/spooler_plugin.o plugins/cheaper_busyness/cheaper_busyness.o plugins/symcall/symcall_plugin.o plugins/transformation_tofile/tofile.o plugins/transformation_gzip/gzip.o plugins/transformation_chunked/chunked.o plugins/transformation_offload/offload.o plugins/router_memcached/router_memcached.o plugins/router_redis/router_redis.o plugins/router_hash/router_hash.o plugins/router_expires/expires.o plugins/router_metrics/plugin.o plugins/transformation_template/tt.o plugins/stats_pusher_socket/plugin.o -lpthread -lm -lz -lexpat -lintl -ldl -framework CoreFoundation /Users/my_name/.pyenv/versions/3.10.0/lib/python3.10/config-3.10-darwin/libpython3.10.a ld: library not found for -lintl clang: error: linker command failed with exit code 1 (use -v 
to see invocation) *** error linking uWSGI *** Any suggestions ?
Found a solution on GitHub: https://github.com/unbit/uwsgi/issues/2361 LDFLAGS=-L/opt/homebrew/Cellar/gettext/0.21/lib pip install --no-cache-dir "uWSGI==2.0.20"
4
9
70,489,412
2021-12-26
https://stackoverflow.com/questions/70489412/path-to-each-leaf-of-a-binary-tree
The function above AllPaths() appends an array containing the path to each leaf of the binary tree to the global array res. The code works just fine, but I want to remove the global variable res and make the function return an array instead. How can I do that? class Node: def __init__(self, value, left=None, right=None) -> None: self.value = value self.left = left self.right = right res = [] def allPaths(node, arr=[]): if node: tmp = [*arr, node.value] if not node.left and not node.right: # Leaf res.append(tmp) allPaths(node.left, tmp) allPaths(node.right, tmp) root = Node(1) root.left = Node(2); root.left.left = Node(4); root.left.right = Node(5); root.right = Node(3); root.right.right = Node(6); """ 1 <-- root / \ 2 3 / \ \ 4 5 6 <-- leaves """ allPaths(root) print(res) # Output : [[1, 2, 4], [1, 2, 5], [1, 3, 6]]
A simple way that allow you to avoid the inner lists and global list altogether is to make a generator that yields the values as they come. Then you can just pass this to list to make the final outcome: class Node: def __init__(self, value, left=None, right=None) -> None: self.value = value self.left = left self.right = right def allPaths(node): if node: if not node.left and not node.right: # Leaf yield [node.value] else: yield from ([node.value] + arr for arr in allPaths(node.left)) yield from ([node.value] + arr for arr in allPaths(node.right)) root = Node(1) root.left = Node(2); root.left.left = Node(4); root.left.right = Node(5); root.right = Node(3); root.right.right = Node(6); g = allPaths(root) list(g) # [[1, 2, 4], [1, 2, 5], [1, 3, 6]]
16
12
70,489,368
2021-12-26
https://stackoverflow.com/questions/70489368/how-to-register-an-exact-x-y-boundary-crossing-when-object-is-moving-more-than-1
I'm trying to learn Python/Pygame and I made a simple Pong game. However I cannot get the square to bounce off the sides at the perfect pixel as the drawing of the square is updating let's say 3 pixels every frame. I have a code to decide when the square is hitting the edges and bounce in a reverse direction like this: if y_ball < 100: y_ball_change = y_ball_change * -1 if y_ball > 675: y_ball_change = y_ball_change * -1 if x_ball > 775: x_ball_change = x_ball_change * -1 if x_ball <= x_rect + 25 and y_ball >= y_rect -25 and not y_ball > y_rect + 150: x_ball_change = x_ball_change * -1 +2 It's keeping the square inside the boundaries of the screen however it's not pixel perfect since x_ball_change y_ball_change is often more/less than 1 since they are randomized between -5 and 5 (except 0) to make starting direction of the ball random every new game. Thanks for any help!
You also need to correct the position of the ball when changing the direction of the ball. The ball bounces on the boundaries and moves the excessive distance in the opposite direction like a billiard ball: e.g.: if y_ball < 100: y_ball = 100 + (100 - y_ball) y_ball_change = y_ball_change * -1 if y_ball > 675: y_ball = 675 - (y_ball - 675) y_ball_change = y_ball_change * -1 if x_ball > 775: x_ball = 775 - (x_ball - 775) x_ball_change = x_ball_change * -1 if x_ball <= x_rect + 25 and y_rect -25 <= y_ball <= y_rect + 150: x_ball = x_rect + 25 + (x_rect + 25 - x_ball) x_ball_change = x_ball_change * -1 +2
5
2
70,486,824
2021-12-26
https://stackoverflow.com/questions/70486824/share-media-between-multiple-djangovms-servers
We have deployed a django server (nginx/gunicorn/django) but to scale the server there are multiple instances of same django application running. Here is the diagram (architecture): Each blue rectangle is a Virtual Machine. HAProxy sends all request to example.com/admin to Server 3.other requests are divided between Server 1 and Server 2.(load balance). Old Problem: Each machine has a media folder and when admin Uploads something the uploaded media is only on Server 3. (normal users can't upload anything) We solved this by sending all requests to example.com/media/* to Server 3 and nginx from Server3 serves all static files and media. Problem right now We are also using sorl-thumbnail. When a requests comes for example.com/,sorl-thumbnail tries to access the media file but it doesn't exist on this machine because it's on Server3. So now all requests to that machine(server 1 or 2) get 404 for that media file. One solution that comes to mind is to make a shared partition between all 3 machines and use it as media. Another solution is to sync all media folders after each upload but this solution has problem and that is we have almost 2000 requests per second and sometimes sync might not be fast enough and sorl-thumbnail creates the database record of empty file and 404 happens. Thanks in advance and sorry for long question.
You should use an object store to save and serve your user-uploaded files; django-storages makes the implementation really simple. If you don't want a cloud service such as AWS S3, you can host your own on-prem S3-compatible object store with MinIO. On your current setup I don't see any easy fix that keeps working once the number of VMs becomes dynamic with load. If you have deployment automation, you could also try rsync so that each VM syncs its media folder with the others.
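A minimal sketch of the django-storages configuration this would involve, assuming the S3-compatible route (e.g. MinIO); the bucket name, endpoint and credentials below are placeholders, not real values:
# settings.py -- requires django-storages[boto3]
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "media-bucket"              # placeholder bucket name
AWS_S3_ENDPOINT_URL = "http://minio.internal:9000"    # placeholder endpoint, e.g. a self-hosted MinIO
AWS_ACCESS_KEY_ID = "..."                             # placeholder credentials
AWS_SECRET_ACCESS_KEY = "..."
With media going through the object store, every VM (and sorl-thumbnail running on each of them) reads and writes the same files.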
7
2
70,489,060
2021-12-26
https://stackoverflow.com/questions/70489060/efficient-pandas-grouby-nunique-rolling-calculation
I am trying to build a scalable method to calculate the number of unique members that have modified a certain file up to and including the latest modified_date. The unique_member_until_now column contains expected result for each file. import pandas as pd from pandas import Timestamp # Example Dataset df = pd.DataFrame({'File': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C'], 'Member': ['X', 'X', 'Y', 'X', 'Y', 'Y', 'X', 'Z', 'Y', 'X', 'Y', 'X'], 'modified_date': [Timestamp('2021-11-25 00:00:00'), Timestamp('2021-11-28 00:00:00'), Timestamp('2021-12-14 00:00:00'), Timestamp('2021-10-17 00:00:00'), Timestamp('2021-11-01 00:00:00'), Timestamp('2021-11-04 00:00:00'), Timestamp('2021-11-16 00:00:00'), Timestamp('2021-12-16 00:00:00'), Timestamp('2021-12-29 00:00:00'), Timestamp('2021-10-30 00:00:00'), Timestamp('2021-11-23 00:00:00'), Timestamp('2021-12-17 00:00:00')], 'unique_member_until_now': [1, 1, 2, 1, 2, 2, 2, 3, 3, 1, 2, 2]}) df.groupby("File")["Member"].transform('nunique') ofcourse doesn't give the intended result The current approach is to iterate over every group and each record in the group, but I am sure that is grossly inefficient and slow when dealing with millions for rows.
An efficient method would be to compute the (non) duplicated on the File+Member columns, then groupby File and cumsum: (~df[['File', 'Member']].duplicated()).groupby(df['File']).cumsum() Saving as column: df['unique_member_until_now'] = (~df[['File', 'Member']].duplicated()).groupby(df['File']).cumsum() output: File Member modified_date unique_member_until_now 0 A X 2021-11-25 1 1 A X 2021-11-28 1 2 A Y 2021-12-14 2 3 B X 2021-10-17 1 4 B Y 2021-11-01 2 5 B Y 2021-11-04 2 6 B X 2021-11-16 2 7 B Z 2021-12-16 3 8 B Y 2021-12-29 3 9 C X 2021-10-30 1 10 C Y 2021-11-23 2 11 C X 2021-12-17 2
6
2
70,477,631
2021-12-25
https://stackoverflow.com/questions/70477631/batchdataset-get-img-array-and-labels
Here is the batch data set i created before to fit in the model: train_ds = tf.keras.preprocessing.image_dataset_from_directory( train_path, label_mode = 'categorical', #it is used for multiclass classification. It is one hot encoded labels for each class validation_split = 0.2, #percentage of dataset to be considered for validation subset = "training", #this subset is used for training seed = 1337, # seed is set so that same results are reproduced image_size = img_size, # shape of input images batch_size = batch_size, # This should match with model batch size ) valid_ds = tf.keras.preprocessing.image_dataset_from_directory( train_path, label_mode ='categorical', validation_split = 0.2, subset = "validation", #this subset is used for validation seed = 1337, image_size = img_size, batch_size = batch_size, ) if i run a for loop, i am able to access the img array and labels: for images, labels in train_ds: print(labels) But if i try to access them like this: ATTEMPT 1) images, labels = train_ds I get the following value error: ValueError: too many values to unpack (expected 2) ATTEMPT 2: If i try to unpack it like this: images = train_ds[:,0] # get the 0th column of all rows labels = train_ds[:,1] # get the 1st column of all rows I get the following error: TypeError: 'BatchDataset' object is not subscriptable Is there a way for me to extract the labels and images without going trough a for loop?
Just unbatch your dataset and convert the data to lists: import tensorflow as tf import pathlib dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True) data_dir = pathlib.Path(data_dir) batch_size = 32 train_ds = tf.keras.utils.image_dataset_from_directory( data_dir, validation_split=0.2, subset="training", seed=123, batch_size=batch_size) train_ds = train_ds.unbatch() images = list(train_ds.map(lambda x, y: x)) labels = list(train_ds.map(lambda x, y: y)) print(len(labels)) print(len(images)) Found 3670 files belonging to 5 classes. Using 2936 files for training. 2936 2936
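If plain NumPy arrays are wanted, a single pass over the already-unbatched dataset avoids iterating it twice; a sketch based on the same train_ds as above:
images, labels = [], []
for image, label in train_ds.as_numpy_iterator():  # train_ds has already been unbatched above
    images.append(image)
    labels.append(label)
print(len(images), len(labels))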
6
7
70,486,284
2021-12-26
https://stackoverflow.com/questions/70486284/for-loop-is-only-storing-the-last-value-in-colum
I am trying to pull the week number given a date and then add that week number to the corresponding row in a pandas/python dataframe. When I run a for loop it is only storing the last calculated value instead of recording each value. I've tried .append but haven't been able to get anything to work. import datetime from datetime import date for i in df.index: week_number = date(df.year[i],df.month[i],df.day[i]).isocalendar() df['week'] = (week_number[1]) Expected values: day month year week 8 12 2021 49 19 12 2021 50 26 12 2021 51 Values I'm getting: day month year week 8 12 2021 51 19 12 2021 51 26 12 2021 51
The loop only keeps the last value because df['week'] = (week_number[1]) assigns that single scalar to the entire column on every iteration. You can simply use the Pandas .apply method to make it a one-liner: df["week"] = df.apply(lambda x: date(x.year, x.month, x.day).isocalendar()[1], axis=1)
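A hedged, fully vectorized alternative that avoids constructing date objects row by row; it assumes pandas >= 1.1, where Series.dt.isocalendar() is available:
# assemble a datetime column from the year/month/day columns, then take the ISO week
df["week"] = pd.to_datetime(df[["year", "month", "day"]]).dt.isocalendar().week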
6
1
70,482,003
2021-12-25
https://stackoverflow.com/questions/70482003/updating-a-json-in-a-more-efficient-way
[ {"923390702359048212": 5}, {"462291477964259329": 1}, {"803390252265242634": 3}, {"824114065445486592": 2}, {"832041337968263178": 4} ] This is a list of user ids that I just randomly made and some sample number that each id has. In this case lets call it a number of goals scored in a season in a video game. As I try to update the amount of goals scored at the end of the game, I have to go in a roundabout way which first gets all the member ids in the server, and then compares it to the data with the following code. amount = 0 for member in server: if member.id != server.member.id: amount = amount + 1 else: print(amount) print(str(member.id), str(server.member.id)) print(jsondata[amount][str(server.member.id)]) break jsondata[amount][str(server.member.id) = jsondata[amount][str(server.member.id)] + 1 Is there a better way to do what I am working on? I know I am going to run into problems eventually as I don't list members on the json until they score a goal and also I feel like I am wasting a ton of time with my program by having it check a lot of members (I edited out my real list and made this one to make it easier as nobody needs to see 50+ entries to get the idea). I am still a beginner when it comes to this so please respond in simple terms or leave links to places I can learn more complicated ideas. Thanks def goalscored(): amount = 0 for member in server: if member.id != server.member.id: amount = amount + 1 else: print(amount) break with open('Goals.json','w') as f: jsondata = json.load(f) jsondata[amount][str(server.member.id) = jsondata[amount][str(server.member.id)] + 1 json.dump(jsondata,f) def firstgoal(server.member.id): with open('Goals.json','w') as f: jsondata = json.load(f) amount = 0 goalscored = 1 totalmembers = server.members.amount for member in server: if membed.id !=server.member.id: amount = amount + 1 if amount == totalmembers: NewScore = {user.id:goalscored} json.dump(NewScore,f) Code that I'm using
Not sure why you couldn't get it work earlier, but storing it as dict would be much, much easier. # file.json { "923390702359048212": 5, "462291477964259329": 1, "803390252265242634": 3, "824114065445486592": 2, "832041337968263178": 4 } def goalscored(): with open("Goals.json", "r") as f: jsondata = json.load(f) # Make sure your id is not int but str if server.member.id not in jsondata: # first goal case jsondata[server.member.id] = 0 jsondata[server.member.id] += 1 with open("Goals.json", "w") as f: json.dump(jsondata, f)
5
1
70,481,851
2021-12-25
https://stackoverflow.com/questions/70481851/how-to-fix-exception-has-occurred-sslerror-httpsconnectionpool-in-vs-code-env
I'm trying to use the Python requests library, but I get the error below. I use Psiphon VPN most of the time on Windows 10, and the error appears after calling requests.get('[API URL]'): Exception has occurred: SSLError HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /user (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)'))) During handling of the above exception, another exception occurred: During handling of the above exception, another exception occurred: File "C:\Users\Hessam\Desktop\QWE.py", line 3, in <module> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
You should try to add verify=False to your request: import requests r = requests.get('https://api.github.com/user', verify=False) requests verifies SSL certificates for HTTPS requests, just like a web browser. By default, SSL verification is enabled, and requests will throw an SSLError if it’s unable to verify the certificate. In your specific case, you most likely have a problem with the SSL certificate on your VPN. Note that when verify is set to False, requests will accept any TLS certificate presented by the server, and will ignore hostname mismatches and/or expired certificates, which will make your application vulnerable to man-in-the-middle (MitM) attacks. Setting verify to False may be useful during local development or testing.
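If the VPN's interception certificate can be exported, a safer option than disabling verification is to keep it on and point requests at a CA bundle that includes that certificate (the path below is a placeholder):
import requests

# verify may be a path to a CA bundle instead of a boolean
r = requests.get('https://api.github.com/user', verify='/path/to/ca-bundle.pem')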
7
11
70,479,605
2021-12-25
https://stackoverflow.com/questions/70479605/python-pandas-pandas-correlation-one-column-vs-all
I'm trying to get the correlation between a single column and the rest of the numerical columns of the dataframe, but I'm stuck. I'm trying with this: corr = IM['imdb_score'].corr(IM) But I get the error operands could not be broadcast together with shapes which I assume is because I'm trying to find a correlation between a vector (my imdb_score column) with the dataframe of several columns. How can this be fixed?
The most efficient method it to use corrwith. Example: df.corrwith(df['A']) Setup of example data: import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(10, size=(5, 5)), columns=list('ABCDE')) # A B C D E # 0 7 2 0 0 0 # 1 4 4 1 7 2 # 2 6 2 0 6 6 # 3 9 8 0 2 1 # 4 6 0 9 7 7 output: A 1.000000 B 0.526317 C -0.209734 D -0.720400 E -0.326986 dtype: float64
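Applied to the asker's frame (IM and imdb_score are the names from the question), a sketch that restricts the computation to the numeric columns and drops the trivial self-correlation:
corr = IM.select_dtypes('number').corrwith(IM['imdb_score']).drop('imdb_score')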
8
9
70,472,945
2021-12-24
https://stackoverflow.com/questions/70472945/pandas-getting-the-mean-of-columns-in-multi-index-dataframe
I have a pandas multi index dataframe like shown below. M EM A ... EA M0 EM0 Component EN EZ NZ EN EZ NZ EN EZ ... EZ NZ EN EZ NZ EN EZ NZ Date ... 2020-07-15 0.001682 0.000963 0.001292 0.000737 0.000635 0.000907 -0.048716 0.022769 ... 0.013103 0.016042 0.003619 0.001009 0.001718 0.000829 0.000685 0.000880 2020-07-16 0.000198 0.000178 -0.001219 0.000586 0.000548 0.000691 0.014514 0.009234 ... 0.010467 0.013240 0.000271 0.000238 -0.001365 0.000592 0.000541 0.000750 2020-07-17 0.000810 -0.000322 -0.000682 0.000445 0.000654 0.000595 0.045604 -0.014437 ... 0.014070 0.011086 0.000966 -0.000665 -0.000886 0.000680 0.000564 0.000584 2020-07-18 0.000287 -0.000887 -0.000329 0.000631 0.000815 0.000534 0.038145 -0.001408 ... 0.016091 0.010116 -0.000147 -0.000928 -0.000342 0.000760 0.000654 0.000519 2020-07-19 0.000805 0.000673 -0.001189 0.000537 0.000513 0.000462 0.051083 0.026574 ... 0.010684 0.009200 0.000746 0.001316 -0.001138 0.000896 0.000493 0.000470 I need to calculate the mean of EN, EZ, and NZ columns under M and EM to new columns (e.g. Mean) (also under M and EM, respectively), like shown below. M EM Component EN EZ NZ Mean EN EZ NZ Mean Date 2020-07-15 0.001682 0.000963 0.001292 x.YYYYYY 0.000737 0.000635 0.000907 y.ZZZZZZ Can someone show me a workaround for this? Thanks in advance!
Try: df_mean = pd.concat({'mean': df.groupby(level=0, axis=1).mean()}, axis=1).swaplevel(axis=1) df = df.join(df_mean).sort_index(level=0, axis=1) print(df) # Output: EM M Component EN EZ NZ mean EN EZ NZ mean Date 2020-07-15 4 5 6 5.0 1 2 3 2.0 2020-07-16 14 15 16 15.0 11 12 13 12.0 Setup to be reproducible: import io s = """\ ,M,M,M,EM,EM,EM Component,EN,EZ,NZ,EN,EZ,NZ Date,,,,,, 2020-07-15,1,2,3,4,5,6 2020-07-16,11,12,13,14,15,16 """ df = pd.read_csv(io.StringIO(s), header=[0, 1], index_col=0) Update I was wondering how may I calculate the median under EM instead of mean. I want to have only the mean for M and only median for EM df1 = pd.concat([df['M'].mean(axis=1), df['EM'].median(axis=1)], keys=[('M', 'mean'), ('EM', 'median')]) df = df.join(df1.unstack('Date').T).sort_index(level=0, axis=1) print(df) # Output: EM M Component EN EZ NZ median EN EZ NZ mean Date 2020-07-15 4 6 6 6.0 1 3 3 2.333333 2020-07-16 14 17 16 16.0 11 11 13 11.666667
4
7
70,471,888
2021-12-24
https://stackoverflow.com/questions/70471888/text-as-tooltip-popup-or-labels-in-folium-choropleth-geojson-polygons
Folium allow to create Markers with tooltip or popup text. I would like to do the same with my GeoJSON polygons. My GeoJSON has a property called "name" (feature.properties.name -> let's assume it is the name of each US state). I would like to be able to display this as a label in my choropleth map, in addition to the unemployment rate in each state. I also have the same information in the "State" column from the pandas dataframe. Is this possible? I would be happy with a solution that allows this to be a popup, tooltip or a simple text label written on top. import pandas as pd url = ( "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data" ) state_geo = f"{url}/us-states.json" state_unemployment = f"{url}/US_Unemployment_Oct2012.csv" state_data = pd.read_csv(state_unemployment) m = folium.Map(location=[48, -102], zoom_start=3) folium.Choropleth( geo_data=state_geo, name="choropleth", data=state_data, columns=["State", "Unemployment"], key_on="feature.id", fill_color="YlGn", fill_opacity=0.7, line_opacity=0.2, legend_name="Unemployment Rate (%)", ).add_to(m) folium.LayerControl().add_to(m) m
I've had to use folium's GeoJsonTooltip() and some other steps to get this done in the past. I'm curious to know if someone has a better way Capture the return value of the Choropleth function Add a value(eg unemployment) to the Chorpleth's underlying geojson obj Create GeoJsonTooltip with that value from step 2 Add that tooltip to the choropleth's geojson url = ( "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data" ) state_geo = f"{url}/us-states.json" state_unemployment = f"{url}/US_Unemployment_Oct2012.csv" state_data = pd.read_csv(state_unemployment) m = folium.Map(location=[48, -102], zoom_start=3) # capturing the return of folium.Choropleth() cp = folium.Choropleth( geo_data=state_geo, name="choropleth", data=state_data, columns=["State", "Unemployment"], key_on="feature.id", fill_color="YlGn", fill_opacity=0.7, line_opacity=0.2, legend_name="Unemployment Rate (%)", ).add_to(m) # creating a state indexed version of the dataframe so we can lookup values state_data_indexed = state_data.set_index('State') # looping thru the geojson object and adding a new property(unemployment) # and assigning a value from our dataframe for s in cp.geojson.data['features']: s['properties']['unemployment'] = state_data_indexed.loc[s['id'], 'Unemployment'] # and finally adding a tooltip/hover to the choropleth's geojson folium.GeoJsonTooltip(['name', 'unemployment']).add_to(cp.geojson) folium.LayerControl().add_to(m) m
10
17
70,465,276
2021-12-23
https://stackoverflow.com/questions/70465276/multiprocessing-hanging-at-join
Before anyone marks it as a duplicate question. I have been looking at StackOverflow posts for days, I haven't really found a good or satisfying answer. I have a program that at some point will take individual strings (also many other arguments and objects), do some complicated processes on them, and spit 1 or more strings back. Because each string is processed separately, using multiprocessing seems natural here, especially since I work on machines with over 100 cores. The following is a minimal example, which works with up to 12 to 15 cores, if I try to give it more cores, it hangs at p.join(). I know it's hanging at join because I tried to add some debug prints before and after join and it would stop at some point between the two print commands. Minimal example: import os, random, sys, time, string import multiprocessing as mp letters = string.ascii_uppercase align_len = 1300 def return_string(queue): n_strings = [1,2,3,4] alignments = [] # generating 1 to 4 sequences randomly, each sequence of length 1300 # the original code might even produce more than 4, but 1 to 4 is an average case # instead of the random string there will be some complicated function called # in the original code for i in range(random.choice(n_strings)): alignment = "" for i in range(align_len): alignment += random.choice(letters) alignments.append(alignment) for a in alignments: queue.put(a) def run_string_gen(cores): processes = [] queue = mp.Queue() # running the target function 1000 time for i in range(1000): # print(i) process = mp.Process(target=return_string, args = (queue,)) processes.append(process) if len(processes) == cores: counter = len(processes) for p in processes: p.start() for p in processes: p.join() while queue.qsize() != 0: a = queue.get() # the original idea is that instead of print # I will be writing to a file that is already open print(a) processes = [] queue = mp.Queue() # any leftovers processes if processes: for p in processes: p.start() for p in processes: p.join() while queue.qsize() != 0: a = queue.get() print(a) if __name__ == "__main__": cores = int(sys.argv[1]) if cores > os.cpu_count(): cores = os.cpu_count() start = time.perf_counter() run_string_gen(cores) print(f"it took {time.perf_counter() - start}") The suspect is that the queue is getting full, but also it's not that many strings, when I give it 20 cores, it's hanging, but that's about 20*4=80 strings (if the choice was always 4), but is that many strings for the queue to get full? Assuming the queue is getting full, I am not sure at which point I should check and empty it. Doing it inside return_string seems to be a bad idea as some other processes will also have the queue and might be emptying it/filling it at the same time. Do I use lock.acquire() and lock.release() then? These strings will be added to a file, so I can avoid using a queue and output the strings to a file. However, because starting a process means copying objects, I cannot pass a _io.TextIOWrapper object (which is an open file to append to) but I need to open and close the file inside return_string while syncing using lock.acquire() and lock.release(), but this seems wasteful to keep opening and closing the output file to write to it. Some of the suggested solutions out there: 1- De-queuing the queue before joining is one of the answers I found. 
However, I cannot anticipate how long each process will take, and adding a sleep command after p.start() loop and before p.join() is bad (at least for my code), because if they finish fast and I end up waiting, that's just a lot of time wasted, and the whole idea is to have speed here. 2- Add some kind of sentinal character e.g. none to know if one worker finished. But didn't get this part, if I run the target function 10 times for 10 cores, I will have 10 sentinels, but the problems is that it's hanging and can't get to the queue to empty and check for sentinal. Any suggestions or ideas on what to do here?
Read carefully the documentation for `multiprocessing.Queue. Read the second warning, which says in part: Warning: As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe. This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children. In simple terms, your program violates this by joining the processes before it has read the items from the queue. You must reverse the order of operations. Then the problem becomes how does the main process know when to stop reading if the subprocesses are still running and writing to the queue. The simplest solution is for each subprocess to write a special sentinel record as the final item signaling that there are no more items that will be written by that process. The main process can then simply do blocking reads until it sees N sentinel records where N is the number of processes that it has started that will be writing to the queue. The sentinel record just has to be any unique record that cannot be mistaken for a normal item to be processed. None will suffice for that purpose: import os, random, sys, time, string import multiprocessing as mp letters = string.ascii_uppercase align_len = 1300 SENTINEL = None # no more records sentinel def return_string(queue): n_strings = [1,2,3,4] alignments = [] # generating 1 to 4 sequences randomly, each sequence of length 1300 # the original code might even produce more than 4, but 1 to 4 is an average case # instead of the random string there will be some complicated function called # in the original code for i in range(random.choice(n_strings)): alignment = "" for i in range(align_len): alignment += random.choice(letters) alignments.append(alignment) for a in alignments: queue.put(a) # show this process is through writing records: queue.put(SENTINEL) def run_string_gen(cores): processes = [] queue = mp.Queue() # running the target function 1000 time for i in range(1000): # print(i) process = mp.Process(target=return_string, args = (queue,)) processes.append(process) if len(processes) == cores: counter = len(processes) for p in processes: p.start() seen_sentinel_count = 0 while seen_sentinel_count < len(processes): a = queue.get() if a is SENTINEL: seen_sentinel_count += 1 # the original idea is that instead of print # I will be writing to a file that is already open else: print(a) for p in processes: p.join() processes = [] # The same queue can be reused: #queue = mp.Queue() # any leftovers processes if processes: for p in processes: p.start() seen_sentinel_count = 0 while seen_sentinel_count < len(processes): a = queue.get() if a is SENTINEL: seen_sentinel_count += 1 else: print(a) for p in processes: p.join() if __name__ == "__main__": cores = int(sys.argv[1]) if cores > os.cpu_count(): cores = os.cpu_count() start = time.perf_counter() run_string_gen(cores) print(f"it took {time.perf_counter() - start}") Prints: ... 
NEUNBZVXNHCHVIGNDCEUXJSINEJQNCOWBMUJRTIASUEJHDJUWZIYHHZTJJSJXALZHOEVGMHSVVMMIFZGLGLJDECEWSVZCDRHZWVOMHCDLJVQLQIQCVKBEVOVDWTMFPWIWIQFOGWAOPTJUWKAFBXPWYDIENZTTJNFAEXDVZHXHJPNFDKACCTRTOKMVDGBQYJQMPSQZKDNDYFVBCFMWCSCHTVKURPJDBMRWFQAYIIALHDJTTMSIAJAPLHUAJNMHOKLZNUTRWWYURBTVQHWECAFHQPOZZLVOQJWVLFXUEQYKWEFXQPHKRRHBBCSYZOHUDIFOMBSRNDJNBHDUYMXSMKUOJZUAPPLOFAESZXIETOARQMBRYWNWTSXKBBKWYYKDNLZOCPHDVNLONEGMALL it took 32.7125509 Update The same code done using a multiprocessing pool, which obviates having to re-create processes: import os, random, sys, time, string import multiprocessing as mp letters = string.ascii_uppercase align_len = 1300 SENTINEL = None # no more records sentinel def return_string(): n_strings = [1,2,3,4] alignments = [] # generating 1 to 4 sequences randomly, each sequence of length 1300 # the original code might even produce more than 4, but 1 to 4 is an average case # instead of the random string there will be some complicated function called # in the original code for i in range(random.choice(n_strings)): alignment = "" for i in range(align_len): alignment += random.choice(letters) alignments.append(alignment) return alignments def run_string_gen(cores): def my_callback(result): alignments = result for alignment in alignments: print(alignment) pool = mp.Pool(cores) for i in range(1000): pool.apply_async(return_string, callback=my_callback) # wait for completion of all tasks: pool.close() pool.join() if __name__ == "__main__": cores = int(sys.argv[1]) if cores > os.cpu_count(): cores = os.cpu_count() start = time.perf_counter() run_string_gen(cores) print(f"it took {time.perf_counter() - start}") Prints: ... OMCRIHWCNDKYBZBTXUUYAGCMRBMOVTDOCDYFGRODBWLIFZZBDGEDVAJAJFXWJRFGQXTSCCJLDFKMOENGAGXAKKFSYXEQOICKWFPSKOHIMCRATLVLVLMGFAWBDIJMZMVMHCXMTVJBSWXTLDHEWYHUMSQZGGFWRMOHKKKGMTFEOTTJDOQMOWWLKTOWHKCIUNINHTGUZHTBGHROPVKQBNEHQWIDCZUOJGHUXLLDGHCNWIGFUCAQAZULAEZPIP it took 2.1607988999999996
4
7
70,473,310
2021-12-24
https://stackoverflow.com/questions/70473310/why-does-python-tell-me-to-sort-before-taking-a-random-sample
Python just gave me weird advice: >>> import random >>> random.sample({1: 2, 3: 4, 5: 6}, 2) Traceback (most recent call last): File "<pyshell#11>", line 1, in <module> random.sample({1: 2, 3: 4, 5: 6}, 2) File "C:\Users\*****\AppData\Local\Programs\Python\Python310\lib\random.py", line 466, in sample raise TypeError("Population must be a sequence. For dicts or sets, use sorted(d).") TypeError: Population must be a sequence. For dicts or sets, use sorted(d). Note the second part of the last error line. Why should I sort if I'm going to randomize a moment later anyway? Seems like wasting O(n log n) time.
From the commit history of CPython (emphasis mine): In the future, the population must be a sequence. Instances of :class:set are no longer supported. The set must first be converted to a :class:list or :class:tuple, preferably in a deterministic order so that the sample is reproducible. If you don't care about reproducibility, sorting is not necessary.
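In other words, a plain list conversion is all the error message really requires when reproducibility of the sample doesn't matter:
import random

d = {1: 2, 3: 4, 5: 6}
print(random.sample(list(d), 2))  # samples 2 keys, e.g. [5, 1]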
4
5
70,470,136
2021-12-24
https://stackoverflow.com/questions/70470136/how-to-generate-a-environments-yml-file-of-a-python-virtual-environment
I want to generate a environments.yml file of an existing Python environment. I tried the following command: python env export --from-history -f environment.yml This throws the following error: can't open file 'env': [Errno 2] No such file or directory Note: This is not a conda environment.
Since this is a plain virtual environment rather than conda, use pip: pip freeze > requirements.txt saves the venv's installed packages, and pip install -r requirements.txt (run inside a fresh venv) recreates it.
5
1
70,467,517
2021-12-23
https://stackoverflow.com/questions/70467517/how-can-i-know-what-python-versions-can-run-my-code
I've read in few places that generally, Python doesn't provide backward compatibility, which means that any newer version of Python may break code that worked fine for earlier versions. If so, what is my way as a developer to know what versions of Python can execute my code successfully? Is there any set of rules/guarantees regarding this? Or should I just tell my users: Just run this with Python 3.8 (for example) - no more no less...?
99% of the time, if it works on Python 3.x, it'll work on 3.y where y >= x. Enabling warnings when running your code on the older version should pop DeprecationWarnings when you use a feature that's deprecated (and therefore likely to change/be removed in later Python versions). Aside from that, you can read the What's New docs for each version between the known good version and the later versions, in particular the Deprecated and Removed sections of each. Beyond that, the only solution is good unit and component tests (you are using those, right? 😉) that you rerun on newer releases to verify stuff still works & behavior doesn't change.
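To make those deprecations hard to miss, a small sketch that turns them into errors while the test suite runs on the oldest version you intend to support (the command-line equivalent is python -W error::DeprecationWarning):
import warnings

# fail loudly on any DeprecationWarning raised while the tests run
warnings.simplefilter("error", DeprecationWarning)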
4
8
70,461,249
2021-12-23
https://stackoverflow.com/questions/70461249/how-to-flatten-a-multi-level-columns-in-pandas
Please see the data in excel above. When use df.columns this is the printout: Index(['Country of Citizenship', '2015', 'Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5', 'Unnamed: 6', 'Unnamed: 7', 'Unnamed: 8', 'Unnamed: 9', ... 'Unnamed: 108', 'Unnamed: 109', 'Unnamed: 110', 'Unnamed: 111', 'Unnamed: 112', 'Unnamed: 113', 'Unnamed: 114', 'Unnamed: 115', 'Unnamed: 116', 'Unnamed: 117'], dtype='object', length=118) I want to turn this multi-level column into a single level column with column name changed as 2015-Jan. Thanks a lot for help
You can read an Excel file into a pandas DataFrame with a multi-level header, as in the following example: import pandas as pd df = pd.read_excel('your_file.xlsx', header=[0,1,2], index_col=[0]) If you want to know how to navigate and use multi-indexes, I recommend the pandas user guide on MultiIndex / advanced indexing; there are also a lot of Stack Overflow questions about MultiIndex. It's hard to show more without example data, but to flatten the MultiIndex you can do this: df.columns = df.columns.to_flat_index()
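As a hedged follow-up: to_flat_index() returns tuples, so joining the levels produces single strings such as "2015-Jan" (the exact pieces depend on how the Excel header rows are laid out):
# join the header levels with "-" and skip empty pieces
df.columns = ['-'.join(str(level) for level in col if str(level) != '')
              for col in df.columns.to_flat_index()]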
4
4
70,455,957
2021-12-22
https://stackoverflow.com/questions/70455957/quart-framework-warningasyncioexecuting
We are using Quart (Flask+asyncio) Python web framework. Every time the request is processed and the response is sent to a client, this (or similar) message is logged: WARNING:asyncio:Executing <Task pending name='Task-11' coro=<ASGIHTTPConnection.handle_request() running at /usr/local/lib/python3.8/site-packages/quart/asgi.py:102> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f742ba41ee0>()] created at /usr/local/lib/python3.8/asyncio/base_events.py:422> cb=[_wait.._on_completion() at /usr/local/lib/python3.8/asyncio/tasks.py:518] created at /usr/local/lib/python3.8/site-packages/quart/asgi.py:46> took 2.700 seconds Since it is WARNING, we are kind of worried about what this could be. Does anyone have any idea why a log like this appears? Also, I have seen more logs starting <Task pending name... before. Does anyone know what these logs are? To replicate a similar log message, it is enough just to do this: import time from quart import Quart app = Quart(__name__) @app.route('/', methods=['POST']) async def endpoint(): time.sleep(0.5) return '', 200 If I set sleep() to a lower value (e.g. 0.05), the log message is not printed out.
asyncio and other event loops require tasks to yield control back to the event loop periodically so that it can switch between tasks and execute them concurrently. This warning indicates that a task is taking a long time between yields, thereby 'blocking' the event loop. This usually happens because the code is either doing something CPU-intensive or, more likely, using non-asyncio (blocking) IO, e.g. the requests library or time.sleep. You should investigate this, as it will degrade your server's ability to serve multiple requests concurrently.
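For the minimal example in the question, the blocking time.sleep is exactly that kind of call; a sketch of the non-blocking equivalent (for real blocking work, an executor or Quart's run_sync helper is the usual escape hatch):
import asyncio
from quart import Quart

app = Quart(__name__)

@app.route('/', methods=['POST'])
async def endpoint():
    await asyncio.sleep(0.5)  # yields to the event loop, unlike time.sleep
    return '', 200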
4
5
70,463,736
2021-12-23
https://stackoverflow.com/questions/70463736/templatedoesnotexist-graphene-graphiql-html
I'm trying to set up Graphene, but the following exception is raised when I open http://localhost:8000/graphql/ in the browser: TemplateDoesNotExist at /graphql/ graphene/graphiql.html Request Method: GET Request URL: http://localhost:8000/graphql/ Django Version: 3.2.10 I did the whole setup: added it to urls, configured the schema, queries and mutations. But it still doesn't work, and I don't remember ever needing to configure templates for Graphene.
Looks like I forgot to add the following to settings.py, so it wasn't fully configured, at least for DEBUG mode: INSTALLED_APPS = [ # ... "graphene_django", # ... ]
4
9
70,462,865
2021-12-23
https://stackoverflow.com/questions/70462865/how-to-use-a-column-value-as-key-to-a-dictionary-in-pyspark
I have a small PySpark DataFrame df: index col1 0 1 1 3 2 4 And a dictionary: LOOKUP = {0: 2, 1: 5, 2: 5, 3: 4, 4: 6} I now want to add an extra column col2 to df, equal to the LOOKUP values of col1. My output should look like this: index col1 col2 0 1 5 1 3 4 2 4 6 I tried using: df = df.withColumn(col("col2"), LOOKUP[col("col1")]) But this gave me errors, as well as using expr. How to achieve this in PySpark?
You can use a map column that you create from the lookup dictionary: from itertools import chain from pyspark.sql import functions as F lookup = {0: 2, 1: 5, 2: 5, 3: 4, 4: 6} lookup_map = F.create_map(*[F.lit(x) for x in chain(*lookup.items())]) df1 = df.withColumn("col2", lookup_map[F.col("col1")]) df1.show() #+-----+----+----+ #|index|col1|col2| #+-----+----+----+ #| 0| 1| 5| #| 1| 3| 4| #| 2| 4| 6| #+-----+----+----+ Another way would be to create a lookup_df from the dict then join with your dataframe
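A sketch of the join alternative mentioned above, assuming an active SparkSession named spark:
# build a two-column lookup DataFrame from the dict, then left-join it on col1
lookup_df = spark.createDataFrame(list(lookup.items()), ["col1", "col2"])
df1 = df.join(lookup_df, on="col1", how="left")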
5
6
70,450,880
2021-12-22
https://stackoverflow.com/questions/70450880/pandas-groupby-with-multiple-conditions
I'm trying to create a summary of call logs. There are 4 cases There is only one call log record for a phone and it has outcome, we choose its values for duration, status and outcome_record Multiple call logs of same phone has outcome, we choose the summary, duration and outcome_record of call log with max duration There is only one call log record for a phone and it doesn't have outcome, we choose its values for duration and status. outcome_record will be None Multiple call logs of same phone doesn't have outcome, we choose the summary and duration of call log with max duration. outcome_record will be None What I tried is looping on the groups. But it is terribly slow when dealing with huge amount of data. I think I need to use pandas methods instead of looping. How to use pandas methods to achieve the same, with multiple conditions. Thanks. import pandas as pd def get_summarized_call_logs_df(df): data_list = [] phone_groups = df.groupby('phone') unique_phones = df.phone.unique() for ph in unique_phones: row_data = {"phone": ph} group = phone_groups.get_group(ph) group_len = len(group) if True in group['outcome'].to_list(): outcome = group.loc[group['outcome'] == True] row_data.update({"has_outcome": True}) if outcome.phone.count() == 1: # Cases where there is outcome for single calls row_data.update({"status": outcome.status.iloc[0], "duration": outcome.duration.iloc[0], "outcome_record": outcome.id.iloc[0]}) else: # Cases where there is outcome for multiple calls # We choose the status and duration of outcome record with maximum duration out_rec = outcome.loc[outcome['duration'] == outcome['duration'].max()] row_data.update({"status": out_rec.status.iloc[0], "duration": out_rec.duration.iloc[0], "outcome_record": out_rec.id.iloc[0]}) else: row_data.update({"has_outcome": False, "outcome_record": None}) if group_len == 1: # Cases where there is no outcome for single calls row_data.update({"status": group.status.iloc[0], "duration": group.duration.iloc[0]}) else: # Cases where there is no outcome for multiple calls # We choose the status and duration of the record with maximum duration row_data.update({"status": group.loc[group['duration'] == group['duration'].max()].status.iloc[0], "duration": group.loc[group['duration'] == group['duration'].max()].duration.iloc[0]}) data_list.append(row_data) new_df = pd.DataFrame(data_list) return new_df if __name__ == "__main__": data = [ {"id": 1, "phone": "123", "outcome": True, "status": "sale", "duration": 1550}, {"id": 2, "phone": "123", "outcome": False, "status": "failed", "duration": 3}, {"id": 3, "phone": "123", "outcome": False, "status": "no_ring", "duration": 5}, {"id": 4, "phone": "456", "outcome": True, "status": "call_back", "duration": 550}, {"id": 5, "phone": "456", "outcome": True, "status": "sale", "duration": 2500}, {"id": 6, "phone": "456", "outcome": False, "status": "no_ring", "duration": 5}, {"id": 7, "phone": "789", "outcome": False, "status": "no_pick", "duration": 4}, {"id": 8, "phone": "741", "outcome": False, "status": "try_again", "duration": 25}, {"id": 9, "phone": "741", "outcome": False, "status": "try_again", "duration": 10}, {"id": 10, "phone": "741", "outcome": False, "status": "no_ring", "duration": 5}, ] df = pd.DataFrame(data) new_df = get_summarized_call_logs_df(df) print(new_df) It should produce an output phone has_outcome status duration outcome_record 0 123 True sale 1550 1.0 1 456 True sale 2500 5.0 2 789 False no_pick 4 NaN 3 741 False try_again 25 NaN
I think you can simplify the logic. If you sort your values mainly by 'outcome' and 'duration', you just have to drop duplicates and keep the last row of each sorted groups like this: cols = ['phone', 'outcome', 'duration'] new_df = df.sort_values(cols).drop_duplicates('phone', keep='last') print(new_df) # Output: id phone outcome status duration 0 1 123 True sale 1550 4 5 456 True sale 2500 7 8 741 False try_again 25 6 7 789 False no_pick 4 From @user10375196, to get the expected outcome: new_df = new_df.rename(columns={'id': 'outcome_record', 'outcome': 'has_outcome'}) new_df.loc[new_df.has_outcome == False, "outcome_record"] = None new_df.reset_index(drop=True, inplace=True) print(new_df) # Output: outcome_record phone has_outcome status duration 0 1.0 123 True sale 1550 1 5.0 456 True sale 2500 2 NaN 741 False try_again 25 3 NaN 789 False no_pick 4
5
1
70,458,086
2021-12-23
https://stackoverflow.com/questions/70458086/how-to-import-pyspark-sql-functions-all-at-once
from pyspark.sql.functions import isnan, when, count, sum , etc... It is very tiresome adding all of it. Is there a way to import all of it at once?
You can use from pyspark.sql.functions import *. Be aware that this pollutes the namespace: for example, the PySpark sum function shadows Python's built-in sum. A safer option is to import the module under an alias, import pyspark.sql.functions as F, and then call F.sum, F.when, F.count, and so on.
5
20
70,452,146
2021-12-22
https://stackoverflow.com/questions/70452146/how-to-speed-up-the-agg-of-pandas-groupby-bins
I have created different bins for each column and grouped the DataFrame based on these. import pandas as pd import numpy as np np.random.seed(100) df = pd.DataFrame(np.random.randn(100, 4), columns=['a', 'b', 'c', 'value']) # for simplicity, I use the same bin here bins = np.arange(-3, 4, 0.05) df['a_bins'] = pd.cut(df['a'], bins=bins) df['b_bins'] = pd.cut(df['b'], bins=bins) df['c_bins'] = pd.cut(df['c'], bins=bins) The output of df.groupby(['a_bins','b_bins','c_bins']).size() indicates the group length is 2685619. Calculate statistics of each group Then, the statistics of each group are calculated like this: %%timeit df.groupby(['a_bins','b_bins','c_bins']).agg({'value':['mean']}) >>> 16.9 s ± 637 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Expected output Is it possible to speed this up? The quicker method should also support finding the value by inputs of a, b, and c values, like this: df.groupby(['a_bins','b_bins','c_bins']).agg({'value':['mean']}).loc[(-1.72, 0.32, 1.18)] >>> -0.252436
For this data, I'd suggest you pivot the data, and pass the mean. Usually, this is faster since you are hitting the entire dataframe, instead of going through each group: (df .pivot(None, ['a_bins', 'b_bins', 'c_bins'], 'value') .mean() .sort_index() # ignore this if you are not fuzzy on order ) a_bins b_bins c_bins (-2.15, -2.1] (0.25, 0.3] (-1.3, -1.25] 0.929100 (0.75, 0.8] (-0.3, -0.25] 0.480411 (-2.05, -2.0] (-0.1, -0.05] (0.3, 0.35] -1.684900 (0.75, 0.8] (-0.25, -0.2] -1.184411 (-2.0, -1.95] (-0.6, -0.55] (-1.2, -1.15] -0.021176 ... (1.7, 1.75] (-0.75, -0.7] (1.05, 1.1] -0.229518 (1.85, 1.9] (-0.4, -0.35] (1.8, 1.85] 0.003017 (1.9, 1.95] (-1.45, -1.4] (0.1, 0.15] 0.949361 (2.05, 2.1] (-0.35, -0.3] (-0.65, -0.6] 0.763184 (2.25, 2.3] (-0.95, -0.9] (0.1, 0.15] 2.539432 This matches the output from the groupby: (df .groupby(['a_bins','b_bins','c_bins']) .agg({'value':['mean']}) .dropna() .squeeze() ) a_bins b_bins c_bins (-2.15, -2.1] (0.25, 0.3] (-1.3, -1.25] 0.929100 (0.75, 0.8] (-0.3, -0.25] 0.480411 (-2.05, -2.0] (-0.1, -0.05] (0.3, 0.35] -1.684900 (0.75, 0.8] (-0.25, -0.2] -1.184411 (-2.0, -1.95] (-0.6, -0.55] (-1.2, -1.15] -0.021176 ... (1.7, 1.75] (-0.75, -0.7] (1.05, 1.1] -0.229518 (1.85, 1.9] (-0.4, -0.35] (1.8, 1.85] 0.003017 (1.9, 1.95] (-1.45, -1.4] (0.1, 0.15] 0.949361 (2.05, 2.1] (-0.35, -0.3] (-0.65, -0.6] 0.763184 (2.25, 2.3] (-0.95, -0.9] (0.1, 0.15] 2.539432 Name: (value, mean), Length: 100, dtype: float64 The pivot option gives a speed of 3.72ms on my PC, while I had to terminate the groupby option, as it was taking too long (my PC is quite old :)) Again, the reason why this works/is faster is because the mean is hitting the entire dataframe, and not going through groups in the groupby. As to your other question, you can index it easily: bin_mean = (df .pivot(None, ['a_bins', 'b_bins', 'c_bins'], 'value') .mean() .sort_index() # ignore this if you are not fuzzy on order ) bin_mean.loc[(-1.72, 0.32, 1.18)] -0.25243603652138985 The main problem though is Pandas for categoricals will return for all rows( which is wasteful, and not efficient); pass observed = True and you should notice a dramatic improvement: (df.groupby(['a_bins','b_bins','c_bins'], observed=True) .agg({'value':['mean']}) ) value mean a_bins b_bins c_bins (-2.15, -2.1] (0.25, 0.3] (-1.3, -1.25] 0.929100 (0.75, 0.8] (-0.3, -0.25] 0.480411 (-2.05, -2.0] (-0.1, -0.05] (0.3, 0.35] -1.684900 (0.75, 0.8] (-0.25, -0.2] -1.184411 (-2.0, -1.95] (-0.6, -0.55] (-1.2, -1.15] -0.021176 ... ... (1.7, 1.75] (-0.75, -0.7] (1.05, 1.1] -0.229518 (1.85, 1.9] (-0.4, -0.35] (1.8, 1.85] 0.003017 (1.9, 1.95] (-1.45, -1.4] (0.1, 0.15] 0.949361 (2.05, 2.1] (-0.35, -0.3] (-0.65, -0.6] 0.763184 (2.25, 2.3] (-0.95, -0.9] (0.1, 0.15] 2.539432 Speed is about 7.39ms on my PC, about 2 times less than the pivot option, but way faster now, and that's because only categoricals that exist in the dataframe are used/returned.
8
6
70,453,104
2021-12-22
https://stackoverflow.com/questions/70453104/brownie-not-working-cython-undefined-symbol-pygen-send
I set up my development environment on Fedora 35 and when I run any brownie command such as $ brownie console or even brownie --version I get the following error: Traceback (most recent call last): File "/home/philippbunke/.local/bin/brownie", line 5, in <module> from brownie._cli.__main__ import main File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/brownie/__init__.py", line 6, in <module> from brownie.project import compile_source, run File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/brownie/project/__init__.py", line 3, in <module> from .main import ( # NOQA 401 File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/brownie/project/main.py", line 44, in <module> from brownie.network import web3 File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/brownie/network/__init__.py", line 4, in <module> from .account import Accounts File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/brownie/network/account.py", line 12, in <module> import eth_account File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/eth_account/__init__.py", line 1, in <module> from eth_account.account import ( # noqa: F401 File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/eth_account/account.py", line 8, in <module> from cytoolz import ( File "/home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/cytoolz/__init__.py", line 3, in <module> from .functoolz import * ImportError: /home/philippbunke/.local/pipx/venvs/eth-brownie/lib64/python3.10/site-packages/cytoolz/functoolz.cpython-310-x86_64-linux-gnu.so: undefined symbol: _PyGen_Send Setup: Python=3.10.1 Cython=0.29.26 gcc/gcc-c=11.2.1-7.fc35.x86_64 Ganache CLI=v6.12.2 $ pipx list venvs are in /home/philippbunke/.local/pipx/venvs apps are exposed on your $PATH at /home/philippbunke/.local/bin package eth-brownie 1.16.4, Python 3.10.1 - brownie $ $PATH /home/philippbunke/.local/bin:/home/philippbunke/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/var/lib/snapd/snap/bin I have spent a whole day trying to fix this error, please help me out!
The problem here seems to be Python 3.10.1! I used Anaconda to create a new virtual environment with Python 3.8.12, installed brownie using pipx install --python python3.8 eth-brownie, and it worked! The trick was to also tell pipx to use the other Python version; otherwise it would create a dependency on the global Python version, which is Python 3.10 in my case.
9
7
70,452,465
2021-12-22
https://stackoverflow.com/questions/70452465/how-to-load-in-graph-from-networkx-into-pytorch-geometric-and-set-node-features
Goal: I am trying to import a graph FROM networkx into PyTorch geometric and set labels and node features. (This is in Python) Question(s): How do I do this [the conversion from networkx to PyTorch geometric]? (presumably by using the from_networkx function) How do I transfer over node features and labels? (more important question) I have seen some other/previous posts with this question but they weren't answered (correct me if I am wrong). Attempt: (I have just used an unrealistic example below, as I cannot post anything real on here) Let us imagine we are trying to do a graph learning task (e.g. node classification) on a group of cars (not very realistic as I said). That is, we have a group of cars, an adjacency matrix, and some features (e.g. price at the end of the year). We want to predict the node label (i.e. brand of the car). I will be using the following adjacency matrix: (apologies, cannot use latex to format this) A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] Here is the code (for Google Colab environment): import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx from torch_geometric.utils.convert import to_networkx, from_networkx import torch !pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html # Make the networkx graph G = nx.Graph() # Add some cars (just do 4 for now) G.add_nodes_from([ (1, {'Brand': 'Ford'}), (2, {'Brand': 'Audi'}), (3, {'Brand': 'BMW'}), (4, {'Brand': 'Peugot'}), (5, {'Brand': 'Lexus'}), ]) # Add some edges G.add_edges_from([ (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 2), (3, 5), (4, 1), (4, 2), (5, 1), (5, 3) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) So this correctly converts the networkx graph to PyTorch Geometric. However, I still don't know how to properly set the labels. The brand values for each node have been converted and are stored within: pyg_graph.Brand Below, I have just made some random numpy arrays of length 5 for each node (just pretend that these are realistic). ford_prices = np.random.randint(100, size = 5) lexus_prices = np.random.randint(100, size = 5) audi_prices = np.random.randint(100, size = 5) bmw_prices = np.random.randint(100, size = 5) peugot_prices = np.random.randint(100, size = 5) This brings me to the main question: How do I set the prices to be the node features of this graph? How do I set the labels of the nodes? (and will I need to remove the labels from pyg_graph.Brand when training the network?) Thanks in advance and happy holidays.
The easiest way is to add all information to the networkx graph and directly create it in the way you need it. I guess you want to use some Graph Neural Networks. Then you want to have something like below. Instead of text as labels, you probably want to have a categorial representation, e.g. 1 stands for Ford. If you want to match the "usual convention". Then you name your input features x and your labels/ground truth y. The splitting of the data into train and test is done via mask. So the graph still contains all information, but only part of it is used for training. Check the PyTorch Geometric introduction for an example, which uses the Cora dataset. import networkx as nx import numpy as np import torch from torch_geometric.utils.convert import from_networkx # Make the networkx graph G = nx.Graph() # Add some cars (just do 4 for now) G.add_nodes_from([ (1, {'y': 1, 'x': 0.5}), (2, {'y': 2, 'x': 0.2}), (3, {'y': 3, 'x': 0.3}), (4, {'y': 4, 'x': 0.1}), (5, {'y': 5, 'x': 0.2}), ]) # Add some edges G.add_edges_from([ (1, 2), (1, 4), (1, 5), (2, 3), (2, 4), (3, 2), (3, 5), (4, 1), (4, 2), (5, 1), (5, 3) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) print(pyg_graph) # Data(edge_index=[2, 12], x=[5], y=[5]) print(pyg_graph.x) # tensor([0.5000, 0.2000, 0.3000, 0.1000, 0.2000]) print(pyg_graph.y) # tensor([1, 2, 3, 4, 5]) print(pyg_graph.edge_index) # tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4], # [1, 3, 4, 0, 2, 3, 1, 4, 0, 1, 0, 2]]) # Split the data train_ratio = 0.2 num_nodes = pyg_graph.x.shape[0] num_train = int(num_nodes * train_ratio) idx = [i for i in range(num_nodes)] np.random.shuffle(idx) train_mask = torch.full_like(pyg_graph.y, False, dtype=bool) train_mask[idx[:num_train]] = True test_mask = torch.full_like(pyg_graph.y, False, dtype=bool) test_mask[idx[num_train:]] = True print(train_mask) # tensor([ True, False, False, False, False]) print(test_mask) # tensor([False, True, True, True, True])
12
14
70,447,276
2021-12-22
https://stackoverflow.com/questions/70447276/i-have-a-dataset-in-which-i-have-two-columns-with-time-in-it-but-the-dat
Unnamed: 0 Created Date Closed Date Agency Agency Name Complaint Type Descriptor Location Type Incident Zip Address Type City Landmark Status Borough 2869 2869 10/30/2013 09:14:47 AM 10/30/2013 10:48:51 AM NYPD New York City Police Department Illegal Parking Double Parked Blocking Traffic Street/Sidewalk 11217.0 PLACENAME BROOKLYN BARCLAYS CENTER Closed BROOKLYN 23571 23571 10/25/2013 02:33:54 PM 10/25/2013 03:36:36 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10000 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 41625 41625 10/22/2013 09:33:56 PM 10/24/2013 05:37:24 PM TLC Taxi and Limousine Commission For Hire Vehicle Complaint Car Service Company Complaint Street 11430 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 44331 44331 10/22/2013 07:25:35 AM 10/25/2013 10:40:35 AM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11430 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 46913 46913 10/21/2013 05:03:26 PM 10/23/2013 09:59:23 AM DPR Department of Parks and Recreation Dead Tree Dead/Dying Tree Street 11215 PLACENAME BROOKLYN BARTEL PRITCHARD SQUARE Closed BROOKLYN 47459 47459 10/21/2013 02:56:08 PM 10/29/2013 06:17:10 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 10031 PLACENAME NEW YORK CITY COLLEGE Closed MANHATTAN 48465 48465 10/21/2013 10:44:10 AM 10/21/2013 11:17:47 AM NYPD New York City Police Department Illegal Parking Posted Parking Sign Violation Street/Sidewalk 11434 PLACENAME JAMAICA PS 37 Closed QUEENS 51837 51837 10/20/2013 04:36:12 PM 10/20/2013 06:35:49 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10031.0 PLACENAME NEW YORK JACKIE ROBINSON PARK Closed MANHATTAN 51848 51848 10/20/2013 04:26:03 PM 10/20/2013 06:34:47 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10031.0 PLACENAME NEW YORK JACKIE ROBINSON PARK Closed MANHATTAN 54089 54089 10/19/2013 03:45:47 PM 10/19/2013 04:10:11 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10000.0 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 54343 54343 10/19/2013 01:27:43 PM 10/28/2013 08:42:12 AM DOT Department of Transportation Street Condition Rough, Pitted or Cracked Roads Street 10003.0 PLACENAME NEW YORK UNION SQUARE PARK Closed MANHATTAN 55140 55140 10/19/2013 02:02:28 AM 10/19/2013 02:19:55 AM NYPD New York City Police Department Noise - Vehicle Car/Truck Music Street/Sidewalk 11368.0 PLACENAME CORONA WORLDS FAIR MARINA Closed QUEENS 57789 57789 10/18/2013 11:55:44 AM 10/23/2013 02:42:14 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11369.0 PLACENAME EAST ELMHURST LA GUARDIA AIRPORT Closed QUEENS 63119 63119 10/17/2013 06:52:37 AM 10/25/2013 06:49:59 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11430.0 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 66242 66242 10/16/2013 01:56:24 PM 10/22/2013 03:09:11 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11369 PLACENAME EAST ELMHURST LA GUARDIA AIRPORT Closed QUEENS 66758 66758 10/16/2013 11:52:43 AM 10/16/2013 04:35:34 PM NYPD New York City Police Department Vending Unlicensed Park/Playground 10036 PLACENAME NEW YORK BRYANT PARK Closed MANHATTAN 66786 66786 10/16/2013 11:42:23 AM 10/18/2013 04:57:04 PM TLC Taxi and Limousine Commission Taxi Complaint Insurance Information Requested Street 10003 PLACENAME NEW YORK BETH ISRAEL MED CENTER Closed MANHATTAN 66809 66809 10/16/2013 11:36:54 
AM 10/16/2013 12:34:23 PM NYPD New York City Police Department Traffic Congestion/Gridlock Street/Sidewalk 11430 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 67465 67465 10/16/2013 09:14:35 AM 10/16/2013 12:43:06 PM NYPD New York City Police Department Traffic Drag Racing Street/Sidewalk 11367 PLACENAME FLUSHING QUEENS COLLEGE Closed QUEENS 72424 72424 10/15/2013 12:22:00 AM 10/21/2013 12:16:15 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11217 PLACENAME BROOKLYN BARCLAYS CENTER Closed BROOKLYN 75531 75531 10/14/2013 10:59:20 AM 10/14/2013 03:09:51 PM NYPD New York City Police Department Vending In Prohibited Area Park/Playground 10000 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 77918 77918 10/13/2013 03:16:03 PM 10/13/2013 03:25:45 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10000 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 78048 78048 10/13/2013 01:06:02 PM 10/21/2013 10:20:21 AM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11369 PLACENAME EAST ELMHURST LA GUARDIA AIRPORT Closed QUEENS 78352 78352 10/13/2013 05:14:33 AM 10/16/2013 01:42:42 PM TLC Taxi and Limousine Commission For Hire Vehicle Complaint Car Service Company Complaint Street 11217 PLACENAME BROOKLYN BARCLAYS CENTER Closed BROOKLYN 78383 78383 10/13/2013 03:50:02 AM 10/13/2013 05:03:13 AM NYPD New York City Police Department Noise - Vehicle Car/Truck Music Street/Sidewalk 11368 PLACENAME CORONA WORLDS FAIR MARINA Closed QUEENS 79078 79078 10/12/2013 09:53:17 PM 10/13/2013 02:52:07 AM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10011 PLACENAME NEW YORK WASHINGTON SQUARE PARK Closed MANHATTAN 84489 84489 10/10/2013 07:16:16 PM 10/10/2013 10:29:16 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 11215 PLACENAME BROOKLYN PROSPECT PARK Closed BROOKLYN 84518 84518 10/10/2013 07:02:29 PM 10/10/2013 10:29:16 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 11215 PLACENAME BROOKLYN PROSPECT PARK Closed BROOKLYN 84688 84688 10/10/2013 05:39:19 PM 10/10/2013 10:29:17 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 11215 PLACENAME BROOKLYN PROSPECT PARK Closed BROOKLYN 84695 84695 10/10/2013 05:37:04 PM 10/10/2013 10:30:19 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 11215 PLACENAME BROOKLYN PROSPECT PARK Closed BROOKLYN 88812 88812 10/09/2013 09:17:15 PM 10/23/2013 02:15:21 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 11430 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 89205 89205 10/09/2013 06:01:48 PM 10/09/2013 09:04:26 PM NYPD New York City Police Department Vending Unlicensed Park/Playground 10000 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 89382 89382 10/09/2013 04:53:01 PM 10/18/2013 08:35:02 AM DOT Department of Transportation Public Toilet Damaged Door Sidewalk 11238 PLACENAME BROOKLYN GRAND ARMY PLAZA Closed BROOKLYN 89734 89734 10/09/2013 03:13:23 PM 10/09/2013 05:10:45 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10036 PLACENAME NEW YORK BRYANT PARK Closed MANHATTAN 93990 93990 10/08/2013 06:14:15 PM 10/09/2013 04:00:59 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 10003 PLACENAME NEW YORK BETH ISRAEL MED CENTER Closed MANHATTAN 99407 99407 10/07/2013 03:56:11 PM 10/08/2013 07:04:14 AM DPR Department of Parks and 
Recreation Overgrown Tree/Branches Traffic Sign or Signal Blocked Street 11430.0 PLACENAME JAMAICA J F K AIRPORT Closed QUEENS 99847 99847 10/07/2013 02:33:21 PM 10/09/2013 02:36:42 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 10036.0 PLACENAME NEW YORK PORT AUTH 42 STREET Closed MANHATTAN 100073 100073 10/07/2013 01:36:02 PM 10/09/2013 09:56:55 AM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 10024.0 PLACENAME NEW YORK MUSEUM NATURAL HIST Closed MANHATTAN 101013 101013 10/07/2013 10:05:18 AM 10/09/2013 03:36:23 PM TLC Taxi and Limousine Commission Taxi Complaint Driver Complaint Street 10017.0 PLACENAME NEW YORK GRAND CENTRAL TERM Closed MANHATTAN 104020 104020 10/06/2013 02:58:47 PM 10/07/2013 12:11:16 PM TLC Taxi and Limousine Commission For Hire Vehicle Complaint Car Service Company Complaint Street 11430.0 PLACENAME JAMAICA JFK Closed QUEENS 106118 106118 10/05/2013 03:24:47 PM 10/05/2013 04:20:34 PM NYPD New York City Police Department Noise - Park Loud Music/Party Park/Playground 10000.0 PLACENAME NEW YORK CENTRAL PARK Closed MANHATTAN 106499 106499 10/05/2013 11:52:13 AM 10/07/2013 08:00:28 AM DOT Department of Transportation Public Toilet Dirty/Graffiti Sidewalk 11369.0 PLACENAME EAST ELMHURST LA GUARDIA AIRPORT Closed QUEENS this is how my data looks like. What I want to do here is to extract date from created date and closed date I have tried .extract method but I am new to this so, it didn't work. I want to calculate housrs from created date and closed which i can do like this: pd.Timedelta(task3['Closed Date'] - task3['Created Date']).seconds / 60.0 In the end I want the output Find average completion time in hours for top-10 most frequent complaints. Also calculate how many data points you have for each complaint types. Do this analysis only for closed complaints. The sample output I want is as follows: mean count complaint Type closing_time_hours closing_time_hours Blocked Driveway 3.00 4581 DOF Literature Request 30.16 5481 General Construction 66.38 798 Heating 54.88 6704 Illegal Parking 3.08 3336 Nonconst 65 100 Paint-Plaster 49 3281 Plumbing 65 666 Strret Condition 81 2610 Street Light Condition 90 4207 these mean and count values are randomly generated The sample output can be produced by groupby function once the hours are extracted from the data. 
dictionary format {'Unnamed: 0': {2869: 2869, 23571: 23571, 41625: 41625, 44331: 44331, 46913: 46913, 47459: 47459, 48465: 48465, 51837: 51837, 51848: 51848, 54089: 54089, 54343: 54343, 55140: 55140, 57789: 57789, 63119: 63119, 66242: 66242, 66758: 66758, 66786: 66786, 66809: 66809, 67465: 67465, 72424: 72424, 75531: 75531, 77918: 77918, 78048: 78048, 78352: 78352, 78383: 78383, 79078: 79078, 84489: 84489, 84518: 84518, 84688: 84688, 84695: 84695, 88812: 88812, 89205: 89205, 89382: 89382, 89734: 89734, 93990: 93990, 99407: 99407, 99847: 99847, 100073: 100073, 101013: 101013, 104020: 104020, 106118: 106118, 106499: 106499}, 'Created Date': {2869: '10/30/2013 09:14:47 AM', 23571: '10/25/2013 02:33:54 PM', 41625: '10/22/2013 09:33:56 PM', 44331: '10/22/2013 07:25:35 AM', 46913: '10/21/2013 05:03:26 PM', 47459: '10/21/2013 02:56:08 PM', 48465: '10/21/2013 10:44:10 AM', 51837: '10/20/2013 04:36:12 PM', 51848: '10/20/2013 04:26:03 PM', 54089: '10/19/2013 03:45:47 PM', 54343: '10/19/2013 01:27:43 PM', 55140: '10/19/2013 02:02:28 AM', 57789: '10/18/2013 11:55:44 AM', 63119: '10/17/2013 06:52:37 AM', 66242: '10/16/2013 01:56:24 PM', 66758: '10/16/2013 11:52:43 AM', 66786: '10/16/2013 11:42:23 AM', 66809: '10/16/2013 11:36:54 AM', 67465: '10/16/2013 09:14:35 AM', 72424: '10/15/2013 12:22:00 AM', 75531: '10/14/2013 10:59:20 AM', 77918: '10/13/2013 03:16:03 PM', 78048: '10/13/2013 01:06:02 PM', 78352: '10/13/2013 05:14:33 AM', 78383: '10/13/2013 03:50:02 AM', 79078: '10/12/2013 09:53:17 PM', 84489: '10/10/2013 07:16:16 PM', 84518: '10/10/2013 07:02:29 PM', 84688: '10/10/2013 05:39:19 PM', 84695: '10/10/2013 05:37:04 PM', 88812: '10/09/2013 09:17:15 PM', 89205: '10/09/2013 06:01:48 PM', 89382: '10/09/2013 04:53:01 PM', 89734: '10/09/2013 03:13:23 PM', 93990: '10/08/2013 06:14:15 PM', 99407: '10/07/2013 03:56:11 PM', 99847: '10/07/2013 02:33:21 PM', 100073: '10/07/2013 01:36:02 PM', 101013: '10/07/2013 10:05:18 AM', 104020: '10/06/2013 02:58:47 PM', 106118: '10/05/2013 03:24:47 PM', 106499: '10/05/2013 11:52:13 AM'}, 'Closed Date': {2869: '10/30/2013 10:48:51 AM', 23571: '10/25/2013 03:36:36 PM', 41625: '10/24/2013 05:37:24 PM', 44331: '10/25/2013 10:40:35 AM', 46913: '10/23/2013 09:59:23 AM', 47459: '10/29/2013 06:17:10 PM', 48465: '10/21/2013 11:17:47 AM', 51837: '10/20/2013 06:35:49 PM', 51848: '10/20/2013 06:34:47 PM', 54089: '10/19/2013 04:10:11 PM', 54343: '10/28/2013 08:42:12 AM', 55140: '10/19/2013 02:19:55 AM', 57789: '10/23/2013 02:42:14 PM', 63119: '10/25/2013 06:49:59 PM', 66242: '10/22/2013 03:09:11 PM', 66758: '10/16/2013 04:35:34 PM', 66786: '10/18/2013 04:57:04 PM', 66809: '10/16/2013 12:34:23 PM', 67465: '10/16/2013 12:43:06 PM', 72424: '10/21/2013 12:16:15 PM', 75531: '10/14/2013 03:09:51 PM', 77918: '10/13/2013 03:25:45 PM', 78048: '10/21/2013 10:20:21 AM', 78352: '10/16/2013 01:42:42 PM', 78383: '10/13/2013 05:03:13 AM', 79078: '10/13/2013 02:52:07 AM', 84489: '10/10/2013 10:29:16 PM', 84518: '10/10/2013 10:29:16 PM', 84688: '10/10/2013 10:29:17 PM', 84695: '10/10/2013 10:30:19 PM', 88812: '10/23/2013 02:15:21 PM', 89205: '10/09/2013 09:04:26 PM', 89382: '10/18/2013 08:35:02 AM', 89734: '10/09/2013 05:10:45 PM', 93990: '10/09/2013 04:00:59 PM', 99407: '10/08/2013 07:04:14 AM', 99847: '10/09/2013 02:36:42 PM', 100073: '10/09/2013 09:56:55 AM', 101013: '10/09/2013 03:36:23 PM', 104020: '10/07/2013 12:11:16 PM', 106118: '10/05/2013 04:20:34 PM', 106499: '10/07/2013 08:00:28 AM'}, 'Agency': {2869: 'NYPD', 23571: 'NYPD', 41625: 'TLC', 44331: 'TLC', 46913: 'DPR', 47459: 'TLC', 48465: 
'NYPD', 51837: 'NYPD', 51848: 'NYPD', 54089: 'NYPD', 54343: 'DOT', 55140: 'NYPD', 57789: 'TLC', 63119: 'TLC', 66242: 'TLC', 66758: 'NYPD', 66786: 'TLC', 66809: 'NYPD', 67465: 'NYPD', 72424: 'TLC', 75531: 'NYPD', 77918: 'NYPD', 78048: 'TLC', 78352: 'TLC', 78383: 'NYPD', 79078: 'NYPD', 84489: 'NYPD', 84518: 'NYPD', 84688: 'NYPD', 84695: 'NYPD', 88812: 'TLC', 89205: 'NYPD', 89382: 'DOT', 89734: 'NYPD', 93990: 'TLC', 99407: 'DPR', 99847: 'TLC', 100073: 'TLC', 101013: 'TLC', 104020: 'TLC', 106118: 'NYPD', 106499: 'DOT'}, 'Agency Name': {2869: 'New York City Police Department', 23571: 'New York City Police Department', 41625: 'Taxi and Limousine Commission', 44331: 'Taxi and Limousine Commission', 46913: 'Department of Parks and Recreation', 47459: 'Taxi and Limousine Commission', 48465: 'New York City Police Department', 51837: 'New York City Police Department', 51848: 'New York City Police Department', 54089: 'New York City Police Department', 54343: 'Department of Transportation', 55140: 'New York City Police Department', 57789: 'Taxi and Limousine Commission', 63119: 'Taxi and Limousine Commission', 66242: 'Taxi and Limousine Commission', 66758: 'New York City Police Department', 66786: 'Taxi and Limousine Commission', 66809: 'New York City Police Department', 67465: 'New York City Police Department', 72424: 'Taxi and Limousine Commission', 75531: 'New York City Police Department', 77918: 'New York City Police Department', 78048: 'Taxi and Limousine Commission', 78352: 'Taxi and Limousine Commission', 78383: 'New York City Police Department', 79078: 'New York City Police Department', 84489: 'New York City Police Department', 84518: 'New York City Police Department', 84688: 'New York City Police Department', 84695: 'New York City Police Department', 88812: 'Taxi and Limousine Commission', 89205: 'New York City Police Department', 89382: 'Department of Transportation', 89734: 'New York City Police Department', 93990: 'Taxi and Limousine Commission', 99407: 'Department of Parks and Recreation', 99847: 'Taxi and Limousine Commission', 100073: 'Taxi and Limousine Commission', 101013: 'Taxi and Limousine Commission', 104020: 'Taxi and Limousine Commission', 106118: 'New York City Police Department', 106499: 'Department of Transportation'}, 'Complaint Type': {2869: 'Illegal Parking', 23571: 'Noise - Park', 41625: 'For Hire Vehicle Complaint', 44331: 'Taxi Complaint', 46913: 'Dead Tree', 47459: 'Taxi Complaint', 48465: 'Illegal Parking', 51837: 'Noise - Park', 51848: 'Noise - Park', 54089: 'Noise - Park', 54343: 'Street Condition', 55140: 'Noise - Vehicle', 57789: 'Taxi Complaint', 63119: 'Taxi Complaint', 66242: 'Taxi Complaint', 66758: 'Vending', 66786: 'Taxi Complaint', 66809: 'Traffic', 67465: 'Traffic', 72424: 'Taxi Complaint', 75531: 'Vending', 77918: 'Noise - Park', 78048: 'Taxi Complaint', 78352: 'For Hire Vehicle Complaint', 78383: 'Noise - Vehicle', 79078: 'Noise - Park', 84489: 'Noise - Park', 84518: 'Noise - Park', 84688: 'Noise - Park', 84695: 'Noise - Park', 88812: 'Taxi Complaint', 89205: 'Vending', 89382: 'Public Toilet', 89734: 'Noise - Park', 93990: 'Taxi Complaint', 99407: 'Overgrown Tree/Branches', 99847: 'Taxi Complaint', 100073: 'Taxi Complaint', 101013: 'Taxi Complaint', 104020: 'For Hire Vehicle Complaint', 106118: 'Noise - Park', 106499: 'Public Toilet'}, 'Descriptor': {2869: 'Double Parked Blocking Traffic', 23571: 'Loud Music/Party', 41625: 'Car Service Company Complaint', 44331: 'Driver Complaint', 46913: 'Dead/Dying Tree', 47459: 'Driver Complaint', 48465: 'Posted 
Parking Sign Violation', 51837: 'Loud Music/Party', 51848: 'Loud Music/Party', 54089: 'Loud Music/Party', 54343: 'Rough, Pitted or Cracked Roads', 55140: 'Car/Truck Music', 57789: 'Driver Complaint', 63119: 'Driver Complaint', 66242: 'Driver Complaint', 66758: 'Unlicensed', 66786: 'Insurance Information Requested', 66809: 'Congestion/Gridlock', 67465: 'Drag Racing', 72424: 'Driver Complaint', 75531: 'In Prohibited Area', 77918: 'Loud Music/Party', 78048: 'Driver Complaint', 78352: 'Car Service Company Complaint', 78383: 'Car/Truck Music', 79078: 'Loud Music/Party', 84489: 'Loud Music/Party', 84518: 'Loud Music/Party', 84688: 'Loud Music/Party', 84695: 'Loud Music/Party', 88812: 'Driver Complaint', 89205: 'Unlicensed', 89382: 'Damaged Door', 89734: 'Loud Music/Party', 93990: 'Driver Complaint', 99407: 'Traffic Sign or Signal Blocked', 99847: 'Driver Complaint', 100073: 'Driver Complaint', 101013: 'Driver Complaint', 104020: 'Car Service Company Complaint', 106118: 'Loud Music/Party', 106499: 'Dirty/Graffiti'}, 'Location Type': {2869: 'Street/Sidewalk', 23571: 'Park/Playground', 41625: 'Street', 44331: 'Street', 46913: 'Street', 47459: 'Street', 48465: 'Street/Sidewalk', 51837: 'Park/Playground', 51848: 'Park/Playground', 54089: 'Park/Playground', 54343: 'Street', 55140: 'Street/Sidewalk', 57789: 'Street', 63119: 'Street', 66242: 'Street', 66758: 'Park/Playground', 66786: 'Street', 66809: 'Street/Sidewalk', 67465: 'Street/Sidewalk', 72424: 'Street', 75531: 'Park/Playground', 77918: 'Park/Playground', 78048: 'Street', 78352: 'Street', 78383: 'Street/Sidewalk', 79078: 'Park/Playground', 84489: 'Park/Playground', 84518: 'Park/Playground', 84688: 'Park/Playground', 84695: 'Park/Playground', 88812: 'Street', 89205: 'Park/Playground', 89382: 'Sidewalk', 89734: 'Park/Playground', 93990: 'Street', 99407: 'Street', 99847: 'Street', 100073: 'Street', 101013: 'Street', 104020: 'Street', 106118: 'Park/Playground', 106499: 'Sidewalk'}, 'Incident Zip': {2869: '11217.0', 23571: '10000', 41625: '11430', 44331: '11430', 46913: '11215', 47459: '10031', 48465: '11434', 51837: '10031.0', 51848: '10031.0', 54089: '10000.0', 54343: '10003.0', 55140: '11368.0', 57789: '11369.0', 63119: '11430.0', 66242: '11369', 66758: '10036', 66786: '10003', 66809: '11430', 67465: '11367', 72424: '11217', 75531: '10000', 77918: '10000', 78048: '11369', 78352: '11217', 78383: '11368', 79078: '10011', 84489: '11215', 84518: '11215', 84688: '11215', 84695: '11215', 88812: '11430', 89205: '10000', 89382: '11238', 89734: '10036', 93990: '10003', 99407: '11430.0', 99847: '10036.0', 100073: '10024.0', 101013: '10017.0', 104020: '11430.0', 106118: '10000.0', 106499: '11369.0'}, 'Address Type': {2869: 'PLACENAME', 23571: 'PLACENAME', 41625: 'PLACENAME', 44331: 'PLACENAME', 46913: 'PLACENAME', 47459: 'PLACENAME', 48465: 'PLACENAME', 51837: 'PLACENAME', 51848: 'PLACENAME', 54089: 'PLACENAME', 54343: 'PLACENAME', 55140: 'PLACENAME', 57789: 'PLACENAME', 63119: 'PLACENAME', 66242: 'PLACENAME', 66758: 'PLACENAME', 66786: 'PLACENAME', 66809: 'PLACENAME', 67465: 'PLACENAME', 72424: 'PLACENAME', 75531: 'PLACENAME', 77918: 'PLACENAME', 78048: 'PLACENAME', 78352: 'PLACENAME', 78383: 'PLACENAME', 79078: 'PLACENAME', 84489: 'PLACENAME', 84518: 'PLACENAME', 84688: 'PLACENAME', 84695: 'PLACENAME', 88812: 'PLACENAME', 89205: 'PLACENAME', 89382: 'PLACENAME', 89734: 'PLACENAME', 93990: 'PLACENAME', 99407: 'PLACENAME', 99847: 'PLACENAME', 100073: 'PLACENAME', 101013: 'PLACENAME', 104020: 'PLACENAME', 106118: 'PLACENAME', 106499: 'PLACENAME'}, 'City': 
{2869: 'BROOKLYN', 23571: 'NEW YORK', 41625: 'JAMAICA', 44331: 'JAMAICA', 46913: 'BROOKLYN', 47459: 'NEW YORK', 48465: 'JAMAICA', 51837: 'NEW YORK', 51848: 'NEW YORK', 54089: 'NEW YORK', 54343: 'NEW YORK', 55140: 'CORONA', 57789: 'EAST ELMHURST', 63119: 'JAMAICA', 66242: 'EAST ELMHURST', 66758: 'NEW YORK', 66786: 'NEW YORK', 66809: 'JAMAICA', 67465: 'FLUSHING', 72424: 'BROOKLYN', 75531: 'NEW YORK', 77918: 'NEW YORK', 78048: 'EAST ELMHURST', 78352: 'BROOKLYN', 78383: 'CORONA', 79078: 'NEW YORK', 84489: 'BROOKLYN', 84518: 'BROOKLYN', 84688: 'BROOKLYN', 84695: 'BROOKLYN', 88812: 'JAMAICA', 89205: 'NEW YORK', 89382: 'BROOKLYN', 89734: 'NEW YORK', 93990: 'NEW YORK', 99407: 'JAMAICA', 99847: 'NEW YORK', 100073: 'NEW YORK', 101013: 'NEW YORK', 104020: 'JAMAICA', 106118: 'NEW YORK', 106499: 'EAST ELMHURST'}, 'Landmark': {2869: 'BARCLAYS CENTER', 23571: 'CENTRAL PARK', 41625: 'J F K AIRPORT', 44331: 'J F K AIRPORT', 46913: 'BARTEL PRITCHARD SQUARE', 47459: 'CITY COLLEGE', 48465: 'PS 37', 51837: 'JACKIE ROBINSON PARK', 51848: 'JACKIE ROBINSON PARK', 54089: 'CENTRAL PARK', 54343: 'UNION SQUARE PARK', 55140: 'WORLDS FAIR MARINA', 57789: 'LA GUARDIA AIRPORT', 63119: 'J F K AIRPORT', 66242: 'LA GUARDIA AIRPORT', 66758: 'BRYANT PARK', 66786: 'BETH ISRAEL MED CENTER', 66809: 'J F K AIRPORT', 67465: 'QUEENS COLLEGE', 72424: 'BARCLAYS CENTER', 75531: 'CENTRAL PARK', 77918: 'CENTRAL PARK', 78048: 'LA GUARDIA AIRPORT', 78352: 'BARCLAYS CENTER', 78383: 'WORLDS FAIR MARINA', 79078: 'WASHINGTON SQUARE PARK', 84489: 'PROSPECT PARK', 84518: 'PROSPECT PARK', 84688: 'PROSPECT PARK', 84695: 'PROSPECT PARK', 88812: 'J F K AIRPORT', 89205: 'CENTRAL PARK', 89382: 'GRAND ARMY PLAZA', 89734: 'BRYANT PARK', 93990: 'BETH ISRAEL MED CENTER', 99407: 'J F K AIRPORT', 99847: 'PORT AUTH 42 STREET', 100073: 'MUSEUM NATURAL HIST', 101013: 'GRAND CENTRAL TERM', 104020: 'JFK', 106118: 'CENTRAL PARK', 106499: 'LA GUARDIA AIRPORT'}, 'Status': {2869: 'Closed', 23571: 'Closed', 41625: 'Closed', 44331: 'Closed', 46913: 'Closed', 47459: 'Closed', 48465: 'Closed', 51837: 'Closed', 51848: 'Closed', 54089: 'Closed', 54343: 'Closed', 55140: 'Closed', 57789: 'Closed', 63119: 'Closed', 66242: 'Closed', 66758: 'Closed', 66786: 'Closed', 66809: 'Closed', 67465: 'Closed', 72424: 'Closed', 75531: 'Closed', 77918: 'Closed', 78048: 'Closed', 78352: 'Closed', 78383: 'Closed', 79078: 'Closed', 84489: 'Closed', 84518: 'Closed', 84688: 'Closed', 84695: 'Closed', 88812: 'Closed', 89205: 'Closed', 89382: 'Closed', 89734: 'Closed', 93990: 'Closed', 99407: 'Closed', 99847: 'Closed', 100073: 'Closed', 101013: 'Closed', 104020: 'Closed', 106118: 'Closed', 106499: 'Closed'}, 'Borough': {2869: 'BROOKLYN', 23571: 'MANHATTAN', 41625: 'QUEENS', 44331: 'QUEENS', 46913: 'BROOKLYN', 47459: 'MANHATTAN', 48465: 'QUEENS', 51837: 'MANHATTAN', 51848: 'MANHATTAN', 54089: 'MANHATTAN', 54343: 'MANHATTAN', 55140: 'QUEENS', 57789: 'QUEENS', 63119: 'QUEENS', 66242: 'QUEENS', 66758: 'MANHATTAN', 66786: 'MANHATTAN', 66809: 'QUEENS', 67465: 'QUEENS', 72424: 'BROOKLYN', 75531: 'MANHATTAN', 77918: 'MANHATTAN', 78048: 'QUEENS', 78352: 'BROOKLYN', 78383: 'QUEENS', 79078: 'MANHATTAN', 84489: 'BROOKLYN', 84518: 'BROOKLYN', 84688: 'BROOKLYN', 84695: 'BROOKLYN', 88812: 'QUEENS', 89205: 'MANHATTAN', 89382: 'BROOKLYN', 89734: 'MANHATTAN', 93990: 'MANHATTAN', 99407: 'QUEENS', 99847: 'MANHATTAN', 100073: 'MANHATTAN', 101013: 'MANHATTAN', 104020: 'QUEENS', 106118: 'MANHATTAN', 106499: 'QUEENS'}}
You can first start by converting your date columns to datetime type using pd.to_datetime(): for c in ['Created Date', 'Closed Date']: df[c] = pd.to_datetime(df[c]) #df[c+'_date'] = df[c].dt.date # to extract the date (for created + closed) #df[c+'_time'] = df[c].dt.time # to extract the time (for created + closed) Then you can calculate the difference in time between the two as a new column (as hours) .astype('timedelta64[h]'), and then calculate a grouped mean: df['difference in time'] = (df['Closed Date'] - df['Created Date']).astype('timedelta64[h]') print(df.groupby('Complaint Type').agg({'difference in time':'mean'})) returns: difference in time Complaint Type Dead Tree 40.000000 For Hire Vehicle Complaint 48.333333 Illegal Parking 0.500000 Noise - Park 1.916667 Noise - Vehicle 0.500000 Overgrown Tree/Branches 15.000000 Public Toilet 125.500000 Street Condition 211.000000 Taxi Complaint 125.461538 Traffic 1.500000 Vending 3.666667
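To get the exact sample output requested in the question (mean hours and count for the 10 most frequent complaint types, closed complaints only), here is a minimal sketch building on the answer above; it assumes the data is already loaded in a DataFrame named df with the columns shown in the question:
import pandas as pd
for c in ['Created Date', 'Closed Date']:
    df[c] = pd.to_datetime(df[c])
closed = df[df['Status'] == 'Closed'].copy()
# elapsed time in hours between creation and closing
closed['closing_time_hours'] = (closed['Closed Date'] - closed['Created Date']).dt.total_seconds() / 3600
# keep only the 10 most frequent complaint types, then aggregate
top10 = closed['Complaint Type'].value_counts().nlargest(10).index
result = (closed[closed['Complaint Type'].isin(top10)]
          .groupby('Complaint Type')['closing_time_hours']
          .agg(['mean', 'count']))
print(result)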
5
1
70,449,114
2021-12-22
https://stackoverflow.com/questions/70449114/programming-a-probability-of-twins-reunion
I have a problem as below; I tried but couldn't get the right result. I want to solve it in a simple way without using an extra library. I don't have anything to share because I couldn't establish the correct logic. 4 pairs of twins (8 children in total) play with their eyes closed. At some moment the randomly mixed children pair off and hold each other's hands. How can I write a Python script that lists all possible pairings and marks the ones where every pair of siblings holds hands (and the probability of that)? list = ['a1', 'a2', 'b1', 'b2', 'c1', 'c2', 'd1', 'd2'] I want to get a result like this: [a1a2, b1b2, c1c2, d1d2] - Found [a1a2, b1b2, c1d1, c2d2] - Not found [a1a2, b1b2, c1d2, c2d1] - Not found [a1a2, b1b2, c1d1, c2d2] - Not found [a1a2, b2b1, c1d1, c2d2] - Not found ... all possible matches ... Thanks for help.
Creating all pairings and comparing to correct pairing First, define a function to generate all pairings, and a function to generate the correct pairing: def all_pairings(l): if len(l) == 0: return [[]] else: return [[(l[0],l[i])] + p for i in range(1,len(l)) for p in all_pairings(l[1:i]+l[i+1:])] def adjacent_pairing(l): it = iter(l) return zip(it, it) Then it's just a for-loop: children = ['a1', 'a2', 'b1', 'b2', 'c1', 'c2', 'd1', 'd2'] correct_pairing = set(adjacent_pairing(children)) for pairing in all_pairings(children): sentence = 'Found' if set(pairing) == correct_pairing else 'Not found' print(', '.join(''.join(pair) for pair in pairing), ' -- ', sentence) Output: a1a2, b1b2, c1c2, d1d2 -- Found a1a2, b1b2, c1d1, c2d2 -- Not found a1a2, b1b2, c1d2, c2d1 -- Not found a1a2, b1c1, b2c2, d1d2 -- Not found ... a1d2, a2d1, b1b2, c1c2 -- Not found a1d2, a2d1, b1c1, b2c2 -- Not found a1d2, a2d1, b1c2, b2c1 -- Not found Shuffling the pairings I suggest using random.shuffle to shuffle the possible pairings before iterating (otherwise, the correct pairing is always the first pairing generated, which is a bit boring). import random l = list(all_pairings(children)) random.shuffle(l) for pairing in l: sentence = 'Found' if set(pairing) == correct_pairing else 'Not found' print(', '.join(''.join(pair) for pair in pairing), ' -- ', sentence) Output: a1a2, b1d2, b2d1, c1c2 -- Not found a1d1, a2d2, b1c2, b2c1 -- Not found a1b2, a2c2, b1c1, d1d2 -- Not found ... a1b1, a2c2, b2d1, c1d2 -- Not found a1a2, b1b2, c1c2, d1d2 -- Found a1b2, a2c2, b1d2, c1d1 -- Not found ... a1c2, a2d2, b1b2, c1d1 -- Not found a1c2, a2d2, b1c1, b2d1 -- Not found a1b1, a2b2, c1c2, d1d2 -- Not found Probability of finding the correct pairing Assuming all pairings are equiprobable, the probability of finding the correct pairing when picking one pairing randomly is 1 divided by the total number of distinct possible pairings. How many distinct possible pairings are there? Choosing an ordered pairing, where the order of the pairs matter, and the order of the two children in each pair matter, is the same as choosing a permutation. It is well known that there are N! possible permutations of N children. How many ordered pairings correspond to each unordered pairing? There are 2 possible ways to order the 2 children in each pair; thus there are 2 ** (N / 2) ways to order the 2 children of all pairs. There are (N / 2)! possible ways to order the N / 2 pairs. Thus, each pairing corresponds to (N / 2)! * 2 ** (N / 2) ordered pairings. Thus, there must be N! / ( (N / 2)! * 2 ** (N / 2) ) distinct possible pairings of N children. For your 8 children, that's 8! / (4! * 2**4) == 105. The probability of picking the correct pairing, when picking uniformly at random amongst all distinct possible pairings, is 1 over that number: just slightly under 1% in the case of 8 children. Expected number of correct pairs in a random pairing Let's pick a pairing at random among all distinct possible pairings. How many of the pairs in that pairing will be correct, in average? We can count the number of correct pairs in each pairing in our python program: for pairing in all_pairings(children): nb_correct_pairs = len(set(pairing) & correct_pairing) print(', '.join(''.join(pair) for pair in pairing), ' -- ', nb_correct_pairs) Output: a1a2, b1b2, c1c2, d1d2 -- 4 a1a2, b1b2, c1d1, c2d2 -- 2 a1a2, b1b2, c1d2, c2d1 -- 2 a1a2, b1c1, b2c2, d1d2 -- 2 a1a2, b1c1, b2d1, c2d2 -- 1 a1a2, b1c1, b2d2, c2d1 -- 1 a1a2, b1c2, b2c1, d1d2 -- 2 a1a2, b1c2, b2d1, c1d2 -- 1 ... 
a1d2, a2c1, b1d1, b2c2 -- 0 a1d2, a2c2, b1b2, c1d1 -- 1 a1d2, a2c2, b1c1, b2d1 -- 0 a1d2, a2c2, b1d1, b2c1 -- 0 a1d2, a2d1, b1b2, c1c2 -- 2 a1d2, a2d1, b1c1, b2c2 -- 0 a1d2, a2d1, b1c2, b2c1 -- 0 The average number is actually easy to calculate with mathematics. Let us consider one particular child in a random pairing, for instance child a1. What is the probability that child a1 will hold their twin's hand? Since there are N-1 other children, and by symmetry all children are equally likely, the probability that child a1 will hold their twin's hand is 1/(N-1). Of course, this applies to all children, not just a1. Thus, in average, 1/(N-1) of the children will hold their own twin's hand. Since there are N children, then in average, N/(N-1) == 1 + 1/(N-1) children will hold their own twin's hand. Since there are 2 children per pair, that means that in average, there will be N / (2(N-1)) == (1 + 1/(N-1)) / 2 correct pairs in a random pairing. In the case of N == 8, this means 4/7 ≈ 0.571 correct pairs per pairing. Let us verify experimentally: l = list(all_pairings(children)) total_correct_pairs = sum(len(set(pairing) & correct_pairing) for pairing in l) n_pairings = len(l) expected_correct_pairs = total_correct_pairs / n_pairings print('Expected number of correct pairs: ', expected_correct_pairs) Output: Expected number of correct pairs: 0.5714285714285714
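As a quick check of the counting argument above, the closed-form count N! / ((N/2)! * 2**(N/2)) can be evaluated directly; a minimal sketch:
from math import factorial

def n_pairings(n):
    # number of distinct ways to split n children into unordered pairs
    return factorial(n) // (factorial(n // 2) * 2 ** (n // 2))

print(n_pairings(8))      # 105 distinct pairings of 8 children
print(1 / n_pairings(8))  # ~0.0095, chance of picking the correct pairing at random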
6
3
70,449,405
2021-12-22
https://stackoverflow.com/questions/70449405/when-why-use-types-from-typing-module-for-type-hints
What is exactly the 'right' way for type hinting? My IDE (and resulting code) works fine for type hints using either of below options, but some types can be imported from the typing module. Is there a preference for using the import from the typing module over builtins (like list or dict)? Examples: from typing import Dict def func_1(arg_one: Dict) -> Dict: pass and def func_2(arg_one: dict) -> dict: pass
The "right" way is to use builtins when possible (e.g. dict over typing.Dict). typing.Dict is only needed if you use Python < 3.9. In older versions you couldn't use generic annotations like dict[str, Any] with builtins, you had to use Dict[str, Any]. See PEP 585
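A minimal sketch contrasting the two spellings (the first function requires Python 3.9+, or Python 3.7+ with from __future__ import annotations, since older interpreters cannot subscript the builtin types at runtime):
from typing import Any, Dict, List

# Python 3.9+: builtin generics can be subscripted directly
def newer(mapping: dict[str, Any]) -> list[str]:
    return list(mapping)

# Python 3.8 and earlier: the typing aliases are needed for subscripting
def older(mapping: Dict[str, Any]) -> List[str]:
    return list(mapping)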
12
14
70,444,996
2021-12-22
https://stackoverflow.com/questions/70444996/obtaining-metadata-where-from-of-a-file-on-mac
I am trying to obtain the "Where from" extended file attribute which is located on the "get info" context menu of a file in MacOS. Example: When right-clicking on the file and displaying the info, it shows this metadata. The highlighted part in the image below shows the information I want to obtain (the link of the website where the file was downloaded from). I want to read this Mac-specific metadata using Python. I thought of using OS tools but couldn't figure out which one to use.
TL;DR: Get the extended attribute like MacOS's "Where from" by e.g. pip-install pyxattr and use xattr.getxattr("file.pdf", "com.apple.metadata:kMDItemWhereFroms"). Extended Attributes on files These extended file attributes like your "Where From" in MacOS (since 10.4) store metadata not interpreted by the filesystem. They exist for different operating systems. using the command-line You can also query them on the command-line with tools like: exiftool: exiftool -MDItemWhereFroms -MDItemTitle -MDItemAuthors -MDItemDownloadedDate /path/to/file xattr (apparently MacOS also uses a Python-script) xattr -p -l -x /path/to/file On MacOS many attributes are displayed in property-list format, thus use -x option to obtain hexadecimal output. using Python Ture Pålsson pointed out the missing link keywords. Such common and appropriate terms are helpful to search Python Package Index (PyPi): Search PyPi by keywords: extend file attributes, meta data: xattr pyxattr osxmetadata, requires Python 3.7+, MacOS only For example to list and get attributes use (adapted from pyxattr's official docs) import xattr xattr.listxattr("file.pdf") # ['user.mime_type', 'com.apple.metadata:kMDItemWhereFroms'] xattr.getxattr("file.pdf", "user.mime_type") # 'text/plain' xattr.getxattr("file.pdf", "com.apple.metadata:kMDItemWhereFroms") # ['https://example.com/downloads/file.pdf'] However you will have to convert the MacOS specific metadata which is stored in plist format, e.g. using plistlib. File metadata on MacOS Mac OS X 10.4 (Tiger) introduced Spotlight a system for extracting (or harvesting), storing, indexing, and querying metadata. It provides an integrated system-wide service for searching and indexing. This metadata is stored as extended file attributes having keys prefixed with com.apple.metadata:. The "Where from" attribute for example has the key com.apple.metadata:kMDItemWhereFroms. using Python Use osxmetadata to use similar functionality like in MacOS's md* utils: from osxmetadata import OSXMetaData filename = 'file.pdf' meta = OSXMetaData(filename) # get and print "Where from" list, downloaded date, title print(meta.wherefroms, meta.downloadeddate, meta.title) See also MacIssues (2014): How to look up file metadata in OS X OSXDaily (2018): How to View & Remove Extended Attributes from a File on Mac OS Ask Different: filesystem - What all file metadata is available in macOS? Query Spotlight for a range of dates via PyObjC Mac OS X : add a custom meta data field to any file
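To illustrate the plist conversion mentioned above, a minimal sketch (it assumes file.pdf actually carries the attribute and that it is stored as Apple's usual binary plist):
import plistlib
import xattr

raw = xattr.getxattr("file.pdf", "com.apple.metadata:kMDItemWhereFroms")
where_froms = plistlib.loads(raw)  # typically decodes to a list of URL strings
print(where_froms)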
5
9
70,446,485
2021-12-22
https://stackoverflow.com/questions/70446485/check-whether-url-exists-or-not-without-downloading-the-content-using-python
I have to check whether a URL exists or not using Python. I am trying to use requests.get(url), but it is taking a lot of time as the file starts downloading as soon as the get is issued. I don't want the file to be downloaded just to check the URL's validity. Can this be achieved using Python?
Something like the below. See HTTP head for more info. import requests urls = ['https://www.google.com','https://www.google.com/you_can_not_find_me'] for idx,url in enumerate(urls,1): r = requests.head(url) if r.status_code == 200: print(f'{idx}) {url} was found') else: print(f'{idx}) {url} was NOT found') output 1) https://www.google.com was found 2) https://www.google.com/you_can_not_find_me was NOT found
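One caveat: some servers reject HEAD with 405 Method Not Allowed even though the URL exists. A hedged fallback sketch using a streamed GET, which does not download the body until you access it:
import requests

def url_exists(url: str) -> bool:
    r = requests.head(url, allow_redirects=True, timeout=10)
    if r.status_code == 405:  # server refuses HEAD; retry with a streamed GET
        r = requests.get(url, stream=True, allow_redirects=True, timeout=10)
        r.close()  # close the connection without downloading the body
    return r.ok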
4
6
70,361,947
2021-12-15
https://stackoverflow.com/questions/70361947/how-to-install-python-package-from-github-that-doesnt-have-setup-py-in-it
I would like to use the following SDK in my Python project -> https://github.com/LBank-exchange/lbank-api-sdk-v2. It has SDKs for 3 languages (I just want the Python one). I tried to install it using the command: pip install git+https://github.com/LBank-exchange/lbank-api-sdk-v2.git#egg=lbank which gave the error: does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
Looks like the developer didn't bother to package it properly. If it was me using it, I would fork it on GH, add the setup.py and use the fork. Maybe a good exercise for you? Meanwhile, to just get it to work, in your project "root": git clone https://github.com/LBank-exchange/lbank-api-sdk-v2.git ln -s lbank-api-sdk-v2/python-sdk-api/LBank ./LBank Then in your code just import LBank. This will leave the cloned repo untouched (so you can git pull to update it later) and just link the module directory to the root. Alternatively you can just include the api directory in sys.path for imports to work.
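If you do go the fork route, a minimal setup.py sketch for the root of the fork; the package name, version and the requests dependency are assumptions, so adjust them to the SDK's actual layout under python-sdk-api:
# setup.py (hypothetical; names and dependencies are assumptions)
from setuptools import setup, find_packages

setup(
    name="lbank-api-sdk",
    version="0.1.0",
    package_dir={"": "python-sdk-api"},
    packages=find_packages(where="python-sdk-api"),
    install_requires=["requests"],  # assumption: the SDK makes HTTP calls
)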
6
6
70,392,020
2021-12-17
https://stackoverflow.com/questions/70392020/f-string-formatting-display-number-sign
Basic question about Python f-strings, but I couldn't find the answer: how do I force sign display of a float or integer number? i.e. what f-string displays 3 as +3?
From Docs: Format Specification Mini-Language(Emphasis mine): Option Meaning '+' indicates that a sign should be used for both positive as well as negative numbers. '-' indicates that a sign should be used only for negative numbers (this is the default behavior). Example from docs: >>> '{:+f}; {:+f}'.format(3.14, -3.14) # show it always '+3.140000; -3.140000' >>> '{:-f}; {:-f}'.format(3.14, -3.14) # show only the minus -- same as '{:f}; {:f}' '3.140000; -3.140000' >>> '{:+} {:+}'.format(10, -10) '+10 -10' Above examples using f-strings: >>> f'{3.14:+f}; {-3.14:+f}' '+3.140000; -3.140000' >>> f'{3.14:-f}; {-3.14:-f}' '3.140000; -3.140000' >>> f'{10:+} {-10:+}' '+10 -10' One caveat while printing 0 as 0 is neither positive nor negative. In python, +0 = -0 = 0. >>> f'{0:+} {-0:+}' '+0 +0' >>> f'{0.0:+} {-0.0:+}' '+0.0 -0.0' 0.0 and -0.0 are different objects1. 0 in Computer Science(Emphasis mine): In some computer hardware signed number representations, zero has two distinct representations, a positive one grouped with the positive numbers and a negative one grouped with the negatives; this kind of dual representation is known as signed zero, with the latter form sometimes called negative zero. Update: From Python 3.11 and above, allows negative floating point zero as positive zero. The 'z' option coerces negative zero floating-point values to positive zero after rounding to the format precision. This option is only valid for floating-point presentation types. Example from PEP682: >>> x = -.00001 >>> f'{x:z.1f}' '0.0' >>> x = decimal.Decimal('-.00001') >>> '{:+z.1f}'.format(x) '+0.0' 1. Negative 0 in Python. Also check out Signed Zero (-0)
14
22
70,367,905
2021-12-15
https://stackoverflow.com/questions/70367905/how-to-create-a-new-branch-push-a-text-file-and-send-merge-request-to-a-gitlab
I found https://github.com/python-gitlab/python-gitlab, but I was unable to understand the examples in the doc.
That's right there are no tests we can find in the doc. Here's a basic answer for your question. If you would like a complete working script, I have attached it here: https://github.com/hubshashwat/common_scripts/blob/main/automation_to_create_push_merge_in_gitlab/usecase_gitlab_python.py Breaking down the steps below: Create an authkey for you: Follow the steps here: https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html Create a gitlab server instance of your project server = gitlab.Gitlab('https://gitlab.example.com', private_token=YOUR_API_TOKEN) project = server.projects.get(PROJECT_ID) Create a branch using: branch = project.branches.create( {"branch": branch_name, "ref": project.default_branch} ) Upload a file using: project.files.create( { "file_path": file_name, "branch": branch.name, "content": "data to be written", "encoding": "text", # or 'base64'; useful for binary files "author_email": AUTHOR_EMAIL, # Optional "author_name": AUTHOR_NAME, # Optional "commit_message": "Create file", } ) Create a merge request using: project.mergerequests.create( { "source_branch": branch.name, "target_branch": project.default_branch, "title": "merge request title", } )
6
2
70,404,485
2021-12-18
https://stackoverflow.com/questions/70404485/how-did-printa-a-pop0-change
This code: a = [1, 2, 3] print(*a, a.pop(0)) Python 3.8 prints 2 3 1 (does the pop before unpacking). Python 3.9 prints 1 2 3 1 (does the pop after unpacking). What caused the change? I didn't find it in the changelog. Edit: Not just in function calls but also for example in a list display: a = [1, 2, 3] b = [*a, a.pop(0)] print(b) Prints [2, 3, 1] vs [1, 2, 3, 1]. And Expression lists says "The expressions are evaluated from left to right" (that's the link to Python 3.8 documentation), so I'd expect the unpacking expression to happen first.
I suspect this may have been an accident, though I prefer the new behavior. The new behavior is a consequence of a change to how the bytecode for * arguments works. The change is in the changelog under Python 3.9.0 alpha 3: bpo-39320: Replace four complex bytecodes for building sequences with three simpler ones. The following four bytecodes have been removed: BUILD_LIST_UNPACK BUILD_TUPLE_UNPACK BUILD_SET_UNPACK BUILD_TUPLE_UNPACK_WITH_CALL The following three bytecodes have been added: LIST_TO_TUPLE LIST_EXTEND SET_UPDATE On Python 3.8, the bytecode for f(*a, a.pop()) looks like this: 1 0 LOAD_NAME 0 (f) 2 LOAD_NAME 1 (a) 4 LOAD_NAME 1 (a) 6 LOAD_METHOD 2 (pop) 8 CALL_METHOD 0 10 BUILD_TUPLE 1 12 BUILD_TUPLE_UNPACK_WITH_CALL 2 14 CALL_FUNCTION_EX 0 16 RETURN_VALUE while on 3.9, it looks like this: 1 0 LOAD_NAME 0 (f) 2 BUILD_LIST 0 4 LOAD_NAME 1 (a) 6 LIST_EXTEND 1 8 LOAD_NAME 1 (a) 10 LOAD_METHOD 2 (pop) 12 CALL_METHOD 0 14 LIST_APPEND 1 16 LIST_TO_TUPLE 18 CALL_FUNCTION_EX 0 20 RETURN_VALUE In the old bytecode, the code pushes a and (a.pop(),) onto the stack, then unpacks those two iterables into a tuple. In the new bytecode, the code pushes a list onto the stack, then does l.extend(a) and l.append(a.pop()), then calls tuple(l). This change has the effect of shifting the unpacking of a to before the pop call, but this doesn't seem to have been deliberate. Looking at bpo-39320, the intent was to simplify the bytecode instructions, not to change the behavior, and the bpo thread has no discussion of behavior changes. A related change affects ** unpacking, listed in the changelog under Python 3.9.0 alpha 4: bpo-39320: Replace two complex bytecodes for building dicts with two simpler ones. The new bytecodes DICT_MERGE and DICT_UPDATE have been added The old bytecodes BUILD_MAP_UNPACK and BUILD_MAP_UNPACK_WITH_CALL have been removed. So d = {'a': 1} f(**d, b=d.pop('a')) also changed behavior between Python 3.8 and 3.9.
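To reproduce the disassembly above on your own interpreter, dis accepts a source string directly (the names f and a do not need to exist, since the string is only compiled, not executed); running this under 3.8 and then 3.9 shows the two different instruction sequences:
import dis

dis.dis("f(*a, a.pop(0))")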
65
53
70,363,269
2021-12-15
https://stackoverflow.com/questions/70363269/how-can-i-convert-a-markdown-string-to-a-docx-in-python
I am getting markdown text from my API like this: { name:'Onur', surname:'Gule', biography:'## Computers I like **computers** so much. I wanna *be* a computer.', membership:1 } biography column includes markdown string like above. ## Computers I like **computers** so much. I wanna *be* a computer. I want to take this markdown text and convert to docx string for my reports. In my docx template: {{markdownText|mark2html}} {{simpleText}} I am using python3 docxtpl package for creating docx and it's working for simple texts. I tried BeautifulSoup for convert markdown to docx text but it doesn't work for styles(bold, italic etc.). I tried pandoc and it worked but it just create a docx file, I want to add rendered markdown text to existing docx(while creating). My current code: import docx from docxtpl import DocxTemplate, RichText import markdown import jinja2 import markupsafe from bs4 import BeautifulSoup import pypandoc def safe_markdown(text): return markupsafe.Markup(markdown.markdown(text)) def mark2html(value): html = markdown.markdown(value) soup = BeautifulSoup(html, features='html.parser') output = pypandoc.convert_text(value,'rtf',format='md') return RichText(value) #tried soup and pandoc.. def from_template(template): template = DocxTemplate(template) context = { 'simpleText':'Simple text test.', 'markdownText':'Markdown **text** test.' } jenv = jinja2.Environment() jenv.filters['markdown'] = safe_markdown jenv.filters["mark2html"] = mark2html template.render(context,jenv) template.save('new_report.docx') So, how can I add rendered markdown to existed docx or while creating, maybe with a jinja2 filter?
I solved it without any shortcut. I turn the markdown to html with beautifulSoup and then process every paragraph by checking theirs tag names. In my word template: {% if markdownText != None %} {% for mt in markdownText|mark2html %} {{mt}} {% endfor %} {% endif %} My template tag: def mark2html(value): if value == None: return '-' html = markdown.markdown(value) soup = BeautifulSoup(html, features='html.parser') paragraphs = [] global doc for tag in soup.findAll(True): if tag.name in ('p','h1','h2','h3','h4','h5','h6'): paragraphs.extend(parseHtmlToDoc(tag)) return paragraphs My code to insert docx: def parseHtmlToDoc(org_tag): contents = org_tag.contents pars= [] for con in contents: if str(type(con)) == "<class 'bs4.element.Tag'>": tag = con if tag.name in ('strong',"h1","h2","h3","h4","h5","h6"): source = RichText("") if len(pars) > 0 and str(type(pars[len(pars)-1])) == "<class 'docxtpl.richtext.RichText'>": source = pars[len(pars)-1] source.add(con.contents[0], bold=True) else: source.add(con.contents[0], bold=True) pars.append(source) elif tag.name == 'img': source = tag['src'] imagen = InlineImage(doc, settings.MEDIA_ROOT+source) pars.append(imagen) elif tag.name == 'em': source = RichText("") source.add(con.contents[0], italic=True) pars.append(source) else: source = RichText("") if len(pars) > 0 and str(type(pars[len(pars)-1])) == "<class 'docxtpl.richtext.RichText'>": source = pars[len(pars)-1] pars.add(con) else: if org_tag.name == 'h2': source.add(con,bold=True,size=40) else: source.add(con) pars.append(source) # her zaman append? return pars It process html tags like b, i, img, headers. You can add more tags to process. I solved like that and it doesn't need any additional file transform like html2docx or etc. I used this process in my code like this: report_context = {'reportVariables': report_variables} template = DocxTemplate('report_format.docx') jenv = jinja2.Environment() jenv.filters["mark2html"] = mark2html template.render(report_context,jenv) template.save('exported_1.docx')
7
10
70,383,316
2021-12-16
https://stackoverflow.com/questions/70383316/pydantic-constr-vs-field-args
I wanted to know what is the difference between: from pydantic import BaseModel, Field class Person(BaseModel): name: str = Field(..., min_length=1) And: from pydantic import BaseModel, constr class Person(BaseModel): name: constr(min_length=1) Both seem to perform the same validation (even raise the exact same exception info when name is an empty string). Is it just a matter of code style? Is one of them preferred over the other? Also, if I wanted to include a list of nonempty strings as an attribute, which of these ways do you think would be better?: from typing import List from pydantic import BaseModel, constr class Person(BaseModel): languages: List[constr(min_length=1)] Or: from typing import List from pydantic import BaseModel, Field class Person(BaseModel): languages: List[str] @validator('languages', each_item=True) def check_nonempty_strings(cls, v): if not v: raise ValueError('Empty string is not a valid language.') return v EDIT: FWIW, I am using this for a FastAPI app. EDIT2: For my 2nd question, I think the first alternative is better, as it includes the length requirement in the Schema (and so it's in the documentation)
constr and Field don't serve the same purpose. constr is a specific type that gives validation rules regarding this specific type. There are equivalents for all classic Python types. Arguments of constr: strip_whitespace: bool = False: removes leading and trailing whitespace to_lower: bool = False: turns all characters to lowercase to_upper: bool = False: turns all characters to uppercase strict: bool = False: controls type coercion min_length: int = None: minimum length of the string max_length: int = None: maximum length of the string curtail_length: int = None: shrinks the string length to the set value when it is longer than the set value regex: str = None: regex to validate the string against As you can see, those arguments allow you to manipulate the str itself, not the behavior of pydantic with this field. Field doesn't serve the same purpose; it's a way of customizing fields, all fields not only str, and it adds 18 customization variables that you can find here. Is it just a matter of code style? Is one of them preferred over the other? For the specific case of str it is a matter of code style, and what is preferred doesn't matter, only your use-case does. In general it is better not to mix different syntaxes together, and since you often need Field(), you will find it often. A classic use case would be an API response that sends JSON objects in camelCase or PascalCase; you would use a field alias to match these objects and work with their variables in snake_case. Example: class Voice(BaseModel): name: str = Field(None, alias='ActorName') language_code: str = None mood: str = None I personally prefer to use pydantic types to clearly separate type rules and field annotations. Basic example: class Car(BaseModel): description: Union[constr(min_length=1, max_length=64), None] = Field( default=None, example="something", description="Your car description", ) In any case you should only use one style of model structure (Field, pydantic type or both together) for global coherence and better readability of your project. For your 2nd question you are right, using constr is surely the best approach since the validation rule will be added into the OpenAPI doc. If you want to learn more about limitations and field rules enforcement, check this.
19
15
70,416,097
2021-12-19
https://stackoverflow.com/questions/70416097/adding-data-labels-ontop-of-my-histogram-python-matplotlib
I am trying to add data label values on top of my histogram to show the frequency visibly. This is my code now, but I am unsure how to put the values on top: plt.figure(figsize=(15,10)) plt.hist(df['Age'], edgecolor='white', label='d') plt.xlabel("Age") plt.ylabel("Number of Patients") plt.title = ('Age Distrubtion') I was wondering if anyone knows the code to do this:
You can use the new bar_label() function using the bars returned by plt.hist(). Here is an example: from matplotlib import pyplot as plt import pandas as pd import numpy as np df = pd.DataFrame({'Age': np.random.randint(20, 60, 200)}) plt.figure(figsize=(15, 10)) values, bins, bars = plt.hist(df['Age'], edgecolor='white') plt.xlabel("Age") plt.ylabel("Number of Patients") plt.title('Age Distrubtion') plt.bar_label(bars, fontsize=20, color='navy') plt.margins(x=0.01, y=0.1) plt.show() PS: As the age is discrete distribution, it is recommended to explicitly set the bin boundaries, e.g. plt.hist(df['Age'], bins=np.arange(19.999, 60, 5)).
11
24
70,362,595
2021-12-15
https://stackoverflow.com/questions/70362595/visual-studio-code-not-recognizing-python-import-and-functions
What do the squiggly lines represent in the image? The actual error that flags up when I hover my mouse over the squiggly line is: Import "pyspark.sql.functions" could not be resolved (Pylance) I'm not sure what that means, but I'm getting the error for almost all functions in Visual Studio Code. How can I resolve it?
I had the same error as you. Visual Studio Code usually has a "recommended" interpreter, but sometimes it won't help you out with what you need. So, I changed the interpreter (Ctrl + Shift + P in Visual Studio Code). Look for "Python: Select Interpreter" and choose the one that contains the name "Conda". And that's how the magic happens.
8
8
70,426,576
2021-12-20
https://stackoverflow.com/questions/70426576/get-random-number-from-set-deprecation
I am trying to get a random n number of users from a set of unique users. Here is what I have so far users = set() random_users = random.sample((users), num_of_user) This works well but it is giving me a deprecated warning. What should I be using instead? random.choice doesn't work with sets UPDATE I am trying to get reactions on a post and want them to be unique which is why I used a set. Would it be better to stick with a list for this? users = set() for reaction in msg.reactions: async for user in reaction.users(): users.add(user)
Convert your set to a list. 1. By using the list function: random_users = random.choices(list(users), k=num_of_user) 2. By using the * operator to unpack your set or dict: random_users = random.choices([*users], k=num_of_user) Solution 1 is 3 characters longer than solution 2, but solution 1 is more literal (to me). The list order is not guaranteed to be the same across executions, Python versions and platforms, so you may end up with a different random result despite a careful random number generator initialization; to resolve this, sort your list first. You can also store the users in a list from the start and make the elements unique later, the common way being to convert the list to a set and back to a list again.
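One caveat: random.choices draws with replacement, so the same user can appear more than once. If you need num_of_user distinct users (which is what random.sample gave before the warning), a minimal sketch reusing the names from the question:
import random

# sampling without replacement; sorting first makes the result reproducible
# across runs when a fixed seed is used
random_users = random.sample(sorted(users, key=str), k=num_of_user)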
12
13
70,396,931
2021-12-17
https://stackoverflow.com/questions/70396931/catch-all-overload-for-in-python-type-annotations
The below code fails mypy with error: Overloaded function signatures 1 and 2 overlap with incompatible return types. @overload def test_overload(x: str) -> str: ... @overload def test_overload(x: object) -> int: ... def test_overload(x) -> Union[str, int]: if isinstance(x, str): return x else: return 1 What I'm trying to express is: "This function takes an arbitrary Python object. If that object is a string, it returns a string. If it is any other type, it returns an integer. Note this particular example is contrived to represent the general case. Is it possible to express this with overloads?
At the moment (Python 3.10, mypy 0.961) there is no way to express "any object except one". But you can silence the error with type: ignore[misc] on the excepted overloads. They must precede the more general variant, because order matters for @overload: from typing import overload, Union @overload def test_overload(x: str) -> str: # type: ignore[misc] ... @overload def test_overload(x: object) -> int: ... def test_overload(x) -> Union[str, int]: if isinstance(x, str): return x else: return 1 reveal_type(test_overload("string")) # Revealed type is "builtins.str" reveal_type(test_overload(object())) # Revealed type is "builtins.int"
10
7
70,376,255
2021-12-16
https://stackoverflow.com/questions/70376255/how-to-fetch-data-from-clickhouse-in-dicitionary-name-tuple-using-clickhouse-dri
When we fetch data using the DB API 2.0 cur.execute("select * from db.table"), we get a cursor which seems like a generator over a list of tuples. Whereas in pymongo, when we fetch, we get it as a list of dictionaries. I want to achieve something like this: instead of fetching a list of tuples, I want a list of dictionaries or named tuples. I believe from an efficiency point of view it makes sense, since the schema is already defined so there is no need to send it for every record. Currently the workaround I am using is: cur.execute("select * from db.table") columns = cur.columns_with_types data = cur.fetchall() df = pd.DataFrame(data,columns=[tuple[0] for tuple in columns]) data_reqd = df.to_dict('records') This method fares poorly when the query returns a lot of data. Workaround 1: Use fetchmany(size=block_size), but it doesn't seem like an elegant way to do things. Workaround 2: This seems like a much better way to handle things. cur.execute("select * from db.table") columns = cur.columns_with_types for tup in cur: row = dict(zip(columns, tup)) # use row Any good way to handle this? Any improvements to the question are appreciated.
You can alternatively create a Client and call its query_dataframe method. import clickhouse_driver as ch ch_client = ch.Client(host='localhost') df = ch_client.query_dataframe('select * from db.table') records = df.to_dict('records')
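If you would rather avoid pandas entirely, clickhouse-driver's Client.execute can, as far as I recall, also return column metadata via its with_column_types flag, which makes building dictionaries straightforward; a hedged sketch:
import clickhouse_driver as ch

ch_client = ch.Client(host='localhost')
rows, columns = ch_client.execute('select * from db.table', with_column_types=True)
names = [name for name, _type in columns]
records = [dict(zip(names, row)) for row in rows]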
5
3
70,416,187
2021-12-19
https://stackoverflow.com/questions/70416187/check-if-values-in-a-column-exist-elsewhere-in-a-dataframe-row
Suppose I have a dataframe as below: df = pd.DataFrame({'a':[1,2,3,4],'b':[2,3,4,5],'c':[3,4,5,6],'d':[5,3,2,4]}) I want to check if elements in column d exist elsewhere in its corresponding row. So the outcome I want is [False, True, False, True] Towards that end, I used df.apply(lambda x: x['d'] in x[['a','b','c']], axis=1) but this is somehow giving me [False, False, False, False].
Try: out = (df[['a','b','c']].T==df['d']).any() Output: 0 False 1 True 2 False 3 True dtype: bool
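For context, the original apply returned all False because the in operator on a pandas Series tests the index labels rather than the values. An equivalent check that avoids the transpose, using df from the question:
out = df[['a', 'b', 'c']].eq(df['d'], axis=0).any(axis=1)
# 0    False
# 1     True
# 2    False
# 3     True
# dtype: bool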
5
3
70,437,840
2021-12-21
https://stackoverflow.com/questions/70437840/how-to-change-colors-for-decision-tree-plot-using-sklearn-plot-tree
How to change colors in decision tree plot using sklearn.tree.plot_tree without using graphviz as in this question: Changing colors for decision tree plot created using export graphviz? plt.figure(figsize=[21, 6]) ax1 = plt.subplot(121) ax2 = plt.subplot(122) ax1.plot(X[:, 0][y == 0], X[:, 1][y == 0], "bo") ax1.plot(X[:, 0][y == 1], X[:, 1][y == 1], "g^") ax1.contourf(xx, yy, pred.reshape(xx.shape), cmap=matplotlib.colors.ListedColormap(['b', 'g']), alpha=0.25) ax1.set_title(title) plot_tree(tree_clf, feature_names=["X", "y"], class_names=["blue", "green"], filled=True, rounded=True)
Many matplotlib functions follow the color cycler to assign default colors, but that doesn't seem to apply here. The following approach loops through the generated annotation texts (artists) and the clf tree structure to assign colors depending on the majority class and the impurity (gini). Note that we can't use alpha, as a transparent background would show parts of arrows that are usually hidden. from matplotlib import pyplot as plt from matplotlib.colors import ListedColormap, to_rgb import numpy as np from sklearn import tree X = np.random.rand(50, 2) * np.r_[100, 50] y = X[:, 0] - X[:, 1] > 20 clf = tree.DecisionTreeClassifier(random_state=2021) clf = clf.fit(X, y) fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=[21, 6]) colors = ['crimson', 'dodgerblue'] ax1.plot(X[:, 0][y == 0], X[:, 1][y == 0], "o", color=colors[0]) ax1.plot(X[:, 0][y == 1], X[:, 1][y == 1], "^", color=colors[1]) xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100), np.linspace(X[:, 1].min(), X[:, 1].max(), 100)) pred = clf.predict(np.c_[(xx.ravel(), yy.ravel())]) ax1.contourf(xx, yy, pred.reshape(xx.shape), cmap=ListedColormap(colors), alpha=0.25) # ax2.set_prop_cycle(mpl.cycler(color=colors)) # doesn't seem to work artists = tree.plot_tree(clf, feature_names=["X", "y"], class_names=colors, filled=True, rounded=True, ax=ax2) for artist, impurity, value in zip(artists, clf.tree_.impurity, clf.tree_.value): # let the max value decide the color; whiten the color depending on impurity (gini) r, g, b = to_rgb(colors[np.argmax(value)]) f = impurity * 2 # for N colors: f = impurity * N/(N-1) if N>1 else 0 artist.get_bbox_patch().set_facecolor((f + (1-f)*r, f + (1-f)*g, f + (1-f)*b)) artist.get_bbox_patch().set_edgecolor('black') plt.tight_layout() plt.show()
11
8
70,375,349
2021-12-16
https://stackoverflow.com/questions/70375349/using-searchvectorfields-on-many-to-many-related-models
I have two models Author and Book which are related via m2m (one author can have many books, one book can have many authors) Often we need to query and match records for ingests using text strings, across both models ie: "JRR Tolkien - Return of the King" when unique identifiers are not available. I would like to test if using SearchVectorField with GIN indexes can improve full-text search response times - but since the search query will be SearchVector(author__name, book__title) It seems that both models need a SearchVectorField added. This becomes more complicated when each table needs updating since it appears Postgres Triggers need to be set up on both tables, which might make updating anything completely untenable. Question What is the modern best practice in Django for adopting vectorised full-text search methods when m2m related models are concerned? Should the SearchVectorField be placed through a table? Or in each model? How should triggers be applied? I've been searching for guides on this specifically - but no one seems to mention m2ms when talking about SearchVectorFields. I did find this old question Also, if Postgres is really not the way forward in modern Django I'd also gladly take direction in something better suited/supported/documented. In our case, we are using Postgres 11.6. Repro from django.db import models from django.contrib.postgres.search import SearchVectorField from django.contrib.postgres.indexes import GinIndex class Author(models.Model): name = models.CharField(max_length=100, unique=True) main_titles = models.ManyToManyField( "Book", through="BookMainAuthor", related_name="main_authors", ) search = SearchVectorField(null=True) class BookMainAuthor(models.Model): """The m2m through table for book and author (main)""" book = models.ForeignKey("Book", on_delete=models.CASCADE) artist = models.ForeignKey("Author", on_delete=models.CASCADE) class Meta: unique_together = ["book", "author"] class Book(models.Model): title = models.CharField(max_length=100, unique=True) search = SearchVectorField(null=True) Exploring indexing the M2M Through table Exploring Yevgeniy-kosmak's answer below, this is a simple way to index the string permutations of the through table for Book.title and Author.name Performing a search using the SearchVectorField is fast and a little more effective for some titles that have multiple authors. 
However when trying to use SearchRank - things slow down dramatically: BookMainAuthor.objects.annotate(rank=SearchRank("search", SearchQuery("JRR Tolkien - Return of the King")).order_by("-rank:).explain(analyze=True) "Gather Merge (cost=394088.44..489923.26 rows=821384 width=227) (actual time=8569.729..8812.096 rows=989307 loops=1) Workers Planned: 2 Workers Launched: 2 -> Sort (cost=393088.41..394115.14 rows=410692 width=227) (actual time=8559.074..8605.681 rows=329769 loops=3) Sort Key: (ts_rank(to_tsvector(COALESCE((search_vector)::text, ''::text)), plainto_tsquery('JRR Tolkien - Return of the King'::text), 6)) DESC Sort Method: external merge Disk: 77144kB – Worker 0: Sort Method: external merge Disk: 76920kB Worker 1: Sort Method: external merge Disk: 76720kB -> Parallel Seq Scan on bookstore_bookmainauthor (cost=0.00..264951.11 rows=410692 width=227) (actual time=0.589..8378.569 rows=329769 loops=3) Planning Time: 0.369 ms Execution Time: 8840.139 ms" Without the sort, only saves 500ms: BookMainAuthor.objects.annotate(rank=SearchRank("search", SearchQuery("JRR Tolkien - Return of the King")).explain(analyze=True) 'Gather (cost=1000.00..364517.21 rows=985661 width=227) (actual time=0.605..8282.976 rows=989307 loops=1) Workers Planned: 2 Workers Launched: 2 -> Parallel Seq Scan on bookstore_bookmainauthor (cost=0.00..264951.11 rows=410692 width=227) (actual time=0.356..8187.242 rows=329769 loops=3) Planning Time: 0.039 ms Execution Time: 8306.799 ms' However I noticed that if you do the following, it dramatically improves the query execution time (~17x), sorting included. Add an F Expression to the first argument of SearchRank (instead of using the name of the field in quotes which is what is directed to do in the documentation) Adding a config kwarg to the SearchQuery BookMainAuthor.objects.annotate(rank=SearchRank(F("search"), SearchQuery("JRR Tolkien - Return of the King", config='english')).order_by("-rank").explain(analyze=True) Gather Merge (cost=304240.66..403077.76 rows=847116 width=223) (actual time=336.654..559.367 rows=989307 loops=1) Workers Planned: 2 Workers Launched: 2 -> Sort (cost=303240.63..304299.53 rows=423558 width=223) (actual time=334.970..373.282 rows=329769 loops=3) Sort Key: (ts_rank(search_vector, '''jrr'' & ''tolkien'' & ''return'' & ''king'''::tsquery)) DESC Sort Method: external merge Disk: 75192kB Worker 0: Sort Method: external merge Disk: 76672kB Worker 1: Sort Method: external merge Disk: 76976kB -> Parallel Seq Scan on bookstore_bookmainauthor (cost=0.00..173893.48 rows=423558 width=223) (actual time=0.014..211.007 rows=329769 loops=3) Planning Time: 0.059 ms Execution Time: 584.402 ms
Finally got it. I suppose you need to search with a query that contains the author's name and the book's name at the same time, and you wouldn't be able to split it so that the "book" part is matched against the Book table and the author part against Author. Indeed, making an index over fields from separate tables is impossible with PostgreSQL. I don't see that as a weakness of PostgreSQL; it's just a very unusual case in which you really need such an index. In most cases there are other solutions that are no worse in terms of efficiency. Of course, you can always look at ElasticSearch if for some reason you are sure it's necessary. I'd advise the following approach. You can give BookMainAuthor this structure: class BookMainAuthor(models.Model): """The m2m through table for book and author (main)""" book = models.ForeignKey("Book", on_delete=models.CASCADE) author = models.ForeignKey("Author", on_delete=models.CASCADE) book_full_name = models.CharField(max_length=200) search = SearchVectorField(null=True) class Meta: unique_together = ["book", "author"] As it seems to me, it shouldn't cause any trouble to maintain the book_full_name field, which would contain both the author and book names with an appropriate separator. Everything else is a textbook case. From my experience, if the BookMainAuthor table contains no more than 10M entries, everything will be just fine on an average single server (for example, like the AX161 from here).
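A minimal sketch (not from the answer itself) of how the denormalised column and its search vector could be populated and queried with this layout; it assumes the through-model fields are named book, author, book_full_name and search as above, and that the text lives in Author.name and Book.title:

from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector
from django.db.models import F

# Fill the denormalised text column (joined fields cannot be used in update()):
for link in BookMainAuthor.objects.select_related("author", "book"):
    link.book_full_name = f"{link.author.name} - {link.book.title}"
    link.save(update_fields=["book_full_name"])

# Build the search vector in bulk, then filter and rank against it:
BookMainAuthor.objects.update(search=SearchVector("book_full_name", config="english"))
query = SearchQuery("JRR Tolkien - Return of the King", config="english")
matches = (
    BookMainAuthor.objects
    .annotate(rank=SearchRank(F("search"), query))
    .filter(search=query)
    .order_by("-rank")
)

In practice the book_full_name/search maintenance would move into save(), a signal or a database trigger, which is exactly the trade-off the question is weighing up.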
5
4
70,422,166
2021-12-20
https://stackoverflow.com/questions/70422166/when-run-pip-compile-requirements-in-in-macos12-monterey-using-venv-python-3-9
OS: macOS Monterey 12.0.1 Python venv: 3.9.9 requirements.in # To update requirements.txt, run: # # pip-compile requirements.in # # To install in localhost, run: # # pip-sync requirements.txt # django==3.2.10 # https://www.djangoproject.com/ psycopg2-binary==2.9.2 # https://github.com/psycopg/psycopg2 After I activate the venv and type pip-compile requirements.in, I get a bunch of errors about pg_config not being found. This is my asciinema: https://asciinema.org/a/sl9MqmrayLAR3rRxEul4mYaxw I have tried env LDFLAGS='-L/usr/local/lib -L/usr/local/opt/openssl/lib -L/usr/local/opt/readline/lib' pip-compile requirements.in but I get the same error. Please advise.
I appreciate the other 2 answers from @Vishnudev and @cetver 🙏, but I tried to install postgresql using brew install and it took a very long time; it still had not completed after 20 minutes. I eventually figured this out after much googling. Here are the specs of my setup: macOS Monterey 12.1.0, Apple Silicon, zsh. Concepts Conceptually, what I did was: install openssl, turn on all the exports associated with that, then install libpq, turn on all the exports associated with that, then turn on the Python venv. Here are the steps I took. It's entirely possible not all steps are needed, but I have limited time, so here they are. Steps brew install openssl put the following in .zshrc export PATH="/opt/homebrew/opt/[email protected]/bin:$PATH" put the following in .zshenv export LDFLAGS="-L/opt/homebrew/opt/[email protected]/lib" export CPPFLAGS="-I/opt/homebrew/opt/openssl@3/include" export PKG_CONFIG_PATH="/opt/homebrew/opt/[email protected]/lib/pkgconfig" source ~/.zshrc brew install libpq put the following in .zshrc export PATH="/opt/homebrew/opt/libpq/bin:$PATH" put the following in .zshenv export LDFLAGS="-L/opt/homebrew/opt/libpq/lib" export CPPFLAGS="-I/opt/homebrew/opt/libpq/include" export PKG_CONFIG_PATH="/opt/homebrew/opt/libpq/lib/pkgconfig" source ~/.zshenv Turn on the venv. Now the install of psycopg2-binary should work. Links that helped me: the one about openssl the one about libpq
6
7
70,393,863
2021-12-17
https://stackoverflow.com/questions/70393863/polymorphism-and-type-hints-in-python
Consider the following case: class Base: ... class Sub(Base): ... def get_base_instance(*args) -> Base: ... def do_something_with_sub(instance: Sub): ... Let's say I'm calling get_base_instance in a context where I know it will return a Sub instance - maybe based on what args I'm passing. Now I want to pass the returned instance to do_something_with_sub: sub_instance = get_base_instance(*args) do_something_with_sub(sub_instance) The problem is that my IDE complains about passing a Base instance to a method that only accepts a Sub instance. I think I remember from other programming languages that I would just cast the returned instance to Sub. How do I solve the problem in Python? Conditionally throw an exception based on the return type, or is there a better way?
I think you were on the right track when you thought about it in terms of casting. We could use cast from typing to stop the IDE complaining. For example: from typing import cast class Base: pass class Sub(Base): pass def get_base_instance(*args) -> Base: return Sub() def do_something_with_sub(instance: Sub): print(instance) sub_instance = cast(Sub, get_base_instance()) do_something_with_sub(sub_instance)
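If a runtime guarantee is wanted rather than just silencing the type checker (typing.cast does nothing at runtime), a hedged alternative sketch, reusing the Base/Sub/get_base_instance names from the snippet above, is an isinstance check, which type checkers also use to narrow the type:

instance = get_base_instance()
if not isinstance(instance, Sub):
    raise TypeError(f"expected Sub, got {type(instance).__name__}")
do_something_with_sub(instance)  # instance is narrowed to Sub from here on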
7
2
70,420,155
2021-12-20
https://stackoverflow.com/questions/70420155/how-to-predict-actual-future-values-after-testing-the-trained-lstm-model
I have trained my stock price prediction model by splitting the dataset into train & test. I have also tested the predictions by comparing the valid data with the predicted data, and the model works fine. But I want to predict actual future values. What do I need to change in my code below? How can I make predictions up to a specific date in the actual future? Code (in a Jupyter Notebook): (To run the code, please try it in a similar csv file you have, or install nsepy python library using command pip install nsepy) # imports import pandas as pd # data processing import numpy as np # linear algebra import matplotlib.pyplot as plt # plotting from datetime import date # date from nsepy import get_history # NSE historical data from keras.models import Sequential # neural network from keras.layers import LSTM, Dropout, Dense # LSTM layer from sklearn.preprocessing import MinMaxScaler # scaling nseCode = 'TCS' stockTitle = 'Tata Consultancy Services' # API call apiData = get_history(symbol = nseCode, start = date(2017,1,1), end = date(2021,12,19)) data = apiData # copy the dataframe (not necessary) # remove columns you don't need del data['Symbol'] del data['Series'] del data['Prev Close'] del data['Volume'] del data['Turnover'] del data['Trades'] del data['Deliverable Volume'] del data['%Deliverble'] # store the data in a csv file data.to_csv('infy2.csv') # Read the csv file data = pd.read_csv('infy2.csv') # convert the date column to datetime; if you read data from csv, do this. Otherwise, no need if you read data from API data['Date'] = pd.to_datetime(data['Date'], format = '%Y-%m-%d') data.index = data['Date'] # plot plt.xlabel('Date') plt.ylabel('Close Price (Rs.)') data['Close'].plot(legend = True, figsize = (10,6), title = stockTitle, grid = True, color = 'blue') # Sort data into Date and Close columns data2 = data.sort_index(ascending = True, axis = 0) newData = pd.DataFrame(index = range(0,len(data2)), columns = ['Date', 'Close']) for i in range(0, len(data2)): # only if you read data from csv newData['Date'][i] = data2['Date'][i] newData['Close'][i] = data2['Close'][I] # Calculate the row number to split the dataset into train and test split = len(newData) - 100 # normalize the new dataset scaler = MinMaxScaler(feature_range = (0, 1)) finalData = newData.values trainData = finalData[0:split, :] validData = finalData[split:, :] newData.index = newData.Date newData.drop('Date', axis = 1, inplace = True) scaler = MinMaxScaler(feature_range = (0, 1)) scaledData = scaler.fit_transform(newData) xTrainData, yTrainData = [], [] for i in range(60, len(trainData)): # data-flair has used 60 instead of 30 xTrainData.append(scaledData[i-60:i, 0]) yTrainData.append(scaledData[i, 0]) xTrainData, yTrainData = np.array(xTrainData), np.array(yTrainData) xTrainData = np.reshape(xTrainData, (xTrainData.shape[0], xTrainData.shape[1], 1)) # build and train the LSTM model lstmModel = Sequential() lstmModel.add(LSTM(units = 50, return_sequences = True, input_shape = (xTrainData.shape[1], 1))) lstmModel.add(LSTM(units = 50)) lstmModel.add(Dense(units = 1)) inputsData = newData[len(newData) - len(validData) - 60:].values inputsData = inputsData.reshape(-1,1) inputsData = scaler.transform(inputsData) lstmModel.compile(loss = 'mean_squared_error', optimizer = 'adam') lstmModel.fit(xTrainData, yTrainData, epochs = 1, batch_size = 1, verbose = 2) # Take a sample of a dataset to make predictions xTestData = [] for i in range(60, inputsData.shape[0]): xTestData.append(inputsData[i-60:i, 0]) xTestData = 
np.array(xTestData) xTestData = np.reshape(xTestData, (xTestData.shape[0], xTestData.shape[1], 1)) predictedClosingPrice = lstmModel.predict(xTestData) predictedClosingPrice = scaler.inverse_transform(predictedClosingPrice) # visualize the results trainData = newData[:split] validData = newData[split:] validData['Predictions'] = predictedClosingPrice plt.xlabel('Date') plt.ylabel('Close Price (Rs.)') trainData['Close'].plot(legend = True, color = 'blue', label = 'Train Data') validData['Close'].plot(legend = True, color = 'green', label = 'Valid Data') validData['Predictions'].plot(legend = True, figsize = (12,7), grid = True, color = 'orange', label = 'Predicted Data', title = stockTitle)
Below is an example of how you could implement this approach for your model: import pandas as pd import numpy as np from datetime import date from nsepy import get_history from keras.models import Sequential from keras.layers import LSTM, Dense from sklearn.preprocessing import MinMaxScaler pd.options.mode.chained_assignment = None # load the data stock_ticker = 'TCS' stock_name = 'Tata Consultancy Services' train_start = date(2017, 1, 1) train_end = date.today() data = get_history(symbol=stock_ticker, start=train_start, end=train_end) data.index = pd.DatetimeIndex(data.index) data = data[['Close']] # scale the data scaler = MinMaxScaler(feature_range=(0, 1)).fit(data) z = scaler.transform(data) # extract the input sequences and target values window_size = 60 x, y = [], [] for i in range(window_size, len(z)): x.append(z[i - window_size: i]) y.append(z[i]) x, y = np.array(x), np.array(y) # build and train the model model = Sequential() model.add(LSTM(units=50, return_sequences=True, input_shape=x.shape[1:])) model.add(LSTM(units=50)) model.add(Dense(units=1)) model.compile(loss='mse', optimizer='adam') model.fit(x, y, epochs=100, batch_size=128, verbose=1) # generate the multi-step forecasts def multi_step_forecasts(n_past, n_future): x_past = x[- n_past - 1:, :, :][:1] # last observed input sequence y_past = y[- n_past - 1] # last observed target value y_future = [] # predicted target values for i in range(n_past + n_future): # feed the last forecast back to the model as an input x_past = np.append(x_past[:, 1:, :], y_past.reshape(1, 1, 1), axis=1) # generate the next forecast y_past = model.predict(x_past) # save the forecast y_future.append(y_past.flatten()[0]) # transform the forecasts back to the original scale y_future = scaler.inverse_transform(np.array(y_future).reshape(-1, 1)).flatten() # add the forecasts to the data frame df_past = data.rename(columns={'Close': 'Actual'}).copy() df_future = pd.DataFrame( index=pd.bdate_range(start=data.index[- n_past - 1] + pd.Timedelta(days=1), periods=n_past + n_future), columns=['Forecast'], data=y_future ) return df_past.join(df_future, how='outer') # forecast the next 30 days df1 = multi_step_forecasts(n_past=0, n_future=30) df1.plot(title=stock_name) # forecast the last 20 days and the next 30 days df2 = multi_step_forecasts(n_past=20, n_future=30) df2.plot(title=stock_name)
5
3
70,442,764
2021-12-21
https://stackoverflow.com/questions/70442764/custom-conflict-handling-for-argumentparser
What I need I need an ArgumentParser, with a conflict handling scheme, that resolves some registered set of duplicate arguments, but raises on all other arguments. What I tried My initial approach (see also the code example at the bottom) was to subclass ArgumentParser, add a _handle_conflict_custom method, and then instantiate the subclass with ArgumentParser(conflict_handler='custom'), thinking that the _get_handler method would pick it up. The Problem This raises an error, because the ArgumentParser inherits from _ActionsContainer, which provides the _get_handler and the _handle_conflict_{strategy} methods, and then internally instantiates an _ArgumentGroup (that also inherits from _ActionsContainer), which in turn doesn't know about the newly defined method on ArgumentParser and thus fails to get the custom handler. Overriding the _get_handler method is not feasible for the same reasons. I have created a (rudimentary) class diagram illustrating the relationships, and therefore hopefully the problem in subclassing ArgumentParser to achieve what I want. Motivation I (think, that I) need this, because I have two scripts, that handle distinct parts of a workflow, and I would like to be able to use those separately as scripts, but also have one script, that imports the methods of both of these scripts, and does everything in one go. This script should support all the options of the two individual scripts, but I don't want to duplicate the (extensive) argument definitions, so that I would have to make changes in multiple places. This is easily solved by importing the ArgumentParsers of the (part) scripts and using them as parents, like so combined_parser = ArgumentParser(parents=[arg_parser1, arg_parser2]). In the scripts I have duplicate options, e.g. for the work directory, so I need to resolve those conflicts. This could also be done, with conflict_handler='resolve'. But because there are a lot of possible arguments (which is not up to our team, because we have to maintain compatibility), I also want the script to raise an error if something gets defined that causes a conflict, but hasn't been explicitly allowed to do so, instead of quietly overriding the other flag, potentially causing unwanted behavior. Other suggestions to achieve these goals (keeping both scripts separate, enabling use of one script that wraps both, avoiding code duplication and raising on unexpected duplicates) are welcome. Example Code from argparse import ArgumentParser class CustomParser(ArgumentParser): def _handle_conflict_custom(self, action, conflicting_actions): registered = ['-h', '--help', '-f'] conflicts = conflicting_actions[:] use_error = False while conflicts: option_string, action = conflicts.pop() if option_string in registered: continue else: use_error = True break if use_error: self._handle_conflict_error(action, conflicting_actions) else: self._handle_conflict_resolve(action, conflicting_actions) if __name__ == '__main__': ap1 = ArgumentParser() ap2 = ArgumentParser() ap1.add_argument('-f') # registered, so should be resolved ap2.add_argument('-f') ap1.add_argument('-g') # not registered, so should raise ap2.add_argument('-g') # this raises before ever resolving anything, for the stated reasons ap3 = CustomParser(parents=[ap1, ap2], conflict_handler='custom') Other questions I am aware of these similar questions: python argparse subcommand with dependency and conflict argparse conflict when used with two connected python3 scripts Handling argparse conflicts ... 
and others But even though some of them provide interesting insights into argparse usage and conflicts, they seem to address issues that are not related to mine.
For a various reasons -- notably the needs of testing -- I have adopted the habit of always defining argparse configuration in the form of a data structure, typically a sequence of dicts. The actual creation of the ArgumentParser is done in a reusable function that simply builds the parser from the dicts. This approach has many benefits, especially for more complex projects. If each of your scripts were to shift to that model, I would think that you might be able to detect any configuration conflicts in that function and raise accordingly, thus avoiding the need to inherit from ArgumentParser and mess around with understanding its internals. I'm not certain I understand your conflict-handling needs very well, so the demo below simply hunts for duplicate options and raises if it sees one, but I think you should be able to understand the approach and assess whether it might work for your case. The basic idea is to solve your problem in the realm of ordinary data structures rather than in the byzantine world of argparse. import sys import argparse from collections import Counter OPTS_CONFIG1 = ( { 'names': 'path', 'metavar': 'PATH', }, { 'names': '--nums', 'nargs': '+', 'type': int, }, { 'names': '--dryrun', 'action': 'store_true', }, ) OPTS_CONFIG2 = ( { 'names': '--foo', 'metavar': 'FOO', }, { 'names': '--bar', 'metavar': 'BAR', }, { 'names': '--dryrun', 'action': 'store_true', }, ) def main(args): ap = define_parser(OPTS_CONFIG1, OPTS_CONFIG2) opts = ap.parse_args(args) print(opts) def define_parser(*configs): # Validation: adjust as needed. tally = Counter( nm for config in configs for d in config for nm in d['names'].split() ) for k, n in tally.items(): if n > 1: raise Exception(f'Duplicate argument configurations: {k}') # Define and return parser. ap = argparse.ArgumentParser() for config in configs: for d in config: kws = dict(d) xs = kws.pop('names').split() ap.add_argument(*xs, **kws) return ap if __name__ == '__main__': main(sys.argv[1:])
9
2
70,413,959
2021-12-19
https://stackoverflow.com/questions/70413959/combine-2-string-columns-in-pandas-with-different-conditions-in-both-columns
I have 2 columns in pandas, with data that looks like this. code fx category AXD AXDG.R cat1 AXF AXDG_e.FE cat1 333 333.R cat1 .... There are other categories but I am only interested in cat1. I want to combine everything from the code column, and everything after the . in the fx column and replace the code column with the new combination without affecting the other rows. code fx category AXD.R AXDG.R cat1 AXF.FE AXDG_e.FE cat1 333.R 333.R cat1 ..... Here is my code, I think I have to use regex but I'm not sure how to combine it in this way. df.loc[df['category']== 'cat1', 'code'] = df[df['category'] == 'cat1']['code'].str.replace(r'[a-z](?=\.)', '', regex=True).str.replace(r'_?(?=\.)','', regex=True).str.replace(r'G(?=\.)', '', regex=True) I'm not sure how to select the second column also. Any help would be greatly appreciated.
There are other categories but I am only interested in cat1 You can use str.split with series.where to add the extention for cat1: df['code'] = (df['code'].astype(str).add("."+df['fx'].str.split(".").str[-1]) .where(df['category'].eq("cat1"),df['code'])) print(df) code fx category 0 AXD.R AXDG.R cat1 1 AXF.FE AXDG_e.FE cat1 2 333.R 333.R cat1
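For reference, a small sketch of an equivalent in-place variant (same assumption as above: the extension is everything after the last dot in fx), which only touches the cat1 rows:

mask = df["category"].eq("cat1")
ext = df.loc[mask, "fx"].str.split(".").str[-1]          # "R", "FE", "R", ...
df.loc[mask, "code"] = df.loc[mask, "code"] + "." + ext  # index-aligned concatenation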
6
3
70,433,788
2021-12-21
https://stackoverflow.com/questions/70433788/proper-c-type-for-nested-list-of-arbitrary-and-variable-depth
I'm trying to port some code from Python to C++. The Python code has a function foo that can take nested lists of ints, with variable list depth. For example, these are legitimate function calls to foo: foo([ [], [[]], [ [], [[]] ] ]) foo([1]) foo([ [1], [2, 3, [4, 5]], [ [6], [7, [8, 9], 10] ] ]) What should the method signature(s) be for a C++ method that can accept this kind of argument?
Here's a way that's pretty simple to define and use: #include <variant> #include <vector> struct VariableDepthList : std::variant<std::vector<VariableDepthList>, int> { private: using base = std::variant<std::vector<VariableDepthList>, int>; public: using base::base; VariableDepthList(std::initializer_list<VariableDepthList> v) : base(v) {} }; This is based on the fact that your type is either an int or a list of (the same type), adding an initializer_list constructor just for ease of use. You might want to add some helper function like is_vector()/is_value() too. Here is an example using it: #include <iostream> void foo(const VariableDepthList& v) { // Use like a variant. This is a print function if (auto* as_vector = std::get_if<std::vector<VariableDepthList>>(&v)) { if (as_vector->empty()) { std::cout << "[]"; return; } std::cout << "[ "; bool first = true; for (const auto& el : *as_vector) { if (!first) { std::cout << ", "; } first = false; foo(el); } std::cout << " ]"; } else { auto* as_int = std::get_if<int>(&v); std::cout << *as_int; } } int main() { foo({}); std::cout << '\n'; foo({ 1 }); std::cout << '\n'; foo({ {}, {{}}, { {}, {{}} } }); foo( {{1},{2,3,{4,5}},{{6},{7,{8,9},10}}} ); std::cout << '\n'; }
6
4
70,416,616
2021-12-20
https://stackoverflow.com/questions/70416616/rolling-sum-based-on-all-previous-dates-not-previous-rows-sorted-by-date
Given the following dataframe: +------------+--------+ | Date | Amount | +------------+--------+ | 01/05/2019 | 15 | | 27/05/2019 | 20 | | 27/05/2019 | 15 | | 25/06/2019 | 10 | | 29/06/2019 | 25 | | 01/07/2019 | 50 | +------------+--------+ I need to get the rolling sum of all previous dates as follows: +------------+--------+ | Date | Amount | +------------+--------+ | 01/05/2019 | NaN | | 27/05/2019 | 15 | | 27/05/2019 | 15 | | 15/06/2019 | 35 | | 29/06/2019 | 10 | | 01/07/2019 | 35 | +------------+--------+ Using: df = pd.DataFrame( { 'Date': { 0: datetime.datetime(2019, 5, 1), 1: datetime.datetime(2019, 5, 27), 2: datetime.datetime(2019, 5, 27), 3: datetime.datetime(2019, 6, 15), 4: datetime.datetime(2019, 6, 29), 5: datetime.datetime(2019, 7, 1), }, 'Amount': {0: 15, 1: 20, 2: 15, 3: 10, 4: 25, 5: 50} } ) df.sort_values("Date", inplace=True) df_roll = df.rolling("28d", on="Date", closed="left").sum() Gets me: +------------+--------+ | Date | Amount | +------------+--------+ | 01/05/2019 | NaN | | 27/05/2019 | 15 | | 27/05/2019 | 35 | <-- Should be 15 | 15/06/2019 | 35 | | 29/06/2019 | 10 | | 01/07/2019 | 35 | +------------+--------+ Which isn't quite correct. How would I get the sum of all previous dates rather than all previous rows?
You can do df['new'] = df.Date.map(df.groupby('Date').Amount.sum().rolling("28d", closed="left").sum()) df Date Amount new 0 2019-05-01 15 NaN 1 2019-05-27 20 15.0 2 2019-05-27 15 15.0 3 2019-06-15 10 35.0 4 2019-06-29 25 10.0 5 2019-07-01 50 35.0
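To make the one-liner easier to follow, here is the same computation unrolled step by step (a sketch using the df from the question):

daily = df.groupby("Date")["Amount"].sum()         # collapse duplicate dates: one total per date
prior = daily.rolling("28d", closed="left").sum()  # 28-day sum over strictly earlier dates
df["new"] = df["Date"].map(prior)                  # broadcast that per-date value back to each row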
6
2
70,400,639
2021-12-18
https://stackoverflow.com/questions/70400639/how-do-i-get-python-dataclass-initvar-fields-to-work-with-typing-get-type-hints
When messing with Python dataclasses, I ran into this odd error that's pretty easy to reproduce. from __future__ import annotations import dataclasses as dc import typing @dc.dataclass class Test: foo: dc.InitVar[int] print(typing.get_type_hints(Test)) Running this gets you the following: Traceback (most recent call last): File "test.py", line 11, in <module> print(typing.get_type_hints(Test)) File "C:\Program Files\Python310\lib\typing.py", line 1804, in get_type_hints value = _eval_type(value, base_globals, base_locals) File "C:\Program Files\Python310\lib\typing.py", line 324, in _eval_type return t._evaluate(globalns, localns, recursive_guard) File "C:\Program Files\Python310\lib\typing.py", line 687, in _evaluate type_ =_type_check( File "C:\Program Files\Python310\lib\typing.py", line 173, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Forward references must evaluate to types. Got dataclasses.InitVar[int]. Without from __future__ import annotations, it seems to work fine; but in the actual code I'm making use of that import in a couple different type hints. Is there no way to make it so that the annotations import doesn't break this?
So I was actually able to replicate this exact same behavior in my Python 3.10 environment, and frankly was sort of surprised that I was able to do so. The issue, at least from the surface, seems to be with InitVar and with how typing.get_type_hints resolves such non-generic types. Anyways, before we get too deep into the weeds, it's worth clarifying a bit about how the from __future__ import annotations works. You can read more about it in the PEP that introduces it into the wild, but essentially the story "in a nutshell" is that the __future__ import converts all annotations in the module where it is used into forward-declared annotations, i.e. ones that are wrapped in a single quotes ' to render all type annotations as string values. So then with all type annotations converted to strings, what typing.get_type_hints actually does is to resolve those ForwardRef types -- which is essentially the typing library's way of identifying annotations that are wrapped in strings -- using a class or module's globals namespace, along with an optional locals namespace if provided. Here's a simple example to basically bring home all that was discussed above. All I'm doing here, is instead of using from __future__ import annotations at the top of the module, I'm manually going in and forward declaring all annotations by wrapping them in strings. It's worth noting that this is essentially the same as how it appears in the question above. import typing from dataclasses import dataclass, InitVar @dataclass class Test: foo: 'InitVar[int]' print(typing.get_type_hints(Test)) If curious, you can also try with a __future__ import and without forward declaring the annotations manually, and then inspect the Test.__annotations__ object to confirm that the end result is the same as how I've defined it above. In either case, we run into the same error below, also as noted in the OP above: Traceback (most recent call last): print(typing.get_type_hints(Test)) File "C:\Users\USER\.pyenv\pyenv-win\versions\3.10.0\lib\typing.py", line 1804, in get_type_hints value = _eval_type(value, base_globals, base_locals) File "C:\Users\USER\.pyenv\pyenv-win\versions\3.10.0\lib\typing.py", line 324, in _eval_type return t._evaluate(globalns, localns, recursive_guard) File "C:\Users\USER\.pyenv\pyenv-win\versions\3.10.0\lib\typing.py", line 687, in _evaluate type_ =_type_check( File "C:\Users\USER\.pyenv\pyenv-win\versions\3.10.0\lib\typing.py", line 173, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Forward references must evaluate to types. Got dataclasses.InitVar[int]. Let's note the stack trace as it's certainly to useful to know where things went wrong. However, we'll likely want to explore exactly why the dataclasses.InitVar usage resulted in this strange and unusual error in the first place, which is actually what we'll look at to start with. So what's up with dataclasses.InitVar? The TL;DR here is there's a problem with subscripted dataclasses.InitVar usage specifically. Anyway, let's look at only the relevant parts of how InitVar is defined in Python 3.10: class InitVar: def __init__(self, type): self.type = type def __class_getitem__(cls, type): return InitVar(type) Note that the __class_getitem__ is the method that is called when we subscript the class in an annotation, for example like InitVar[str]. This calls InitVar.__class_getitem__(str) which returns InitVar(str). 
So the actual problem here is, the subscripted InitVar[int] usage returns an InitVar object, rather than the underlying type, which is the InitVar class itself. So typing.get_type_hints is causing an error here because it sees an InitVar instance in the resolved type annotation, rather than the InitVar class itself, which is a valid type as it's a Python class essentially. Hmm... but what seems to be the most straightforward way to resolve this? The (Patchwork) Road to a Solution If you check out the source code of typing.get_type_hints at least in Python 3.10, you'll notice that it's converting all string annotations to ForwardRef objects explictly, and then calling ForwardRef._evaluate on each one: for name, value in ann.items(): ... if isinstance(value, str): value = ForwardRef(value, is_argument=False) >> value = _eval_type(value, base_globals, base_locals) What the ForwardRef._evaluate method does is eval the contained reference using the class or module globals, and then internally call typing._type_check to check the reference contained in the ForwardRef object. This does a couple things like validating that the reference is of a Generic type from the typing module, which definitely aren't of interest here, since InitVar is explicitly defined is a non-generic type, at least in 3.10. The relevant bits of typing._type_check are shown below: if isinstance(arg, _SpecialForm) or arg in (Generic, Protocol): raise TypeError(f"Plain {arg} is not valid as type argument") if isinstance(arg, (type, TypeVar, ForwardRef, types.UnionType, ParamSpec)): return arg if not callable(arg): >> raise TypeError(f"{msg} Got {arg!r:.100}.") It's the last line shown above, raise TypeError(...) which seems to return the error message that we're running into. If you check the last condition that the _type_check function checks, you can kind of guess how we can implement the simplest possible workaround in our case: if not callable(arg): If we glance a little briefly into the documentation for the callable builtin, we get our first concrete hint of a possible solution we can use: def callable(i_e_, some_kind_of_function): # real signature unknown; restored from __doc__ """ Return whether the object is callable (i.e., some kind of function). Note that classes are callable, as are instances of classes with a __call__() method. """ So, simply put, all we need to do is define a __call__ method under the dataclasses.InitVar class. This can be a stub method, essentially a no-op, but at a minimum the class must define this method so that it can be considered a callable, and thus the typing module can accept it as a valid reference type in a ForwardRef object. Finally, here's the same example as in the OP, but slightly modified to add a new line which patches dataclasses.InitVar to add the necessary method, as a stub: from __future__ import annotations import typing from dataclasses import dataclass, InitVar @dataclass class Test: foo: InitVar[int] # can also be defined as: # setattr(InitVar, '__call__', lambda *args: None) InitVar.__call__ = lambda *args: None print(typing.get_type_hints(Test)) The example now seems to work as expected, without any errors raised by the typing.get_type_hints method, when forward declaring any subscripted InitVar annotations.
6
14
70,429,982
2021-12-21
https://stackoverflow.com/questions/70429982/how-to-disable-all-tensorflow-warnings
I have a for loop with several different deep learning models in it that generates this warning: WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001B0A8CC90D0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. WARNING:tensorflow:6 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001B0A6C01940> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. I have tried many different things in the for loop to stop it from popping up with no success. Is there simply a way to disable all warnings?
Use this: tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
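If that alone is not enough, a slightly fuller sketch (assuming TensorFlow 2.x; the environment variable must be set before tensorflow is imported so it also quiets the C++ backend):

import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"  # 0 = everything, 3 = filter out INFO, WARNING and ERROR

import tensorflow as tf
tf.get_logger().setLevel("ERROR")                               # Python-side logger that emits the retracing warnings
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # legacy logger, as in the answer above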
5
7
70,425,481
2021-12-20
https://stackoverflow.com/questions/70425481/namedtuple-with-default-values
I'm trying to use a function with a parameter of namedTuple which has default values. I tried this. Is that somehow possible? from typing import Optional, NamedTuple Stats = NamedTuple("Stats", [("min", Optional[int]), ("max", Optional[int])]) def print(value1: Stats=None, value2: Stats=None): print("min: ", value1.min) print("max: ", value1.max) print()
Rename your print() function: first, you're shadowing the built-in name print, which is bad style; second, you end up making a recursive call to print() inside print() (and I'm sure you meant to call the actual built-in print() inside the function's body). Then use the collections.namedtuple factory to implement the actual tuple type, like the following (the type annotations are also not needed). Try it online! from collections import namedtuple StatsTup = namedtuple('Stats', ['min', 'max'], defaults = [3, 7]) def printf(value1 = StatsTup(), value2 = StatsTup(max = 10)): print("min1: ", value1.min) print("max1: ", value1.max) print("min2: ", value2.min) print("max2: ", value2.max) printf() printf(StatsTup(12, 14), StatsTup(16, 18)) Output: min1: 3 max1: 7 min2: 3 max2: 10 min1: 12 max1: 14 min2: 16 max2: 18 As you can see in the code and output above, I'm passing named tuples as default parameters to the function. You can omit tuple field values if you provide defaults = [...] like I did. If you provide such defaults then you may pass no values, like StatsTup(), some values, like StatsTup(max = 123), or all values, like StatsTup(min = 20, max = 35). The solution above works only starting with Python 3.7; for older versions of Python do the following: Try it online! from collections import namedtuple StatsTup = namedtuple('Stats', 'min max') StatsTup.__new__.__defaults__ = (3, 7) def printf(value1 = StatsTup(), value2 = StatsTup(max = 10)): print("min1: ", value1.min) print("max1: ", value1.max) print("min2: ", value2.min) print("max2: ", value2.max) printf() printf(StatsTup(12, 14), StatsTup(16, 18))
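Since the question used typing.NamedTuple, here is a hedged sketch of the same idea with the class-based syntax, where defaults are plain class attributes (supported since Python 3.6.1):

from typing import NamedTuple, Optional

class Stats(NamedTuple):
    min: Optional[int] = 3
    max: Optional[int] = 7

def printf(value1: Stats = Stats(), value2: Stats = Stats(max=10)) -> None:
    print("min1:", value1.min, "max1:", value1.max)
    print("min2:", value2.min, "max2:", value2.max)

printf()                              # min1: 3 max1: 7 / min2: 3 max2: 10
printf(Stats(12, 14), Stats(16, 18))  # min1: 12 max1: 14 / min2: 16 max2: 18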
9
5
70,428,172
2021-12-20
https://stackoverflow.com/questions/70428172/how-to-put-string-parameters-in-functions-inside-f-strings
I have the following f-string: f"Something{function(parameter)}" I want to hardcode that parameter, which is a string: f"Something{function("foobar")}" It gives me this error: SyntaxError: f-string: unmatched '(' How do I do this?
Because f-strings are recognized by the lexer, not the parser, you cannot nest quotes of the same type in a string. The lexer is just looking for the next ", regardless of its context. Use single quotes inside f"..." or double quotes inside f'...'. f"Something{function('foobar')}" f'Something{function("foobar")}' Escaping quotes is not an option (for reasons that escape me at the moment), which means arbitrarily nested expressions are not an option. You only have 4 types of quotes to work with: "..." '...' """...""" '''...'''
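For completeness, this restriction was lifted in Python 3.12 (PEP 701), where f-strings are handled by the parser and quote reuse is allowed; the snippet below is a sketch with a stand-in function and only runs on 3.12+ (it is still a SyntaxError on 3.11 and earlier):

def function(s: str) -> str:   # stand-in for the function from the question
    return s.upper()

print(f"Something{function("foobar")}")  # valid on Python 3.12+, SyntaxError before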
5
7
70,420,566
2021-12-20
https://stackoverflow.com/questions/70420566/dataframe-pairs-of-columns-division
I have a DataFrame and want to get divisions of pairs of columns like below: df = pd.DataFrame({ 'a1': np.random.randint(1, 1000, 1000), 'a2': np.random.randint(1, 1000, 1000), 'b1': np.random.randint(1, 1000, 1000), 'b2': np.random.randint(1, 1000, 1000), 'c1': np.random.randint(1, 1000, 1000), 'c2': np.random.randint(1, 1000, 1000), }) df['a'] = df['a2'] / df['a1'] df['b'] = df['b2'] / df['b1'] df['c'] = df['c2'] / df['c1'] I want to combine the last three lines into one like: df[['a', 'b', 'c']] = df[['a2', 'b2', 'c2']] / df[['a1', 'b1', 'c1']] but I only get an error of ValueError: Columns must be same length as key. If I just simply print(df[['a2', 'b2', 'c2']] / df[['a1', 'b1', 'c1']]), I will only get a DataFrame with NaNs of shape (1000, 6). ==== Edit Now I know why my original one-line code doesn't work. Actually, the arithmetic operations of two DataFrames will be conducted between the columns with same labels, while those columns without same label in another DataFrame will generate NaNs. The result DataFrame will have the union() of the columns of the two operating DataFrames. That's why my original solution will give an ValueError and the div will generate NaNs. Following example will be helpful to explain: df1 = pd.DataFrame(data={'A':[1,2], 'B':[3,4], 'C':[5,6], 'D':[8,9]}) df2 = pd.DataFrame(data={'A':[11,12], 'B':[13,14], 'C':[15,16], 'D':[18,19]}) df1[['A', 'B']] / df2[['A', 'B']] Out[130]: A B 0 0.090909 0.230769 1 0.166667 0.285714 df1[['A', 'B']] / df2[['C', 'D']] Out[131]: A B C D 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN df1[['A', 'B']] + df2[['A', 'C']] Out[132]: A B C 0 12 NaN NaN 1 14 NaN NaN
You can use: df[['a', 'b', 'c']] = df[['a2', 'b2', 'c2']].values / df[['a1', 'b1', 'c1']].values OUTPUT a1 a2 b1 b2 c1 c2 a b c 0 864 214 551 761 174 111 0.247685 1.381125 0.637931 1 820 971 379 79 190 587 1.184146 0.208443 3.089474 2 305 154 519 378 567 186 0.504918 0.728324 0.328042 3 51 505 303 417 959 326 9.901961 1.376238 0.339937 4 84 531 625 899 248 905 6.321429 1.438400 3.649194 .. ... ... ... ... ... ... ... ... ... 995 302 695 790 777 896 975 2.301325 0.983544 1.088170 996 24 308 462 316 388 784 12.833333 0.683983 2.020619 997 135 286 359 752 282 283 2.118519 2.094708 1.003546 998 695 45 832 936 811 404 0.064748 1.125000 0.498150 999 809 454 971 335 366 884 0.561187 0.345005 2.415301
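An alternative sketch that keeps everything label-aligned instead of dropping to NumPy arrays; it also illustrates the point in the question's edit: pandas divides columns by matching labels, so renaming a2/b2/c2 and a1/b1/c1 to the same labels makes a plain division line up:

num = df[["a2", "b2", "c2"]].rename(columns=lambda c: c[0])  # a2 -> a, b2 -> b, c2 -> c
den = df[["a1", "b1", "c1"]].rename(columns=lambda c: c[0])  # a1 -> a, b1 -> b, c1 -> c
df = df.join(num / den)                                      # adds the new a, b, c columns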
6
3
70,419,372
2021-12-20
https://stackoverflow.com/questions/70419372/python-generic-type-that-implements-protocol
Objects A, B ... have attribute namespace and I have a function that filters a list of such objects by a certain set of values of namespace attribute: T = TypeVar('T') def filter(seq: list[T], namespace_values: set[str]) -> list[T]: # Returns a smaller list containing only the items from # `seq` whose `namespace` are in `namespace_values` ... This works well, but it allows passing an object of type X that does not have the attribute namespace without any check error. Then I created a protocol and changed the function in order to use the protocol: class Namespaced(Protocol): namespace: str def filter(seq: list[Namespaced], namespace_values: set[str]) -> list[Namespaced]: # Returns a smaller list containing only the items from # `seq` whose `namespace` are in `namespace_values` ... Now I get a check error if I pass a list of X (which is what I wanted), but I lost the generics: list_of_a: list[A] = [a1, a2, a3] output = filter(list_of_a, ['ns1', 'ns2']) # output is list[Namespaced] instead of list[A] How can I combine the generics and protocol so my function returns a list of type T and also checks that the seq's items implement Namespaced protocol? I tried the below approach but the T is lost. def filter(seq: list[Namespaced[T]], namespace_values: set[str]) -> list[T]: # Returns a smaller list containing only the items from # `seq` whose `namespace` are in `namespace_values` ... Cheers!
Use a bound type variable with the protocol as the bound. Consider the following module: (py39) Juans-MacBook-Pro:~ juan$ cat test.py Which has: from typing import TypeVar, Protocol from dataclasses import dataclass class Namespaced(Protocol): namespace: str T = TypeVar("T", bound="Namespaced") @dataclass class Foo: namespace: str @dataclass class Bar: namespace: str id: int def frobnicate(namespaced: list[T]) -> list[T]: for x in namespaced: print(x.namespace) return namespaced result1 = frobnicate([Foo('foo')]) result2 = frobnicate([Bar('bar', 1)]) reveal_type(result1) reveal_type(result2) Then mypy gives: (py39) Juans-MacBook-Pro:~ juan$ mypy --strict test.py test.py:27: note: Revealed type is "builtins.list[test.Foo*]" test.py:28: note: Revealed type is "builtins.list[test.Bar*]"
9
9
70,418,120
2021-12-20
https://stackoverflow.com/questions/70418120/python-how-to-transpose-the-count-of-values-in-one-pandas-data-frame-to-multiple
I have 2 data frames df1 and df2. import pandas as pd df1 = pd.DataFrame({ 'id':['1','1','1','2','2','2', '3', '4','4', '5', '6', '7'], 'group':['A','A','B', 'A', 'A', 'C', 'A', 'A', 'B', 'B', 'A', 'C'] }) df2 = pd.DataFrame({ 'id':['1','2','3','4','5','6','7'] }) I want to add 3 columns to df2 named group_A, group_B, and group_C, where each counts the number of repetitions of each group in df1 according to the id column. So the result of df2 should look like this:
Use crosstab with DataFrame.join; the dtype of the id column has to be the same in both DataFrames, here strings: print (pd.crosstab(df1['id'], df1['group']).add_prefix('group_')) group group_A group_B group_C id 1 2 1 0 2 2 0 1 3 1 0 0 4 1 1 0 5 0 1 0 6 1 0 0 7 0 0 1 df = df2.join(pd.crosstab(df1['id'], df1['group']).add_prefix('group_'), on='id') print (df) id group_A group_B group_C 0 1 2 1 0 1 2 2 0 1 2 3 1 0 0 3 4 1 1 0 4 5 0 1 0 5 6 1 0 0 6 7 0 0 1 A solution without join is possible if the same ids appear in both DataFrames: print (pd.crosstab(df1['id'], df1['group']).add_prefix('group_').reset_index().rename_axis(None, axis=1)) id group_A group_B group_C 0 1 2 1 0 1 2 2 0 1 2 3 1 0 0 3 4 1 1 0 4 5 0 1 0 5 6 1 0 0 6 7 0 0 1
5
3
70,410,527
2021-12-19
https://stackoverflow.com/questions/70410527/tesseract-ocr-gives-really-bad-output-even-with-typed-text
I've been trying to get tesseract OCR to extract some digits from a pre-cropped image and it's not working well at all even though the images are fairly clear. I've tried looking around for solutions but all the other questions I've seen on here involve a problem with cropping or skewed text. Here's an example of my code which tries to read the image and output to the command line. #convert image to greyscale for OCR im_g = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) #create threshold image to simplify things. im_t = cv2.threshold(im_g, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)[1] #define kernel size rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (20,20)) #Apply dilation to threshold image im_d = cv2.dilate(im_t, rect_kernel, iterations = 1) #Find countours contours = cv2.findContours(im_t, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0] for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) #crop im_c = im[y:y+h, x:x+w] speed = pytesseract.image_to_string(im_c) print(im_path +" : " + speed) Here's an example of an image The output for it is: frame10008.jpg : VAeVAs} I've gotten a tiny improvement in some images by adding the following config to the tesseract image to string function: config="--psm 7" Without the new config, it would detect nothing for this image. Now it outputs frame100.jpg : | U | Any ideas as to what I'm doing wrong? Is there a different approach I could be taking to solve this problem? I'm open to not using Tesseract at all.
I've found a decent workaround. First off I've made the image larger. More area for tesseract to work with helped it a lot. Second, to get rid of non-digit outputs, I've used the following config on the image to string function: config = "--psm 7 outputbase digits" That line now looks like this: speed = pytesseract.image_to_string(im_c, config = "--psm 7 outputbase digits") The data coming back is far from perfect but the success rate is high enough that I should be able to clean up the garbage data and interpolate where tesseract returns no digits.
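A consolidated sketch of that workaround, slotted into the cropping loop from the question (im_c and im_path come from that loop; the 3x upscale factor is an assumption to tune, not something from the answer):

im_big = cv2.resize(im_c, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)  # enlarge the crop
im_gray = cv2.cvtColor(im_big, cv2.COLOR_BGR2GRAY)                          # greyscale for OCR
speed = pytesseract.image_to_string(im_gray, config="--psm 7 outputbase digits")
print(im_path + " : " + speed.strip())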
5
1
70,381,558
2021-12-16
https://stackoverflow.com/questions/70381558/get-the-name-or-label-from-django-integerchoices-providing-a-valid-value
I have Django 3.2 and an IntegerChoices class: class Type(models.IntegerChoices): GENERAL = 2, _("general address") DELIVERY = 4, _("delivery address") BILLING = 6, _("billing address") I can get the value, name and label easily by doing Type.GENERAL, Type.GENERAL.name and Type.GENERAL.label. But how can I get these if I only have the value, e.g. I want to get the name DELIVERY from the value 4? Type[4].name is not working, since subscripting an enum looks members up by name, not by their int values. Thanks Matt
IntegerChoices.choices returns you a list of tuples with all content. In your case you'll have something like: [(2, 'general address'), (4, 'delivery address'), (6, 'billing address')] Thus it can be done this way: class Type(models.IntegerChoices): GENERAL = 2, _("general address") DELIVERY = 4, _("delivery address") BILLING = 6, _("billing address") def label_by_type_value(value): choices = [c[1] for c in Type.choices if c[0] == value] return choices[0] if len(choices) else None
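Since IntegerChoices members are ordinary enum members, lookup by value is also built in; a short sketch (it raises ValueError for a value that is not defined):

member = Type(4)       # lookup by value; Type[...] would look up by name instead
print(member.name)     # "DELIVERY"
print(member.label)    # "delivery address"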
5
4
70,409,343
2021-12-19
https://stackoverflow.com/questions/70409343/assert-to-check-if-a-element-present-in-a-list-or-not
I am trying to find whether a particular element (int/string type) exists in my list or not, but I am using assert to evaluate the condition: the assert should pass when the element is present in the list and fail when it is not. Here is what I am trying: def test(x): try: for i in x: assert i==210410 return True except AssertionError as msg: print('Error') x=[210410,'ABC',21228,'YMCA',31334,'KJHG'] The output is Error, even though the element is in the list. Can you please help me sort this issue out?
Try using this: assert 210410 in x
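A sketch of how that check could replace the loop in the original function while keeping its try/except shape:

def test(x):
    try:
        assert 210410 in x   # membership test instead of comparing every element
        return True
    except AssertionError:
        print('Error')
        return False

x = [210410, 'ABC', 21228, 'YMCA', 31334, 'KJHG']
print(test(x))  # True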
8
18
70,401,561
2021-12-18
https://stackoverflow.com/questions/70401561/integrate-js-scripts-on-streamlit
I am trying to integrate my Medium profile into a Streamlit app using the code snippet below (generated through https://medium-widget.pixelpoint.io/): import streamlit as st st.markdown(''' <div id="medium-widget"></div> <script src="https://medium-widget.pixelpoint.io/widget.js"></script> <script>MediumWidget.Init({renderTo: '#medium-widget', params: {"resource":"https://medium.com/@mehulgupta_7991","postsPerLine":3,"limit":9,"picture":"small","fields":["description","author","claps","publishAt"],"ratio":"landscape"}})</script>''') But it looks like the script tag is getting ignored. Any idea where I am going wrong?
You can use Streamlit component html. Code: import streamlit as st import streamlit.components.v1 as components components.html(''' <div id="medium-widget"></div> <script src="https://medium-widget.pixelpoint.io/widget.js"></script> <script>MediumWidget.Init({renderTo: '#medium-widget', params: {"resource":"https://medium.com/@mehulgupta_7991","postsPerLine":3,"limit":9,"picture":"small","fields":["description","author","claps","publishAt"],"ratio":"landscape"}})</script>''') Output:
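If the widget appears clipped, components.html also accepts height and scrolling arguments; a short sketch (widget_markup is a hypothetical name for the same HTML/JS string as above, and 600 is just a guess to tune):

import streamlit.components.v1 as components

widget_markup = '''
<div id="medium-widget"></div>
<script src="https://medium-widget.pixelpoint.io/widget.js"></script>
<script>MediumWidget.Init({renderTo: '#medium-widget', params: {"resource":"https://medium.com/@mehulgupta_7991","postsPerLine":3,"limit":9,"picture":"small","fields":["description","author","claps","publishAt"],"ratio":"landscape"}})</script>
'''
components.html(widget_markup, height=600, scrolling=True)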
6
3